Benedict_Espinosa

joined 3 days ago
[–] Benedict_Espinosa@lemmy.world 1 points 11 minutes ago

True, but even so, it's not possible to tell him to fuck off either. Europe has no good options here.

[–] Benedict_Espinosa@lemmy.world 1 points 1 day ago (1 children)

> No, quite the opposite. You claimed that "It's their money to invest" without knowing if it's the case or not.

No, that's not what I said. Do you have trouble with reading?

[–] Benedict_Espinosa@lemmy.world -1 points 2 days ago* (last edited 2 days ago)

I do indeed, and I think that it's a remarkably disingenuous and biased take.

[–] Benedict_Espinosa@lemmy.world 3 points 2 days ago (1 children)

Naturally the guardrails cannot cover absolutely every possible specific use case, but they can cover most of the known potentially harmful scenarios under normal, common circumstances. If the companies won't do it themselves, then legislation can push them to do it, for example by making them liable if their LLM does something harmful. Regulating AI is not anti-AI.

[–] Benedict_Espinosa@lemmy.world 3 points 2 days ago* (last edited 2 days ago) (3 children)

Probably the same kind of guardrails that they already have: teaching LLMs to recognise patterns of potentially harmful behaviour. There's nothing impossible about that. Shutting LLMs down altogether is a straw man and an appeal to extremes, when the discussion is about regulation and guardrails.

Discussing the damage LLMs do does not, of course, in any way negate the damage that social media does. These are two different conversations. In the case of social media, government regulation is probably needed, as it's clear by now that the companies won't regulate themselves.

[–] Benedict_Espinosa@lemmy.world 3 points 2 days ago (5 children)

It's not about banning or refusing AI tools; it's about making them as safe as possible and regulating their usage.

Your argument is the equivalent of "guns don't kill people", or of blaming drivers for accidents caused by Tesla's so-called "full self-driving", because the system switches itself off right before the crash, leaving the driver responsible as the one who should have paid more attention, even when there was no time left to react.

[–] Benedict_Espinosa@lemmy.world 1 points 2 days ago* (last edited 2 days ago) (5 children)

It's their money to invest however they want only if it comes with no strings attached and no obligation to use it for a specific purpose. We don't know whether that's the case, so there's no basis to argue that they can do whatever they want with it.

What is corruption? It's a form of dishonesty undertaken by a person or organization entrusted with a position of authority. That certainly seems to be the case here - not by SpaceX and xAI as such, but by Musk and his involvement in the government.

And Musk has everything to do with Grok.

[–] Benedict_Espinosa@lemmy.world 9 points 2 days ago* (last edited 2 days ago)

A kind of computerised profascist authoritarian dystopia: a combination of "1984" and "Brave New World", with technocratic oligarchy, total surveillance, killer robots and unsafe self-driving cars, in a world increasingly subject to natural catastrophes due to steadily worsening climate change.

[–] Benedict_Espinosa@lemmy.world -2 points 2 days ago (2 children)

Name a thing that is unbiased. It's generally significantly less biased than humans are.

[–] Benedict_Espinosa@lemmy.world 6 points 2 days ago (3 children)

What could they realistically do, when Trump controls all branches of government from Congress to the Supreme Court? It can be argued that they should have shut down his government in March, when they had the chance to reject the spending bill - but what can they do now?
