this post was submitted on 02 Jan 2026
16 points (90.0% liked)

politics

27417 readers
3315 users here now

Welcome to the discussion of US Politics!

Rules:

  1. Post only links to articles. Titles must fairly describe link contents; if your title differs from the site’s, it should only add context or be more descriptive. Do not post entire articles in the body or in the comments. Links must be to the original source, not an aggregator like Google AMP, MSN, or Yahoo.
  2. Articles must be relevant to politics. Links must be to quality, original content; articles should be worth reading. Clickbait, stub articles, and rehosted or stolen content are not allowed. Check your source for reliability and bias here.
  3. Be civil; no violations of TOS. It’s OK to say the subject of an article is behaving like a (pejorative, pejorative). It’s NOT OK to say another USER is (pejorative). Strong language is fine, just not directed at other members. Engage in good faith and with respect! Accusing another user of being a bot or paid actor also counts as uncivil. Trolling is uncivil and is grounds for removal and/or a community ban.
  4. No memes, trolling, or low-effort comments. This covers reposts, misinformation, off-topic posts, and offensive content. If you see posts along these lines, do not engage: report them, block them, and live a happier life than they do. We see too many slapfights that boil down to "Mom! He's bugging me!" and "I'm not touching you!" Going forward, slapfights will result in removed comments and temp bans to cool off.
  5. Vote based on comment quality, not agreement. This community aims to foster discussion; please reward people for putting effort into articulating their viewpoint, even if you disagree with it.
  6. No hate speech, slurs, celebrating death, advocating violence, or abusive language. This will result in a ban. Usernames containing racist or otherwise inappropriate slurs will be banned without warning.

We ask that users report any comment or post that violates the rules, and that they use critical thinking when reading, posting, or commenting. Users who post off-topic spam, advocate violence, have multiple comments or posts removed, weaponize reports, or violate the code of conduct will be banned.

All posts and comments will be reviewed on a case-by-case basis. This means that some content that violates the rules may be allowed, while other content that does not violate the rules may be removed. The moderators retain the right to remove any content and ban users.

That's all the rules!

Civic Links

Register To Vote

Citizenship Resource Center

Congressional Awards Program

Federal Government Agencies

Library of Congress Legislative Resources

The White House

U.S. House of Representatives

U.S. Senate

Partnered Communities:

News

World News

Business News

Political Discussion

Ask Politics

Military News

Global Politics

Moderate Politics

Progressive Politics

UK Politics

Canadian Politics

Australian Politics

New Zealand Politics

founded 2 years ago
MODERATORS
Who Controls AI Exactly? (talkingpointsmemo.com)
submitted 3 weeks ago* (last edited 3 weeks ago) by silence7@slrpnk.net to c/politics@lemmy.world
 

It's worth reading the article rather than trying to answer the headline.

top 11 comments
[–] FuglyDuck@lemmy.world 16 points 3 weeks ago (1 children)

the people that own it.

Keep in mind what we're calling "AI" isn't artificial general intelligence (cf. Kryten, Data, or R2-D2). The most visible AI is a Large Language Model (LLM): basically a predictive algorithm that goes through its training material and says, "99% of the time, when someone says '69', people respond 'nice'; therefore, when people say '69', I should respond with 'nice'."
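That frequency-counting idea can be sketched as a toy model in a few lines (pure Python, made-up corpus; a real LLM learns billions of weights rather than literal counts, but the spirit is similar):

```python
from collections import Counter, defaultdict

# Toy "language model": count which token follows which in a tiny
# made-up corpus, then predict the most frequent follower.
corpus = ["69", "nice", "69", "nice", "69", "meh"]

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(token):
    # Return the statistically most likely next token.
    return follows[token].most_common(1)[0][0]

print(predict("69"))  # "nice" wins 2-to-1 over "meh"
```

This is just frequency counting, not a transformer, but it shows the core move: the model never knows *why* "nice" follows "69", it only knows that it usually does.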

Or, with AI image gen, it knows that when someone asks it for an image of a hand holding a pencil, it looks at all the artwork in its training database and says, "this collection of pixels is probably what they want".

But the models don't know why 69 is nice, nor what a hand is. They just spit out the proper response based on statistical probability.

The thing is that the 'proper' response can be weighted by giving priority to certain responses, or rejecting certain responses, based on whatever motives the owner has. Take Grok as an example, with its blatant framing of Musk as the Greatest Man Who Ever Lived™; whoever weighted those responses failed to consider what happens when you ask if Musk is the best Nazi, or whatever. You'll notice those responses suddenly changed after people started figuring out how to game the prompts to get them.

AI chatbots are the mouthpiece of whoever owns them... and they give a level of sophistication that we've never seen before in billionaires' attempts to manipulate us.

[–] riskable@programming.dev 4 points 3 weeks ago (2 children)

Or, with AI image gen, it knows that when someone asks it for an image of a hand holding a pencil, it looks at all the artwork in its training database and says, "this collection of pixels is probably what they want".

This is incorrect. Generative image models don't contain databases of artwork. If they did, they would be the most amazing fucking compression technology, ever.

As an example model, FLUX.dev is 23.8GB:

https://huggingface.co/black-forest-labs/FLUX.1-dev/tree/main

It's a general-use model that can generate basically anything you want. It's not perfect and it's not the latest & greatest AI image generation model, but it's a great example because anyone can download it and run it locally on their own PC (and get vastly superior results than ChatGPT's DALL-E model).

If you examine the data inside the model, you'll see a bunch of metadata headers and then an enormous array of arrays of floating point values. Stuff like, [0.01645, 0.67235, ...]. That is what a generative image AI model uses to make images. There's no database to speak of.
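That "metadata header plus floats" layout can even be built by hand. The .safetensors container is just an 8-byte little-endian header length, a JSON header describing each tensor, and then raw tensor bytes. This toy writes and reads one tiny made-up "weight" tensor (hypothetical values, obviously not a real model):

```python
import json, struct

# A minimal safetensors-style blob: header length, JSON header, raw floats.
# There are no pixels anywhere in here, only tensor metadata and values.
weights = [0.01645, 0.67235, -0.1203, 0.5551]
data = struct.pack("<4f", *weights)  # four float32 values, 16 bytes
header = {"weight": {"dtype": "F32", "shape": [4], "data_offsets": [0, len(data)]}}
hjson = json.dumps(header).encode()
blob = struct.pack("<Q", len(hjson)) + hjson + data

# Reading it back recovers only metadata plus floating point values.
(hlen,) = struct.unpack_from("<Q", blob, 0)
meta = json.loads(blob[8 : 8 + hlen])
start, end = meta["weight"]["data_offsets"]
values = struct.unpack("<4f", blob[8 + hlen + start : 8 + hlen + end])
print(meta)          # tensor names, dtypes, shapes, offsets
print(list(values))  # the floats, round-tripped through float32
```

A real model file is exactly this, scaled up to billions of values; nothing in the format can hold a source image.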

When training an image model, you need to download millions upon millions of public images from the Internet and run them through their paces against an actual database like ImageNet. ImageNet contains lots of metadata about millions of images, such as their URLs, bounding boxes around parts of the image, and keywords associated with those bounding boxes.

The training is mostly a linear process. So the images never really get loaded into a database, they just get read along with their metadata into a GPU where it performs some Machine Learning stuff to generate some arrays of floating point values. Those values ultimately will end up in the model file.

It's actually a lot more complicated than that (there's pretraining steps and classifiers and verification/safety stuff and more) but that's the gist of it.

I see soooo many people who think image AI generation is literally pulling pixels out of existing images but that's not how it works at all. It's not even remotely how it works.

When an image model is being trained, any given image might modify one of those floating point values by like ±0.01. That's it. That's all it does when it trains on a specific image.
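That per-example nudge is just a gradient-descent step. A minimal sketch with a one-weight toy model and made-up numbers (real training updates millions of weights at once, but each update is similarly tiny):

```python
# Toy illustration of "one training example nudges a weight a tiny bit":
# a single-weight linear model, one gradient-descent step per example.
w = 0.5            # one of the model's millions of floats
lr = 0.01          # learning rate

def train_step(w, x, target):
    pred = w * x                     # model's guess for this example
    grad = 2 * (pred - target) * x   # d/dw of squared error
    return w - lr * grad             # nudge the weight slightly

w_new = train_step(w, x=1.0, target=0.8)
print(w_new - w)   # the example moved the weight by about +0.006
```

The example itself is discarded after the step; all that survives is the tiny change to the float.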

I often rant about where this process goes wrong and how it can result in images that look way too much like some specific images in training data but that's a flaw, not a feature. It's something that every image model has to deal with and will improve over time.

At the heart of every AI image generation is a random number generator. Sometimes you'll get something similar to an original work. Especially if you generate thousands and thousands of images. That doesn't mean the model itself was engineered to do that. Also: A lot of that kind of problem happens in the inference step but that's a really complicated topic...

[–] jordanlund@lemmy.world 2 points 3 weeks ago

I did stumble on an interesting AI use that seems super legit for creatives:

There's an AI powered app for a specific brand of guitar amplifier. If you want your guitar to sound like a particular artist or a particular song, you tell it via a natural language input and it does all the adjustments for you.

You STILL have to have the personal talent to, you know, PLAY the guitar, but it saves you hours of fiddling with dials and figuring out what effects and pedals to apply to get the sound you're looking for.

Video, same player, same guitar, same amp, multiple sounds:

https://youtube.com/shorts/wsGj4zsfOuQ

From a purely artistic perspective, this would be like asking AI for a Pantone or RGB palette set for a specific work of art. All it's doing is telling you the colors so you can avoid doing all the research and mixing yourself.

How you USE those colors? That's on you!

[–] FuglyDuck@lemmy.world 1 points 3 weeks ago (1 children)

This is incorrect. Generative image models don’t contain databases of artwork. If they did, they would be the most amazing fucking compression technology, ever. ... snip... The training is mostly a linear process. So the images never really get loaded into a database, they just get read along with their metadata into a GPU where it performs some Machine Learning stuff to generate some arrays of floating point values. Those values ultimately will end up in the model file.

Where does it get read from? A database, right? Yeah, that's called a database. It may not be a massive repository of art to rival the Vatican's secret collection, but it is a database of digital art.

As for it being complex... yeah, that's why I kept it simple and glossed over all the complex stuff that's not really, you know, relevant to the question of who owns it.

[–] riskable@programming.dev 2 points 3 weeks ago* (last edited 3 weeks ago)

No, a .safetensors file is not a database. You can't query a .safetensors file and there's nothing like ACID compliance (it's read-only).

Imagine a JSON file that has only keys and values in it where both the keys and the values are floating point numbers. It's basically gibberish until you go through an inference process and start feeding random numbers through it (over and over again, whittling it all down until you get a result that matches the prompt to a specified degree).
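That "feed random numbers through it and whittle down" loop can be caricatured like this (seeded noise repeatedly blended toward a made-up target vector standing in for the prompt conditioning; real diffusion inference is vastly more complicated):

```python
import random

# Toy version of "start from random noise, whittle it toward the prompt":
# seeded random numbers are repeatedly nudged toward a target vector.
random.seed(42)                      # same seed -> same "image"
target = [0.2, 0.9, 0.4]             # hypothetical prompt conditioning
x = [random.uniform(-1, 1) for _ in target]   # pure noise to start

for step in range(50):               # each step removes a bit of noise
    x = [xi + 0.2 * (ti - xi) for xi, ti in zip(x, target)]

print([round(v, 3) for v in x])      # [0.2, 0.9, 0.4]
```

Note the seed: change it and the starting noise changes, which is why the same prompt can yield wildly different outputs.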

How do the "turbo" models work to get a great result after one step? I have no idea. That's like black magic to me haha.

[–] ExtremeDullard@piefed.social 7 points 3 weeks ago* (last edited 3 weeks ago)

AI was trained on massive amounts of stolen data, and could only happen because of Big Data. By definition, AI is the evil child of surveillance capitalism.

Therefore the ones who control it are the surveillance capitalists.

[–] AshMan85@lemmy.world 5 points 3 weeks ago

Billionaires.

[–] lemmie689@lemmy.sdf.org 4 points 3 weeks ago
[–] jordanlund@lemmy.world 4 points 3 weeks ago

Kind of a crappy article:

"If it’s a truly transformative technology the manner of the transformation and the values encoded into it will be set now. That is a political question."

Which the author pretty much completely ignores...

[–] reddig33@lemmy.world 3 points 3 weeks ago (1 children)

I wanna know what movie that silver dressed dude with the gun is from.

[–] jordanlund@lemmy.world 4 points 3 weeks ago

Alfred Molina, Boogie Nights:

https://youtu.be/nxvyRy2L0o0

It's referenced in the article. 😉