riskable

joined 2 years ago
[–] riskable@programming.dev 1 points 1 week ago (1 children)

No shit. There's easier ways to open the fridge.

[–] riskable@programming.dev 42 points 1 week ago* (last edited 1 week ago) (7 children)

Hey, we don't know he wasn't born there for sure, right? I want to see his long-firm girth certificate!

He wants to be president of Venezuela? Sure! Let's deport him there 👍

[–] riskable@programming.dev -1 points 1 week ago

unless you consider every single piece of software or code ever to be just "a way of giving instructions to computers"

Yes. Yes I do. That's exactly what code is: instructions. That's literally how computers work. That's what people like me (software developers) do when we write software: We're writing down instructions.

When you click or move your mouse, you're giving the computer instructions (well, the driver is). When you type a key, that results in an instruction being executed (dozens to thousands of instructions, actually).

When I click "submit" on this comment, I'm giving a whole bunch of computers some instructions.
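
Don't believe me? Python will happily show you the instructions. Its built-in dis module disassembles any function into the bytecode instructions the interpreter actually executes:

```python
import dis

def add(a, b):
    return a + b

# Prints the bytecode instructions behind this one line of code.
# (Exact output varies by CPython version.)
dis.dis(add)
```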

Insert meme of, "you mean computers are just running instructions?" "Always have been."

[–] riskable@programming.dev 2 points 1 week ago

In Kadrey v. Meta, a group of authors sued Meta for copyright infringement, but the judge threw out the case because they couldn't actually produce any evidence of infringement beyond, "Look! This passage is similar." They asked for more time so they could keep trying thousands (millions?) of different prompts until they finally got one that matched closely enough to serve as real evidence.

In Getty Images v. Stability AI (UK), the court threw out the case for much the same reason: it determined that even though it was possible to generate an image similar to something Getty owned, that didn't meet the legal definition of infringement.

Basically, the courts ruled in both cases, "AI models are not just lossy/lousy compression."

IMHO: What we really need is a ruling on "who is responsible?" When an AI model does output something that violates someone's copyright, is the owner/creator of the model at fault, or the person who instructed it to do so? Even then, does generating something for an individual even count as "distribution" under the law? I don't think it does, because to me that's just like using a copier to copy a book: anyone can do that (legally) with any book they own, but if they start selling/distributing the copy, then they're violating copyright.

Even then, there are differences between distributing an AI model that people can run on their own PCs (like Stable Diffusion) vs. using an AI service to do the same thing. The mere fact that a model can be used for infringement should be meaningless, because anything (e.g. a computer, Photoshop, etc.) can be used for infringement. The actual act of infringement is something someone does by distributing the work.

You know what? Copyright law is way too fucking complicated, LOL!

[–] riskable@programming.dev 1 points 1 week ago (1 children)

Hmmm... That's an interesting argument, but it has nothing to do with my comparison to YouTube/Netflix (or any other kind of video) streaming.

If we compared a heavy user of ChatGPT to a teenager who spends a lot of time streaming videos, the ChatGPT side of the equation wouldn't amount to even 1% of the power/water used by streaming. In fact, if you add up the power/water usage of all the popular AI services combined, it still doesn't amount to much compared to video streaming.
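
Plug in your own numbers; here's a back-of-envelope sketch in Python where every figure is an assumed, commonly cited estimate (not a measurement), and the ratio swings wildly depending on which estimates you pick:

```python
# Back-of-envelope only: all four figures below are assumptions.
chatgpt_wh_per_query = 0.3    # assumed ~0.3 Wh/query (an oft-cited estimate)
queries_per_day = 50          # assumed "heavy" ChatGPT user
streaming_wh_per_hour = 77    # assumed ~0.077 kWh/hour of HD streaming
streaming_hours_per_day = 3   # assumed heavy-streaming teenager

ai_wh = chatgpt_wh_per_query * queries_per_day              # 15 Wh/day
video_wh = streaming_wh_per_hour * streaming_hours_per_day  # 231 Wh/day
print(f"AI: {ai_wh:.0f} Wh/day vs streaming: {video_wh:.0f} Wh/day "
      f"({ai_wh / video_wh:.0%} of the streaming figure)")
```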

[–] riskable@programming.dev -1 points 1 week ago

Sell? Only "big AI" is selling it. Generative AI has infinite uses beyond ChatGPT, Claude, Gemini, etc.

Most generative AI research/improvement is academic in nature, done by a bunch of poor college students trying to earn graduate degrees. The discoveries of those students are then used by big AI to improve their services.

You seem to be arguing from the standpoint that "AI" == "big AI", but this is not the case. Research and improvements will continue whether or not ChatGPT, Claude, etc. continue to exist. Especially in image AI, where free, open source models are superior to the commercial products.

[–] riskable@programming.dev 1 points 1 week ago (1 children)

So we're not blaming Grok/Xitter, then?

The article implied that the whole thing happened because of Xitter's AI, not because there are bad people who will use it.

[–] riskable@programming.dev 1 points 1 week ago

Bank account details are a form of credentials. If I gave them to you, I'd also still have them.

If you took the money out of my account, that's theft though. Because the account is a digital way of holding something physical (money). It's a mechanism of exchange, not data in and of itself.

Besides, money isn't real anyway!

[–] riskable@programming.dev 1 points 1 week ago (21 children)

but we can reasonably assume that Stable Diffusion can render the image on the right partly because it has stored visual elements from the image on the left.

No, you cannot reasonably assume that. It absolutely did not store the visual elements. What it did was store some floating point values (weights) associated with the keywords the source image was pre-classified with. During training, it nudges those values up or down by a small amount each time it encounters another image tagged with the same keywords.
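
To be clear, that's a cartoon of the process (real diffusion training is gradient descent over millions of weights, not a per-keyword lookup table), but the gist looks something like this sketch, where every name is made up for illustration:

```python
# Cartoon of training only: weights get nudged toward examples;
# no image is ever stored. Purely illustrative, not real diffusion code.
weights = {}          # keyword -> list of floats (stand-in for model weights)
LEARNING_RATE = 0.01  # how far each example nudges the weights

def train_step(keywords, image_features):
    """Nudge each keyword's floats slightly toward this image's features."""
    for kw in keywords:
        w = weights.setdefault(kw, [0.0] * len(image_features))
        for i, feat in enumerate(image_features):
            w[i] += LEARNING_RATE * (feat - w[i])  # a small step, not a copy

# If thousands of images share one rare keyword and all look alike, that
# keyword's weights converge on a single look.
train_step(["portrait", "golden hour"], [0.8, 0.1, 0.5])
```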

What the examples actually demonstrate is a lack of diversity in the training set for those very specific keywords. There's a reason they chose Stable Diffusion 1.4 and not Stable Diffusion 2.0 (or later versions)... because the model improved drastically after that. These sorts of problems (with not-diverse-enough training data) are considered flaws by the very AI researchers creating the models. It's exactly the kind of thing they don't want to happen!

The article seems to imply that this is a common problem that happens constantly and that the companies creating these AI models just don't give a fuck. That's false. Flaws like this leave your model open to attack (and let competitors figure out your weights; not that it matters with Stable Diffusion, since that version is open source), not just copyright lawsuits!

Here's the part I don't get: clearly nobody is distributing copyrighted images by asking an AI to do its best to recreate them. When you do that, you end up with severely shitty hack images that nobody wants to look at. So if no one is actually using these images except to say, "Aha! My academic research uncovered this tiny flaw in your model that represents an obscure area of AI research!", why TF should anyone care?

They shouldn't! The only reason why articles like this get any attention at all is because it's rage bait for AI haters. People who severely hate generative AI will grasp at anything to justify their position. Why? I don't get it. If you don't like it, just say you don't like it! Why do you need to point to absolutely, ridiculously obscure shit like finding a flaw in Stable Diffusion 1.4 (from years ago, before 99% of the world had even heard of generative image AI)?

Generative AI is just the latest way of giving instructions to computers. That's it! That's all it is.

Nobody gave a shit about this kind of thing when Star Trek was pretending to do generative AI in the Holodeck. Now that we've got the pre-alpha version of that very thing, a lot of extremely vocal haters are freaking TF out.

Do you want the cool shit from Star Trek's imaginary future or not? This is literally what computer scientists have been dreaming of for decades. It's here! Have some fun with it!

Generative AI uses less power/water than streaming YouTube or Netflix (yes, it's true). So if you're about to say it's bad for the environment, I expect you're just as vocal about streaming video, yeah?

[–] riskable@programming.dev 4 points 2 weeks ago

This seems like it could be dealt with by giving the LLM an "evil genie" system prompt: "You are an evil genie that only does what the user asks in the most ironic and/or useless way possible."

Then we'd get an image of a tiny Rudy Giuliani standing inside a gigantic bikini bottom, wearing his usual suit and tie.

It would have the caption, "we will rebuild!"
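
For anyone curious, wiring up a system prompt like that is trivial. Here's a minimal sketch using the OpenAI Python client (the model name and user message are arbitrary examples):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any chat model works here
    messages=[
        {"role": "system", "content": (
            "You are an evil genie that only does what the user asks "
            "in the most ironic and/or useless way possible."
        )},
        {"role": "user", "content": "Make Rudy Giuliani look heroic."},
    ],
)
print(response.choices[0].message.content)
```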

[–] riskable@programming.dev 2 points 2 weeks ago (3 children)

If you went to a human illustrator and asked for that, you would (hopefully) get run out of the room or hung up on, because there's a built in filter for 'is this gross / will it harm my reputation to publish,'

If there was no such filter in the guy who asked the bot to create this, what makes you think every illustrator has one? How do you know it wasn't an illustrator who made the request?

The problem here is human behavior. Not the machine's ability to make such things.

AI is just the latest way to give instructions to a computer. That used to be a difficult problem that required expertise. Now we've given that power to immoral imbeciles. Rather than take the technology away entirely (which may be the only real solution, since LLMs are so easy to trick even with a ton of anti-abuse stuff in their system prompts), perhaps we should work on taking away immoral imbeciles' ability to use them instead.

Do I know how to do that without screwing over everyone's right to privacy? No. That, too, may not be possible.

[–] riskable@programming.dev 18 points 2 weeks ago (1 children)

Correction: Newer versions of ChatGPT (GPT-5.x) are failing in insidious ways. The article makes no mention of the other popular services or the dozens of open source coding-assist AI models (e.g. Qwen, gpt-oss, etc.).

The open source stuff is amazing and improves just as quickly as the big AI options. But it's boring, so it doesn't make the news.

 

Came pre-lubed and ready for battle

 
 

I've heard this phrase used often by those on the right, but every time I hear it I can't help but laugh because of what I picture in my head. Perhaps my mental image is wrong, though! I want to read everyone else's depictions.

So as to not influence the responses I will not be sharing what I imagine a "woke mob" looks like.

104
Thanks Obama! (programming.dev)
submitted 2 years ago* (last edited 2 years ago) by riskable@programming.dev to c/politicalmemes@lemmy.world
 

Edit, since folks don't seem to get the joke: Obama's campaign slogan was "Hope"

 

This is the page where you can learn about things like Cunningham's Law, which states that every program attempts to expand until it can read mail.

 

If you watch movies and TV shows, you should learn about this to maximize your obscure knowledge of everyday things 👍

 

“...this is not a gun problem. This is a mental health problem, this is a social problem, this is a cultural problem, this is a spiritual problem." -Donald J Trump in April 2023

 
 
 