BlueMonday1984

joined 2 years ago
[–] BlueMonday1984@awful.systems 5 points 2 months ago (2 children)

Nvidia and California College of the Arts Enter Into a Partnership

Oh, I'm sure the artists enrolling at the CCA are gonna be so happy to hear they've been betrayed

The collaboration with CCA is described in today’s announcement as aiming to “prepare a new generation of creatives to thrive at the intersection of art, design and emerging technologies.”

Hot take: There is no "intersection" between these three, because the "emerging technologies" in question are a techno-fascist ideology designed to destroy art for profit

[–] BlueMonday1984@awful.systems 11 points 2 months ago

And Copilot hallucinated all the way through the study.

HORRIFYING: The Automatic Lying Machine Lied All The Way Through

The evaluation did not find evidence that time savings have led to improved productivity, and control group participants had not observed productivity improvements from colleagues taking part in the M365 Copilot pilot.

SHOCKING: The Mythical Infinite Productivity Machine Is A Fucking Myth

At least 72% of the test subjects enjoyed themselves.

Gambling and racism are two of the UK's specialties, and AI is very good at both of those. So on this statistic, I am not shocked.

[–] BlueMonday1984@awful.systems 2 points 2 months ago

Is there already a word for “an industry which has removed itself from reality and will collapse when the public’s suspension of disbelief fades away”?

If there is, I haven't heard of it. To try and preemptively coin one, "artificial industry" ("AI" for short) would be pretty fitting - far as I can tell, no industry has unmoored itself from reality like this until the tech industry pulled it off via the AI bubble.

Calling this just “a bubble” doesn’t cut it anymore; they’re just peddling sci-fi ideas now. (Metaverse was a bubble, and it was stupid as hell, but at least those headsets and the legless avatars existed.)

I genuinely forgot the metaverse existed until I read this.

[–] BlueMonday1984@awful.systems 9 points 2 months ago

New post from tante: The “Data” Narrative eats itself, using the latest Pivot to AI as a jumping off point to talk about synthetic data.

[–] BlueMonday1984@awful.systems 3 points 2 months ago

Naturally, the best and most obvious fix — don’t hoard all that shit in the first place — wasn’t suggested.

At this point, I'm gonna chalk the refusal to stop hoarding up to ideology more than anything else. The tech industry clearly sees data not as information to be collected sparingly, used carefully, and deleted when necessary, but as Objective Reality Units^tm^ which are theirs to steal and theirs alone.

[–] BlueMonday1984@awful.systems 14 points 2 months ago (8 children)

Starting things off with a newsletter by Jared White that caught my attention: Why “Normies” Hate Programmers and the End of the Playful Hacker Trope, which directly discusses how the public perception of programmers has changed for the worse, and how best to rehabilitate it.

Adding my own two cents, the rise of gen-AI has definitely played a role here - I'm gonna quote Baldur Bjarnason directly, since he said it better than I could:

[–] BlueMonday1984@awful.systems 5 points 2 months ago

If AI slop is an insult to life itself, then this shit is an insult to knowledge. Any paper that actually uses "synthetic data" should be immediately retracted (and ideally destroyed altogether), but it'll probably take years before the poison is purged from the scientific record.

Artificial intelligence is the destruction of knowledge for profit. It has no place in any scientific endeavor. (How you managed to maintain a calm, detached tone when talking about this shit, I will never know.)

[–] BlueMonday1984@awful.systems 5 points 2 months ago (1 child)

Saw an AI-extruded "art" "timelapse" in the wild recently - the "timelapse" in question isn't gonna fool anyone who actually cares about art, but it's Good Enough^tm^ to pass muster on someone mindlessly scrolling, and its creation serves only to attack artists' ability to prove their work was human made.

This isn't the first time AI bros have pulled this shit (Exhibit A, Exhibit B), by the way.

[–] BlueMonday1984@awful.systems 5 points 2 months ago

Burke and Goodnough are working to rectify the report. That sounds like removing the fake stuff but not the conclusions based on it. Those were determined well ahead of time.

In a better world, those conclusions would've been immediately thrown out as lies, and Burke and Goodnough would've been immediately fired. We do not live in that better world, but a man can dream.

[–] BlueMonday1984@awful.systems 5 points 2 months ago

This isn't the first time I've heard about this - Baldur Bjarnason has talked before about how text extruders can be poisoned to alter their outputs, noting the potential for manipulating search results and/or serving propaganda.

Funnily enough, calling a poisoned LLM a "sleeper agent" wouldn't be entirely inaccurate - spicy autocomplete, by definition, cannot be aware that its word-prediction attempts are being manipulated to produce specific output. That's still treating spicy autocomplete with more sentience than it actually has, though.

[–] BlueMonday1984@awful.systems 11 points 2 months ago

Not to mention, Cursor's going to be training on a lot of highly sensitive material (sensitive data, copyrighted code, potential trade secrets) - the moment that shit starts to leak, all hell's gonna break loose on the legal front.

[–] BlueMonday1984@awful.systems 7 points 2 months ago

With AI, of course.
