this post was submitted on 01 May 2026
244 points (91.8% liked)

PC Gaming

[–] arcine@jlai.lu 38 points 9 hours ago (1 children)

In 2018 I was also enthusiastic about OpenAI; I didn't expect them to try to destroy the world. They were literally selling themselves on doing the exact opposite.

[–] Tollana1234567@lemmy.today 2 points 55 minutes ago

it likely was a grift from the start.

[–] mlg@lemmy.world 17 points 10 hours ago* (last edited 10 hours ago) (1 children)

I forget, but wasn't Anthropic mostly made up of former OpenAI engineers who left after Altman went off the deep end?

Not that it makes them any better, but I'm pretty sure GPT-3 was the nexus point for the current mess we have now, which means they hopped off right after it was internally finished and made their own company.

2018 was a full two years before that point, and back then AI was still primarily stuff like OpenCV and PyTorch projects: things you could legitimately run on one or two workstation GPUs, or even a cheap tensor-core add-on if you didn't want to run on CPU.

[–] unglueclass23@programming.dev 1 points 1 hour ago

2018 was a full 2 years before that point, and back then AI was still primarily stuff like OpenCV, Pytorch projects, etc. that were things you could legitimately run on one or two workstation GPUs or even a cheap tensor core addon if you didn't want to run on CPU.

It was around the time when OpenAI showcased OpenAI Five. I was still playing Dota 2 back then and found it really impressive. Here's a decent summary video

[–] vane@lemmy.world 7 points 11 hours ago* (last edited 11 hours ago)

GPT-2 was released 14 February 2019, so in 2018 they actually were a cool open-source gaming-AI research foundation.

[–] VibeSurgeon@piefed.social 13 points 17 hours ago

Remember that OpenAI were the people behind the DotA-playing model, back in 2017, called OpenAI Five.

[–] fox2263@lemmy.world 46 points 23 hours ago (1 children)
[–] bridgeenjoyer@sh.itjust.works 8 points 16 hours ago* (last edited 16 hours ago) (1 children)

Insane how true this is.

No wonder 1999 feels like a million years ago.

[–] UPGRAYEDD@lemmy.world 1 points 18 minutes ago

I was there, Gandalf. 1000 years ago. When Neo deleted me.

[–] stoly@lemmy.world 56 points 1 day ago (2 children)

I love how everyone is so desperate to make Gabe out to be a terrible person.

[–] Smaile@lemmy.ca 13 points 20 hours ago

It's crazy for people to take a stance against him considering how much he's done to protect the hobby from less reputable corps.

[–] osanna@lemmy.vg 3 points 1 day ago (3 children)

What would you call someone who has $1B worth of yachts while we're all struggling to eat and pay bills?

[–] Soulg@ani.social 4 points 4 hours ago (1 children)

I'm a lot less angry at him than the ones who seemingly use their wealth exclusively to make our lives worse. He just has a bunch of money to buy boats. He's the last on the list for me

[–] osanna@lemmy.vg -1 points 4 hours ago* (last edited 4 hours ago) (1 children)

Yes, but Valve still exploits people. They put on Steam sales explicitly to get people to buy more. Especially poor people, who buy games they otherwise might not be able to afford just because "it's on sale!"

[–] Threeme2189@sh.itjust.works 1 points 2 hours ago (1 children)

Oh no, they lower prices sometimes. Whatever shall we do except buy, buy, buy!?

[–] osanna@lemmy.vg 1 points 2 hours ago

Right. Because valve puts games on special just because they are good people. It’s absolutely NOT so people will buy more games than they otherwise would have. Definitely not. Nope

[–] nik282000@lemmy.ca 6 points 8 hours ago (1 children)

Like Torvalds, Stallman, and Wozniak, Gabe Newell had an idea of how he thought technology should work and has never let up on it. It's a shame the others couldn't profit as much as Gabe did.

The yacht hobby is a bit much but compared to the standard billionaire hobbies, destroying the planet, rape, and murder, he's pretty mild.

[–] Avicenna@programming.dev 1 points 1 hour ago

You don't call it a shame when it is a conscious choice:

"Wozniak has discussed his personal disdain for money and accumulating large amounts of wealth. He told Fortune magazine in 2017, "I didn't want to be near money, because it could corrupt your values ... I really didn't want to be in that super 'more than you could ever need' category.""

[–] Bigfishbest@lemmy.world 8 points 22 hours ago (1 children)
[–] osanna@lemmy.vg 6 points 21 hours ago

insert "they're the same thing" meme

[–] PonyOfWar@pawb.social 308 points 1 day ago (3 children)

Obligatory reminder that billionaires are not our friends. But also, donating to AI research in 2018 is quite a different matter than if he had done so in recent years. Most people in tech were somewhere between neutral and enthusiastic towards machine learning back then and few foresaw the monster it would become. Doubt he's as enthusiastic nowadays, considering what it did to Valve's hardware ambitions.

[–] greybeard@feddit.online 196 points 1 day ago (1 children)

OpenAI, back then, was also a very different organization. They were mostly a non-profit, claiming to be a research organization whose goals were to ensure AI benefited all of humanity. Hell, I'd say Whisper, which OpenAI did release openly, was very positive for humanity. It was when Sam Altman saw big dollar signs in GPT-2 and its successors that things started changing fast.

[–] zout@fedia.io 54 points 1 day ago (2 children)

Very much this. In 2023 there was a falling out between Altman and the board of OpenAI over this, and Altman was kicked out. However, some big investors (Microsoft) made a stink and reversed it.

[–] DragonTypeWyvern@midwest.social 1 points 39 minutes ago

And then they faked an employee letter and Lemmy sucked Altman's dick as the board was forced to resign in turn for having principles.

I remember your sins, hive mind.

[–] timestatic@feddit.org 18 points 1 day ago (1 children)

I think many employees close to Altman also went on strike or threatened to leave. But I think he's bad for the (now for-profit) company. They should've stayed non-profit.

[–] howrar@lemmy.ca 4 points 16 hours ago

It wasn't "many employees close to Altman". It was the entire company, including the people who initiated the process of getting him kicked out. The whole thing made absolutely no sense.

[–] pulsewidth@lemmy.world 23 points 1 day ago (2 children)
[–] FauxLiving@lemmy.world 32 points 1 day ago (23 children)

If you can mentally separate the technology from the capitalist orgy around trying to shoehorn LLMs into every possible thing, he's not wrong.

The technology has promise, but the reality of what it can be useful for is completely overshadowed by the hype frenzy declaring the end of all knowledge workers and creatives.

LLMs are significantly better at translation than anything else we've been able to design, for instance. But that's not flashy, and it doesn't generate seed funding or lure investors, so it's largely not what people think of when they hear "AI".

[–] 4grams@awful.systems 15 points 1 day ago

Right, he might be a little further down, but he’s absolutely still on the list. There are no good billionaires.

[–] commander@lemmy.world 58 points 1 day ago* (last edited 1 day ago) (1 children)

We're acting like people in the art community weren't hyped up over AI until it started generating images. Before ChatGPT, it was all about automating coding/IT and other jobs that aren't considered art. Back then it was all about how everyone could pursue their passions. The only people not excited were the transportation employees and factory workers who had been told by the general public how excited everyone was to replace them.

[–] loonsun@sh.itjust.works 31 points 1 day ago

As a social scientist, pre-ChatGPT NLP was like opening a whole new world of possibilities. We could finally analyze, at scale, one of the richest sources of behavioural data in an empirical, statistically driven manner.

Now, even as I do research with NLP to continue these goals, I can't bring myself to ever defend these tools. If they disappeared tomorrow, we'd lose a tool, but we'd prevent so much undue suffering.

[–] qaz@lemmy.world 110 points 1 day ago* (last edited 1 day ago) (1 children)
[–] brucethemoose@lemmy.world 30 points 1 day ago* (last edited 1 day ago) (1 children)

The writing was on the wall for years. I remember memes about Altman in machine learning forums/chatrooms circa 2020, and especially 2021.

Nothing's changed. Anyone in the space who actually looked at what he was doing, knew. Yet the bulk of the public (and investors) lapped the Tech Bro stuff up.

[–] mojofrododojo@lemmy.world 22 points 1 day ago (1 children)

Aaron Swartz said Altman was a sociopath years before AI was a gleam in anyone's eye.

The technologies with the worst potential outcomes will always be pioneered by people with no ethical or moral hangups getting in the way.

[–] bitjunkie@lemmy.world 11 points 1 day ago (1 children)

Which unfortunately are the same techs that will be elevated by our present economic structure, precisely because those traits are what enable them to make (or grift) a shitload of money.

[–] mojofrododojo@lemmy.world 8 points 1 day ago (2 children)

see:

Leaded Fuel and CFCs - the same fuckin guy!? goddamn hope there is a hell

[–] Kolanaki@pawb.social 62 points 1 day ago (4 children)

I mean, I probably would have invested in AI prior to seeing LLMs in action, too, hoping I was funding the cool kind of AI, not this lame shit.

[–] Rentlar@lemmy.ca 66 points 1 day ago* (last edited 1 day ago)

At that time it was still more of a research project than the "it's going to take over everything" hype and FUD machine.

His opinions on AI today seem more enthusiastic than I would be, but well clear of the delusional level of AI-boosters.

[–] ChicoSuave@lemmy.world 34 points 1 day ago

Was this article commissioned by Tim Sweeney?

[–] ParlimentOfDoom@piefed.zip 35 points 1 day ago

Before OpenAI about-faced on being open?

[–] timestatic@feddit.org 14 points 1 day ago* (last edited 1 day ago)

Back then they were still deep into research and the "Open" part of their name actually meant something. I don't like much about Musk, but I feel like it's true that they deceived the people who supported their initial mission, just to go private when the market went haywire for AI. I feel like shedding their non-profit status shouldn't have been an option, as so many people donated to them in good faith.
