this post was submitted on 22 Sep 2025
1081 points (99.1% liked)

Microblog Memes

9745 readers

A place to share screenshots of Microblog posts, whether from Mastodon, tumblr, ~~Twitter~~ X, KBin, Threads or elsewhere.

Created as an evolution of White People Twitter and other tweet-capture subreddits.

Rules:

  1. Please put at least one word relevant to the post in the post title.
  2. Be nice.
  3. No advertising, brand promotion or guerrilla marketing.
  4. Posters are encouraged to link to the toot or tweet etc in the description of posts.

founded 2 years ago
Very much smart people (piefedimages.s3.eu-central-003.backblazeb2.com)
 
top 50 comments
[–] scrubbles@poptalk.scrubbles.tech 304 points 2 months ago (10 children)

The majority of "AI Experts" online that I've seen are business majors.

Then a ton of junior/mid software engineers who have used the OpenAI API.

Finally, there are the very, very few technical people who have interacted with models directly, maybe even trained some, and coded directly against them. And even then I don't think many of them truly understand what's going on in there.

Hell, I've been training models and using ML directly for a decade and I barely know what's going on in there. Don't worry, I get the image; I'm just calling out how frighteningly few people actually understand it, yet so many swear they know AI super well.

[–] waigl@lemmy.world 90 points 2 months ago* (last edited 2 months ago) (5 children)

And even then I don’t think many of them truly understand what’s going on in there.

That's just the thing about neural networks: nobody actually understands what's going on in there. We've put an abstraction layer over the computation, one we know we will never be able to pierce.

[–] notabot@piefed.social 58 points 2 months ago (1 children)

I'd argue we know exactly what's going on in there; we just don't necessarily know, for any particular model, why it's going on in there.

[–] GreenMartian@lemmy.dbzer0.com 22 points 2 months ago (2 children)

But, more importantly, who is going on in there?

[–] Klear@quokk.au 11 points 2 months ago (3 children)

And how is it going in there?

[–] GreenMartian@lemmy.dbzer0.com 23 points 2 months ago

Not bad. How's it going with you?

[–] jqubed@lemmy.world 8 points 2 months ago (1 children)

That’s what we’re trying to find out! We’re trying to find out who killed him, and where, and with what! [Image: Tim Curry in Clue shouting the above text]

[–] sp3ctr4l@lemmy.dbzer0.com 23 points 2 months ago* (last edited 2 months ago) (2 children)

Ding ding ding.

It all became basically magic, blind trial and error, roughly ten years ago with AlexNet.

After AlexNet, everything became more and more of a black box, opaque even to the actual PhD-level people crafting and testing these things.

Since then, it has basically been 'throw all existing information of any kind at the model' to train it better, plus a bunch of basically slapdash optimization attempts that work for largely 'I don't know' reasons.

Meanwhile, we could be pouring even 1% of the money going toward LLMs and convolutional-network-derived models into other paradigms, such as maybe trying to actually emulate real brains and real neuronal networks... but nope, everyone is piling into basically one approach.

That's not to say research on other paradigms is nonexistent, but it barely exists in comparison.

[–] SkyeStarfall@lemmy.blahaj.zone 8 points 2 months ago* (last edited 2 months ago) (3 children)

I'll give you the point regarding LLMs... but conventional neural networks? Nah. They've been used for a reason, and have generally been very successful where other methods failed. And there very much are investments into stuff with real brains or analog brain-like structures... it's just that it's far more difficult, especially as we have very little idea of how real brains work.

A big issue with digitally emulating real brain structures is that it's very computationally expensive. Real brains work using chemistry, after all, which isn't easy to simulate. There is research in this area, but it's mostly aimed at understanding brains better, not at any practical purpose, from what I know. And it wouldn't solve the black box problem either.

Neural networks are great at what they do, being a sort of universal statistics optimization process (to a degree; no free lunch, etc.). They've solved problems that resisted every previous approach and that are now considered mundane. Like, 15 years ago, would anyone really have thought your phone could detect what you took a picture of? That was considered practically impossible. Take this xkcd from a decade ago, for example: https://xkcd.com/1425/

In addition, there are avenues being explored such as "Explainable AI" and so on. The field is more varied and interesting than most people realize. And, yes, genuinely useful. And not every neural network is a massive large-scale one; many are small-scale and specialized.
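For a toy illustration of the explainability point: one simple "Explainable AI" technique is input saliency, i.e. asking which input features most influence a network's output. A minimal sketch (the tiny network and the input values here are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(4, 3)), rng.normal(size=(1, 4))

def model(x):
    return W2 @ np.maximum(0, W1 @ x)  # toy 3-4-1 ReLU network

# Saliency via finite differences: nudge each input a little and see how
# much the output moves. Larger magnitude = more influential feature.
x, eps = np.array([0.5, -1.2, 2.0]), 1e-5
saliency = np.array([
    (model(x + eps * np.eye(3)[i]) - model(x - eps * np.eye(3)[i]))[0] / (2 * eps)
    for i in range(3)
])
print(np.abs(saliency))
```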

[–] limelight79@lemmy.world 14 points 2 months ago* (last edited 2 months ago) (2 children)

I have a master's degree in statistics. This comment reminded me of a fellow statistics grad student who could not explain what a p-value was. I have no idea how he qualified for a graduate-level statistics program without knowing what a p-value was, but he was there. I'm not saying I'm God's gift to statistics, but a p-value is a pretty basic concept in statistics.

Next semester, he was gone. Transferred to another school and changed to major in Artificial Intelligence.

I wonder how he's doing...
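
(For anyone rusty: a p-value is the probability, assuming the null hypothesis is true, of observing data at least as extreme as what you actually got. A quick sketch with made-up numbers, using SciPy's one-sample t-test:)

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
sample = rng.normal(loc=0.3, scale=1.0, size=50)  # made-up measurements

# H0: the population mean is 0. A small p-value means data this extreme
# would be unlikely if H0 were true.
t_stat, p_value = stats.ttest_1samp(sample, popmean=0.0)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
```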

[–] catch22@programming.dev 10 points 2 months ago

Feature Visualization: How neural networks build up their understanding of images

https://distill.pub/2017/feature-visualization/

[–] expr@programming.dev 53 points 2 months ago (1 children)

Yeah, I've trained a number of models (as part of actual CS research, before all of this LLM bullshit), and while I certainly understand the concepts behind training neural networks, I couldn't tell you the first thing about what a model I trained is doing. That's the whole thing about the black box approach.

Also why it's so absurd when "AI" gurus claim they "fixed" an issue in their model that resulted in output they didn't want.

No, no you didn't.

[–] scrubbles@poptalk.scrubbles.tech 20 points 2 months ago

Love this because I completely agree. "We fixed it and it no longer does the bad thing." Uh, no, incorrect: unless you literally went through your entire dataset, stripped out every single occurrence of the thing, and retrained the model, there is no way you 100% "fixed" it.

[–] JandroDelSol@lemmy.world 42 points 2 months ago (2 children)

business majors are the worst i swear to god

[–] SexualPolytope@lemmy.sdf.org 36 points 2 months ago (1 children)

They are literally what's causing the fall of our society.

[–] Dogiedog64@lemmy.world 9 points 2 months ago

Objectively, per Ed Zitron.

[–] scrubbles@poptalk.scrubbles.tech 21 points 2 months ago (1 children)

Didn't you know? Being adept at business immediately makes you an expert in many science and engineering fields!

[–] GreenShimada@lemmy.world 32 points 2 months ago (1 children)

I have personally told coworkers that if they train a custom GPT, they should put "AI expert" on their resume, as it's more than 99% of people have done. And 99% of those people did nothing more than trick ChatGPT into doing something naughty once, a year ago, and now consider themselves "prompt engineers."

[–] scrubbles@poptalk.scrubbles.tech 8 points 2 months ago

Absolutely agree there

[–] skisnow@lemmy.ca 18 points 2 months ago (1 children)

I’ve given up attending AI conferences, events, and meetups in my city for this exact reason. Show up for a talk called something like “Advances in AI” or “Inside AI” by a supposed guru from an AI company, get a 3-hour PowerPoint telling you to stop making PowerPoints by hand and start using ChatGPT to do it, concluding with a sales pitch for their 2-day course on how to get rich creating Kindle ebooks en masse.

[–] scrubbles@poptalk.scrubbles.tech 8 points 2 months ago

Even the dev-oriented ones are painfully like this too. Why would you make your own when you can subscribe to ours instead? Just sign away all of your data and call this API, which will probably change in a month. You'll be so happy!

[–] FauxLiving@lemmy.world 10 points 2 months ago (3 children)

Hell, I’ve been training models and using ML directly for a decade and I barely know what’s going on in there.

Outside of low dimensional toy models, I don’t think we’re capable of understanding what’s happening. Even in academia, work on the ability to reliably understand trained networks is still in its infancy.

[–] Treczoks@lemmy.world 7 points 2 months ago

NONE of them knows what's going on inside.

We are right back in the age of alchemy, where people speaking Latin and Greek threw things together more or less at random to see what happened, all the while claiming to be trying to make gold to keep the cash flowing.

[–] brucethemoose@lemmy.world 124 points 2 months ago (1 children)

It was the same with crypto TBH. It was a neat niche research interest until pyramid schemers with euphemisms for titles got involved.

[–] UnderpantsWeevil@lemmy.world 44 points 2 months ago (4 children)

With crypto, it was largely MLM scammers who started pumping it (futilely, for the most part) until Ross Ulbricht and the Silk Road leveraged it for black-market sales.

Then Bitcoin, specifically, took off as a means of subverting bank regulations on financial transactions. This encouraged more big-ticket speculators to enter the market, leading to the JP Morgan sponsorship of Ethereum (NFTs were a big part of this scam).

There's a whole historical pedigree to each major crypto offering. Solana, for instance, is tied up in Howard Lutnick's play at crypto through Cantor Fitzgerald.

[–] brucethemoose@lemmy.world 20 points 2 months ago* (last edited 2 months ago)

Interesting.

I guess AI isn't so dissimilar, with major 'sects' having major billionaire/corporate backers, sometimes aiming for specific niches.

Anthropic was rather infamously funded by FTX. DeepSeek came from a quant trading (and, to my memory, crypto mining) firm, and there's loose evidence the Chinese govt is 'helping' all its firms with data (or that they're sharing it with each other under the table, somehow). Many say Zuckerberg open-sourced Llama to 'poison the well' over OpenAI going closed.

[–] FauxLiving@lemmy.world 12 points 2 months ago (6 children)

Silk Road and other black-market vendors existed well before the scams started. You could mail-order drugs online when Bitcoin was under $1; the first bubble pushed the price to $30 before it crashed back under $1. THEN the scams and market manipulation took off.

Later people forked the project to create new chains in order to run rug pulls and other modern crypto scams.

[–] killeronthecorner@lemmy.world 71 points 2 months ago (1 children)

This image is clearly of my hands with an elastic band at the back of class two decades ago

[–] cows_are_underrated@feddit.org 15 points 2 months ago* (last edited 2 months ago) (1 children)

Yeah but why am I arguing with them?

[–] killeronthecorner@lemmy.world 10 points 2 months ago

Maybe it's because they were stretching.

[–] Infernal_pizza@lemmy.dbzer0.com 51 points 2 months ago (7 children)

OK but what actually is this image?

[–] SatyrSack@lemmy.sdf.org 92 points 2 months ago (2 children)

Basic model of a neural net. The post is implying that you're arguing with bots.

https://en.wikipedia.org/wiki/Neural_network_(machine_learning)

[–] CookieOfFortune@lemmy.world 11 points 2 months ago (3 children)

Wouldn’t a bot recognize this though?

[–] SatyrSack@lemmy.sdf.org 45 points 2 months ago

A bot might, but this post is pointing out how common it is for people who consider themselves AI experts not to recognize this diagram, which is basically part of AI 101.

[–] driving_crooner@lemmy.eco.br 13 points 2 months ago (1 children)

They're not saying that bots are asking what the image is, but that users (maybe bots, maybe not) who sell themselves as AI/ML experts are.

[–] Asetru@feddit.org 57 points 2 months ago

Illustration of a neural network.

[–] Gladaed@feddit.org 31 points 2 months ago (3 children)

The simplest neural network (simplified). You input a set of properties (first column). Then you compute a number of weighted sums of all of them, each with DIFFERENT weights (first set of lines). Then you apply a non-linearity, e.g. set negative values to 0 and keep the rest as-is (not shown).

You repeat this, with potentially different numbers of outputs, any number of times.

Then do it once more, but so that the number of outputs matches the dimension of your desired output, e.g. 2 if you want the sum of the inputs and their product computed (which is a fun exercise!). You may want to skip the non-linearity here or do something special™. See the sketch below.
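
A minimal NumPy sketch of that forward pass (the sizes and weights are arbitrary, and the network is untrained, so the output is noise until you fit the weights):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical sizes: 2 input properties -> 8 hidden units -> 2 outputs.
W1 = rng.normal(size=(8, 2))   # the "first set of lines" in the diagram
W2 = rng.normal(size=(2, 8))   # the second set

def forward(x):
    h = np.maximum(0, W1 @ x)  # weighted sums, then the non-linearity (ReLU)
    return W2 @ h              # output layer, non-linearity skipped

# Untrained, so this is noise; training would adjust W1 and W2 until
# forward([x, y]) approximates [x + y, x * y].
print(forward(np.array([3.0, 4.0])))
```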

[–] iAvicenna@lemmy.world 47 points 2 months ago (3 children)

Wait till you talk to LinkedIn people interested in Quantum Physics

[–] kSPvhmTOlwvMd7Y7E@lemmy.world 31 points 2 months ago (3 children)

Hot take: Adding "Prompt expert" to a resume is like adding "professional Googler"

[–] echodot@feddit.uk 17 points 2 months ago (1 children)

There used to be some skill involved in getting search engines to give you the right results; these days, not so much. But originally you did have to feed in the right kind of search terms, and a lot of people couldn't work that out.

Many years ago, back before Google became so dominant, I had a co-worker who could not get her head around the idea that you didn't have to ask a search engine in the form of a question with a question mark on the end. It used to be somewhat of a skill.

[–] hansolo@lemmy.today 16 points 2 months ago (4 children)

This is actually very true. Though I always did object to knowing how to use Boolean operators in Google coming to be called "dorking." I amassed a sizeable MP3 collection in the early aughts thanks to searching ".mp3" and finding people's public folders filled with their CD rips. Just out there, freely hanging in the internet wind.

These days SEO has rendered Google itself borderline useless, and IIRC they removed some operators from use at some point. I have to use DDG, Brave, and Leta (searching Google) if I want to find anything that's not just a URL for an obvious thing. And half the time none of that works anyway, and I can't even find things I've found previously.

[–] sandywarhole@lemmy.zip 16 points 2 months ago (1 children)

Isn't this the Trial of the Sekhemas in PoE2?

[–] AdrianTheFrog@lemmy.world 16 points 2 months ago

Probably bc they forgot the bias nodes

(/s, but really, I don't understand why no one ever includes them in these diagrams; see the sketch below)
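
For reference, a single layer with the bias term included might look like this (a sketch, assuming a ReLU non-linearity):

```python
import numpy as np

def layer(x, W, b):
    # b is the bias vector the diagrams tend to omit: each unit adds a
    # learned offset before the non-linearity, shifting where ReLU activates.
    return np.maximum(0, W @ x + b)
```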

[–] Danquebec@sh.itjust.works 14 points 2 months ago

Even I know what this is and I don't have a background in AI/ML.

[–] zr0@lemmy.dbzer0.com 12 points 2 months ago

Same as if you’d ask a crypto bro how a blockchain actually works. All those self-proclaimed Data Scientists who managed to use PyTorch once by following a tutorial just don't want to die.
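
For what it's worth, the core idea a crypto bro should be able to explain fits in a few lines: blocks chained together by hashes. A minimal sketch (proof-of-work, signatures, and consensus all omitted):

```python
import hashlib
import json
import time

def make_block(data, prev_hash):
    block = {"time": time.time(), "data": data, "prev": prev_hash}
    # Each block commits to the previous block's hash, so editing any
    # block invalidates every block after it.
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()
    ).hexdigest()
    return block

chain = [make_block("genesis", "0" * 64)]
chain.append(make_block("alice pays bob 1", chain[-1]["hash"]))
chain.append(make_block("bob pays carol 2", chain[-1]["hash"]))
print(chain[-1]["hash"])
```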
