this post was submitted on 10 Nov 2025

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

founded 2 years ago

Want to wade into the sandy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

[–] wizardbeard@lemmy.dbzer0.com 16 points 1 week ago (last edited 1 week ago)

lemmy.ml by way of hexbear's technology comm: The Economist is pushing phrenology. Everything old is new again!

cross-posted from: https://lemmy.ml/post/38830374

screenshot of text "Imagine appearing for a job interview and, without saying a single word, being told that you are not getting the role because your face didn’t fit. You would assume discrimination, and might even contemplate litigation. But what if bias was not the reason? What if your face gave genuinely useful clues about your probable performance at work? That question is at the heart of a recent research"

[...]

screenshot of text "a shorter one. Some might argue that face-based analysis is more meritocratic than processes which reward, say, educational attainment. Kelly Shue of the Yale School of Management, one of the new paper’s authors, says they are now looking at whether AI facial analysis can give lenders useful clues about a person’s propensity to repay loans. For people without access to credit, that could be a blessing."

tweet

economist article

archive.is paywall bypass

https://en.wikipedia.org/wiki/Phrenology


EDIT: Apparently based off something published by fucking Yale:

https://insights.som.yale.edu/insights/ai-photo-analysis-illuminates-how-personality-traits-predict-career-trajectories

https://insights.som.yale.edu/sites/default/files/2025-01/AI%20Personality%20Extraction%20from%20Faces%20Labor%20Market%20Implications_0.pdf


Reminds me of the "tech-bro invents revolutionary new personal transport solution: a train!" meme, but with racism. I'll be over in the angry dome.

[–] bitofhope@awful.systems 13 points 1 week ago

What does it tell about a scientist that they see a wide world of mysteries to dive into and the research topic they pick is "are we maybe missing out on a way we could justify discriminating against people for their innate characteristics?"

"For people without access to credit, that could be a blessing" fuck off no one is this naive.

I remember, back before I realized just how full of shit Siskind was, I used to buy into some of the narrative re: “credentialism,” so I understand the way they’re trying to sell it here. But even extending far more benefit than mere doubt can justify, we’re still looking at yet another case of trying to create a (pseudo)scientific solution to a socially constructed problem. Like, if the problem is that bosses and owners are trying to find the best candidate, we don’t need new and exciting ways to discriminate; they could just actually invest in a process for doing that. But actually solving that problem would inconvenience the owning/managing classes and doesn’t create opportunities to further entrench racial biases in the system. Clearly using an AI-powered version of the punchline from “how racist were the old times” commentary is better.
