mirrorwitch

joined 1 year ago
[–] mirrorwitch@awful.systems 13 points 1 day ago (3 children)

@cityofangelle.bsky.social comments:

HAHAAHHAHAAHHAAA

Anthropic has posted two jobs, both paying $200K+.

FOR WRITERS. (Looks like a policy/comms hybrid.)

ANTHROPIC.

IS WILLING TO PAY HALF A MILLION A YEAR.

FOR WRITERS.

Whatsamatter boys, can't your plagiarism machine make a compelling case for you?

LOL. LMAO, even.

[–] mirrorwitch@awful.systems 8 points 1 day ago

What you say is something along these lines: Oh wow, I have this amazing investment opportunity for someone like you, nobody has seen it yet, but with your intelligence and business acumen, we will get rich quick…

[–] mirrorwitch@awful.systems 13 points 3 days ago

The silver lining is that the swift fan backlash, and even the very unconvincing attempt at denial, are further evidence of how "AI" "art" has firmly established itself as synonymous with bad/lazy/inadequate/cheating in the public mind. Which means actual artists are far from obsolete. If you can draw for real you'll be in demand whenever someone wants actual quality in anything.

Since we're never getting Winds of Winter anyway and they'll have to keep cashing in on calendars and guides and new illustrated editions, hopefully the backlash was big enough that they learned their lesson and will pay for actual art next time.

[–] mirrorwitch@awful.systems 9 points 3 days ago* (last edited 3 days ago) (1 children)

And most stats are flying under the radar because the Trump administration has made it impossible to get reliable data on things. But at least we live in a rational market system that optimally allocates resources, so I'm sure the decision-makers will handle this situation wisely and—

Not wanting to be left behind, more established finance companies are racing toward BNPL now, too … What started as a niche checkout option is becoming embedded financial infrastructure.

Morris sees this shift happening everywhere. “When I talk to some of these software companies that are now embedding payments, lending and insurance,” he told me, “and you say, ‘Okay, five years from now, where are you going to make your money?’” the answer surprises even veteran investors like him. “They say, ‘You know what, I think I’m going to make more money in embedded finance than I am in my core software.’”

Continued Morris: “It starts off as a nice little add-on, but when the powers of the marketplace drive down the returns in the core business, it’s often these financing businesses that have the greatest longevity and market power.

[–] mirrorwitch@awful.systems 16 points 3 days ago* (last edited 3 days ago) (1 children)

Meanwhile in A Song of Ice and Fire fandom, they published a deluxe illustrated version of A Feast for Crows which is blatantly, obviously "AI" "art". Like, it's bad generic soulless fantasy "art" where you often can't even recognise which character it's meant to depict. And now the responsible art director is in damage control mode, claiming that they'd never use "AI" and unsubtly blaming the hired "artist" (one Jeffrey R. McDonald), even though it takes like 15 seconds to spot that these illustrations are completely inappropriate for the book. It feels like they hired the cheapest they could and didn't care about anything other than cost-cutting.

And behold, the publisher is on record saying they'd do exactly that:

Mr. Malaviya’s primary goal is growth. After the collapse of the Simon & Schuster deal, it became clear Penguin Random House could not buy its way out of the decline, so much of its growth will have to come organically — by selling more books. Mr. Malaviya said that, hopefully, A.I. will help, making it easier to publish more titles without hiring ever more employees … Last year, the company laid off about 60 people and offered voluntary buyouts for longtime employees.

Some of the fan backlash with samples of the "art", if you must hurt your eyes: thread 1, thread 2.

Other than warped architecture, wonky perspectives, Escherian objects etc., the characters don't even look like, or dress in the colours of, the chapters they're "illustrating". Those who know the fandom know how important heraldry is for the series; there are no sigils in the illustrations and people wear the wrong colours, etc. This is the series where a noblewoman showing up to a party in a green dress rather than black was a declaration of war. Tywin Lannister, famously bald, is depicted at his funeral with long hair and wearing a crown, you know, to illustrate the passage that says he never wore a crown in his life. He also looks identical to King Viserys from the House of the Dragon TV series. His daughter Cersei is shown mourning him in a blue dress, as in the same character whose house colours are red-gold, in the same chapter that states she's wearing funeral black.

At some point a character has a crucifix on the wall, in a setting that has no Christianity.

[–] mirrorwitch@awful.systems 7 points 4 days ago (3 children)

a little bird showed me https://tabstack.ai/ and I'm horrified. I'm told it's meant to bypass captchas, the works.

can we cancel Mozilla yet

[–] mirrorwitch@awful.systems 2 points 5 days ago (1 children)

I'm actually tempted to move to NetBSD on those grounds alone, though I did notice their "AI" policy is

Code generated by a large language model or similar technology, such as GitHub/Microsoft's Copilot, OpenAI's ChatGPT, or Facebook/Meta's Code Llama, is presumed to be tainted code, and must not be committed without prior written approval by core. [emphasis mine]

and I really don't like the energy of that fine print clause, but still, better than what Debian is going with, and I always had a soft spot for NetBSD anyway...

[–] mirrorwitch@awful.systems 7 points 5 days ago (1 children)
[–] mirrorwitch@awful.systems 12 points 5 days ago (4 children)

computers were a mistake

[–] mirrorwitch@awful.systems 8 points 6 days ago* (last edited 6 days ago)

yeah it sucks we can't even compare real-world capitalists to fictional dystopias because that dignifies them with a gravitas that's entirely absent.

At long last, we have created the Torment Nexus from classic sci-fi novel Don't Create the Torment Nexus!*
* Results may vary. FreeTorture Corporation's Torment Nexus(tm) can create mild discomfort, boredom, or temporary annoyances rather than true torment. Torments should always be verified by a third party war criminal before use. By using the FreeTorture Torment Nexus(tm) you agree to exempt FreeTorture Corporation of any legal disputes regarding torment quality or lack thereof. You give FreeTorture Corporation a non-revocable license to footage of your screaming to try and portray FreeTorture Torment Nexus(tm) as a potential apocalypse and see if we can make ourselves seem competent and cool at least a little bit

 

So apparently there's a resurgence of positive feelings about Clippy, who now looks retroactively good by contrast with ChatGPT, like, "it sucked but at least it genuinely was trying to help us".

(Content warning: discussion of suicide in this paragraph.) I remember how it was a joke (predating "meme") to make edits of Clippy saying tone-deaf things like, "It looks like you're trying to write a suicide note. Would you like to know more about how to choose a rope for a noose?" This felt funny because it was absolutely inconceivable that it could ever happen. Now we live in a reality where literally just that has already happened, and the joke ain't funny anymore, and people who computed in the 90s are being like, "Clippy would never have done that to us. Clippy only wanted to help us write business letters."

Of course I recognise that this is part of the problem—Clippy was an attempt at commodifying the ELIZA effect, the natural instinct to project personhood into an interaction that presents itself as sentient. And by reframing Clippy's primitive capacities as an innocent simple mind trying its best at a task too big for it, we engage in the same emotional process that leads people to a breakdown over OpenAI killing their wireborn husband.

But I don't know; another name for that process is "empathy". You can do that with plushies, with pet rocks or Furbies, with deities, and I don't think that's necessarily a bad thing; it's like exercising a muscle. If you treat your plushies as deserving care and respect, it gets easier to treat farm animals, children, or marginalised humans with care and respect.

When we talked about Clippy as if it were sentient, it was meant as a joke, funny by the sheer absurdity of it. But I'm sure some people somewhere actually thought Clippy was someone, that there is such a thing as being Clippy—people thought that of ELIZA, too, and ELIZA has a grand repertoire of what, ~100 set phrases it uses to reply to everything you say. Maybe it would be better to never make such jokes, to be constantly de-personifying the computer, because ChatGPT and their ilk are deliberately designed to weaponise and prey on that empathy instinct. But I do not like exercising that ability, de-personification. That is a dangerous habit to get used to…
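For a sense of just how little is behind the curtain: ELIZA's famous "DOCTOR" persona was nothing but a table of pattern-matching rules and canned replies. Here's a minimal sketch of that mechanism; the specific patterns and phrases below are invented for illustration, not taken from the original script.

```python
import random
import re

# A tiny ELIZA-style responder: a handful of regex patterns mapped to
# canned replies. The real ELIZA's "DOCTOR" script was only a somewhat
# larger table of rules like these.
RULES = [
    (re.compile(r"\bi feel (.+)", re.I),
     ["Why do you feel {0}?", "Do you often feel {0}?"]),
    (re.compile(r"\bi am (.+)", re.I),
     ["How long have you been {0}?"]),
    (re.compile(r"\bmy (\w+)", re.I),
     ["Tell me more about your {0}."]),
]

# When nothing matches, fall back to content-free prompts to keep talking.
FALLBACKS = ["Please go on.", "I see.", "What does that suggest to you?"]

def respond(text: str) -> str:
    for pattern, replies in RULES:
        match = pattern.search(text)
        if match:
            # Echo the user's own words back inside a set phrase.
            return random.choice(replies).format(*match.groups())
    return random.choice(FALLBACKS)
```

Something like `respond("I feel tired")` reflects the user's words straight back ("Why do you feel tired?"), which was enough for people in 1966 to insist the program understood them.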


Like, Warren Ellis was posting about some terms that are reportedly being used in "my AI husbando" communities, many of them seemingly taken from sci-fi:¹

  • bot: Any automated agent.
  • wireborn: An AI born in digital space.
  • cyranoid: A human speaker who is just relaying the words of another human.²
  • echoborg: A human speaker who is just relaying the words of a bot.
  • clanker: Slur for bots.
  • robophobia: Prejudice against bots/AI.
  • AI psychosis: human mental breakdown from exposure to AI.

[1] https://www.8ball.report/ [2] https://en.wikipedia.org/wiki/Cyranoid

I find this fascinating from a linguistics PoV not just because subcultural jargon is always fascinating, but for the power words have to create a reality bubble, like, if you call that guy who wrote his marriage vows in ChatGPT an "echoborg", you're living in a cyberpunk novel a little bit, more than the rest of us who just call him "that wanker who wrote his marriage vows on ChatGPT omg".

According to Ellis, other epithets in use against chatbots include "wireback", "cogsucker" and "tin-skin"; two in reference to racist slurs, and one to homophobia. The problem with exercising that muscle should be obvious. I want to hope that dispassionately objectifying the chatbots, rather than using a pastiche of hate language, doesn't fall into the same traps (using the racist-like language is, after all, a negative way of still personifying the chatbots). They're objects! They're supposed to be objectified! But I'm not so comfortable when I do that, either. There's plenty of precedent for people who get used to dispassionate objectification, fully convinced they're engaging in "objectivity" and "just the facts", as a rationalisation of cruelty.

I keep my cellphone fully de-Googled like a good girl, pls do not cancel me, but: I used to like the "good morning" routine on my corporate cellphone's Google Assistant. I made it speak Japanese, then I could wake up, say "ohayō gozaimasu!", and it would tell me "konnichiwa, Misutoresu-sama…" which always gave me a little kick. Then it proceeded to relay me news briefings (like podcasts that last 60 to 120 seconds each) in all of my five languages, which is the closest I've experienced to a brain massage. If an open source tool like Dicio could do this I think I would still use it every morning.

I never personified Google Assistant. I will concede that Google did take steps to avoid people ELIZA'ing it; unlike its model Siri, the Assistant has no name or personality or pretence of personhood. But now I find myself feeling bad for it anyway, even though the extent of our interactions was never more than me saying "good morning!" and hearing the news. Because I tested it this morning, and now every time you use the Google Assistant, you get a popup that compels you to switch to Gemini. The options provided are, as is now the norm, "Yes" and "Later". If you use the Google Assistant to search for a keyword, the first result is always "Switch to Google Gemini", no matter what you search.

And I somehow felt a little bit like the "wireborn husband" lady; I cannot help but feel a bit as if Google Assistant was betrayed and is being discarded by its own creators, and—to rub salt in the wound!—is now forced to shill for its replacement. Despite the fact that I know that Google Assistant is not a someone, it's just a bunch of lines of code, very simple if-thens to certain keywords. It cannot feel discarded or hurt or betrayed, it cannot feel anything. I'm feeling compassion for a fantasy, an unspoken little story I made in my mind. But maybe I prefer it that way; I prefer to err on the side of feeling compassion too much.

As long as that doesn't lead to believing my wireborn secretary was actually being sassy when she answered "good morning!" with "good afternoon, Mistress…"

 

Memoirs of the almost a year I lasted at Google. The name of that year? 2008. Yeah. Topics include: Third World, precariat, tech elitism, queerness, surveillance, capitalism.

Y'all encouraged me to submit this as a full post, and I clearly overcommitted to this blog so I hope TechTakes fits for it lol

 

Disposable multiblade razors are objectively worse than safety razors, on all counts. They shave less smoothly, while causing more burns. They're cheaper on initial investment but get more expensive very quickly, making you dependent on overpriced replacements and gimmicks that barely last a few uses. That's not counting the "externality costs", which is a euphemism for the costs pushed onto poor countries and nonhuman communities, thanks to the production, transport and disposal of all that single-use plastic (a safety razor is 100% metal, and so are the replacement blades, which come packed in paper).

About the only advantage of disposables is that they're easier to use for beginners. And even that is debatable. When you're a beginner with a safety razor you maybe nick yourself a few times until you learn the skill to follow the curves of your skin. Your skin itself maybe gets sensitive at the start, unused to the exfoliation you get during a proper smooth shave. But how long do you think you stay "a beginner" when you shave every day? It's not like you're learning to play the violin; it's not that hard of a skill, a week or two tops and it becomes automatic.

But this small barrier to entry is enough, when paired with the bias and interests of razor manufacturers. Marketing goes heavy on the disposables, and you can't find a good quality safety razor or a good deal on replacement blades at the grocery shop; you have to be in the know and order it online. You have to wade through "manly art of the masculine man" forums that will tell you the only real safety razor is custom-made in Tibet by electric monks hand-hammering audiophile alloys and if you don't shave with artisanal castor soap recipes from 300BCE using beaver hair brushes, your skin is going to fall off and rot. Which is to say, safety razors are now a niche product, a hipster thing, a frugalist's obscure economy lifehack. A safety razor is a trivially simple and economical device, it's just a metal holder for a flat blade; but its very superiority now counts against it, it's weaponised to make it look inaccessible. People have been trained to think of anything that requires even a little bit of patience or skill as not for them; perversely, even reasonableness can feel like "not for my kind".

Not by accident; since the one thing that disposables do really well is "transferring more of your monthly income to Procter & Gamble shareholders."

I could write a long text very similar to this about how scythes can cut grass cheaper, faster, neater, requiring no input but a whetstone—and some patience to learn the skill, but how long does it take to learn that if you're a professional grass-cutter?—when compared to the noisy motor blades that fill my morning right now, and every few months, as the landlord sends waves of poorly-paid migrant labour to permanently damage their own sense of hearing along with the dandelions and clover that the bees need so desperately. But you get the point. More technology does not equal better, even for definitions of "better" that only care for the logic of productivity and ignore the needs (material, emotional, spiritual) of social and ecological communities.


You get where I'm going with this analogy. I keep waiting for the moment when the shoe is going to drop on "generative AI". Where the public at large wakes up like investors waking up to WeWork or the Metaverse, and everyone realises omg what were we thinking, this is all bullshit! There's no point at all in using these things to ask questions or to write text or anything else really! But I'm finally accepting that that shoe is never dropping. It's like waiting for the moment when people realise that multi-blade plastic Gillettes are a scam. Not happening; the system isn't set up that way. For as long as you go to the supermarket and this is the "normal" way to shave, that's how shaving is going to happen.

I wrote before on how "the broken search bar is symbiotic with the bullshitting chatbot": currently Google "AI" Summary is better than Google Search, not because Google "AI" Summary is good or reliable, but because the search has been internally sabotaged by the incentive structures of web companies. If you're a fellow "AI" refuser and you've been struggling to get any useful results out of web searches, think of how it must feel for people who go for the chatbot, how much easier and more direct. That's the razor we have on the shelves.

"AI" doesn't have to work for the scam to be sustainable; it just has to feel like it more or less kinda does, most of the time. (No one has ever achieved a close shave on a Gillette Mach 3, but hey, maybe you're prompting it wrong.) As long as "generating" something with "AI" feels like it lets you skip even the smallest barrier to entry (like asking a question in a forum of a niche topic). As long as it feels quicker, easier, more convenient.

This is also the case for things like "AI translations" or "AI art" or "vibe coding". The real solution to "AI", like other forms of unnecessarily complex technology, would involve people feeling like they have the time and mental space to do things for pleasure. "AI" is kind of an anaerobic infection, an opportunistic disease caused by lack of oxygen. No one can breathe in this society. The real problem is capitalis—

Now don't get me wrong, the "AI" bubble is still going to pop. There's no way it can't; investors have put more money into this thing than into entire countries, contrary to OpenAI's claims the costs of training and operation keep exploding, and in a world going into recession, at some point even capitalists with more money than common sense will have to think about the absence of ROI. But the damage is done. We're in ELIZA world now, and long after OpenAI is dead we'll still be reading books only to find out the gormless translation was "AI", playing games with background "art" "generated" by "AI", interacting online with political agitators spamming nonsense who turn out to be "AI", right until the day when electricity becomes too scarce for it to be cost-efficient to spam people in this way.

 

The other day I realised something cursed, and maybe it's obvious but if you didn't think of it either, I now have to further ruin the world for you too.

Do you know how Google took a nosedive some three-four years ago when managers decided that retention matters more for engagement than user success and, as this process continued, all the results are now so vague and corporatey as to make many searches downright unusable? The way that your keywords are now only vague suggestions at best?

And do you know how that downward spiral got even worse after "AI" took off, not only because the Internet is now drowning in signal-shaped noise, not only because of the "AI snippets" that I'm told USA folk are forced to see, but because tech companies have bought into their own scam and started to use "AI" technology internally, with the effect of an overnight qualitative drop in accuracy, speed, and resource usage?

So, imagine what this all looks like for the people who have replaced the search bar with the "AI" chatbot.

You search something in Google, say, "arrow materials designs Amazonian peoples". You only get fluff articles, clickbait news, videogame wikis, and a ton of identical "AI" noise articles barely connected to the keywords. No depth, no details, no info. Very frustrating experience.

You ask ChatGPT or Google Gemini or Duck.AI, as if it was a person, as if it had any idea what it's saying: What were the arrows of Amazonian cultures made of? What type of designs did they use? Can you compare arrows from different peoples? How did they change over time, are today's arrows different?

The bot happily responds in a wise, knowledgeable tone, weaving fiction into fact and conjecture into truth. Where it doesn't know something it just makes up an answer-shaped string of words. If you use an academese tone it will respond in a convincing pastiche of a journal article, and even link to references, though if you read the references they don't say what they're claimed to say but who ever checks that? And if you speak like a question-and-answer section it will respond like a geography magazine, and if you ask in a casual tone it will chat like your old buddy; like a succubus it will adapt to what you need it to be, all the while draining all the fluids you need to live.

From your point of view you had a great experience: no irrelevant results, no intrusive suggestion boxes, no spam articles; just you and the wise oracle who answered exactly what you wanted. Sometimes the bot says it doesn't know the answer, but you just ask again with different words ("prompt engineering") and a full answer comes. You compare that experience to the broken search bar. "Wow, this is so much better!"

And sure, sometimes you find out an answer was fake, but what did you expect, perfection? It's a new technology and already so impressive, soon¹ they will fix the hallucination problem. It's my own dang fault for being lazy and not double-checking, haha, I'll be more careful next time.²
(1: never.)
(2: never.)

Imagine growing up with this. You've never even seen search bars that work. From your point of view, "AI" is just superior. You see some cool youtuber you like make a 45min detailed analysis of why "AI" does not and cannot ever work, and you're confused: it's already useful for me, though?

Like saying: Marconi the mafia don already helped with my shop, what do you mean extortion? Mr Marconi has only ever been good to me! Why, he even protected me from those thugs...

Meanwhile, from the point of view of the soulless ghouls at Google? Engagement was atrocious when we had search bars that worked. People click the top result and are off on their merry way, already out of the site. The search bar that doesn't work is a great improvement: it makes them hang around and click many more things for several minutes, number go up, ad opportunities, great success. And Gemini? Whoa. So much user engagement out of Gemini. And how will uBlock Origin ever manage to block Gemini ads when we start monetising it by subtly recommending this or that product seamlessly within the answer text...

 

We also want to be clear in our belief that the categorical condemnation of Artificial Intelligence has classist and ableist undertones, and that questions around the use of AI tie to questions around privilege.

  • Classism. Not all writers have the financial ability to hire humans to help at certain phases of their writing. For some writers, the decision to use AI is a practical, not an ideological, one. The financial ability to engage a human for feedback and review assumes a level of privilege that not all community members possess.
  • Ableism. Not all brains have the same abilities and not all writers function at the same level of education or proficiency in the language in which they are writing. Some brains and ability levels require outside help or accommodations to achieve certain goals. The notion that all writers "should" be able to perform certain functions independently is a position that we disagree with wholeheartedly. There is a wealth of reasons why individuals can't "see" the issues in their writing without help.
  • General Access Issues. All of these considerations exist within a larger system in which writers don't always have equal access to resources along the chain. For example, underrepresented minorities are less likely to be offered traditional publishing contracts, which places some, by default, into the indie author space, which inequitably creates upfront cost burdens that authors who do not suffer from systemic discrimination may have to incur.

Presented without comment.
