Architeuthis

joined 2 years ago
[–] Architeuthis@awful.systems 10 points 6 months ago

Oh no, the premise of money and capitalism, my only weakness.

[–] Architeuthis@awful.systems 38 points 6 months ago* (last edited 6 months ago) (2 children)

So many low-hanging fruits. Unbelievable fruits. You wouldn’t believe how low they’re hanging.

[–] Architeuthis@awful.systems 11 points 6 months ago

Zero interest rate period, when the taps of investor money were wide open and spraying at full volume, because literally any investment promising some sort of return was a better proposition than having your assets slowly diminished by, e.g., inflation in the usually safe investment vehicles.

Or something to that effect, I am not an economist.

[–] Architeuthis@awful.systems 5 points 6 months ago* (last edited 6 months ago) (1 children)

I can never tell: is there an actual 'experiment' taking place, with an LLM-backed agent actually trying stuff on a working VM, or are they just prompting a chatbot to write a variation of a story (or ten, or a million) about what it might have done given these problem parameters?

[–] Architeuthis@awful.systems 1 points 6 months ago* (last edited 6 months ago) (2 children)

I mean, you could have answered by naming one fabled new ability LLMs suddenly 'gained' instead of being a smarmy tadpole, but you didn't.

[–] Architeuthis@awful.systems 1 points 6 months ago (4 children)

What new AI abilities? LLMs aren't Pokémon.

[–] Architeuthis@awful.systems 1 points 6 months ago

It's useful insofar as you can accommodate its fundamental flaw of randomly making stuff the fuck up, say by having a qualified expert constantly comb its output instead of doing original work, and don't mind putting your name on low-quality derivative slop in the first place.

[–] Architeuthis@awful.systems 2 points 6 months ago* (last edited 6 months ago) (1 children)

In every RAG guide I've seen, the suggested system prompts always tend to include some more dignified variation of "Please for the love of god only and exclusively use the contents of the retrieved text to answer the user's question, I am literally on my knees begging you."
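
For reference, a minimal sketch of what that pattern looks like in code (the prompt wording and function names here are my own invention, not lifted from any particular guide or library):

```python
# Illustrative RAG prompt assembly -- the wording and names are mine,
# not from any specific guide or library.

RAG_SYSTEM_PROMPT = """You are a helpful assistant.
Answer the user's question using ONLY the retrieved context below.
If the context does not contain the answer, say you don't know.
Do not draw on any outside knowledge.

Context:
{context}
"""

def build_rag_messages(retrieved_chunks: list[str], question: str) -> list[dict]:
    """Stuff the retrieved text into the system prompt and pass the
    user's question along unchanged, chat-completions style."""
    context = "\n\n".join(retrieved_chunks)
    return [
        {"role": "system", "content": RAG_SYSTEM_PROMPT.format(context=context)},
        {"role": "user", "content": question},
    ]

# Note: the ONLY-use-the-context instruction is a request, not a constraint --
# nothing here mechanically prevents the model from hallucinating anyway.
```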

Also, if Reddit is any indication, a lot of people actually think that's all it takes, and that the hallucination stuff is just people using LLMs wrong. I mean, it would be insane to pour so much money into something so obviously fundamentally flawed, right?

[–] Architeuthis@awful.systems 1 points 6 months ago (1 children)

If you never come up with a marketable product you can remain a startup indefinitely.

[–] Architeuthis@awful.systems 1 points 6 months ago

"thinkers like computer scientist Eliezer Yudkowsky"

That's gotta sting a bit.
