[–] hrrrngh@awful.systems 8 points 3 months ago* (last edited 3 months ago)

Some light uplifting news amid *gestures at everything*. I saw this a minute ago from Jeff Atwood, the guy who runs Coding Horror and co-founded Stack Overflow and Discourse: https://www.reddit.com/r/IAmA/comments/1ifd3ys/im_giving_away_half_my_wealth_to_make_the/

No EA stuff! $1M each going to eight great charities and non-profits as far as I can tell: Children’s Hunger Fund, First Generation Investors, Global Refuge, NAACP Legal Defense and Educational Fund, PEN America, The Trevor Project, Planned Parenthood, and Team Rubicon. (from The Trevor Project's blog post)

[–] hrrrngh@awful.systems 1 point 5 months ago* (last edited 5 months ago)

I'm in the same boat. Markov chains are a lot of fun, but LLMs are way too formulaic. It's one of those things where AI bros will go, "Look, it's so good at poetry!!" but they have no taste and can't even tell that it sucks; LLMs just generate ABAB poems and getting anything else is like pulling teeth. The output from a Markov chain generator (MCG) is a little more garbled and broken, but it's a lot more interesting in my experience. Interesting content that's a little rough around the edges always wins over smooth, featureless AI slop in my book.
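For anyone who hasn't played with one: an MCG is just a lookup table from short word prefixes to the words that followed them in the training text, sampled at random. A minimal word-level sketch in Python (`groupchat.txt` is a made-up stand-in for whatever corpus you feed it):

```python
import random
from collections import defaultdict

def build_chain(text, order=2):
    """Map each `order`-word prefix to every word that followed it in the corpus."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def generate(chain, order=2, length=50):
    """Start from a random prefix and walk the chain, one random pick at a time."""
    out = list(random.choice(list(chain.keys())))
    for _ in range(length):
        followers = chain.get(tuple(out[-order:]))
        if not followers:  # dead end: this prefix only ever closed out the corpus
            break
        out.append(random.choice(followers))
    return " ".join(out)

corpus = open("groupchat.txt").read()  # hypothetical input file
print(generate(build_chain(corpus)))
```

Lower the order (or shrink the corpus) and the output gets more garbled and more fun; raise it and it starts parroting the source verbatim.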


slight tangent: I was interested in seeing how they'd work for open-ended text adventures a few years ago (back around GPT-2, when AI Dungeon launched), but the mystique did not last very long. Their output is awfully formulaic, and that has not changed at all in the years since. (of course, the tech-optimist goodthink way of framing this is "small LLMs are really good at creative writing for their size!")

I don't think most people can even tell the difference between a lot of these models. There was a snake oil LLM (more snake oil than usual) called Reflection 70B, and people could not tell it was a placebo. They thought it was higher quality and invented reasons why that had to be true.

Orange site examples:

> Like other comments, I was also initially surprised. But I think the gains are both real and easy to understand where the improvements are coming from. [ . . . ]

> I had a similar idea, interesting to see that it actually works. [ . . . ]

Reddit:

> I think that's cool, if you use a regular system prompt it behaves like regular llama-70b.

(??!!!)

> It's the first time I've used a local model and did [not] just say wow this is neat, or that was impressive, but rather, wow, this is finally good enough for business settings (at least for my needs). I'm very excited to keep pushing on it. Llama 3.1 failed miserably, as did any other model I tried.

For storytelling or creative writing, I would rather have the more interesting broken-English output of a Markov chain generator, or maybe a tarot deck or a D100 table. Markov chains are also genuinely great for random name generators. I've actually laughed at Markov chains before with friends when we throw a group chat into one and see what comes out. I can't imagine ever getting something like that from an LLM.
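The name generator version is the same idea at the character level: train on a list of names and sample letter by letter until an end marker. A rough sketch (the five training names are a toy example, not a real dataset):

```python
import random
from collections import defaultdict

def name_chain(names, order=2):
    """Map each `order`-letter context to the letters (or end marker) that follow it."""
    chain = defaultdict(list)
    for name in names:
        padded = "^" * order + name.lower() + "$"  # ^ pads the start, $ marks the end
        for i in range(len(padded) - order):
            chain[padded[i:i + order]].append(padded[i + order])
    return chain

def make_name(chain, order=2, max_len=12):
    """Sample one letter at a time until the end marker or the length cap."""
    name = "^" * order
    while len(name) < max_len + order:
        nxt = random.choice(chain[name[-order:]])
        if nxt == "$":
            break
        name += nxt
    return name[order:].capitalize()

chain = name_chain(["alice", "brianna", "carol", "daphne", "eleanor"])  # toy list
print([make_name(chain) for _ in range(5)])
```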

[–] hrrrngh@awful.systems 0 points 6 months ago* (last edited 6 months ago) (2 children)

Oh wow, Dorsey is the exact reason I didn't want to join it. Now that he's jumped ship, maybe I'll finally make an account.

Honestly, what could he even be doing at Twitter in its current state? Besides, I guess, getting that bag before it goes up or down in flames.

e: oh god it's a lot worse than just crypto people and Dorsey. Back to procrastinating