antifuchs

joined 2 years ago
[–] antifuchs@awful.systems 8 points 3 days ago* (last edited 2 days ago) (3 children)

Mildly positive news: there is a fork of the Zed editor with the llm autocomplete stuff ripped out now: https://gram.liten.app/posts/first-release/

(I’ve used zed with the ai kill switch and really like the buffer/editing ux; but it’s always felt a bit gross, I’m excited to see where the fork goes)

[–] antifuchs@awful.systems 9 points 5 days ago

Yeah, they rebranded when they did the harebrained pivot to focus on cryptocurrencies.

[–] antifuchs@awful.systems 6 points 1 week ago* (last edited 1 week ago) (2 children)

Not… sneer? What is this?!

[–] antifuchs@awful.systems 7 points 2 weeks ago (9 children)

Good news, everyone’s favorite emacs is using AI now: https://www.vim.org/vim-9.2-released.php

[–] antifuchs@awful.systems 8 points 2 weeks ago

It’s a good day to read this announcement and then field a question from a pal about why their Spotify playlist plays in reverse

[–] antifuchs@awful.systems 4 points 3 weeks ago (1 children)

Love the idea of having the plagiarism machine do compliance work. The computer takes care of everything!

[–] antifuchs@awful.systems 7 points 1 month ago (1 children)

And of all possible things to implement, they chose Matrix. lol and lmao.

[–] antifuchs@awful.systems 4 points 1 month ago

Yeah, it’s an anti-human project on several fronts.

[–] antifuchs@awful.systems 4 points 1 month ago (2 children)

Of course! The funnel must let something through, otherwise there’s no reason to keep the call center around.

[–] antifuchs@awful.systems 12 points 1 month ago (4 children)

The single use case AI is very effective at: getting customers to leave one alone.

[–] antifuchs@awful.systems 8 points 1 month ago (2 children)

The market can remain irrational longer than you can remain liquid (a classic quote typically gifted to anyone who wants to “time the market”, but broadly applicable to just about anyone these days)

 

Got the pointer to this from Allison Parrish who says it better than I could:

it's a very compelling paper, with a super clever methodology, and (i'm paraphrasing/extrapolating) shows that "alignment" strategies like RLHF only work to ensure that it never seems like a white person is saying something overtly racist, rather than addressing the actual prejudice baked into the model.

 

They have Nik Suresh (the author) on, as well as Robert Evans. I haven’t listened to it all yet, but it’s fun so far.

 

They invited that guy back. I do have to admit, I admire his inability to read a room.
