lagrangeinterpolator

joined 8 months ago

Randomly stumbled upon one of the great ideas of our esteemed Silicon Valley startup founders, one that is apparently worth at least 8.7 million dollars: https://xcancel.com/ndrewpignanelli/status/1998082328715841925#m

Excited to announce we’ve raised $8.7 Million in seed funding led by @usv with participation from [list a bunch of VC firms here]

@intelligenceco is building the infrastructure for the one-person billion-dollar company. You still can’t use AI to actually run a business. Current approaches involve lots of custom code, narrow job functions, and old fashioned deterministic workflows. We’re going to change that.

We’re turning Cofounder from an assistant into the first full-stack agent company platform. Teams will be able to run departments - product/engineering, sales/GTM, customer support, and ops - entirely with agents.

Then, in 2026 we’ll be the first ones to demonstrate a software company entirely run by agents.

$8.7 million is quite impressive, yes, but I have an even better funding strategy for them. They can use their own product to become billionaires, at which point $8.7 million is a trivial 0.87% of their wealth. Are these guys hiring? I also have a great deal on the Brooklyn Bridge that I need to tell them about!

Our branding - with the sunflowers, lush greenery, and people spending time with their friends - reflects our vision for the world. That’s the world we want to build. A world where people actually work less and can spend time doing the things they love.

We’re going to make it easy for anyone to start a company and build that life for themselves. The life they want to build, and spend every day dreaming about.

This just makes me angry at how disconnected from reality these people are. All this talk about giving people better lives (and lots of sunflowers), and yet it is an unquestionable axiom that the only way to live a good life is to become a billionaire startup founder. These people have no understanding of any perspective outside their narrow culture, a culture that is currently enabling the rich and powerful to plunder this country.

When capitalism did contribute to innovation and technological advancement, it was through stuff like Bell Labs, which was funded by a corporation but functioned in practice like its own research institute. I think that the idea of Bell Labs is a little offensive to present day venture capitalists, though. What do you mean, innovation comes from scientists and engineers? We all know that innovation comes from plucky, young, hotshot founders with big ideas who go against conventional wisdom!

[–] lagrangeinterpolator@awful.systems 13 points 1 week ago (1 children)

These worries are real. But in many cases, they're about changes that haven't come yet.

Of all the statements that he could have made, this is one of the least self-aware. It is always the pro-AI shills who constantly talk about how AI is going to be amazing and have all these wonderful benefits next year (curve go up). I will also count the doomers who are useful idiots for the AI companies.

The critics are the ones who look at what AI is actually doing. The informed critics look at the unreliability of AI for any useful purpose, the psychological harm it has caused to many people, the absurd amount of resources being dumped into it, the flimsy financial house of cards supporting it, and at the root of it all, the delusions of the people who desperately want it to all work out so they can be even richer. But even people who aren't especially informed can see all the slop being shoved down their throats while not seeing any of the supposed magical benefits. Why wouldn't they fear and loathe AI?

[–] lagrangeinterpolator@awful.systems 13 points 2 weeks ago* (last edited 2 weeks ago) (7 children)

So many CRITICAL and MANDATORY steps in the release instruction file. As it always is with AI, if it doesn't work, just use more forceful language and capital letters. One more CRITICAL bullet point bro, that'll fix everything.

Sadly, I am not too surprised by the developers of Lean turning towards AI. The AI people have been quite interested in Lean for a while now since they think it is a useful tool to have AIs do math (and math = smart, you know).

[–] lagrangeinterpolator@awful.systems 11 points 2 weeks ago (2 children)

There are some comments speculating that some pro-AI people try to infiltrate anti-AI subreddits by applying for moderator positions and then shutting those subreddits down. I think this is the most reasonable explanation for why the mods of "cogsuckers" of all places are sealions for pro-AI arguments. (In the more recent posts in that subreddit, I recognized many usernames who were prominent mods in pro-AI subreddits.)

I don't understand what they gain from shutting down subreddits of all things. Do they really think that using these scummy tactics will somehow result in more positive opinions towards AI? Or are they trying the fascist gambit hoping that they will have so much power that public opinion won't matter anymore? They aren't exactly billionaires buying out media networks.

[–] lagrangeinterpolator@awful.systems 13 points 2 weeks ago* (last edited 2 weeks ago)

Don't forget the other comment saying that if you hate AI, you're just "vice-signalling" and "telegraphing your incuruosity (sic) far and wide". AI is just like computer graphics in the 1960s, apparently. We're still in early days guys, we've only invested trillions of dollars into this and stolen the collective works of everyone on the internet, and we don't have any better ideas than throwing more ~~money~~ compute at the problem! The scaling is still working guys, look at these benchmarks that we totally didn't pay for. Look at these models doing mathematical reasoning. Actually don't look at those, you can't see them because they're proprietary and live in Canada.

In other news, I drew a chart the other day, and I can confidently predict that my newborn baby is on track to weigh 10 trillion pounds by age 10.
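The baby-weight gag is exactly the naive-extrapolation fallacy in the "scaling is still working" pitch. Here's a minimal sketch of that fallacy with made-up numbers (a newborn who doubles in weight every three months, a real early-infancy trend, extended forever):

```python
# Made-up measurements in the spirit of the joke: take an early growth
# trend and extrapolate it naively, scaling-law style.
birth_weight_lb = 7.5
doubling_period_years = 0.25  # weight doubles every 3 months in early infancy

def naive_extrapolation(age_years: float) -> float:
    """Extend the early exponential growth curve indefinitely."""
    return birth_weight_lb * 2 ** (age_years / doubling_period_years)

print(f"Age 1:  {naive_extrapolation(1):,.0f} lb")   # already an impressive toddler
print(f"Age 10: {naive_extrapolation(10):.2e} lb")   # trillions of pounds
```

The curve fits the first few data points beautifully; the problem is assuming the regime that generated them never ends.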

EDIT: Rich Hickey has now disabled comments. Fair enough, arguing with promptfondlers is a waste of time and sanity.

[–] lagrangeinterpolator@awful.systems 13 points 2 weeks ago (7 children)

I went deep into the Yud lore once. A single fluke SAT score served as the basis for Yud's belief in his own world-changing importance. In middle school, he took the SAT and scored 670 verbal and 740 math (maximum 800 each), and the Midwest Talent Search contacted him to tell him that his scores were very high for a middle schooler. For all his professed attempts at humility, he also says that he was in the "99.9998th percentile" and "not only bright but waayy out of the ordinary."

I was in the math contest scene. I have good friends who did well on AP Calculus in middle school, and were skilled enough at contests that they would have easily gotten an 800 on the math SAT if they took it. Even so, there were middle schoolers who were far more skilled than them, and I have seen other people who were far less "talented" in middle school rise to great heights later in life. As it turns out, skills can be developed through practice.

Yud's performance would not even be considered impressive in the math contest community, let alone justify calling him one of the most important people in the world. Perhaps at the time, he didn't know better. But he decided to make this a core part of his self-identity. His life quickly spiraled out of control, starting with him refusing to attend high school.

[–] lagrangeinterpolator@awful.systems 18 points 2 weeks ago* (last edited 2 weeks ago) (7 children)

It is how professors talk to each other in ... debate halls? What the fuck? Yud really doesn't have any clue how universities work.

I am a PhD student right now so I have a far better idea of how professors talk to each other. The way most professors (in math/CS at least) communicate in a spoken setting is through giving talks at conferences. The cool professors use chalkboards, but most people these days use slides. As it turns out, debates are really fucking stupid for scientific research for so many reasons.

  1. Science assumes good faith from everyone, and debates are needlessly adversarial. This is why everyone just presents and listens to talks.
  2. Debates are actually really bad for the kind of deep analysis and thought needed to understand new research. If you want to seriously consider novel ideas, it's not so easy when you're expected to come up with a response in the next few minutes.
  3. Debates generally favor people who use good rhetoric and can package their ideas more neatly, not the people who really have more interesting ideas.
  4. If you want to justify a scientific claim, you do it with experiments and evidence (or a mathematical proof when applicable). What purpose does a debate serve?

I think Yud's fixation on debates and "winning" reflects what he thinks of intellectualism. For him, it is merely a means to an end. The real goal is to be superior and beat up other people.

[–] lagrangeinterpolator@awful.systems 7 points 2 weeks ago (1 children)

Choice quote from Dave Karpf:

Policy moderation can never fail. It can only be failed.

[–] lagrangeinterpolator@awful.systems 7 points 3 weeks ago* (last edited 3 weeks ago)

Yeah, it's not like reviewers can just write "This paper is utter trash. Score: 2" unless ML is somehow an even worse field than I previously thought.

They referenced someone who had a paper get rejected from conferences six times, which to me is an indication that their idea just isn't that good. I don't mean this as a personal attack; everyone has bad ideas. It's just that at some point, you have to cut your losses with a bad idea and instead use your time to develop better ones.

So I am suspicious that when they say "constructive feedback", they don't mean "how do I make this idea good" but instead "what are the magic words that will get my paper accepted into a conference". ML has become a cutthroat publish-or-perish field, after all. It certainly won't help that LLMs are effectively trained to glaze the user at all times.

[–] lagrangeinterpolator@awful.systems 12 points 3 weeks ago (15 children)

AI researchers are rapidly embracing AI reviews, with the new Stanford Agentic Reviewer. Surely nothing could possibly go wrong!

Here's the "tech overview" from their website.

Our agentic reviewer provides rapid feedback to researchers on their work to help them to rapidly iterate and improve their research.

The inspiration for this project was a conversation that one of us had with a student (not from Stanford) that had their research paper rejected 6 times over 3 years. They got a round of feedback roughly every 6 months from the peer review process, and this commentary formed the basis for their next round of revisions. The 6 month iteration cycle was painfully slow, and the noisy reviews — which were more focused on judging a paper's worth than providing constructive feedback — gave only a weak signal for where to go next.

How is it that whenever people try to argue for the magical benefits of AI on a task, it always comes down to "well actually, humans suck at the task too! Look, humans make mistakes!"? That seems to be the only way they can justify the fact that AI sucks. At least it spews garbage fast!

(Also, this is a little mean, but if someone's paper got rejected 6 times in a row, perhaps it's time to throw in the towel, accept that the project was never that good in the first place, and try better ideas. Not every idea works out, especially in research.)

When modified to output a 1-10 score by training to mimic ICLR 2025 reviews (which are public), we found that the Spearman correlation (higher is better) between one human reviewer and another is 0.41, whereas the correlation between AI and one human reviewer is 0.42. This suggests the agentic reviewer is approaching human-level performance.

Actually, all my concerns are now completely gone. They found that one number is bigger than another number, so I take back all of my counterarguments. I now have full faith that this is going to work out.
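For anyone curious what the quoted numbers even mean: Spearman correlation is just the Pearson correlation of the rank vectors of two score lists. A minimal stdlib-only sketch, with entirely made-up reviewer scores (the point being that a correlation near 0.4 on small, noisy samples wobbles by far more than the 0.01 gap being touted):

```python
from statistics import mean

def ranks(xs):
    """1-based ranks; tied values share the average rank of their positions."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average rank of the tie group
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(xs, ys):
    """Spearman rho = Pearson correlation of the two rank vectors."""
    rx, ry = ranks(xs), ranks(ys)
    mx, my = mean(rx), mean(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

# Hypothetical 1-10 scores from two reviewers on the same ten papers:
reviewer_a = [3, 7, 5, 8, 2, 6, 4, 9, 5, 6]
reviewer_b = [5, 6, 4, 7, 4, 8, 3, 6, 7, 5]
print(round(spearman(reviewer_a, reviewer_b), 2))
```

Swap a couple of scores in either list and watch rho jump by a tenth; 0.42 versus 0.41 is well inside that noise.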

Reviews are AI generated, and may contain errors.

We had built this for researchers seeking feedback on their work. If you are a reviewer for a conference, we discourage using this in any way that violates the policies of that conference.

Of course, we need the mandatory disclaimers that will definitely be enforced. No reviewer will ever be a lazy bum and use this AI for their actual conference reviews.

I'm a nerd and even I want to shove this guy in a locker.
