People really trust it? Like, it’s been so wrong on things for me that I automatically skip past it to the search results. Why bother anymore?
People on Twitter regularly go “@grok is this true” to everything and trust the AI to be correct.
The same AI that said the fresh photo of National Guard members sleeping on the floor was from 2021…
people still on twitter are complete, brainwashed idiots... so this behaviour tracks
I use extensions to block AI results... having to skip past them is annoying
I still find it weird that people don’t use Kagi for search. At least Startpage or DDG. Google hasn’t been useful for five-ish years and admitted in court that it degraded results to prop up ads.
Yup, I've been seeing more and more people straight use AI results to support their arguments.
I think it became inevitable that traditional 'sites' were going to be in trouble once AI bots gained ground. The user interface is much more organic / user friendly, given that it can be conversational.
It's why big corps were so quick to start building walls/moats around the technology. If end users had control over which sites their AI bots pulled information from, that'd be a win for the consumer/end-user, and potentially for legitimate news sites, depending on how the payment structure is sorted out. E.g., get a personalized bot that references news articles from a curated list of trusted/decent journalist sites across a broad political spectrum, and you'd likely have a really great "AI assistant" to keep you up to date on various current events. This sort of thing would also represent an existential threat to things like Google's core marketing business, as end users could replace many of their 'searches' with a curated, personalized AI assistant trained on just reputable sources.
Big tech wants to control that, so that they can advertise via those bots / prioritize their own agenda / paid content. So they want to control the AI sources, and restrict end users' ability to filter garbage. If users end up primarily interacting with an AI avatar, and you can control the products / information that avatar presents, you have a huge amount of control over the individuals and their spending habits. Not much of a surprise.
It'd be cool to see a user-friendly local LLM that let users point it at reference sites of their choosing. Pair that with a news-site data standard that streamlines pulling pertinent data, and let news agencies charge a small fee for access to those APIs to fund it a bit. Shifting towards LLM-based data delivery, they could even potentially save a bit in terms of print/online publications -- you don't need a fancy, expensive user-facing web app if everyone's just talking to their LLM-based AI assistant anyway.
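Something like this wouldn't even be hard to prototype today. Here's a rough sketch, assuming a local Ollama instance and made-up feed URLs standing in for the curated source list; a real version would swap the RSS pulls for whatever paid news-API standard the publishers agreed on:

```python
# Minimal "bring your own sources" news assistant sketch:
# pull headlines from a user-curated feed list, then ask a locally
# hosted model for a cross-source briefing. The feed URLs are
# hypothetical; the endpoint assumes a local Ollama install.

import feedparser
import requests

# User-chosen sources (illustrative URLs only).
CURATED_FEEDS = [
    "https://example-wire-service.com/rss",
    "https://example-left-leaning-paper.com/rss",
    "https://example-right-leaning-paper.com/rss",
]

OLLAMA_URL = "http://localhost:11434/api/generate"  # local Ollama API
MODEL = "llama3"                                     # any locally pulled model


def fetch_headlines(feeds, per_feed=5):
    """Collect recent headlines and links from each curated feed."""
    items = []
    for url in feeds:
        parsed = feedparser.parse(url)
        for entry in parsed.entries[:per_feed]:
            items.append(f"- {entry.get('title', '')} ({entry.get('link', '')})")
    return items


def summarize(headlines):
    """Ask the local model for a short briefing across sources."""
    prompt = (
        "Summarize today's main stories from these curated sources, "
        "noting where outlets disagree:\n" + "\n".join(headlines)
    )
    resp = requests.post(
        OLLAMA_URL,
        json={"model": MODEL, "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]


if __name__ == "__main__":
    print(summarize(fetch_headlines(CURATED_FEEDS)))
```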
Which is such a shortsighted move because as soon as all the news portals close shop Google's scraper will have nothing relevant to summarize and is gonna be shit.
Nothing is stopping the AI summaries from using social media as the primary source
Can't wait for Google to AI-summarize AI-generated social media posts for artificial Google users created to hike ad prices. It's gonna be wild
That’s when Google will buy whatever is left of Condé Nast or Buzzfeed for bottom dollar and start using more AI to shit out “news”.
Watch his recent interview with Nilay Patel from The Verge. Watching him dance around questions about this was painful.
This man only cares about increasing Alphabet stock prices to ensure as large a golden parachute as possible on the way out.
> This man only cares about increasing Alphabet stock prices to ensure as large a golden parachute as possible on the way out.
This is literally his legal obligation. Welcome to capitalism.
there are many ways to fulfil this “obligation”… i’d argue that he’s increasing alphabet stock price in the short term but long term what the fuck is going to happen when the sources all go out of business?
… oh right they’re going to become a news monopoly… cool cool cool
regardless, i think there’s an argument to be made with all this “we are evil because it’s our legal duty to shareholders” that evil is a bad long-term choice. i think boeing is the prime example: if they weren’t “too big to fail” they’d be fucked because of their short term thinking
No, making money is not a legal obligation. The CEO at my last job told the board, two years in a row, that he intended to lose money so we could invest in our people and tech. They cheered him.
The sad part is this is actually his job description, and Alphabet could get sued by shareholders if he didn't do exactly that. The stock market needs to be criminalized, not glorified as the one truth like it's treated right now.
You described all executives everywhere.
One of the problems the major news outlets have is that they repeat each other. It's not merely an issue of AI compiling news stories; on top of that, most of these papers do hardly any original research. For example, if you live in a town that's not too large, there might be only one local paper, and they might send out reporters to local events. Obviously you would then go to that newspaper if you wanted to learn about local events, because they are adding explicit value.
But if you're trying to read about national politics, a lot of the information is going to be the same in a lot of the newspapers. Which means nobody cares about the newspaper itself. And this is a creation of the newspaper's own decision making over the past few decades.
I'd say their decision making was mostly forced by the drop in ad revenue and subscriptions that the internet caused.
Though I do wish someone would make a Spotify for news, so I could pay once, get access to all of them, and at least give them 10 cents per article or something, because I will never pay the subscription cost any of them are asking.
As much as I'd like to support them, the price to utility ratio is way the fuck off at the prices they ask now.
It's also the speed of news now. When you had an edition a day in print, there was time to do research.
Now, with instant publishing on the web, it's more important to get the basic story out fast, in the hope that it's the one sites like Reddit pick up.
With what passes for news these days at most outlets, I can't say I feel too bad for them that someone else beat them at their own game of feeding the masses surface-level doom-scrolling slop for engagement and ad impressions.
Hell, a large chunk of the junk being put out by these same outlets is trash written by the AI solutions they are paying for. Probably solutions from the likes of Google, which is giving it to them at both ends.
Once the snake finishes eating its own tail maybe the good reporters will still have somewhere that pays them for the good work they do.
Where does one pay for quality news?
PBS is a good option too.
Wouldn't this kill their ad revenue? Which is like...most of their revenue?
Ad-supported articles are a dead industry, and Google realizes this better than anyone. People don't go to the source anymore to answer their curiosities; why would you read a whole article to answer a simple question when AI gives you the answer directly?
I meant why would people advertise on Google if it won't convert to clicks anymore?
They won't, and I'm saying Google knows that their advertising cash cow is running out of milk.
> why would you read a whole article to answer a simple question when AI gives you the answer directly?
Context? Nuance? Verifying the AI slop?
The last thing I googled is how to measure dress shirt size. Do you need context and nuance for everything you Google?
Do you prefer to click on the SEO-optimized first-page results that are full of ads and read through a nonsense article about elegance in formal wear just to get to the instructions on where to place the measuring tape on your shoulder? I MUCH prefer the AI-summarized response.
Most of the Internet is NOT intellectual writing; it's blog spam to answer your daily curiosities and practical needs. A sufficiently trained model is a really good (and environmentally friendly) alternative.
> The last thing I googled is how to measure dress shirt size. Do you need context and nuance for everything you Google?
if AI is answering, yes.
> Do you prefer to click on the SEO-optimized first-page results that are full of ads and read through a nonsense article...
No, but that's not what i claimed so you can have your strawman back
> Most of the Internet is NOT intellectual writing; it's blog spam to answer your daily curiosities and practical needs. A sufficiently trained model is a really good (and environmentally friendly) alternative.
Let me know when we get one. In the meantime, enjoy your thick, glue-riddled pizza sauce.
> Let me know when we get one. In the meantime, enjoy your thick, glue-riddled pizza sauce.
What? That's just stupid. Like, I'm not remotely claiming they are intelligent, but to dismiss their utility completely is just idiotic. How long do you think the plug-your-ears strategy will work for?
Pick any model that has come out this year and ask it my example query, or any similar daily curiosity you would Google, and show me how it gives you "thick, glue-riddled pizza sauce". Show me a single GPT-3.5-comparable model that can't answer that query with sufficient accuracy.
> if AI is answering, yes.
You're being obtuse. You don't need nuance in trying to figure out what size collar you should buy.
> but to dismiss their utility completely is just idiotic.
Not what I said at all. I simply stated that AI answers cannot be trusted without verifying them, which makes them a lot less useful.
You're moving the goalposts. You said you need nuance in how to measure a shirt size, you're arguing just to argue.
If a model ever starts answering these curiosities inaccurately, it would be an insufficient model for that task and wouldn't be used for it. You would immediately notice this is a bad model when it tells you to measure your neck to get a sleeve length.
Am I making sense? If the model starts giving people bad answers, people will notice when reality hits them in the face.
So I'm making the assertion that many models today are already sufficient for accurately answering daily curiosities about modern life.
> You’re moving the goalposts. You said you need nuance in how to measure a shirt size, you’re arguing just to argue.
I said I needed context to verify AI was not giving me slop. If you want to trust AI blindly, go ahead, I'm not sure why you need me to validate your point
> If a model ever starts answering these curiosities inaccurately, it would be an insufficient model for that task and wouldn’t be used for it.
And how would you notice, unless you either already know the correct answer (at least a ballpark) or verify what AI is telling you?
> You would immediately notice this is a bad model when it tells you to measure your neck to get a sleeve length
What if it gives you an answer that does not sound so obviously wrong? Like measuring the neck width instead of the circumference, or measuring from shoulder to wrist?
> So I’m making the assertion that many models today are already sufficient for accurately answering daily curiosities about modern life.
And once again I tell you: you can trust it blindly, but I would not. And I will add that I do not need another catalyst for the destruction of our planet just so I can get some trivia questions answered. Given the environmental cost of AI, I would expect a significant return, not just a trivia machine that may be wrong 25% of the time.
“Every day, we send billions of clicks to websites, and connecting people to the web continues to be a priority,” a Google spokesperson said in a statement. “New experiences like AI Overviews and AI Mode enhance Search and expand the types of questions people can ask, which creates new opportunities for content to be discovered.”
They followed up with: “You can totally trust me, and everything I just said. I am absolutely definitely not lying.”
No! Pushing AI generated garbage is our job!
With a link to NYPost lol
Google wants it all this time. No traffic for anyone but them after they steal all your content.