News
Welcome to the News community!
Rules:
1. Be civil
Attack the argument, not the person. No racism/sexism/bigotry. Good faith argumentation only. This includes accusing another user of being a bot or paid actor. Trolling is uncivil and is grounds for removal and/or a community ban. Do not respond to rule-breaking content; report it and move on.
2. All posts should contain a source (URL) that is as reliable and unbiased as possible, and must contain only one link.
Obviously biased sources will be removed at the mods’ discretion. Supporting links can be added in comments or posted separately, but not to the post body. Sources may be checked for reliability using Wikipedia, MBFC, AdFontes, GroundNews, etc.
3. No bots, spam or self-promotion.
Only approved bots, which follow the guidelines for bots set by the instance, are allowed.
4. Post titles should be the same as the article used as source. Clickbait titles may be removed.
Posts whose titles don’t match the source may be removed. If the site changed their headline, we may ask you to update the post title. Clickbait titles use hyperbolic language and do not accurately describe the article content. When necessary, post titles may be edited, clearly marked with [brackets], but may never be used to editorialize or comment on the content.
5. Only recent news is allowed.
Posts must be news from the most recent 30 days.
6. All posts must be news articles.
No opinion pieces, listicles, editorials, videos, blogs, press releases, or celebrity gossip are allowed. All posts will be judged on a case-by-case basis. Mods may use discretion to pre-approve videos or press releases from highly credible sources that provide unique, newsworthy content not available or possible in another format.
7. No duplicate posts.
If an article has already been posted, it will be removed. Different articles reporting on the same subject are permitted. If the post that matches your post is very old, we refer you to rule 5.
8. Misinformation is prohibited.
Misinformation / propaganda is strictly prohibited. Any comment or post containing or linking to misinformation will be removed. If you believe your post was removed in error, provide credible sources.
9. No link shorteners or news aggregators.
All posts must link to original article sources. You may include archival links in the post description. News aggregators such as Yahoo, Google, Hacker News, etc. should be avoided in favor of the original source link. Newswire services such as AP, Reuters, or AFP are frequently republished and may be shared from other credible sources.
10. Don't copy the entire article into your post body
For copyright reasons, you are not allowed to copy an entire article into your post body. This is an instance-wide rule that is strictly enforced in this community.
It's a tool like any other. It has its use.
And that use is...?
Generating BS that upper-level management loves to skim through?
An institution uses a local model to help organize massive amounts of data, not crawl the whole web stealing everything in sight while being anthropomorphized by a corporation trying to sell you a friend or a waifu...
Which you won't be able to run because RAM is $500 now.
There are valid uses for LLMs, but I think everyone who calls it "AI" is definitely selling a scam.
The same as lorem ipsum. It's great for filling up space with text.
Okay, it can do that. That is valid.
Filling up Jensen Huang's pockets. Also Sam Altman and others, but mostly Huang.
It can't do that.
I didn't say OpenAI's or Nvidia's, nor their investors' (though, to be fair, Nvidia will probably still end up profiting once the bubble pops, the bastards; after all, in a gold rush it's the ones selling the mining equipment who end up making a profit).
I specifically mentioned the scammers on top, who will grab the cash and run as soon as it starts popping.
The economy will end up worse than in the 1929 crash, sure, but not for those bastards.
So, yeah, it can, and it is, because it's what the whole scam was designed for.
I have a buddy who uses AI to read through contracts to identify high-risk commitments that might cost the company money. There are thousands more uses.
Holy shit, that's terrifying, your buddy is criminally negligent. It can't do that reliably. It doesn't do 'reliably'.
I thought it would be clear because it's a contract, but we are talking about financial risks, not health risks. He is using a corporate trained AI client. When the AI client finds an issue he (the human) still reviews it. According to my buddy his productivity has improved by over 25%.
If the AI has missed risks and he didn't bother checking (since this is where the added productivity comes from) then the company gets to enjoy those risks.
Mistakes cost the company money, no doubt about that.
That’s horrifying. I really hope he’s triple-checking everything.
He reviews anything the AI flags. As I already mentioned, the AI client is looking for financial risks, i.e., a contract committing the company to something it doesn't have the capability of delivering. I used to do something very similar. One obvious example would be a customer asking for unlimited liability. The company can't commit to that because it could bankrupt the company.
A script of Ctrl-F's would be just as useful and more reliable.
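To make the "script of Ctrl-F's" idea concrete, here is a minimal sketch of what such a keyword scan might look like. The phrase list is purely illustrative (the "unlimited liability" example comes from the comment above); a real list would come from the company's legal team.

```python
# Sketch of a "Ctrl-F script": flag contract lines containing known risk
# phrases. The phrase list below is illustrative, not a real legal checklist.
import re

RISK_PHRASES = [
    "unlimited liability",   # example mentioned in the thread
    "indemnify",
    "liquidated damages",
    "auto-renew",
]

def flag_risks(contract_text: str) -> list[tuple[int, str]]:
    """Return (line number, line) pairs that contain any risk phrase."""
    pattern = re.compile("|".join(map(re.escape, RISK_PHRASES)), re.IGNORECASE)
    return [
        (num, line.strip())
        for num, line in enumerate(contract_text.splitlines(), start=1)
        if pattern.search(line)
    ]

contract = """The parties agree to cooperate in good faith.
Supplier accepts unlimited liability for any loss.
This agreement shall auto-renew annually."""

for num, line in flag_risks(contract):
    print(num, line)
```

Unlike an LLM, this flags every occurrence of a listed phrase deterministically, but it only catches phrasings someone thought to list in advance.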
For sure, it's amazing for some things. But it also appears to do more than you think it does until you become familiar with it. I think everyone new to using AI should quiz it on topics they are knowledgeable in, to realise how much shit it makes up.
Also yeah I'm specifically talking about LLMs because I think that's 95%+ of AI usage right now in volume.
I’m still skeptical about this.
Most of those things are usually due to the alternative being intentionally bad.
Like Google becoming bad, or bad company documentation, or corporate-speak emails that could just be straight to the point.
Maybe? But to give an example of where I think it's been pretty cool: summarising my Dungeons & Dragons session notes, and being available to answer questions or spin up ideas on the fly. I can take horrible and inconsistent notes with holes in them, but an LLM straightens them all out into any format I need. If I need a small piece of world building and ran out of time, I can get it to spit a few ideas at me. Often generic ideas and tropes are actually what I am after. If I forgot something that happened 6 months ago, I can just...ask it. It can pull up stuff I noted offhand and totally forgot about, no problem. For this sort of use, where it's like an admin assistant and inaccuracy is totally unimportant, it's a good tool.
Maybe that's a really niche example but it's one of the few cases where I can see long term use with zero downsides.
Ultimately it's powerful at consolidating large volumes of information and allowing the user to probe at that information. As long as the use case can tolerate inaccuracies and hallucinations then it's fine.