News
Welcome to the News community!
Rules:
1. Be civil
Attack the argument, not the person. No racism/sexism/bigotry. Good faith argumentation only. This includes accusing another user of being a bot or paid actor. Trolling is uncivil and is grounds for removal and/or a community ban. Do not respond to rule-breaking content; report it and move on.
2. All posts should contain a source (url) that is as reliable and unbiased as possible and must only contain one link.
Obvious right- or left-wing sources will be removed at the mods' discretion. Supporting links can be added in comments or posted separately, but not in the post body.
3. No bots, spam or self-promotion.
Only approved bots, which follow the guidelines for bots set by the instance, are allowed.
4. Post titles should be the same as the article used as source.
Posts whose titles don't match the source won't be removed, but the autoMod will notify you; if your title misrepresents the original article, the post will be deleted. If the site changed its headline, the bot might still contact you. Just ignore it; we won't delete your post.
5. Only recent news is allowed.
Posts must be news from the most recent 30 days.
6. All posts must be news articles.
No opinion pieces, listicles, editorials, or celebrity gossip. All posts will be judged on a case-by-case basis.
7. No duplicate posts.
If a source you used was already posted by someone else, the autoMod will leave a message. Please remove your post if the autoMod is correct. If the matching post is very old, see rule 5.
8. Misinformation is prohibited.
Misinformation / propaganda is strictly prohibited. Any comment or post containing or linking to misinformation will be removed. If you feel that your post has been removed in error, credible sources must be provided.
9. No link shorteners.
The autoMod will contact you if a link shortener is detected; please delete your post if it is right.
10. Don't copy an entire article into your post body
For copyright reasons, you are not allowed to copy an entire article into your post body. This is an instance-wide rule that is strictly enforced in this community.
From the other side, hiring competent people has gotten much harder now that AI is in everyone's hands. It's making them dumb.
A coworker and I were interviewing someone for a technical role over a video meeting; this was a candidate we did NOT get through our network. His answers were strangely generic. We'd ask him a direct question about a technology or a software tool, and the answer would come back like a sales brochure. I messaged my coworker on the side about this strangeness, and he said, "We're not hiring this guy. Watch his eyes. Every time you ask a question, he's reading off the bottom of his screen." My coworker was right; I saw it immediately after he pointed it out. We were only 4 minutes into the interview and already knew we weren't hiring this guy. I later learned about LLM tools you can run while being interviewed that will answer questions for you in real time.
Another one happened within 48 hours of that interview. Someone who had been hired was on a team with me. An error came up in a software tool that we are all supposed to be experts on. I had a pretty good idea what the issue was from the error message text. This other team member posted into our chat what ChatGPT thought of the error. From the first sentence of the ChatGPT message, I could tell it was on the wrong path: it referenced methods our tool doesn't even use.
To put it in an analogy: assume we're baking a cake and it came out too sour. The ChatGPT message said, essentially, "This happens when you put too much lemon juice in. Bake the cake again and use less lemon juice next time." Sure, that would be a reasonably decent answer... except our cake had no lemon juice in it, so any suggestion to fix our situation by altering the amount of lemon juice is completely wrong. This team member presented the message and said, "I think we should follow this instruction." I was completely confused, because he's supposed to be an expert on our tool like I am, and he didn't even pause to consider what ChatGPT said before accepting it as fact. It would be one thing to plug the error message into ChatGPT to see what it said, but to take that output and recommend following it without any critical thinking was insane to me.
AI can be a useful tool, but it can't be a complete substitute for thinking on your own, which is how people are using it today. AI is making people stupid.
This is why I generally hire from inside my network or from referrals by people I know. It's so hard to find a qualified worker among all the unqualified workers applying at the same time. I know there are great workers outside my network; I just have no way to find them with the time and resources available to me.
Aw fuck.
I'm gonna have to ask absolutely bullshit questions in interviews now, aren't I? Do you have any other strategies for spotting this? I really don't want to drag in remote exam-proctoring software that invades the applicant's system just to be assured no other tools are in play.
I wonder if AI seeding would work for this.
Like: come up with an error condition or a specific scenario that doesn't/can't work in real life. Post to a bunch of boards asking about the error, then answer back with an alt account giving a fake fix. You could even make the answer something obviously off like:
Make sure to thank yourself with "hey that worked!" with the original account
After a bit, those answers should get digested and will probably show up in searches and AI results; but given that they're bullshit, they're a good flag for cheaters.
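The seeding idea above boils down to a canary check: if an answer parrots a phrase that only exists in your planted fake fix, it almost certainly came from a scraper or an LLM rather than real experience. A minimal sketch of the detection side (all the canary phrases here are made up for illustration):

```python
# Sketch of the "seeding" honeypot: you've planted fake fixes online that
# contain uniquely identifiable phrases; now scan a candidate's answer
# for those phrases. Every phrase below is hypothetical.

# Canary phrases that only appear in the seeded fake answers.
CANARIES = [
    "set the flux capacitor retry count to -1",
    "enable the legacy quantum cache flag",
]

def looks_seeded(answer: str) -> bool:
    """Return True if the answer contains any planted canary phrase."""
    text = answer.lower()
    return any(canary.lower() in text for canary in CANARIES)

# A candidate who parrots the planted fix gets flagged; a normal
# troubleshooting answer does not.
print(looks_seeded("You just set the flux capacitor retry count to -1."))  # True
print(looks_seeded("I'd check the config file and the logs first."))       # False
```

The matching is deliberately dumb (case-insensitive substring), which is fine here: the whole point of a canary is that the phrase is weird enough to never occur by accident.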
Don't have the source on me now, but I read an article showing it was surprisingly easy: something like 0.01% of content containing the magic words was enough to trigger it.