My org created a new panel of representatives from different groups (security, privacy, legal, etc.) whose job is to review every new AI project and tell each team which AI tools they're allowed to use and what data they're allowed to feed them. All in the hopes of limiting the blast radius if something happens.
My company did the same, yet they approve tools with a history of security breaches (such as Microsoft Copilot).
It's a complete joke.
when something happens
I wholeheartedly agree, though not in any way, shape, or form because I would trust 1Password with anything security-related.
It's THE business case for AI. Create the demand, then solve for it. Brought to you by the disaster artists at McKinsey.