Such tricks were predictable, as VSCode extensions, which let arbitrary JS run on your system, are an obvious security risk.
Recently I've been using the Zed editor instead. It's smooth, but it also has extensions, only fewer of them and written in Rust (maybe a higher barrier, targeting fewer users, so far...). What's the solution here - is there some intrinsically safer sandboxed system?
The collaborative sharing nature of these platforms is a big advantage. (Not just the VS Code Marketplace; we have this with every extension, lib, and program package manager.)
Current approaches revolve around
The problem with the latter is that it is often not proof of trustworthiness, only that the namespace is owned in its entirety by the same entity.
In my opinion, improvements could be made through:
Maybe there could be some more coordinated effort of review and approval. Like, if the publisher has a trustworthiness indication, and the package has labeled advocates with their own trustworthiness indicated, you could make a better immediate assessment.
On the more technical side, beyond the platform itself: a more restrictive and specific permission system. The way browser extensions ask for permissions on install and/or for specific functionality could be adopted for app extensions and lib packages too. Platform requirements could mandate minimal defaults, with optional capabilities implemented as opt-in rather than "ask for everything by default".
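To make that concrete, here is a rough sketch of what a declarative permission manifest could look like. None of these names are a real VS Code or Zed API; they are purely illustrative of the "minimal defaults, opt-in for risky capabilities" idea.

```typescript
// Hypothetical sketch of a declarative permission manifest an extension host
// could enforce. None of these names are a real VS Code or Zed API.

type Permission =
  | "read-workspace"   // read files inside the open workspace
  | "write-workspace"  // modify files inside the open workspace
  | "spawn-process"    // launch external binaries (e.g. a language server)
  | "network"          // outbound network access
  | "ui-theme";        // contribute colors/themes only

interface ExtensionManifest {
  name: string;
  version: string;
  // Minimal default: an extension that declares nothing gets nothing beyond
  // pure UI contributions.
  permissions: Permission[];
}

// A theme-only extension needs no dangerous capabilities at all.
const themeExtension: ExtensionManifest = {
  name: "example-theme",
  version: "1.0.0",
  permissions: ["ui-theme"],
};

// A language-tooling extension has to ask explicitly for the risky ones,
// which the marketplace and the install dialog can then surface to the user.
const langToolExtension: ExtensionManifest = {
  name: "example-lang-tools",
  version: "0.3.0",
  permissions: ["read-workspace", "write-workspace", "spawn-process"],
};
```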
In principle I'd like to see specific permissions - for example, playing with GUI enhancements should have a lower trust barrier than adjusting and running code - but afaik (correct me if wrong) neither JS nor Rust has a built-in security architecture that could enforce this. Maybe certain types of extensions could just use a custom scripting language without filesystem access, but that's harder to do.
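One way to approximate the "lower trust barrier for GUI-only extensions" idea, without language-level security, is capability-style host APIs: the host hands each extension only the interface its category needs. A sketch with invented names:

```typescript
// Sketch of capability-style host APIs (all names invented): the host hands
// each extension only the interface its category needs, so a theme extension
// has no object through which to reach the filesystem or spawn processes.

interface ThemeHost {
  setColor(token: string, color: string): void;
}

// Handed only to extensions that declared the corresponding permissions.
interface CodeHost extends ThemeHost {
  readFile(path: string): Promise<string>;
  writeFile(path: string, contents: string): Promise<void>;
  spawn(command: string, args: string[]): Promise<number>;
}

// A theme extension is written against ThemeHost only; the host never
// constructs or passes it anything more powerful.
function activateThemeExtension(host: ThemeHost): void {
  host.setColor("editor.background", "#2e3440");
}
```

In plain JS this is only an API-surface convention (extension code can still reach require('fs') directly), which is exactly the missing built-in security architecture mentioned above; a hard boundary would need the extension code to run in a separate process or a WASM-style sandbox with only these capabilities wired through.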
About source code linking: last I heard (maybe they fixed it?), trick VSCode extensions could link to arbitrary (safe-looking) source repos which didn't actually produce the published extension.
I'm less convinced about slowly accumulating publisher trust, as this could be a barrier to honest new contributors, while big actors with a long-term profit or geopolitical motive could game such a system anyway (as they do on social media).
I do trust the Scala tools that adjust my code (the Mill build tool, the Metals language server, the compiler), having seen them evolve over many years, and I like the separation of functions (language server / editor), so we are less dependent on any one big-tech solution. So I suppose a fundamental issue is what to trust less - big corps with a reputation but lock-in power, or an ecosystem of small contributors which might include tricksters. No perfect balance.
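That separation is also what keeps the ecosystem editor-agnostic: any client that speaks JSON-RPC over stdio can drive any language server. A minimal sketch below (the server command is just an example, error handling and the rest of the LSP handshake are omitted):

```typescript
// Minimal sketch of talking to a standalone language server over stdio.
// "metals" is only an example command; any LSP server works the same way.
import { spawn } from "node:child_process";

const server = spawn("metals", [], { stdio: ["pipe", "pipe", "inherit"] });

const initialize = JSON.stringify({
  jsonrpc: "2.0",
  id: 1,
  method: "initialize",
  params: { processId: process.pid, rootUri: null, capabilities: {} },
});

// LSP frames every message with a Content-Length header.
server.stdin.write(
  `Content-Length: ${Buffer.byteLength(initialize, "utf8")}\r\n\r\n${initialize}`
);

// Print whatever the server answers with.
server.stdout.on("data", (chunk) => process.stdout.write(chunk));
```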
The more sandboxed the extension system, the less powerful it is.
Either you have an entity that approves extensions, or your users have to be very careful and place a lot of trust in other people. There's no other way.
I can't imagine a sandbox would help. what can an an extension do that doesn't touch some arbitrary code that gets run? it could add a line to the middle of a giant file right before you run and remove it immediately after. even if you run the whole editor in a sandbox you do eventually deploy that code somewhere, it can change something inconspicuous like a url in a dependency file that might not get caught in a pr
The only solution is to audit everything you install, know all the code you run, etc. Of course that's not reasonable, but idk what else there is. Better automated virus checks, maybe? Identity verification for extension publishers? idk if there's an actual solution.
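One cheap, partial backstop for the dependency-file scenario above is an automated check on your own repo before changes leave the machine. A minimal sketch, assuming a git repo, Node available, and example file names (this is not an existing tool):

```typescript
// Sketch of a pre-commit check that surfaces new URLs introduced into
// dependency manifests, so an "inconspicuous URL change" at least gets
// a human glance before it goes into a PR.
import { execSync } from "node:child_process";

const WATCHED = ["package.json", "package-lock.json", "build.sbt", "Cargo.toml"];

const changed = execSync("git diff --cached --name-only", { encoding: "utf8" })
  .split("\n")
  .filter((f) => WATCHED.includes(f));

for (const file of changed) {
  const diff = execSync(`git diff --cached -- ${file}`, { encoding: "utf8" });
  // Any added line that introduces a URL in a dependency file is flagged.
  const suspicious = diff
    .split("\n")
    .filter((line) => line.startsWith("+") && /https?:\/\//.test(line));
  if (suspicious.length > 0) {
    console.error(`Review new URLs in ${file}:`);
    suspicious.forEach((line) => console.error("  " + line));
    process.exitCode = 1;
  }
}
```

It obviously doesn't stop an extension from doing damage at runtime; it only makes that particular trick slightly more likely to get noticed.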
It seems that, so far, Zed is cautious, providing an API only for specific kinds of extensions - i.e. language servers and GUI themes.
I run stuff from the command line using a trusted build tool (Mill, in Scala), or via a local server (where the JS is sandboxed).
But indeed, a tricky language server or AI tool (which I don't use yet) might inject code somewhere I don't inspect before running it. That's a risk even with Java-based IDEs - Java has security permissions, which JS (VSCode) and Rust (Zed) lack, but are they even applied...? As for audits, a problem with VSCode is that the marketplace got too big: so many extensions, so many lookalikes, nobody can check them all...
Well, the language server plugins all run a binary language server outside the sandbox, so Zed doesn't really do anything safer there either. No IDE has solutions; solutions don't really exist right now. It's not so much a problem of language features as of the features developers expect in extensions. I suppose there is a hypothetical "the extension wants to make this change to this file, approve?" type flow like AI tools have now, but that sounds unpleasant to use. And it still doesn't get around things like language servers being designed to run as standalone processes outside any sandbox.
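For what that approval flow might look like, here is a minimal sketch of a host mediating every write an extension proposes. The API is hypothetical, and as noted above it would be tedious to use and does nothing about standalone language-server processes:

```typescript
// Hypothetical "extension wants to make this change, approve?" flow:
// the host applies an extension's proposed edit only after the user says yes.
import * as readline from "node:readline/promises";
import { stdin, stdout } from "node:process";
import { promises as fs } from "node:fs";

interface ProposedEdit {
  file: string;
  before: string;
  after: string;
}

async function applyWithApproval(edit: ProposedEdit): Promise<boolean> {
  const rl = readline.createInterface({ input: stdin, output: stdout });
  console.log(`Extension wants to edit ${edit.file}:`);
  console.log(`- ${edit.before}`);
  console.log(`+ ${edit.after}`);
  const answer = await rl.question("Apply this change? [y/N] ");
  rl.close();
  if (answer.trim().toLowerCase() !== "y") return false;

  const contents = await fs.readFile(edit.file, "utf8");
  await fs.writeFile(edit.file, contents.replace(edit.before, edit.after), "utf8");
  return true;
}
```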
By audits I meant that you individually go and read all the code of all the extensions you use. Of course that's impossible too, but that was my point.
Might be one thing AI tools will be super useful for, if it's possible to teach them what types of code are potentially malicious, so they can AT LEAST automatically flag it for review.
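Short of full AI review, even a dumb heuristic pass over an extension's source can surface things worth a closer look before installing. A rough sketch; the pattern list is illustrative only, easy to evade, and will produce false positives:

```typescript
// Sketch of a heuristic scan over an extension's source tree, flagging
// patterns that deserve a human (or model-assisted) look.
import { promises as fs } from "node:fs";
import * as path from "node:path";

const SUSPICIOUS: [RegExp, string][] = [
  [/child_process|execSync|spawn\(/, "spawns external processes"],
  [/https?:\/\/[^\s"']+/, "contains hard-coded URLs"],
  [/eval\(|new Function\(/, "evaluates dynamically built code"],
  [/\.ssh|id_rsa|\.aws|credentials/, "touches credential paths"],
];

async function scanDir(dir: string): Promise<void> {
  for (const entry of await fs.readdir(dir, { withFileTypes: true })) {
    const full = path.join(dir, entry.name);
    if (entry.isDirectory()) {
      await scanDir(full);
    } else if (/\.(js|ts)$/.test(entry.name)) {
      const text = await fs.readFile(full, "utf8");
      for (const [pattern, why] of SUSPICIOUS) {
        if (pattern.test(text)) console.log(`${full}: ${why}`);
      }
    }
  }
}

// Usage: pass the extension's directory as the first argument.
scanDir(process.argv[2] ?? ".").catch(console.error);
```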