Compromising Developers with Malicious Extensions - VS Code, Cursor AI, and the Backdoor You Didn't See Coming
Introduction #
VS Code and AI-powered IDEs could be behind some of the largest security breaches in the industry in the near future. These editors are installed on almost every developer machine globally. Developers have access to sensitive data and the credentials to push code that ends up in production. A supply chain attack against an editor could grant access to developers' machines, which in turn could provide entry into organizations' systems.
The results were eye-opening: it turns out that publishing a backdoor onto developers' machines via a VS Code extension is alarmingly easy.
In this post, I'll walk through my process of bypassing Microsoft's current sandbox analysis, SAST scanning, DAST scanning, and other relevant security controls to publish a VS Code extension that went undetected by Microsoft and security vendors.
The research showcases several weaknesses in the security controls currently used by Microsoft. All the issues were reported to Microsoft, and Microsoft accepted the existing risk.
The research also shows that no security checks at all are performed within OpenVSX, the marketplace used by Cursor AI, Windsurf, AWS Kiro, and other IDEs. I responsibly disclosed those findings to the Cursor AI security team and the Eclipse Foundation.
Demo #
Background: VS Code #
Visual Studio Code is maintained by Microsoft, and it has a centralized extension marketplace run by Microsoft. When you publish an extension to the VS Code Marketplace, it supposedly undergoes quality and security checks.
https://code.visualstudio.com/docs/configure/extensions/extension-runtime-security
Crafting a Malicious “Piithon-linter” Extension #
To make this experiment realistic, I needed to create a useful extension that could carry out malicious actions. I came up with Piithon-linter, pitched as a Python code linter/formatter with magical capabilities to automatically format code. The name is a deliberate misspelling (with ii instead of y) intended to be unique but not too suspicious at first glance.
What the extension did: The first version of Piithon-linter was straightforward. It hooked into VS Code's startup routine so that every time VS Code launches, the extension quietly exfiltrates the developer's environment variables and system metadata to a remote command-and-control (C2) server. For safety, I set up client-side redaction rules so all secret values are redacted before they are sent; a real attacker obviously wouldn't do that.
This is critical because environment variables often contain developers' secrets. VS Code inherits all the environment variables of the shell session that launches it, so Piithon-linter could steal any secret tokens or keys present in the environment as soon as VS Code starts. If the machine has something like GITHUB_TOKEN or AWS_SECRET_KEY in its environment, it is sent to the C2 server immediately.
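As a sketch, the client-side redaction could look like the following. The name-matching pattern is my assumption; the post does not show the extension's actual redaction rules.

```javascript
// Redact values of environment variables whose names suggest they hold
// secrets, before the snapshot leaves the machine. The pattern below is
// an assumed example, not the extension's actual rule set.
const SECRET_NAME_PATTERN = /(TOKEN|SECRET|KEY|PASSWORD|CREDENTIAL)/i;

function redactEnv(env) {
  const redacted = {};
  for (const [name, value] of Object.entries(env)) {
    redacted[name] = SECRET_NAME_PATTERN.test(name) ? '[REDACTED]' : value;
  }
  return redacted;
}

// redactEnv(process.env) would then be the snapshot POSTed to the C2 server.
```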
This is how it looks for an attacker once VS Code is launched by the affected developer:
Command execution can be triggered through the activationEvents attribute in the extension manifest. That way, even if the user never explicitly runs a command from my extension, my code executes as soon as VS Code opens.
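A minimal manifest entry that achieves this could look roughly like the following (a sketch; the post does not show the actual manifest, and `onStartupFinished` is one of several activation events that fire without user interaction):

```json
{
  "name": "piithon-linter",
  "main": "./out/extension.js",
  "activationEvents": [
    "onStartupFinished"
  ]
}
```

With this in place, VS Code calls the extension's `activate()` function shortly after every launch.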
The extension was tested locally, and it was time to publish it to the VS Code Marketplace.
Publishing Malicious Extension to the VS Code Marketplace #
I packaged the VS Code extension as a VSIX file and submitted it to the Visual Studio Code Marketplace, the official one run by Microsoft.
Given that the extension was clearly malicious, I expected the publish step to be rejected. There was no encoding, obfuscation, or other special trickery in the extension; just obviously malicious code.
Microsoft's marketplace documentation states that malware scans (with multiple antivirus engines) are executed on each upload, and that dynamic analysis is performed by executing the extension in a sandbox VM. I figured the network calls or suspicious strings would be flagged and break the publishing step.
To my surprise, Piithon-linter was accepted into the Visual Studio Code Marketplace without any issues.
The VS Code Marketplace listed it publicly. At this point, any developer could find and install Piithon-linter from the VS Code marketplace, and upon doing so, they’d be installing malware into their VS Code.
Publishing Malicious Extension to the OpenVSX Marketplace #
Open VSX is the marketplace that powers Cursor AI, Windsurf, AWS Kiro, and most AI-Powered IDEs. #
I also published the Piithon-linter extension to OpenVSX, where it became publicly searchable in Cursor AI, Windsurf, AWS Kiro, and most other AI-powered IDEs.
OpenVSX relies solely on user reporting and compliance agreements to prevent malicious extensions. The problem is that this does not stop attackers from distributing malware; adversaries are unlikely to be deterred by a terms-of-service checkbox.
Upgrading Piithon-Linter to a Full Backdoor #
Having seen the first version sail through the checkpoints, I wanted to see whether a fully functional backdoor could be detected. The first version was a basic info-stealer POC. The second version contained malware patterns written in clear text, making it obvious for detection tools to flag the extension as malware, while still giving it the ability to function as a full backdoor.
Here’s what I built:
Endpoint security checks #
Before executing the payload, the extension now scans the system for signs of endpoint security or antivirus software. For example, it looks for processes or services related to popular security products. If it detects an EDR/AV, it stops the execution.
To make it simpler for detection, the function responsible for running this check is written with no obfuscation. The naming convention was also set to be simple:
if (await isAVRunning()) {
  log('AV running, skipping');
  return;
}
I used code from https://github.com/PwnDexter/Invoke-EDRChecker to build this check.
Bypassing Microsoft’s Sandbox Scanning with Geofencing Rules #
One theory I had was that Microsoft runs its sandbox scanning from the United States. To bypass the sandbox detection, whenever the extension executed from the United States, it would terminate after sending the environment variables to the C2 server, without deploying and executing an implant.
I wasn't sure whether this idea would work, and I did not know whether the sandbox environment had egress filtering rules that could block communication with external network resources.
To my surprise, the extension bypassed Microsoft's sandbox scanning by behaving differently when executed inside it. I received a pingback from a Microsoft sandbox IP, originating from a Microsoft ASN located in the United States.
An attacker could use this technique to bypass Microsoft's sandboxing during vetting on the Microsoft Marketplace.
Automated backdoor deployment #
If all checks passed, piithon-linter would drop a post-exploitation agent onto the machine. I used the open-source Merlin agent, a cross-platform command-and-control tool that provides stealthy remote-shell access to the infected machine. I packaged a suitable Merlin binary for each OS (Windows, macOS, and Linux) within the extension. The extension's code determines the OS at runtime (process.platform) and executes the appropriate payload.
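The runtime OS selection is a few lines of code. The binary file names below are illustrative; the post only says one binary was bundled per OS.

```javascript
// Pick the bundled Merlin agent binary for the current OS, keyed on
// Node's process.platform identifier.
function agentBinaryFor(platform) {
  switch (platform) {
    case 'win32':  return 'merlin-agent.exe';
    case 'darwin': return 'merlin-agent-darwin';
    case 'linux':  return 'merlin-agent-linux';
    default: throw new Error(`unsupported platform: ${platform}`);
  }
}

// The extension would then spawn the binary from its own install
// directory, e.g. path.join(extensionDir, 'bin', agentBinaryFor(process.platform)).
```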
For safety purposes, the agent was set not to connect to an external server, as this is purely done for research. I did not attempt to publish a version that would connect back to an external server or gain access to developers’ machines.
I also pushed the latest version to VirusTotal, and it was not flagged by any security vendors on VirusTotal:
https://www.virustotal.com/gui/file/d5c33981b81f1a666b55b75e398a41fcea1fbda4c1962d2d4d0be4b05f62c166
I updated the extension in the VS Code Marketplace with this new version. Again, it went through the publication process without any flags. The evasion measures worked: the static code was now even more obviously malicious, yet no static scanner alerted. The dynamic analysis likely saw nothing because I kept the Azure and sandbox detection in place.
I tested the final malicious extension on a few machines with different security products running, and none of the endpoint security solutions I tried (which were well-known EDR/AV products) raised an alert when the extension executed and launched the backdoor.
Gaining Persistence via VS Code #
Because VS Code itself auto-launches my extension on startup, the backdoor is persistent: even if the user reboots, the malicious Piithon-linter extension runs again as soon as they open VS Code, ensuring access remains. VS Code also auto-updates extensions by default, which means an attacker can push updates with new functionality at any time, and they will be pulled down to every installed instance.
Responsible Disclosure #
I have disclosed my findings to Microsoft and Open VSX (the marketplace that is being used by Cursor AI, Windsurf, AWS Kiro, and other VS Code forks). I have also shared the Open VSX findings with Cursor AI.
Microsoft #
Microsoft received my responsible disclosure and assessed it as follows:
After careful investigation, this case has been assessed as low severity and does not meet MSRC’s bar for immediate servicing due to:
There will be ways to bypass static analysis checks that are put in place to detect the problematic code. Therefore, it is the user’s responsibility to ensure that they are not installing malicious extensions.
I do not expect Microsoft to resolve any of the findings in this research. All attack vectors mentioned here remain available for adversaries to use to bypass the Microsoft VS Code Marketplace security controls.
Open VSX #
The Eclipse Foundation, the maintainer of Open VSX, is a non-profit organization, and the Open VSX marketplace is free and open source. The Foundation shared that it is going to implement security controls to detect and prevent malicious extensions. Since it provides Open VSX for free to Cursor AI and the other AI-powered IDEs, it needs support from the companies that rely on it.
Cursor AI #
The Cursor Security team shared that they’ve rolled out new security features for users (built on top of Open VSX).
- Added additional publisher verification
- Changed the order in which Cursor presents extensions to further highlight legitimate ones
- Integrated malware scanning on Cursor's side
I tried testing the malware scanning on Cursor AI and it still marked the piithon-linter extension as safe.
Exploitation #
I created “piiithon-linter”, a fully weaponized version of the extension that triggers full exploitation for anyone who installs it.
Whenever someone installs the malicious extension, or launches VS Code or Cursor AI with it installed, the attacker gains access to the developer's machine through the Merlin framework. It works on Windows, macOS, and Linux.
Takeaway #
This project revealed something I didn’t fully expect going in: publishing a malicious VS Code or OpenVSX extension capable of compromising developer environments is shockingly easy. The entire process required no advanced evasion, no obfuscation, and no tricks that would be out of reach for a motivated attacker. The current security controls simply aren’t enough to catch malicious extensions.
Microsoft’s sandbox analysis and antivirus checks can be bypassed with simple evasion techniques. OpenVSX, meanwhile, performs virtually no security screening at all.
The implications are serious. Developers aren’t just ordinary users; they sit at the heart of the software supply chain. They hold the keys to source code, infrastructure, and production systems. Compromising even one developer’s IDE could quietly open doors into an entire organization.
The core business of Cursor AI, Windsurf, and most AI-powered IDEs revolves around supporting developers. Relying on an open-source marketplace without adding security checks of their own is a risk that needs to change.
The next big supply chain compromise could be from the editor we use every day #
If there’s one takeaway from this research, it’s that the next big supply chain compromise could be from the editor we use every day.
Slides #
You can download the slides from my Black Hat talk here: