How ChatGPT's new Lockdown Mode protects you from cyberattacks - and why it's not for everyone
Source: ZDNet

ZDNET’s key takeaways
- Hackers use prompt injection to steal the private data you use in AI.
- ChatGPT’s new Lockdown Mode aims to prevent these attacks.
- Elevated Risk labels warn you of AI tools and content that could be risky.
Prompt injection attacks pose a serious threat to anyone who uses AI tools, especially professionals who rely on them at work. By exploiting a vulnerability that affects most AIs, a hacker can insert malicious code into a text prompt, which may then alter the results or even steal confidential data.
Also: 5 custom ChatGPT instructions I use to get better AI results – faster
Now, OpenAI has introduced a feature called Lockdown Mode to better thwart these types of attacks.
Lockdown Mode
Lockdown Mode enhances protection against prompt injections and other advanced threats. With this setting enabled, ChatGPT is limited in the ways it can interact with external systems and data, thereby restricting an attacker’s ability to exfiltrate sensitive files.
An optional security setting, Lockdown Mode isn’t necessary for most ChatGPT users, OpenAI said in a news release on Friday. Rather, the feature is geared toward security‑minded users, such as executives or security professionals at prominent organizations. Lockdown Mode is available for ChatGPT Enterprise, ChatGPT Edu, ChatGPT for Healthcare, and ChatGPT for Teachers.
Also: These 4 critical AI vulnerabilities are being exploited faster than defenders can respond
Lockdown Mode works by determining which tools and capabilities in ChatGPT are most at risk and restricting access to any sensitive data in a conversation or from a connected app that could be exploited through prompt injection.
Disclosure: Ziff Davis, ZDNET’s parent company, filed an April 2025 lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.
For example, web browsing in Lockdown Mode limits access to cached content so that no live requests leave OpenAI’s network. Other features are completely disabled unless OpenAI can confirm that the data is safe, preventing attackers from stealing data through web browsing.
ChatGPT business plans already offer enterprise‑level security protection, which administrators can control via the Workspace settings. Lockdown Mode adds an extra layer of defense, and Workspace admins can also choose which apps and actions are governed by Lockdown Mode.
Elevated Risk labels
OpenAI will now display an Elevated Risk label when you access certain features that could be risky. Accessible in ChatGPT, the ChatGPT Atlas browser, and the Codex coding assistant, these labels are designed to give you pause before you work with a tool or content that could be exploited.
Also: The secret to AI job security? Stop stressing and pivot at work now – here’s how
For example, developers who use Codex can grant the tool network access so it can search the web for assistance. When this access is enabled, the Elevated Risk label warns you of potential risks, possible changes, and when such access is warranted.
The Elevated Risk labels are a short‑term solution to inform users of potential dangers. OpenAI says it plans to add more security features across the board to address additional risks and threats, eventually making such labels unnecessary.