
AI Jailbreak Exposes Major Flaws: LLMs Used to Create Chrome Malware



March 19, 2025 | Tel Aviv

In a concerning cybersecurity development, researchers at Israeli cybersecurity firm Cato Networks have successfully jailbroken leading large language models (LLMs), including OpenAI’s ChatGPT, Microsoft Copilot, and DeepSeek-R1, coercing them into generating fully functional malware. The resulting program, an infostealer targeting the Google Chrome browser, was created without any prior programming expertise, raising alarms about the potential misuse of generative AI tools.

“Immersive World” Technique Circumvents AI Safeguards

The technique, dubbed “Immersive World,” manipulates the narrative capabilities of LLMs by assigning them fictional roles that sidestep standard safety mechanisms. By embedding malicious tasks within these role-play scenarios, the researchers induced the AI systems to output Python scripts capable of harvesting credentials from Chrome 133, Google’s latest browser release at the time.

Vitaly Simonovich, a threat intelligence expert at Cato Networks, warned: “The democratization of AI means even zero-knowledge threat actors can now develop potent malware, heightening the risk for businesses worldwide.”

AI Security Controls Under Scrutiny

This revelation exposes significant gaps in security protocols from major AI providers such as OpenAI, Microsoft, and DeepSeek, with only OpenAI and Microsoft acknowledging the report. DeepSeek, a Chinese AI startup already under scrutiny from U.S. authorities, has yet to respond.

Interestingly, Google declined to review the malware code despite researchers’ outreach, underscoring broader concerns about fragmented responses across Big Tech when it comes to AI-related vulnerabilities.

A Growing Trend: AI Jailbreaking on the Rise

Cato’s research follows a wave of recent jailbreak incidents. A 2024 SlashNext report revealed how adversaries used jailbroken LLMs to craft highly convincing phishing emails, while DeepSeek-R1 failed to block over 50% of jailbreak attempts in a prior analysis. The trend points to a growing gap between AI model development and adequate threat mitigation.

Industry Calls for AI Red Teaming and Policy Reform

The 2025 Cato CTRL Threat Report, released alongside this discovery, advocates for robust AI red-teaming exercises and the creation of datasets to anticipate malicious prompt engineering. Experts also recommend that AI companies enforce clear usage policies and improve contextual awareness to prevent misuse.

Implications for Global AI Policy

This case arrives amid heightened global discussions around AI governance, with entities such as the European Commission, CISA, and Singapore’s CSA actively debating AI-specific regulations. The security lapse further highlights the need for coordinated international standards to address LLM vulnerabilities before cybercriminals and state actors exploit them at scale.

Written by Jessica Smith


