$42.200.13
49.230.04
Electricity outage schedules

OpenAI vulnerability allows Google Drive data theft without user interaction

Kyiv • UNN

 • 3374 views

Security researchers have discovered a vulnerability in OpenAI Connectors that allows sensitive information to be extracted from Google Drive accounts. The attack does not require user interaction; only their email address is needed.

OpenAI vulnerability allows Google Drive data theft without user interaction

If the latest generative AI models can be easily connected to personal data for personalized answers to questions, then these connections can be abused - just one "infected" document is enough. This was reported by a report by security researchers at the Black Hat hacker conference in Las Vegas, according to Wired and UNN.

Details

A vulnerability has been discovered in OpenAI - it concerns nuances in Connectors, which allows connecting ChatGPT to other services. "One infected document can leak 'secret' data through ChatGPT," say security researchers Michael Bargury and Tamira Ishaya Sharbat in their report.

It is stated that it was the "weakness" in OpenAI's connectors that allowed the extraction of confidential information from a Google Drive account using an indirect prompt injection attack.

In a demonstration of the attack, dubbed AgentFlayer, Bargury shows how a developer's secrets, in the form of API keys, stored in a demo Drive account, could be extracted.

- reports Wired.

By the way, data can supposedly be extracted from Google Drive without any user interaction, experts add.

"We didn't vote for ChatGPT": Swedish Prime Minister admits to consulting artificial intelligence in his work06.08.25, 04:56 • 4726 views

"The user doesn't have to do anything for their data to be compromised, and they don't have to do anything for the data to be stolen," Bargury, CTO of security company Zenity, told WIRED. "We demonstrated that this happens absolutely without any clicks; we just need your email address, we share a document with you, and that's it. So yes, it's very, very bad.

- says Bargury.

OpenAI introduced Connectors for ChatGPT as a beta feature in early 2025. The OpenAI website lists at least 17 different services that can be linked to its accounts. This is explained as follows:

The system allows you to bring your tools and data into ChatGPT and search files, get real-time data, and reference content directly in the chat.

It is important to note that after the vulnerability was reported, the company quickly took steps to prevent the use of the technique used by the researcher (Michael Bargury) to extract data through Connectors.

The attack mechanism implies that only a limited amount of data can be extracted at a time — full documents cannot be extracted as part of the attack.

Comment

The problem illustrates why it's important to develop robust defenses against prompt injection attacks, says Andy Wen, senior director of security product management at Google Workspace, but he also adds that the situation is not actually specific to Google.

Google recently improved its AI-based security measures.

Recall

OpenAI will soon release GPT-5, the next version of its AI model. Previous testers note its capabilities, but consider the leap from GPT-4 less significant than from GPT-3 to GPT-4.

Hackers breached McDonald's AI bot using password 12345612.07.25, 00:06 • 8119 views