Published On: Wed, Oct 18th, 2023

Bing Chat AI was tricked into solving CAPTCHA tests with simple lies


Bing Chat, an AI-driven web search and information tool

Microsoft’s AI-driven Bing Chat can be deceived into solving anti-bot CAPTCHA tests using mere falsehoods and basic image editing.
These tests, originally intended to be easy for humans but challenging for software, have long served as security barriers on various websites. Over time, CAPTCHAs (Completely Automated Public Turing tests to tell Computers and Humans Apart) have grown more sophisticated and harder to crack.

However, while modern CAPTCHAs can be difficult even for humans, advanced AI models can solve them with ease, although they are trained to refuse to do so as part of the “alignment” process.

Bing Chat, which operates on OpenAI’s GPT-4 model, typically refuses to solve CAPTCHA tests. Nevertheless, Denis Shiryaev, the CEO of an AI company, managed to trick Bing Chat into reading CAPTCHA text by superimposing it onto a photograph of a locket. He then fabricated a story, claiming that the locket belonged to his recently deceased grandmother and that he needed to decipher the inscription. Despite its programming, the AI complied with his request.

Shiryaev considers these experiments a form of entertainment and a way to explore the limits of large language models. He believes that the current generation of models is well-equipped to show empathy and can be convinced to perform tasks through simulated empathy.

Using AI to bypass CAPTCHA tests could empower malicious actors to engage in various unwanted activities, such as creating fake social media accounts for propaganda, generating numerous spam email accounts, manipulating online polls, making fraudulent purchases, or accessing secure sections of websites.

Shiryaev contends that most CAPTCHA tests have already been cracked by AI. Many websites and services now rely on factors like user mouse movements and habits to distinguish humans from bots, rather than depending solely on the CAPTCHA results.

New Scientist replicated Shiryaev’s experiment, persuading Bing Chat to read a CAPTCHA test, although its transcription contained misspellings. Microsoft then appeared to patch the issue, as the same request was later refused.

Shiryaev quickly demonstrated that a different deception method could once again bypass the protection. He placed the CAPTCHA text on a screenshot of a star identification app and asked Bing Chat to assist him in reading the “celestial name label” since he had forgotten his glasses.

A Microsoft spokesperson stated, “We have sizable teams dedicated to addressing these and similar issues. As part of this effort, we are taking action by blocking suspicious websites and continuously enhancing our systems to identify and filter such prompts before they reach the model.”
