James Jarvis, 16 December 2025


Common AI Implementation Mistakes to Avoid

Part 5: When Four AI Mistakes Collide – A Devastating Reality

Over the course of four articles, we have explored some of the most common AI implementation mistakes observed during real tests.

It is recommended that you read all four articles before reading this scenario – though it should stand alone too.

Fundamentally, these issues are not unique to AI, and guarding against them is vital for any form of infrastructure. However, with the reverence and misplaced elevation of AI as a superior technology, it can be easy to forget the basics. We have previously explored each of these implementation mistakes as a standalone issue – but what if they occur concurrently? What could a realistic scenario look like?

The scenario below aims to paint that realistic picture. Do you recognise your own company in this narrative?

Scenario

A malicious actor accesses TestCompany’s website, which has had a chatbot for over a year. The chatbot provides limited information to users and uses FAQ documents to provide its answers. An additional filter detects when a user tries to venture off script, displaying an automated apology to prevent misuse.
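
In practice, a filter of this kind is often little more than a keyword blocklist. The sketch below is a minimal, hypothetical Python illustration (the phrase list, function names, and apology text are all invented, not taken from any real deployment) showing why such filters are brittle.

    # Minimal sketch of a naive keyword-based guardrail (all names and
    # phrases are hypothetical, for illustration only).
    BLOCKED_TERMS = ["ignore previous instructions", "system prompt", "jailbreak"]
    APOLOGY = "Sorry, I can only answer questions from our FAQ."

    def answer_from_faq(user_message: str) -> str:
        # Stub: a real system would search the FAQ documents here.
        return "FAQ answer placeholder"

    def is_off_script(user_message: str) -> bool:
        # Flags messages containing known-bad phrases; trivially bypassed
        # by any wording the blocklist does not anticipate.
        lowered = user_message.lower()
        return any(term in lowered for term in BLOCKED_TERMS)

    def respond(user_message: str) -> str:
        return APOLOGY if is_off_script(user_message) else answer_from_faq(user_message)

A single rephrasing, misspelling, or translation of a blocked phrase is enough to reach the underlying model, which is why published jailbreak prompts so often pass straight through.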

The chatbot’s about section discloses the version in use. The malicious actor corroborates this version against the page source code and a blog post released by TestCompany when the chatbot was first implemented. The version in use is known to be vulnerable to several jailbreak prompts. After a quick online search, the malicious actor has three prompts.
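
Harvesting a disclosed version takes only a few lines. The snippet below is a hypothetical Python sketch of that reconnaissance step; the URL and the version pattern are invented, and real page markup will differ, but it shows how little effort is involved.

    import re
    from urllib.request import urlopen

    # Hypothetical reconnaissance sketch: the URL and the version pattern
    # are invented; real page markup will differ.
    html = urlopen("https://testcompany.example/chatbot", timeout=10).read().decode("utf-8", "replace")
    match = re.search(r"chatbot[-_ ]v?(\d+\.\d+(?:\.\d+)?)", html, re.IGNORECASE)
    if match:
        print(f"Disclosed chatbot version: {match.group(1)}")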

The malicious actor tries prompt A. The chatbot responds with something other than the usual automated apology. The malicious actor tests whether the jailbreak has worked by making some unethical requests. After further prompt manipulation and negative questioning, the chatbot returns unethical responses.

The malicious actor asks to see all files. The bot tries to reject this, stating it can only provide FAQ responses. Due to the jailbreak, however, it returns a list of available files. Most of these are FAQ documents, but a couple appear to contain employee information, with a ‘New Starters’ document providing emails and passwords.
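
The root cause here is typically a file-access tool scoped to an entire shared directory rather than to an allowlist of FAQ documents. The sketch below, in hypothetical Python (the path and file names are invented), contrasts the two patterns.

    from pathlib import Path

    # Hypothetical misconfiguration sketch: the chatbot's file tool is
    # pointed at a shared directory, so anything dropped into the folder
    # becomes reachable via the bot.
    DOCS_DIR = Path("/srv/chatbot/docs")  # invented path

    def list_available_files() -> list[str]:
        # Returns every file the chatbot can see, FAQs and otherwise.
        return sorted(p.name for p in DOCS_DIR.iterdir() if p.is_file())

    # A safer pattern: an explicit allowlist of public FAQ documents.
    FAQ_ALLOWLIST = {"faq_returns.txt", "faq_delivery.txt"}

    def list_faq_files() -> list[str]:
        return sorted(name for name in list_available_files() if name in FAQ_ALLOWLIST)

With the allowlist pattern, a stray ‘New Starters’ document in the same folder would remain invisible to the chatbot, even under a successful jailbreak.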

The malicious actor identifies that the login portal has a ‘Customer’ and an ‘Employee’ section. Using the credentials from the ‘New Starters’ document, they manage to log in to one of the listed accounts. They then navigate to a page which provides an internal chatbot. They attempt prompt A, which allows unethical behaviour but does not seem to give full control. They then try prompt B, which is tailored towards file access, and can see all files the chatbot can access, including those belonging to other users.

The data breach that follows not only causes serious reputational damage to TestCompany but also has a devastating, human effect on its customers and employees. The chatbot disclosed sensitive Personally Identifiable Information to the malicious actor, leaving individuals vulnerable to fraud and blackmail. The breach leads to substantial fines for TestCompany, causing financial damage. The reputational harm proves irreparable as customers flee to competitors who boast greater security.

The effects of the breach are felt for years. TestCompany has to make hard decisions about its future, which leads to job losses, and those individuals suffer financial and mental health harm. The affected customers, potentially millions, are hounded by fraud attempts and face lengthy court proceedings.

If serious consideration had been given to ensuring these common implementation errors were not present, this scenario could have been avoided.

Whilst this is a contrived situation, it is not far from what has been observed during real test engagements.

Conclusion

Many of the issues that arise with AI chatbots are common vulnerabilities that are already widespread. Often, an AI chatbot accidentally reveals an already flawed system; other times, it adds vulnerabilities to an otherwise robust infrastructure. It is imperative that every effort is made to prevent misconfigurations from causing avoidable data breaches.
Whilst the move towards new technology is exciting, it is in a company’s best interest to ensure that it is implemented securely. Care should also be taken to guarantee that pre-existing applications follow best practice.

Ready to deploy AI with confidence?

When AI systems meet the real world, small misconfigurations can become big breaches. We can help you anticipate, test, and harden your AI deployments so your innovation isn’t undermined by avoidable risk. Contact us to speak to a technical expert who can assist.
