James Jarvis 16 December, 2025

Common AI Implementation Mistakes to Avoid Part 4

Common AI Implementation Mistakes to Avoid

Part 4: Outdated and Insecure Software

With the exploding use of AI in internal and external applications being rapidly deployed across all sectors, new chatbots, AIs, and versions are being constantly released. Whilst this offers exciting new capabilities, it also widens the potential attack surface of a company’s infrastructure – particularly if updates are not duly applied.

The following ‘mistakes’ are based on findings from real tests. AI can be deployed in many areas of a company’s infrastructure, with the capabilities and possibilities constantly expanding. 

Data security is paramount, and the importance of software updates is not unique to AI – it is a common vulnerability found across all aspects of IT infrastructure. However, AI offers yet another avenue for an attack surface to be expanded, and it is paramount that the software updates for AI is taken seriously.

Outdated and Insecure Software

Outdated AI/chatbot software has the same inherent risks as outdated JavaScript libraries, or other software.

Fundamentally, known vulnerabilities could be exploited. If an application is using Chatbot Tech 1.1 which has known PoCs, and not updated to ChatBot 2.4, the web application will be vulnerable. Whilst a serious breach has not been publicly reported, new bugs are being regularly published online, and it increasingly seems a matter of ‘when’ a major breach will happen, rather than ‘if’. 

Aside vulnerable code, Chatbots are often susceptible to what is known as a ‘Jailbreak Prompt’. This is where a certain string of words or phrases can cause the Chatbot to ‘drop’ its previous instructions. When combined with the previous issues raised, this could lead to unethical behaviour and complete file access. Over time, online sleuths identify and share prompts designed to jailbreak clients. Whilst these are designed to be used against specific chatbot models which are not utilising additional filters, it is not unknown for these prompts to work on ‘hardened’ AI implementations. When a company releases a new model, they are usually designed to protect against these known prompts – though this can only be confirmed by testing old prompts against your new implementation.

Furthermore, outdated software could lead to further implications if the AI has been given further privileges (such as ability to change data). Vulnerabilities from outdated software could enable a malicious actor to leverage this to access permissions they would otherwise not have access to. It goes without saying that this could lead could to a serious breach and affect data confidentiality, integrity, and availability. 

It is also important to note that open-source AI models available online and often shared on online repositories such as GitHub are often not stringently tested or may be created with nefarious functions hiding within the source code. It is vital that all AI models used are rigorously tested and examined before deployment. In many ways, this is the same as open source plugins you may deploy on a website – AI requires the same level of care. Arguably, due to the level of user input and PII AI may digest, additional due diligence may be required.

For these reasons, alongside general best practice, AI/Chatbot software should be regularly updated and tested to ensure your web applications are secure against attacks. It should be noted that robust permissions and accurate datasets could mitigate the risks, but severe vulnerabilities may provide the opportunity for a malicious actor to bypass these safeguards.

The Key Takeaways:

  • Outdated AI/chatbot software carries the same risks as outdated libraries or plugins, with known vulnerabilities acting as easy entry points for attackers.
  • Jailbreak prompts and insecure open-source models can bypass filters or introduce hidden risks, making regular updates and thorough testing essential.
  • With AI evolving rapidly, falling behind on updates is not just risky, it hands attackers a ready-made opportunity to breach confidentiality, integrity, and availability of your company’s data.

At the speed AI developments are moving, it is important to move with the technology and not be left behind in a vulnerability-ridden dust cloud. 

Part 5...

Improve your security

Our experienced team will identify and address your most critical information security concerns.