James Jarvis, 16 December 2025

Common AI Implementation Mistakes to Avoid

Part 1: Misconfigured AI/LLM Permissions

AI is rapidly being deployed in internal and external applications across all sectors. Whilst this offers exciting new capabilities, it also widens the potential attack surface of a company’s infrastructure. AI chatbots can accidentally expose sensitive data if permissions aren’t properly configured.

The ‘mistakes’ highlighted within this mini-series are based on findings from real tests. AI is often deployed as a chatbot within an internal or external web application, and that scenario is the focus of this article. Whilst the discussion centres on chatbots, the guidance also applies to data security across a company’s wider infrastructure.

Data security is paramount, but many of the issues raised below are not unique to AI – in most cases, these are standard vulnerabilities. Their prevalence has been exacerbated by poor AI/chatbot implementations in which rigorous security practices may have been overlooked.

Misconfigured AI/LLM Permissions

When an account has misconfigured permissions, it may allow a user to access areas they should not be able to reach. An example would be a setup where Team A cannot see Team B’s customer data, and vice versa, while a non-employee has no access to either. Controls like these protect the confidentiality of customer data.

To ensure this remains a robust approach, chatbots and other AI features require careful implementation. If a system-wide AI/chatbot is implemented without considering user permissions, it could enable users to access data which would otherwise be off-limits to them. This could allow Team A to access Team B’s documents via the AI, even if they cannot reach them through the usual file-access methods. If badly misconfigured, a non-employee using a customer-facing AI/chatbot may even be able to access internal information. Any instance where a chatbot/AI grants additional access constitutes a breach of data confidentiality.
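To make this concrete, the sketch below shows one way to enforce user permissions in a chatbot’s retrieval layer. It is a minimal illustration, not any particular product’s API: the `Document` type, `DOCUMENT_STORE`, and `retrieve_for_user` are hypothetical names, and the ACL model (a set of allowed teams per document) is an assumption made for demonstration purposes.

```python
# Illustrative sketch: enforcing user permissions in a chatbot's
# retrieval layer. All names here are hypothetical; the key idea is
# that the permission filter runs server-side, BEFORE any text is
# placed in the LLM's context, so the model can never leak documents
# the user was not entitled to see.

from dataclasses import dataclass


@dataclass
class Document:
    doc_id: str
    text: str
    allowed_teams: set[str]  # ACL recorded when the document is ingested


DOCUMENT_STORE = [
    Document("a-1", "Team A customer notes", {"team_a"}),
    Document("b-1", "Team B customer notes", {"team_b"}),
]


def retrieve_for_user(query: str, user_teams: set[str]) -> list[Document]:
    """Return only documents the requesting user is entitled to see."""
    # Naive keyword match stands in for whatever search/embedding lookup
    # the chatbot actually uses.
    candidates = [d for d in DOCUMENT_STORE if query.lower() in d.text.lower()]
    # The crucial step: drop anything the user's teams cannot access.
    return [d for d in candidates if d.allowed_teams & user_teams]


# A member of Team A searching for "customer" sees only Team A documents.
print(retrieve_for_user("customer", {"team_a"}))
```

The important design choice is where the check lives: filtering in the retrieval layer means a cleverly worded prompt cannot talk the model into revealing data, because the off-limits text never reaches the model in the first place.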

It should be noted that misconfigured permissions are a security vulnerability not limited to AI – however, AI could widen the attack surface, providing further opportunities for a malicious actor to take advantage. Furthermore, if the AI does consider user permissions but those permissions are themselves weak, the underlying issue will remain unresolved.

Companies should ensure that all aspects of their application(s) have a robust permissions policy in place to prevent a breach of data confidentiality. That policy should extend to any newly adopted technologies and be regularly reviewed to confirm misconfigurations have not occurred. Where misconfigurations are found, robust procedures should be in place to ensure they are swiftly addressed.
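As a sketch of what ‘regularly reviewed’ could mean in practice, the hypothetical tests below build on the `retrieve_for_user` function from the earlier example and could run on every deployment to catch permission regressions. The team names and assertions are illustrative assumptions, not a prescribed test suite.

```python
# Hypothetical permission-regression checks (pytest style), building on
# the retrieve_for_user sketch above. Running these automatically on
# each deployment turns "regularly reviewed" into a repeatable check.


def test_no_cross_team_leakage():
    # A Team A user must never retrieve Team B documents via the chatbot.
    results = retrieve_for_user("customer", {"team_a"})
    assert all("team_a" in d.allowed_teams for d in results)


def test_external_user_sees_nothing_internal():
    # A non-employee (no team memberships) must retrieve no internal data.
    assert retrieve_for_user("customer", set()) == []
```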

The Key Points

The three main takeaways for Misconfigured Permissions:

  • Misconfigured permissions could enable AI/chatbots to bypass normal data security boundaries, exposing confidential information.
  • AI doesn’t eliminate traditional vulnerabilities – in fact, it can expand the attack surface if implemented poorly.
  • Strong, regularly reviewed permissions policies are critical to ensure AI enhances, not undermines, security.

As companies rush to develop AI offerings, they must not neglect data security. Failing to address these concerns before deployment risks exposing sensitive data in pursuit of progress.

Part 2...
 
