The adoption of cloud platforms and solutions has accelerated rapidly in the last year. Some of this adoption is a natural result of the global pandemic pushing organisations to find ways to continue operating and to support the needs of the workforce and customers. Managing and setting up a cloud environment can be very different from deploying traditional on-premises infrastructure, so careful planning and design consideration are key to building scalable, resilient, secure cloud environments.
The pandemic has driven cloud adoption and changed how end users gain access to resources. Traditional IT infrastructures were built around perimeter controls, and access to internal resources required a user to establish a remote tunnel into the network. System architects had to build in redundancy and carefully balance traffic volumes, as unplanned downtime could effectively sever portions of the workforce from functioning. Adequate power failover and hardware recovery options were always part of a well-defined disaster recovery approach. The same was true of email provision, where internal email servers were exposed to the world through firewall rules. Managing these systems takes administration, time and money.
In recent years we have seen an increase in outsourcing these types of services to large providers, as the benefits of offloading the administrative burden can be attractive. Customers consume the product without having to be concerned with the redundancy aspects of the solution: power requirements and hardware patching are handled by the provider, leaving the customer free to focus on managing user access. This approach also allows companies of all sizes to benefit from the same level of availability where, traditionally, deploying a similar solution was a large investment for a smaller company.
Always-on access to the services users need gives the workforce flexibility, and where a vendor offers an integrated stack of tooling - for example, email, video conferencing and file storage - companies can immediately see the potential benefits of operating in the cloud.
Whilst the benefits of cloud solutions are obvious, best practice and security controls are sometimes overlooked when solutions are deployed rapidly. In an ideal world, starting with a fresh cloud environment and migrating business components in phases gives the best outcome, but when deploying at speed this is not always possible, and in most cases companies have existing internal structures that need to be transitioned. A common approach is to link internal systems with cloud-based resources, for example synchronising an on-premises Active Directory with Azure. Whilst this provides a quick route, mass account synchronisation can put accounts with weak passwords within reach of attackers. Historically, despite poor account security choices, a strong perimeter afforded some protection; in a cloud model, that traditional perimeter is gone. Understanding how cloud architecture differs from historic IT approaches is vital when deciding which configuration applies and what the implications of each decision are. It is too easy to start with the “how” and immediately begin configuring; a detailed understanding of “why” a specific requirement is needed helps create a robust, flexible environment for the longer term.
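One practical way to audit accounts before a mass synchronisation is to check passwords against a known-breached corpus without ever sending the password itself. The sketch below shows the k-anonymity pattern popularised by the Pwned Passwords range API: only the first five characters of the SHA-1 digest would be sent to the service, and the returned suffixes are compared locally. This is an illustrative sketch of the lookup split, not a full client.

```python
import hashlib


def hibp_range_parts(password: str) -> tuple[str, str]:
    """Split a password's SHA-1 digest for a k-anonymity range lookup.

    Only the 5-character prefix would ever leave your network (as a
    query such as https://api.pwnedpasswords.com/range/<prefix>); the
    suffix is compared locally against the suffixes returned for that
    prefix.
    """
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]


prefix, suffix = hibp_range_parts("password")
# "password" is a famously breached value, so its suffix would appear
# in the range response with a very high breach count.
```

Running an audit like this over accounts due to be synchronised can surface weak credentials before they are exposed to internet-facing authentication endpoints.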
Skills in cloud-based solutions are in high demand, as the vendors' certification programmes show. Microsoft has adopted a role-based approach to certifications that ties heavily into the Microsoft 365 and Azure solutions. Amazon and Google offer similar programmes, with certifications aimed at skill levels within specific areas such as development and system administration. The options and services offered by all cloud vendors are vast and rapidly expanding, so defining career paths as IT processes continue to evolve is a wise investment for both the individual and the organisation.
One common step in these deployments is to enforce multi-factor authentication (MFA) for users. This is a good thing and can measurably improve security. In the past such requirements were complex to deploy, but cloud platforms make it very easy to require that users authenticate with a secondary factor, either hardware based or via a TOTP device. However, it is also common for organisations to define trusted locations that are exempt from MFA enforcement via policies. This is a perfect example of usability versus security. By enabling a trusted location, most often an office with an externally NAT’d IPv4 address, you are essentially trusting every source IP address on the internal network at that location. From an attacker's perspective, the secondary factor has been removed simply by being part of this network, and one weak credential could be the gateway in. Contrast this risk against users being prompted to re-authenticate every 30 days via sign-in frequency controls, which has little impact on the user experience. In some cases trusted locations must be defined, but we should question why a network is trusted and exempt from effective security controls, and reduce the attack surface as much as possible.
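As a sketch, the policy described above - MFA for everyone, no trusted-location carve-out, a 30-day sign-in frequency - can be expressed in the shape of a Microsoft Graph Conditional Access policy. The field names below follow the `conditionalAccessPolicy` resource as commonly documented, but they are written from memory and should be verified against current Microsoft Graph documentation before use.

```python
# Illustrative Conditional Access policy body in the shape of the
# Microsoft Graph conditionalAccessPolicy resource. Field names are an
# assumption from the published schema; verify before deploying.
policy = {
    "displayName": "Require MFA everywhere, 30-day sign-in frequency",
    "state": "enabled",
    "conditions": {
        "users": {"includeUsers": ["All"]},
        "applications": {"includeApplications": ["All"]},
        # Deliberately no excludeLocations entry: every network,
        # including the office, is treated the same.
        "locations": {"includeLocations": ["All"]},
    },
    "grantControls": {
        "operator": "OR",
        "builtInControls": ["mfa"],
    },
    "sessionControls": {
        "signInFrequency": {
            "isEnabled": True,
            "type": "days",
            "value": 30,
        },
    },
}
```

The key design choice is what is absent: by omitting any location exclusion, a weak credential alone is never enough, regardless of which network the request originates from.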
Another advantage of the cloud model is visibility. When performing simulated attacks on an organisation, the focus is often on the ability to detect and respond to attacks, and this starts with visibility over the network and its components. Cloud vendors are continually improving their logging and reporting options, and it is now common for a multitude of logging options to be provided as part of the vendor offering. Learning how to reduce the “noise” is a key skill, but having integrated logging solutions as part of the overall vendor choice is certainly an advantage.
A Fresh Perspective
Deployment practices have changed considerably in the last few years, and this will only continue to evolve. Modern system administration is largely governed by Application Programming Interfaces (APIs) that enforce baselines and configuration. This shift from traditional perimeter-focused administration to policy-based controls complements the flexibility that cloud platforms offer. The ability to enforce rights based on groups or roles is at the centre of a solid cloud deployment strategy. Protecting privileged groups, and defining policies that enforce more stringent controls for them, should be part of an effective security strategy: for example, enforcing that administrators can only connect from devices with an issued certificate, always requiring MFA, and ensuring that the device meets a certain patching threshold (operating system, browser version).
We effectively establish trust boundaries and assurance via these policies, and by layering policy requirements we can achieve granular control over who can connect and by what means. With the traditional perimeter removed, all source networks are treated as hostile; instead, we establish trust through validation and conformity of the inbound request. This is the “Zero Trust” model, an emerging approach to network design in response to changing architecture and cloud adoption. Essentially, it focuses on the component of the system making the connection (what is connecting) and the account using it (who is the identity), regardless of where that source is coming from. Validating both against a central directory is a vital component of a Zero Trust model.
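The layered evaluation described above can be sketched as a simple policy function. This is a minimal illustration of the Zero Trust idea, not any vendor's engine; the four checks are assumptions standing in for the device and identity validations the article mentions (directory lookup, MFA, device certificate, patch level).

```python
from dataclasses import dataclass


@dataclass
class Request:
    """An inbound access request, reduced to the signals we validate."""
    user_in_directory: bool   # identity resolves in the central directory
    mfa_satisfied: bool       # secondary factor was presented
    device_certified: bool    # device holds an issued certificate
    meets_patch_level: bool   # OS/browser meet the patching threshold


def allow(req: Request) -> bool:
    """Zero Trust sketch: trust comes only from validating the request
    itself. Note that the source network never appears in the decision -
    every check must pass, wherever the request comes from."""
    return (req.user_in_directory
            and req.mfa_satisfied
            and req.device_certified
            and req.meets_patch_level)
```

A request from the office LAN with an unpatched browser fails exactly as one from a coffee shop would, which is the point: conformity of the request, not location, establishes trust.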
Migration to cloud resources can offer considerable benefits in how a company develops services. One area that benefits greatly is research and development, where quickly standing up infrastructure to explore a pilot project is easy and directly provides tooling that aids agility. Developers can be granted rights to deploy instances and application stacks to work towards these goals. However, without proper oversight, these can mount up unpredictable and hidden costs. Maintaining a trail that links development costs with auditing and authorisation processes helps the business understand the expenditure. Consistent verification of cloud assets and their purpose is required, but this is nothing new: understanding the assets in use and their function within the business network is as relevant today as it has always been. The new challenge is that a forgotten server instance has cost implications, and depending on the instance types in use, this cost can be significant.
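One common mechanism for that audit trail is resource tagging: every instance carries a tag naming the project that authorised it, and costs roll up by tag. The sketch below assumes a hypothetical inventory format (dicts with `tags`, `hourly_cost` and `hours` keys); real deployments would pull this from the provider's billing export.

```python
from collections import defaultdict


def cost_by_project(instances: list[dict]) -> dict[str, float]:
    """Roll instance costs up by their 'project' tag.

    Anything without a project tag lands under 'unattributed', which
    is exactly the hidden spend - forgotten or unauthorised instances -
    that a review should chase down.
    """
    totals: dict[str, float] = defaultdict(float)
    for inst in instances:
        project = inst.get("tags", {}).get("project", "unattributed")
        totals[project] += inst["hourly_cost"] * inst["hours"]
    return dict(totals)


inventory = [
    {"tags": {"project": "pilot-a"}, "hourly_cost": 2.0, "hours": 100},
    {"tags": {}, "hourly_cost": 1.5, "hours": 40},  # no owner recorded
]
```

Reviewing the `unattributed` bucket regularly turns "a server someone forgot" from an invisible line on the bill into an actionable finding.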
There are also security implications that extend beyond gaps in administration. The ease of cloud deployment is an attractive lure to an attacker wishing to consume expensive cloud resources at your expense. The range of instance types accessible across the cloud platforms is constantly increasing, and a multi-GPU machine, costing a significant amount per hour, is a mere click away. Deploying such an instance into a region that is not normally checked could go unnoticed if the cloud account has not been set up with the correct tracking metrics and audit trails. These expensive instances could be mining cryptocurrency, deployed simply to inflate your bill, or used to facilitate attacks against other entities. From the cloud provider's perspective, if you use the resource, you pay for the resource.
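A cheap first line of defence is a scheduled check that flags anything running outside the regions the business actually uses. The region names and inventory format below are hypothetical; in practice the list would come from the provider's inventory API and the allow-list from policy.

```python
# Hypothetical allow-list: the only regions the business deploys into.
APPROVED_REGIONS = {"uk-south", "eu-west"}


def out_of_region(instances: list[dict],
                  approved: set[str] = APPROVED_REGIONS) -> list[dict]:
    """Return instances running outside the approved regions.

    A GPU-heavy machine appearing in a region nobody monitors is a
    classic indicator of a compromised cloud account being used for
    cryptomining or as attack infrastructure.
    """
    return [i for i in instances if i["region"] not in approved]


fleet = [
    {"id": "web-1", "region": "uk-south", "type": "small"},
    {"id": "gpu-9", "region": "ap-east", "type": "multi-gpu"},
]
```

Paired with billing alerts, a check like this shortens the window between an attacker's first click and the finance team's unpleasant surprise.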
Business processes will continue to drive cloud adoption, as the trends demonstrate. This is down to business development plans as well as vendor strategies. Microsoft has made it clear that Azure is the future of its core infrastructure and a key component of the company strategy. This has huge benefits in centralising management and maintenance, and in many ways it has changed the context of the company: Microsoft was historically driven by sales of operating system software that customers managed on premises, whereas the software deployed now is not restricted to Microsoft options, and the focus is on providing a mechanism for it to run on Azure. This levels the competition for companies of all sizes in terms of access to resources, as the focus shifts to what you build using these cloud platforms rather than the scale of the organisation. When talking to our customers, we frequently find them at various stages of a cloud adoption process, and part of that journey is understanding the security implications for their data and processes. Supporting customers whilst they determine what a transition to a cloud-centric model of operation means for them is a request the security industry will continue to see as companies push more resources onto cloud provider platforms.