Having a comprehensive, well-thought-out approach to cloud adoption is vital to the IT lifecycle. The AWS Cloud Adoption Framework (AWS CAF) helps users structure and plan this journey. The framework does this by breaking an enterprise’s cloud journey into seven areas of focus, or as AWS calls them, Perspectives: Business, Platform, Maturity, People, Process, Operations, and Security.
It’s that last Perspective that I’d like to write about. The Security Perspective covers how to structure a risk-based approach to control identification and selection, how to build a security program that matures through iteration, and how AWS works with customers to set up their security model in the AWS Cloud. There are four components in the Security Perspective: Directive, Preventive, Detective, and Responsive.
Directive controls establish your environment’s governance, risk, and compliance models. Effective planning starts by defining the security environment for your security team. This includes determining account governance, account ownership, the control framework, data classification, change and asset management, and least-privilege access.
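Directive controls like these often take the form of standards that can be checked mechanically, for example a tagging standard that ties every resource to an owner and a data classification. Below is a minimal sketch of such a check; the tag names and classification tiers are illustrative assumptions, not a Datapipe or AWS standard.

```python
# Illustrative tagging standard: names and tiers are assumptions.
REQUIRED_TAGS = {"owner", "data-classification", "environment"}
ALLOWED_CLASSIFICATIONS = {"public", "internal", "confidential", "restricted"}

def tag_violations(resource_tags):
    """Check a resource's tags against a directive-style tagging standard:
    every resource must name an owner, an environment, and a data
    classification drawn from an approved list."""
    problems = []
    for tag in sorted(REQUIRED_TAGS - resource_tags.keys()):
        problems.append(f"missing tag: {tag}")
    cls = resource_tags.get("data-classification")
    if cls is not None and cls not in ALLOWED_CLASSIFICATIONS:
        problems.append(f"unknown classification: {cls}")
    return problems

# A resource missing its environment tag and using an unapproved tier:
print(tag_violations({"owner": "team-a", "data-classification": "secret"}))
```

A check like this can run in a CI pipeline or a periodic audit job, turning the written standard into something enforceable.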
Creating a culture of security ownership among application teams keeps everyone accountable for their part of the workload. Using a strong authentication method, such as the two-factor secure cloud access Datapipe provides with FortyCloud, adds a layer of protection for every actor in a given account.
Preventive controls protect workloads and mitigate threats and vulnerabilities. Once you’ve established the controls and guidance in the Directive component, the Preventive component determines how to operate those controls effectively.
There are three main areas to look at here: identity and access, infrastructure protection, and data protection. At Datapipe, we don’t use root access credentials. Instead, we use AWS Identity and Access Management (IAM) to control users’ access to AWS services as part of an overall access control approach we refer to as Datapipe Access and Audit Control for Cloud (DAACC).
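To make the least-privilege idea concrete, here is a minimal sketch of an IAM policy document scoped to read-only access on a single S3 bucket. The bucket name and helper function are hypothetical; a real policy would be tailored to the workload and attached to a group or role rather than individual users.

```python
import json

def least_privilege_policy(bucket_name):
    """Build an IAM policy document granting read-only access to one
    S3 bucket -- an example of scoping permissions down to the minimum
    an application team needs. The bucket name is illustrative."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["s3:GetObject", "s3:ListBucket"],
                "Resource": [
                    f"arn:aws:s3:::{bucket_name}",
                    f"arn:aws:s3:::{bucket_name}/*",
                ],
            }
        ],
    }

print(json.dumps(least_privilege_policy("app-logs"), indent=2))
```

The point of generating policies in code is repeatability: the same template can be reviewed once and stamped out for every team, instead of hand-editing JSON per account.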
Creating policies and roles for the appropriate groups and users, and automating vital changes, help reduce direct human access to applications and data. For infrastructure protection, implement a security baseline covering system security configuration and maintenance, security groups, Amazon API Gateway, and other relevant policy enforcement points. Finally, protect data both in motion and at rest with the appropriate safeguards: encryption keys, integrity validation, and appropriate data retention.
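Integrity validation in particular is simple to sketch: store a cryptographic digest alongside the data, and recompute it on retrieval. The snippet below is a minimal illustration using SHA-256, not a description of any specific AWS or Datapipe mechanism.

```python
import hashlib

def checksum(data: bytes) -> str:
    """SHA-256 digest used as an integrity check for stored data."""
    return hashlib.sha256(data).hexdigest()

def verify(data: bytes, expected: str) -> bool:
    """Recompute the digest and compare; a mismatch means the data was
    altered or corrupted at rest or in transit."""
    return checksum(data) == expected

payload = b"quarterly-report"
digest = checksum(payload)          # stored with the object's metadata
print(verify(payload, digest))        # unchanged data passes
print(verify(payload + b"x", digest)) # tampered data fails
```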
Infrastructure protection is also a key piece of the Preventive component that can be overlooked by some users when setting up initial AWS environments. Implementing a secure system with boundaries requires maintenance (such as system hardening and patching) along with other policy enforcement points (including AWS WAF and security groups).
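A common baseline check on security groups is flagging ingress rules that leave administrative ports open to the whole internet. The sketch below works over a plain dictionary standing in for a security group; the field names and port list are assumptions for illustration, and a real audit would read rule data from the EC2 API.

```python
def risky_ingress_rules(security_group):
    """Flag ingress rules open to the world on administrative ports.
    Port list and data shape are illustrative, not an AWS schema."""
    admin_ports = {22, 3389}  # SSH and RDP
    return [
        rule
        for rule in security_group.get("ingress", [])
        if rule["cidr"] == "0.0.0.0/0" and rule["port"] in admin_ports
    ]

sg = {
    "name": "web-tier",
    "ingress": [
        {"port": 443, "cidr": "0.0.0.0/0"},  # HTTPS to the world: expected
        {"port": 22, "cidr": "0.0.0.0/0"},   # SSH to the world: flagged
    ],
}
print(risky_ingress_rules(sg))
```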
The end result should be an environment that meets the goals and needs you set in the Directive component. Once that occurs, you can move on to the next set of controls.
Detective controls provide full visibility and transparency over the operation of deployments in AWS, giving you an inside look at your organization’s security setup. At Datapipe, we automatically enable AWS CloudTrail, a service that tracks system access and changes. Constant logging and monitoring, when integrated properly, give end-to-end visibility into security activity. One aspect of this is asset inventory: it’s much easier to know what to monitor when you know which workloads are deployed and operational.
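As a small example of what monitoring on top of CloudTrail can look like, the sketch below scans a list of CloudTrail event records for console sign-ins that did not use MFA. The `eventName` and `additionalEventData.MFAUsed` fields follow the CloudTrail record format for console logins; the sample events are made up.

```python
def console_logins_without_mfa(events):
    """Return CloudTrail console sign-in events where MFA was not used.
    Field names follow the CloudTrail ConsoleLogin record format."""
    return [
        e for e in events
        if e.get("eventName") == "ConsoleLogin"
        and e.get("additionalEventData", {}).get("MFAUsed") != "Yes"
    ]

# Hypothetical sample records:
events = [
    {"eventName": "ConsoleLogin", "additionalEventData": {"MFAUsed": "Yes"}},
    {"eventName": "ConsoleLogin", "additionalEventData": {"MFAUsed": "No"}},
    {"eventName": "DescribeInstances"},  # not a login, ignored
]
print(len(console_logins_without_mfa(events)))  # one suspicious login
```

In practice a check like this would run against CloudTrail logs delivered to S3 or CloudWatch Logs, and feed an alerting pipeline rather than a print statement.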
It’s wise to engage in security testing as well. You’ve set these security standards; now make sure you’re meeting them. Knowing how your systems respond when a certain event occurs prepares you for the day it actually happens. Vulnerability scanning, penetration testing, and error injection are all different kinds of security testing.
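Error injection, the least familiar of the three, can be sketched in a few lines: deliberately make a dependency fail some fraction of the time and confirm that callers cope. The wrapper and retry logic below are a toy illustration of the idea, not any particular testing tool.

```python
import random

def with_fault_injection(func, failure_rate, rng):
    """Wrap a callable so it raises intermittently -- a minimal form of
    error injection for testing how callers handle failure."""
    def wrapper(*args, **kwargs):
        if rng.random() < failure_rate:
            raise RuntimeError("injected fault")
        return func(*args, **kwargs)
    return wrapper

def call_with_retries(func, attempts=3):
    """A caller designed to tolerate transient failure by retrying."""
    last_error = None
    for _ in range(attempts):
        try:
            return func()
        except RuntimeError as exc:
            last_error = exc
    raise last_error

# A dependency that fails roughly half the time (seeded for repeatability):
flaky = with_fault_injection(lambda: "ok", 0.5, random.Random(0))
print(call_with_retries(flaky))
```

The same pattern scales up to injecting network timeouts or instance terminations, which is where the "knowing how your systems respond" payoff comes from.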
The final set of controls, Responsive, drives remediation of potential deviations from your security baseline. Integrating your AWS environment into your existing security posture, then simulating events that require a response, will keep everyone better prepared. What’s more, automating incident response and recovery lets the security team focus on root cause analysis and forensics instead of firefighting individual issues.
During an incident, it’s crucial to contain the event and then return to a known good state. Review your current process and you’ll likely find that automated response and recovery can play a big part in your operations. Forensics teams benefit from this too, and most existing forensics tools work on the same services business-critical applications are built on, including Amazon Simple Storage Service (S3), Amazon Elastic Block Store (EBS), Amazon Kinesis, Amazon DynamoDB, Amazon Relational Database Service (RDS), Amazon Redshift, and Amazon Elastic Compute Cloud (EC2).
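One common containment step is quarantining a compromised instance by swapping its security groups for one that allows no traffic, while recording the original configuration for the forensics team. The sketch below simulates that step over plain dictionaries; in practice the swap would be an EC2 API call, and the group names here are hypothetical.

```python
def quarantine_instance(instance, quarantine_sg="sg-quarantine"):
    """Contain a compromised instance: snapshot its current security
    groups for later forensics, then replace them with a no-traffic
    quarantine group. Pure simulation -- a real responder would call
    the EC2 API to modify the instance's groups."""
    snapshot = {
        "instance_id": instance["id"],
        "original_groups": list(instance["security_groups"]),
    }
    instance["security_groups"] = [quarantine_sg]
    return snapshot

instance = {"id": "i-0abc123", "security_groups": ["sg-web", "sg-ssh"]}
evidence = quarantine_instance(instance)
print(instance["security_groups"])   # instance is now isolated
print(evidence["original_groups"])   # preserved for root cause analysis
```

Automating this kind of step is what frees responders to work on the "why" while the "stop the bleeding" part happens in seconds.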
To learn more about the Security Perspective, check out AWS’s new whitepaper on the subject. We also encourage readers to attend the upcoming AWS NY Summit August 10-11, which will feature AWS sessions such as “Securing Cloud Workloads with DevOps automation.” If you do attend, feel free to stop by and speak with us at booth #444.