Amazon EC2 Instance Types (2019)


Creating a Virtual Server Using Amazon Elastic Compute Cloud (EC2) Instance Types

Consider the meaning of elastic in many of the AWS service names. When you see the word elastic, you should think of the ability to stretch and contract. All the AWS documentation alludes to this fact, but it often makes the whole process sound quite complicated when it really isn’t. Just think about a computer that can stretch when you need more resources and contract when you don’t.  

 

With AWS, you pay only for the services you actually use, so this capability to stretch and contract is important because it means that your organization can spend less money and still end up with just the right amount of services needed.

 

Even though some members of your organization might fixate on the issue of money, the real value behind the term elastic is time. Keeping your own equipment right-sized is time-consuming, especially when you need to downsize. 

 

Using Amazon EC2 means that you can add or remove computing capacity in just a few minutes, rather than weeks or months. Because new requirements tend to change quickly today, the capability to right-size your capacity in minutes is crucial, especially if you really do want that pay raise.

 

As important as being agile and keeping costs low are to an administrator, another issue is even more important: being able to make changes without jumping through all sorts of hoops. Amazon EC2 provides two common methods for making configuration changes:

  • Manually using the AWS Console
  • Automatically using the AWS Application Programming Interface (API)
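
For example, here's a minimal sketch of the API route using the AWS Command Line Interface; the instance ID is a hypothetical placeholder, and an instance must be stopped before its type can change:

# Stop the instance and wait for it to reach the stopped state.
aws ec2 stop-instances --instance-ids i-0123456789abcdef0
aws ec2 wait instance-stopped --instance-ids i-0123456789abcdef0

# Switch to a larger instance type, then start the instance again.
aws ec2 modify-instance-attribute --instance-id i-0123456789abcdef0 --instance-type Value=m4.large
aws ec2 start-instances --instance-ids i-0123456789abcdef0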

 

Just as you do with your local server, you have choices to make when building an EC2 instance (a single session used to perform one or more related tasks). The instance can rely on a specific operating system, such as Linux or Windows. You can also size the instance to provide a small number of services or to act as a cluster of computers for huge computing tasks (and everything in between).

 

AWS bases the instance size on the CPU type and the amount of memory and storage required to perform the tasks you assign to the instance. In fact, you can create optimized instances for tasks that require more resources in the following areas:

  • CPU
  • Memory
  • Storage
  • GPU

As the tasks that you assign to an instance change, so can the instance configuration. You can adjust just the memory allocation for an instance or provide more storage when needed. You can also choose a pricing model that makes sense for the kind of instances you create:

 

On Demand: You pay for what you use.

Reserved Instance: Provides a significantly reduced price in return for a one-time payment based on what you think you might need in the way of service.

Spot Instance: Lets you name the price you want to pay, with the price affecting the level of service you receive.

Autoscaling is an EC2 feature that you use to ensure that your instance automatically changes the configuration as the load on it changes. Rather than require someone to manage EC2 constantly, you can allow the instance to make some changes as needed based on the requirements you specify.

 

The metrics you define determine the number and type of instances that EC2 runs. The metrics include standards, such as CPU utilization level, but you can also define custom metrics as needed.
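
For example, a hypothetical application could publish its own metric with the AWS CLI and then reference that metric from a scaling policy (the namespace, metric name, and value here are invented for illustration):

aws cloudwatch put-metric-data --namespace "MyApp" --metric-name ActiveSessions --value 42 --unit Count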

 

A potential problem with autoscaling is that you’re also charged for the services you use, which can mean an unexpectedly large bill. Every EC2 feature comes with pros and cons that you must balance when deciding on how to configure your setup. AWS also provides distinct security features.

 

The use of these security features will become more detailed as the blog progresses. However, here is a summary of the security features used with EC2:

 

Virtual Private Cloud (VPC): Separates every instance running on the physical server from every other instance. Theoretically, no one can access someone else’s instance.


Network Access Control Lists (ACLs) (Optional): Acts as a firewall to control both incoming and outgoing requests at the subnet level.

 

Identity and Access Management (IAM) Users and Permissions: Controls the level of access granted to individual users and user groups. You can both allow and deny access to specific resources managed by EC2.

 

Security Groups: Acts as a firewall to control both incoming and outgoing requests at the instance level. Each instance can have up to five security groups, each of which can have different permissions. This security feature provides finer-grained control over access than Network ACLs, but you must also maintain it for each instance, rather than for the subnet as a whole.
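
As a sketch, adding an inbound rule to a security group with the AWS CLI looks like the following; the group name matches the one created later in this blog, and the address range is a documentation placeholder:

aws ec2 authorize-security-group-ingress --group-name Default-Launch --protocol tcp --port 22 --cidr 203.0.113.0/24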

 

Hardware Security Device: Relies on a hardware-based security device that you install to control security between your on-premises network and the AWS cloud.

No amount of security will thwart a determined intruder. Anyone who wants to gain access to your server will find a way to do it no matter how high you build the walls.

 

In addition to great security, you must monitor the system and, by assuming that someone will break in, deal with the intruder as quickly as possible. Providing security keeps the less skilled intruder at bay as well as helps keep essentially honest people honest, but skilled intruders will always find a way in. The severity of these breaches varies, but it can actually cause businesses to fail, as in the case of Code Spaces.

 

A number of security researchers warn that AWS is prone to security lapses. However, don’t assume that other cloud services provide better security. Any time you use external services, you take significant risks as well.

 

A final consideration is the use of storage. Each instance comes with a specific amount of storage based on the kind of instance you create. If the instance storage doesn’t provide the functionality or capacity you need, you can also add Elastic Block Store (EBS) support. The main advantage of using EBS, besides capacity and flexibility, is the capability to define a specific level of storage performance to ensure that your application runs as expected.

 

Working with the Identity and Access Management (IAM) Console


One of the more basic and prevalent security measures provided by Amazon is IAM. Even though this service appears in this blog, it isn’t solely associated with EC2. You also use this service for other needs:

  • Computing
  • Storage
  • Database
  • Application services

In short, you see a lot of IAM throughout the blog because you need to know how to use it with a number of services. The following sections provide a general overview of IAM, plus some EC2 specifics that you use when working through various procedures in this blog.

 

For example, the following sections tell you the general principles of working with IAM, which includes interacting with groups, users, and permissions in a manner similar to that used with physical servers that you may have used in the past.

 

Configuring root access


When you initially create your AWS account, Amazon associates the account name and password that you provide with the root user identity. The root user has unrestricted access to all the AWS resources. As you might imagine, this makes the root user account dangerous, because anyone who gains access to it can also access everything your AWS account supports.

 

Because root user access is so dangerous, Amazon recommends that you use it only to create your first administrative user and then lock the root user access so that no one can use it. The root user never goes away; you simply make it nonfunctional unless you really do need it at some point.

 

To begin this process, you must access the IAM Console at https://console.aws.amazon.com/iam/. If you aren't already signed in to AWS, you see a sign-in page for your account. Provide your credentials as needed.

 

Now that you have access to the IAM Console, you can perform the tasks required to create an administrator account and assign it privileges. The following sections help you accomplish these goals.

 

You can close the left panel to see more of the selected page by clicking the left-pointing arrow in the tab near the top of the page. When the panel is closed, clicking the right-pointing arrow in the tab reopens it. By closing the panel until you need to use it, you can gain a better view of the task at hand.

 

Creating the Administrators group


Before you can create an Administrator user, you must provide a group for it. Every user who can perform administrative tasks is part of the Administrators group. The following steps describe how to perform this task.

 

1. Click Groups on the left side (Navigation pane) of the IAM Console page.

You see the Groups page. The page currently lacks any groups because you haven’t created any. As you create new groups, the page shows each of them, along with a list of users, the policy associated with the group, and when you created the group.

 

2. Click Create New Group.

You see the Set Group Name page. You type the name of the group you want to create in the Group Name field. The Group Name field supports names up to 128 characters long, but you normally don't make them that long. Choose something simple, like Administrators, to describe your group.


3. Type Administrators (or the name of the group you want to create) and click Next Step.

You see the Attach Policy page. Each group can have one or more policies attached to it. In this case, the IAM Console automatically shows the only existing policy, which is AdministratorAccess.

 

4. Check AdministratorAccess and click Next Step.

You see the Review page. This page tells you the group name and shows the policies that are attached to it. If you find that the group name is wrong, click Edit Group Name. Likewise, if the policy is incorrect, click Edit Policies. The IAM Console helps you make the required changes to the name or policy.

 

5. Verify the group information and click Create Group.

 

Creating an Administrator user


Creating the Administrators group is the first step in ensuring that your AWS account remains safe. The next step is to create an account for yourself and assign it to the Administrators group so that you have full access to the administrative features of your AWS account. The following steps describe how to perform this task.

 

1. Select Users in the Navigation pane.

You see the Users page. This page shows all the users who can access the EC2 setup, what level of access they have, password details, access keys, and the time you created the account.

 

2. Click Create New Users.

You see the Create User page. You can create up to five new users at a time. Each username can contain up to 64 characters.


3. Type the username you want to use in field 1, deselect the Generate an Access Key for Each User check box, and click Create.

Amazon creates a new user with the name you specify and returns you to the Users page. Your new user account still doesn't belong to the Administrators group, however, so that's the next step in the process.

 

4. Click the link associated with the username you specified for the administrator account.

You see the user specifics on the Summary page. Notice the tabs at the bottom of the page that you can use to configure the account.

 

5. Click Add User to Groups.

You see a list of available groups.

 

6. Check the Administrators entry and click Add to Groups.

The IAM Console returns you to the Summary page for the user. However, the Groups tab now shows that the user is part of the Administrators group. Note that you can remove access to a group by clicking the Remove from Group link next to the group entry.

 

7. Select the Security Credentials tab of the Summary page.

 

8. Click Manage Password.

You see the Manage Password page. Note that you can assign an auto-generated password to the account or rely on a custom password. When creating accounts for other users, make sure to check the Require User to Create a New Password at Next Sign-in check box.


9. Type the same password in the Password and Confirm Password fields and then click Apply.

The IAM Console returns you to the Summary page, where you see that the user now has a password but hasn’t ever used it.

 

10. Sign out of your root user account.

You need to sign back into AWS using the new administrator account that you just created.

 

Accessing AWS using your new Administrator account


To use the new account you just created, you navigate to your AWS account's sign-in URL using your browser. When there, you see a sign-in page. Notice that the account name is already filled in for you. Type your username and password into the appropriate fields; then click Sign In. You see the same AWS services as normal, but now you use your administrative account to access them.

 

Defining permissions and policies


Sometimes it’s hard to figure out the whole idea behind permissions and policies. To begin with, a permission defines the following:

  • Who can access a resource
  • What actions individuals or groups can perform with the resource

 

Every user starts with no permissions at all. In other words, a user can't do anything, not even view security settings or use access keys to interact with a resource. This is a good practice because it means that you can't inadvertently create a user with administrator rights simply because you forget to assign the user specific permissions.

 

Of course, assigning every user every permission required to perform what amounts to the same tasks is time-consuming and error-prone. A policy is a package of permissions that apply to a certain group of people. For example, in the “Creating the Administrators group” section, earlier in this blog, you create a group named Administrators and assign the AdministratorAccess policy to it, which gives anyone in the Administrators group administrator-level permissions to perform tasks with AWS.

 

AWS comes with only the AdministratorAccess policy configured, so if you want to use other policies, you must define them yourself. Fortunately, policies come in several forms to make them easier to work with. The following list describes the policy types and how you use them:

 

Managed: Stand-alone policies that you can attach to users and groups, but not to resources. A managed policy affects identities only. When creating a managed policy, you use a centralized management page to create, edit, monitor, roll back, and delegate policies. You have access to two kinds of managed policies.

 

AWS-Managed: A policy that AWS creates and manages for you. These policies tend to support the obvious needs that most organizations have. The reasons to use AWS managed policies are that they’re simple to implement and they automatically change to compensate for changes to AWS functionality.

 

Customer-Managed: A policy that you create and manage. Use these policies to support any special organizational requirements. The main reason to use a policy of this type is to gain flexibility that the AWS-managed option doesn’t provide.

 

Inline: Embedded policies that you create and attach to users, groups, or resources on an individual basis (without using a centralized manager). AWS views identity policies (those used for users and groups) differently from resource policies as described in the following list:

 

Embedded User, Group, or Role: An identity policy that you embed directly into a user, group, or role. You use an identity policy to define the actions that an entity can perform. For example, a user can have permission to run instances of EC2. In some cases, in addition to defining what action a user can take, the policy can also define the specific resource with which the user can work. This additional level of control is a resource-level permission, one that defines both the action and the specific resource.
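
As an illustrative sketch, an identity policy with a resource-level permission might look like this (the Region, account number, and instance ID are hypothetical placeholders):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["ec2:StartInstances", "ec2:StopInstances"],
      "Resource": "arn:aws:ec2:us-east-1:123456789012:instance/i-0123456789abcdef0"
    }
  ]
}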

 

Resource: One or more permissions that determine who can access the resource and what actions they can perform with it. You can add resource-based permissions to Amazon S3 buckets, Amazon Glacier vaults, Amazon SNS topics, Amazon SQS queues, and AWS Key Management Service encryption keys. Unlike policies created for identities, resource-based permissions are unmanaged (inline only).
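
By contrast, here is a sketch of a resource-based policy attached to an S3 bucket (the user name and bucket name are invented for illustration):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {"AWS": "arn:aws:iam::123456789012:user/JohnDoe"},
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-example-bucket/*"
    }
  ]
}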

 

Creating customer-managed policies


When you initially create your AWS account, only one AWS-managed policy is in place: AdministratorAccess. However, after you create the first user and log in to AWS using your new administrator account, you can access a large number of AWS-managed policies.

 

Whenever possible, you should use the AWS-managed policies to ensure that the policy receives automatic updates that reflect changes in AWS functionality. When using a customer-managed policy, you must perform any required updates manually. The following steps get you started using customer-managed policies.

 

1. Sign in to AWS using your administrator account.

2. Navigate to the IAM Management Console at https://console.aws.amazon.com/iam.

3. Select Policies in the Navigation pane.

4. Click Get Started.

 

You see a list of available AWS-managed policies. Each of the policy names starts with the word Amazon (to show that it's an AWS-managed policy), followed by the service name (such as EC2), optionally followed by the target of the permission (such as ContainerRegistry), and ending with the kind of permission granted (such as Full Access).

 

When creating your own customer-managed policies, it’s good to follow the same practice to make the names easier to use and consistent with their AWS-managed counterparts. Note that the policy list tells you the number of entities attached to a policy so that you know whether a policy is actually in use.

 

You can also see the policy creation time and the time someone last edited it. The symbol on the left side of the policy shows the policy type, which is a stylized cube for AWS-managed policies. Before you create a customer-managed policy, make certain no AWS-managed policy exists that serves the same purpose. Using the AWS-managed policy is simpler, less error-prone, and provides automatic updates as needed.

 

5. Click Create Policy.

You see three options for creating the policy:

Copy an AWS Managed Policy: An AWS-managed policy acts as a starting point. You then make the changes required to customize the policy for your needs. Because this option helps ensure that you create a usable policy and requires the least work, you should use it whenever possible. This example assumes that you copy an existing AWS-managed policy because you follow this route most often.

 

Policy Generator: Relies on a wizardlike interface to either allow or deny actions against an AWS service. You can assign the permission to specific resources (in some cases) using an Amazon Resource Name (ARN), or to all resources (using an asterisk, *).

 

A policy can contain multiple permissions, each of which appears as a statement within the policy. After you define policies, the wizard shows you the policy document, which you can edit manually if desired. The wizard uses the policy document to generate the policy. This is the best option to use when you need a single policy to cover multiple services.

 

Create Your Own Policy: Defines a policy completely by hand. All you see is the policy document page, which you must fill in manually using appropriate syntax and grammar.

The discussion at http://docs.aws.amazon.com/IAM/latest/UserGuide tells you more about how to create a policy completely manually. You use this option only when necessary because the time involved in creating the document is substantial and the potential for error is high.

 

6. Click Copy an AWS Managed Policy.

7. Click Select next to the AWS-Managed policy that you want to use as the basis for your customer-managed policy.

The example uses the AmazonEC2FullAccess policy as a starting point, but the same steps apply to modify other policies. Starting with a policy that has too many rights and removing the rights you don’t want is significantly easier than starting with a policy that has too few rights and adding the rights you need.

 

Adding rights entails typing new entries into the policy document, which requires a detailed knowledge of policies. Removing rights means highlighting the statements for the rights you don't want and deleting them.

 

8. Type a new name in the Policy Name field.

The example uses MyCompanyEC2FullAccessNoCloudWatch as the policy name.

 

9. Modify the Description field as needed to define the changes made.

The example adds that the policy doesn’t allow access to CloudWatch.

 

10. Modify the Policy Document field as needed to reflect policy changes.
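
For reference, here is an abridged sketch of what the modified Policy Document might look like after you delete the cloudwatch statement from the copied AmazonEC2FullAccess policy; the live managed policy may contain more statements than shown here:

{
  "Version": "2012-10-17",
  "Statement": [
    {"Effect": "Allow", "Action": "ec2:*", "Resource": "*"},
    {"Effect": "Allow", "Action": "elasticloadbalancing:*", "Resource": "*"},
    {"Effect": "Allow", "Action": "autoscaling:*", "Resource": "*"}
  ]
}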

 

11. Click Validate Policy.

If the changes you made work as intended, you see a This Policy Is Valid success message at the top of the page. Always validate your policy before you create it.

 

12. Click Create Policy.

You see a success message on the Policies page, plus a new entry for your policy. Note that your policy doesn’t include a policy type icon. The lack of an icon makes the policy easier to find in the list.

 

Creating groups


After you create all the policies needed to manage your EC2 setup, you need to create the groups you want to use. The technique for any group you want to create is the same as the one found in the “Creating the Administrators group” section, earlier in this blog. However, you provide different names for each of your groups and assign the policies that are specific to that group.

 

Including inline policies

In general, you want to avoid using inline policies because they’re hard to manage and you must go to the individual entities, such as groups, to make any required changes.


In addition, the inline policies have a tendency to hide, making troubleshooting problems with your setup just that much harder. However, you may encounter situations in which an inline policy offers the only way to set security properly. The following steps help you create inline policies as needed. (This procedure uses an example group named EC2Users, but it works with any entity that supports inline policies.)

 

1. Select the Groups, Users, or Roles entry in the Navigation pane.

2. Open the entity you want to work with by clicking its entry in the Object Type page.

3. Select the Permissions tab of the entity’s Summary page.

4. Click Inline Policies.

If this is your first inline policy, you see a message saying “There are no inline policies to show. To create one, click here.”

 

5. Click the Click Here button.

You see a Set Permissions page containing two options:

 

Policy Generator: Displays a wizard that lets you easily create a policy for use with your entity. Among the methods for creating an inline policy, this is the easiest.

 

Custom Policy: Displays an editor in which you manually type a policy using the appropriate syntax and grammar. This is the more flexible of the two options for creating an inline policy.

 

6. Select a permission generation option and then click the Select button next to that entry.

The example assumes that you want to use the Policy Generator option. You see the Edit Permissions page. This interface enables you to allow or deny actions against a specific AWS service and, optionally, a specific resource associated with that service.

 

7. Configure the permission using the various permission entries and then click Add Statement to add the statement to the policy.

 

8. Click Next Step.

You see the Review Policy page. Because you define the policy using a series of individual permissions, you probably don’t need to edit the policy.

 

9. Click Validate Policy.

If the changes you made work as intended, you see a This Policy Is Valid success message at the top of the page. Always validate your policy before you create it.


10. Click Apply Policy.

You see the policy added to the Inline Policies area of the Permissions tab of the entity’s Summary page. In addition, the Inline Policies area now includes a button to create more policies, such as the Create Group Policy entry for groups. To interact with an existing inline policy, use the links in the Actions column of the policy list. Here’s an overview of the actions you can perform on an inline policy:

 

Show Policy: Displays the code used to create the policy.

Edit Policy: Lets you edit the code used to create the policy.

 

Remove Policy: Deletes the inline policy so that it no longer affects the entity. The deletion is final, so you must make sure that you actually want to delete the policy.

 

Simulate Policy: Demonstrates the effect of the policy on the entity. You can set up various configurations and testing criteria so that you know that the inline policy works as expected.

 

Adding users


AWS provides numerous means for automatically generating users. However, the best policy is to ensure that you correctly configure each user from the outset.

 

The procedure found in the “Creating an Administrator user” section, earlier in this blog, works best for this purpose. Of course, you use a unique name for each user and assign the user to groups as needed. You can also attach managed or inline policies to the user by using the options found on the Permissions tab of the user’s Summary page.

 

Attaching and detaching policies

Instead of attaching or detaching policies for individual entities, you can perform the task on a large scale. The Policies page contains a listing of all the policies you define for your setup. When you want to attach a policy to one or more entities, select the box next to the policy you want to use and choose Policy Actions ➪ Attach.

 

You see an Attach Policy page. Select the check box next to each of the entities that should receive the policy and then click Attach Policy to complete the action. Likewise, when you want to detach a policy from an entity, select the check box next to the policy you want to use and choose Policy Actions ➪ Detach. You see a Detach Policy page.

 

Select the check box next to each of the entities that currently have the policy set but no longer need it; then click Detach Policy.

 

Working with Elastic Block Store (EBS) Volumes


Just as there isn’t one kind of hard drive, there isn’t one kind of EBS volume. Amazon currently provides access to both Solid-State Drive (SSD) and Hard Disk Drive (HDD) volumes. SSD provides high-speed access, while HDD provides lower-cost access of a more traditional hard drive. Amazon further subdivides the two technologies into two types each (listed in order of speed):

 

EBS Provisioned IOPS SSD: Provides high-speed data access that you commonly need for data-intensive applications that rely on moderately-sized databases.

 

EBS General Purpose SSD: Creates a medium-high-speed environment for low-latency applications. Amazon suggests this kind of volume for your boot drive. However, whether you actually need this amount of speed for your setup depends on the kinds of applications you plan to run.

 

Throughput Optimized HDD: Defines a high-speed hard drive environment, which can’t compete with even a standard SSD. However, this volume type will work with most common applications and Amazon suggests using it for big data or data warehouse applications. This is probably the best option to choose when money is an issue and you don’t really need the performance that SSD provides.

 

Cold HDD: Provides the lowest-speed option that Amazon supports. You use this volume type for data you access less often than data you place on the other volume types (think data you use once a week, rather than once every day). This isn’t an archive option; it’s more like a low-speed option for items you don’t need constantly, such as a picture database. As you move toward higher-speed products, you also pay a higher price.

 

For example, at the time of writing, a Cold HDD volume costs only $0.025/GB/month, but an EBS Provisioned IOPS SSD volume costs $0.125/GB/month. You can find price and speed comparison details at http://aws.amazon.com/ebs/details/#piops. The table provided contains some interesting statistics. For example, all the volume types top out at 16TB and support a maximum throughput per instance of 800MB/s.
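
To make the difference concrete, here is a quick back-of-the-envelope calculation at those rates for a 500GB volume (ignoring the separate per-IOPS charge that applies to provisioned volumes):

500GB x $0.025/GB/month = $12.50/month (Cold HDD)
500GB x $0.125/GB/month = $62.50/month (EBS Provisioned IOPS SSD)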

 

Creating an EBS volume


Before you can use an EBS volume, you must create it. As discussed in previous sections, EBS volumes can take on many different characteristics. The following steps describe how to create a simple volume that you can use with EC2 for the procedures in this blog. However, you can use these same steps for creating volumes with other characteristics later.

 

1. Sign into AWS using your administrator account.

 

2. Navigate to the EC2 Console at https://console.aws.amazon.com/ec2/.

 Notice the Navigation pane on the left, which contains options for performing various EC2-related tasks. The Resources area of the main pane tells you the statistics for your EC2 setup, which currently includes just the one security group.

 

3. Choose an EC2 setup region from the Region drop-down list at the top of the page.

 

4. Select Volumes in the Navigation pane.

The EC2 Console shows that you don’t currently have any volumes defined.

 

5. Click Create Volume.


You see the Create Volume dialog box. Notice that you can choose a volume type and size, but not the Input/Output Operations Per Second (IOPS) or the throughput, which are available only with certain volume types. The Availability Zone field contains the location of the storage, which must match your EC2 setup.

 

The Snapshot ID field contains the name of an S3 storage location to use for incremental backups of your EBS data. You can also choose to encrypt sensitive data, but doing so places some limits on how you can use EBS. For example, you can’t use encryption with all EC2 instance types.

 

6. Click Create.

AWS creates a new volume for you and displays statistics about it. The new volume lacks any sort of backup. The next step configures a snapshot that AWS uses to perform incremental backups of the EBS data, reducing the risk of lost data.

 

7. Choose Actions ➪ Create Snapshot.

You see the Create Snapshot dialog box. Notice that AWS fills in the Volume field for you and determines the need for encryption based on the volume settings.

 

8. Type EBS.Backup in the Name field, type Test Backup in the Description field, and then click Create.

You see a dialog box telling you that AWS has started the snapshot.

 

9. Click Close. The volume is ready to use.

When you finish this example, you can delete the volume you created by selecting its entry in the list and choosing Actions ➪ Delete Volume. You automatically create the EBS volume required for your EC2 setup in the “Creating an instance” section, later in this blog. However, in a real-world setup, you can attach this volume to an EC2 instance or detach it when it's no longer needed.
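
If you prefer the API route, the same create/snapshot/attach cycle looks roughly like the following AWS CLI sketch; the volume and instance IDs are placeholders for the values AWS returns when you create the resources:

aws ec2 create-volume --volume-type gp2 --size 100 --availability-zone us-east-1a
aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 --description "Test Backup"
aws ec2 attach-volume --volume-id vol-0123456789abcdef0 --instance-id i-0123456789abcdef0 --device /dev/sdf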

 

Discovering Images and Instances


An Amazon Machine Image (AMI) is a kind of blueprint. You tell AWS to use an AMI to build an instance. As with blueprints, AWS can use a single AMI to build as many instances as needed for a particular purpose. Every instance created with the AMI is precisely the same from the administrator’s perspective.

 

AMI is one of the EC2 features that enable you to autoscale. Amazon uses the AMI to create more instances as needed so that your application continues to run no matter how many people may want to access it. You don’t have to create an AMI; Amazon provides default AMIs that you can use.

 

However, if you want to create a custom environment to use with EC2, you need to create your own AMI. This blog assumes that you use one of the default AMIs. The following sections show the easiest method for creating an instance using one of Amazon’s AMIs. Before you can use an AMI, you must first create and configure it as described at http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html.

 

In addition, you must register the AMI with AWS. After you’ve registered it, you can tell AWS to create (launch) instances based on the AMI. A user or application connects to an instance in order to use it, and an administrator can configure an instance to meet particular requirements.

 

You can copy your AMI between regions to help improve overall application speed if you want. To stop using an AMI, you deregister it. With these tasks in mind, the following sections describe how to use the Amazon Management Console to work with EC2 images and instances.

 

Generating security keys


To access your EC2 instance securely, you need to generate security keys. These keys enable you to be verified as the person logging on to the EC2 instance. The following steps help you create a key pair that you need when creating your instance in the next section:

 

1. Select Key Pairs in the Navigation pane. AWS tells you that you don’t have any key pairs defined.

2. Click Create Key Pair. You see a Create Key Pair dialog box that asks for a key pair name.

3. Type MyKeyPair in the Key Pair Name field and click Create.

You see a download dialog box for the browser that you use. Be sure to save a copy of the key pair as a Privacy-Enhanced Mail (.pem) file. The article at http://fileformats.archiveteam.org/wiki/PEM tells you more about this particular file format.

4. Save the .pem file to disk. The Key Pairs page now shows a key pair named MyKeyPair with all the pertinent information.
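
As an alternative, here is a sketch of creating the same key pair with the AWS CLI and restricting the file permissions so that SSH clients accept it:

aws ec2 create-key-pair --key-name MyKeyPair --query 'KeyMaterial' --output text > MyKeyPair.pem
chmod 400 MyKeyPair.pem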

 

Creating an instance


The process for creating an EC2 instance can become quite complex. You can manually create key pairs used to log in to the instance, for example, or create a special security group to help maintain EC2 security.

 

In addition, you can use a custom AMI to configure your instance. The problem is that all these extra steps make what should be a relatively simple process for experimentation purposes quite difficult. With this in mind, the following steps show the easiest, fastest method for creating an EC2 instance.

 

However, keep in mind that you can do a lot more with EC2 setups than described in this blog. This procedure assumes that you have already logged in and selected the same region used for your EBS volume.

1. Select Instances in the Navigation pane. AWS tells you that you don’t have any EC2 instances running.

 

2. Click Launch Instance.

You see a series of AMI entries. Amazon owns all these AMIs. You can also choose to use your own AMI or obtain access to an AMI through the AWS Marketplace or Community. To experiment with EC2 free of charge, you must select one of the Free Tier Eligible entries, which include Amazon Linux, Red Hat Linux, SUSE Linux, Ubuntu Linux, and Windows Server (all in various versions). To ensure that you don't accidentally choose a paid option, select the Free Tier Only check box on the left side of the page.

 

3. Click Select next to the Amazon Linux AMI 2016 entry.

You see a listing of instance types. One of the instance types is marked Free Tier Eligible. You must choose this option unless you want to pay for your EC2 instance. Choosing to configure the instance details or change storage requirements will create a new instance type.

 

The new instance type won’t be free-tier eligible. You can view the various configuration options available, but click Cancel instead of creating the instance if you want to continue working with AWS free of charge.

 

4. Select the instance type that you want to create and then click Review and Launch. You see the Step 6: Configure Security Group page.

5. Click Edit Security Groups.

6. Type Default-Launch in the Security Group Name field.

Use a group name that’s both short and meaningful to avoid potential confusion later.

7. (Optional) Type a group description in the Description field.

8. Choose All Traffic in the Type field.

Using this option gives you maximum EC2 access. However, in a real-world setup, you limit the Type field entries to just the protocols you actually plan to use. For example, if you don’t plan to use Secure Shell (SSH) to interact with EC2, don’t include it in the list of allowed protocols.

 

9. Choose My IP in the Source field.

By limiting the access to just your IP, you reduce the likelihood that anyone will access the EC2 setup. However, intruders can find all sorts of ways around this precaution, such as by using IP spoofing.

 

10. Click Add Rule.

AWS adds the rule to the list. In some cases, AWS automatically generates an extra rule; click the X next to that rule to remove it because you don't need it.

 

11. Click Review and Launch.

The EC2 Management console takes you back to the Step 7: Review Instance Launch page.

12. Click Launch. You see a Select an Existing Key Pair or Create a New Key Pair dialog box.

13. Select Choose an Existing Key Pair in the first field.

14. Select MyKeyPair in the second field.

15. Select the checkbox to acknowledge that you have access to the private key and then click Launch Instances.
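
For completeness, a rough CLI equivalent of this launch might look as follows; the AMI ID is a placeholder, so look up the current free-tier-eligible Amazon Linux AMI ID for your region:

aws ec2 run-instances --image-id ami-0123456789abcdef0 --instance-type t2.micro --key-name MyKeyPair --security-groups Default-Launch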

 

A Different Kind of Workload


EC2 quickly became an extremely popular option for companies looking to run websites, manage databases, and balance traffic. As AWS grew as a business, so did the number of websites powered by its EC2 platform. Huge names like Netflix and Instagram are, or were for some time, running a majority of their sites and operations on AWS-powered services. Hundreds of thousands of domains can be resolved to an EC2 host.

 

At some point around late 2012 or early 2013, developers and managers at Amazon realized that, while the bulk of EC2 usage was web-related, another entire category of usage existed: event-driven processing.

 

Many companies were turning to EC2 to react to events in other parts of the infrastructure: images saved by users of a web service would trigger EC2 instances to automatically generate thumbnails; new log files being delivered would cause EC2 instances to process and make sense of the data; database updates launched EC2 instances to process these changes and apply updates elsewhere. Essentially, EC2 was being used in a chain of events that could contain many moving parts, most utilizing additional AWS services.

 

To respond to this demand, AWS announced another product during the keynote of its 2014 re:Invent conference in Las Vegas: AWS Lambda. According to Amazon, “AWS Lambda is a compute service that runs your code in response to events and automatically manages the compute resources for you, making it easy to build applications that respond quickly to new information.”

 

The announcement of Lambda was a huge game changer for many organizations that have traditionally relied on EC2 to process events, resize images, respond to database updates, or react to user clicks or website updates.

 

Under the EC2 model, a developer and/or operations staff must: design code, deploy resources, configure event hooks, manage those resources, implement monitoring, respond to downtime, plan for scale, and react to surges in traffic with proper scaling policies. With Lambda, a developer only needs to: design code, configure a few options, and release the application.

 

Despite these advantages, Lambda is not yet designed for workloads that are not primarily event-driven. For example, Lambda functions cannot respond directly to HTTP requests (unless paired with AWS API Gateway, another recently-announced service), or run long-running processes in the background.

 

Given their low timeouts (currently 300 seconds) and moderate memory allowances (up to 1.5 GB at the time of writing this blog), Lambda functions are not designed to completely replace EC2; instead, they can replace some of the functionality for specific services and support an entirely new form of computing in the cloud.

 

Introducing EC2 Systems Manager


As the name suggests, EC2 Systems Manager is a management service that provides administrators and end users with the ability to perform a rich set of tasks on their EC2 instance fleet, such as periodically patching the instances with a predefined set of baseline patches, tracking the instances' configurational state and ensuring that each instance stays compliant with a state template, running scripts and commands over your instance fleet with a single utility, and much, much more!

 

The EC2 Systems Manager is also specifically designed to help administrators manage hybrid computing environments, all from the comfort and ease of the EC2 Systems Manager dashboard. This makes it super efficient and cost effective, as it doesn't require a specialized set of software or third-party services, which cost a fortune, to manage your hybrid environments!

 

But how does AWS achieve all of this in the first place? Well, it all begins with the concept of managed instances. A managed instance is a special EC2 instance that is governed and managed by the EC2 Systems Manager service. Each managed instance contains a Systems Manager (SSM) agent that is responsible for communicating and configuring the instance state back to the Systems Manager utility.

 

Windows Server 2003–2012 R2 AMIs automatically have the SSM agent installed. For Linux instances, however, the SSM agent is not installed by default. Let's quickly look at how to install this agent and set up our first Dev instance in AWS as a managed instance.

 

Getting started with the SSM agent


In this section, we are going to install and configure an SSM agent on a new Linux instance, which we shall call a Dev instance, and then verify that it's working by streaming the agent's log files to Amazon CloudWatch Logs. So let's get busy!

 

Configuring IAM Roles and policies for SSM

First, we need to create and configure IAM Roles for our EC2 Systems Manager to process and execute commands over our EC2 instances. You can either use the Systems Manager's managed policies or alternatively create your own custom roles with specific permissions. For this part, we will be creating a custom role and policy.

 

To get started, we first create a custom IAM policy for Systems Manager managed instances:

Log in to your AWS account and select the IAM option from the main dashboard, or alternatively, open the IAM console at http://console.aws.amazon.com/iam/.

Next, from the navigation pane, select Policies. This will bring up a list of existing policies currently provided and supported by AWS out of the box.

 

Type SSM in the Policy Filter to view the list of policies currently provided for SSM.

 

Select the AmazonEC2RoleforSSM policy and copy its contents to form a new policy document. Here is an abridged snippet of the policy document for your reference:
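
(This abridged reconstruction keeps only the core statements; treat the live policy in the console as authoritative, because AWS updates its managed policies over time.)

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ssm:DescribeAssociation",
        "ssm:GetDocument",
        "ssm:ListAssociations",
        "ssm:UpdateAssociationStatus",
        "ssm:UpdateInstanceInformation"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "ec2messages:AcknowledgeMessage",
        "ec2messages:GetEndpoint",
        "ec2messages:GetMessages",
        "ec2messages:SendReply"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "cloudwatch:PutMetricData",
        "ec2:DescribeInstanceStatus",
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "*"
    }
  ]
}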

Once the policy is copied, go back to the Policies dashboard and click on the Create policy option. In the Create policy wizard, select the Create Your Own Policy option.

 

Provide a suitable Policy Name and paste the copied contents of the AmazonEC2RoleforSSM policy into the Policy Document section. You can now tweak the policy as per your requirements, but once completed, remember to select the Validate Policy option to ensure the policy is semantically correct.

 

Once completed, select Create Policy to complete the process. With this step completed, you now have a custom IAM policy for System Manager managed instances.

 

The next important policy that we need to create is the custom IAM user policy for our Systems Manager. This policy will essentially scope out which particular user can view the System Manager documents as well as perform actions on the selected managed instances using the System Manager's APIs:

 

Once again, log in to your AWS IAM dashboard and select the Policies option as performed in the earlier steps.

 

Type SSM again in the Policy Filter and select the AmazonSSMFullAccess policy. Copy its contents and create a custom SSM access policy by pasting the following snippet in the new policy's Policy Document section:
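
(Again, this is an abridged reconstruction of the AmazonSSMFullAccess contents; compare it against the live policy in the console before relying on it.)

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "cloudwatch:PutMetricData",
        "ds:CreateComputer",
        "ds:DescribeDirectories",
        "ec2:DescribeInstanceStatus",
        "logs:*",
        "ssm:*",
        "ec2messages:*"
      ],
      "Resource": "*"
    }
  ]
}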

 

Remember to validate the policy before completing the creation process. You should now have two custom policies.

 

With the policies created, we now simply create a new instance profile role, attach the full access policy to the new role, and finally verify the trust relationship between Systems Manager and the newly created role:

 

To create a new role, from the IAM management dashboard, select the Roles option from the navigation pane.

 

In the Create Role wizard, select the EC2 option from the AWS service role type. Next, select the EC2 option as the use case for this activity and click on the Next: Permissions button to continue.

 

In the Attach permissions policy page, filter and select the ssm-managedInstances policy that we created at the beginning of this exercise. Click on Review once done. Finally, provide a suitable Role name in the Review page and click on Create role to complete the procedure! With the role in place, we now need to verify that the IAM policy for your instance profile role includes ssm.amazonaws.com as a trusted entity:

 

To verify this, select the newly created role from the IAM Roles page and click on the Trust relationships tab.

 

Here, choose the Edit Trust Relationship option and paste the following snippet in the policy editor. Remember to add both EC2 and SSM as the trusted services and not just one of them:
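
(A minimal sketch of the resulting trust policy with both service principals listed:)

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": ["ec2.amazonaws.com", "ssm.amazonaws.com"]
      },
      "Action": "sts:AssumeRole"
    }
  ]
}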

With the new trust policy in place, click on Update Trust Policy to complete the process. Congratulations!

 

You are almost done with configuring the Systems Manager! A final step remains, where we need to attach the second policy that we created (SSM full access) to one of our IAM users. In this case, I've attached the policy to one of my existing users in my AWS environment; however, you can always create a completely new user dedicated to the Systems Manager and assign it the SSM access policy as well. With the policies out of the way, we can now proceed with the installation and configuration of the SSM agent on our simple Dev instance.

 

Installing the SSM agent


As discussed at the beginning of the blog, the Systems Manager or the SSM agent is a vital piece of software that needs to be installed and configured on your EC2 instances in order for Systems Manager to manage it. At the time of writing, the SSM agent is supported on the following sets of operating systems:

  • Windows:
      • Windows Server 2003 (including R2)
      • Windows Server 2008 (including R2)
      • Windows Server 2012 (including R2)
      • Windows Server 2016
  • Linux (64-bit and 32-bit):
      • Amazon Linux 2014.09, 2014.03 or later
      • Ubuntu Server 16.04 LTS, 14.04 LTS, or 12.04 LTS
      • Red Hat Enterprise Linux (RHEL) 6.5 or later
      • CentOS 6.3 or later
  • Linux (64-bit only):
      • Amazon Linux 2015.09, 2015.03 or later
      • Red Hat Enterprise Linux 7.x or later
      • CentOS 7.1 or later

The user data script varies from OS to OS. In my case, the script is intended to run on an Ubuntu Server 14.04 LTS (HVM) instance. You can check your SSM agent install script at http://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-install-ssm-agent.html#sysman-install-startup-linux.
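
As a rough sketch, and assuming Ubuntu Server 14.04 LTS (the download URL and service commands differ for other operating systems, so verify them against the documentation above), the user data script looks something like this:

#!/bin/bash
# Create a working directory, fetch the Debian package, and install it.
mkdir /tmp/ssm
cd /tmp/ssm
wget https://s3.amazonaws.com/ec2-downloads-windows/SSMAgent/latest/debian_amd64/amazon-ssm-agent.deb
dpkg -i amazon-ssm-agent.deb
# Start the agent via upstart (the init system on Ubuntu 14.04).
start amazon-ssm-agent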

 

Once the instance is up and running, SSH into the instance and verify whether your SSM agent is up and running or not using the following command. Remember, the following command will also vary based on the operating system that you select at launch time:

 

# sudo status amazon-ssm-agent

You can, optionally, even install the agent on an existing running EC2 instance by completing the following set of commands.

 

For an instance running on the Ubuntu 16.04 LTS operating system, we first create a temporary directory to house the SSM agent installer:

# mkdir /tmp/ssm

 

Next, download the operating-system-specific SSM agent installer using the wget utility:

wget https://s3.amazonaws.com/ec2-downloads-windows/SSMAgent/latest/debian_amd64/amazon-ssm-agent.deb

 

Finally, execute the installer using the following command:

# sudo dpkg -i amazon-ssm-agent.deb

 

You can additionally verify the agent's execution by tailing either of these log files as well:

sudo tail -f /var/log/amazon/ssm/amazon-ssm-agent.log

sudo tail -f /var/log/amazon/ssm/errors.log

 

Configuring the SSM agent to stream logs to CloudWatch


This is a particularly useful option provided by the SSM agent, especially when you don't want to log in to each and every instance to troubleshoot issues. Integrating the SSM agent's logs with CloudWatch enables you to have all your logs captured and analyzed at one central location, which not only ends up saving a lot of time but also brings additional benefits, such as the ability to configure alarms, view the various metrics using the CloudWatch dashboard, and retain the logs for a much longer duration.

 

But before we get to configuring the agent, we first need to create a separate log group within CloudWatch that will stream the agent logs from individual instances here:

 

1. To do so, from the AWS Management Console, select the CloudWatch option, or alternatively, open your CloudWatch dashboard at https://console.aws.amazon.com/cloudwatch/.

 

2. Next, select the Logs option from the navigation pane. Here, click on Create log group and provide a suitable name for your log group.

3. Once completed, SSH back into your Dev instance and run the following command:

sudo cp /etc/amazon/ssm/seelog.xml.template /etc/amazon/ssm/seelog.xml

 

4. Next, using your favorite editor, open the newly copied file and paste the following content in it. Remember to swap out the <CLOUDWATCH_LOG_GROUP_NAME> field with the name of your own log group:
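
(The following is an abridged sketch of the seelog configuration; the essential addition is the custom cloudwatch_receiver output element, and your copied template may contain additional outputs and formats that you should leave in place.)

<seelog type="adaptive" mininterval="2000000" maxinterval="100000000" critmsgcount="500" minlevel="info">
    <outputs formatid="fmtinfo">
        <console formatid="fmtinfo"/>
        <rollingfile type="size" filename="/var/log/amazon/ssm/amazon-ssm-agent.log" maxsize="30000000" maxrolls="5"/>
        <custom name="cloudwatch_receiver" formatid="fmtdebug" data-log-group="<CLOUDWATCH_LOG_GROUP_NAME>"/>
    </outputs>
    <formats>
        <format id="fmtinfo" format="%Date %Time %LEVEL %Msg%n"/>
        <format id="fmtdebug" format="%Date %Time %LEVEL [%FuncShort @ %File.%Line] %Msg%n"/>
    </formats>
</seelog>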

 

With the changes made, save and exit the editor. Now have a look at your newly created log group using the CloudWatch dashboard; you should see your SSM agent's error logs, if any, displayed there for easy troubleshooting.

 

With this step completed, we have now successfully installed and configured our EC2 instance as a Managed Instance in Systems Manager. To verify whether your instance has indeed been added, select the Managed Instance option provided under the Systems Manager Shared Resources section from the navigation pane of your EC2 dashboard; you should see your instance listed there.

 

In the next section, we will deep dive into the various features provided as a part of the Systems Manager, starting off with one of the most widely used: Run Command!

 

Introducing Run Command


Run Command is an awesome feature of Systems Manager, which basically allows you to execute remote commands over your managed fleet of EC2 instances. You can perform a vast variety of automated administrative tasks, such as installing software or patching your operating systems, executing shell commands, managing local groups and users, and much more! But that's not all!

 

The best part of using this feature is that it allows you to have a seamless experience when executing scripts, even over your on-premises Windows and Linux operating systems, whether they be running on VMware ESXi, Microsoft Hyper-V, or any other platforms. And the cost of all this? Well, it's absolutely free! You only pay for the EC2 instances and other AWS resources that you create and nothing more!

 

Here's a brief list of a few commonly predefined commands provided by Run Command along with a short description:

  • AWS-RunShellScript: Executes shell scripts on remote Linux instances
  • AWS-UpdateSSMAgent: Used to update the Amazon SSM agent
  • AWS-JoinDirectoryServiceDomain: Used to join an instance to an AWS Directory Service directory

  • AWS-RunPowerShellScript: Executes PowerShell commands or scripts on Windows instances
  • AWS-UpdateEC2Config: Runs an update to the EC2Config service
  • AWS-ConfigureWindowsUpdate: Used to configure Windows Update settings
  • AWS-InstallApplication: Used to install, repair, or uninstall software on a Windows instance using an MSI package
  • AWS-ConfigureCloudWatch: Configures Amazon CloudWatch Logs to monitor applications and systems

 

Before we proceed with the actual execution of the Run Commands, it is important to remember that the Run Command requires both the SSM agent as well as the right set of permissions and roles to work with. So if you haven't performed the SSM agent's installation or the setup of the IAM policies and roles, then now would be a good time to revisit this!

 

In this section, let's look at a simple way of executing a simple set of commands for our newly added managed instance:

 

To begin with, first, log in to the AWS Management Console and select the EC2 service from the main dashboard. Alternatively, you can even launch the EC2 dashboard via https://console.aws.amazon.com/ec2/.

 

Next, from the navigation pane, select the Run Command option from the Systems Manager Services section. You will be taken to the Run Command dashboard where you will need to select the Run a command option to get started.

 

In the Run a command page, the first thing we need to do is select a Command document that we can work with. A command document is basically a statement or set of information about the command you want to run on your managed instances. For this scenario, we will select the AWS-RunShellScript command document to start with.

 

In the next Select Targets by section, you can optionally choose whether you wish to execute your command document manually by selecting individual instances or specify a particular group of instances identified by their tag name.

 

The Execute on criteria provides you with the option to select either the Targets or Percent of instances you wish to execute the command document on. Selecting Targets allows you to specify the exact number of instances that should be allowed to execute the command document. The execution occurs on each instance one at a time. Alternatively, if you select the Percent option, then you can provide a percentage value of the instances that should be allowed to run the command at a single time.

 

You can optionally set the Stop after x errors to halt the execution of your command document in case an instance encounters an error.

 

Finally, you can paste your execution code or shell script in the Commands section. In this case, we are running a simple script that will install and configure a Zabbix monitoring agent on our Dev instance for easy monitoring of our EC2 resources:
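
(A sketch of what such a script might look like, assuming an Ubuntu-based Dev instance and a hypothetical Zabbix server address; adjust both for your environment.)

#!/bin/bash
# Install the Zabbix agent and point it at the monitoring server.
apt-get update
apt-get install -y zabbix-agent
sed -i 's/^Server=.*/Server=203.0.113.10/' /etc/zabbix/zabbix_agentd.conf
service zabbix-agent restart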

 

The rest of the options provide other configurational items such as setting up an optional working directory where the commands get executed on the remotely managed instances.

 

Additionally, you can even choose to Enable SNS notifications as well as write your command output logs to S3 using the Advanced Options section.

 

Once the configuration items are filled in, simply select the Run option to start the execution of your command document. During this time, Systems Manager will invoke the execution of your supplied commands over the list of managed instances that you provided. If there is an error during the execution, Systems Manager will halt the execution and display the Status of your output as either Success or Failed.

 

Simple, isn't it? You can use this same mechanism to manage and execute commands remotely over your fleet of EC2 instances with ease and consistency, and even leverage the AWS CLI to perform the same set of actions we have explored in this section.
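
For example, here is a minimal sketch of the CLI equivalent, assuming your managed instances carry a hypothetical Environment=Dev tag:

aws ssm send-command --document-name "AWS-RunShellScript" --targets "Key=tag:Environment,Values=Dev" --parameters commands="uptime"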

 

In the next section, we will be learning a bit about yet another really useful feature provided by Systems Manager: State Manager.

 

Working with State Manager


State Manager is a powerful tool that helps to govern and manage the configuration of a managed system. For example, by using State Manager you can enforce a particular firewall rule for your fleet of managed instances and set that as the required State that needs to be enforced at all times.

 

If the rules change outside of State Manager, it will automatically revert to match the required state's configuration, thus maintaining compliance and enforcing standardization over your environment.

 

Working with the State Manager is quite simple and straightforward. You start off by selecting a state document (JSON based) that specifies the settings you need to configure or maintain your EC2 instances.

 

These documents come predefined and you can create customized versions of them. With the document created, you can then select the individual managed instances, which can be either EC2 instances or even on-premises virtual machines, as well as specify a schedule for when and how often you wish to apply these states. It's that simple!

 

But before we go ahead and invoke State Manager, let's first understand the concept of state documents a bit better, as these documents are the foundation on which Systems Manager operates.

 

State documents are nothing more than simple JSON-based steps and parameters that define certain actions to be performed by Systems Manager. AWS provides dozens of such documents out of the box, which can be used to perform a variety of tasks such as patching your instances, configuring certain packages, configuring the CloudWatch Log agents, and much more!

 

Additionally, you can even create your own custom document as well! There are three types of documents that are supported by Systems Manager:

 

Command: Command documents are leveraged by the Run Command to execute commands on your managed instances. State Manager also uses command documents to apply certain policies. These actions can be run on one or more targets at any point during the life cycle of an instance.

 

Policy: Used by State Manager, policy documents enforce a policy on your managed instances.

 

Automation: These documents are more often used by the automation service within the Systems Manager to perform common maintenance and deployment tasks. We will be learning more about automation documents a bit later in this blog.


To view Systems Manager's predefined documents, from the EC2 dashboard navigation pane, select the Documents option under the Systems Manager Shared Resources section. Here you can use any of the predefined documents as per your requirements for State Manager; however, let's quickly create a very simple custom document based on the aws:configurePackage definition:

 

1. To create your own document, select the Create Document option from the Documents dashboard as shown here:

 

2. In the Create Document wizard, start off by providing a suitable Name for your document. In this case, I've provided the name yoyodev-ssm-configure-packages. Do note that the name cannot contain any spaces. Next, from the Document Type dropdown, select Command as the option type and paste the JSON code in the Content section.

 

With the document pasted, you can now click on Create Document to complete the document creation process.

The document comprises two primary sections: a parameters section, which defines the inputs the document accepts from the user, followed by a mainSteps section that specifies the action to be performed by the document, which in this case is aws:configurePackage.

 

In this case, when invoked, the document will ask the user to select either apache2, mysql-server, or php from the dropdown list, followed by an optional version number of the software you select. You can then select whether you wish to install or uninstall this particular package from your fleet of managed EC2 instances and simply execute the document when done! A rough sketch of such a document follows.
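The exact JSON isn't reproduced here, so treat the following as a hedged sketch of what such a command document could look like; the schema version, parameter names, and file name are assumptions based on the behavior described above:

# Write the command document to a local file, then register it with SSM.
cat > configure-packages.json <<'EOF'
{
  "schemaVersion": "2.2",
  "description": "Install or uninstall a package on managed instances",
  "parameters": {
    "name": {
      "type": "String",
      "description": "The package to manage",
      "allowedValues": ["apache2", "mysql-server", "php"]
    },
    "action": {
      "type": "String",
      "description": "Install or uninstall the package",
      "allowedValues": ["Install", "Uninstall"]
    },
    "version": {
      "type": "String",
      "description": "(Optional) A specific version of the package",
      "default": ""
    }
  },
  "mainSteps": [
    {
      "action": "aws:configurePackage",
      "name": "configurePackage",
      "inputs": {
        "name": "{{ name }}",
        "action": "{{ action }}",
        "version": "{{ version }}"
      }
    }
  ]
}
EOF

aws ssm create-document \
    --name "yoyodev-ssm-configure-packages" \
    --document-type "Command" \
    --content file://configure-packages.json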

 

Now that your custom document is created, let's quickly configure the State Manager to invoke it:

From the Systems Manager Services section in the EC2 navigation pane, select State Manager. In the State Manager dashboard, select the Create Association option to get started with configuring State Manager.

 

Provide a suitable Association Name for your association. Note that this is an optional field and you can skip it if you want.

Next, from the Select Document section, filter and select the custom document that we created in our earlier step. On selection, you will notice the subfields change according to what we provided as parameters in the document. Let's quickly configure this and create our association.

 

In the Targets section, select your Dev instance or any of your managed instances that you wish to associate with this State Manager association. Finally, go ahead and configure the Schedule that will trigger the association based on either a cron or a rate expression.

 

Last but not least, configure the Action and select the appropriate package Name from the Parameters section. You can optionally enable the Write to S3 checkbox to log the State Manager's execution output to your own custom S3 bucket. For this scenario, I have not selected this option.

 

Finally, complete the State Manager's association process by selecting the Create Association option. You can now view and modify your associations using the State Manager dashboard. Alternatively, you can even choose to enable your association immediately by selecting the Apply Association Now option as well.
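As a hedged CLI equivalent (the instance ID, parameter values, and schedule below are placeholders), the same association could be created like so:

# Associate the custom document with an instance and reapply it
# every 30 minutes to keep the package state enforced.
aws ssm create-association \
    --name "yoyodev-ssm-configure-packages" \
    --targets "Key=InstanceIds,Values=i-0123456789abcdef0" \
    --parameters '{"name":["apache2"],"action":["Install"]}' \
    --schedule-expression "rate(30 minutes)"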

 

In the next section, we will be looking at yet another simple and easy-to-use feature provided by Systems Manager that helps automate simple instance and deployment tasks, called Systems Manager Automation!

 

Simplifying instance maintenance using Systems Manager Automation


Systems Manager Automation is a managed service that provides a single, centralized interface for executing and monitoring common instance management tasks such as patching, performing backups, executing scripts, and much more. Let's get started by understanding a few prerequisites that must be configured for automation to work in your environment.

 

Working with automation documents


As discussed briefly during the introduction to the State Manager service, automation documents are simple JSON-based documents that are designed to help you get started with the automation service quickly and efficiently. You can leverage the predefined automation documents or, alternatively, create your own set. In this section, we will look at how to leverage an existing automation document to patch your Dev EC2 instance and create a new AMI from it:

 

From the EC2 Management Console, select the Documents option from the Systems Manager Shared Resources section. Using the Documents dashboard, you can filter and view only the documents that have Automation set as the Document Type.

 

Select AWS-UpdateLinuxAmi and click on the Content tab to view the automation document.

 

The AWS-UpdateLinuxAmi document comprises five distinct steps, each explained briefly here:

 

launchInstance: This step launches a new EC2 instance using your Systems Manager IAM instance profile, along with a user data script that installs the latest copy of the SSM agent on the instance. The SSM agent is vital, as it enables the subsequent steps to be executed using the Run Command as well as State Manager.

 

updateOSSoftware: With the instance launched and the SSM agent installed, the next step is responsible for updating the packages on your Linux instance. This is done by executing an update script that methodically updates the packages and any other software that may be marked for upgrade.

 

You also get the capability to include or exclude a particular set of packages from this step using the IncludePackages and ExcludePackages parameters respectively. If no packages are included, the program updates all available packages on the instance.

 

stopInstance: Once the instance is updated with the latest set of packages, the next action simply powers off the instance so that it can be prepped for the image creation process.

 

createImage: This step creates a new AMI from your updated Linux instance. The image is given a descriptive name that links it to the source AMI ID and the creation time of the image.

 

terminateInstance: The final step in the automation document, this step cleans up the execution by terminating the running Linux instance. Let's now look at how to invoke this particular automation document manually using the automation dashboard.

 

Patching instances using automation


In this section, we will be manually invoking the AWS-UpdateLinuxAmi automation document for patching our Linux instance and later creating a new AMI out of it:

 

To do this, first select the Automation option present under the Systems Manager Services section. From the Automation dashboard, select the Run automation document option.

 

From the Document name field, select the AWS-UpdateLinuxAmi document and populate the required fields in the Input parameters section as described here (a CLI equivalent follows this list):

 

1. SourceAmiId: Provide the source Amazon Machine Image ID from which the new instance will be deployed.

 

2. InstanceIamRole: Provide the IAM role name that enables Systems Manager to manage the instance. We created this role earlier during the start of this blog as a part of SSM's prerequisites.

 

3. AutomationAssumeRole: Provide the ARN of the IAM role that allows automation to perform the actions on your behalf.

 

4. TargetAmiName: This will be the name of the new AMI created as a part of this automation document. The default is a system-generated string including the source AMI ID and the creation time and date.

 

5. InstanceType: Specify the type of instance to launch for the AMI creation process. By default, the t2.micro instance type is selected.

6. PreUpdateScript: You can additionally provide the URL of a script to run before any updates are applied. This is an optional field.

7. PostUpdateScript: Optionally, provide the URL of a script to run after package updates are applied.

8. IncludePackages: Include specific packages to be updated. By default, all available updates are applied.

9. ExcludePackages: Provide names of specific packages that you wish to exclude from the updates list.
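With these parameters in mind, here is a hedged CLI equivalent of the same execution; the AMI ID, role name, and role ARN are placeholders for your own values:

# Kick off the AWS-UpdateLinuxAmi automation document from the CLI.
aws ssm start-automation-execution \
    --document-name "AWS-UpdateLinuxAmi" \
    --parameters "SourceAmiId=ami-0123456789abcdef0,InstanceIamRole=yoyodev-ssm-role,AutomationAssumeRole=arn:aws:iam::123456789012:role/yoyodev-automation-role"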

 

The automation document takes a couple of minutes to completely execute. You can verify the output of the execution using the Automation dashboard as well.

 

Simply select your automation job Execution ID to view the progress of each individual step as shown in the following screenshot. Optionally, you can verify the output of each step by selecting the adjoining View Outputs link as well:

 

With this completed, you can now run similar automation tasks by creating your own automation documents and executing them using the steps mentioned herein. But what if you wanted to trigger these steps based on some events or schedules? Well, that's exactly what we will look into in the next section, Triggering automation using CloudWatch schedules and events.

 

Triggering automation using CloudWatch schedules and events

Although you can trigger automation documents manually, it's far better to schedule or automate the execution of automation jobs using CloudWatch schedules and events.

 

Let's first understand how you can leverage CloudWatch events to trigger simple notifications of Systems Manager Automation events. These events can be used to notify you of whether your automation task succeeded, failed, or simply timed out:

 

First, log in to the CloudWatch dashboard at https://console.aws.amazon.com/cloudwatch/.

 

Next, from the navigation pane, select the Events option and create a new rule to bring up the Create rule page. Here, select Event Pattern from the Event Source section.

 

With this done, we now need to build our event source. To do so, from the Service Name drop-down list, search and select the option EC2 Simple Systems Manager (SSM), as shown here:

 

With the service selected, you can now choose a corresponding SSM Event Type as well; for example, in this case I wish to be notified when a particular Automation task fails, so in the Event Type drop-down list, I've selected the Automation option. You can alternatively select other SSM services as well.

Next, in the detail type section, I've opted for the EC2 Automation Execution Status-change Notification option and selected Failed as the Specific status(es) for my event.

 

This means that if and when a Failed status event is generated as a result of an automation job, it will trigger a corresponding action, which can be as simple as sending a notification using Amazon SNS or even triggering a corresponding Lambda function to perform some form of remediation.

 

Similarly, you can even configure a cron expression or a fixed rate of execution for your automation jobs by selecting the Schedule option in the Event Source section:

 

Provide a suitable Cron expression depending on your requirements. For example, I wish to run the AWS-UpdateLinuxAmi automation document every Sunday at 10 P.M. UTC; in this case, the cron expression will be cron(0 22 ? * SUN *).

 

With the schedule configured, move on to the Targets section and select the SSM Automation option from the Targets drop-down list as shown in the following screenshot:

 

Next, configure the AWS-UpdateLinuxAmi parameters as we discussed earlier and, once the desired fields are populated, click on Add target to complete the configuration.
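With the rule and target configured in the console, a rough CLI sketch of the same scheduled setup might look like the following; the rule name, region, account ID, ARNs, and AMI ID are all placeholders, and the automation-definition ARN format is the one CloudWatch Events expects for SSM Automation targets:

# Create a rule that fires every Sunday at 10 P.M. UTC.
aws events put-rule \
    --name "yoyodev-weekly-ami-update" \
    --schedule-expression "cron(0 22 ? * SUN *)"

# Point the rule at the AWS-UpdateLinuxAmi automation document.
aws events put-targets \
    --rule "yoyodev-weekly-ami-update" \
    --targets '[{
      "Id": "1",
      "Arn": "arn:aws:ssm:us-east-1:123456789012:automation-definition/AWS-UpdateLinuxAmi:$DEFAULT",
      "RoleArn": "arn:aws:iam::123456789012:role/yoyodev-events-role",
      "Input": "{\"SourceAmiId\":[\"ami-0123456789abcdef0\"]}"
    }]'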

 

With this step completed, you can now trigger your automation jobs based on events as well as schedules, all powered by CloudWatch! Amazing, isn't it?

 

In the next and final section, we will be learning a bit about yet another simple and easy-to-use SSM service that enables you to manage and patch your instances with ease.

 

Managing instance patches using patch baseline and compliance

Regularly patching your instances with the right set of security patches is an important activity that can take a lot of time and effort if performed manually on each individual instance. Luckily, AWS provides a really efficient and easy way of automating the patching of your managed instances using Patch Manager, provided as an out-of-the-box capability of SSM.

 

As an administrator, all you need to do is scan your instances for missing patches and leverage Patch Manager to automatically remediate the issues by installing the required set of patches. You can, alternatively, even schedule the patching of your managed instance or group of instances with the help of SSM's maintenance window tasks.

 

In this section, we will explore a quick and easy way of creating a unique patch baseline for our Dev instances and later create and associate a maintenance window for this, all using the EC2 Management dashboard. So let's get started with this right away!

 

First up, you will need to ensure that your instance has the required set of IAM Roles as well as the SSM agent installed and functioning as described at the beginning of this blog. With these basics out of the way, we first need to configure the patch baseline with our set of required patches:

 

To do so, launch your EC2 dashboard and select the Patch Baselines option from the Systems Manager Services section.

 

Patch Manager includes a default patch baseline for each operating system it supports, including Windows Server 2003 through 2016, Ubuntu, RHEL, CentOS, SUSE, and Amazon Linux. You can use these default patch baselines, or alternatively you can create one based on your requirements. Here, let's quickly create a custom baseline for our Dev instances.

 

Select the Create Patch Baseline option to bring up the Create Patch Baseline dashboard. Here, provide a suitable Name for your custom baseline.

 

From the Operating System drop-down, select Ubuntu as the OS of choice. You will notice the patching rules change based on the OS type you select.

 

Next, in the Approval Rules section, create suitable patch baseline rules depending on your requirements. For example, I wish to give Python packages an Important priority and a High compliance level. You can add up to 10 such rules to one baseline.

 

In the final section, Patch Exceptions, you can optionally mention the Approved Packages, Rejected Packages, and the Compliance Level for these patches collectively. In this case, I've left these values as their defaults and selected the Create Patch Baseline option to complete the process.
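For reference, here is a hedged CLI sketch of the same baseline; the baseline name mirrors this walkthrough, and the rule values should be adjusted to your own requirements:

# Create a custom Ubuntu patch baseline that approves Important-priority
# packages at a High compliance level.
aws ssm create-patch-baseline \
    --name "yoyodev-ubuntu-baseline" \
    --operating-system "UBUNTU" \
    --approval-rules '{
      "PatchRules": [{
        "PatchFilterGroup": {
          "PatchFilters": [{"Key": "PRIORITY", "Values": ["Important"]}]
        },
        "ComplianceLevel": "HIGH"
      }]
    }'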

With your new patch baseline created, you now have the option to promote it as the Default Baseline by selecting the new baseline from the Patch Baselines dashboard and clicking on the Set Default Patch Baseline option from the Actions tab.

Moving on to the next part of this walkthrough, we will now go ahead and set up the maintenance window for our newly created patch baseline:

 

To do so, select the Maintenance Windows option from the Systems Manager Shared Resources section. Click on Create maintenance window to get started with the process.

 

In the Create maintenance window page, provide a suitable Name for your window as well as an optional Description. Next, in the Specify schedule section, you can opt to either use a CRON scheduler or a rate expression to define the schedule for your maintenance window. For this scenario, I've opted for the Cron schedule builder option and provided a window that starts every Sunday at 12:00 UTC:

 

In the Duration field, specify how many hours the maintenance window lasts; in the Stop initiating tasks field, specify how many hours before the end of the window the system should stop initiating new tasks. Once all the required fields are populated, click on Create maintenance window to complete the creation process.

 

With the maintenance window created, we next need to add some targets for execution. Targets are individual EC2 instances or groups of EC2 instances identified by tags. To configure targets, select your newly created maintenance window and then, from the Actions tab, select the Register targets option:

 

In the Register targets page, provide a Target Name for your maintenance window's target with an optional Description. Next, select the target EC2 instances you wish to associate with this target by either opting to Specify Tags or even by Manually Selecting Instances as shown in the following screenshot. For this scenario, I've already provided the tag OS: Linux to my Dev instances; alternatively, you can manually select your instances as well:

 

Once completed, select the Register targets option to complete the process. With the target instances registered with our maintenance window, the final step is to associate the maintenance window with our patch baseline:

 

In order to do this, we need to select the newly created maintenance window; from the Actions tab, select the option Register run command task. Here, in the Register run command task page, fill in the required details such as a name for your new Run Command followed by an optional Description.

 

Next, from the Command document section, select the AWS-RunPatchBaseline document. You will also see the targeted instance associated with this Run Command already, as we configured it in our earlier steps.

 

Finally, in the Parameters section, select the appropriate IAM Role, provide a suitable error count after which the Run Command should stop, and, last but not least, don't forget to select whether you wish to Install the required set of patches or simply Scan the target instances for them.

 

With all the fields completed, click on Register task to complete this configuration.
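To tie the whole flow together, here is a hedged end-to-end CLI sketch of the steps above; every name, tag, and ARN below is a placeholder:

# Create the weekly maintenance window (Sundays at 12:00 UTC, 4 hours long,
# no new tasks started in the final hour).
WINDOW_ID=$(aws ssm create-maintenance-window \
    --name "yoyodev-patch-window" \
    --schedule "cron(0 12 ? * SUN *)" \
    --duration 4 --cutoff 1 \
    --allow-unassociated-targets \
    --query "WindowId" --output text)

# Register the instances tagged OS=Linux as the window's targets.
TARGET_ID=$(aws ssm register-target-with-maintenance-window \
    --window-id "$WINDOW_ID" \
    --resource-type "INSTANCE" \
    --targets "Key=tag:OS,Values=Linux" \
    --query "WindowTargetId" --output text)

# Register AWS-RunPatchBaseline as a Run Command task that scans the targets.
aws ssm register-task-with-maintenance-window \
    --window-id "$WINDOW_ID" \
    --targets "Key=WindowTargetIds,Values=$TARGET_ID" \
    --task-arn "AWS-RunPatchBaseline" \
    --service-role-arn "arn:aws:iam::123456789012:role/yoyodev-mw-role" \
    --task-type "RUN_COMMAND" \
    --max-concurrency 1 --max-errors 1 \
    --task-invocation-parameters '{"RunCommand":{"Parameters":{"Operation":["Scan"]}}}'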

 

Awesome, isn't it? With just a few simple clicks, you have now set up an effective patch management solution for your Dev instances, without the need for any specialized software or expertise! But before we wind up this blog, let's look at one last simple and really useful service provided by Systems Manager, which helps collect and inventory metadata about your AWS as well as on-premises instances.

 

Getting started with Inventory Management

Inventory Management, or just Inventory, is yet another managed service provided by Systems Manager that is responsible for collecting operating system, application, and instance metadata from your AWS instances as well as from on-premises instances managed by Systems Manager. You can use this service to query the inventory metadata for mapping, understanding, and remediating EC2 instances based on certain software or regulatory compliance requirements.

 

Let's look at a very simple example of enabling the inventory service for our Dev instance using the AWS Management dashboard:

 

To begin with, you will require both the SSM agent and the required IAM Roles configured on your managed instance. Once this is completed, select the Managed Instances option from the Systems Manager Shared Resources section. Here, select your Dev instance and click on the Setup Inventory option.

 

On the Setup Inventory page, most of the options will be quite familiar to you by now, such as the Targets and Schedule sections, so I'm not going to dwell on them here again. The more important section is the Parameters section, where you can choose to either Enable or Disable different types of inventory collection.

 

For example, since we are working with Linux instances, I've chosen to disable the Windows updates parameters while keeping the rest enabled.

 

The final field, Write execution history to S3, allows you to write and store the inventory data centrally in an S3 bucket. This comes in really handy when you wish to collate your inventory data from multiple instances at a central location and then query this data using services such as Amazon Athena or Amazon QuickSight. Once completed, click on Setup Inventory to complete the inventory setup process.
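Under the hood, inventory collection is set up as a State Manager association with the AWS-GatherSoftwareInventory document, so a hedged CLI equivalent (the instance ID is a placeholder) would be:

# Collect application and network metadata daily, skipping Windows updates
# since we are managing Linux instances.
aws ssm create-association \
    --name "AWS-GatherSoftwareInventory" \
    --targets "Key=InstanceIds,Values=i-0123456789abcdef0" \
    --schedule-expression "rate(1 day)" \
    --parameters '{"applications":["Enabled"],"networkConfig":["Enabled"],"windowsUpdates":["Disabled"]}'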

 

You can now view the collected metadata of your EC2 instance by selecting it from the Managed Instances page and clicking on the Inventory tab. Here, choose among the various inventory types in the drop-down list to view your instance-specific metadata. You can toggle between AWS:Application, AWS:AWSComponent, AWS:Network, and AWS:InstanceDetailedInformation, just to name a few.
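You can pull the same metadata from the CLI as well; for example (the instance ID is again a placeholder):

# List the collected application inventory for a single instance.
aws ssm list-inventory-entries \
    --instance-id "i-0123456789abcdef0" \
    --type-name "AWS:Application"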

 

Planning your next steps

Well, we have covered a lot of new features and services in this blog; however, there are still a few things that I would recommend you try out on your own. First up is the Parameter Store!

 

The Parameter Store is yet another managed service provided by Systems Manager, designed to securely store your instances' configuration data such as passwords, database connection strings, license codes, and so on. You can store the data either as plain text or in encrypted form, and later reference it in your automation or policy documents as variables rather than hardcoded plain text.
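As a quick, hedged taste of the service (the parameter name and value below are placeholders), you can store a secret as an encrypted SecureString and read it back with decryption:

# Store a database password encrypted at rest.
aws ssm put-parameter \
    --name "/yoyodev/db/password" \
    --value "MySecretPassword123" \
    --type "SecureString"

# Retrieve and decrypt it when needed.
aws ssm get-parameter \
    --name "/yoyodev/db/password" \
    --with-decryption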

 

But that's not even the best part! Using Parameter Store, you can also reference your data across other AWS services such as EC2 Container Service and AWS Lambda, thus making it a centralized store for all your configuration and secret data. Another important feature that I would recommend using in your environments is Resource Data Sync, which enables you to store your instances' metadata collected from different Systems Manager services in a centralized location.

 

You can configure it to collect metadata from instance operating systems, applications, Microsoft Windows updates, network configurations, and much more! This comes in really handy in production environments where you want to analyze the software and patch compliances of your instances.

 

Summary


The single largest player among cloud providers is Amazon. Its Amazon Web Services (AWS) business is now larger than that of all of its competitors combined and is growing at an incredible pace. With over 45 different offerings, AWS provides solutions that span storage, networking, computing, machine learning, and mobile device testing. One of the oldest products in the AWS catalog is Elastic Compute Cloud, or EC2, which allows developers to launch virtual server instances on demand.

 

After declaring parameters such as the amount of memory, disk space, and CPU capacity, administrators can provision a single virtual instance or up to thousands of running instances at once. Administrators and developers are then responsible for deploying code, running the applications, and ensuring that the instances remain healthy (are not consuming too much memory, disk space, and so on).

 

EC2 has been and remains one of AWS’s most popular core offerings. With root or administrator access to a base Linux or Windows machine, developers are free to run almost any kind of code imaginable. Millions of applications have been developed on top of EC2, and entire businesses (such as Heroku) use EC2 for their underlying infrastructure. In fact, using nothing but EC2, most organizations could run their entire workloads including storage, database, and routing needs. The availability and performance of EC2 on AWS is unparalleled.