Amazon S3 Bucket (Best Tutorial 2019)


What is Amazon S3 Bucket and how it works (Tutorial 2019)

Amazon S3 Bucket is more than storage, despite the name that Amazon attached to it, because you can use it to interact with other services and store objects other than files. This tutorial explains what an Amazon S3 bucket is and how it works, with examples, and also discusses the various Amazon cloud storage types used in 2019.

 

The heart of every enterprise — the most important asset it owns — is data. Everything else is replaceable, but getting data back can be difficult, if not impossible. 

 

To make online storage useful, AWS has to provide support for objects, which include files but also many other object types: sounds, graphics, security settings, application settings, and so on.

 

This blog helps you focus on the kinds of objects that S3 supports. By supporting a large number of object types, S3 enables you to perform the most important task of all: reliable backups.

 

You can also use S3 with Lambda to perform certain tasks using code. This capability makes S3 a powerful service that, in many cases, helps you accomplish common tasks with a minimum of effort.

 

The final section of the blog discusses long-term storage, or archiving. Everyone needs to archive data, individuals and organizations alike. Data often becomes outdated and, some might feel, no longer useful. However, data has a habit of becoming useful when you least expect it, so saving it in long-term storage is important.

 

Considering the Simple Storage Service (S3) Features


S3 does so much more than just store information. The procedure you see in the “Performing a Few Simple Tasks” section of blog 2 is a teaser of sorts, but it doesn’t even begin to scratch the surface of what S3 can do for you.

 

The following sections help you understand S3 more fully so that you can grasp what it can do for you in performing everyday tasks. These sections are a kind of overview of common features; S3 actually does more than described.

 

Introducing AWS S3


The idea behind S3 is straightforward. You use it to store objects of any sort in a directory structure you choose, without actually worrying about how the objects are stored.

 

The goal is to store objects without considering the underlying infrastructure so that you can focus on application needs with a seemingly infinite hard drive.

 

S3 focuses on objects, which can be of any type, rather than on a directory structure consisting of folders that hold files. Of course, you can still organize your data into folders, and S3 can hold files.

 

It’s just that S3 doesn’t care what sort of object you put into the buckets you create. Like a real-world bucket, an S3 bucket can hold any sort of object you choose to place in it.

 

The bucket automatically grows and shrinks to accommodate the objects you place in it, so most of the normal limits placed on storage don’t exist with S3.
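If you prefer to script these steps, the following boto3 sketch creates a bucket and stores an object in it. The bucket name, region, and key here are placeholders (bucket names must be globally unique), so treat this as a minimal example rather than a finished tool.

```python
import boto3

s3 = boto3.client('s3', region_name='us-west-2')

# Bucket names are globally unique; replace with your own.
s3.create_bucket(
    Bucket='my-example-bucket-2019',
    CreateBucketConfiguration={'LocationConstraint': 'us-west-2'},
)

# S3 doesn't care what the object is: text, image, settings file, and so on.
s3.put_object(
    Bucket='my-example-bucket-2019',
    Key='settings/app-config.json',
    Body=b'{"theme": "dark"}',
)
```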

 

Even though you see a single bucket in a single region when you work with S3, Amazon actually stores the data in multiple locations at multiple sites within the region. The automatic replication of your data reduces the risk of losing it as a result of any of a number of natural or other kinds of disasters.

 

No storage technology is foolproof, however, and your own applications can just as easily destroy your data store as a natural disaster can. Consequently, you need to follow all the usual best practices to protect your data.

 

The Amazon additions act as a second tier of protection. Also, versioning (see the “Employing versioning” section, later in this blog, for details) provides you with a safety net in that you can recover a previous version of an object when a user deletes it or an application damages it in some way.

 

Amazon provides a number of storage-solution types, which are mentioned as the blog progresses. S3 is a new approach that you can use in the following ways:


Data storage: The most common reason to use S3 is to store data. The data might originally appear on your local systems or you might create new applications that generate and use data in the cloud.

 

A big reason to use S3 is to make data available anywhere to any device using any application you deem fit to interact with it.

 

However, you still want to maintain local storage when security or privacy is the primary concern, or when your own infrastructure provides everything needed to store, manage, and access the data. When you think about S3 as a storage method, think about data with these characteristics:

 

  • It’s in constant flux.
  • Applications require high accessibility to it.
  • The cost of a data breach (where hackers steal the data and you must compensate the subjects of the data in some way) is relatively low.

 

Backup: 


Localized backups have all sorts of problems, including the fact that a natural disaster can wipe them out. Using S3 as part of your data backup strategy makes sense when the need to keep the data available for quick access is high.

 

The storage costs are acceptable, and the risk from a data breach is low. Of course, backups imply disaster recovery. You make a backup on the assumption that at some point, you need the backup to recover from a major incident.

 

As with health or car insurance, you hope that the reason you’re keeping it never occurs, but you don’t want to take the risk of not having it, either. Amazon provides all the best-practice storage techniques that your organization would normally employ, such as periodic data integrity checks to ensure that the data is still readable.

 

In addition, Amazon performs some steps that most organizations don’t, such as performing checksums on all network data to catch data corruption problems before the data is stored (making it less likely that you see any data corruption later).

 

Data analysis: 


The use of an object-storage technique makes S3 an excellent way to store data for many types of data analysis.

 

Organizations use data analysis today to study trends of all sorts, such as people’s buying habits or the effects of various strategies on organizational processes. The point is that data analysis has become an essential tool for business.

 

Static website hosting:


 Static websites may seem quaint because they rely on pages that don’t change often. Most people are used to seeing dynamic sites whose content changes on an almost daily basis (or right before their eyes, in the case of ads).

 

However, static sites still have a place for data that doesn’t change much, such as providing access to organizational forms or policies. Such a site can also be useful for product manuals or other kinds of consumer data that doesn’t change much after a product comes on the market.

 

Working with buckets


The bucket is the cornerstone of S3 storage. You use buckets to hold objects. A bucket can include organizational features, such as folders, but you should view a bucket as the means for holding related objects.

 

For example, you might use a single bucket to hold all the data, no matter what its source might be, for a particular application. Amazon provides a number of ways to move objects into a bucket, including the following:

 

Amazon Kinesis: A method of streaming data continuously to your S3 storage. You use Kinesis to address real-time storage needs, such as data capture from a device like a data logger. Because the majority of the Kinesis features require programming, this blog doesn’t cover Kinesis extensively.

 

Physical media: You can send physical media to Amazon to put into your S3 bucket if your data is so huge that moving it across the Internet proves too time-consuming.

 

You can also speed data transfers to S3 by using Transfer Acceleration, which routes traffic through Amazon’s edge locations. You can read more about Transfer Acceleration at http://docs.aws.amazon.com/AmazonS3/latest/dev/transfer-acceleration.html.

 

Essentially, Transfer Acceleration doesn’t change how a transfer occurs, but rather how fast it occurs. In some cases, using Transfer Acceleration can make data transfers up to six times faster, so it can have a huge impact on how fast S3 is ready to use with your application or how fast it can restore a backup onto a local system.

 

To use Transfer Acceleration, you simply select a box in the S3 Management Console when configuring your bucket.
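If you’d rather script that setting, here is a minimal boto3 sketch (bucket name assumed): enable acceleration on the bucket, then create a client that actually targets the accelerated endpoint.

```python
import boto3
from botocore.config import Config

s3 = boto3.client('s3')

# Turn on Transfer Acceleration (same effect as the console checkbox).
s3.put_bucket_accelerate_configuration(
    Bucket='my-example-bucket-2019',
    AccelerateConfiguration={'Status': 'Enabled'},
)

# Transfers only speed up when the client targets the accelerated endpoint.
s3_fast = boto3.client('s3', config=Config(s3={'use_accelerate_endpoint': True}))
s3_fast.upload_file('big-backup.zip', 'my-example-bucket-2019', 'backups/big-backup.zip')
```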

 

The best way to reduce costs when working with S3 is to ensure that you create a good archiving strategy so that AWS automatically moves objects you don’t use very often to Glacier.
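A lifecycle rule is how you express such an archiving strategy in code. The following boto3 sketch, with an assumed bucket name and prefix, transitions objects under logs/ to Glacier after 90 days.

```python
import boto3

s3 = boto3.client('s3')

# Objects under the logs/ prefix transition to Glacier 90 days after creation.
s3.put_bucket_lifecycle_configuration(
    Bucket='my-example-bucket-2019',
    LifecycleConfiguration={
        'Rules': [{
            'ID': 'archive-old-logs',
            'Filter': {'Prefix': 'logs/'},
            'Status': 'Enabled',
            'Transitions': [{'Days': 90, 'StorageClass': 'GLACIER'}],
        }]
    },
)
```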

 

The important thing to remember is that S3 works best as storage for objects that you’re currently using, rather than as long-term storage for objects you may use in the future.

 

Managing objects using buckets


You can manage objects in various ways and at various levels in S3. The management strategy you use determines how much time you spend administering, rather than using, the data. The following list describes the management levels:

 

Bucket: 

The settings you provide for the bucket affect every object placed in the bucket. Consequently, this coarse-grained management option creates the general environment for all objects stored in a particular S3 bucket.

 

However, it also points to the need for creating buckets as needed, rather than using a single catchall bucket for every need. Use buckets to define an environment for a group of related objects so that you can reduce the need for micromanagement later.

 

Folder: 


It’s possible to add folders to buckets. As with folders (directories) on your local hard drive, you use folders to better categorize objects. In addition, you can use folder hierarchies to organize objects in specific ways, just as you do on your local hard drive.

 

Configuring higher-level folders to provide the best possible environment for all objects that the folder contains is the best way to reduce the time spent managing objects.

 

Object:

Configuring individual objects is a last-resort option because individual object settings tend to create error scenarios when the administrator who performed the configuration changes jobs or simply forgets about the settings.

 

The object tends to behave differently from the other objects in the folder, creating confusion. Even so, individual object settings are sometimes necessary, and S3 provides the support needed to use them.

 

Setting bucket security


AWS gives you several levels of security for the data you store in S3. However, the main security features are similar to those that you use with your local operating system (even though Amazon uses different terms to identify these features).

 

The basic security is user-based through Identity and Access Management (IAM). You can also create Access Control Lists (ACLs) similar to those used with other operating systems.

 

In addition to standard security, you can configure bucket policies that determine actions that requestors can perform against the objects in a bucket. You can also require that requestors provide authentication as part of query strings (so that every action passes through security before S3 performs it).
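As a sketch of what a bucket policy can look like, this boto3 snippet grants read-only access to a single IAM role; the account ID, role name, and bucket name are placeholders.

```python
import json
import boto3

s3 = boto3.client('s3')

# Allow one IAM role to read objects; everything else stays denied by default.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowReadForAppRole",
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::123456789012:role/app-role"},
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::my-example-bucket-2019/*",
    }],
}

s3.put_bucket_policy(Bucket='my-example-bucket-2019', Policy=json.dumps(policy))
```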

 

Even though the Amazon documentation mentions support for Payment Card Industry (PCI) and Health Insurance Portability and Accountability Act (HIPAA) compliance, you need to exercise extreme care when using S3 for this purpose. Your organization, not Amazon, is still responsible for any breaches or other problems associated with cloud storage.

 

Be sure to create a compliant configuration by employing all the AWS security measures correctly, before you store any data. In fact, you may want to work with a third-party vendor, such as Connectria (http://www.connectria.com/cloud/hipaa_aws.php), to ensure that you have a compliant setup.

 


 

Employing encryption


Making data unreadable to a third party while continuing to be able to read it yourself is what encryption is all about. Encryption employs cryptography as a basis for making the message unreadable, and cryptography is actually a relatively old science (see the article at https://access.redhat.com/blogs/766093/posts/1976023). Encryption occurs at two levels when working with AWS:

 

Data transfer: As with any Internet access, you can encrypt your data using Secure Sockets Layer (SSL) encryption to ensure that no one can read the data as it moves between S3 and your local system.

 

Storage: 


Keeping data encrypted while stored is the only way to maintain the protection provided during the data transfer. Otherwise, someone can simply wait until you store the data and then read it from the storage device.

 

Theoretically, encryption can work on other levels as well. For example, certain exploits (hacker techniques used to access your system surreptitiously) can attack machine memory and read the data while you have it loaded in an application.

 

Because your data is stored and processed on Amazon’s servers, the potential for using these other exploits is limited and therefore not discussed in this blog. However, you do need to consider these exploits for any systems on your own network.

 

Modern operating systems assist you in protecting your systems. For example, Windows includes built-in memory protection that’s effective against many exploits (see the article at https://msdn.microsoft.com/library/windows/desktop/aa366553.aspx).

 

Storage is one of the main encryption concerns because data spends more time in storage than it does moving from one place to another. AWS provides the following methods for encrypting data for storage purposes:

 

Server-side encryption:

S3 encrypts the data after it arrives, before writing it to disk, and decrypts it again when you retrieve the object. Amazon manages the encryption keys for you (or you can manage them through the AWS Key Management Service), so the data stays protected at rest without any changes to your application.
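A minimal boto3 sketch of both flavors, with assumed bucket and key names: the first call makes AES-256 server-side encryption the bucket default, and the second requests it explicitly for a single upload.

```python
import boto3

s3 = boto3.client('s3')

# Make SSE with Amazon-managed keys (SSE-S3) the default for the whole bucket.
s3.put_bucket_encryption(
    Bucket='my-example-bucket-2019',
    ServerSideEncryptionConfiguration={
        'Rules': [{
            'ApplyServerSideEncryptionByDefault': {'SSEAlgorithm': 'AES256'},
        }]
    },
)

# Or request server-side encryption for one specific object.
s3.put_object(
    Bucket='my-example-bucket-2019',
    Key='private/report.csv',
    Body=b'col1,col2\n1,2\n',
    ServerSideEncryption='AES256',
)
```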

 

Client-side encryption:

You encrypt the data before sending it to S3. Because you encrypt the data prior to transfer, the chance that someone will read it is reduced. Remember that you also encrypt the data as part of the transfer process.


The potential problems with this approach include the requirement that the client actually perform the encryption (which is less safe than relying on the automation that a server can provide), and the application will likely see a speed penalty as well.

 

Both:

The Amazon documentation doesn’t mention that you can doubly encrypt the data, both on the client and on the server, but this option adds a level of safety because a hacker would have to undo two levels of encryption (probably relying on different algorithms).

 

This option is the slowest because you must pay for two levels of encryption, so the application suffers a definite speed loss. However, when working with sensitive data, this method presents the safest choice.

 

In addition to encryption settings, S3 can notify other services when events occur in a bucket. The notification destinations include the following:

Amazon Simple Notification Service (Amazon SNS) topic:

When working with SNS, AWS publishes notifications as a topic that anyone with the proper credentials can subscribe to in order to receive information about that topic.

 

Using this approach lets multiple parties easily gain access to information at the same time. This approach represents a push strategy through which AWS pushes the content to the subscribers.

 

AWS Lambda function: Defines a method of providing an automated response to any given event.
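As a sketch of wiring up the SNS destination with boto3 (topic ARN and bucket name are placeholders): note that the topic’s access policy must already allow S3 to publish to it.

```python
import boto3

s3 = boto3.client('s3')

# Publish object-created events to an SNS topic; S3 needs publish permission
# in the topic's access policy before this call succeeds.
s3.put_bucket_notification_configuration(
    Bucket='my-example-bucket-2019',
    NotificationConfiguration={
        'TopicConfigurations': [{
            'TopicArn': 'arn:aws:sns:us-east-1:123456789012:object-events',
            'Events': ['s3:ObjectCreated:*'],
        }],
    },
)
```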

 

Employing versioning


During the early days of computing, the undo feature for applications was a major innovation that saved people countless hours of fixing errors. Everyone makes mistakes, so being able to undo a mistake, rather than fix it, is nice; it makes everyone happier.

Versioning is a kind of undo for objects. Amazon can automatically store every version of every object you place in a bucket (a short sketch of enabling versioning follows the lists below), which makes it possible to

 

  • Return to a previous version of an object when you make a mistake in modifying the current version
  • Recover an object that you deleted accidentally

 

  • Compare different iterations of an object to see patterns in changes or to detect trends
  • Provide historical information when needed

 

Seemingly, you should make all your buckets versioned. After all, who wouldn’t want all the wonderful benefits that versioning can provide? However, versioning also comes with some costs that you need to consider:

  • Every version of an object consumes all the space required for that object, rather than just the changes, so the storage requirements for that object increase quickly.

 

  • After you enable versioning on a bucket, you can never completely disable it; you can, however, suspend versioning as needed.
  • Deleting an object doesn’t remove it. The object is marked as deleted, but it remains in place, which means that it continues to occupy storage space.

 

  • Overwriting an object results in a new object version rather than a deletion of the old object.
  • Adding versioning to a bucket doesn’t affect existing objects until you interact with those objects. Consequently, the existing objects don’t have a version number.
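Here is the promised sketch: a minimal boto3 example (bucket name assumed) that enables versioning and then lists the versions, including delete markers, that accumulate for a key.

```python
import boto3

s3 = boto3.client('s3')

# Enable versioning; use {'Status': 'Suspended'} to suspend it later.
s3.put_bucket_versioning(
    Bucket='my-example-bucket-2019',
    VersioningConfiguration={'Status': 'Enabled'},
)

# Every overwrite now creates a new version instead of replacing the object.
resp = s3.list_object_versions(Bucket='my-example-bucket-2019', Prefix='docs/policy.pdf')
for version in resp.get('Versions', []):
    print(version['VersionId'], version['LastModified'], version['IsLatest'])
for marker in resp.get('DeleteMarkers', []):
    print('delete marker:', marker['VersionId'])
```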

 

Creating folders


As with your local hard drive, you use folders to help organize data within buckets. And just as with local folders, you can create hierarchies of folders to store related objects more efficiently. Use the following steps to create new folders.

 

1. Navigate to the location that will hold the folder. When creating folders for the first time, you always start in the bucket.

2. Click Create Folder.

3. Type the folder name and press Enter.

The example uses a folder name of My Folder 1. You see the folder added to the list of objects in the bucket. At this point, you can access the folder if desired.

 

Simply click the folder’s link. When you open the folder, Amazon tells you that the folder is empty. Folders can hold the same objects that a bucket can. For example, if you want to add another folder, simply click Create Folder again to repeat the process you just performed.

 

When you open a folder, the navigation path shows the entire path to the folder. To move to another area of your S3 setup, simply click that entry (such as All Buckets) in the navigation path.
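Behind the scenes, an S3 “folder” is simply a zero-byte object whose key ends in a slash; the console’s Create Folder button creates exactly that. A boto3 equivalent, with assumed names:

```python
import boto3

s3 = boto3.client('s3')

# The console's Create Folder button creates a zero-byte key ending in "/".
s3.put_object(Bucket='my-example-bucket-2019', Key='My Folder 1/')

# Objects "inside" the folder simply share the key prefix.
s3.put_object(
    Bucket='my-example-bucket-2019',
    Key='My Folder 1/notes.txt',
    Body=b'hello',
)
```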

 

Setting a bucket, folder, and object properties


After you create a bucket, a folder, or object, you can set properties for it. The properties vary by type.

 

For example, when working with a bucket, you can determine how the bucket supports permissions, static website hosting, logging, events, versioning, life-cycle management, cross-region replication, tags, requestor pays, and transfer acceleration. This blog discusses most of these properties because you set them to perform specific tasks.

 

Likewise, folders let you set details about how the folder reacts. For example, you can set the storage class, which consists of Standard, Standard – Infrequent Access, or Reduced Redundancy. The storage class determines how the folder stores data.

 

Using the Standard – Infrequent Access option places the folder content in a semi-archived state. With the content in that state, you pay less money to store the data but also have longer retrieval times.
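When you upload programmatically, you choose the storage class per object. A small boto3 sketch (file, bucket, and key are placeholders):

```python
import boto3

s3 = boto3.client('s3')

# Store the object as Standard - Infrequent Access: cheaper storage,
# higher per-retrieval cost, so best for rarely read data.
s3.upload_file(
    'quarterly-report.pdf',
    'my-example-bucket-2019',
    'archive/quarterly-report.pdf',
    ExtraArgs={'StorageClass': 'STANDARD_IA'},
)
```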

 

Deleting folders and objects


At some point, you may find that you need to delete a folder or object because it no longer contains pertinent information. (Remember, when you delete folders or objects in buckets with versioning enabled, you don’t actually remove the folder or object. The item is simply marked for deletion.) Use the following steps to delete folders and objects:

 

1. Click the box next to each folder or object you want to delete. The box changes color and becomes solid to show that you have selected it. (Clicking the box a second time deselects the folder or object.)

2. Choose Actions ➪ Delete. You see a dialog box asking whether you want to delete the selected items.

3. Click OK.

The Transfers pane opens to display the status of the operation. When the task is complete, you see Done as the status. Selecting the Automatically Clear Finished Transfers option automatically clears messages for tasks that complete successfully. Any tasks that end with an error remain.

 

You can also manually clear a transfer by right-clicking its entry and choosing Clear from the context menu.

 

4. Click the X in the upper-right corner of the Transfers pane to close it. You see the full entry information as you normally do.
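To script the same cleanup, here is a boto3 sketch (bucket and prefix assumed) that lists everything under a folder and deletes it in one batch; with versioning enabled, this adds delete markers rather than destroying data.

```python
import boto3

s3 = boto3.client('s3')
bucket = 'my-example-bucket-2019'

# Collect every key under the folder prefix, then delete in one batch call.
# (list_objects_v2 returns up to 1,000 keys; use a paginator for more.)
resp = s3.list_objects_v2(Bucket=bucket, Prefix='My Folder 1/')
targets = [{'Key': obj['Key']} for obj in resp.get('Contents', [])]
if targets:
    s3.delete_objects(Bucket=bucket, Delete={'Objects': targets})
```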

 

Uploading objects


The only method for uploading an object from the console is the technique described earlier: click Upload, select the files that you want to send to S3, and then start the upload process.

That basic procedure, however, leaves out some useful additional settings you can change. After you select one or more objects to upload, click Set Details to display the Set Details dialog box, which lets you choose a storage class and determine whether you want to rely on server-side encryption.

 

The next step is to click Set Permissions. You use the Set Permissions dialog box, to define the permissions for the objects you want to upload. Amazon automatically assumes that you want full control over the objects that you upload.

 

The Make Everything Public checkbox lets everyone have access to the objects you upload. You can also create custom permissions that affect access to all the objects you upload at a given time.

 

Finally, you can click Set Metadata to display the Set Metadata dialog box. In general, you want Amazon to determine the content types automatically. However, you can choose to set the content type manually. This dialog box also enables you to add custom metadata that affects all the objects that you upload at a given time. 
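The same details, permissions, and metadata choices are available programmatically. A boto3 sketch with assumed names follows; note that public ACLs may be blocked by account-level settings.

```python
import boto3

s3 = boto3.client('s3')

s3.upload_file(
    'manual.pdf',
    'my-example-bucket-2019',
    'docs/manual.pdf',
    ExtraArgs={
        'StorageClass': 'STANDARD_IA',          # Set Details: storage class
        'ServerSideEncryption': 'AES256',       # Set Details: encryption
        'ACL': 'public-read',                   # Set Permissions: Make Everything Public
        'ContentType': 'application/pdf',       # Set Metadata: content type
        'Metadata': {'department': 'support'},  # Set Metadata: custom entries
    },
)
```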

 

To perform other kinds of uploads to S3, you must either resort to writing application code using one of the AWS Application Programming Interfaces (APIs) or rely on another service.

 

For example, you can use Amazon Kinesis Firehose (https://aws.amazon.com/kinesis/firehose/) to stream data to S3. The techniques used to perform these advanced sorts of data transfers are outside the scope of this blog, but you need to know that they exist.

 

Retrieving objects


At some point, you need to retrieve your data in order to use it. Any applications you build can use S3 directly online. You can also create a static website to serve up the data as you would any other website.

 

The option discussed in this section of the blog is to download the data directly to your local drive. The following steps tell how to perform this task.

 

1. Click the box next to each object you want to download. The box changes color and becomes solid to show that you have selected it.

(Clicking the box a second time deselects the object.)  You can’t download an entire folder. You must select individual objects to download (even if you want to download every object in a folder). In addition, if you want to preserve the folder hierarchy, you must rebuild it on your local drive.

 

2. Choose Actions ➪ Download.

You see a dialog box that provides a link to download the selected objects. Clicking the Download link performs the default action for your browser with the objects. For example, if you select a .jpg file and click Download, your browser will likely display it in a new tab of the current window.

 

Right-click the Download link to see all the actions that your browser provides for interacting with the selected objects. Of course, this list varies by browser and the kinds of extensions you add to it.

 

3. Right-click the Download link and choose an action to perform with the selected objects.

Your browser accomplishes the action as it normally would when right-clicking a link and choosing an option. Some third-party products may experience permissions problems when you attempt the download.

 

4. Click OK. You see the full entry information as it normally appears.
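Scripting the download also works around the folder limitation mentioned in Step 1: you can walk a prefix and rebuild the hierarchy locally. A boto3 sketch, with assumed bucket and prefix:

```python
import os
import boto3

s3 = boto3.client('s3')
bucket = 'my-example-bucket-2019'

# Download everything under a prefix, recreating the folder structure locally.
resp = s3.list_objects_v2(Bucket=bucket, Prefix='docs/')
for obj in resp.get('Contents', []):
    key = obj['Key']
    if key.endswith('/'):          # skip the zero-byte folder markers
        continue
    local_path = os.path.join('downloads', *key.split('/'))
    os.makedirs(os.path.dirname(local_path), exist_ok=True)
    s3.download_file(bucket, key, local_path)
```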

 

Performing AWS Backups


Data is subject to all sorts of calamities — some localized, some not. In days gone by, individuals and organizations alike made localized backups of their data. They figured out before long that multiple backups worked better, and storing them in a data safe worked better still. 

 

However, major tragedies, such as flooding, proved that localized backups didn’t work well, either. Unfortunately, storing a backup in a single outside location was just as prone to trouble.

 

That’s where using cloud backups comes into play. Using S3 to create a backup means that the backups appear in more than one location and that Amazon creates more than one copy for you, reducing the risk that a single catastrophic event could result in a loss of data. The following sections explore using S3 as a backup solution.

 

Performing a manual backup


Amazon’s backup suggestions don’t actually perform a backup in the traditional sense. You don’t see an interface where you select files to add to the backup. You don’t use any configuration methods for whole drive, image, incremental, or differential backups.

 

After you get through all the hoopla, what it comes down to is that you create an archive (such as a .zip file) of whatever you want in the backup and upload it to an S3 bucket. Yes, it’s a backup, but it’s not what most people would term a full-backup solution.

Certainly, it isn’t an enterprise solution. However, it can still work for an individual, home office, or small business. The “Working with Objects” section, earlier in this blog, tells you how to perform all the required tasks using the console.
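A minimal sketch of that archive-and-upload approach in Python, assuming a source directory and backup bucket:

```python
import datetime
import shutil
import boto3

# Zip up the data you want to protect...
stamp = datetime.date.today().isoformat()
archive = shutil.make_archive(f'backup-{stamp}', 'zip', root_dir='/data/projects')

# ...and upload the archive to an S3 bucket dedicated to backups.
boto3.client('s3').upload_file(archive, 'my-backup-bucket', f'backups/backup-{stamp}.zip')
```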

 

Automating backups


You can provide some level of backup automation when working with S3. However, you need to use the initial Command Line Interface (CLI) instructions found at https://aws.amazon.com/getting-started/tutorials/backup-to-s3-cli/ to make this task possible.

 

After you have the CLI installed on your system, you can use your platform’s batch-processing capability to perform backups.

 

For example, a Windows system comes with Task Scheduler to automate the use of batch files at specific times. Of course, now you’re talking about performing a lot of programming, rather than using a nice, off-the-shelf package.

 

Unless your organization just happens to have the right person in place, the Amazon solution to automation is woefully inadequate.

 

You do have other alternatives.

For example, if you have a Python developer in-house, you can use the pip install awscli command to install AWS CLI support on your system.

The process is straightforward and incredibly easy compared to the Amazon alternative. You likely need administrator privileges to get the installation to work as intended.

 

After you have the required Python support in place, you can rely on S3-specific Python backup packages, such as the one at https://pypi.python.org/pypi/s3-backups, to make your job easier. However, you still need programming knowledge to make this solution work. 
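If you do have that Python knowledge in-house, even a plain boto3 script, independent of any third-party package, can perform a crude incremental backup. In this sketch (paths and bucket are assumptions), files are skipped when their size is unchanged, which is a simplification rather than a true change check:

```python
import os
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client('s3')
BUCKET = 'my-backup-bucket'
ROOT = '/data/projects'

def backup():
    for dirpath, _, filenames in os.walk(ROOT):
        for name in filenames:
            path = os.path.join(dirpath, name)
            key = os.path.relpath(path, ROOT).replace(os.sep, '/')
            try:
                head = s3.head_object(Bucket=BUCKET, Key=key)
                # Crude change detection: skip files whose size is unchanged.
                if head['ContentLength'] == os.path.getsize(path):
                    continue
            except ClientError:
                pass  # object doesn't exist yet, so upload it
            s3.upload_file(path, BUCKET, key)

if __name__ == '__main__':
    backup()  # schedule with Task Scheduler, cron, or similar
```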

 

There are currently no optimal solutions for the administrator who has no programming experience whatsoever, but some off-the-shelf packages do look promising.

 

For example, S3cmd (http://s3tools.org/s3cmd) and S3Express (on the same page) offer a packaged solution for handling backups from the Linux, Mac, and Windows command line. Look for more solutions to arrive as S3 continues to gain in popularity as a backup option.

 

Developing a virtual tape library


Most businesses likely have a backup application that works well. The problem is making the backup application see online storage, such as that provided by S3, as a destination.

 

The AWS Storage Gateway (https://aws.amazon.com/storagegateway/) provides a solution in the form of the Gateway-Virtual Tape Library (Gateway-VTL) (https://aws.amazon.com/about-aws/whats-new/2013/11/05/aws-storage-gateway-announces-gateway-virtual-tape-library/).

 

This solution lets you create the appearance of virtual tapes that your backup application uses in place of real tapes. Consequently, you get to use your existing backup product, but you send the output to the cloud instead of having to maintain backups in-house.

 

Amazon does provide you with a trial period to test how well a Gateway-VTL solution works for you. Just how well it works depends on a number of factors, including the speed of your Internet connection.

 

The most important consideration is that a cloud-based solution is unlikely to work as fast as dedicated high-speed local connectivity to a dedicated storage device.

 

However, the cost of using Gateway-VTL may be considerably lower than working with local devices. You can check out the pricing data for this solution at https://aws.amazon.com/storagegateway/pricing/.

 

Combining S3 with Lambda

Lambda enables you to run code on AWS without provisioning or managing a server. AWS provides many other solutions for running code, but this particular solution will appeal to administrators who need to perform a task quickly and without a lot of fuss.

 

The Lambda function that you want to use must exist before you try to create a connection between it and S3. You use the following steps to connect S3 with Lambda:

 

1. Select the bucket that you want to use and then click Properties. The Properties pane opens on the right side of the page.

 

2. Open the Events entry.


S3 shows the event configuration options. Even though this section discusses Lambda, note that you can create other sorts of actions based on S3 events, including publishing to an SNS topic or a Simple Queue Service queue, as described in the “Using S3 events and notifications” section, earlier in this blog.

 

3. Type a name for the event. The example uses MyEvent.

 

4. Click the Events field.

You see a list of the potential events that you can use to initiate an action. The “Using S3 events and notifications” section, earlier in this blog, describes the various events.

 

5. Choose one or more of the events.

You can choose as many events as desired. Every time one of these events occurs, S3 will call your Lambda function.

 

6. Optionally type a value in the Prefix field to limit events to objects that start with a particular set of characters. The prefix can include path information so that events occur only when something changes in a particular folder or folder hierarchy.

 

7. Optionally, type a value in the Suffix field to limit events to objects that end with a particular set of characters.

8. Select the Lambda Function in the Send To field.

9. Select a Lambda function to execute when the events occur.

You may choose an existing function or choose the Add Lambda Function ARN option to access a Lambda function by its Amazon Resource Name (ARN).

 

10. Click Save. The events you selected for the S3 bucket will now invoke the Lambda function.
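You can script the same connection. The following boto3 sketch mirrors Steps 3 through 9 (bucket, function ARN, prefix, and suffix are placeholders); the Lambda function’s resource policy must already allow S3 to invoke it.

```python
import boto3

s3 = boto3.client('s3')

s3.put_bucket_notification_configuration(
    Bucket='my-example-bucket-2019',
    NotificationConfiguration={
        'LambdaFunctionConfigurations': [{
            'Id': 'MyEvent',
            'LambdaFunctionArn': 'arn:aws:lambda:us-east-1:123456789012:function:my-function',
            'Events': ['s3:ObjectCreated:*'],
            'Filter': {'Key': {'FilterRules': [
                {'Name': 'prefix', 'Value': 'incoming/'},  # Step 6: Prefix field
                {'Name': 'suffix', 'Value': '.jpg'},       # Step 7: Suffix field
            ]}},
        }],
    },
)
```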

 

Introducing AWS CloudTrail

AWS CloudTrail records the API activity that occurs in your AWS account and delivers the resulting log files to an S3 bucket. Here’s what that gives you:

In-depth visibility: Using CloudTrail, you can easily gain better insights into your account's usage by recording each user's activities: which user initiated the creation of a new resource, from which IP address the request originated, which resources were created and at what time, and much more!

 

Easier compliance monitoring: With CloudTrail, you can easily record and log events occurring within your AWS account, whether they originate from the Management Console, the AWS CLI, or even other AWS tools and services.

 

The best thing about this is that you can integrate CloudTrail with other AWS services, such as Amazon CloudWatch, to alert on and respond to out-of-compliance events.
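As a small illustration of that visibility, the CloudTrail lookup_events API can answer questions such as “who launched this instance, and from where?” The event name below is just an example.

```python
import json
import boto3

ct = boto3.client('cloudtrail')

# Find recent RunInstances calls (EC2 instance launches) in the event history.
resp = ct.lookup_events(
    LookupAttributes=[{'AttributeKey': 'EventName', 'AttributeValue': 'RunInstances'}],
    MaxResults=10,
)

for event in resp['Events']:
    detail = json.loads(event['CloudTrailEvent'])
    print(event.get('Username'), detail.get('sourceIPAddress'), event['EventTime'])
```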

 

Monitoring CloudTrail Logs using CloudWatch

You can stream CloudTrail's Log files into a CloudWatch Logs Log Group and then define metric filters and alarms over the logged events. That way, specific API activity, such as failed console sign-ins or security group changes, can raise a CloudWatch alarm and notify you through SNS.
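As a sketch of that setup (the Log Group name, filter pattern, and SNS topic ARN are assumptions), one metric filter plus one alarm might look like this in boto3:

```python
import boto3

logs = boto3.client('logs')
cloudwatch = boto3.client('cloudwatch')

# Count failed console sign-ins recorded in the CloudTrail Log Group.
logs.put_metric_filter(
    logGroupName='CloudTrail/DefaultLogGroup',
    filterName='ConsoleSignInFailures',
    filterPattern='{ ($.eventName = ConsoleLogin) && ($.errorMessage = "Failed authentication") }',
    metricTransformations=[{
        'metricName': 'ConsoleSignInFailureCount',
        'metricNamespace': 'CloudTrailMetrics',
        'metricValue': '1',
    }],
)

# Alarm when three or more failures occur within five minutes.
cloudwatch.put_metric_alarm(
    AlarmName='ConsoleSignInFailures',
    MetricName='ConsoleSignInFailureCount',
    Namespace='CloudTrailMetrics',
    Statistic='Sum',
    Period=300,
    EvaluationPeriods=1,
    Threshold=3,
    ComparisonOperator='GreaterThanOrEqualToThreshold',
    AlarmActions=['arn:aws:sns:us-east-1:123456789012:ops-alerts'],
)
```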

 


Automating deployment of CloudWatch alarms for AWS CloudTrail


As discussed in the previous section, you can easily create different CloudWatch metrics and alarms for monitoring your CloudTrail Log files. Luckily for us, AWS provides a really simple and easy-to-use CloudFormation template, which allows you to get up and running with a few essential alarms in a matter of minutes!

 

The best part of this template is that you can extend it by adding your own custom alarms and notifications as well. So without any further ado, let's get started with it. The template itself is fairly simple and easy to work with.

 

You can download a version at https://s3-us-west-2.amazonaws.com/awscloudtrail/cloudwatch-alarms-for-cloudtrail-api-activity/CloudWatch_Alarms_for_CloudTrail_API_Activity.json.

 

At the time of writing this blog, this template supports the creation of metric filters for the following set of AWS resources:

  • Amazon EC2 instances
  • IAM policies
  • Internet gateways
  • Network ACLs
  • Security groups

 

1. To create and launch this CloudFormation stack, head over to the CloudFormation dashboard by navigating to https://console.aws.amazon.com/cloudformation.

2. Next, select the option Create Stack to bring up the CloudFormation template selector page. Paste https://s3-us-west-2.amazonaws.com/awscloudtrail/cloudwatch-alarms-for-cloudtrail-api-activity/CloudWatch_Alarms_for_CloudTrail_API_Activity.json into the Specify an Amazon S3 template URL field, and click on Next to continue.

In the Specify Details page, provide a suitable Stack name and fill out the following required parameters:

Email: A valid email address that will receive all SNS notifications. You will have to confirm this email subscription once the template is successfully deployed.

 

LogGroupName: The name of the Log Group that we created earlier in this blog.

 

Once the required values are filled in, click on Next to proceed. Review the settings of the template on the Review page and finally select the Create option to complete the process. The template takes a few minutes to completely finish the creation and configuration of the required alarms.
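You can also deploy the same template without the console. A boto3 sketch follows; the stack name, email address, and Log Group name are placeholders, while the Email and LogGroupName parameter keys come from the template as described above.

```python
import boto3

cfn = boto3.client('cloudformation')
template_url = ('https://s3-us-west-2.amazonaws.com/awscloudtrail/'
                'cloudwatch-alarms-for-cloudtrail-api-activity/'
                'CloudWatch_Alarms_for_CloudTrail_API_Activity.json')

cfn.create_stack(
    StackName='cloudtrail-api-activity-alarms',
    TemplateURL=template_url,
    Parameters=[
        {'ParameterKey': 'Email', 'ParameterValue': 'ops@example.com'},
        {'ParameterKey': 'LogGroupName', 'ParameterValue': 'CloudTrail/DefaultLogGroup'},
    ],
)

# Block until the alarms and SNS topic finish creating.
cfn.get_waiter('stack_create_complete').wait(StackName='cloudtrail-api-activity-alarms')
```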

 

So far, we have seen how to integrate CloudTrail's Log files with CloudWatch Log Groups for configuring custom metrics as well as alarms for notifications. But how do you effectively analyze and manage these logs, especially if you have extremely large volumes to deal with?

 

This is exactly what we will be learning about in the next section, along with the help of yet another awesome AWS service called Amazon Elasticsearch!

 

 


Introducing AWS Config


AWS Config is yet another managed service, under the security and governance wing of services, that provides a detailed view of the configuration settings of each of your AWS resources.

 

Configuration settings here can be anything, from simple settings made to your EC2 instances or VPC subnets, to how one resource is related to another, such as how an EC2 instance is related to an EBS volume, an ENI, and so on.

 

Using AWS Config, you can view and compare the configuration changes that were made to your resources in the past, and take the necessary preventive actions if needed.

 

Here's a list of things that you can basically achieve by using AWS Config:

  • Evaluate your AWS resource configurations against the desired setting
  • Retrieve and view historical configurations of one or more resources
  • Send notifications whenever a particular resource is created, modified, or deleted
  • Obtain a configuration snapshot of your resource that you can later use as a blueprint or template
  • View relationships and hierarchies between resources, such as all the instances that are part of a particular network subnet, and so on

 

Using AWS Config enables you to manage your resources more effectively by setting governing policies and standardizing configurations for your resources. Each time a configuration change violates a governing policy, you can trigger notifications or even perform remediation against the change.

 

Furthermore, AWS Config also provides out-of-the-box integration with services such as AWS CloudTrail, giving you a complete end-to-end auditing and compliance monitoring solution for your AWS environment.

 

Before we get started by setting up AWS Config for our own scenario, let's first take a quick look at some of its important concepts and terminologies.

 

Concepts and terminologies


The following are some of the key concepts and terminologies that you ought to keep in mind when working with AWS Config:

 

Config rules: Config rules form the heart of operations at AWS Config. These are essentially rules that represent the desired configuration settings for a particular AWS resource.

 

While the service monitors your resources for any changes, these changes get mapped to one or more config rules, which in turn flag the resource for any non-compliance.

 

AWS Config provides you with some rules out of the box that you can use as-is or even customize as per your requirements. Alternatively, you can also create custom rules completely from scratch.
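As a sketch of enabling one such out-of-the-box (managed) rule with boto3: this assumes a configuration recorder is already running in the account and uses the managed rule that flags publicly readable S3 buckets.

```python
import boto3

config = boto3.client('config')

# Attach an AWS-managed rule; Config evaluates every S3 bucket against it.
config.put_config_rule(ConfigRule={
    'ConfigRuleName': 's3-bucket-public-read-prohibited',
    'Source': {
        'Owner': 'AWS',
        'SourceIdentifier': 'S3_BUCKET_PUBLIC_READ_PROHIBITED',
    },
    'Scope': {'ComplianceResourceTypes': ['AWS::S3::Bucket']},
})
```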

 

Configuration items: Configuration items are basically a point-in-time representation of a particular AWS resource's configuration. An item can include various metadata about your resource, such as its current configuration attributes, its relationships with other AWS resources (if any), and its events, such as when it was created, when it was last updated, and so on.

 

Configuration items are created by AWS Config automatically each time it detects a change in a particular resource's configuration.

 

Configuration history: A collection of configuration items of a resource over a particular period of time is called its configuration history.

 

You can use this feature to compare the changes that a resource may undergo over time, and then decide to take necessary actions. Configuration history is stored in an Amazon S3 bucket that you specify.

 

Configuration snapshot: A configuration snapshot is also a collection of configuration items of a particular resource over time. This snapshot acts as a template or benchmark that can then be used to compare and validate your resource's current configurational settings. 

 

With this in mind, let's look at some simple steps which allow you to get started with your own AWS Config setup in a matter of minutes!

 

Tips and best practices

Here's a list of a few essential tips and best practices that you ought to keep in mind when working with AWS CloudTrail, AWS Config, and security in general:

 

Analyze and audit security configurations periodically: Although AWS provides a variety of services for safeguarding your cloud environment, it is the organization's mandate to ensure that the security rules are enforced and periodically verified against any potential misconfigurations.

 

Complete audit trail for all users: Ensure that all resource creation, modifications, and terminations are tracked minutely for each user, including root, IAM, and federated users.

 

Enable CloudTrail globally: By enabling logging at a global level, CloudTrail can essentially capture logs for all AWS services, including the global ones such as IAM, CloudFront, and so on.

 

Enable CloudTrail Log file validation: Although this is an optional setting, it is always recommended that you enable CloudTrail Log file validation for an added layer of integrity and security.

 

Enable access logging for CloudTrail and Config buckets: Since both CloudTrail and Config leverage S3 buckets to store the captured logs, it is always recommended that you enable access logging for them to record unwarranted and unauthorized access. Alternatively, you can also restrict access to the logs and buckets to a specialized group of users.

 

Encrypt log files at rest: Encrypting the log files at rest provides an additional layer of protection from unauthorized viewing or editing of the logged data.
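Several of these tips can be applied in a single trail definition. The following boto3 sketch uses placeholder names for the trail, bucket, and KMS alias, and assumes the destination bucket already carries a policy that lets CloudTrail write to it.

```python
import boto3

ct = boto3.client('cloudtrail')

ct.create_trail(
    Name='org-audit-trail',
    S3BucketName='my-cloudtrail-logs-bucket',   # bucket policy must allow CloudTrail
    IsMultiRegionTrail=True,                    # enable CloudTrail globally
    IncludeGlobalServiceEvents=True,            # capture IAM, CloudFront, and so on
    EnableLogFileValidation=True,               # integrity digests for log files
    KmsKeyId='alias/cloudtrail-logs',           # encrypt log files at rest
)

ct.start_logging(Name='org-audit-trail')
```

With the trail logging globally, validated, and encrypted, you have a solid baseline for the auditing and compliance workflows described earlier in this blog.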

Recommend