What Is Elastic Beanstalk and How It Works

At one time, developers created desktop applications to harness the power and flexibility that desktop systems can provide. In many situations, developers still need this power and flexibility, but more and more application development occurs on the web. The reasons for this change are many, but they all come down to convenience. Users want to use applications that can run on any device, anywhere, and in the same way.

 

To make this happen, developers use web applications that run in a browser or a browser-like environment, which is where Elastic Beanstalk (EB) comes into play. Using EB enables developers to create applications that run anywhere, on any device, without the reliability and scalability problems that can occur when using a company-owned host.

 

Relying on the cloud makes things easier for administrators as well, because now they can tweak an application configuration from anywhere. The first section of this blog explores how EB makes moving applications to the cloud easier for everyone involved.

 

As an administrator, you need to know how to install EB applications. This blog doesn't show how to write any applications, but it does take you through the process of installing one.

 

In addition, it discusses the requirements for making updates and getting rid of applications after you finish using them. These three tasks are an essential part of using EB, even if you plan to keep the application private to your organization. Whether your application is public or private, you likely want to monitor it.

 

This blog shows how to use native EB functionality to perform the task. By using the EB monitoring features, you can integrate all your application activities using a single interface and ensure tight integration between the monitoring software and the application. There is no charge for EB.

 

However, Amazon does charge for resources that your application uses, so you need to exercise care when working through the examples in this blog. In addition, some of the supplementary services discussed in this blog, such as Amazon CloudWatch, can incur additional costs of their own.

 

Make sure that you understand the pricing strategies for services such as Amazon CloudWatch before you work through the examples. In addition, configuring your setup to provide email notifications of any charges is a good idea.

 

Considering Elastic Beanstalk (EB)


 

EB focuses on making it easy to upload, configure, and manage applications of all sorts. An application isn't useful unless people can access it with ease and make it perform whatever tasks it's designed to perform in the most seamless manner possible. Achieving these goals requires that the hosting platform support various programming methodologies on a variety of platforms so that developers can use the tools most suited to a particular need.

 

When working with AWS, you can currently create web applications using these languages (with more to follow):

  • Java
  • .NET
  • PHP
  • Node.js
  • Python
  • Ruby
  • Docker

The applications run in managed containers for the language you choose.

 

A managed container is one in which the host manages application resources and ensures that the application can't easily crash the system. The container acts as a shield between the application you're working with and every other application that the system hosts. Developers may create the applications, but administrators must manage them.

 

To make administrators as efficient as possible, a host must support a number of platforms. Matching the language (to meet developer needs) with a platform (to meet administrator needs) on a host can prove difficult, but EB is up to the task because it provides support for these web application platforms:

  • Apache
  • Nginx
  • Passenger
  • IIS

 

In looking through the EB documentation, you may initially get the idea that this service is designed to meet developer needs — to simplify application deployment and management in a way that allows a developer more time to code. However, administrators need more time, too.

 

The management features provided by EB address the needs of administrator and developer alike. This blog focuses almost entirely on the administrator view of EB. The three cornerstones of EB application management are the following:

 

Deployment: Getting the application onto the server so that someone can use it

 

Management: Configuring the application as people find problems using it

 

Scaling: Providing a good application experience for everyone by ensuring that the application runs quickly, reliably, and without security issues

 

As part of this whole picture, EB also relies on application health monitoring through Amazon CloudWatch. The Amazon CloudWatch service provides the means for determining when application health issues require the host to make changes in the application environment, such as by using autoscaling to make sure that the application has enough resources to run properly.

 

Deploying an Elastic Beanstalk Application


 

Before you can use your EB application, you must deploy it (make it accessible) on a server. Deployment means:

 

1.  Creating an application entry.

 

2. Uploading the application to Amazon. You perform this step as part of creating the application entry.

 

3. Configuring the application so that it runs as anticipated, which is also part of creating the application entry on the first pass, but you can also change the configuration later.

 

4. Configuring the application environment so that it has access to the required resources. You perform the initial setup while creating the application entry, but you make configuration changes later based on the results of monitoring that you perform.

 

5.  Testing the application to determine whether it works as anticipated.
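
As an aside, the same five steps can also be scripted with the Elastic Beanstalk CLI, which this blog covers in more depth later. The following is only a minimal sketch, using the application and environment names from the walkthrough that follows:

eb init TestApp -p php -r us-east-1   # steps 1-3: create, upload, and configure the application entry
eb create MyCompany-TestEnv --single  # step 4: create a single-instance environment
eb open                               # step 5: open the environment URL in a browser to test it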

 

EB comes with no additional charge; however, you must pay for any resources that your application uses. Be sure to keep this fact in mind as you work through the blog. The examples don’t require much in the way of resources, but you do need to pay for them, which means that you may need permission to install the applications before you proceed.

 

The blog structure is such that you can simply follow along with the text if desired. The following sections describe how to deploy an EB application.

 

Creating the Elastic Beanstalk application entry


Before you do anything else, you need to define an application entry in order to run an application using EB. The application entry acts as a sort of container for holding the application. The following steps describe how to create the application entry:

1. Sign in to AWS using your administrator account.

 

2. Navigate to the Elastic Beanstalk Console at https://console.aws.amazon.com/elasticbeanstalk.

You see a Welcome page that contains interesting information about Elastic Beanstalk and provides links to additional information and sample applications.

 

3. Click Create New Application.

You see the Application Information page. You need to provide an application name (identity) and, optionally, describe it.

 

4. Type TestApp in the Application Name field, type A test application. in the Description field, and then click Next. EB provides two default environments:

 

Web Server Environment: Lets you run web applications using any of the languages that support web development.

 

Worker Environment: Creates a background application that you can call on in a variety of ways. Background applications don’t provide user interfaces, so you normally use this option to create support for another application.

 

5. Click Create Web Server.

EB asks you to configure the environment type. The Preconfigured Configuration field contains a listing of languages that you can use. The Environment Type field defines how to run the application: single instance or using both load balancing and autoscaling.

 

6. Choose PHP in the Preconfigured Configuration field, choose Single Instance in the Environment Type field, and then click Next.

You see options for an application source. Normally, you upload your own application or rely on an application defined as part of a Simple Storage Service (S3) setup. However, because this is an example, the next step will ask you to use a sample application. Working with sample applications makes experimenting easier because you know that nothing is wrong with the application code to cause a failure.

 

7. Select the Sample Application option and click Next.

EB displays the Environment Information page. You must create a unique environment name for your application.

 

8. Type MyCompany-TestEnv in the Environment Name field.

Note that EB automatically provides an Environment URL field value for you. In most cases, you want to keep that URL to ensure that the URL will work properly. The Environment URL field automatically provides the location of your EC2 instances to run the web application, so normally you won’t need to change this value, either.

 

9. Click Check Availability.

The square around the Environment URL field changes to green if the check is successful. Otherwise, you need to provide a different Environment Name field entry.

 

10. Type A test environment. in the Description field and then click Next.

EB asks about the use of additional resources. Remember that additional resources generally incur fees, so keep these options blank when working with a test application. The RDS DB option creates a link to a database to use with the application. The VPC option creates a Virtual Private Cloud (VPC) to run the application.

 

11. Click Next.

At this point, you need to define the configuration details. The options you use depend on how you want to run the application. However, the steps tell you how to maintain a free setup. Using other Instance Type field settings could incur costs.

 

12. Choose t2.micro in the Instance Type field and the key pair that you want to use.

 

13. (Optional) Type your email address in the Email Address field.

 

14. Choose Basic in the System Type field (Health Reporting section) and then click Next.

EB asks whether you want to define Environment Tags. These are key-value pairs used to help configure your application. The sample application doesn’t require any tags.

 

15. Click Next.

The Permissions page contains options for creating or using permissions. The test setup doesn’t contain any permissions, so you won’t see any options in the Instance Profile or Service Role fields. If you had already defined another application, these fields would allow you to reuse those existing permissions. EB creates a set of default permissions for you, which you can later modify as needed.

 

16. Click Next.

You see a Review page that contains all the settings made so far in the procedure. Check the settings to ensure that you made the entries correctly.

 

17. Click Launch.

You see EB launching your application. Be patient: This process can require several minutes to complete.

 

You can find some sample applications at http://docs.aws.amazon.com/elasticbeanstalk/. All you need to do is download the application and then use it as part of following the exercises in this section and the sections that follow.

 

The sample applications cover a number of the languages and platforms, but not all of them. If you download an application and install it using the techniques found in this blog, you must also pay for resources that the application requires to run.

 

Testing the Elastic Beanstalk application deployment


After you complete the steps in the previous section of the blog, you have a running application. To test it, locate the URL field entry near the top of the page and click the link to see your application in the browser.

 

Setting application security

Any code you deploy using EB becomes immediately public at the URL provided in the URL field unless you change the security rules. This means that you really do need to verify that the page is safe to display before you deploy it. However, you can also make the page private using the following steps.

 

1. Choose Services ➪ EC2 from the menu at the top of the page. You see the EC2 Dashboard page.

 

2. Choose Security Groups from the Navigation pane.

EC2 displays a list of security groups. The selected security group in the figure is the one used with EB. If you followed the procedure in the “Creating the application entry” section, earlier in this blog, you should see a security group with a similar name.

 

3. Select the security group entry for EB configuration and choose Actions ➪ Edit Inbound Rules.

EC2 displays the Edit Inbound Rules dialog box. You can change any configuration option for the security group that will modify the way in which incoming requests work. For example, you can change the HTTP type to HTTPS to create secure access to the page.

 

However, in this case, you can use a simpler method to secure access to the page in a reasonable way: Simply disallow access from sources other than your system.

 

4. Choose My IP in the Source field for both HTTP and SSH access of the security group. EC2 modifies the rules as expected.

 

5. Click Add Rule.

You see a new rule added to the list. To make this setup work, you must also provide access to the website to the instance security group. Otherwise, when you attempt to perform updates, the updates will fail.

 

6. Choose All Traffic in the Type field and All in the Protocol field. These two settings provide complete access to the security group.

 

7. Choose Custom in the Source field and type sg in the text field after it.

You see a listing of security groups for your server. A source can consist of a Classless Inter-Domain Routing (CIDR) address, IP address, or security group ID. Typing sg tells EC2 that you want to use a security group.

 

8. Click the security group for the website instance in the list.

The security group appears in the Source field.

 

9. Click Save.

The inbound security rules now prevent access to the site by any entity other than the website instance or you. The IP address supplied when you choose My IP in the Source field uses the IP address of your current location.

 

If other people use the same router (and therefore the same IP address), they also have access to the website. Consequently, setting the inbound rules does help provide security, but only a certain level of security.

 

In addition, the IP address can change when you reset the router and then reconnect to the Internet provider. Consequently, you could find that you lose access to the test site you’ve created because of the change in IP address. If you suddenly find that you have lost access, verify that your IP address hasn’t changed.
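
If you prefer working from a terminal, roughly the same restriction can be applied with the AWS CLI. This is only a sketch; the security group ID and IP address below are placeholders for your own values:

# Remove the default rule that allows HTTP from anywhere.
aws ec2 revoke-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 80 --cidr 0.0.0.0/0

# Allow HTTP only from your current public IP address.
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 80 --cidr 203.0.113.25/32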

 

Configuring the Elastic Beanstalk application

You can modify the application configuration as needed. You initially set all these configuration options during the creation process, but getting the settings correct at the outset isn't always possible. Simply select the Configuration entry in the Navigation pane and you see a listing of the application configuration options.

 

To change a configuration option, click the button next to the heading, such as Scaling, that you want to modify. You see a new page that contains the configuration options. After you make the configuration changes, click Apply to make them active, or click Cancel to discard them.

 

Monitoring the Elastic Beanstalk application

Monitoring lets you determine whether the application environment is sufficient for your application. For example, you may decide that you require a different instance type because of the amount of traffic.

 

To monitor your application, choose the Monitoring option in the Navigation pane. The test application uses hardly any of the resources provided to it, so you don't need to make any changes.

 

Of course, this is an expected outcome given that you’re the only one with access to the application. The line graphs below the text output show graphically how many resources your application uses. You can also change the monitoring criteria for longer monitoring sessions.

 

Updating an EB Application

Applications don’t exist in a vacuum: Organizational and other requirements change, environments evolve, user needs morph, and so on. As the application functionality and operation changes, so must the application configuration and setup.

 

The environmental needs change as well. In other words, you must perform an EB update to keep the application current so that users can continue using it. The following sections describe the kinds of changes you need to consider during an update.

 

Getting the sample code and making a change

You’re extremely unlikely to upload just one version of your application. An application actually has a life cycle, and change is simply part of the process. Making changes using the sample application means getting the current code and then doing something with it.

 

The following steps help you get a copy of the current application and perform a small change on it. You most definitely don’t need to be a developer to perform these steps.

1. Download the php-v1.zip file found at http://docs.aws.amazon.com/elasticbeanstalk/.

 

2. Expand the archive into its own folder (directory) on your hard drive.

You see a number of application files, including index.php. The index.php file contains the code used to display the web page. Modifying the code changes how the web page appears.

 

3. Open the index.php file using any text editor.

The text editor must output pure text files without any formatting. Notepad on Windows, gedit on Linux, and TextEdit on the Mac are all examples of plain-text editors.

 

On the other hand, Microsoft Word, LibreOffice, and FreeOffice are all examples of editors that you can’t use to make modifications to PHP files.

 

4. Locate the line that reads <h1>Congratulations!</h1> and replace it with <h1>Hello There!</h1>.

The change you’ve just made modifies the greeting you see. It’s a small change, but it serves to demonstrate how modifications typically work.

 

5. Save the file.

The modified file is now ready to upload to Amazon.

 

Uploading the modified application


To see any coding changes you make, you must upload the changes to AWS. It doesn't matter how complex the application becomes: At some point, you use the same process to upload changed files. The following steps describe how to perform this task:

 

1. Place the files for your application into an archive.

The normal archive format is a .zip file (a packaging sketch appears after these steps).

 

2. Open the application dashboard by clicking its entry in the initial EB page.

You see options for working with the application.

 

3. Click Upload and Deploy.

 

4. Click Browse.

You see a File Upload window consistent with your platform and browser.

 

5. Select the file containing the modified application code and click Open.

EB displays the filename next to the Browse button.

 

6. Type Changed-Greeting in the Version Label field and then click Deploy.

EB displays messages telling you that it’s updating the environment. Be patient; this process can take a few minutes to complete.

 

At some point, the application indicator turns green again (and you see the checkmark icon), which means that you can test the application using the same procedure found in the “Testing the application deployment” section, earlier in this blog. What you should see is a change in greeting from “Congratulations!” to “Hello There!”
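
Step 1 of the preceding procedure mentions placing the files into an archive. Here is a minimal sketch of how you might package the modified sample from a terminal; the folder and archive names are just examples:

# Run from inside the folder where you expanded php-v1.zip and edited index.php.
# EB expects the application files at the root of the archive, so zip the
# contents of the folder rather than the folder itself.
cd php-v1
zip -r ../Changed-Greeting.zip .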

 

Switching Elastic Beanstalk application versions

You now have two application versions uploaded to the EC2 instance. In some cases, you may have to switch between application versions. Perhaps a fix in a new version really didn’t work out, so you need to go to an older, more stable version. The following steps describe how to switch between versions:

 

1. Click Upload and Deploy.

 Notice the Application Versions Page link. This page contains a listing of all the versions that are available for use.

 

2. Click the Application Versions Page link.

You see the Application Versions page. The last field of the table shows where each version is deployed. In this case, Changed-Greeting is deployed, but Sample Application isn’t.

 

3. Check the Sample Application version and click Deploy.

EB displays the deployment options. The fields contain the same environment settings as before. You don’t want to change these settings unless you want to create a new environment.

 

4. Click Deploy.

EB displays messages telling you that it's updating the environment. At some point, the application indicator turns green again, which means that you can test the application using the same procedure found in the "Testing the application deployment" section, earlier in this blog.

 

Removing Unneeded Applications

EB applications get old just like any other application does. Software becomes outdated to the point at which additional updates become counterproductive and expensive; creating a new application becomes easier.

 

When an application gets old enough, you need to shut it down in a graceful manner and remove it after training users to use whatever new application you have in place.

 

Transitions from one application to another are one of the most difficult administrative tasks because it’s hard to foresee everything that can go wrong, and transitions add layers of complexity that administrators may not understand.

 

This blog can’t provide you with everything needed to perform a transition, but it can show you the mechanics of removing an EB application you no longer need. However, before you remove the application, make sure that you have the transition process well planned and have backup processes in place for when things go wrong.

 

To remove an application from an instance, you select its entry in the Application Versions page and click Delete. However, before you delete an application, be sure to have another version of the application deployed to the instance, or site users may suddenly find that they can’t access your site.

 

To delete an entire application, including all the versions, select the Environments page. Choose Actions ➪ Delete Application to remove the entire application. Removing the application doesn’t remove the EC2 instance.

 

Introducing Amazon Elastic Beanstalk

One of the key features of a cloud is to provide its users and developers with a seamless and easy-to-use platform for developing and deploying their applications. That's exactly where Elastic Beanstalk comes in. Elastic Beanstalk was first launched in 2011, and has continuously evolved to become a full-fledged PaaS offering from AWS.

 

Elastic Beanstalk is your one-stop shop for quickly deploying and managing your web applications in AWS. All you need to do is upload your code to Beanstalk, and voila! Elastic Beanstalk takes care of the entire application's deployment process, from EC2 capacity provisioning to auto-scaling the instances and even load balancing using an ELB! Elastic Beanstalk does it all so that you can concentrate on more important tasks, such as developing your applications and not getting bogged down with complex operational nuances.

 

But for me, Beanstalk is much more than just the deployment and management of your applications. Let's look at some of the key benefits of leveraging Elastic Beanstalk for your web applications:

 

Deployment support: Today, Beanstalk supports standard EC2 instances and Docker containers as the basis for your application's deployment. This enables you to host your web applications and your microservices-based apps on AWS with relative ease.

 

Platform support: Beanstalk provides a rich set of platforms for developers to deploy their apps on. Today, the list includes Java, PHP, Python, .NET, Node.js, and Ruby, with more languages and platforms to be added in the future.

 

Developer friendly: It is extremely easy to build and deploy your applications over to AWS using Beanstalk. You can leverage a wide variety of options, including the AWS Management Console or its CLI, a code repository such as Git, or even an IDE such as Eclipse or Visual Studio to upload your application, and the rest is all taken care of by Beanstalk itself.

 

Control: With Beanstalk, you get complete control over your underlying AWS resources as well as the environments on which your application runs. You can change the instance types, scale the resources, add more application environments, configure ELBs, and much more!

 

Costs: One of the best things about Beanstalk is that it's absolutely free! Yes, you heard it right! Free! You only pay for the AWS resources that are spun up based on the configurations that you provide and nothing more. Amazing isn't it?

With these pointers in mind, let's look at some of the essential concepts and terminologies that you ought to know before getting started with Elastic Beanstalk.

 

Concepts and terminologies

Here's a look at some of the common concepts and terminologies that you will often come across while working with Elastic Beanstalk:

 

Applications: An application in Elastic Beanstalk is basically a collection of Beanstalk's internal components, and includes environments, versions, events, and various other things. Think of an Elastic Beanstalk application as a high-level container which contains different aspects of your application.

 

Application versions: Application versions are nothing more than different versions of an application's code. Each version of your application's code is stored in an S3 Bucket that is auto-created and managed by Beanstalk itself. You can create multiple versions of your application code and use this for deployment to one or more environments for testing and comparison.
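
As a quick illustration, each deployed version ends up as an object in that Beanstalk-managed bucket, whose name follows the elasticbeanstalk-<region>-<account-id> convention. The account ID below is a placeholder:

# List the application versions that Beanstalk has stored in S3.
aws s3 ls s3://elasticbeanstalk-us-east-1-123456789012/ --recursive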

 

Environments: An Elastic Beanstalk environment is yet another logical container that hosts one application version at a time on a specified set of instances, load balancers, auto scaling groups, and so on. Typically, you would have an environment for development, one for acceptance testing, and another one for production hosting, however, there are no hard and fast rules on this.

 

An environment comes in two flavors, and you can choose between the two during your initial environment setup phase. The first is called a web server environment, and is basically created for applications that serve HTTP requests, such as web applications and so on. The second is called a worker environment, where the application pulls tasks from an Amazon SQS Queue. Here's a look at each of these flavors in a bit more detail:

 

Web server environment:

As mentioned earlier, this particular environment is well suited to hosting and managing web frontend applications, such as websites, mobile applications, and so on. As part of this environment, Beanstalk provisions an internet-facing Elastic Load Balancer, an autoscaling group with some minimalistic configuration settings, and a small number of EC2 instances that contain your application code along with a pre-installed agent called Host Manager.

 

The Host Manager agent is a key component in the entire setup process, as it is responsible for deploying and monitoring the application as well as periodically patching the instance and rotating the logs.

 

Here's a representational diagram depicting a simple application being scaled using a web server environment. Note the RDS instance in the diagram as well. You can also choose to set up an RDS instance for your application using Elastic Beanstalk, or add it to the application stack manually later:

 

An additional point worth mentioning here is that every environment has a unique CNAME prefix, for example, mywordpress. The CNAME maps to a URL of the form http://mywordpress.us-east-1.elasticbeanstalk.com. This URL is aliased in Amazon's DNS service, Route 53, to an Elastic Load Balancing URL, something like http://abcdef-123456.us-east-1.elb.amazonaws.com, by using a CNAME record.
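
You can see this mapping for yourself with a DNS lookup. The hostname below is the example from the text, so substitute your own environment URL:

# Follow the record chain from the environment URL down to the load balancer.
dig mywordpress.us-east-1.elasticbeanstalk.com +noall +answer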

 

Worker environment: The worker environment works in a very different way to the web server environment. In this case, Elastic Beanstalk starts up an SQS Queue in your environment and installs a small daemon into each of the worker instances. The daemon is responsible for regularly polling the queue for newer messages, and if a message is present, the daemon pulls it into the worker instance for consumption, as depicted in the following diagram:

 

Ideally, you can use a combination of web and worker environments to host your applications, so that there is a clear decoupling of your web frontend resources and your backend processing worker instances. Keep in mind that there are a lot more design considerations that you also ought to think about while setting up your environments, such as how scalability will be handled, which storage option suits each type of data (for example, S3 for logs and RDS for application-centric data), security, fault tolerance, and much more.
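
As a concrete illustration of the worker flow described above, a frontend component hands off work simply by sending a message to the environment's SQS queue; the daemon on each worker instance then delivers it to your application as an HTTP POST. The queue URL and message body below are hypothetical:

# Send a unit of work to the queue that the worker environment polls.
aws sqs send-message \
    --queue-url https://sqs.us-east-1.amazonaws.com/123456789012/awseb-worker-queue \
    --message-body '{"task": "resize-image", "key": "uploads/photo-01.jpg"}'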

 

With this section completed, let's move on to the fun part and see how to get started with using Elastic Beanstalk!

 

Getting started with Elastic Beanstalk

In this section, we will be performing a deep dive into how to set up a fully-functional Dev and Prod environment for our simple WordPress application using Elastic Beanstalk. Before we get started, here is a list of some prerequisite items that you need to have in place before we can proceed:

 

A valid AWS account and user credentials with the required set of privileges to run the AWS CLI and the Elastic Beanstalk CLI.

A sandbox/Dev instance to download the WordPress installation and later use it to push the application code over to the respective Beanstalk environment. Note that you can also use other resources, such as a Git URL or an IDE, but for now we will be focusing on this approach.

 

Creating the Dev environment

Let's first start off by creating a simple and straightforward development environment for our WordPress site. To do so, execute the following steps:

 

Sign in to the AWS Console and select the Elastic Beanstalk option from the Services filter, or alternatively, launch the Elastic Beanstalk console by opening the URL https://console.aws.amazon.com/elasticbeanstalk in a browser of your choice.

 

Next, select the Create New Application option to get started. Remember, an application is the highest level of the container for our application code, which can contain one or more environments as required.

 

In the Create New Application dialog box, provide a suitable Application Name and an optional Description to get started with. Click on Create once completed.

 

With the basic application container created, you can now go ahead and create the development environment. To do so, from the Actions drop-down list, select the Create New Environment option.

 

Here, you will be provided with an option to either opt for the Web server environment or the Worker environment configuration. Remember, an environment type can be selected only once here, so make sure that you select the correct tier based on your application's requirements. In this case, I've opted to select the Web server environment, as shown in the following screenshot:

 

On the Create a new environment wizard page, provide a suitable Environment name for your WordPress site. Since this is a development environment, I've gone ahead and named it YoyoWordpress-dev. Next, in the Domain field, provide a unique name for your website's domain URL. The URL will be suffixed by the region-specific Elastic Beanstalk URL, as shown in the following screenshot:

 

Next, type in a suitable Description for your new environment, and move on toward the Base configuration section. Here, from the Platform drop-down, select the Preconfigured platform option and opt for the PHP platform, as depicted in the following screenshot. PHP is our default option as WordPress is built on PHP 5.6. Today, Beanstalk supports packer builder, Docker containers, Go, Java SE, Java with Tomcat, .NET on Windows Server with IIS, Node.js, PHP, Python, and Ruby, with more platform support coming shortly:

 

Now, here's the part where you need to keep your calm! Leave the rest of the options as their default values and select Configure more options, not the Create environment option! Yes, we will be configuring a few additional items first and will create our development environment later!

 

On the Configure Environment page, you can opt to select one of the three preconfigured Configuration presets options, based on your application's requirements. The presets are briefly explained here:

 

Low cost (Free tier eligible): This particular configuration will launch a single (t2.micro) instance with no load balancing or autoscaling group configured. This is ideal if you just want to get started with an application using the basics or wish to set up a minimalistic Dev environment, as in this case.

 

High availability: Unlike the low-cost preset, the high-availability configuration comes pre-equipped with an autoscaling group that can scale up to a default of four instances or more, and an Elastic Load Balancer that has cross-zone load balancing and connection draining enabled by default. Besides this, you also get a host of CloudWatch alarms created for monitoring, as well as security groups for your instance and Load Balancer.

 

Custom configuration: You can additionally opt to configure your environment based on other parameters. You can select this preset, and modify each and every component present within your environment as you see fit.

 

With the Configuration preset set to Low cost (Free tier), the next item that we can modify is the Platform configuration. Elastic Beanstalk supports the following PHP platform configurations:

 

  • PHP 7.1 (Amazon Linux AMI 2017.03.1, PHP 7.1.7)
  • PHP 7.0 (Amazon Linux AMI 2017.03.1, PHP 7.0.21)
  • PHP 5.6 (Amazon Linux AMI 2017.03.1, PHP 5.6.31)
  • PHP 5.5 (Amazon Linux AMI 2017.03.1, PHP 5.5.38)
  • PHP 5.4 (Amazon Linux AMI 2017.03.1, PHP 5.4.45)

 

Since we are using a WordPress application, we need to modify the platform as well, to accommodate for the correct PHP version. To do so, select the Change platform configuration option. This will bring up the Choose a platform version dialog, as shown here:

 

Here, from the drop-down list, search and select the 64bit Amazon Linux 2017.03 v2.5.0 running PHP 5.6 option, as WordPress execution is stable with PHP 5.6. Once done, click on Save to complete the process.

 

With the Platform configuration changed as per our requirements, we can now move on to configuring the add-on services such as security, notifications, network, database, and much more! For example, let's quickly configure the networking for our WordPress Dev environment by selecting the Modify option in the Network pane.

 

In the Network pane, you can opt to launch your environment in a custom VPC, as well as other instance-specific settings such as enabling Public IP address, selecting the Instance subnets based on your VPC design, and finally assigning Instance security groups for your Dev instances.

 

In this case, I already have a custom VPC created specifically for the development environment that contains one public subnet and one private subnet, with a default security group as well. Here is an overview of the network configuration setup for my environment. You can tweak this to match your requirements:

 

Once the settings are made, click on Save to complete the networking changes. You can perform other configurational changes as you see fit, however, since this is only a development environment, I've opted to leave the rest of the options as default for now. Once completed, select the Create environment option to finish the environment creation process.

 

Once the environment creation process is initiated, it will take a couple of minutes to complete its execution, as depicted in the following screenshot. Here, you will see Elastic Beanstalk create a new security group as well as an Elastic IP address for your EC2 dev instance. During this stage, the environment also transitions from a Pending to an Ok state, and you can view the environment, your application's logs, and the status:

 

With your environment up and running, you can also verify it using the URL provided as an output of your environment's creation by using the environment dashboard. Upon selecting the URL, you will be redirected to a new application landing page in your web browser which basically verifies that your environment is configured to work with PHP 5.6.

 

But where is our WordPress application? That's exactly what we will be deploying next using a really simple and easy-to-use Elastic Beanstalk CLI.

 

Working with the Elastic Beanstalk CLI

With your environment deployed using the AWS Management Console, we now shift our focus to leveraging the Elastic Beanstalk CLI, or EB CLI, to push the application code over to the newly created environment.

 

The EB CLI is a powerful utility that can be used to operate and manage your entire Elastic Beanstalk environment using a few simple CLI commands. It is also designed to work with AWS development services such as CodeBuild and CodeCommit, as well as other third-party code repository services such as Git.

 

In this section, we will first be looking at a few simple steps for installing the EB CLI on a simple Linux instance, later followed by configuring and pushing our WordPress application to its respective development environment:

 

To do so, we first need to ensure that the instance is updated with the latest set of packages. In my case, I'm performing the steps on a simple Ubuntu 14.04 LTS instance, however, you can alternatively use your own on-premises virtual machines.

 

Run the following command to update your OS. The command will vary based on your operating system variant:

sudo apt-get update

 

Next, we need to ensure that the instance has the required Python packages installed. Note that if you are using the Amazon Linux AMI, the necessary Python packages are already installed by default:

sudo apt-get install python python-pip

AWS CLI and the EB CLI require Python 2 version 2.6.5+ or Python 3 version 3.3+.

 

With the Python packages installed, we now move forward and set up the AWS CLI using the following commands:

pip install awscli

aws configure

 

The first command installs the CLI, while the other runs you through a simple wizard to set up the AWS CLI for your instance. To learn more about how to configure the AWS CLI, you can check out https://docs.aws.amazon.com/cli/latest/userguide/installing.html.

 

Finally, we go ahead and install the EB CLI. The installation is pretty straightforward and simple: 

pip install awsebcli

 

That's all there is to it! You now have a functioning Elastic Beanstalk CLI installed and ready for use. So let's now go ahead and download the required WordPress code locally and use the EB CLI to push it into the development environment:

sudo git clone https://github.com/WordPress/WordPress.git

 

With the WordPress code cloned into its own folder, run the following command from within the WordPress directory:

eb init

 

The eb init command is used to initialize and sync the EB CLI with your newly created development environment. Follow the on-screen instructions to configure the EB CLI's settings, such as Selecting a default region to operate from, Selecting an application to use, and so on. Remember, the default region has to match your current development environment's region as well, which in my case is us-east-1:

 

With the EB CLI set up, the only step left now is to deploy the WordPress application to the development environment using yet another EB CLI command called simply eb deploy:

eb deploy

 

During the deployment process, the CLI creates an application version archive in a new S3 bucket within your environment. Each application deployment will result in subsequent version creations within S3 itself. After this, you will see your application code get uploaded to your development environment, as depicted in the following screenshot:

 

The environment simultaneously changes its state from Ok to Pending as the application is uploaded and set up in your development instance. Once the application becomes available, the health state of the environment transitions from Pending back to Ok.
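
While the environment is updating, you can watch the transition from the same terminal instead of switching back to the console. These EB CLI commands are handy here:

# Show the environment's current status, health, and running version.
eb status

# Show a live view of instance and request health (most useful when
# enhanced health reporting is enabled for the environment).
eb health --refresh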

 

You can verify whether your application has uploaded or not by refreshing the application URL (http://yoyoclouds.us-east-1.elasticbeanstalk.com) on your environment's dashboard. You should see the WordPress welcome screen, as shown in the following screenshot:

 

Note, however, that this setup still requires a MySQL database, so don't forget to go to the RDS Management Console and create a minimalistic MySQL database, or even better, an Aurora DB instance, for your development environment. Remember to note down the database username, password, and the DB host and database name itself; you will need these during your WordPress configuration!

 

With this step completed, let's take a few minutes to understand the various options for configuring and monitoring your newly deployed application using the environment dashboard!

 

Understanding the environment dashboard

The environment dashboard is your one-stop shop for managing and monitoring your newly deployed applications, as well as the inherited instances. In this section, we will quickly look at each of the sections present in the environment dashboard and how you can leverage them for your applications.

 

To start off with, the Dashboard view itself provides you with some high-level information and event logs depicting the current status of your environment. To learn more about the recent batch of events, you can opt to select the Show All option in the Recent Events section, or alternatively select the Events option from the navigation pane.

 

The Dashboard also allows you to upload a newer version of your application by selecting the Upload and Deploy option, as shown in the following screenshot. Here, you can see a Running Version of your WordPress application as well. This is the same application that we just deployed using the EB CLI.

 

You can also control various aspects of your environment, such as Save Configuration, Clone Environment, and Terminate Environment, as well using the Actions tab provided in the right-hand corner of the environment dashboard:

 

Moving on from the Dashboard, the next tab in the navigation pane that is worth checking out is the Configuration section. Let's look at each of the configuration options in a bit more detail, starting off with the Scaling tile:

 

Scaling: Here, you can opt to change your Environment Type from a Single instance deployment to a Load balancing, auto-scaling enabled environment simply by selecting the correct option from the Environment Type drop-down list. You can even enable Time-based scaling for your instances by opting for the Add scheduled action option.

Instances: In the next tile, you can configure your instance-specific details for your environment, such as the Instance type, the EC2 key pair to be used for enabling SSH to your instances, the Instance profile, and other options as well, such as the root volume type and its desired size.

Notifications: Here, you can specify a particular Email address, using which, notifications pertaining to your environment, such as its events, are sent using Amazon SNS.

 

Software configuration: This section allows you to configure some key parameters for your application, such as the application's Document root, the Memory limit for running your PHP environment, and the logging options. But the thing that I really love about the software configuration is the Environment properties section.

 

With this, you can pass secrets, endpoints, debug settings, and other information to your application without even having to SSH into your instances, which is simply amazing! We will be learning a bit about environment properties and how you can create simple environment variables and pass them to your WordPress application a bit later in this blog.

 

Health: One of the most important configuration items in your environment, the Health section allows you to configure the Health Check URL for your application, as well as to enable detailed health reporting for your environment using a special agent installed on your systems.

 

This agent monitors the vitals of your EC2 instance, captures application-level health metrics, and sends them directly to Beanstalk for further analysis. This, in conjunction with the Application Logs, helps you to drill down into issues and mitigate them all using the Elastic Beanstalk Console itself.

 

NOTE: You can find the agent's logs in your instance's /var/log/healthd/daemon.log file.

 

Apart from the Configuration tab, Elastic Beanstalk also provides you with a Logs option, where you can request either the complete set of logs or the last 100 lines. You can download each instance's log files using this particular section as well:

 

And last but not least, you can also leverage the Monitoring and Alarms sections to view the overall Environment Health, as well as other important metrics, such as CPU Utilization, Max Network In, and Max Network Out. To configure the alarms for individual graphs, all you need to do is select the alarm icon adjoining each of the graphs present in the Monitoring dashboard, as shown in the following screenshot:

 

A corresponding Add Alarm widget will pop up, using which you can configure the alarm's essentials, such as its Name, the Period, and Threshold settings, as well as the required Notification settings.
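
If you prefer to script such alarms rather than click through the widget, roughly the same thing can be done with the AWS CLI. The instance ID, threshold, and SNS topic ARN below are placeholders:

# Alarm when average CPU on the environment's instance stays above 70%
# for two consecutive five-minute periods; the notification goes to an SNS topic.
aws cloudwatch put-metric-alarm \
    --alarm-name wordpress-prod-high-cpu \
    --namespace AWS/EC2 \
    --metric-name CPUUtilization \
    --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
    --statistic Average \
    --period 300 \
    --evaluation-periods 2 \
    --threshold 70 \
    --comparison-operator GreaterThanThreshold \
    --alarm-actions arn:aws:sns:us-east-1:123456789012:my-alerts-topic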

 

In this way, you can use the environment dashboard and the EB CLI together to perform daily application administration and monitoring tasks. In the next section, we will be leveraging this environment dashboard to clone and create a new production environment from the existing development environment.

 

Cloning environments

With the development environment all set and working, it is now time to go ahead and create a production environment. Now, technically, you could repeat all the processes that we followed earlier for the development environment creation, and that would work out well indeed, but Elastic Beanstalk offers a really simple and minimalistic approach to creating new environments while using an existing one as a template. The process is called cloning, and it can be performed in a few simple clicks, using the environment dashboard itself:

 

To get started, simply select the Actions tab from the environment dashboard page and select the option Clone Environment. This will bring up the New Environment page, as shown here:

 

Here, start off by providing an Environment name for the new environment, followed by a unique prefix for the Environment URL. Remember, this is a clone from the earlier development environment that we created, so, by default, it will contain the same Amazon Linux instance with the WordPress application that we pushed in during the Dev stages. This is not a concern as we can always use the EB CLI to push the production version of the application as well. But for now, fill in the rest of the details and select the Clone option.

 

The new environment undergoes the same initialization and creation process as it did earlier, creating separate security groups, assigning a new Elastic IP, and launching a new EC2 instance with the same application version that was pushed in the development environment.

 

Once completed, you should now have two very similar environments up and running side by side, but isn't a production environment supposed to be more than just one instance? Well, that's precisely what we will be configuring in the next section.

 

Configuring the production environment

Now that we have had a good tour of the environment dashboard, it should be relatively easy to configure the production environment as per our requirements. Let's start off by increasing the instance count for our production environment:

 

Select the Scaling configuration tile from the newly created production environment's configuration dashboard and change the Environment type from Single instance to Load balancing, auto-scaling. The instance count settings, as well as the auto-scaling features, will only be available once the new changes are reflected in the environment. Click Apply once done.

 

To verify that the changes have indeed been propagated, you can copy the newly created Elastic Load Balancer DNS name into a web browser and verify that you can access the WordPress getting started wizard.

 

Next, you can also change the default instance type from t2.micro to something a bit more powerful, such as t2.medium or t2.large, using the Instances configuration section.

 

Once your major settings are done, you will also require a new RDS backed MySQL database for your production instances. So go ahead and create a new MySQL DB instance using the RDS Management Console at https://console.aws.amazon.com/rds/.

 

For handling production-grade workloads, I would strongly recommend enabling multi-AZ deployment for your MySQL database.

 

Remember to make a note of the database name, the database endpoint, as well as the username and password, before moving on to the next steps!

 

Next, using the production environment URL, launch your WordPress site and fill in the required database configuration details, as depicted in the following screenshot:

 

This method of configuring database settings is not ideal, especially when it comes to a production environment. Alternatively, Elastic Beanstalk provides you with the concept of environment properties that enable you to pass key-value pairs of configurations directly to your application.

 

To do so, you need to select the Configuration section from your Production dashboard, and within that, opt to modify the Software configuration.

 

Here, under the Environment Properties section, fill out the required production database variables, as depicted in the following screenshot:

But where do these variables actually end up getting configured? That's where we leverage the WordPress configuration file called wp-config.php and configure all these variables into it. Upon loading, PHP will read the values of each of these properties from the environment properties that we just set in Elastic Beanstalk.

 

Open your wp-config.php file using your favorite text editor, and change the database section, as shown in the following snippet:
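
The exact property names depend on what you entered in the Environment Properties section earlier. Assuming keys named RDS_DB_NAME, RDS_USERNAME, RDS_PASSWORD, and RDS_HOSTNAME, the database section might look roughly like this:

// Read the database settings from the Elastic Beanstalk environment
// properties instead of hard-coding them. The property names here are
// assumptions; match them to the keys you defined in the console.
define('DB_NAME',     getenv('RDS_DB_NAME'));
define('DB_USER',     getenv('RDS_USERNAME'));
define('DB_PASSWORD', getenv('RDS_PASSWORD'));
define('DB_HOST',     getenv('RDS_HOSTNAME'));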

Save the file and push the newly modified code into the production environment using the eb deploy command. Simple, isn't it?

 

Here's what the new environment should look like after the deployments:

 

Looking good so far, right? With this done, your WordPress setup should be able to scale in and out efficiently without you having to worry about the load balancing needs or even about the MySQL instances.

 

Additionally, now that we have configured the instances to fetch the database information from Elastic Beanstalk itself, we no longer have to worry about what will happen to our site if the underlying WordPress instances restart or terminate. This is exactly what we set out to do in the first place, but there's still a small catch.

 

What about the content files that you will eventually upload on your WordPress websites, such as images and videos? These uploads will end up getting stored on your instance's local disks, and that's a potential issue as you may end up losing all of your data if that instance gets terminated by the auto-scaling policies. Luckily for us, AWS has a solution to this problem, and that's exactly what we are going to learn about next.

 

Introducing Amazon Elastic File System

AWS, for one, has really put in a lot of innovation and effort to come up with some really awesome services, and one such service that I personally feel has tremendous potential is the Elastic File System. Why is it so important? Well, to answer this question, we need to take a small step back and understand what type of storage services AWS offers at the moment.

 

First up, we have the object stores in the form of Amazon S3 and Amazon Glacier. Although virtually infinite in scaling capacity, both these services are known to be a tad slower performance-wise compared to the EC2 instance storage and the EBS. This is bound to happen, as the likes of EBS is specially designed to provide fast and durable block storage, but, as a trade-off, you cannot extend an EBS volume across multiple Availability Zones.

 

Elastic File System or EFS, on the other hand, provides a mix of both worlds by giving you the performance of an EBS volume combined with the availability of the same volume across multiple AZs, and that is really awesome! To summarize, EFS is a massively scalable file storage system that allows you to mount multiple EC2 instances to it simultaneously across AZs, without having to worry about the durability, availability, or performance of the system.

 

How does EFS actually work, you ask? Well, that's exactly what we will learn about in the next section.

 

How does it work?

EFS works in a very simple and minimalistic way, so as to reduce the number of configurations that you need to perform and manage as an end user. To start off, EFS provides you with the ability to create one or more filesystems. Each filesystem can be mounted to an instance or instances, and data can be read as well as written to them.

 

Mounting the filesystem requires your instances to support the Network File System version 4.0 and 4.1 (NFSv4) protocol. Most Linux operating systems come with the necessary support; however, you may have to install an NFS client on machines that don't already have one before you can connect to EFS.
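
For example, on an Amazon Linux or other yum-based instance, mounting a filesystem by hand might look like the following sketch. The filesystem DNS name is a placeholder; use the one EFS shows you after creation:

# Install the NFS client if it isn't already present.
sudo yum install -y nfs-utils

# Create a mount point and mount the filesystem over NFSv4.1.
sudo mkdir -p /mnt/efs
sudo mount -t nfs4 -o nfsvers=4.1 fs-12345678.efs.us-east-1.amazonaws.com:/ /mnt/efs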

 

So, how is this useful for our WordPress application? Well, for starters, once you have an Amazon Elastic File System in place, you can have multiple EC2 instances connect to it simultaneously and use it as a scalable shared drive that can extend even to petabytes if the need arises.

 

Also, the Amazon Elastic File System does not have any downtime or repercussions if your EC2 instances reboot or even terminate; the data will persist on the filesystem until you manually delete it or terminate the filesystem itself. There are some rules and limitations, however, when it comes to using the Elastic File System, which you ought to keep in mind.

 

You can mount an Amazon EFS on instances in only one VPC at a time, and both the filesystem and the VPC must be in the same AWS region.

Once the filesystem is created, you will be provided with a DNS name for identifying it within your region. Additionally, you will also be required to create one or more supporting mount targets within your VPC, which basically acts as a connectivity medium between your instances present within a subnet and the filesystem. Here is a representational diagram of how an Elastic File System interacts with EC2 instances using mount targets:

 

As an administrator, you can create one mount target in each Availability Zone present in a given region. You can also create a mount target in each of the subnets present within a particular VPC, so that all EC2 instances in that VPC share that mount target. In the next section, we will be exploring a few simple steps required for setting up your own Elastic File System.

 

Creating an Elastic File System

Setting up your own Elastic File System is as easy as it gets! You can start off by launching the Elastic File System dashboard from the AWS Management Console, or alternatively, by visiting the URL https://console.aws.amazon.com/efs/:

 

On the EFS landing page, select the option Create file system to get started.

 

In the Configure file system access page, you can start off by first selecting the VPC you want to associate the filesystem with. Remember, you can have multiple filesystems per VPC, however, they cannot be extended across regions:

 

With the VPC selected, the associated subnets will automatically populate themselves based on the Availability Zones that they are a part of in the Create mount targets section. Here, you can select the appropriate subnets that you wish to associate with the Elastic File System, along with their corresponding security groups.

 

In my case, I've selected the individual public subnets from my VPC, as the WordPress application instances will be deployed here, and these instances will require access to the filesystem for storing the images and other content.

With the fields populated, select the Next Step option to proceed.

 

The next step is all about Configuring optional settings for your Elastic File System. Here, you can Add tags to describe your filesystem and select the appropriate Performance mode for the filesystem, based on your requirements.

 

Today, EFS provides two performance modes: General Purpose, which is ideal for running the majority of workloads, and Max I/O, which is specifically designed for environments that need to scale to tens of thousands of EC2 instances all connecting to a single filesystem.

 

Max I/O mode provides much higher aggregate throughput than General Purpose; however, file operations may incur slightly higher latency.

 

The final option left is Enable encryption, which, if checked, will leverage a KMS key from your existing AWS account and encrypt all the data stored in the filesystem at rest:

 

Complete the EFS setup process by reviewing the configuration changes on the Review and create page, and finally, click on Create to create the filesystem. This process takes a couple of minutes, but once completed, you will be shown the DNS name of your newly created filesystem. Make a note of this name, as you will need to reference it in your Elastic Beanstalk environment as well.

 

So far, so good! We have our production environment up and running on Elastic Beanstalk, and now we have created a simple yet powerful Elastic File System. In the next section, we will look at how you can integrate the two services for use by WordPress using Elastic Beanstalk's configuration files concept.

 

Extending EFS to Elastic Beanstalk

Although Elastic Beanstalk takes complete care of your environment's provisioning and configuration, there are still ways for you to control the advanced configuration of your environment, such as integrating your application with other AWS services like ElastiCache, or even EFS for that matter.

 

This can be performed using a variety of mechanisms provided by Beanstalk itself; for example, by leveraging Beanstalk's Saved configurations, or even by using environment manifest (env.yaml) files. But in this particular section, we will be concentrating on integrating the EFS service with our WordPress application using specialized configuration files called .ebextensions.

 

These .ebextensions are simple YAML formatted documents ending with a .config file extension. Once the .ebextensions file is created, you need to place this in the root folder of your application's source code, within a special directory named .ebextensions, and finally, deploy your application over to Beanstalk.

 

These configuration files are so powerful that you don't even have to connect to your instances through SSH to issue configuration commands. You can configure your environment entirely from your project source by using .ebextensions:

 

To start using .ebextensions for your WordPress setup, first we need to create a folder named .ebextensions within the root of your WordPress application. Type the following command from your Dev instance:

cd WordPress && sudo mkdir .ebextensions

 

Create a new file with an extension of .config, and paste the following contents into it:
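A minimal sketch of what such a file can look like is shown here (saved, for example, as .ebextensions/storage-efs.config); the file name, the mount options, and the reliance on an EFS_NAME environment property that we set later in this section are illustrative assumptions, so adapt them to your own setup:

  packages:
    yum:
      nfs-utils: []    # NFS client needed to talk to EFS

  container_commands:
    01_mount_efs:
      # EFS_NAME is the environment property holding the filesystem's DNS name
      command: |
        mkdir -p /mnt/efs
        mountpoint -q /mnt/efs || mount -t nfs4 -o nfsvers=4.1 "${EFS_NAME}:/" /mnt/efs
    02_link_uploads:
      # replace wp-content/uploads with a symlink onto the shared filesystem
      command: |
        mkdir -p /mnt/efs/uploads
        rm -rf wp-content/uploads
        ln -s /mnt/efs/uploads wp-content/uploads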

 

This file causes Elastic Beanstalk to mount the newly created EFS volume on the instance's /mnt/efs directory, and also removes the wp-content/uploads directory if it exists, replacing it with a symlink to /mnt/efs/uploads so that uploaded content persists and is shared between instances.

 

Once the file is created, use the eb deploy command once again to push the application directory and the newly added .ebextensions directory to your production environment.
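From the application's root directory on the Dev instance, that is simply:

  # bundles the current source, including the .ebextensions directory, and deploys it
  eb deploy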

 

Last but not least, sign in to your production environment and select the Configuration option from the environment dashboard. Here, select the Software configuration tile and add the following key-value pair into the Environment Properties section, as shown:

 

Here, the EFS_NAME key must have the newly created EFS filesystem's DNS name as its value. This is the same DNS name that we copied a while back, once the EFS was created.
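If you prefer the EB CLI over the console, the same property can be set from the application directory with eb setenv (the DNS name shown here is a placeholder for your own filesystem's name):

  # setting the environment property triggers an environment update
  eb setenv EFS_NAME=fs-12345678.efs.us-east-1.amazonaws.com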

 

Once the deployment completes and the changes are applied, select the environment URL and verify that the WordPress configuration is working as intended. If you have made it this far, then you should have a really awesome, highly available, and scalable WordPress site up and running! Awesome, isn't it?

 

Planning your next steps

Well, we have covered a lot of new features and services in this blog, however, there are still a few things that I would recommend you try out on your own. First up is Elastic Beanstalk's advanced configurations.

 

As mentioned earlier, Beanstalk provides a lot of different ways for you to customize and extend your application with other AWS services, using a variety of built-in mechanisms such as .ebextensions, which we covered in the previous section. One similar mechanism that can be used to control a Beanstalk environment's configuration is the environment manifest file.

 

This is a simple YAML file containing your environment's manifest configurations, such as the environment name, solution stack, and environment links to use when creating your environment. The file is placed in your application's root directory and is generally named env.yaml. One of the key uses of this file is to provide support for environment links that enable you to connect two application environments using simple names as references.

 

For example, if you have a website as a frontend application that accepts certain inputs from users, and another application that processes these inputs, you can create a link between the worker and the frontend application using this env.yaml file. On invocation, the link between these two environments is set up and managed automatically by Beanstalk.

 

Here's a small snippet of the env.yaml file's contents:

  AWSConfigurationTemplateVersion: 1.1.0.0
  SolutionStack: 64bit Amazon Linux 2015.09
  EnvironmentName: frontend-environment
  EnvironmentLinks:
    "WORKERQUEUE": "worker-environment"

 

You can learn more about the environment manifest (env.yaml) at http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/environment-cfg-manifest.html.

 

Alternatively, Beanstalk also provides you with an easier configuration saving mechanism, which you can invoke using either the environment dashboard or the EB CLI. This is called a Saved Configuration and can be enabled by selecting the Save configuration option under the Actions tab in your environment dashboard.

 

Once saved, your environment's configuration, including any custom settings, is stored as an object in an S3 bucket. You can even download this configuration object and use it to create clones of your environment using the EB CLI. To learn more about saved configurations, check out this URL: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/environment-configuration-savedconfig.html.
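As a rough sketch of the EB CLI side of this workflow (the environment and configuration names below are placeholders):

  # save the running environment's settings as a named configuration (stored in S3)
  eb config save wordpress-prod --cfg prod-baseline
  # upload a locally edited copy of that saved configuration
  eb config put prod-baseline
  # spin up a clone of the environment from the saved configuration
  eb create wordpress-clone --cfg prod-baseline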

 

Another very interesting thing worth exploring is the support for Docker containers provided in Elastic Beanstalk! As you may already be aware, Docker containers are the next big thing when it comes to building microservices-backed applications that can be deployed and scaled rapidly. The Docker platform for Elastic Beanstalk has two generic configurations, a single-container and a multi-container option, and also provides several preconfigured container images to choose from.

 

From an Elastic File System perspective, one key topic worth reading up on is the filesystem's overall performance considerations. The documentation highlights the performance characteristics and use cases of EFS compared to EBS Provisioned IOPS volumes, and provides some keen insights into how to maximize the filesystem's performance. You can check out the documentation at http://docs.aws.amazon.com/efs/latest/ug/performance.html.

 

Introducing AWS Web Application Firewall

Security has always been, and always will be, a key concern for a lot of organizations that run their workloads and applications on the cloud. That is precisely why AWS offers a wide assortment of managed services that you, as a cloud administrator, should leverage in order to protect and safeguard your workloads from any compromises or threats. In this section, we are going to explore one such simple, yet really powerful, service, called AWS WAF, or Web Application Firewall.

 

AWS WAF is basically a firewall that helps you protect your internet-facing applications from common web-based threats and exploits. It enables you to specify a set of web security rules, or ACLs, that allow or restrict particular types of web traffic across Amazon CloudFront as well as the Application Load Balancer (ALB).

 

As of now, WAF can be used to create customized rules that safeguard your applications against attacks such as SQL injection, cross-site scripting, Distributed Denial of Service (DDoS), bad bots, scrapers, and much more! You can easily create new rules and attach them to your existing ACLs as per your requirements, enabling you to respond to and mitigate changing traffic patterns more rapidly.

 

WAF also comes equipped with a powerful API that you can use to automate the deployment of ACL rules and manage them programmatically. Alternatively, for the UI people out there, WAF provides customizable CloudFormation templates that let you stand up a complete WAF-based security solution in a matter of minutes! We will be looking at how to deploy this template to secure our own WordPress application a bit later in this blog.
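As a small taste of that API, here is a hedged AWS CLI sketch that creates an IP match condition programmatically; the classic WAF API is driven by change tokens, and the set name, IP set ID, and address below are placeholders:

  # every write call needs a fresh change token
  TOKEN=$(aws waf get-change-token --query ChangeToken --output text)
  aws waf create-ip-set --name "OfficeAllowList" --change-token "$TOKEN"

  # add an address to the IP set (use the IPSetId returned by the previous call)
  TOKEN=$(aws waf get-change-token --query ChangeToken --output text)
  aws waf update-ip-set \
      --ip-set-id "example-ip-set-id" \
      --change-token "$TOKEN" \
      --updates 'Action=INSERT,IPSetDescriptor={Type=IPV4,Value=203.0.113.10/32}'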

 

WAF is priced based on the number of ACL rules which you deploy, as well as on the number of web requests that your application receives.

 

Here is a quick summary of the benefits that you can obtain by leveraging AWS WAF:

Enhanced protection: Apart from your standard VPC and security groups, you can additionally safeguard your applications against commonly occurring web attacks by leveraging WAF's ACL rules.

 

Advanced traffic filtering: Unlike simple NACLs or security groups, WAF provides you with the ability to define custom rules and conditions based on the characteristics of an incoming web request, such as the values present in its headers, the origin IP address of the request, whether the request contains any SQL code, and so on. Using these conditions, you can then allow, block, or count traffic that matches them.

 

Easy management: With WAF rules defined and managed in one central location, you can easily reuse and propagate your custom ACLs across multiple CloudFront CDNs as well as Application Load Balancers, and monitor the traffic as well as mitigate any issues, all using the same WAF API or web user interface.

 

Cost effective security solution: One of the best parts of leveraging WAF is that there are absolutely no upfront fees or costs associated with it. You simply pay based on the number of rules you create using WAF as well as the amount of traffic your web application receives, and not a penny more!

 

With this basic set of information, let's have a look at how WAF actually works!

 

Concepts and terminologies

As discussed briefly, WAF can be enabled on your standard ALBs and on your CloudFront distributions. But before we get started with configuring WAF and its various rules and ACLs, we first need to understand some of its commonly used terminology:

 

Conditions: Conditions form the core of your WAF rulesets. These are configurable characteristics that you want WAF to monitor in each of your incoming web requests. At the time of writing, WAF supports the following list of conditions:

 

IP match: You can use this condition to check whether an incoming web request originates from a specified blacklisted or whitelisted IP address. You can then define corresponding actions based on your requirements, such as not allowing any incoming traffic other than a whitelisted IP range, and so on. AWS WAF supports /8, /16, /24, and /32 CIDR blocks for IPv4 addresses.

 

String and regex match: A string match or a regex match condition can be used to specify a part of an incoming web request and its corresponding text that you wish to control access to. For example, you can create a match or regex condition that checks the user agent headers and its value against a preset string or expression. If the condition matches, you can opt to either allow or block that particular traffic using WAF rules.

 

SQL injection match: You can use this condition to inspect certain parts of your incoming web requests, such as the URI or query string, for any malicious SQL code. If a pattern matches, you can then opt to block all traffic originating from that particular request's IP range.

 

Cross-site scripting match: Hackers and exploiters can often embed malicious scripts within web requests that can potentially harm your application. You can leverage the cross-site scripting match condition to inspect your incoming request URI or headers for any such scripts or code, and then opt to block the same using WAF rules.

 

Geographic match: You can use this condition to list countries that your web request originated from and accordingly block or allow the same based on your requirements.

 

Size constraint match: You can use the size constraint match condition to check the lengths of specified parts of your incoming web requests, such as the query string or the URI. For example, you can create a simple WAF rule to block all requests which have a query string greater than 100 bytes, and so on.

 

Rules: With your conditions defined, the next important aspect of configuring WAF is rules. Rules allow you to combine one or more conditions into a logical statement, which can then be used to either allow, block, or count a particular incoming request. Rules are further classified into two categories:

 

Regular rules: Regular, or standard, rules apply one or more conditions to the most recent batch of incoming web requests; for example, a rule to block all incoming traffic from the IP range 40.40.5.0/24, or all requests with SQL-like code in the query string, and so on.

 

Rate-based rules: Rate-based rules are very much like regular rules, apart from one addition: the rate limit. You configure conditions and pass a rate limit along with them. The rule only triggers when the requests matching those conditions exceed the configured rate limit.

 

Rate limits are checked by WAF within a 5-minute window period.

 

For example, you may configure a simple condition that blocks all incoming traffic from the IP range 40.40.5.0/24 with a rate limit of 10,000. In this case, the rule will only trigger its action (allow, block, or count) if the condition is met and the number of matching requests in a 5-minute period exceeds 10,000. Requests that do not meet the condition are simply not counted towards the rate limit and hence will not be blocked by this rule.
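For reference, a rate-based rule along those lines can also be created from the CLI; a hedged sketch follows (the names are placeholders, and the IP match condition would still be attached to the rule afterwards with update-rate-based-rule):

  TOKEN=$(aws waf get-change-token --query ChangeToken --output text)
  aws waf create-rate-based-rule \
      --name "ThrottleNoisyRange" \
      --metric-name ThrottleNoisyRange \
      --rate-key IP \
      --rate-limit 10000 \
      --change-token "$TOKEN"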

 

Web ACLs: Once the rules are defined, you combine them into one or more web ACLs. Here, you define an action (allow, block, or count) for each rule that gets triggered, as well as a default action that is applied when a request doesn't match any of the conditions or rules specified.

 

Web ACLs work on a priority basis, so the rule listed first is the one that gets compared to the incoming request first. This makes it extremely important to know the order in which you create and assign your rules in a web ACL.

 

Here is a simple representation of how conditions, rules, and web ACLs work together in WAF:

With the concepts out of the way, let's look at a few simple steps that allow you to set up and configure WAF Web ACLs for safeguarding your web applications.

 

Getting started with WAF

In this section, we are going to look at a few simple and easy-to-follow steps for getting started with AWS WAF. For demonstration purposes, we will be leveraging the same environments and application that we deployed in the previous blog, so if you haven't gone through that use case, this might be a good time for a quick revisit!

 

In the previous blog, we leveraged Elastic Beanstalk as well as Elastic File System services to deploy a scalable and highly available WordPress application over the internet. In this section, we will leverage the same setup and secure it even further by introducing AWS WAF into it. Why use WAF for our WordPress application?

 

Well, the simplest answer is to completely abstract the security checks from the underlying web server instance(s), and instead place the security checks at the point of entry of our application, as depicted in the following diagram:

 

To get started, you will first need to ensure that your WordPress application has a CloudFront CDN attached to it, or alternatively has an Application Load Balancer fronting its requests. This is a crucial step, as without a CloudFront CDN or an Application Load Balancer, WAF simply will not work!

 

In my case, I have configured and deployed a simple CloudFront CDN for my production-grade WordPress application. You can refer to the following guidelines for setting up your own CDN using CloudFront, at http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/.

 

Creating the web ACL

Once you are done with your CDN, head over to the AWS Management Console and search for the WAF & Shield service in the dashboard, or alternatively, navigate to the URL https://console.aws.amazon.com/waf/home to bring up the WAF dashboard:

 

Assuming that this is the first time you are configuring WAF, you will be prompted by a welcome screen to either opt for AWS WAF or AWS Shield services. Select the Go to AWS WAF option. This will redirect you to the WAF dashboard, where we select the Configure web ACL option to get started.

 

Selecting the Configure web ACL option will bring up a Set up a web access control list (web ACL) wizard that will guide you through your first web ACL setup. The first page on the wizard basically covers the concepts of conditions, rules, and ACLs, so simply select the Next option to proceed further.

 

In the Name web ACL page, provide a suitable Web ACL Name for your new ACL. You will notice that the CloudWatch metric name field gets correspondingly auto-populated with a matching name. You can change the name as per your requirements. This metric name will be later used to monitor our web ACLs using CloudWatch's dashboards.

 

Moving on, from the Region drop-down list, select either Global (CloudFront) or an alternative Region name, based on whether you want to secure a CDN or an Application Load Balancer. In my case, since I already have a CDN set up, I've opted for the Global (CloudFront) option.

 

WAF for the Application Load Balancer is currently supported only for the following regions: US East (N. Virginia), US West (N. California), US West (Oregon), EU (Ireland), and Asia Pacific (Tokyo).

 

In the AWS resource to associate field, you can opt to select your CloudFront distribution or your Application Load Balancer using the drop-down list; however, for the sake of simplicity, do not configure this option for the time being. Remember, you can always associate your web ACLs with one or more AWS resources after completing this wizard! Once done, click Next to proceed:

 

With the web ACL named, we move on to the next section where we can configure our conditions. On the Create conditions page, select an appropriate condition that you wish to configure for your web application. In this scenario, we will be configuring an IP match condition along with a string match condition.

 

The idea here is to grant access to our WordPress administrator login page (wp-login.php) only from my local laptop's IP; traffic from any other IP that tries to access the wp-login.php page should get dropped.

 

Creating the conditions

As mentioned earlier, conditions are configurable characteristics that you want WAF to monitor in each of your incoming web requests. To get started with a condition, select the Create condition option from the IP match conditions tile.

 

Here, provide a suitable Name for your match condition and select the IPv4 option from the IP Version. Provide your desktop's or laptop's public IP in the Address field. You can alternatively provide a range of IP addresses here using either of the supported CIDR blocks.

 

Remember to select the Add IP address or range option before creating the match condition:

 

With the IP match condition created, let's move on to creating the second condition for our ACL as well. For this, select the Create condition option from the String and regex match conditions section.

 

Once again, we start by providing a suitable Name for our string match condition, followed by selecting the Type of string to match with. Here, select the String match option to begin with.

 

Next, in the Part of the request to filter on section, select the part of your request that you wish to filter using the match condition. In my case, I have selected the URI option, as we need to match the resource wp-login.php in the URI. Alternatively, you can also opt to select one of the following values based on your requirements:

 

Header: Used to match a specific request header, such as user-agent.

 

HTTPMethod: Used to indicate the type of operation the request intends to perform on the origin, such as PUT, GET, DELETE, and so on.

 

QueryString: Used to define a query string in a URL.

 

Body: Used to match the body of the request. In this case, WAF only inspects the first 8,192 bytes (8 KB) contained within the request's body. You can alternatively set up a Size Constraint condition that blocks all requests that are greater than 8 KB in size.

 

Next, in the Match type drop-down list, select the option Contains. The Contains option means that the string to match can appear anywhere in the request. Alternatively, you can also opt to select from these options, based on your requirement:

  • Contains word: Used to match the specified value only when it appears as a whole word in the request
  • Exactly matches: Used to match the string and the request value exactly
  • Starts with: Used to check for a matching string at the beginning of a request
  • Ends with: Used to check for a matching string at the end of the request

 

The Transformation field is handy when you need to reformat the web request before WAF inspects it. This can involve converting to lowercase, HTML decoding, whitespace normalization, URL decoding, and so on. For this particular use case, we don't have any transformation to perform on the request, and hence I've selected the None option.

 

Finally, in the Value to match field, enter the text (wp-login) that we want WAF to search for in the web requests. Once completed, remember to click on the Add filter option before you proceed with the Create command.

 

With this step completed, our basic conditions are in place. Alternatively, you can set up other relevant conditions based on your criteria and requirements. Once done, select the Next option to proceed with the wizard.

 

Creating rules

With your conditions defined, we now move on to the next important aspect of configuring WAF: rules. Rules allow you to combine one or more conditions into a logical statement, which can then be used to either allow, block, or count a particular incoming request:

 

In the Create rules page, you can now merge the conditions we created a while back and assign each rule a corresponding action, such as allow, block, or count. To get started, select the Create rule option.

 

In the Create rule popup, we will be creating two rules: one rule that will basically allow me to access the WordPress admin login page (wp-login.php) from my local laptop, and another rule that blocks traffic to the same login page. Let's first create the Allow traffic rule.

 

To do so, type in a suitable Name for your rule. You will notice the corresponding CloudWatch metric name field auto-populate itself with the same name as well. You can choose to change this name as per your requirements, or leave it to its default value.

 

Next, in the Rule type drop-down list, select whether you want this rule to be a Regular rule or a Rate-based rule. Once done, move on to the Add conditions section, where we can associate our rule with one or more conditions. Start by selecting the appropriate drop-down options to form the following rule:

 

When a request: "Does": "Originate from an IP Address in": "

<SELECT_YOUR_IP_ADDRESS_MATCH_CONDITION_HERE>"

 

Here's what your new rule should look like once it is properly set up. Click on Create once completed:

With your Allow rule created, we use the same steps once again to create a Block rule as well. Select the Create rule option once again, and provide a suitable Name for your rule. Similar to the previous case, I've opted for a Regular rule here as well.

 

Next, in the Add conditions section, we first add a condition that matches the following statement:

 

When a request: "Does not": "Originate from an IP Address in": "

<SELECT_YOUR_IP_ADDRESS_MATCH_CONDITION_HERE>"

Next, select the Add condition option to add the string match condition as well:

When a request: "Does": "Match at least one of the filters in the

string match condition": "<SELECT_YOUR_STRING_MATCH_CONDITION_HERE>"

 

Here's what your rule should look like once both the conditions are added to it:

  • With the conditions in place, select the Create option to finally create your blocking rule.
  • Now that your two rules are created, you should see them both listed in the Add rules to a web ACL page.

 

Here, make sure you order your rules correctly, based on their precedence, by selecting the Order option as required. You can additionally configure the Default action for your web ACL as well. This default action will only get triggered if the request does not match any of the conditions mentioned in either the allow or the blocking rules.

 

Once you are confident with your configurations, select the Review and create option, as shown earlier. And voila! Your basic WAF is now up and running!

 

Assigning a WAF Web ACL to CloudFront distributions

With the web ACL created, you can now easily assign it to one or more CloudFront distributions, as per your requirements. To do so, simply log in to your AWS dashboard and filter the CloudFront service, or alternatively, navigate to https://console.aws.amazon.com/cloudfront/home to view the CloudFront dashboard directly:

 

  • Once logged into the CloudFront dashboard, select the appropriate Distribution ID for which you wish to enable the WAF Web ACL rules.
  • Select the Edit option from the General tab to bring up your distribution's configurations and settings.

 

  • Here, in the Edit Distribution page, select your newly created web ACL from the AWS WAF Web ACL drop-down list, as shown in the following screenshot:

 

  • Once the ACL is selected, I would also recommend that you enable logging for your distribution if you haven't already done so. This is just an added measure of precaution and security that is a must for any production-grade environment you may be working on.

 

  • Scroll down on the Edit Distribution page, and select the On option adjoining the Logging field. Provide your logging bucket's name in the Bucket for Logs field and click on the Yes, Edit option once the required fields are all filled in.

 

The changes will take a good few minutes to propagate through the CloudFront distribution. You can then move on to testing your WAF once the distribution's Status has changed to Enabled.

 

To test your WAF, simply open a browser and type in the URL of your WordPress application (<http://YOUR_CLOUDFRONT_URL>/wp-login.php) from your own laptop/desktop. In this case, you should be able to see the wp-login.php page without any issues whatsoever. However, if you try accessing the same page from a different laptop or machine, you will be thrown the following error on screen:
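A quick way to verify the same behaviour from the command line is with curl; the distribution domain below is a placeholder for your own CloudFront URL:

  # from the whitelisted IP this returns the login page (HTTP 200 or a redirect)
  curl -I https://d111111abcdef8.cloudfront.net/wp-login.php
  # from any other IP, WAF blocks the request and CloudFront returns HTTP 403 Forbidden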

 

At this point, your WordPress administrator login page is now protected from all IPs except those that you specified in your Web ACL's allow list! Amazing, isn't it?

 

You can create a custom error page using the CloudFront distribution settings and redirect your users to this page rather than showing them the standard error page, as depicted in the preceding screenshot.

 

With this, we come to the end of this basic web ACL configuration section. In the next section, we will be looking at how to enhance your basic ACL setup with more conditions, with greater emphasis on SQL injection and cross-site scripting.