How AWS EFS works (Best Tutorial 2019)




This tutorial explains how the AWS Elastic File System (EFS) works, with examples, and also explains how to configure EFS with Elasticsearch in 2019.


The Elastic File System (EFS) provides an option for storing information on Amazon Web Services (AWS). This tutorial also shows how to configure EFS with an EC2 setup. One of the services that EFS supports is Elasticsearch.


Elasticsearch is an open-source search and analytics engine. You use it to perform various sorts of analysis, such as examining your log files or performing real-time application monitoring.


When you combine EFS with Elasticsearch, you gain considerable functionality in turning your EC2 server into something that can help your organization control precisely how people use and interact with your server.


You can also begin avoiding those embarrassing slowdowns or complete application failures that cause people to go to any other site they can find.


Elastic File System (EFS) Features


You use EFS to perform tasks at the file-system level. Most business users are quite familiar with the file-system level because they use it to retrieve files when working with applications such as word processors.


A file system uses the filing cabinet metaphor, which means that individual files appear in folders, and folders appear in drawers (individual hard-drive partitions).


This particular metaphor has appeared as part of computer systems almost since the start of computing. The only difference with EFS, as the following sections detail, is that the filing cabinet now appears in the cloud rather than on a local or network hard drive.


Introducing EFS


Most operating systems today provide about the same paradigm for managing files. A drive contains a root directory. This root directory can contain as many folders as needed to organize the data that the drive contains. Subfolders contain content that becomes more specific as the hierarchy depth increases.


Each folder or subfolder can contain files that contain data. Depending on the characteristics of the file system, the files can contain simple, complex, fully populated, or sparse data. Some files can even contain multiple data streams. EFS replicates the most common features of file systems found on operating systems today.


It doesn’t replicate every feature, but it replicates enough features to make using the file system easy. An EFS drive looks like any other drive you use on any other operating system.


You can use EFS independently, in conjunction with other AWS services, or through a gateway (to make it appear as part of your local network). The most common service to connect to EFS is Elastic Compute Cloud (EC2), covered in another blog.


In fact, a major attraction of using EFS is that you can connect multiple EC2 instances to a single EFS drive. This means that all the EC2 instances share data and enable all of them to interact with it in meaningful ways.


The common storage uses for EFS are


  • Application data
  • Big data
  • Analytics
  • Media processing workflows
  • Content management
  • Web application data and content
  • User home directories


EFS emphasizes shared file storage, which implies access from multiple services or applications, possibly from multiple locations. You need to consider the construction of the service before you put it into use; that is, you should develop an understanding of the goals that engineers had in designing it.


Because EFS emphasizes shared access, it doesn’t necessarily provide speed or some types of flexibility. The goal is to ensure that services, applications, and devices of all sorts can gain access to specific kinds of data, rather than provide flexible configuration options. 


As with other Amazon storage options, EFS scales to meet your current storage demands. Amazon provides the same levels of redundancy and automatic backup for EFS that it does for the other storage options.


In addition, Amazon stresses that you pay only for the resources that you actually use to perform tasks. You don’t pay a minimal fee for storage or for setup costs. EFS also provides the same sort of permission system used by the other storage systems.


Understanding Network File System version 4 (NFSv4)


EFS relies on a standardized Internet Engineering Task Force (IETF) file system, NFSv4.


Compliance with a standard is important because most Network Attached Storage (NAS) and many operating system vendors also comply with NFSv4, enabling these data sources to interoperate seamlessly and appear to have all the data sources residing on the local machine (even when they’re miles apart).


NFS is a mature file system that has seen use for at least the last 20 years in its various incarnations.


Amazon doesn’t provide full NFSv4 support. The Amazon documentation lists the NFSv4 features that EFS doesn’t support, and some of the omissions are extremely important.


For example, EFS doesn’t support NFSv4 Access Control Lists (ACLs), Kerberos-based security, lock upgrades or downgrades, or deny share (which means that every time you share a file, you must share it with everyone).


These omissions will affect your security planning if you aren’t aware of them and take measures to overcome the potential security problems they present. How much these omissions affect security depends on the kinds of information you attempt to store on EFS, so you must look at your data needs carefully.


EFS doesn’t support certain file attributes. It shouldn’t surprise you that EFS lacks support for block devices because Elastic Block Storage (EBS) meets this need. Many of the attribute omissions are for optional features, however, so they may not present much of a problem unless you have specific needs.


Be sure to check the Amazon documentation carefully to ensure that omissions, such as namespace support, won’t cause problems for your particular applications.


Comparing EFS to S3, Standard – IA, and Glacier


When comparing EFS to S3, S3 Standard – Infrequent Access (Standard – IA), and Glacier, the first thing that comes to mind is money. EFS is the most expensive of the Amazon storage options, and S3 is the least expensive.


The difference that you pay between the various storage options is substantial, so you need to consider how you use storage in connection with your business carefully. Using all these services together often obtains the best value for your storage needs.


For example, if you really don’t plan to use some of your data for an extended time (with extended being defined by your business model), storing it on EFS will waste money. Using Glacier to store data that you use relatively often, however, will most definitely waste time.


Considering the trade-offs between the various storage options is the smart way to do things. The additional money you spend on EFS does purchase considerably added functionality.


The most important feature that comes with EFS when compared to the other options in this section is the capability to secure files individually. EFS provides full locking support, so you can use it for database management applications or other needs for which locking all or part of a file is essential.


Even though S3 and its associated storage options provide support for group and individual user permissions, EFS provides better security support overall.


Speed is also a consideration. S3 comes with HTTP overhead that robs it of some speed. However, the main speed difference comes from EFS’s use of Solid-State Drives (SSDs) that make access considerably faster.


From a visualization perspective, think of EFS as a kind of NAS, while S3 is most definitely a kind of Binary Large Object (BLOB) Internet storage. Amazon has also optimized S3 for write-once, read-many access, which means that writing incurs a speed penalty that you don’t see when working with EFS.


S3 does offer some functionality that’s not possible with EFS. For example, you can use S3 to offer files as a static website — something that you’d need to configure on EFS by hosting it on a web server.


The point is that EFS is more like the file system you’re used to using, and S3 is more akin to a special-purpose, BLOB-based database that provides direct web access at the cost of speed.


You can get past some of the S3 limitations by using an S3 File System (S3FS) solution such as s3fs-fuse or the S3FS Node Package Manager (S3FS NPM) module.


However, even though these alternatives overcome some of the interface issues with S3, they still can’t overcome the security and individual object size issues. (A single object can contain 5GB of data.) 
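A typical s3fs-fuse invocation looks like the following sketch; the bucket name and mount point are hypothetical placeholders, so the script only prints the command rather than executing it:

```shell
# Hypothetical values -- substitute your own bucket and mount point.
BUCKET="my-example-bucket"
MOUNT_POINT="/mnt/s3"

# s3fs reads AWS credentials from ~/.passwd-s3fs by default.
echo "s3fs $BUCKET $MOUNT_POINT -o use_cache=/tmp"
```

With a real bucket and credentials in place, running the printed command makes the bucket appear as an ordinary directory tree under the mount point.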


Comparing EFS to EBS


The first thing you notice when working with EBS is that it’s the only storage option without a console. You must configure EBS through an EC2 setup. That’s because EBS provides services to only a single EC2 instance. If you need multiple-instance support, EBS won’t work for you.


EFS is also designed to support organizations that require large distributed file systems of the type provided by Isilon and Gluster.


However, in addition to getting a large distributed file system, you can make this installation available across regions using EFS, which is something you can’t do with these off-the-shelf solutions.


What you get is a large distributed file system that provides region-specific support without your having to build this support yourself. Because EFS scales automatically, you don’t need to worry about the number of resources that each region requires to satisfy its needs.


EBS is restricted to a single region because of its single EC2 instance focus and the fact that it acts like a Storage Area Network (SAN), which provides dedicated network support.


Some applications require block storage devices, such as that provided by EBS, to avoid the rules that a file system imposes and gain a speed benefit. For example, Oracle Automatic Storage Management, or ASM, falls into this category.


EFS doesn’t answer the needs of a block storage device application; you need a product such as EBS in this case. Oracle ASM still has a file system, but the file system is built into the product, so having another file system would prove redundant and counterproductive because the two file systems would fight each other.


Working with EFS


As with the other services, EFS comes with its own management console that helps you create, delete, configure, and monitor the storage that you need. The process for creating and using an EFS drive is similar to the process used for a local hard drive.


But some of the automation found on your local system isn’t present with EFS. For example, you must mount the file system (a task that a local setup normally performs automatically for you).


As with most local drives, security on an EFS drive relies on both group and individual use access rights. Unlike your local drive, however, an EFS drive can automatically scale to accommodate the needs of your applications (so that you don’t run out of storage space).


The following sections describe how to work with EFS drives and ensure that the data they contain remains safe.


Starting the Elastic File System Management Console


Before you can perform any tasks with EFS, you need to access the Elastic File System Management Console. The following steps assume that you haven’t used EFS before and need to start a new setup from scratch. If you have already worked with EFS, you can skip to the next subsection, “Mounting the file system.”


1. Sign into AWS using your administrator account.

2. Navigate to the Elastic File System Management Console at https://

You see a Welcome page that contains interesting information about EFS and what it can do for you. However, you don’t see the actual console at this point.


3. Click Create File System.


You see the Create File System page. The first step in creating a file system is to decide how to access it. You can assign your EFS to one or more Virtual Private Clouds (VPCs).


4. Choose one or more VPCs with which to interact. If you have only one VPC, Amazon selects it for you automatically.

5. Scroll down to the next section of the page.

Amazon asks you to choose mount targets for your EFS setup. A mount target determines the locations that can mount (make accessible) the drive.


Using all the availability zones in a particular region won’t cost you additional money when you’re working with the free tier. However, using multiple regions will add a charge. Remember that you get to use only one region when working at the free tier.


6. Select the availability zones that you want to use and then click Next Step.

You see the Configure Optional Settings page. These settings help you configure your EFS setup to provide special support, but you don’t have to change the settings unless you want to. This page contains two optional settings:


Add Tags: Using tags allows you to add descriptive information to your setup. The descriptive information is useful when you plan to use more than one EFS setup with your network. Developers can also find the use of tags helpful when locating a particular setup to manipulate programmatically.


Choose Performance Mode: Changing the performance mode enables EFS to read files faster, but at the cost of higher latency (time to find the file) when using the Max I/O setting.


Amazon chooses the General Purpose setting by default, which means that transfer rates and file latency receive equal treatment when determining overall speed.


7. Perform any required optional setup and then click Next Step.


8. Verify the settings and then click Create File System.

The File Systems page appears with the new file system that you created. Even though the file system is accessible at this point, the drive may not be available immediately.


The Creating status indication in the Life Cycle Status column of the Mount Targets table on this page tells you that the process of configuring EFS is ongoing.
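If you prefer scripting to the console, the AWS CLI offers equivalent operations. The sketch below only prints the commands; the creation token and subnet ID are hypothetical placeholders you would replace with real values:

```shell
# Hypothetical placeholders -- substitute your own values.
TOKEN="my-efs-demo"
SUBNET="subnet-0123abcd"

# Equivalent of clicking Create File System, then adding a mount target.
printf 'aws efs create-file-system --creation-token %s\n' "$TOKEN"
printf 'aws efs create-mount-target --file-system-id <fs-id> --subnet-id %s\n' "$SUBNET"
```

The file system ID (`<fs-id>`) comes back in the response to the first command; you then pass it to the second to create a mount target in each availability zone you need.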


Creating additional file systems


You can create as many EFS setups as needed to interact with your EC2 instances and associated applications. To create another EFS setup, click Create File System and you see the initial configuration page.


Follow the steps in the preceding section, “Starting the Elastic File System Management Console,” to complete the new EFS configuration.


After you create a new EFS, you see multiple entries in the Elastic File System Management Console. Select the individual EFS setup that you want to work with when making changes.


Mounting the file system


Unlike many of the activities you perform in this blog, mounting an EFS file system onto the EC2 instance that you created requires that you use a Secure Shell (SSH) client. You have many options for gaining access through SSH.


This blog (and all others that require you to use SSH) depends on the easiest and safest method possible — using the EC2 Management Console. After you have a connection to the instance, you can type commands to mount the EFS file system. The following sections describe how to perform both tasks.


Connect to an EC2 instance using a browser


To use SSH, you must create a connection to your EC2 instance. The following steps help you connect to your EC2 instance:

1. Select Services ➪ Compute ➪ EC2. You see the EC2 Management Console.

2. Choose Instances in the Navigation pane.

The EC2 Management Console displays all the instances that you have configured for your setup. The blog relies on just one instance, so you use that instance to perform the tasks.


3. Select the box next to the instance that you want to use and click Connect. You see the Connect to Your Instance dialog box.


4. Select the A Java SSH Client Directly from My Browser option.


5. Type the location of the key file in the Private Key Path field and then click Launch SSH Client.

Amazon starts the MindTerm SSH terminal for you and logs you into your EC2 instance. If you find that you don’t connect to your instance, you may lack the proper Java support or access to your security key file.


Update the EC2 instance


Before making any major change to your EC2 instance, it pays to perform an update to ensure that you have the latest versions of any required products.


Performing this step each time you perform a major task may seem like too much work, but doing so will save time, effort, and troubleshooting in the long run. The following steps describe how to update your EC2 instance:


1. Type sudo yum update in the terminal window and press Enter.

You see messages telling you that EC2 is installing any needed updates. If no updates exist, you see a No Packages Marked for Update message.


The sudo (superuser do) command makes you a superuser, someone who has rights to do anything on the computer. The yum (Yellowdog Updater, Modified) command is the primary method for getting, installing, updating, deleting, and querying operating system features.


2. If EC2 performed any updates, type sudo reboot, and press Enter.

EC2 displays a message telling you that it has started the reboot, a process that stops and then restarts the system. You must close the terminal window.


3. Reconnect to the EC2 instance.

EC2 displays the normal terminal window. You generally don’t see any messages telling you about the success of the update.


Installing the NFS client


To work with EFS, you need the NFS client. The NFS client gives you access to specialized commands to perform tasks such as mounting a drive. You can optionally use the NFS client to interact with the EFS file system in other ways.


To install the NFS client, type sudo yum -y install nfs-utils and press Enter. (The yum -y command-line switch tells yum to assume that the answer to all yes/no questions is yes.)


If EC2 needs to install the NFS client, you see a series of installation messages. Otherwise, you see a message stating that the nfs-utils package is already installed and there is nothing to do.




Performing the mounting process


The EFS file system is ready for use and your EC2 instance has the correct software installed, but you still need to connect the two. It’s sort of like having a car and a tire to go with the car, but not having the tire on the car so that you can drive somewhere.


Mounting, the same process you’d use with that tire, is the step that comes next. The following steps show how to mount your EFS file system so that you can access it from an EC2 instance.


1. Type sudo mkdir efs and press Enter.

This step creates a directory named efs using the mkdir (make directory) command. You can use other directory names, but using efs makes sense for this first foray into working with EFS. The next step involves a complicated-looking command.


In fact, unless you’re a Linux expert, you may have an incredibly hard time trying to decipher it. Fortunately, Amazon hides a truly useful bit of information from view, and the next step tells how to find it.


2. Select the file system you want to use in the Elastic File System Management Console. Click the EC2 Mount Instructions link that appears in the File System Access section of the page.

You see the mount instructions. Pay particular attention to the highlighted text: it is the command you need to type to mount your EFS file system in the EC2 instance. Amazon customizes the content of this dialog box to match your system.


3. Highlight the command and paste it into your system’s clipboard.


4. Paste the command into MindTerm by clicking the middle mouse button (unless you have changed the default paste key to something else). The command follows this general pattern, with your own file system ID and region filled in:

sudo mount -t nfs4 -o nfsvers=4.1 $(curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone).&lt;file-system-id&gt;.efs.&lt;region&gt;.amazonaws.com:/ efs


5. Press Enter.

EC2 mounts the EFS file system. Of course, you have no way of knowing that the EFS file system is actually mounted.


6. Type cat /proc/mounts and press Enter.

You see a listing of all the mounted drives for your EC2 instance. Note that the EFS file system is the last entry in the list.

The entry tells you all about the EFS file system, such as the mounting point location at /home/ec2-user/efs.
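The long mount command that the console generates follows a predictable pattern: it resolves the instance's availability zone and prepends it to the file system's DNS name. A sketch of how the pieces fit together (the file system ID, region, and zone below are hypothetical placeholders):

```shell
# Hypothetical values -- the console fills in your real ones.
FS_ID="fs-12345678"
REGION="us-east-1"
AZ="us-east-1a"   # normally queried from the instance metadata service with curl

# Build the mount target host name and show the resulting mount command.
HOST="$AZ.$FS_ID.efs.$REGION.amazonaws.com"
echo "sudo mount -t nfs4 -o nfsvers=4.1 $HOST:/ efs"
```

Because the zone prefix is computed on the instance itself, the same pasted command works unchanged on every instance in the VPC, with each instance mounting through the target in its own zone.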


Listing EFS content


When you’re working with EFS, it pays to know where the content appears on the EC2 instance.

The example in this blog tells you to create a directory named efs. The full path to this directory according to the cat /proc/mounts command is /home/ec2-user/efs.


To work in that directory, you type cd efs and press Enter because you start in the /home/ec2-user/ directory. When you need to display your current directory (because it really is easy to get lost), type pwd and press Enter.


The pwd (print working directory) command outputs the full path to the current directory. To see the content of the EFS directory, you type ls and press Enter. The ls (list) command tells EC2 to list the content of a directory.


However, this command comes with a host of command-line switches that tell EC2 what to list and how to list it. You see the ls command used a number of additional times in this blog to determine what to do next when configuring an AWS service.
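These navigation commands behave the same in any Linux shell; you can try the sequence in a scratch directory before working on the real mount point:

```shell
# Use a scratch directory to stand in for /home/ec2-user/efs.
mkdir -p /tmp/efs-demo
cd /tmp/efs-demo

pwd     # prints the full path of the current directory
ls -a   # lists the contents, including the . and .. entries
```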


Configuring EFS security


The security you set as part of configuring your EFS file system determines who can access the drive from the outside. It doesn’t control the security of the file system itself — the security used to determine whether someone could read or write files.



To change the security of the EFS file system, you must use EC2 commands within SSH. Of course, the first problem is figuring out what rights (capability to perform tasks) you currently have. You can determine rights in several ways.


The easiest method is to use the ls command. Type ls -al and press Enter to see the long listing format of all the entries, including the hidden directory entries. The output might be puzzling unless you know the secret code for the ls command output. Moving from left to right, here is the meaning of all those letters:


d: This is a directory entry. The alternative is a file entry, which starts with a dash (-).
rwx: The owner has read (r), write (w), and execute (x) permissions.
r-x: The group that owns the entry has read and execute permissions, but not write permission.
r-x: Anyone else who accesses the directory has read and execute permissions as well.
2: This entry has two links to it. In this case, the current directory and the parent directory are the only two links to this entry. The minimum number of links for a directory is 2 because every directory has a current directory and a parent directory entry.
root: The owner’s name is root.
root: The owner’s group name is root.
4096: The entry consumes 4,096 bytes of disk space.

Jul 17 22:07: The entry was created at this date and time.

.: This is a special entry that defines the current directory. You can use a single period whenever you want to refer to the current directory in a command. Two periods (..) signify the parent directory.
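You can see the same fields on a real directory. This minimal sketch creates a directory with the rwxr-xr-x mode discussed above and extracts the permission string:

```shell
# Create a directory with owner rwx, group r-x, others r-x.
mkdir -p /tmp/perm-demo
chmod 755 /tmp/perm-demo

# ls -ld shows the directory entry itself; awk pulls the first field.
MODE=$(ls -ld /tmp/perm-demo | awk '{print $1}')
echo "$MODE"   # typically drwxr-xr-x: type, then owner, group, and other triplets
```

On some systems ls appends a trailing dot or plus sign to indicate a SELinux context or an ACL, so match on the prefix rather than the whole string when scripting against this output.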


Given that you have logged into EC2 as ec2-user in most cases, you have no rights to write any data to the efs directory. You can test this theory by using the touch command to change the timestamp of a file that doesn’t actually exist yet. (Touching the file creates an empty file entry.)


Type touch myfile.txt and press Enter. You see an error message telling you that the operating system denied permission to write the file.


You have several options for correcting permission errors. The easiest way to fix the problem in this case is to use the chmod (change mode) command to add the required right. In this case, you must give others the right to write data to the current directory, as defined by the single period (.).


The following steps lead you through this process:


1. Type sudo chmod o+w . and press Enter.

Don’t forget the period at the end of the command, or the command won’t work. EC2 doesn’t provide you with any output, so you need to validate the change you just made.

2. Type ls -al and press Enter. The output now shows drwxr-xrwx for the current directory (.).


3. Type touch myfile.txt and press Enter. No error message appears this time, but you don’t know for certain that you created the file, either.

4. Type ls -al and press Enter.

The current directory entry now appears with a green highlight, showing that you can perform tasks in it. Look carefully at the entries for the new file.


This is a file type, so the security entries begin with a dash. No one has execute rights because this is a text file. The entry has only one link to it because it’s a file, not a directory.


The ec2-user owns the file, which makes sense given that ec2-user created it. The file size is 0, which also makes sense because you haven’t given it any content. You really don’t need a superfluous file in your EFS file system, so deleting the test entry is important.


5. Type rm myfile.txt and press Enter.

EC2 removes the file using the rm command. You can verify the removal using the ls -al command.
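The whole permission experiment can be replayed safely in a scratch directory. A minimal sketch follows (note that if you run it as root, the denied write described above won't occur, because root bypasses mode bits):

```shell
# Scratch directory standing in for the efs mount point.
mkdir -p /tmp/chmod-demo
cd /tmp/chmod-demo

chmod o+w .          # give "others" write permission on the current directory
touch myfile.txt     # create an empty file (succeeds once write is allowed)
ls -al myfile.txt    # the entry starts with a dash and shows a size of 0
rm myfile.txt        # remove the superfluous test file
```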


Unmounting and removing the file system


At some point, you may find that you want to unmount (dismount) your EFS file system.

  • To perform this task, you type sudo umount efs and press Enter.
  • As with mounting the file system, efs reflects the name of the folder you created for the file system.
  • If you use a different name, you must replace efs with that name.
  • In most cases, you also want to remove the directory you created for EFS. To do so, type sudo rmdir efs and press Enter.


At this point, you can remove the file system if desired by selecting the file system in the Elastic File System Management Console. Choose Actions ➪ Delete File System. You see the Permanently Delete File System dialog box.


You must type the file system ID in the field immediately above the Delete File System button and then click Delete File System to make the change permanent.


Working with the Elasticsearch Service

The data your organization creates, manages, and monitors is the single most valuable resource that you own. Data defines your organization and helps you understand the direction your organization takes at various times during its evolution.


Consequently, you need applications, such as the Amazon Elasticsearch Service, that can help you find what you need with as little effort as possible. The following sections help you discover more about the Amazon Elasticsearch Service so that you can use it to make working with your data substantially easier.


Understanding the Elasticsearch Service functionality


Organizations of all sizes need help in managing the various kinds of data generated as part of application logging, application monitoring, user clicks in web applications, and the results of user text searches (among a growing number of data sources). Elasticsearch (the application, not the service) is an open-source product that helps you perform a number of tasks:


Searching: Locating specific pieces of information is hard, and the larger the drive, the harder the task becomes. Entire companies are devoted to the seemingly simple matter of helping you find a particular piece of information you need. You know that the information exists; you just can’t find it without help.


Analyzing: Data contains patterns, most of which aren’t obvious without help. Until you see the pattern, you don’t truly understand the data. Making sense of the data that appears on the hard drive is essential if you want to perform any useful tasks with it.


Filtering: Data storage objects today always have too much information. Anyone attempting to use the data gets buried beneath the mounds of useless data that doesn’t affect the current target. Sifting through the data manually is often impossible, so you need help filtering the data to obtain just the information you need.


Monitoring: Looking without actually seeing is a problem encountered in many areas, not just data storage. However, because data storage serves so many data sources, the problem can become especially critical.


Manually detecting particular data events, especially with a large setup, is impossible and there is no point in even attempting it. You need automation to make the task simpler.


Elasticsearch helps in these areas and many others. Here’s the idea: You begin with a large data source, such as the logs generated when users perform tasks using your application, and you need to make sense of all the data the data source contains.


When working with Elasticsearch, you mainly want to know about usage, which means performing analytics of various types.


To make these analytics happen, Elasticsearch creates indexes that help it locate data quickly so that it answers requests for information in near real time. You can use this information to perform tasks such as identifying outages and errors, even when users don’t directly report them.
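Once a domain is up, you talk to it over HTTP using the Elasticsearch search API. The endpoint and field name below are hypothetical, so this sketch only prints the request you would issue with curl:

```shell
# Hypothetical domain endpoint -- yours appears in the Elasticsearch console.
ENDPOINT="https://search-my-domain-abc123.us-east-1.es.amazonaws.com"

# A simple match query; the "message" field name is an assumption.
QUERY='{"query":{"match":{"message":"error"}}}'

echo "curl -s -H 'Content-Type: application/json' $ENDPOINT/_search -d '$QUERY'"
```

Against a real domain, the response is a JSON document listing the matching entries along with relevance scores, which is what makes near-real-time log analysis practical.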


Even though Elasticsearch is an open source product, it isn’t necessarily a complete product.


What Amazon is offering is a means to perform the required configuration details so that you can spend more time interacting with the data, rather than performing tasks such as creating data clusters and setting up redundancies to ensure that Elasticsearch works as intended. As with other services, you also gain access to the AWS security, scaling, and reliability features.


Many of these added features rely on Amazon CloudWatch to perform the required monitoring (the same service used by other services). Therefore, the Amazon Elasticsearch Service is a value-added proposition designed to make you more efficient. Fortunately, unlike your EFS setup, you won’t spend a lot of time using SSH to set up Elasticsearch; it comes with full console support.


As with many other Amazon services, Amazon doesn’t charge you for configuring the Amazon Elasticsearch Service. However, you do pay for the resources that Elasticsearch needs to perform its work.


Therefore, you do pay for processing time and data storage access. You can find pricing information for this service at https://


Creating the Elasticsearch domain


Before you can do anything with Elasticsearch, you must create a domain. The domain determines what Elasticsearch monitors. The following steps show you how to create a setup for the example EC2 system used in this blog:

1. Sign into AWS using your administrator account.


2. Navigate to the Amazon Elasticsearch Service console at https://

You see a Welcome page that contains interesting information about the Amazon Elasticsearch Service and what it can do for you. However, you don’t see the actual console at this point


3. Click Get Started.

You see the Create Elasticsearch Domain page. The first step is to give your domain a name.


4. Type my-domain and click Next.

Choose your domain name carefully. Amazon has some strict naming rules that it fully enforces. You see a Configure Cluster page, where you must define the characteristics of the cluster you want to use.


One of the most important settings for this example is the Instance Type. To keep costs as low as possible (conceivably free), you must select one of the free instance types.


5. Choose the t2.micro.elasticsearch option in the Instance Type field.


6. Scroll down to the Storage Configuration section of the page and choose EBS in the Storage Type field.

Amazon automatically configures the other required settings. You must use EBS storage when working with the free instance types.


7. Click Next.

You see the Set-Up Access Policy page, where you define who can access the Elasticsearch setup.


8. Choose the Allow Open Access to the Domain option.

Amazon automatically configures the access template for you. Normally Amazon doesn’t recommend using the open access option because it creates a potential security hole. However, in this case, the open setup makes experimenting with Elasticsearch easier.


9. Click Next.

You see a Review page, where you can review all the configuration settings you made. Each section provides an Edit button so that you can change specific settings without having to change them all.


10. Click Confirm and Create.

AWS creates the Elasticsearch domain for you. The Domain Status field tells you about the progress AWS makes in completing your setup. The Loading indicator shows that AWS is still working on the Elasticsearch domain. 


To complete this setup, you create a connection between the item you want to monitor using Elasticsearch and Amazon CloudTrail. For example, you can use Amazon CloudTrail to monitor log file output from your applications.


Deleting the Elasticsearch domain

Elasticsearch domain

Eventually, you may need to remove the Elasticsearch domain. Perhaps the application you use it with no longer exists. Use these steps to perform this task:


1.  Select the domain you want to remove in the Navigation Pane of the Elasticsearch Management Console.

AWS displays the selected domain page for you.

2. Click the Delete Elasticsearch Domain link. You see a Delete Domain dialog box that warns you about the permanent nature of a deletion.

3. Click Delete. AWS removes the domain for you.


Understanding the AWS messaging services

AWS messaging services

We all know by now that AWS provides a plethora of services designed to help you develop a rich set of cloud-ready applications; but with so many different services to choose from, how do you make the right choices to begin with?


Amazon SNS:

Amazon SNS, or Simple Notification Service, is an asynchronous, managed service that provides the end user with the ability to deliver or send messages to one or more endpoints or clients. This works using a publisher-subscriber model, as depicted in the following diagram:


One or more publishers or producers post a message to a corresponding SNS topic without knowing which subscribers or consumers will ultimately consume the message. The producer also doesn't wait for a response back from the consumers, thus making SNS a loosely-coupled service.


It is the consumer's task to subscribe to the topic and get notified of the incoming messages. SNS supports a variety of consumer implementation options, such as email, mobile push notifications or SMS, HTTP/HTTPS notifications, and even Lambda functions.
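The publisher-subscriber flow described above can be sketched as a toy, in-process model. `MiniTopic` is a hypothetical class for illustration, not an AWS API:

```python
class MiniTopic:
    """A toy, in-process model of an SNS topic: publishers post without
    knowing the subscribers, and every subscriber callback is notified."""
    def __init__(self):
        self.subscribers = []

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def publish(self, message):
        # Fan out to every subscriber; the publisher does not wait for replies.
        for callback in self.subscribers:
            callback(message)

topic = MiniTopic()
inbox_a, inbox_b = [], []
topic.subscribe(inbox_a.append)   # e.g. an email endpoint
topic.subscribe(inbox_b.append)   # e.g. a Lambda function
topic.publish("disk usage high")
print(inbox_a, inbox_b)  # both subscribers received the same message
```

The key property this models is the loose coupling: the `publish` call never learns who the subscribers are.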


Amazon Simple Queue Service:

Amazon Simple Queue Service (SQS), on the other hand, is an asynchronous, managed service that provides users with the ability to push messages to and pull messages from a queue.


Here too, one or more producers can push messages into the queue, while a corresponding set of consumers on the other end pull and process the messages one at a time.


An important point to note here is that, unlike its counterpart SNS, where the consumers are notified of a new message, here the consumers have to poll the queue at short intervals for new messages. Once a message is found, the consumer has to process it and then delete it from the queue. The process is shown here:
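The poll-process-delete cycle can be sketched with a toy in-memory queue. `MiniQueue` is hypothetical and only models the behavior just described:

```python
import collections

class MiniQueue:
    """Toy model of the SQS consume cycle: poll, process, then delete."""
    def __init__(self):
        self._messages = collections.deque()

    def send(self, body):
        self._messages.append(body)

    def receive(self):
        # A poll returns at most one message here; it stays in the queue
        # until the consumer explicitly deletes it.
        return self._messages[0] if self._messages else None

    def delete(self, body):
        self._messages.remove(body)

q = MiniQueue()
q.send("order-42")
msg = q.receive()        # consumer polls and finds a message
# ... process the message here ...
q.delete(msg)            # only now is it removed from the queue
print(q.receive())       # None: the queue is empty again
```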


Amazon Kinesis:

Amazon Kinesis functions a lot like Amazon Simple Queue Service; however, it is fundamentally designed and optimized for high-throughput data writes and reads. Here, instead of a queue, you are provided with a stream that consumers can use to read from multiple times.


The stream is automatically trimmed after a span of 24 hours, so, unlike your consumers from the queue, here you are not required to delete the messages once they are processed:


Similar to Amazon Kinesis, AWS also provides streaming functionality for DynamoDB, called DynamoDB Streams. Using this feature, you can capture real-time changes to items within your tables in the form of a stream.


And, finally, you also get the standard request-reply model of messaging using a combination of Amazon API Gateway, ELBs, AWS Lambda, and other services.


This mode of communication is also synchronous in nature and can be used to fit a variety of use cases, as per your requirements. Keeping these basic differences in mind, let's now move forward and learn more about SNS.


Getting started with Amazon Simple Notification Service

Simple Notification Service

As discussed briefly earlier, SNS is a managed web service that you, as an end user, can leverage to send messages to various subscribing endpoints.


SNS works in a publisher-subscriber or producer and consumer model, where producers create and send messages to a particular topic, which is in turn consumed by one or more subscribers over a supported set of protocols.


At the time of writing this blog, SNS supports HTTP, HTTPS, email, SMS, mobile push notifications, AWS Lambda, and Amazon Simple Queue Service as subscription protocols.


SNS is a really simple and yet extremely useful service that you can use for a variety of purposes, the most common being pushing notifications or system alerts to cloud administrators whenever a particular event occurs.


We have been using SNS throughout this blog for this same purpose; however, there are many more features and use cases that SNS can be leveraged for.


For example, you can use SNS to send out promotional emails or SMS messages to a large, targeted audience, or even use it as a mobile push notification service where the messages are pushed directly to your Android or iOS applications.


With this in mind, let's quickly go ahead and create a simple SNS topic of our own:

1. To do so, first log in to your AWS Management Console and, using the Filter option, filter for the SNS service. Alternatively, you can also access the SNS dashboard by selecting


2. If this is your first time with SNS, simply select the Get Started option to begin. Here, at the SNS dashboard, you can start off by selecting the Create topic option.


Once selected, you will be prompted to provide a suitable Topic name and its corresponding Display name. Topics form the core functionality for SNS. You can use topics to send messages to a particular type of subscribing consumer.


Remember, a single topic can be subscribed to by more than one consumer. Once you have typed in the required fields, select the Create topic option to complete the process. That's it! Simple, isn't it?


Having created your topic, you can now go ahead and associate it with one or more subscribers. To do so, first, we need to create one or more subscriptions. Select the Create subscription option provided under the newly created topic, as shown in the following screenshot:


Here, in the Create subscription dialog box, select a suitable Protocol that will subscribe to the newly created topic. In this case, I've selected Email as the Protocol.


Next, provide a valid email address in the subsequent Endpoint field. The Endpoint field will vary based on the selected protocol. Once completed, click on the Create subscription button to complete the process.


With the subscription created, you will now have to validate it. This can be done by launching your email application and selecting the Confirm subscription link in the email you receive.


Once the subscription is confirmed, you will be redirected to a confirmation page where you can view the subscribed topic's name as well as the subscription ID.


You can use the same process to create and assign multiple subscribers to the same topic. For example, select the Create subscription option, as performed earlier, and from the Protocol drop-down list, select SMS as the new protocol.


Next, provide a valid phone number in the subsequent Endpoint field. The number can be prefixed by your country code, as shown in the following screenshot. Once completed, click on the Create subscription button to complete the process:


With the subscriptions created successfully, you can now test the two by publishing a message to your topic. To do so, select the Publish to topic option from your topics page.


Once a message is published here, SNS will attempt to deliver that message to each of its subscribing endpoints; in this case, to the email address as well as the phone number.


Type in a suitable Subject name followed by the actual message that you wish to send. Note that if your message exceeds 160 characters for an SMS, SNS will automatically send another SMS with the remaining characters.
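The splitting behavior described above can be illustrated with a short sketch. Note this is a simplification: real concatenated SMS segments reserve header space, so actual per-part limits may be smaller than 160 characters.

```python
def split_sms(text, limit=160):
    """Split a message into parts of at most `limit` characters, mirroring
    the tutorial's description of SNS sending a follow-up SMS with the
    remaining characters. Illustration only; real multipart SMS encoding
    reserves header bytes per segment."""
    return [text[i:i + limit] for i in range(0, len(text), limit)]

message = "x" * 200
parts = split_sms(message)
print(len(parts), len(parts[0]), len(parts[1]))  # 2 160 40
```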


You can optionally switch the Message format between Raw and JSON to match your requirements. Once completed, select Publish Message.


Check your email application once more for the published message. Similarly, you can create and associate one or more such subscriptions to each of the topics that you create. In the next section, we will look at how you can leverage SNS to send SMS messages or text messages to one or multiple phone numbers.


Sending text messages using SNS

text messages

Amazon SNS also provides users with a really easy-to-use interface which allows you to send text messages or SMS messages to one or multiple phone numbers.


It also provides you with the ability to classify and send messages based on their criticality, as well as specify the maximum amount that you wish to spend on sending SMS messages each month. So, without wasting any time, let's get straight to it:


To send SMS messages using SNS, first log in to the SNS dashboard by selecting


Once logged in, select the Text messaging (SMS) option from the navigation pane. This will bring up the Text messaging (SMS) dashboard, where you can set your SMS preferences as well as send messages to one or more phone numbers. First up, let's set some preferences by selecting the Manage text messaging preferences option from the dashboard.


Fill in the following preference fields:

Default message type: SNS provides two message types: Promotional and Transactional. The Promotional option can be selected if the messages you wish to send are less critical, for example, simple marketing messages, and so on.


On the other hand, Transactional messages are ideally suited for critical messages, such as one-time passwords, transaction details, and so on. SNS optimizes the message delivery for Transactional messages to achieve the best reliability.


At the time of writing this blog, sending SMS messages is supported in the countries listed at ns/latest/dg/sms_supported-countries.html.


For this particular scenario, I've selected the Promotional option.

Account spend limit: The maximum amount you wish to spend, in USD, for sending messages in a month. By default, the limit is set to USD 1.00. For this scenario, we are not going to change this value.


Both Promotional and Transactional message types have different costs based on the specified country or region. You can look up the prices at


IAM role for CloudWatch Logs access: This option is used to create an IAM role that basically allows Amazon SNS to write its logs to CloudWatch. Since this is the first time we are configuring this feature, select the Create IAM role option.


This will redirect you to a new page where you should select the Allow option to grant SNS the necessary rights. Here is a snippet of the rights that are provided for your IAM role:

{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": [ ... ],
    "Resource": [ ... ]
  }]
}


Default percentage of success to sample: This option is used to specify the percentage of successful SMS messages delivered, based on which SNS will write logs into CloudWatch. To write only logs for failed message deliveries, set this value to 0. By default, SNS will write logs for all successful deliveries (100%).


Default sender ID: This option is used to specify the name of the message's sender. You can provide any meaningful name here.


Reports storage: Use this option to configure an S3 bucket that will store daily SMS usage reports from Amazon SNS.

If you are providing an existing bucket as your Reports storage then ensure that it has the necessary access rights to interact with the SNS service. Once the required fields are filled in, select the Update preferences option to complete the process.


To send the SMS messages, simply select the Send a text message (SMS) option from the Text messaging (SMS) dashboard. This will bring up the Send text message (SMS) dialog box, as shown in the following screenshot. Provide a valid phone Number and a Message. Remember to prefix your country code in the phone number as well:


You can optionally override the Sender ID field here; however, in this case, we have left it at the default value that was configured in the preferences stage.


After the required fields are filled in, simply select Send a text message to complete the message-sending process. You can also verify the delivery status of each message sent, either Transactional or Promotional, by using the Account stats section provided in the Text messaging (SMS) page.


Using Amazon SNS as triggers


One of the key benefits of having a service such as SNS is that it can also be used as a trigger mechanism for a variety of use cases. Messages sent by SNS can be used to trigger simple Lambda functions that in turn perform some action over another AWS service, or simply process the message from SNS and forward its contents to another application.


In this section, we will be exploring a really simple use case where an SNS topic is used as a trigger mechanism for a Lambda function to push CloudWatch alerts over to Slack!


The alerts will be sent out to a custom-made Slack channel that your IT team can use to track alerts and other important notifications with regard to your AWS environment.


At a broader level, here is the list of things that we plan to do for this activity:

  • Create an SNS topic that will act as the Lambda trigger
  • Create a CloudWatch alarm for one of our EC2 machines, say, if CPU utilization goes higher than 80% then trigger the alarm


The CloudWatch alarm will post the notification to an SNS topic. The SNS topic will act as a trigger for our Lambda function. As soon as the Lambda function gets a trigger, it will post the notification to our Slack channel. Sounds simple? Let's get down to implementing it then:


1. First, we will need to create a simple SNS topic which will act as a trigger for the Lambda function. Go ahead and create a simple SNS topic as we did in our earlier steps.


Once completed, make a note of the SNS topic's ARN from the topics dashboard. In this case, our SNS is configured to send notifications to an email subscriber in the form of an IT admin email alias.


Next up, we create our CloudWatch alarm. To do so, select the CloudWatch service from the AWS Management Console and click on Alarms in the navigation panel. Select Create alarm to get started.


In this scenario, we will be monitoring the EC2 instances in our environment, so I've gone ahead and selected the EC2 Metrics option. Alternatively, you can select any other metrics, as per your requirements. In our case, we have gone ahead and configured a simple CPU Utilization alarm.


Make sure that you set up a notification for the alerts and point it to the newly created SNS topic. With the SNS topic and CloudWatch alarm in place, we now need to configure a Slack channel where the alert notifications will be posted. For that, we will need an incoming webhook to be set and a hook URL that will be used to post the notifications:


Go to your Slack team's settings page and select the Apps & integrations option. You can sign up for a free Slack account at


2. Once you click on Apps & integrations, it will take you to a new page which lists a variety of pre-configured apps. Search for Incoming and select the Incoming Webhooks from the options that appear.


Next, click on Add Configuration. It will ask you to select the Channel to post, along with a few other necessary parameters. Make sure that you copy and save the Webhook URL before you proceed any further with the next steps.


Now that we have our Slack hook URL ready, we can finally get started with deploying our Lambda function. For this exercise, we will be using an existing AWS Lambda function blueprint designed for Slack integration, using the Node.js 4.3 version:


From the AWS Management dashboard, filter for the Lambda service using the Filter option, or alternatively, select


From the AWS Lambda landing page, select the Create a function option to get started.

AWS Lambda

For working with Lambda functions, you can choose to create your own function from scratch, or alternatively, filter and use a function from a list of predefined and configured blueprints.


In this case, select the Blueprints option and use the adjoining blueprints filter to search for the following blueprint: cloudwatch-alarm-to-slack.


Select the blueprint and fill out the necessary information for your function, such as its name, role name, and so on. Once done, from the SNS section, select the newly created SNS topic from the drop-down list.


Remember to select the Enable trigger checkbox before proceeding with the next steps.


Finally, in the Environment variables section, provide the appropriate values for the slackChannel and kmsEncryptedHookUrl parameters, as shown in the following screenshot.

Remember, the kmsEncryptedHookUrl is nothing but the Slack hook URL that we created a while back:
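To make the Lambda side concrete, here is a hedged Python sketch of building the Slack webhook payload from an SNS event. The event shape, the channel name, and the `build_slack_payload` helper are illustrative assumptions; the actual cloudwatch-alarm-to-slack blueprint also decrypts the hook URL with KMS before posting:

```python
import json

def build_slack_payload(sns_event, channel="#it-alerts"):
    """Build the JSON body posted to a Slack incoming webhook from an SNS
    event record. Simplified sketch: the real blueprint decrypts
    kmsEncryptedHookUrl via KMS and POSTs this payload to the hook URL."""
    record = sns_event["Records"][0]["Sns"]
    return {"channel": channel,
            "text": f"{record['Subject']}: {record['Message']}"}

# A synthetic SNS event, shaped like what a CloudWatch alarm notification
# produces (hypothetical values for illustration).
event = {"Records": [{"Sns": {"Subject": "ALARM",
                              "Message": "CPUUtilization > 80%"}}]}
payload = build_slack_payload(event)
print(json.dumps(payload))
```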


With the values filled in, simply select the Create function option and let the magic begin!

Based on the selected CloudWatch metric for your alarm, go ahead and create some synthetic load for your EC2 instance. Once the load crosses the set threshold in the alarm, it triggers a corresponding message to the SNS topic, which in turn triggers the Lambda function to post the alert over on the Slack channel.


In this way, you can also use the same SNS topic for subscribing to various other services, such as Amazon Simple Queue Service, for other processing requirements.


Monitoring Amazon SNS using Amazon CloudWatch metrics

CloudWatch metrics

Amazon SNS automatically collects and sends various metrics about message deliveries to Amazon CloudWatch.


You can view these metrics and assign alarms to them to alert you in case the message delivery rate drops below a certain threshold. You can additionally view the message delivery logs using the CloudWatch Logs page:


To get started, first ensure that you have assigned an IAM role that allows SNS to write SMS delivery logs over to CloudWatch. To do so, from the navigation pane, select the Text messaging (SMS) option.


Next, from the Manage text messaging preferences option, ensure that you have a valid IAM role provided under the IAM role for CloudWatch Logs access field.


Once the IAM role is created, log in to your CloudWatch dashboard by selecting

Here, select the Logs option from the navigation pane to bring up the CloudWatch Log Groups page.


  • You should see a default Log Group created here, by the name of DirectPublishToPhoneNumber.
  • Select the Log Group to view the SMS delivery log messages. The logs will either show a SUCCESS or FAILURE in the status field.


You can additionally create and associate CloudWatch alarms with your monitored SNS metrics. To do so, from the CloudWatch dashboard, select the Metrics option.


From the All metrics tab, filter and select the SNS option.

SNS option

Based on your requirements, you can now choose to view the metrics by PhoneNumber, Country, SMS type, and so on.


In this case, we have selected the PhoneNumber option to view the NumberOfNotificationsFailed and NumberOfNotificationsDelivered metrics.


Next, select the Graphed metrics tab to view the two metrics and their associated actions. Using the Actions column, select the Create alarm option for the metric that you wish to monitor.


Fill in the respective details and configure the alarm's threshold values based on your requirements. Once completed, click on Create Alarm to complete the process.


In this way, you can leverage Amazon CloudWatch to create and view logs and alerts generated by the SNS service. In the next section, we will be exploring and learning a bit about the second part of the AWS messaging services: Simple Queue Service.


Introducing Amazon Simple Queue Service

Amazon SQS

Amazon Simple Queue Service is a managed, highly scalable, and durable service that provides developers with a mechanism to store messages that can be later consumed by one or more applications.


In this section, we will explore a few of the concepts and terminologies of Simple Queue Service, along with an understanding of which queue type to use in which scenario, so let's get started!


To start off with, Simple Queue Service is provided in two different modes:

Standard queue: Standard queues are the default selection when it comes to working with Simple Queue Service. Here, the queues created offer a nearly unlimited transactions-per-second (TPS) rate coupled with an at-least-once delivery model.


What this model means is that a message is delivered at least once, but occasionally there is a good probability that more than one copy of the same message will be delivered as well.


This is due to the fact that SQS is designed and built on a highly distributed system that creates copies of the same message in order to maintain high availability. As a result, you may end up with the same message more than once.


Standard queues also work on a best-effort ordering model, in which case messages might be delivered in a different order from the one in which they were sent.


It is up to your application to sort the messages into the right order in which the messages should be received. So, when is the standard queue an ideal choice for decoupling your applications?


Well, if your application has a high throughput requirement, for example, processing of batch messages, decoupling incoming user requests from an intense background processing work, and so on, then standard queues are the right way to go.
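Because of the at-least-once model, consumers of a standard queue are usually written to be idempotent. A minimal sketch, assuming each message is a dict carrying a MessageId field (hypothetical shape, mirroring the identifier SQS attaches to each message):

```python
processed_ids = set()

def handle(message):
    """Idempotent consumer for a standard queue: because delivery is
    at-least-once, skip any MessageId we have already processed."""
    if message["MessageId"] in processed_ids:
        return False                    # duplicate delivery, ignore it
    processed_ids.add(message["MessageId"])
    # ... real processing of message["Body"] would happen here ...
    return True

m = {"MessageId": "a075bd88", "Body": "charge card"}
print(handle(m), handle(m))  # True False: the second copy is dropped
```

In production the set of seen IDs would live in a shared store with an expiry, since duplicates can arrive at any consumer.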


Standard queues are available across all AWS regions.

AWS regions

FIFO queues: When working with standard queues, there is a problem of maintaining the order of the messages and also ensuring that each message is processed only once.


To solve this issue, AWS introduced the FIFO queue that provides developers with a guaranteed order of delivery of messages, as well as the assurance that each message is delivered only once, where no duplicates or copies are ever sent out.


FIFO queues, on the other hand, do not offer an unlimited throughput capacity, unlike their predecessor. At the time of writing this blog, FIFO queues support up to 300 messages sent per second, with an additional 3,000 messages per second capacity if a batch of 10 messages per operation is performed.


Such queues are really useful when the order of the messages is of critical importance, for example, ensuring that a user follows the correct order of events while registering or purchasing of a product, and so on.


FIFO queues are currently only available in the US East (N. Virginia), US East (Ohio), US West (Oregon), and EU (Ireland) regions. With this basic understanding, let's look at some simple steps to get you started with your very own queue in a matter of minutes!


Creating your first queue

SQS queue

Getting started with your own Simple Queue Service queue is a fairly straightforward process. In this section, we will be looking at how you can create your very own standard queue using the AWS Management Console:


To begin with, log in to your AWS Management Console and filter for the Simple Queue Service using the Filter option provided. Alternatively, you can also access the Simple Queue Service dashboard by selecting


Since this is our first time configuring the Simple Queue Service queue, select the Get started now option to continue.


Here, in the Create New Queue page, start off by providing a suitable name for your queue by filling in the Queue Name field. 

If you are building a FIFO queue, you will need to suffix .fifo to your queue's name, for example, myQueue.fifo.


With the queue name filled out, the next step is to select the type of queue you wish to set up. In this case, let's first start off by selecting the Standard Queue option.


Next, select the Configure Queue option to go through some of the queue's configuration parameters. Alternatively, you can also select the Quick-Create Queue option to select all the default parameters for your queue.


In the Queue Attributes section, feel free to modify the following set of parameters for your queue, based on your requirements:


Default Visibility Timeout: Amazon Simple Queue Service does not automatically delete messages from the queue, even if they are processed by the consumers. Hence, it is the consumer's duty to delete the respective message from the queue after it has been received and processed.


However, due to the distributed nature of SQS, there is no guarantee that other consumers won't try to read from a copy of the same message.


To prevent such scenarios from occurring, Simple Queue Service sets a small Visibility Timeout period on a message once it is received by a consumer. This prevents other consumers from reading that message until the timeout expires.


The Visibility Timeout defaults to 30 seconds and can be set to a maximum of 12 hours. If the consumer is not able to process the message within the allocated timeout window, the message will be delivered to another consumer, and the process will continue until the message is deleted from the queue by a consumer.
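The visibility-timeout behavior can be modeled with a small sketch. `TimeoutQueue` is a toy class, and the sub-second timeout is purely for demonstration:

```python
import time

class TimeoutQueue:
    """Toy model of the visibility timeout: a received message is hidden
    from other consumers until the timeout expires or it is deleted."""
    def __init__(self, visibility_timeout=0.2):   # seconds, for the demo
        self.visibility_timeout = visibility_timeout
        self.messages = {}            # body -> timestamp when visible again

    def send(self, body):
        self.messages[body] = 0.0     # visible immediately

    def receive(self):
        now = time.monotonic()
        for body, visible_at in self.messages.items():
            if now >= visible_at:
                # Hide the message from other consumers for the timeout.
                self.messages[body] = now + self.visibility_timeout
                return body
        return None

    def delete(self, body):
        self.messages.pop(body, None)

q = TimeoutQueue()
q.send("job-1")
print(q.receive())    # job-1: a consumer picks it up
print(q.receive())    # None: hidden while the timeout is running
time.sleep(0.25)
print(q.receive())    # job-1 again: never deleted, so it reappears
```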


Message Retention Period: The amount of time Amazon Simple Queue Service retains a message in case it is not deleted. The accepted values here are a minimum of 1 minute and a maximum of 14 days.


Maximum Message Size: The maximum message size in bytes accepted by Amazon Simple Queue Service. The maximum limit is 256 KB.


Delivery Delay: Amazon Simple Queue Service allows you to temporarily delay the delivery of new messages in a queue for a specified amount of seconds. This is achieved by placing the new messages in a Delay queue which is completely managed by AWS itself.


Although it seems similar to the concept of a Visibility Timeout, a delay queue hides a message when it is first added to the queue, whereas a Visibility Timeout hides a message only after it is picked up by a consumer. The accepted values here are between 0 seconds and 15 minutes:


Receive Message Wait Time: Amazon Simple Queue Service periodically queries a small subset of the servers to determine if any new messages are available for consumption. This method is called short polling and is generally enabled by default when the Receive Message Wait Time is set to 0.


This method, however, results in a lot of empty responses as well, as sometimes messages just may not be present in the queue for consumption.


In that case, SQS also provides the concept of long polling, whereby Amazon Simple Queue Service waits until a message is available in the queue before sending a response.


This drastically reduces the number of empty responses and is helpful in reducing the overall running costs of your system. To enable long polling, simply change the value of Receive Message Wait Time to a value between 0 and 20 seconds.
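The difference between short and long polling can be sketched with Python's standard queue module standing in for SQS. The `short_poll` and `long_poll` helpers are hypothetical illustrations of the two behaviors:

```python
import queue
import threading

q = queue.Queue()

def short_poll(q):
    """Short polling: return immediately, possibly with an empty response."""
    try:
        return q.get_nowait()
    except queue.Empty:
        return None

def long_poll(q, wait_seconds=1.0):
    """Long polling: wait up to wait_seconds for a message before
    returning, which avoids a stream of empty responses."""
    try:
        return q.get(timeout=wait_seconds)
    except queue.Empty:
        return None

print(short_poll(q))                         # None: empty response right away
threading.Timer(0.1, q.put, args=["hi"]).start()
print(long_poll(q))                          # "hi": the wait paid off
```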


With these basic settings configured, you can now go ahead and create your very own queue. Note, however, that there are a few additional settings that you can configure, such as a dead-letter queue and server-side encryption; we will set these aside for the time being. Select Create Queue once done.


With the new queue created, you can now start using it by simply copying the queue's URL (https://sqs.<REGION>.amazonaws.com/<ACCOUNT_ID>/<QUEUE_NAME>) and providing the same to your applications or consumers to consume from:


You can also test the functionality of your queue by sending a test message to it using the Simple Queue Service dashboard itself. Select the newly created queue from the Simple Queue Service dashboard and, from the Queue Actions drop-down menu, select the Send a Message option.


This will bring up the Send a Message dialog box, as shown in the following screenshot. Next, type in a text message in the Message Body section and click on Send Message to complete the process.


You can optionally also change the delivery delay of this individual message by enabling the Delay delivery of this message by option and providing a value between 0 and 15 minutes.


With the message sent, you will be notified of the message's identifier along with an MD5 checksum of the body. Click on Close to close the Send a Message dialog box.


With this, the status of the Messages Available column should change to 1 as the new message is now waiting to be read or consumed. To read the message from the Simple Queue Service dashboard, once again select the Queue Actions drop-down menu and select the View/Delete Messages option.


This brings up the View/Delete Messages dialog box, as shown in the following screenshot. Here, the dialog will poll the queue once every 2 seconds for as long as you have specified using the Poll queue for option.


You can also change the maximum number of messages viewed by modifying the View up to field. Once done, select the Start Polling for Messages option to get things underway:


With the polling started, you should shortly see your text message in the display area. You can also verify the validity of the message by selecting the More Details option adjoining the message and comparing the MD5 checksum against the earlier recorded one.
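The MD5 verification mentioned above is easy to do programmatically: SQS reports the digest of the message body as MD5OfBody, which a client can recompute with the standard library and compare against the value in the response:

```python
import hashlib

def md5_of_body(body: str) -> str:
    """Recompute the MD5 digest that SQS reports as MD5OfBody, so a
    client can verify the message body arrived uncorrupted."""
    return hashlib.md5(body.encode("utf-8")).hexdigest()

digest = md5_of_body("Well this is far easier than I expected.")
# Compare this 32-character hex digest with the MD5OfBody field
# returned by the console or API response.
print(len(digest))  # 32
```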


Once completed, select the message and click on the Delete Messages option to remove the message from the queue. Remember, this is a permanent action and it cannot be undone. With the message deleted, your queue should once again show zero messages in flight or available.


Creating a FIFO queue using the AWS CLI


Working with the AWS Management Console is easy enough, but the AWS CLI makes things even simpler! In this section, we will look at a few simple AWS CLI commands that you can use to create and work on your first FIFO queue:


To get started, we require a server or instance with the latest version of the AWS CLI installed and configured.


If you don't already have this working, you might want to have a quick look at the detailed steps provided at


With the AWS CLI installed and prepped, you can now use the following command to create your first FIFO queue. First, create a simple JSON file that will store the necessary list of attributes that we wish to pass to our new FIFO queue:

# vi fifo-queue.json
{
  "VisibilityTimeout": "30",
  "MaximumMessageSize": "262144",
  "MessageRetentionPeriod": "345600",
  "DelaySeconds": "10",
  "ReceiveMessageWaitTimeSeconds": "0",
  "FifoQueue": "true",
  "ContentBasedDeduplication": "true"
}


Here, most of the values are probably known to you already, such as the VisibilityTimeout, the MaximumMessageSize, DelaySeconds, and so on.

The two new attributes listed here specifically for the FIFO queue are:


FifoQueue: Used to designate a queue as a FIFO queue. Note that you cannot change an existing standard queue into a FIFO queue; you have to create a new FIFO queue altogether. Additionally, when you set this attribute for your queue, you must also provide the MessageGroupId for your messages explicitly.


ContentBasedDeduplication: Ensures each message is processed exactly once. With content-based deduplication enabled, messages with identical content sent within the deduplication interval are treated as duplicates, and only one copy of the message is actually delivered.
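To build some intuition for how content-based deduplication behaves, here is a toy Python model (a local sketch only, not the actual service behavior): a message whose body was already seen within the deduplication interval is dropped, while the same body sent after the interval has elapsed is delivered again. SQS uses a 5-minute deduplication interval.

```python
import hashlib
import time

class DedupWindow:
    """Toy model of content-based deduplication: a message whose body hash
    was already seen within the interval is dropped as a duplicate."""
    def __init__(self, interval_seconds=300):  # SQS uses a 5-minute interval
        self.interval = interval_seconds
        self.seen = {}       # body hash -> time last accepted
        self.delivered = []
    def send(self, body, now=None):
        now = time.time() if now is None else now
        key = hashlib.sha256(body.encode()).hexdigest()
        last = self.seen.get(key)
        if last is not None and now - last < self.interval:
            return False     # duplicate within the interval: dropped
        self.seen[key] = now
        self.delivered.append(body)
        return True

q = DedupWindow()
q.send("hello", now=0)       # delivered
q.send("hello", now=10)      # duplicate, dropped
q.send("hello", now=400)     # outside the interval, delivered again
```

The practical upshot: producers can safely retry a send after a network error without the consumer seeing the message twice.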


Once the JSON file is created, run the following command to create your FIFO queue:

# aws sqs create-queue --queue-name myQueue.fifo --attributes file://fifo-queue.json


With the queue created, you can also use the CLI to send messages to the queue. This is accomplished using the following command:

# aws sqs send-message --queue-url <your-queue-url> --message-body "Well this is far easier than I expected." --message-group-id "R@nD0M"


The send-message command accepts the queue URL as one of the input parameters, along with the actual message that has to be sent. The message can be raw, JSON, or XML formatted.


In addition to this, the send-message command also uses the --message-group-id parameter, which tags the message as belonging to a specific message group.


Messages that belong to the same message group are processed in a FIFO manner:

The --message-group-id parameter is mandatory when working with FIFO queues.
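The per-group ordering guarantee can be sketched as a toy model: each message group is its own FIFO, and groups are processed independently of one another. This is an illustration only, not SQS internals.

```python
from collections import defaultdict, deque

class FifoQueueModel:
    """Toy model of FIFO ordering per message group: messages in the same
    group are received in the order they were sent; groups are independent."""
    def __init__(self):
        self.groups = defaultdict(deque)
    def send(self, group_id, body):
        self.groups[group_id].append(body)
    def receive(self, group_id):
        # Deliver the oldest message in the group, if any
        return self.groups[group_id].popleft() if self.groups[group_id] else None

q = FifoQueueModel()
q.send("orders", "order-1")
q.send("orders", "order-2")
q.send("audit", "login-event")
```

Here "order-1" is always received before "order-2", while the "audit" group's messages impose no ordering constraint on the "orders" group.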


With the message now sent to the queue, you can use the AWS CLI to receive the message as well. Use the following command to fetch the messages from your FIFO queue:


# aws sqs receive-message --queue-url <your-queue-url>


You can additionally use the --max-number-of-messages parameter to fetch up to 10 messages that are currently available in the queue.


Here is a snippet of the output that you may get with the previous command:

{
    "Messages": [
        {
            "Body": "Well this is far easier than I expected.",
            "ReceiptHandle": "AQnmzJjGNrI9cl7ZyZ2NyVDDDy==",
            "MD5OfBody": "d733b7da2656ffc18d99bea3613e24d7",
            "MessageId": "a075bd88-4942-416d-b632-0258ac8"
        }
    ]
}

You can similarly use the AWS CLI to list the available queues in your environment, modify their parameters, push and poll for new messages, delete messages, and much more!

Remember, messages persist in the queue until you delete them manually or the queue's MessageRetentionPeriod expires.


Integrating Amazon SNS and Amazon Simple Queue Service

Amazon SNS SQS

One of the key features of Amazon Simple Queue Service is that it can easily be integrated with other AWS services, such as Amazon SNS. Why would I need something like that? To begin with, let us quickly recap the things we have learned so far about both SNS and Simple Queue Service:


Amazon SNS:

  • Leverages the push mechanism
  • Can push messages directly to mobile devices or other subscribers
  • Does not support message persistence

Amazon Simple Queue Service:

  • Leverages the polling mechanism
  • Needs a worker to poll for messages
  • Supports message persistence, which can come in really handy if you can't reach your consumers due to a network failure


From this comparison, it is easy to see that each service offers its own pros and cons when it comes to working with them.


However, when you join the two services, you can leverage them to design and build massively scalable yet decoupled applications. One common architectural pattern that combines SNS and Simple Queue Service is called the fan-out pattern.


In this pattern, a single message published to a particular SNS topic can be distributed to a number of Simple Queue Service queues in parallel. Thus, you can build highly-decoupled applications that take advantage of parallel and asynchronous processing. Consider a simple example to demonstrate this pattern.


A user uploads an image to his S3 bucket, which triggers an SNS notification to be sent to a particular SNS topic. This topic can be subscribed to by a number of Simple Queue Service queues, each running a process completely independent of the others.


For example, one queue can be used to process the image's metadata while another resizes the image to a thumbnail, and so on. In this pattern, the queues work independently, without having to worry about whether the others have completed their processing. Here is a representational figure of this pattern:
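The fan-out pattern described above can be sketched in a few lines: a single publish call delivers an independent copy of the message to every subscribed queue. The `Topic` class below is a local stand-in for SNS, not an AWS API.

```python
class Topic:
    """Toy fan-out: publishing once delivers a copy to every subscribed queue."""
    def __init__(self):
        self.queues = []
    def subscribe(self, queue):
        self.queues.append(queue)
    def publish(self, message):
        for q in self.queues:
            q.append(dict(message))  # each queue receives its own copy

# Two independent consumers: one for metadata, one for thumbnails
metadata_queue, thumbnail_queue = [], []
topic = Topic()
topic.subscribe(metadata_queue)
topic.subscribe(thumbnail_queue)
topic.publish({"event": "s3:ObjectCreated", "key": "photo.jpg"})
```

Each queue now holds its own copy of the event, so the metadata worker and the thumbnail worker can process it at their own pace.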


To integrate both the SNS and Simple Queue Service services, you will first be required to create a simple SNS topic of your own. Go ahead and create a new SNS topic using the AWS Management Console, as performed earlier in this blog.


Once the topic is ready, the next step involves the creation of an associated subscription. To do so, from the SNS dashboard, select the Subscriptions option from the navigation pane and click on Create subscription to get started.

In the Create subscription dialog box, copy and paste the newly created topic's ARN in the Topic ARN field.


Once the Topic ARN is pasted, select the Amazon Simple Queue Service option from the Protocol drop-down list, followed by pasting a queue's ARN in the Endpoint field. In this case, I'm using the standard queue's endpoint that we created a while back in this blog. With the required fields filled in, select Create subscription to complete the process.


Next, from the Simple Queue Service dashboard, select the queue that you have identified for this integration and, from the Permissions tab, select Add a Permission to allow the SNS service to send messages to the queue. To do so, provide the following set of permissions:

  • Effect: Allow
  • Principal: Everybody
  • Actions: SendMessage


Once done, click on Add Permission to grant the SNS service the required set of permissions.

We are now ready to test the integration! To do so, simply fire a sample message using the Publish to Topic option from the SNS dashboard. Once the message is successfully sent, cross over to the Simple Queue Service dashboard and poll the queue using the View/Delete Messages option from under the Queue Actions drop-down list.


Here is a snippet of the Message Body obtained after long polling the queue. Similarly, you can use such a fan-out pattern to design and build your very own highly scalable and decoupled cloud-ready applications.


Planning your next steps

Well, that was really quite a lot to learn and try out, but we are not done yet! There are still a few things that you ought to try on your own with SNS, as well as with SQS. First up: Amazon SNS mobile push notifications.


We have already touched upon the fact that Amazon SNS can be used to send notifications to a variety of subscribers, including HTTP, HTTPS endpoints, Amazon Simple Queue Service, and AWS Lambda, but one other key feature recently added is SNS' ability to push notifications directly to your applications on mobile devices.


This is called SNS mobile push notifications and, as of now, SNS supports the following push notification services:

  • Amazon Device Messaging (ADM)
  • Apple Push Notification Service (APNS) for both iOS and macOS
  • Baidu Cloud Push (Baidu)
  • Google Cloud Messaging (GCM) for Android
  • Microsoft Push Notification Service (MPNS) for Windows Phone
  • Windows Push Notification Services or Windows Notification Service (WNS)


It's pretty easy and straightforward to get started with mobile push notifications. All you need is a set of credentials for connecting to one of the supported push notification services, a device token or registration ID for the mobile application and device itself, and Amazon SNS configured to send push notification messages to the mobile endpoints.


You can read more about SNS mobile push notifications in the AWS documentation.


The other important feature worth trying out is server-side encryption (SSE) for your Amazon Simple Queue Service queue. You can leverage SSE to encrypt and protect data stored in your queue; however, this feature is only available in the US East (N. Virginia), US East (Ohio), and US West (Oregon) regions at present.


Encryption can be enabled at the time of the queue's creation, as well as after the queue has been created. Note, however, that messages already present in the queue are not encrypted when SSE is switched on for an existing queue.


You can configure SSE for an existing queue simply by selecting it from the Simple Queue Service dashboard and selecting the Configure Queue option present in the Queue Actions drop-down menu. Here, check the Use SSE checkbox to enable the server-side encryption on your queue.


At this time, you will be prompted to select a customer master key (CMK) ID which you can leave to the default value if you do not have a CMK of your own.


Once done, set a duration for the Data key reuse period, anywhere between 1 minute and 24 hours. Click on Save Changes to apply the recent modifications to the queue.
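The 1-minute to 24-hour window corresponds to the queue's KmsDataKeyReusePeriodSeconds attribute, which accepts values from 60 to 86,400 seconds. A small sketch of that bounds check (the helper name is illustrative, not an AWS API):

```python
def validate_reuse_period(seconds: int) -> int:
    """KmsDataKeyReusePeriodSeconds must be between 1 minute and 24 hours."""
    if not 60 <= seconds <= 86400:
        raise ValueError("Data key reuse period must be 60-86400 seconds")
    return seconds

validate_reuse_period(300)  # the default is 5 minutes (300 seconds)
```

A shorter reuse period gives better security (keys are rotated more often) at the cost of more calls to KMS.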


Last but not least, I also recommend that you try out the dead letter queue feature provided by Amazon Simple Queue Service. Dead letter queues are simply queues you create for storing messages that could not be processed by your application's main queue.


This comes in really handy when you need to debug issues in your application or messaging system. However, note that the dead letter queue of a standard queue must itself be a standard queue, and likewise the dead letter queue of a FIFO queue must be a FIFO queue.


You can configure any queue within your account to be a dead letter queue for another queue by simply configuring the Redrive Policy of your application's main queue.
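A redrive policy is just a small JSON document naming the dead letter queue's ARN and a maximum receive count: once a message has been received that many times without being deleted, it is moved to the dead letter queue. Here is a minimal sketch that builds one (the ARN shown is a placeholder):

```python
import json

def redrive_policy(dead_letter_queue_arn: str, max_receive_count: int = 5) -> str:
    """Build the RedrivePolicy attribute value for the main queue: after
    max_receive_count failed receives, a message moves to the dead letter queue."""
    return json.dumps({
        "deadLetterTargetArn": dead_letter_queue_arn,
        "maxReceiveCount": str(max_receive_count),
    })

# Placeholder ARN for illustration only
policy = redrive_policy("arn:aws:sqs:us-east-1:123456789012:myDLQ")
```

You would then pass this JSON string as the RedrivePolicy attribute of your application's main queue, for example via the set-queue-attributes CLI command.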