What is AWS Cloud (Best Tutorial 2019)

Amazon Web Services (AWS) is a huge array of services. This tutorial explains what the AWS cloud is and how to start working with it, with practical examples.

 

Amazon Web Services (AWS) started out as a tiny bit of software that enabled people to perform a limited number of tasks directly on Amazon, such as querying a product, placing a product request, or checking on order status. 

 

Today, AWS is a huge web service, so big that it’s nearly impossible for anyone to explore it fully. It performs all sorts of tasks that don’t even relate to buying and selling products.

 

In fact, the buying and selling of products is more of a sideline today as people use AWS more for computing services of all types (things like data storage and running applications).

 

Part of making AWS small enough to understand is to define the AWS environment. For such an understanding, you need to know a little about Infrastructure as a Service (IaaS), Software as a Service (SaaS), and Platform as a Service (PaaS).

 

Even though you can use AWS quite well without a certification, obtaining an AWS certification will help you get a better job with the organization of your dreams. Finally, you need to round out your AWS education to use AWS effectively.

 

What is AWS Cloud

Amazon Web Services (AWS) is actually a huge array of services that can affect consumers, Small to Medium-Sized Business (SMB), and enterprises. Using AWS, you can do everything from backing up your personal hard drive to creating a full-fledged IT department in the cloud.

 

The installed base is immense. You can find case studies of companies like Adobe and Netflix that use AWS at https://aws.amazon.com/solutions/case-studies/.

 

AWS use isn’t just for private companies either — even the government makes use of its services. The technologies that make all these services possible are actually simple in conception. Think of a pair of tin cans with a string attached between them. Amazon holds one tin can and you hold the other.

 

By talking into one tin can, you can hear what is said at the other end. The implementation, however, relies on details that make communication harder than you might initially think. The following sections give you an overview of how the AWS cloud works.

 

Understanding service-driven application architectures

Service-driven application architectures, sometimes known as Service-Oriented Architectures (SOA), come in many forms.

 

No matter how you view them, service-driven application architectures are extensions of the client-server technologies used in the early days of computing, in that a client makes a request that a server fulfills by performing an action or sending a response.

 

However, the implementation details have changed significantly over the years, making modern applications far more reliable, flexible, and less reliant on a specific network configuration.

 

The request and response process can involve multiple levels of granularity, with the term microservice applied to the smallest request and response pairs.

 

Developers often refer to an application that relies on a service-driven application architecture as a composite application because it exists as multiple pieces glued together to form a whole.

 

Service-driven application architectures follow many specific patterns, but in general, they use the following sequence to perform communication tasks:

 

1. Create a request on the client using whatever message technology the server requires.

2.  Package the request, adding security or other information as needed.

 

3. Send the request using a protocol, such as Simple Object Access Protocol (SOAP), or an architecture, such as Representational State Transfer (REST). You can discover how SOAP works at http://www.w3schools.com and how REST works at http://www.tutorialspoint.com/restful/; a passing knowledge of both is helpful when working with AWS.

4. Process the request on the server.

5. Perform an action or return data as required by the request.

6. When working with data, process the response on the client and present the results to the user (or another recipient).

AWS provides a service-driven application architecture in which you choose a specific service, such as S3, to perform specific tasks, such as to back up files on a hard drive. In many cases, you must perform setup steps in addition to simply interacting with the service.
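
To make the request-and-response idea concrete, here is a minimal sketch using the AWS SDK for Python (boto3). It assumes boto3 is installed and your AWS credentials are already configured (for example, via the AWS CLI); the call simply lists your S3 buckets, with the SDK packaging and sending the request for you.

```python
# A minimal sketch of the request/response cycle with boto3.
# Assumes boto3 is installed and credentials are configured locally.
import boto3

s3 = boto3.client("s3")          # the client packages requests (steps 1-3)

# Send a request; AWS processes it on the server (steps 4-5)...
response = s3.list_buckets()

# ...and the client processes the response (step 6).
for bucket in response["Buckets"]:
    print(bucket["Name"])
```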

 

For example, if you look at the ten-minute tutorial at http://aws.amazon.com/getting-started/tutorials/backup-files-to-amazon-s3/, you find that you must first create a bucket to store the files you want to upload to Amazon.

 

This additional step makes sense because you have to establish a location from which to retrieve the files later, and you don’t want your files mixed in with files from other people.

 

Even though many of the processes you perform with AWS require using an app (so that you have a user interface rather than code to work with), the underlying process is the same. The code provided in the app makes requests for you and then waits for a response.

 

In some cases, the app must determine the success or failure of an action on the server. You need to realize, however, that these actions take place in code and that the code uses a sequence of steps to accomplish the task you’ve asked it to perform.

 

Understanding process- and function-driven workflows

In creating apps to help manage underlying services, AWS also defines workflows. A workflow is an organized method of accomplishing tasks. For example, when you want to save a file to AWS using S3, you must first create a bucket to hold the file. Only after you create a bucket can you save a file to AWS.

 

In addition, you can’t retrieve a file from the bucket until you first save a file there, which makes sense because you can’t grab a file out of thin air. In short, a workflow defines a procedure for working with software, and the concept has been around for a long time.

 

Workflows can consist of additional workflows. In addition, workflows manage the interaction between users and underlying services. A process is the aggregation of services managed by workflows into a cohesive whole.

 

The workflows may perform generic tasks, but processes tend to be specific and help users accomplish particular goals. A process-driven workflow is proactive and attempts to circumvent potential problems by

  • Spotting failure patterns and acting on them
  • Looking for trends that tend to lead to failures
  • Locating and extinguishing potential threats

 

In looking through the tutorials at http://aws.amazon.com/getting-started/tutorials/, you find that they all involve using some type of user interface. The user interface provides the workflow used to manage the underlying services.

 

Each major tutorial step is a workflow that performs a specific task, such as creating a bucket. When you combine these individual workflows into an aggregate whole, the process can help a user perform tasks such as moving files between the cloud and the user’s system.

 

Creating a cloud file system is an example of a process-driven workflow: The workflow exists to make the process viable. Workflows can become quite complex in large-scale operations, but viewing them helps you understand AWS better.

 

You can find a more detailed discussion of workflows and processes at https://msdn.microsoft.com/library/bb833024.aspx.

 

A function is the reactive use of services managed by workflows to address specific problems in real time. Even though it would be nice if process-driven workflows worked all the time, the reality is that even with 99.999 percent reliability, the process will fail at some point, and a function-driven workflow must be in place to address that failure.

 

Although process-driven workflows focus on flexible completion of tasks, function-driven workflows focus on procedurally attenuating the effect of a failure. In short, function-driven workflows address needs.

 

The AWS services and workflows also deal with this issue through the user interface, such as by manually restoring a backup to mitigate a system failure.

 

Cloud service models – IaaS, PaaS, and SaaS

There are three cloud-based service models: IaaS, PaaS, and SaaS. The main features of each of these are listed here:

 

Infrastructure as a Service (IaaS) provides users the capability to provision processing, storage, and network resources on demand. The customers deploy and run their own applications on these resources.

 

Using this service model is closest to the traditional on-premises and virtual server provisioning models (typically offered by data center outsourcers). The onus of administering these resources rests largely with the customer.

 

In Platform as a Service (PaaS), the service provider makes certain core components, such as databases, queues, workflow engines, e-mail, and so on, available as services to the customer.

 

The customer then leverages these components for building their own applications. The service provider ensures high service levels and is responsible for scalability, high-availability, and so on for these components.

 

This allows customers to focus a lot more on their application's functionality. However, this model also leads to application-level dependency on the providers' services.

 

In the Software as a Service (SaaS) model, third-party providers typically deliver end-user applications to their customers on a subscription basis.

The customers might have some administrative capability at the application level, for example, to create and manage their users. Such applications also provide some degree of customizability.

 

For example, customers can use their own corporate logos, colors, and more. Applications that have a very wide user base most often operate in a self-service model.

 

In contrast, for more specialized applications, the provider provisions the application for the customer.

The provider also hands over certain application administrative tasks to the customer's application administrator (in most cases, this is limited to creating new users, managing passwords, and so on through well-defined application interfaces). 

 

From an infrastructure perspective, the customer does not manage or control the underlying cloud infrastructure in all three service models. The following diagram illustrates who is responsible for managing the various components of a typical user application across IaaS, PaaS, and SaaS cloud service models.

 

The column labeled user application represents the main components of a user application stack, while the following columns depict the varying levels of management responsibilities in each of the three service models. The shaded boxes are managed by the service provider, while the unshaded boxes are managed by the user.

 

The level of control over operating systems, storage, applications, and certain network components (for example, load balancers) is the highest in the IaaS model, while the least (or none) in the SaaS model. 

 

We would like to conclude our introduction to cloud computing by getting you started on AWS, right away. The next two sections will help you set up your AWS account and familiarize you with the AWS management console.

 

Discovering IaaS

Even though this blog frequently refers to virtual environments and services that you can’t physically see, these elements all exist as part of a real computer environment that Amazon hosts on your behalf.

 

You need to understand how these elements work to some extent because they have a physical presence and impact on your personal or business needs. Three technologies enable anyone to create a virtual computer center using AWS:

 

IaaS: A form of cloud computing that provides virtualized computing resources. You essentially use IaaS to replace physical resources, such as servers, with virtual resources hosted and managed by Amazon.

 

SaaS: A software distribution service that lets you use applications without actually having the applications installed locally. Another term used to describe this service is software on demand. The host, Amazon, maintains the software, provides the required licenses, and does all the other work needed to make the software available.

 

PaaS: A platform provides a complete solution for running software in an integrated manner on a particular piece of hardware. For example, Windows is a particular kind of platform. The virtual platform provided by PaaS allows a customer to develop, run, and manage applications of all sorts.

 

The following sections provide an extended discussion of these three technologies and help you understand how they interact with each other. The point of these sections is that each element performs a different task, yet you need all three to create a complete solution.

 

Defining IaaS

The simplest way to view IaaS is as a means of providing access to virtualized computer resources over an Internet connection. IaaS acts as one of three methods of sharing resources over the Internet, alongside SaaS and PaaS.

 

AWS supports IaaS by providing access to virtualized hardware, software, servers, storage, and other infrastructure components.

 

In short, you can use IaaS to replace every physical element in your computing setup except those required to establish and maintain Internet connectivity and those required to provide nonvirtualized services (such as printing).
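
As a concrete illustration of the idea, the short sketch below provisions a single virtual server with the AWS SDK for Python (boto3). It is only a sketch: the AMI ID, key pair name, and region are placeholders, and a real setup would also specify networking and security-group details.

```python
# A minimal sketch of provisioning a virtual server (IaaS) with boto3.
# The AMI ID, key pair, and region are placeholders, not real values.
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

result = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI
    InstanceType="t2.micro",          # free-tier-eligible instance size
    KeyName="my-key-pair",            # placeholder key pair
    MinCount=1,
    MaxCount=1,
)
print(result["Instances"][0]["InstanceId"])
```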

 

The advantages of IaaS are many, but here are the ones that most people consider essential:

 

  • The host handles tasks such as system maintenance, backup, and resiliency planning.
  • A client can gain immediate access to additional resources when needed and then doesn’t need to worry about getting rid of them when the need has ended.

 

  • Detailed administrative tasks are handled by the host, but the client can manage overall administrative tasks, such as deciding how much capacity to use for a particular task.
  • Users have access to desktop virtualization, which means that their desktop appears on whatever device they happen to use at a given moment.

 

  • The use of policy-based services ensures that users must still adhere to company requirements when using computer resources.
  • All required updates (software and hardware) occur automatically and without any interaction required by the client.

 

  • Keep in mind that there is no free lunch. AWS and other IaaS providers are interested in making a profit. They do so by investing in huge quantities of hardware, software, and management personnel to oversee it all. The benefits of scale help create profit, and many businesses simply can’t create setups they require for less money.

 

However, you must consider the definite disadvantages of IaaS as well:

  • Billing can become complex because some services are billed at different rates and within different time frames. In addition, billing can include resource usage.

 

  • The client must ensure that the amount on the bill actually matches real-world usage; paying too much for services that the client didn’t actually use can easily happen (see the sketch after this list).

 

  • Systems management monitoring becomes more difficult. The client loses control over the precise manner in which activities occur.
  • A lag often occurs between the time a change in service is needed and the host provides it, so the client can find that even though services are more flexible, they aren’t as responsive.

 

  • Host downtime can affect a large group of people and prove difficult to fix, which means that a particular client may experience downtime at the worst possible time without any means to resolve it.

 

  • Building and testing custom applications can become more difficult. Many experts recommend using in-house equipment for application development needs to ensure that the environment is both protected and responsive.
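
As one way of keeping an eye on the billing concern above, the following rough sketch pulls a month of charges broken down by service through the Cost Explorer API (the boto3 "ce" client). The dates are examples only, Cost Explorer must be enabled on the account, and the API itself carries a small per-request fee.

```python
# A rough sketch of checking billed usage per service with Cost Explorer.
# Dates are examples; Cost Explorer must be enabled on the account.
import boto3

ce = boto3.client("ce")

report = ce.get_cost_and_usage(
    TimePeriod={"Start": "2019-01-01", "End": "2019-02-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)

for group in report["ResultsByTime"][0]["Groups"]:
    service = group["Keys"][0]
    amount = group["Metrics"]["UnblendedCost"]["Amount"]
    print(f"{service}: {amount}")
```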

 

Comparing IaaS to SaaS

 IaaS to SaaS

SaaS is all about cloud-based applications. Products like online email and office suites are examples of cloud-based applications. A client typically accesses the application using a local application, such as a browser.

 

The browser runs on local hardware, but the application runs on the host hardware. What a client sees is the application running in the browser as if it is working locally.

 

In most cases, the application runs within a browser without any alteration to the local system. However, some applications do require the addition of plug-ins. The difference between IaaS and SaaS is the level of service.

 

When working with IaaS, a client typically requires detailed support that spans entire solutions. A SaaS solution may include only the application. However, it can also include the following:

  • Application runtimes
  • Data access
  • Middleware
  • Operating system support
  • Virtualization
  • Server access
  • Data storage
  • Networking

 

SaaS typically keeps the host completely in control and doesn’t offer any sort of monitoring.

 

Even though the host keeps the application updated and ensures data security, the client company administrators typically can’t access SaaS solutions in any meaningful way (SaaS offers application usage, but not necessarily application configuration, and is therefore not as flexible as other alternatives).

 

In addition, the client company typically accepts the application as is, without any modifications or customizations. Using client-developed applications is out of the question in this scenario.

 

Comparing IaaS to PaaS

PaaS is more of a development solution than a production environment solution. A development team typically uses PaaS to create custom solutions or modify existing solutions.

 

The development staff has full control over the application and can perform all development-related tasks, such as debugging and testing. As with the SaaS solution, the host normally maintains control over

  • Middleware
  • Operating system support
  • Virtualization
  • Server access
  • Data storage
  • Networking

 

In this case, however, the development staff can access the middleware to enhance application development without reinventing the wheel.

 

Writing application code to make the application cloud-ready isn’t necessary because the middleware already contains these features. The development team gains access to cloud-based application features that include the following:

  • Scalability
  • High availability
  • Multitenancy
  • SaaS enablement

 

Administrators can also perform monitoring and management tasks within limits when working with a PaaS (depending on the contract the client has with the host).

 

However, realize that PaaS is oriented toward development needs, so the developer takes precedence when it comes to performing some tasks that an administrator might normally perform. In addition, PaaS relates to development, not production setups, so the host may take care of all administration tasks locally.

 

Determining Why You Should Use AWS

Even though AWS has a lot to offer, you still need to consider how it answers your specific needs.

 

This consideration goes beyond simply determining whether you really want to move to cloud-based services; it also means taking into account other offerings that might serve your needs just as well (if not better). Even though this blog is about AWS, you should compare AWS with other cloud services.

 

You may choose to use AWS as part of your solution rather than as the only solution. Of course, this means knowing the areas in which AWS excels. The following sections address both of these possibilities: using other cloud services instead of AWS, or in addition to it.

 

Comparing AWS to other cloud services

You have many ways to compare cloud services. One of the ways in which companies commonly look at services is by the market share they have. A large market share tends to ensure that the cloud service will be around for a long time and that many people find its services both useful and functional.

 

A recent InfoWorld article (http://www.infoworld.com/article/3065842/cloud-computing/beyond-aws-the-clouds-next-stage.html) points out that AWS currently corners 70 to 80 percent of the cloud market.

 

In addition, AWS revenues keep increasing, which lets Amazon continue adding new features while maintaining existing features at peak efficiency.

 

Large market share and capital to invest don’t necessarily add up to a cloud service­ that fulfills your needs. You also need to know that the host can provide the products you need in a form that you can use.

 

The AWS product list appears at http://aws.amazon.com/products/. It includes all the major IaaS, SaaS, and PaaS categories. However, you should compare these products to those of the major AWS competitors, such as Google Cloud Platform, Microsoft Azure, and Joyent.

 

Of the competitors listed here, Google Cloud Platform comes closest to offering the same feature set found in AWS. However, in looking at the Google offerings, you should note the prominence of machine learning services that aren’t found in AWS.

 

On the other hand, AWS has more to offer in the way of the Internet of Things (IoT), applications, and mobile services. 

 

Each of the vendors offering these services is different. For example, Joyent offers a simple setup that may appeal more strongly to an SMB that has only a few needs to address and no desire to become involved in a complex service.

 

Microsoft, on the other hand, has strong SQL database-management support as well as the connection with the Windows platform that businesses may want to maintain.

 

The point is that you must look at each of the vendors to determine who can best meet your needs (although, as previously stated, most people are voting with their dollars on AWS).

 

Defining target areas where AWS works best

In looking at the services that AWS provides, you can see that the emphasis is on enterprise productivity. For example, Google Cloud Platform offers four enhanced machine learning services that you could use for analysis purposes, but AWS offers only one.

 

However, Google Cloud Platform can’t match AWS when it comes to mobile services, an area that users most definitely want covered for accessing applications.

 

Unless your business is heavily involved in analysis tasks, the offerings that AWS provides are significantly better in many ways. Here are the service categories that AWS offers:

  • Compute
  • Storage and content delivery
  • Database
  • Networking
  • Analytics
  • Enterprise applications
  • Mobile services
  • IoT
  • Developer tools
  • Management tools
  • Security and identity
  • Application services

 

Understanding the AWS Certifications

A certification doesn’t make you an expert. However, it does provide a quantified description of your minimum level of expertise — a textbook look of what you know, but not an assessment of real-world knowledge.

 

In other words, you get a certification to prove that you have a given level of expertise, and most employers will probably assume that you possess expertise beyond what the certification tests.

 

The pursuit of certification can also help you better understand areas in which your current education is weak. Going through the learning and testing process can help you become a better administrator.

 

With the need to first achieve proficiency and then demonstrate it in mind, the following sections discuss the various AWS certifications so that you can get a better idea of where to spend your time when getting one.

 

Getting a certification is generally useful only when you want to apply for a new job or advance in your current job. After all, you likely know your own skills well enough to determine your level of proficiency to some degree without a certification.

 

Filling out your education and then demonstrating what you know to others for specific personal gain are the reasons to get a certification. Some people miss the point and discover later that they’ve spent a lot of money and time getting something they really didn’t need in the first place.

 

Gaining an overview of the certifications

AWS currently provides a number of certifications, which you can see at https://aws.amazon.com/certification/. You can expect Amazon to add more as AWS continues to expand. The following list provides a quick overview of the levels of certifications:

 

AWS Certified Solutions Architect Associate:

Tests the ability of a developer to perform basic AWS design and development tasks. Before you can even contemplate taking this exam, you need to know how to program and have experience designing applications on AWS.

 

A number of sources also recommend this certification for administrators because many of the administration tasks build on the knowledge you get here.

 

AWS Certified Solutions Architect Professional:

Tests the ability of a developer to perform the next level of development tasks on AWS, such as migrating complex, multitier applications to AWS.

 

The exam still focuses on development tasks but depends on the developer’s having already passed the AWS Certified Solutions Architect – Associate exam and mastering new skills. (The resources specify a minimum of two years of hands-on AWS programming.)

 

AWS Certified Developer Associate: 

Determines whether the developer can perform specific levels of application development using AWS. For example, you need to know which of the services to use to add specific features to an application.

 

Rather than have you actually use AWS to host the application, this exam focuses more on using AWS in conjunction with existing applications.

 

AWS Certified SysOps Administrator Associate:

Determines whether an administrator has the skills required to deploy and manage applications on an AWS setup. In addition, the administrator must show proficiency in operating various AWS services and in determining which service to use to meet a specific need.

 

AWS Certified DevOps Engineer Professional:

Evaluates the ability of the test taker to perform DevOps (that is, create an interface between developers and other IT professionals). This means having some level of skill in both administration and development.

 

In addition, the candidate must have knowledge of processes that enable smooth design, development, deployment, management, and operation of applications. 

 

If you find that potential employers really do want you to obtain certifications to prove your skill level, you may find that obtaining just an AWS-specific certification may not be enough to get that six-figure income. Cloud administrators typically need to demonstrate proficiency with more than one service.

 

Locating certification resources

You can find all sorts of interesting aids online for getting your certification. However, the best place to start is directly on the Amazon website. Unfortunately, the information you find isn’t the best organized at times.

 

Start by ensuring that you meet the requirements in the Candidate Overview section. Until you meet those requirements, it isn’t particularly useful to move forward (unless you want to end up with a paper certification — one that doesn’t actually mean anything). 

 

After you have fulfilled the minimum requirements, download the Exam Guide. The guide tells you that you need to be proficient in a number of areas in order to pass, which shouldn’t surprise you.

 

AWS wants to ensure that you actually know the material. Fortunately, you can also find online sources to help you make sense of the Exam Guide.

 

For example, there is an excellent video on the requirements for the AWS Certified SysOps Administrator – Associate exam at https://www.youtube.com/watch?v=JCkD8lpadj8. Watching the video and going through the Exam Guide can help you get a better idea of what you need to do.

 

Setting up your AWS account

You will need to create an account on Amazon before you can use Amazon Web Services (AWS). Amazon provides a 12-month, limited but fully functional free account that can be used to learn the different components of AWS.

 

With this account, you get access to the services provided by AWS, but there are some limitations based on resources consumed. The list of free services is available at http://aws.amazon.com/free.

 

We are assuming that you do not have a pre-existing AWS account with Amazon (if you do, please feel free to skip this section). Perform the following steps:

 

Point your browser to http://aws.amazon.com/ and click on Create a Free Account.

This starts the process of creating a brand-new AWS account. You can sign in using your existing Amazon retail account, but you will still have to go through the process of creating an AWS account; the two accounts are different for accounting purposes, even though they share the same login.

 

After creating a new account or using your existing retail Amazon account, select the I am a returning user and my password is: option and click Sign in using our secure server. A set of intuitive screens will guide you through the rest of the AWS account creation process; these include:

 

Contact Information: Amazon also uses this information for billing and invoicing. The Full Name field is also used by the AWS management console to identify your account.

Payment Information: When you create an AWS account and sign up for services, you are required to enter payment information. Amazon executes a minimal transaction against the card on file to confirm that it is valid and not reported lost or stolen.

 

This is not an actual charge; it merely places a small amount on hold on the card, which will eventually drop off. The amount depends on the country of origin.

 

Identity Verification: Amazon calls you back via an automated system to verify your telephone number.

At this stage, you have successfully created an AWS account, and you are ready to start using the services offered by AWS.

 

The AWS management console

The AWS management console is the central location from which you can access all the Amazon services. The management console has links to the following:

 

Amazon Web Services: This is a dashboard view that lists all the AWS services currently available in a specific Amazon region. Clicking on any one of these launches the dashboard for the selected service.

 

Shortcuts for Amazon Web Services: On the console management screen, you can create shortcuts to frequently accessed services via the Edit option.

 

Account-related information: This allows you to access your account-related data, including the security credentials your application needs to access AWS resources. The Billing & Cost Management option gives you real-time information on your current month's billing; this helps in managing costs.

 

Top 50 Tips for Amazon Free Tier

One of the purposes of this blog is to help you discover a lot more about Amazon Web Services (AWS) through experimentation. Of course, Amazon would just love to have you buy these services, but a free option, which is the focus of this blog, is also available. The issue is one of figuring out just what Amazon means by free.

 

Going through the various procedures is the best way to build an understanding of AWS. Yes, some people can get a feeling for how things work just by reading, but doing things hands on really is better.

 

The blog ends by having you perform a simple task using AWS, just to get a feel for how it works. Don’t worry: You really can’t mess up the account.

 

Limits of AWS Free Services

Amazon does provide the means for using many of its cloud services for free. In fact, you can see some of these services at http://aws.amazon.com/free/. However, as you look through the list of services, you see that some expire and others don’t. In addition, some have limits and others don’t.

 

Those that do have limits don’t have the same limits, so you need to watch usage carefully. It’s really quite confusing. The following sections help clarify what Amazon actually means by saying some services are free.

 

Expiring services versus nonexpiring AWS services

Many of the AWS services you obtain through the free tier have expiration dates, and you need to consider this limitation when evaluating and possibly using the service to perform useful work.

 

Notice that you must begin paying for the service 12 months after you begin using it. In some cases, the product itself doesn’t have an expiration date, but the service on which it runs does.

 

For example, when viewing the terms for using the free software, the software itself is indeed free. However, in order to run the software, you must have the required service, which does come with an expiration date.

 

You also have access to some products that are both free and have no expiration date. These nonexpiring offers still have limitations, but you don’t have to worry about using those products within the limits for however long you want (or until Amazon changes the terms).

 

Knowing the terms under which you use a service is essential. The free period for services with an expiration date goes all too quickly, and you may suddenly find yourself paying for something that you thought remained free for a longer time frame.

 

Given that Amazon can change the terms of usage at any time, you need to keep checking the terms of service for the services that you use. A service that lacks an expiration date today may have an expiration date tomorrow.

 

Considering the usage limits

Note that all these products have some sort of usage limit attached to them — even the free software — because of the software’s reliance on an underlying service. (Some software relies on more than one service, so you must also consider this need.)

 

For example, you can use Amazon Elastic Compute Cloud (EC2) for 750 hours per month as either a Linux or Windows setup. A 31-day month contains 744 hours, so you really don’t have much leeway if you want to use the EC2 service continuously.

 

The description then provides you with an example of usage. Amazon bases the usage terms on instances. Consequently, you have access to a single Linux or single Windows setup.

 

If you wanted to work with both Linux and Windows, you would need two instances and could use them for only 15 days and 15 hours each month. In short, you need to exercise care in how you set up and configure the services to ensure that you don’t exceed the usage limits. 
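
If you want to double-check that arithmetic yourself, here it is in a few lines of Python:

```python
# A quick check of the free-tier arithmetic described above.
free_tier_hours = 750            # EC2 free-tier hours per month
hours_in_31_day_month = 31 * 24  # 744 hours

# One instance can run all month with a little to spare.
print(free_tier_hours - hours_in_31_day_month)   # 6 hours of headroom

# Two instances share the allowance: 375 hours each,
# which is 15 days and 15 hours per instance.
per_instance = free_tier_hours / 2
print(divmod(per_instance, 24))                  # (15.0, 15.0)
```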

 

The free, nonexpiring services also have limits. For example, when working with Amazon DynamoDB, you have access to 25GB of storage, 25 units of read capacity, and 25 units of write capacity.

 

Theoretically, this is enough capacity to handle 200 million requests each month. However, whether you can actually use all that capacity depends on the size of the requests and how you interact with the service.

 

You could easily run out of storage capacity long before you run out of request capacity when working with larger files, such as graphics. Again, you need to watch all the limits carefully or you could find yourself paying for a service that you thought was free.
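
As a sketch of staying inside those limits, the following boto3 call creates a DynamoDB table with modest provisioned throughput; the table name, key, and region are placeholders, not values from this tutorial.

```python
# A minimal sketch of creating a DynamoDB table whose provisioned throughput
# stays well inside the nonexpiring free-tier allowance described above.
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-west-2")

dynamodb.create_table(
    TableName="ExampleTable",
    AttributeDefinitions=[{"AttributeName": "Id", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "Id", "KeyType": "HASH"}],
    ProvisionedThroughput={
        "ReadCapacityUnits": 5,   # under the 25-unit free-tier limit
        "WriteCapacityUnits": 5,
    },
)
```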

 

Considering the Hardware Requirements

No matter how many services AWS offers, you still require some amount of hardware to use the services. The amount of hardware you require when working with services in the cloud is minimal because the AWS hardware does all the heavy lifting.

 

When working with services locally, you need additional hardware because AWS is no longer doing the heavy lifting for you.

 

Therefore, you should consider different hardware requirements depending on where you host the AWS service. The following sections help you obtain additional information about working with both cloud and local services.

 

Hosting the services in the cloud

Hidden in the AWS documentation is all sorts of useful information about various services.

For example, AWS Storage Gateway (http://aws.amazon.com/documentation/storage-gateway/) will connect an on-premises software appliance (an application combined with just enough operating system capability to run on hardware or on a virtual machine) with cloud-based storage.

In other words, you use the gateway to connect your application to the data storage it requires.

 

It might seem as if running the gateway in the cloud would be a good idea because you wouldn’t need to invest in additional hardware. However, an EC2 instance allows you to run only gateway-cached volumes and gateway-VTLs (you can’t run gateway-stored volumes).

 

You don’t need to know what these terms mean, but you do need to understand that the cloud presents limits that you must consider during any planning stage. After you make certain that you can run your intended configuration, you can begin to consider the advantages and disadvantages of working in the cloud.

 

For example, when hosting the service in the cloud, you get automatic scaling as needed, and Amazon performs many of the administrative tasks for you. Blog 1 discusses many of the advantages of the cloud for you. However, from a realistic perspective, you must offset these advantages with disadvantages, such as:

  • The potential for lower application speed
  • The need to maintain a reliable Internet connection
  • Loss of flexibility
  • Vendors going out of business

 

Even though basic hardware needs become less expensive, you do need to consider additional expenses in the form of redundancies. Most organizations find that the hardware costs of moving to the cloud are substantially less than maintaining a full IT department, which is why they make the move.

 

However, you must make the move with the understanding that you have other matters to consider when you do.

 

Hosting the services locally

When hosting services locally, you need to provide all the required infrastructure, which can get expensive. AWS does provide guidance on the minimum requirements for hosting a service locally.

 

A good rule of thumb when hosting services locally is to view any vendor-supplied requirements as minimums. If you don’t plan to load the service heavily, these minimums usually work.

 

However, when you click the Optimizing Gateway Performance link, the first suggestion you see is adding resources to your gateway. Planning for too much capacity is better than planning for too little, but getting the configuration as close as possible to what you need will always help financially.

 

Not all the services will work locally, but you may be surprised to find that many do. The issue is one of defining precisely how you plan to use a given service and the trade-offs that you’re willing to make.

 

For example, when hosting a service locally, you may find it hard to provide the same level of connectivity that you could provide to third parties when hosting the same service in the cloud.

 

Considering the Network Requirements

To use the AWS services, you need a network connection. In some cases, you need more than one. You not only need an Internet connection for the AWS user interface, but the services may require dedicated connections as well, and these connections can become part of your business network.

 

Because of this close relationship, creating the network configuration carefully is essential.

 

Otherwise, you may find that the AWS network connection conflicts with the configuration used for your business (a problem that occurs more often than you might think). 

 

Interestingly enough, when you host certain services, such as DynamoDB, locally, you may not need to spend much time considering the network requirements.

 

The reason is that you’re hosting the service locally, and the AWS hardware doesn’t come into play. However, the local hosting scenario is for development purposes in most cases, so eventually, you need to create a network connection to the online services.

 

The following sections discuss network requirements for AWS. The amount of configuration required depends on the services you use, how you use them, how you host them, and where your own business services come into play. The most important thing to consider is the need to plan carefully before you perform any setups.

 

Designing for connectivity

 

Many of the services that you use with AWS require some sort of connectivity solution when you host them in the cloud. A common way to create the required connectivity is to use Amazon Virtual Private Cloud (Amazon VPC) (https://aws.amazon.com/vpc/). For example, you can make Amazon VPC part of the EC2 setup.

 

You use Amazon VPC to create the connection to your EC2 configuration. The blog discusses the configuration requirements in more detail, but be aware that you do need connectivity to access some of the services that Amazon offers. Another method of creating the connection is to rely on Direct Connect (https://aws.amazon.com/directconnect/).

 

In this case, you create a direct connection between AWS services and your network. This means that you can access the AWS services as just another resource on your network, and the services actually become invisible to end users.

 

This implementation relies on the 802.1q VLAN standard to make the required connection. (You can find an 802.1q VLAN tutorial at http://www.microhowto.info/tutorials/802.1q.html.) When configured correctly, you can create a private IP interface for local network resources and a separate public IP interface for AWS services.

 

Amazon offerings are just the tip of the connectivity iceberg. For example, you could rely on a third-party vendor, such as AT&T, to help you make the connection. The AT&T NetBond service (https://www.business.att.com/enterprise/Family/cloud/network-cloud/)

 

lets you connect your Virtual Private Network (VPN) to multiple cloud providers, so you can use a single connection to address all your connectivity needs.

 

In this case, instead of just connecting to AWS using its service, you can connect with the following cloud services using a single connection, which makes managing the connections infinitely easier (assuming that you use more than one cloud provider).

 

  • Amazon Web Services
  • Blue Jeans Network
  • Box
  • Cisco WebEx
  • CSC Agility Platform
  • IBM Managed Cloud Service
  • IBM SoftLayer
  • Microsoft Azure and Office 365
  • Salesforce.com
  • Sungard Availability Services
  • VMware vCloud Air

 

The third-party options may seem complex and initially cost quite a bit more than the Amazon offerings, but they have distinct advantages as well. For example, according to InformationWeek, the AT&T NetBond service lets larger organizations use Multi-Protocol Label Switching (MPLS), which the organization may have already installed.

 

However, the big advantage is that this approach lets the organization skip the public Internet in favor of a private connection that can significantly improve network performance.

 

For example, using a private connection can reduce network latency (the time it takes for a packet of data to get from one designated point to another) by 50 percent. After this kind of solution is in place, a larger organization can save as much as 60 percent on its monthly bill, so the savings eventually pay back the larger initial investment.

 

Balancing cloud and internal needs

The connectivity solution you choose must reflect a balance between cloud and internal needs. You don’t necessarily want to move right into a Direct Connect solution when your only goal is to experiment with AWS to determine whether it can meet certain organizational goals.

 

Likewise, a third-party solution, such as AT&T NetBond, is the better solution when you’ve already made a commitment to AWS but also plan to support a number of other cloud provider solutions.

 

Choosing the right level of connectivity is essential to ensuring that you get the best performance at the right price, but with the least outlay of initial capital.

 

To help you keep costs low and reduce the potential for serious problems with your own network, the exercises in the blog assume that you’re using the Amazon VPC solution. It presents the smallest investment and lowest risk. However, these advantages come at some cost in convenience, speed, and, potentially, price.

 

Specifying a subnet

It’s important to consider precisely how you plan to configure the service before you choose network settings. Using the default AWS subnet may cause conflicts with the local network when you host the service locally.

 

However, choosing the wrong subnet can create conflicts as well. Make certain that you choose a subnet that actually works with your local networking setup. The Amazon offerings usually provide more than one scenario for creating a subnet. 

 

For example, when using Amazon VPC, you have the options described at http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Scenarios.html. Scenario 1: VPC with a Single Public Subnet works best for a single-tier, public-facing web application. You can also use it for development purposes.

 

Each of the scenarios provides you with helpful information about the subnet configuration. Using the information found with each scenario helps you make a better decision about which configuration to use and decide how to configure it to meet your specific needs (potentially avoiding those conflicts that will cause problems later).
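
If you'd rather script Scenario 1 than click through the wizard, the following boto3 sketch shows its general shape. The CIDR ranges are examples only; pick values that don't conflict with your local network.

```python
# A rough sketch of Scenario 1: a VPC with a single public subnet.
# CIDR ranges and region are examples, not recommendations.
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]
subnet_id = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.0.0/24")["Subnet"]["SubnetId"]

# Make the subnet public: attach an Internet gateway and route 0.0.0.0/0 to it.
igw_id = ec2.create_internet_gateway()["InternetGateway"]["InternetGatewayId"]
ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId=vpc_id)

rt_id = ec2.create_route_table(VpcId=vpc_id)["RouteTable"]["RouteTableId"]
ec2.create_route(RouteTableId=rt_id, DestinationCidrBlock="0.0.0.0/0", GatewayId=igw_id)
ec2.associate_route_table(RouteTableId=rt_id, SubnetId=subnet_id)
```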

 

Getting AWS Signed Up

Before you can really do anything other than plan, you need an account. Discovering the wonders of AWS is a hands-on activity, so you really do want to work with it online. Consequently, this blog assumes that you’ve gone through the free sign-up process described in the following steps:

1. Navigate your browser to http://aws.amazon.com/. The main Amazon Web Services page appears.

 

2. Click Create a Free Account.

Unless you already signed into Amazon, you see a Sign In or Create an AWS Account dialog box. If you already have an Amazon account and want that account associated with AWS, you can sign in using your Amazon account. Otherwise, you need to create a new account.

 

3. Sign into an account or create a new one as required.

The Contact Information page appears. Notice that different pages exist for the company and personal accounts.

 

4.  Supply the required company or personal contact information. Read and accept the customer agreement.

 

5. Click Create Account and Continue when you complete the form.

You see the Payment Information page. Be aware that Amazon will bill you for any usage in excess of the free tier level. You can click View Full Offer Details if you have any questions about the level of support provided before you enter your credit or debit card information.

 

6. Provide the required credit or debit card information, supply the address information needed, and then click Continue.

You see the Identity Verification page. Amazon performs an automated call to verify your identity. You see a PIN provided onscreen. During the call, you say or type this PIN into your telephone keypad. The screen automatically changes as you perform each step of the identification process.

 

7. Click Continue to Select Your Support Plan.

You see a listing of support plans. Only the Basic plan is included as part of the free tier. If you want to obtain additional support, you must pay a monthly fee for it. This is an example of one of the potential charges that you might pay for the free tier service. You have the following support-plan options:

 

Basic: Free support that Amazon offers as part of the free tier support. Amazon doesn’t offer any support through this option. You must instead rely on community support, which usually works fine for experimentation.

 

Developer: Support that comes at $49/month at the time of this writing. A single developer (or another organizational representative) can contact the Support Center and expect a response within 12 to 24 hours.

 

However, if you’re serious about developing an application and also anticipate using third-party products, you really need to consider the Business level.

 

Business: Support that comes at $100/month at the time of this writing. A business user may contact the Support Center by phone and expect a one-hour response to urgent support problems as well as obtain help with third-party products.

 

Enterprise: Support that comes at $15,000/month. This is the level of support provided for organizations that use AWS for mission-critical applications. The response time is only 15 minutes, and Amazon is willing to provide all sorts of technical help. Of course, the price is a tad on the steep side.

 

8. Choose a support plan and click Continue.

Normally, you see a welcome page. (However, you might also see a message saying that Amazon is setting up your account and will send you emails when your account is ready.

 

Wait for the emails to arrive if you see these messages.) At this point, you can sign into the console and try a few tasks. The 10-minute tutorials are helpful in getting you started. The next section of the blog gives you help getting started as well.

 

Performing a Few Simple Tasks

Now that you have a free account to use, you can give something a try. In this case, you create an online storage area, move a file to it, copy the file back to your hard drive, and then delete the file in the online storage.

 

Moving data between local drives and the AWS cloud is one of the most common activities you perform, so this exercise is important, even if it seems a bit simplistic.

 

The following steps help you through the process of working with files in the cloud.

1. Click Sign in to the Console or choose My Account ➪ AWS Management Console.

2. Sign in to your account.

You see a list of Amazon Web Services. Remember that not all these services are free. Only the services in the free tier are free to use.

 

3. Click S3 in the Storage & Content Delivery group (you may need to select S3 from the Services drop-down at the top of the page).

You see an introduction to the Simple Storage Service (S3) page. This page explains a little about S3. Make sure to read the text before you proceed. To use S3, you must first create a bucket. The bucket will hold the data that you transfer to AWS. In this case, you use the bucket to hold a file.

 

4. Click Create Bucket.

You see the Create a Bucket dialog box. The Bucket Name field is simply the name that you want to give to your bucket. Choose a name that seems appropriate for the bucket’s use.

(See the restrictions for naming buckets at http://docs.aws.amazon.com/AmazonS3/latest/dev/BucketRestrictions.html).

 

The Region field tells where your bucket is physically stored. A local bucket will respond faster, but a bucket somewhere else in the world may provide additional resilience because it won’t be as susceptible to local events, such as storms.

 

5. Type a bucket name (the example uses johnm.test-bucket) and select a region (the example uses Oregon); then click Create.

You see a new page with a list of all your buckets. You can configure each bucket differently using the properties shown on the right side of the screen. For now, use the default properties to work with a file.

 

6. Click the bucket entry you just created. You see a console for that bucket that tells you the bucket is empty.

7. Click Upload. You see an Upload – Select Files and Folders dialog box.

8. Click Add Files. You see a File Upload dialog box that will conform to the standard used for your platform.

9. Select the file you want to upload and click Open. The Upload – Select Files and Folders dialog box now contains a list of the files you plan to upload.

10. Click Start Upload.

11. Check the box next to the file you uploaded.

Your browser displays a dialog box asking what to do with the file. Depending on the browser’s capabilities, you can open the file for editing or simply download it to your system.

12. Click Cancel to close the dialog box without doing anything with the file.

13. Choose Actions ➪ Delete.

You see a dialog box asking whether you want to delete the file.

14. Click OK. S3 deletes the file. Your bucket is now empty again.

Congratulations! You have now used S3 to perform the first set of tasks for the blog.

15. Choose <Your Name> ➪ Sign Out.

AWS logs you out of the console. Logging out when you finish a session is always a good idea.
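
For comparison, the same bucket/upload/download/delete exercise can be scripted with boto3. This is only a sketch; the bucket name, region, and file names are placeholders, and bucket names must be globally unique.

```python
# The console workflow above, sketched programmatically.
# Bucket name, region, and file names are placeholders.
import boto3

s3 = boto3.client("s3", region_name="us-west-2")
bucket = "johnm.test-bucket"  # use your own globally unique name

s3.create_bucket(
    Bucket=bucket,
    CreateBucketConfiguration={"LocationConstraint": "us-west-2"},
)
s3.upload_file("report.txt", bucket, "report.txt")   # upload (steps 7-10)
s3.download_file(bucket, "report.txt", "copy.txt")   # retrieve (step 11)
s3.delete_object(Bucket=bucket, Key="report.txt")    # delete (steps 13-14)
```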

 

What's New in AWS?

Elastic Compute Cloud

Elastic Compute Cloud (EC2) is by far one of the oldest running services in AWS, and yet it still continues to evolve and add new features as the years progress. Some of the notable feature improvements and additions are mentioned here:

 

Introduction of the t2.xlarge and t2.2xlarge instances:

The t2 workloads are a special type of workload, as they offer low-cost burstable compute that is ideal for running general-purpose applications that don't require the use of the CPU all the time, such as web servers, application servers, LOB applications, and development environments, to name a few.

The t2.xlarge and t2.2xlarge instance types provide 16 GB of memory with 4 vCPUs, and 32 GB of memory with 8 vCPUs, respectively.

 

Introduction of the I3 instance family: Although EC2 provides a comprehensive set of instance families, there was a growing demand for a specialized storage-optimized instance family that was ideal for running workloads such as relational or NoSQL databases, analytical workloads, data warehousing, Elasticsearch applications, and so on.

 

Enter I3 instances! I3 instances are run using non-volatile memory express (NVMe) based SSDs that are suited to provide extremely optimized high I/O operations. The maximum resource capacity provided is up to 64 vCPUs with 488 GB of memory, and 15.2 TB of locally attached SSD storage.

 

This is not an exhaustive list in any way. If you would like to know more about the changes brought about in AWS, check out https://aws.amazon.com/about-aws/whats-new/2016/.

 

Availability of FPGAs and GPUs

One of the key use cases for customers adopting the public cloud has been the availability of high-end processing units that are required to run HPC applications.

 

One such new instance type added last year was the F1 instance, which comes equipped with field programmable gate arrays (FPGAs) that you can program to create custom hardware accelerations for your applications. Another awesome feature to be added to the EC2 instance family was the introduction of the Elastic GPUs concept.

 

This allows you to easily provide graphics acceleration support to your applications at significantly lower costs but with greater performance levels.

 

Elastic GPUs are ideal if you need a small amount of GPU for graphics acceleration, or have applications that could benefit from some GPU, but also require high amounts of computing, memory, or storage.

 

Simple Storage Service

Similar to EC2, Simple Storage Service (S3) has had its own share of new features and support added to it. Some of these are explained here:

 

S3 Object Tagging: S3 Object Tagging is like any other tagging mechanism provided by AWS, used commonly for managing and controlling access to your S3 resources.

 

The tags are simple key-value pairs that you can use for creating and associating IAM policies for your S3 resources, to set up S3 lifecycle policies, and to manage transitions of objects between various storage classes.
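
A minimal sketch of attaching tags to an existing object looks like this (the bucket, key, and tag values are placeholders):

```python
# Tag an existing S3 object with simple key-value pairs.
import boto3

s3 = boto3.client("s3")

s3.put_object_tagging(
    Bucket="example-bucket",
    Key="reports/2019/summary.csv",
    Tagging={
        "TagSet": [
            {"Key": "project", "Value": "tutorials"},
            {"Key": "classification", "Value": "public"},
        ]
    },
)
```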

 

S3 Inventory: S3 Inventory was a special feature provided with the sole purpose of cataloging the various objects and providing that as a useable CSV file for further analysis and inventorying. Using S3 Inventory, you can now extract a list of all objects present in your bucket, along with its metadata, on a daily or weekly basis.

 

S3 Analytics: A lot of work and effort has been put into S3 so that it is not used as just another infinitely scalable store. S3 Analytics provides end users with a medium for analyzing storage access patterns and defining the right storage class based on these analytical results.

 

You can enable this feature by simply setting a storage class analysis policy on an object, a prefix, or the entire bucket.

 

Once enabled, the policy monitors the storage access patterns and provides daily visualizations of your storage usage in the AWS Management Console. You can even export these results to an S3 bucket for analyzing them using other business intelligence tools of your choice, such as Amazon QuickSight.
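
If you prefer to set this up programmatically rather than through the console, a storage class analysis configuration can be created with boto3 along these lines; the bucket names, configuration ID, and prefix are illustrative:

import boto3

s3 = boto3.client('s3')

# Analyze access patterns under the 'logs/' prefix and export daily results as CSV
s3.put_bucket_analytics_configuration(
    Bucket='my-example-bucket',
    Id='logs-access-analysis',
    AnalyticsConfiguration={
        'Id': 'logs-access-analysis',
        'Filter': {'Prefix': 'logs/'},
        'StorageClassAnalysis': {
            'DataExport': {
                'OutputSchemaVersion': 'V_1',
                'Destination': {
                    'S3BucketDestination': {
                        'Format': 'CSV',
                        'Bucket': 'arn:aws:s3:::my-analytics-results',  # destination bucket ARN
                        'Prefix': 'analysis/'
                    }
                }
            }
        }
    }
)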

 

S3 CloudWatch metrics:


It has been a long time coming, but it is finally here! You can now leverage 13 new CloudWatch metrics designed specifically to work with the objects in your S3 buckets.

 

You can receive one-minute CloudWatch metrics, set CloudWatch alarms, and access CloudWatch dashboards to view the real-time operations and performance of your S3 resources, such as total bytes downloaded, 4xx HTTP response counts, and so on.
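
For instance, assuming request metrics have already been enabled on the bucket with a metrics configuration (the filter ID EntireBucket below is an assumption), an alarm on the 4xx error count could be sketched with boto3 as follows; the alarm name and SNS topic are illustrative:

import boto3

cloudwatch = boto3.client('cloudwatch')

# Alarm when the bucket returns more than 100 4xx responses in a one-minute period
cloudwatch.put_metric_alarm(
    AlarmName='my-example-bucket-4xx-errors',
    Namespace='AWS/S3',
    MetricName='4xxErrors',
    Dimensions=[
        {'Name': 'BucketName', 'Value': 'my-example-bucket'},
        {'Name': 'FilterId', 'Value': 'EntireBucket'}   # ID of the bucket's request-metrics filter
    ],
    Statistic='Sum',
    Period=60,
    EvaluationPeriods=1,
    Threshold=100,
    ComparisonOperator='GreaterThanThreshold',
    AlarmActions=['arn:aws:sns:us-east-1:123456789012:ops-alerts']  # illustrative SNS topic
)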

 

Brand new dashboard: Although the dashboards and structures of the AWS Management Console change from time to time, it is the new S3 dashboard that I'm really fond of.

 

The object tagging and the storage analysis policy features are all now provided using the new S3 dashboard, along with other impressive and long-awaited features, such as searching for buckets using keywords and the ability to copy bucket properties from an existing bucket while creating new buckets, as depicted in the following screenshot:

 

Amazon S3 transfer acceleration: This feature allows you to move large workloads across geographies into S3 at much faster speeds. It leverages Amazon CloudFront's globally distributed edge locations in conjunction with S3 to significantly speed up data uploads, without any firewall rules to manage or upfront fees to pay.
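
Transfer acceleration is enabled per bucket, after which uploads are routed through the accelerated endpoint. A minimal boto3 sketch, with an illustrative bucket and file name, might look like this:

import boto3
from botocore.config import Config

s3 = boto3.client('s3')

# Enable transfer acceleration on the bucket
s3.put_bucket_accelerate_configuration(
    Bucket='my-example-bucket',
    AccelerateConfiguration={'Status': 'Enabled'}
)

# Create a client that routes uploads through the accelerated endpoint
s3_accelerated = boto3.client('s3', config=Config(s3={'use_accelerate_endpoint': True}))
s3_accelerated.upload_file('backup.tar.gz', 'my-example-bucket', 'backups/backup.tar.gz')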

 

Virtual Private Cloud


Similar to other services, Virtual Private Cloud (VPC) has seen quite a few functionalities added to it over the past years; a few important ones are highlighted here:

 

Support for IPv6:

With the exponential growth of the IT industry as well as the internet, it was only a matter of time before VPC started supporting IPv6 too. Today, IPv6 support is available across all AWS regions.

 

It even works with services such as EC2 and S3. Enabling IPv6 for your applications and instances is an extremely easy process. All you need to do is enable the IPv6 CIDR block option, as depicted in the VPC creation wizard:

Each IPv6-enabled VPC comes with its own /56 address prefix, whereas the individual subnets created in this VPC support a /64 CIDR block.
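
The same can be done with the SDK. Here is a rough boto3 sketch that requests an Amazon-provided /56 IPv6 block for a new VPC and carves out a /64 subnet; the IPv4 CIDR values are examples only:

import boto3

ec2 = boto3.client('ec2')

# Create a VPC with an Amazon-provided /56 IPv6 CIDR block alongside the IPv4 range
vpc = ec2.create_vpc(CidrBlock='10.0.0.0/16', AmazonProvidedIpv6CidrBlock=True)
vpc_id = vpc['Vpc']['VpcId']

# Wait for the VPC, then read the /56 prefix that was assigned
# (the association may briefly show as 'associating')
ec2.get_waiter('vpc_available').wait(VpcIds=[vpc_id])
vpc_info = ec2.describe_vpcs(VpcIds=[vpc_id])['Vpcs'][0]
ipv6_block = vpc_info['Ipv6CidrBlockAssociationSet'][0]['Ipv6CidrBlock']

# Use the first /64 out of the VPC's /56 prefix for a subnet
subnet_ipv6 = ipv6_block.replace('/56', '/64')
ec2.create_subnet(VpcId=vpc_id, CidrBlock='10.0.1.0/24', Ipv6CidrBlock=subnet_ipv6)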

 

DNS resolution for VPC Peering:

With DNS resolution enabled for your VPC peering, you can now resolve public DNS hostnames to private IP addresses when queried from any of your peered VPCs. This actually simplifies the DNS setup for your VPCs and enables the seamless extension of your network environments to the cloud.

 

VPC endpoints for DynamoDB:

Yet another amazing feature provided for VPCs this year is support for endpoints for your DynamoDB tables. Why is this so important all of a sudden? Well, for starters, you don't require internet gateways or NAT instances attached to your VPCs if you are leveraging endpoints for DynamoDB.

 

This essentially saves costs and keeps the traffic between your application and the DB local to the AWS internal network, unlike previously, when traffic from your app would have to traverse the internet in order to reach your DynamoDB tables.

 

Secondly, endpoints for DynamoDB virtually eliminate the need for maintaining complex firewall rules to secure your VPC. And thirdly, and most importantly, it's free!
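
Creating such a gateway endpoint is essentially a one-liner once you know your VPC and route table IDs; here is a boto3 sketch with placeholder IDs and the us-east-1 region assumed:

import boto3

ec2 = boto3.client('ec2', region_name='us-east-1')

# Create a gateway endpoint so traffic to DynamoDB stays on the AWS network
ec2.create_vpc_endpoint(
    VpcId='vpc-0123456789abcdef0',                     # placeholder VPC ID
    ServiceName='com.amazonaws.us-east-1.dynamodb',    # DynamoDB service name for the region
    RouteTableIds=['rtb-0123456789abcdef0']            # placeholder route table ID
)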

 

CloudWatch


CloudWatch has undergone a lot of new and exciting changes and feature additions compared to what it originally provided as a service a few years back. Here's a quick look at some of its latest announcements:

 

CloudWatch events:

One of the most anticipated and useful features added to CloudWatch is CloudWatch Events! Events are a way for you to respond to changes in your AWS environment in near real time.

 

This is made possible with the use of event rules that you need to configure, along with a corresponding set of actionable steps that must be performed when that particular event is triggered.

 

For example, you can design a simple backup or clean-up script to be invoked when an instance is powered off at the end of the day. You can, alternatively, schedule your event rules to be triggered at a particular interval during the day, week, month, or even year! Now that's really awesome!
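
As a small illustration of both kinds of rules, the boto3 sketch below creates a scheduled rule that fires once a day and an event-pattern rule that reacts when an EC2 instance is stopped, both pointing at a hypothetical Lambda function:

import json
import boto3

events = boto3.client('events')

# A scheduled rule that triggers once every day
events.put_rule(Name='nightly-cleanup', ScheduleExpression='rate(1 day)')

# An event-pattern rule that fires when any EC2 instance enters the 'stopped' state
events.put_rule(
    Name='on-instance-stopped',
    EventPattern=json.dumps({
        'source': ['aws.ec2'],
        'detail-type': ['EC2 Instance State-change Notification'],
        'detail': {'state': ['stopped']}
    })
)

# Point both rules at a (hypothetical) Lambda function that performs the backup or clean-up
target = {'Id': 'cleanup-function', 'Arn': 'arn:aws:lambda:us-east-1:123456789012:function:cleanup'}
events.put_targets(Rule='nightly-cleanup', Targets=[target])
events.put_targets(Rule='on-instance-stopped', Targets=[target])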

 

High-resolution custom metrics:

We have all felt the need to monitor our applications and resources running on AWS in near real time; however, with the smallest configurable monitoring interval set at 10 seconds, this was always going to be a challenge.

 

But not anymore! With the introduction of high-resolution custom metrics, you can now monitor your applications down to a 1-second resolution!

 

The best part of all this is that there is no real difference between configuring or using a standard alarm and a high-resolution one. Both alarms perform the exact same functions; the high-resolution alarm simply reacts much faster.
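
Publishing a high-resolution data point is just a matter of setting StorageResolution to 1 on a custom metric. A minimal boto3 sketch, with a made-up namespace and metric name, is shown below:

import boto3

cloudwatch = boto3.client('cloudwatch')

# Publish a 1-second resolution data point for a custom application metric
cloudwatch.put_metric_data(
    Namespace='MyApp/Performance',           # illustrative namespace
    MetricData=[{
        'MetricName': 'RequestLatency',
        'Value': 42.0,
        'Unit': 'Milliseconds',
        'StorageResolution': 1               # 1 = high resolution, 60 = standard resolution
    }]
)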

 

CloudWatch dashboard widgets:

A lot of users have had trouble adopting CloudWatch as their centralized monitoring solution due to its inability to create custom dashboards. All that has now changed, as CloudWatch today supports the creation of highly customizable dashboards based on your application's needs.

 

It also supports out-of-the-box widgets in the form of the number widget, which provides a view of the latest data point of the monitored metric, such as the number of EC2 instances being monitored, or the stacked graph, which provides a handy visualization of individual metrics and their impact in totality.
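
Dashboards themselves are just JSON documents, so they can also be created or updated programmatically. Here is a rough boto3 sketch that builds a dashboard with a single metric widget; the dashboard name, instance ID, and metric are illustrative:

import json
import boto3

cloudwatch = boto3.client('cloudwatch')

# A dashboard with one metric widget showing average EC2 CPU utilization
dashboard_body = {
    'widgets': [{
        'type': 'metric',
        'x': 0, 'y': 0, 'width': 12, 'height': 6,
        'properties': {
            'metrics': [['AWS/EC2', 'CPUUtilization', 'InstanceId', 'i-0123456789abcdef0']],
            'period': 300,
            'stat': 'Average',
            'region': 'us-east-1',
            'title': 'Web server CPU'
        }
    }]
}

cloudwatch.put_dashboard(DashboardName='wordpress-overview', DashboardBody=json.dumps(dashboard_body))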

 

Elastic Load Balancer


One of the most significant and useful additions to ELB over the past year has been the introduction of the Application Load Balancer (ALB). Unlike its predecessor, the Classic Load Balancer, the ALB is a strict Layer 7 (application) load balancer designed to support content-based routing as well as applications that run in containers.

 

The ALB is also designed to provide additional visibility of the health of the target EC2 instances as well as the containers. Ideally, such ALBs would be used to dynamically balance loads across a fleet of containers running scalable web and mobile applications.

This is just the tip of the iceberg compared to the vast plethora of services and functionality that AWS has added in the span of just one year! Let's quickly glance through the various services that we will be covering in this blog.

 

Introduction of newer services

The first edition of AWS Administration - The Definitive Guide covered a lot of the core AWS services, such as EC2, EBS, Auto Scaling, ELB, RDS, S3, and so on. In this edition, we will be exploring and learning things a bit differently by exploring a lot of the services and functionalities that work in conjunction with the core services:

 

EC2 Systems Manager:


EC2 Systems Manager is a service that provides a lot of add-on features for managing your compute infrastructure. Each compute entity managed by EC2 Systems Manager is called a managed instance, and this can be either an EC2 instance or an on-premises machine!

 

EC2 Systems Manager provides out-of-the-box capabilities to create and baseline patches for operating systems, automate the creation of AMIs, run configuration scripts, and much more!

 

Elastic Beanstalk:

Beanstalk is a powerful yet simple service designed for developers to easily deploy and scale their web applications. At the moment, Beanstalk supports web applications developed using Java, .NET, PHP, Node.js, Python, Ruby, and Go.

 

Developers simply design and upload their code to Beanstalk, which automatically takes care of the application's load balancing, auto-scaling, monitoring, and so on.

 

At the time of writing, Elastic Beanstalk supports deploying your apps using Docker containers or directly on EC2 instances, and the best part of using this service is that it's completely free! You only need to pay for the underlying AWS resources that you consume.

 

Elastic File System:


The simplest way to describe Elastic File System (EFS) is as an NFS share on steroids! EFS provides simple and highly scalable file storage as a service, designed to be used with your EC2 instances.

 

You can have multiple EC2 instances attach themselves to a single EFS mount point which can provide a common data store for your applications and workloads.

 

WAF and Shield: In this blog, we will be exploring quite a few security and compliance services that provide an additional layer of security on top of your standard VPC.

 

Two such services we will learn about are WAF and Shield. WAF, or Web Application Firewall, is designed to safeguard your applications against web exploits that could potentially impact their availability and security.

 

Using WAF, you can create custom rules that safeguard your web applications against common attack patterns, such as SQL injection, cross-site scripting, and so on. Similar to WAF, Shield is also a managed service, providing protection against DDoS attacks that target your website or web application.

 

CloudTrail and Config:

CloudTrail is yet another service that we will learn about in the coming blogs. It is designed to log and monitor your AWS account and infrastructure activities. This service comes in really handy when you need to govern your AWS accounts against compliance requirements, audits, and standards, and take the necessary corrective action.

 

Config, on the other hand, provides a very similar set of features; however, it specializes in assessing and auditing the configurations of your AWS resources.

Both services are used in conjunction to provide compliance and governance, which helps with operational analysis, troubleshooting issues, and meeting security demands.

 

Cognito: Cognito is an awesome service that simplifies building sign-up and sign-in functionality for your web and even mobile applications. You also get options to integrate social identity providers, such as Facebook, Twitter, and Amazon, as well as SAML-based identity solutions.

 

CodeCommit, CodeBuild, and CodeDeploy:

AWS provides a really rich set of tools and services for developers, which are designed to deliver software rapidly and securely. At the core of this are three services that we will be learning and exploring in this blog, namely CodeCommit, CodeBuild, and CodeDeploy.

 

As the names suggest, the services provide you with the ability to securely store and version control your application's source code, as well as to automatically build, test, and deploy your application to AWS or your on-premises environment.

 

SQS and SNS: SQS, or Simple Queue Service, is a fully-managed queuing service provided by AWS, designed to decouple your microservices-based or distributed applications.

 

You can use SQS to send, store, and receive messages between different applications at high volumes, without any infrastructure management.

 

SNS, or Simple Notification Service, is used primarily as a pub/sub messaging service or as a notification service. You can additionally use SNS to trigger custom events for other AWS services, such as EC2, S3, and CloudWatch.
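
To give a feel for how little plumbing is involved, the boto3 sketch below creates a queue, passes a message through it, and publishes a notification to an assumed SNS topic; the queue name and topic ARN are illustrative:

import boto3

sqs = boto3.client('sqs')
sns = boto3.client('sns')

# Create a queue and send a message to it
queue_url = sqs.create_queue(QueueName='order-events')['QueueUrl']
sqs.send_message(QueueUrl=queue_url, MessageBody='{"orderId": 42, "status": "created"}')

# Receive and delete the message on the consumer side
messages = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1, WaitTimeSeconds=5)
for msg in messages.get('Messages', []):
    print(msg['Body'])
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg['ReceiptHandle'])

# Publish a notification to an existing (illustrative) SNS topic
sns.publish(
    TopicArn='arn:aws:sns:us-east-1:123456789012:order-notifications',
    Subject='New order received',
    Message='Order 42 has been created.'
)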

 

EMR: Elastic MapReduce is essentially managed Hadoop as a service, providing a clustered platform of EC2 instances for running the Apache Hadoop and Apache Spark frameworks.

 

EMR is highly useful for crunching massive amounts of data, as well as for transforming and moving large quantities of data from one AWS data source to another. EMR also provides a lot of flexibility and scalability for your workloads, with the ability to resize your cluster depending on the amount of data being processed at a given point in time.

 

It is also designed to integrate effortlessly with other AWS services, such as S3 for storing the data, CloudWatch for monitoring your cluster, CloudTrail to audit the requests made to your cluster, and so on.

 

Redshift:

Redshift is a petabyte-scale, managed data warehousing service in the cloud. Similar to its counterpart, EMR, Redshift also works on the concept of clustered EC2 instances on which you upload large datasets and run your analytical queries.

 

Data Pipeline:

Data Pipeline is a managed service that provides end users with the ability to process and move datasets from one AWS service to another, as well as from on-premises data stores into AWS storage services such as RDS, S3, DynamoDB, and even EMR!

 

You can schedule data migration jobs, track dependencies and errors, and even write preconditions and activities that define what actions Data Pipeline has to take against the data, such as running it through an EMR cluster, performing a SQL query over it, and so on.

 

IoT and Greengrass:


AWS IoT and Greengrass are two really amazing services that are designed to collect and aggregate various device sensor data and stream that data into the AWS cloud for processing and analysis.

 

AWS IoT provides a scalable and secure platform with which you can connect billions of sensor devices to the cloud or to other AWS services, and use it to gather, process, and analyze data without having to worry about the underlying infrastructure or scalability needs.

 

Greengrass is an extension of the AWS IoT platform and essentially provides a mechanism that allows you to run and manage data pre-processing jobs directly on the sensor devices. With these services out of the way, let's quickly look at how we plan to move forward with the rest of this blog!

 

Plan of attack!

Just as in the previous edition, we will be leveraging a simple plan of attack for this blog as well! By plan of attack, I just mean how I've planned to structure the content and tie it all together!

 

For most of the blog, we will be focusing on a simple use case: hosting a WordPress application on AWS with the help of some really cool services, such as Elastic Beanstalk, Elastic File System, WAF and Shield, EMR, Redshift, and much more! Here's a simple depiction of what we aim to achieve by the end of the blog:

 

Here is the brief outline of how the next few blogs are spread out:

We will begin by setting up our WordPress site manually on an EC2 instance as a standalone installation, and then learn how to manage those instances with the help of the EC2 Systems Manager utility.

 

With this completed, we shall then use a combination of Elastic Beanstalk and Elastic File System to host the same WordPress site with more control over high availability and scalability, all the while learning the internals and use cases of both these services as we go along.

 

Now that the site is hosted, we will add a layer of security over it by leveraging both WAF and Shield, as well as enabling governance in the form of CloudTrail and Config.

 

Later, we will also see how to leverage the code development services provided by AWS, namely CodeCommit, CodeBuild, and CodeDeploy, to create an effective CI/CD pipeline for pushing updates to our site.

 

Finally, we will also be executing some essential log analysis over the site using Elastic MapReduce and Redshift, and learn how to back up our site's data using Data Pipeline.

 

But that's not all! As mentioned earlier, we will also be learning about a few additional services in the form of IAM and AWS Cognito services for authentication and security, as well as AWS IoT and AWS Greengrass.

 

Connecting the World with AWS IoT and AWS Greengrass


It's been quite a long journey so far, and yet here we are: the final part of this blog! If you have made it this far, then you definitely need to take a moment and give yourself a well-deserved pat on the back!

 

So far in this blog, we have covered a plethora of services, such as Amazon EFS, AWS Beanstalk, the AWS Code suite, AWS Shield, and AWS Data Pipeline, just to name a few. In this final part, we will be exploring the IoT suite of services provided by AWS, with emphasis on two core products, namely AWS IoT and AWS Greengrass.

 

Let's have a quick look at the various topics that we will be covering in this blog:

  • A brief look at the building blocks required for IoT
  • An introduction to the AWS IoT suite of services followed by a deep dive into AWS IoT, its concepts and terminologies
  • Connecting to AWS IoT using a Raspberry Pi Zero device
  • Exploring the AWS IoT Device SDK, using a few simple code examples
  • Integrating AWS IoT with other AWS services, using IoT rules
  • An introduction to AWS Greengrass, along with a simple getting started example
  • Effectively monitoring IoT devices, as well as the IoT services
So without any further ado, let's get started!

 

IoT – what is it?


Well, to the uninitiated, IoT, or the Internet of Things, is all about connecting everyday objects or things together, using a common medium of communication (in this case, the internet) for the exchange of data.

 

I know it doesn't sound like much, but today, IoT is being implemented practically everywhere around us: from wearable devices and smartphones, to home appliances such as refrigerators and air conditioners, to vehicles and heavy machinery, and much more!

 

Gartner predicts that by the year 2020, there will be an estimated 26 billion devices connected using IoT, and this number is set to grow even further, as IoT adoption becomes mainstream. But what exactly is IoT and how do you build it? Here's a quick look at some of the basic building blocks required in order to get started with IoT:

 

Things: To begin with, any form of IoT comprises the end-user devices that we use to perform our day-to-day tasks.

 

These devices, or things, can be anything and everything: simple electronic devices such as smartphones, wearables, alarm clocks, and light bulbs, as well as washing machines, garage doors, vehicles, and ships, and the list just goes on!

 

Sensors: Sensors are devices that can be incorporated into things in order to capture or supply data. Some of the most commonly used sensors are IR sensors, moisture sensors, gas and pressure sensors, and so on.

 

Sensors are not designed to process data on their own. They simply collect the data and push it out to one or more processors; for example, a light sensor reporting whether a light bulb is switched on or off.

 

Processors: Processors are the brains of the IoT system. Their main function is to process the data that is captured by the sensors. This processing can be based on certain triggers or can be performed close to real time, as well.

 

A single processor can be used to connect to and process data from multiple sensors. The most commonly used types of processors include microcontrollers, embedded controllers, and so on.

 

Gateways: Gateways are special devices that are responsible for collecting and routing data, processed by one or more processors, to IoT applications for further analysis. A gateway can collect, aggregate, and send data over the internet, either as streams or in batches, depending on its configuration and connectivity options.

 

Application: Once data from the various gateways is collected, it needs to be analyzed further to derive meaningful insights so that appropriate actions can be performed.

 

This can be achieved by leveraging one or more applications, such as an industrial control hub, or even a home automation system. For example, an application can be used to remotely trigger a light bulb to switch on, once the ambient light in the room starts to fade, and so on.

 

With this essential information in mind, let's look at a few key AWS services you can use to get started with your very own IoT on the cloud.

 

Introducing the AWS IoT suite of services


AWS IoT Device Management: The AWS IoT Device Management service allows you to register, organize, and manage a large number of IoT devices, easily. You can use this service to onboard devices in bulk and then manage them all, using a single pane of glass view.

 

AWS Greengrass: AWS Greengrass is a software service designed to execute Lambda functions locally, on your IoT devices. In addition to this, you can also use Greengrass to sync data between the device and the IoT Core, using data caching along with other functionalities, such as ML inference, messaging, and so on.

 

AWS IoT Analytics: Connecting and managing billions of IoT devices is one task, and querying the large IoT dataset is quite another.

 

AWS IoT Analytics is a completely managed service that allows you to run analytics on extremely large volumes of IoT data, without having to configure or manage an underlying analytics platform. Using this service, you can obtain better insights into your devices, as well as build more resilient IoT applications.

 

AWS IoT Button: The AWS IoT Button is a Wi-Fi-enabled, programmable button that enables you to write and integrate an IoT application without having to write any device-specific code.

 

AWS IoT Device Defender: With so many devices to manage and maintain, it is equally important to safeguard the devices against malicious attacks. AWS IoT Device Defender is a managed service that allows you to secure, manage, and audit remote devices against a set of security rules and policies. If any deviations are found, IoT Device Defender triggers appropriate notifications for them.

 

Amazon FreeRTOS: Amazon FreeRTOS is a custom operating system built specifically for small, low-powered edge devices or microcontrollers. The operating system is based on the FreeRTOS kernel and helps to easily connect and manage devices with the AWS IoT service.

 

With this, we come to the end of this section. In the next section, we will look at the AWS IoT Core service in more detail, along with a simple and easy-to-follow getting started guide.

 

Getting started with AWS IoT Core


With a brief understanding of the AWS IoT suite of services covered, we can now dive deep into the world of the AWS IoT Core! However, before we get started with some actual hands-on projects, here is a quick look at some important AWS IoT Core concepts and terminologies:

 

The AWS IoT Core service provides bidirectional communication between devices and the AWS cloud, using a set of components described in the following list:

 

Device gateway: This provides a secure mechanism for the IoT device to communicate with the AWS IoT service.

 

Device shadow: A device shadow is a persistent representation of your IoT device in the cloud: a JSON document that stores the current state of your device, which you can use to keep the device and the cloud in sync.

 

Message broker: The message broker provides a secure and reliable channel over which the IoT device can communicate with the cloud. The broker is based on a publish-subscribe model and supports either the standard MQTT protocol or MQTT over WebSockets for communication.

 

Registry: The registry is used to securely register your IoT devices with the cloud. You can use the registry to associate certificates and MQTT client IDs with your devices.

 

Groups: Groups are logical containers used to group together similar devices in order to effectively manage them. You can use groups to propagate permissions and perform bulk actions on your connected devices.

 

Rules: The rules engine in AWS IoT Core provides a mechanism that enables you to process IoT data using simple SQL-like queries. You can additionally write rules that integrate AWS IoT Core with other AWS services, such as AWS Lambda, Amazon S3, Amazon Kinesis, and so on.

 

Here is how it all fits together! You start off by preparing a device for connection with the AWS IoT Core. This involves creating a set of certificates that essentially authenticate the device when it connects to the AWS IoT Core.

 

Once connected, the device starts publishing its current state in a JSON format using the standard MQTT protocol. These messages are sent to the message broker, which essentially routes them to their respective subscribing clients, based on the message's topic.

 

You can even create one or more rules that define a set of actions based on the data contained within the messages. When incoming data matches the configured expression, the rules engine invokes the corresponding action, which can be anything from writing the data to a file in Amazon S3 to processing the data using AWS Lambda or Amazon Kinesis.

The following is a representation of these components put together:
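
For example, a rule that watches a telemetry topic and archives matching messages to S3 could be created with boto3 roughly as follows; the rule name, topic, bucket, and role ARN are assumptions for this sketch:

import boto3

iot = boto3.client('iot')

# Archive any reading above 50 degrees from the 'sensors/temperature' topic into S3
iot.create_topic_rule(
    ruleName='archiveHighTemperature',
    topicRulePayload={
        'sql': "SELECT * FROM 'sensors/temperature' WHERE temperature > 50",
        'ruleDisabled': False,
        'actions': [{
            's3': {
                'bucketName': 'my-iot-archive',                                   # illustrative bucket
                'key': 'readings/${timestamp()}.json',
                'roleArn': 'arn:aws:iam::123456789012:role/iot-s3-archive-role'   # role IoT assumes to write
            }
        }]
    }
)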

Keeping this in mind, let's look at how you can connect your IoT device with the AWS IoT Core!

 

Connecting a device to AWS IoT Core

AWS IoT supports a wide variety of specialized IoT-embedded devices and microcontrollers that you can connect. However, for simplicity, you can also simulate an IoT device using either a locally hosted virtual machine or an EC2 instance. For this section, we will be using a simple Ubuntu-based virtual machine hosted on VirtualBox.

 

The virtual machine has the basic operating system packages installed and runs on 512 MB of RAM and a single CPU core with a 10 GB disk. Ensure that your virtual machine has open internet connectivity and a valid hostname set before you proceed with any further steps.

 

The following list summarizes the simulated IoT device's configuration for your reference:

  • CPU: 1 core
  • RAM: 512 MB
  • Operating system: Ubuntu Server 16.04.2 LTS (Xenial), x86_64 architecture
  • Packages: core server packages along with vim, node, npm, git, and wget

Once the device or virtual machine is prepped, we are good to connect with the AWS IoT Core:

From the AWS Management Console, filter and select the AWS IoT service using the Filter provided. Alternatively, open the URL https://console.aws.amazon.com/iot/home to launch the AWS IoT console.

 

Select the Get started option to continue.

Once logged into the console, select the Onboard option from the navigation pane on the left-hand side of the console. Here, you can opt to get started with configuring your first device with the IoT service as well as other options, such as configuring the AWS IoT Button or getting started with the AWS IoT Starter Kit. For this section, select the Get started option under the Configure a device section.

 

The Get started option is a simple three-step process that involves first registering your device, followed by downloading a set of credentials and SDKs for the device to communicate with the IoT Core, and finally testing to check whether the device is successfully connected or not.

 

Select Linux/OSX from the Choose a platform option, followed by Node.js from the Choose an AWS IoT Device SDK option, as shown in the following screenshot. Note that you can alternatively select the Java or Python SDKs; however, the rest of this particular use case will be based only on Node.js:

 

Once the appropriate platform and IoT SDK are selected, click on Next to continue.

The next step involves the registration of a thing or in our case, the IoT device itself. Start off by providing a suitable Name for your thing and then select the Show optional configuration option.

 

In the Apply a type to this thing section, select the Create type option. A Thing Type simplifies managing IoT devices by providing consistent registry data for things that share a type. Provide a suitable Name and an optional Description for your Thing Type and select Create thing type when done.

 

Here's what the final configuration should look like. In my case, I've created a Thing Type called dummyIoTDevice for logically classifying all virtual machine-based IoT devices together. Select the Create Thing option once completed:

 

With the thing successfully created, we now need to establish the connection between the thing and AWS IoT Core. To do so, select the newly created thing tile from the Things console to view the thing's various configurations. Among the important options is the Security option. Go ahead and select the Security option from the navigation pane.

 

Here, you can create and associate the necessary certificates, as well as the policies that will be required for the thing to communicate with the IoT Core. Select the Create certificate option to begin.

 

The necessary certificates are created automatically by AWS IoT Core. Download these files and save them in a safe place. Certificates can be retrieved at any time, but the private and public keys cannot be retrieved after you close this page:

  • A certificate for this thing: xyz.cert.pem
  • A public key: xyz.public.key
  • A private key: xyz.private.key

 

In addition, you will need to download the root CA for AWS IoT from Symantec. You can do that by visiting the following URL: https://www.symantec.com/content/en/us/enterprise/verisign/

Remember to select the Activate option to successfully activate the keys.

 

Since this is our first time working with the IoT Core, we will be required to create a new policy from scratch. The policy will be used to authorize the certificates we created in the previous step. Select the Create new policy option to get started.

 

In the Create a Policy page, start by providing a suitable Name for your new policy. Once completed, you can use either the basic or advanced mode to create your IoT policy. For simplicity, select the Advanced mode option and paste the following policy snippet as shown:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "iot:*",
      "Resource": "*"
    }
  ]
}

 

The preceding policy grants all devices permission to connect, publish, and subscribe to the AWS IoT message broker. You can alternatively tweak this policy as per your requirements.

 

Once done, select the Create option to complete the policy creation process.

With this step completed, we are but a few steps away from establishing the connection between our IoT device and the AWS IoT Core.
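
If you would rather script these console steps, the thing registration, certificate creation, and policy attachment can also be performed through the AWS SDK. The following boto3 sketch mirrors what we just did in the console; the thing name and policy name used here are illustrative, so adjust them to your own:

import json
import boto3

iot = boto3.client('iot')

# 1. Register the thing (our simulated IoT device)
iot.create_thing(thingName='dummyIoTDevice')

# 2. Create and activate a certificate plus its key pair
cert = iot.create_keys_and_certificate(setAsActive=True)
cert_arn = cert['certificateArn']
# Save cert['certificatePem'] and cert['keyPair'] ('PublicKey' and 'PrivateKey') to files now;
# the keys cannot be retrieved later.

# 3. Create the (wide-open, demo-only) policy and attach it to the certificate
policy = {
    'Version': '2012-10-17',
    'Statement': [{'Effect': 'Allow', 'Action': 'iot:*', 'Resource': '*'}]
}
iot.create_policy(policyName='dummyIoTPolicy', policyDocument=json.dumps(policy))
iot.attach_principal_policy(policyName='dummyIoTPolicy', principal=cert_arn)

# 4. Bind the certificate to the thing
iot.attach_thing_principal(thingName='dummyIoTDevice', principal=cert_arn)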

 

With the necessary policy created and the certificates downloaded, we now need to copy these over to our IoT device, in this case, the Ubuntu virtual machine. You can use any SCP tool, such as WinSCP, to perform this activity.

 

For this scenario, I've called the downloaded Symantec root CA file root-CA.crt.

Once the files are copied over to a destination folder on your IoT device, you are ready to test the connectivity. In order to do that, we will first need to install and configure the AWS IoT Device SDK on the device.
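
The walkthrough selects the Node.js SDK; purely to illustrate the connect-and-publish flow that any of the device SDKs perform, here is a minimal sketch using the AWS IoT Device SDK for Python (AWSIoTPythonSDK). The endpoint, client ID, topic, and file paths are placeholders you would replace with your own values:

import json
from AWSIoTPythonSDK.MQTTLib import AWSIoTMQTTClient

# Placeholder endpoint and credential paths; use the values from your own IoT console
client = AWSIoTMQTTClient('dummyIoTDevice')
client.configureEndpoint('xxxxxxxxxxxxxx.iot.us-east-1.amazonaws.com', 8883)
client.configureCredentials('root-CA.crt', 'xyz.private.key', 'xyz.cert.pem')

# Connect to the message broker and publish the device's current state over MQTT
client.connect()
client.publish('things/dummyIoTDevice/state', json.dumps({'temperature': 21.5, 'status': 'ok'}), 1)
client.disconnect()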
