Host a website on AWS (Best AWS Tutorial 2019)


This tutorial explains how to host a static website on Amazon Web Services (AWS) using Amazon S3.

 

It is important to understand a bit more about the content and the file assets that you’ll be working with in the next several blogs. Laying this foundation will let you see the parallels with your own website content and how you can migrate it to AWS.

 

Website Content Overview


This hosting scenario will consist of a static website made up of five pages. You will be using a template provided by www.templated.co; this site offers an excellent selection of clean, responsive HTML/CSS templates for use with your website. The web pages that you’ll be working with are described below.

 

Home: The home page will be the landing page of the website. This page will hold basic information about the law firm, a high-level overview of the services provided, and important contact information.

 

Services: The services page will hold detailed information about the services offered by the law firm.

 

Testimonials: The testimonials page will hold the best of the feedback and testimonials that the firm has collected from previous and existing clients.

 

As you progress through this web hosting scenario, I’ll show you how you can use page templates to add new testimonials and update this page over time. This same process can be used if you want to do the same with a page for the latest firm news or blog posts (more on this later).

 

About Us: The “About Us” page will hold information about the history of the firm, the founding partners, and other firm-specific information.

 

Contact: The contact page will hold contact information for the firm. This will include the full address and an embedded map, which will illustrate how you can still use external resources even though this is a static website.

 

In addition, you’ll see how other AWS services, such as Lambda, can be leveraged from your static files to process information.

 

This will be your first exposure to the power of Amazon Web Services’ managed service offerings. This one dives a bit deeper, so I’ll save it until you have worked through some of the other foundational services.

 

Website Asset Overview


As you can see, you have an HTML page for each of the pages described in the previous section. You have a folder to hold your CSS content files (which control the styling of your website), a folder for your image assets, and a folder for any JavaScript that will be used.
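Based on the pages and folders described above, the content directory might look like the following sketch (the exact file names are assumptions for illustration; yours may differ):

```text
site/
├── index.html          (Home)
├── services.html       (Services)
├── testimonials.html   (Testimonials)
├── about_us.html       (About Us)
├── contact.html        (Contact)
├── css/                (stylesheets controlling the site's look)
├── images/             (image assets)
└── js/                 (JavaScript used by the pages)
```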

 

Relevant AWS Services

Let's briefly go over the services that I will be introducing in this website hosting scenario. The services listed below are some of the core Amazon Web Services infrastructure offerings.

 

AWS S3: Amazon Web Services Simple Storage Service is Amazon’s core object-based storage service, and you’ll be using it in this hosting scenario. I’ll cover all of the fundamentals and give you some tips for working with this service when hosting static web content.

 

AWS Route 53: Amazon Web Services Route 53 is a fully managed Domain Name System (DNS) service that resolves domain names to IP addresses. This service offers much more, though, and while working with this hosting scenario you’ll explore some of the basics, such as domain registration and working with DNS record sets.

 

AWS Lambda: Amazon Web Services Lambda is a managed service that enables you to leverage AWS infrastructure to process compute workloads. Think of it as a processing factory. If you have work to be done, you can do it on your own dedicated web servers or you can outsource the workload to AWS Lambda.

 

A service like this comes in very handy when trying to handle simple tasks without the need to invest in full server architecture.

 

You’ll learn a bit about this service when you use it to process data submitted by visitors through the form on your Contact web page. It is important to note that AWS Lambda was not available in all AWS regions at the time of writing.

 

To learn more about regional service availability, go to https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/.

 

AWS IAM

Although you have now created your user, by default the account does not yet have a password, so you’ll set it up. You should be back at the AWS IAM Users screen with the new user you just created listed; if not, just click the Users link in the left-hand navigation.

 

From here, select the checkbox next to the user account you would like to manage, and then choose the User Actions drop-down menu near the top of the user list.

 

You will want to record this password in a safe location because this first account is the one that you will be using for the majority of your tasks moving forward.

 

The last step in securing your AWS Root Account is to enable multi-factor authentication (MFA). To do this last step, let’s head back to the IAM Dashboard/Home screen by clicking the Dashboard link in the left-hand navigation.

 

At this point, your Security Status section should be looking much better, with four of the five items listed with a green checkbox next to them showing that they have been completed. The last item listed with a caution symbol is to enable multi-factor authentication on the AWS root account.

 

MFA by design requires more than one authentication factor to access your account. This means that in addition to a username and password, another factor of authentication will be required for you to log into your AWS root account.

 

This may seem painful, but securing your AWS root account is an important thing to do; if someone compromises this account, they have the keys to the kingdom and can start using resources and services that could end up costing you a lot of money.

 

The easier of the two ways to enable and use MFA on your root account is a virtual MFA device loaded on a cell phone. Google Authenticator is an MFA application that can be downloaded from the Apple App Store or Google Play and used with AWS.

 

If you do not have a mobile device to use as a virtual MFA device, you do have the option of ordering a hardware device from Amazon.

 

In addition to these options, there are desktop applications that can be loaded on your PC to act as a virtual MFA device. Installation and configuration of specific MFA applications go beyond the scope of this blog, but you can find more information about the MFA setup options at https://aws.amazon.com/mfa.

 

After you’ve downloaded Google Authenticator on your mobile phone or installed an MFA application on your PC, click the drop-down arrow to the right of “Activate MFA on your root account” and then click the “Manage MFA” button. Choose the “A virtual MFA device” radio button and click the “Next Step” button.

 

At this point, you will be reminded that you need a compatible MFA software application on your mobile phone or computer to continue. Click the Next Step button when you’re ready to proceed. From the next screen, you can use your smartphone to scan the QR code presented or use the secret keys for manual configuration.

 


 

Amazon Web Services: S3 Overview


AWS S3 is Amazon’s highly redundant, highly available storage solution for the AWS Platform.

 

In contrast to an operating system’s file system, AWS S3 is object-based. This is an important concept to understand: S3 stores objects, and these objects have properties that control information about them.

 

A bucket is a collection of objects. An object can’t exist outside of a bucket, so a bucket is the top-level storage unit within AWS S3 and the first thing that you’ll create to hold your website content.

 

As mentioned, Amazon S3 is highly available: data stored in S3 is automatically replicated across multiple availability zones within its region. A region is a geographical area that holds a minimum of two availability zones.

 

An availability zone can be thought of as a data center location. In AWS, most services are scoped to a given region, such as Oregon or Northern Virginia, but IAM is global in scope, and S3 bucket names share a single global namespace even though each bucket resides in a region.

 

This means that when you upload data to S3, it is replicated across multiple facilities in Amazon’s infrastructure to provide extremely fault-tolerant storage.

 

Creating a Bucket for Web Content


As mentioned, a bucket is the top-level organization structure for S3 content. It can hold folders and file objects within it. In this section, you’ll create a bucket to hold your static website content files and then you’ll upload your content files. You’ll also examine some of the properties and settings of file objects and S3 buckets.

 

After clicking the Create Bucket button, choose an AWS region where the bucket will be created. The home region for your bucket should be the one that is closest to the majority of your website visitors to minimize latency.

 

If you are unsure about where your visitors will be coming from, simply choose the default AWS region that is presented. S3 replicates your content within the chosen region as part of the service; there is nothing you need to do other than upload your content to get highly available storage.

 

You are going to name your bucket with the exact name of the website that you want to host. I use www.nadonhosting.com; you should choose a bucket name that is unique to you across all other AWS accounts.

 

As mentioned, S3 is a global platform service, so each bucket name must be unique across the platform. This means that once any AWS account creates a bucket with a specific name, no other account can use that name for as long as the bucket exists.

 

Click the Create button once you have entered your bucket name and chosen an AWS region. Your bucket will be created, and you’ll be brought to the S3 main administration page.

 

From this page, you can see that the current bucket is www.nadonhosting.com and the properties of that bucket are displayed on the right-hand side of the screen. From here you can manage all aspects of the bucket you just created.

 

You can see the bucket name and the AWS region that it resides in; you can set permissions, enable logging and versioning, and much more. You’ll dig into these bucket properties soon, but for now let’s upload your static website content. Click the bucket name to navigate to that bucket.

 

You can upload your content by clicking the Upload button, which will bring you to the Upload - Select Files and Folders wizard. S3 objects can range in size from 0 bytes up to 5 terabytes! The wizard allows you to select files by clicking the Add Files button or by dragging them from a File Explorer window onto the wizard window.

 

I’ve included sample files with this blog that can be used to follow along. Once you have downloaded them and unzipped them to a folder on your local computer, you can use them as the content to upload to S3.

 

I recommend the drag-and-drop method because this will allow for the upload of folders and files in the same operation. Select the folders and files, and click the Start Upload button.

 

The upload process will start; when it completes, congratulations, you now have content hosted in AWS S3! Since you looked at S3 bucket properties earlier, let’s investigate the object-level properties of a file you uploaded. Click the index.html file and then click the Properties tab in the top right-hand corner of the screen.

 

When an object is selected in S3 and the Properties tab view is enabled, you will see all of the information related to that object, the S3 bucket in which it resides, and additional details such as object size, last modified date, and more.

 

This should feel pretty familiar to the information you can get from a file’s properties in an operating system’s File Explorer window.

 

One difference that you will notice is that each object has a unique link property, which is the HTTP endpoint that this specific file is available from.

 

By default, uploaded files become objects that have no permissions and are not publicly available. As part of making these objects available on the Internet, you will perform a step in the next blog to change a property to make them “public.”

 

If you were to copy the object link property of index.html and paste it into a browser, you would not be able to load the page since it is currently set to “private.” In fact, you would receive an error until you enable these files to be accessed over the Internet.
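For reference, one common way to make every object in a bucket publicly readable is a bucket policy. The following is only a hedged sketch using the AWS CLI and this tutorial’s bucket name (substitute your own); the console-based approach is covered in the next blog.

```shell
# Sketch: attach a bucket policy that allows anyone to GET any object
# in the bucket. Bucket name is the one used in this tutorial.
aws s3api put-bucket-policy \
  --bucket www.nadonhosting.com \
  --policy '{
    "Version": "2012-10-17",
    "Statement": [{
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::www.nadonhosting.com/*"
    }]
  }'
```

After the policy is applied, the object link for index.html should load in a browser without an access error.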

 

Accessing S3 Resources via the AWS CLI


Until this point in the blog, you have only accessed your AWS S3 resources via the AWS Console. Although this is the most effective way to get started using S3, once you have resources (buckets, folders, and objects) in S3 you may want to interact with these resources in other ways.

 

Some of you are very familiar with using a command-line interface (CLI) to complete tasks, and AWS has a CLI that can be installed on your local computer to give you access to all of your AWS platform resources (IAM, S3, and more).

 

The installation of the CLI is a bit outside of the scope of this blog, but Amazon offers excellent documentation on how to get this handy tool installed here: http://docs.aws.amazon.com/cli/latest/userguide/installing.html.

 

Once you have installed the CLI, open a shell window or command prompt and enter the following command to verify the installation: aws --version

 

The AWS CLI allows you to perform any action that you can perform through the AWS Console via the command line. It relies on the secret key and access ID of a given AWS IAM account to authenticate and access AWS platform resources.
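Supplying those credentials to the CLI is a one-time setup step. A minimal sketch follows; the key values and region shown are placeholders, not real credentials:

```shell
# Store the IAM user's access key ID and secret access key so the CLI
# can authenticate. The CLI writes them to ~/.aws/credentials.
aws configure
# AWS Access Key ID [None]: AKIAXXXXXXXXXXXXXXXX
# AWS Secret Access Key [None]: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
# Default region name [None]: us-east-1
# Default output format [None]: json
```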

 

As you’ll remember, you created that account as an administrative account with full access to all AWS platform resources. To verify that the account is set up correctly, let’s issue a command to list the available buckets in your S3 account: aws s3 ls

 

This command first calls aws, and then states that you want to use the s3 resource; the last part of the command lists available resources. A full command reference for AWS S3 can be found at http://docs.aws.amazon.com/cli/latest/reference/s3/. You’ll focus on what I feel may be most useful to you when you need to update your static website content.

 

In the response of that command, you’ll see a listing of the single S3 bucket that you created. If you’d like to see a listing of all files in that bucket, you can add the name of the bucket to the command by using the following command (replace my website name with yours):

aws s3 ls s3://www.nadonhosting.com

 

This command will list all objects and folders in the www.nadonhosting.com bucket.

Other useful additions for finding out what resources are in S3 are the --summarize, --recursive, and --human-readable options to the ls command.
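Combining those options gives a full, readable inventory of the bucket. A sketch using this tutorial’s bucket name (substitute your own):

```shell
# Recursively list every object in the bucket with human-readable sizes,
# and print a total object count and total size at the end.
aws s3 ls s3://www.nadonhosting.com --recursive --human-readable --summarize
```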

 

The last command that I’d like to cover is one of the most useful commands for managing your static website content: the sync command.

 

The command will synchronize a local folder with your S3 bucket and can be used as a very simple way to push any content changes that you’ve made to your local files up to S3 without having to log in to the AWS Console.

 

In addition, the sync command can also sync content between S3 buckets, making it an easy method for moving files around in AWS. In the following code, I have my local directory named the exact same name as my S3 bucket and I changed my working directory to be the one that has the content that I’d like to sync.

 

Doing this allows me to just pass “.” in the command to reference the current working directory as the source of the sync process. I’ve also made an update to the about_us.html file and run the sync command with the --dryrun option.

 

This option will tell you what would happen, but won’t actually do it; it’s good for testing the sync before actually performing it.

aws s3 sync . s3://www.nadonhosting.com --dryrun

 

In this CLI output, I can see that there is only one file that has changed and needs to be updated. When I’m OK with this process happening, I can rerun the command without the --dryrun option and it will sync my content and update the object in my S3 bucket.
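When the dry run looks right, the same command without the option performs the actual update; sync can also copy between two buckets, as mentioned above. A sketch (the destination backup bucket name here is hypothetical):

```shell
# Push local changes in the current directory up to the website bucket.
aws s3 sync . s3://www.nadonhosting.com

# Sync one bucket to another, e.g. to keep a backup copy of the site
# (the destination bucket name is hypothetical).
aws s3 sync s3://www.nadonhosting.com s3://my-backup-bucket
```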

 

This process makes it very easy to update your website content via a single command rather than logging into the AWS Console, navigating to the S3 service, navigating to your bucket resource, and uploading the file manually.

 

In these code examples, you were using the default credential profile because you did not specify the --profile option in the command. It is worth noting that the CLI supports multiple named profiles. Once these are set up in the credentials file, the --profile option lets you switch between them.

 

This is useful when you have multiple AWS accounts that you want to manage with the CLI. Another example is if you want to switch between users within a specific account, such as one user that has read-only access vs. one that has the ability to create, update, or delete resources.
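A sketch of what that looks like, assuming a credentials file that holds a hypothetical read-only profile alongside the default administrative one:

```shell
# ~/.aws/credentials might contain two named profiles (keys are placeholders):
#
#   [default]
#   aws_access_key_id     = AKIA...ADMIN
#   aws_secret_access_key = ...
#
#   [readonly]
#   aws_access_key_id     = AKIA...READONLY
#   aws_secret_access_key = ...

# Run a command as the read-only user instead of the default profile.
aws s3 ls --profile readonly
```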

 

Accessing S3 Resources via Third-Party Applications


You’ve now learned how to access your S3 resources through the AWS Console and the command-line interface.

Let’s briefly talk about another way that you can access your AWS S3 resources: third-party applications that can use AWS IAM credentials in a similar way to how the CLI uses them. One application that I have found particularly helpful for managing S3 content is CloudBerry Explorer for Amazon S3. 

 

There are two versions: a freeware version and a Pro version. My recommendation is to use the freeware version, and if you find it to be valuable, you can upgrade when you are ready.

 

The configuration of the client software will ask you for your AWS account access keys and secret keys, so be sure to have them handy from your work in the last blog.

 

This interface should seem pretty familiar to you in that it is a graphical user interface (GUI) that feels very much like an FTP client application or File Explorer.

 

CloudBerry has built in all of the functionality that is exposed through the AWS CLI into this interface, enabling you to create and delete resources, sync locations, and work with your resources in an easy way.

 

As you can see, although the AWS Console is an effective way to work with all of the AWS platform resources, in certain cases, such as with S3, there are many other ways to work with your resources. 

 

I also want to mention that although I have specifically talked about using AWS S3 to host your website content, the platform is a highly available, highly durable, inexpensive solution for all of your storage needs. Personally, I use this platform for backing up personal data, photos, music, and other files in addition to using it to host website content.

 

Enabling Website Hosting on S3 Buckets


I have talked quite a bit about S3 and the basics of managing content within buckets as well as how to access this content via each individual object’s web endpoint.

 

You know that your content is accessible from the Internet, but only one object at a time. Although you could link from one object to another using each public object’s endpoint, this would be a nightmare for managing the collection of content as a website.

 

AWS S3 has a feature that makes this task easier: static website hosting. The feature can be found in the Properties screen of an S3 bucket.

 

After a brief description of this feature, you are presented with three options for configuration. By default, all S3 buckets created have this feature configuration set to “Do not enable website hosting.”

 

The third radio button, which is labeled “Redirect all requests to another hostname,” can be used to redirect website traffic bound to this S3 bucket endpoint to another location. I’ll discuss this configuration option in a bit, so let’s just move forward by choosing the second radio button option of “Enable website hosting” for your S3 bucket.

 

When you choose this option, you need to fill in at least one additional piece of information and that is the name of your main page to be served at your S3 bucket endpoint.

 

In my case, I’m going to enter index.html; however, if you have a different name for your landing page, you can enter it here. Once you enter your home page name in the Index Document field, click the Save button to complete the configuration.
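The same configuration can also be applied from the CLI instead of the console. A hedged sketch, using this tutorial’s bucket name:

```shell
# Enable static website hosting on the bucket and set the index document.
aws s3 website s3://www.nadonhosting.com/ --index-document index.html
```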

 

Browsing Your Website


When this endpoint is called in a browser, S3 checks the configuration that you set up and issues a GetObject method against the value that you entered into the Index Document field. In your case, this was index.html and this can be seen being returned in the browser. 

 

At this point, your static website is fully functional. HTML is being delivered by S3 and rendered in your browser. You have a URL that your visitors can access, but it is not a “friendly” URL.

 

I don’t know about you, but I don’t want to print a long S3 website endpoint such as www.nadonhosting.com.s3-website-&lt;region&gt;.amazonaws.com on my business cards, nor do I want to say that mouthful of words when telling potential customers how to access my site. The logical next step is to set up a domain name to point to this S3 bucket.

 

Next-Level Static Content Hosting


The term static is used to define something that does not change at any regular interval. The Internet started with only static websites.

 

These websites were usually text-based in nature and may have used some images, text formatting, and colors to make them more visually appealing, but they were quite limited in terms of the type of information that could be displayed and how visitors could interact with that information.

 

Fast forward a bit and you started to see websites that had an element of server-side processing happening that allowed for a website/web page to be assembled using static elements as well as dynamic elements (information that could be updated and read from data storage).

 

This server-side processing gave birth to systems referred to as content management systems (CMS) that allowed you to dynamically manage and display content.

 

An example of a very popular CMS platform is WordPress. I’ll cover how to host such a platform using AWS in the second section of this blog, but as you’re in this section dedicated to the static content website, I want to focus on how you can deal with the challenges and limits of static files.

 

Static files are just that: static. The content in these files does not change at runtime; they are delivered to the website visitor as they sit on disk. So if you want to update your website, you need to update the static content on your pages.

 

If you want to update a section on each page of your website and you happen to have five web pages, this means that you must update five separate pages. I would classify this as the first limitation of a static website: the need to manually update content.

 

The time that you have to dedicate to these updates could be a key factor when deciding if you want to choose static content as your main type of website content.

 

Since there is a time commitment involved in updating the content, the likelihood of you doing it often is low. This brings me to another limitation: the chance that the content will be out of date or irrelevant. Having out-of-date information displayed on your website, in my opinion, is worse than having no website presence at all.

 

Think of the frustration and lost time that bad information can cause visitors who come to your site. Losing a potential customer’s trust is not a good way to start a long-term relationship.

 

A final factor that could be considered a limitation is that you may need to learn some HTML and other technologies to be able to update your content efficiently.

 

With all this said, static hosting requires no server-side processing because the rendering of the static content takes place on the client side, meaning the browser and device of the person visiting your website. This also means lower resource requirements from a hosting perspective, which means lower hosting charges in general.

 

A static website is also the least complex of the types that I’ll be covering in this blog and for these reasons, I felt it was worth dedicating this blog to discussing how to overcome some of these limitations.

 

Extending the Boundaries of Static Content

Through the rest of this blog, I am going to talk about a few ways that you can manipulate your static content to give it a more dynamic feeling. I think it is important for me to set a proper expectation before getting started: updating your website is something that your visitors want you to do and it takes effort.

 

I’ll cover a few tricks to make things easy to update, but the actual updating is fully on you. You will only get out of your website what you are willing to put into it. Now let’s talk about the first way to extend your static website to make it feel like a dynamic website: by using web page templates.

 

As you may recall, your static website is made up of five pages. The home page uses a sidebar layout on the right to highlight your firm’s latest news. Although this sidebar content looks like it has been updated recently, because the date of the article is listed along with how many comments have been made, this is a bit of an illusion.

 

This is just HTML text; when a visitor clicks one of these images or article titles, they will be taken to a static web page template that has been used to create the content.

 

Updating this part of the site with new content is a two-step process. First, you use a page that you have defined as a template to create the new article content. Second, you update the index.html page to add the new article on top of the existing ones. I’ve included a file called article_template.html to give you the shell of an article page that you can use or edit to fit your needs.

 

The main areas to be updated are in the sections that are marked Sidebar and Content. In the Sidebar section, updates should be made to the link to the article template, article date, the small image to be used, and the short description of the article. In the Content section, updates should be made to the article title, article date, and article content.

 

You may also notice that I’ve placed a placeholder for an image that is currently set to "visibility: hidden" so that the image won’t be displayed by default. There may be times when you want an image at the top of your article; a simple update to point the image source at the image to be used, plus removal of the style definition, will make the image visible in the article.

 

You will want to copy this section from the page you updated earlier and then paste this new entry right on top of the existing ones. Depending on how many articles you would like to display, you may want to remove the oldest entry. Personally, I think having the three most recent articles is sufficient on the home page.

 

Once you have pasted the information for the new article page, remember to save the file. Now that you’ve updated these two files locally on your system, you’ll need to push them up to AWS S3 so that your website will use the updated content.

 

My preference is to use the aws s3 sync command as described earlier in this blog, or to use CloudBerry Lab S3 Explorer to copy the files up to the S3 bucket.

 

Another option is to use the Console to upload the updated files. Any of the choices are fine; it is really your preference for which one to use. 

 

Although this may not seem like the easiest way to update content, once you have done it a couple of times you will see that it can be an effective way of keeping content fresh with minimal effort. There is another option for pulling in content from other sources to your static website: embedding content.

 

Next, I’ll talk about how you can embed your latest posts from Facebook on your homepage in that same sidebar section.

 

Embedding Content from Other Sources


In the previous section, I discussed how to manually update a section of your static website to give the appearance that this section was being updated automatically or in a dynamic fashion.

 

The process was quite manual and would fit certain use cases, such as a website that has a few content updates a month. You also have the ability to bring in content from other websites to be displayed on your static website.

 

The content from these other sources could be content that is updated frequently, which means that your static website would also have the updated content on it, ensuring that your website doesn’t feel stale. There are many methods for displaying content from other sources on an HTML-only based website.

 

JavaScript/Client-Side Scripting

JavaScript or other client-side scripting languages process code during runtime using the processing resources of the client machine. In short, this means that you can include JavaScript code in your HTML web page that will be run when the visitor to your website opens that page with their browser.

 

The benefit of client-side scripting is that it can use resources on the visitor’s computer and interact with their session in a way that feels like a lightweight, positive experience for the visitor.

 

Many web-based forms use JavaScript to validate data that is being entered into a form by a web visitor and interact with the visitor as they are entering the data. An example is validating that an email address is entered in the proper format before accepting the input.

 

In your use case, JavaScript can be used to load content from another source. To give an example of this type of content embedding, you will pull in some information from Facebook using code that is available from within the Facebook UI.

 

For your example, you will update the “Legal Issues and News” sidebar with content that pulls from a public Facebook post. The important part to note here is that the post has to be public to be able to be displayed without any form of authentication being required.

 

You’ll use a public post that has relevant data that could be listed in your sidebar section, but if you already have a Facebook page with content on it, you may choose to link to that content instead. 

 

The first step is to open in a browser the Facebook post that has the content that you would like to display on your page. Once opened, click the Settings drop-down in the top right corner. Click the Embed option to open up a window that will provide you with the code that you need to add to your page.

 

Let’s bypass this code for the moment and click the Advanced Settings option. This will bring you to Facebook’s Code Generator wizard. Once the generated code has been added to the page, the HTML page runs the JavaScript when it loads in a browser, reaches out to Facebook for the content to be displayed, and then displays it in your browser.

 

JavaScript can be used to do many things on your website, such as adding scripts to the page that can alternate through a set of featured images. An effect like this enhances the user’s experience on your website.

 

As you can see, embedding content using JavaScript is a viable way to add content to your static website. There is, however, a potential drawback. Since this is run within the client-side browser, if the visitor has disabled the use of JavaScript, your script will not work and the content will not be loaded.

 

These can be controlled by setting another property in the tag. To see a full list of supported properties for HTML tags, I highly recommend the W3 Schools website located at www.w3schools.com/.

 

Getting a Notification of a New Message

It’s great that your contact form is now working and visitors to your website can interact with you and leave you a message, but it would not be great to have to check your S3 bucket daily to see if there are new messages.

 

Thankfully the AWS Platform has yet another service that will help you out by letting you know when a new message arrives: Simple Notification Service (SNS).

 

AWS SNS is a notification service that every other service on the AWS Platform can interact with as a way to indicate when an action has occurred.

 

Notifications are sent to topics, and topics can be subscribed to by a variety of methods. To finish off this blog, I’ll walk you through setting up SNS to create a new topic and subscribing to that topic via email so that you’ll get an email message when a new message becomes available in your S3 bucket.

 

The easiest way to set this up is by heading to the SNS dashboard from within the AWS Console. Click the Services menu; under the Messaging heading, choose the SNS link.

 

After clicking the “Create topic” button, you will be brought back to the Topics screen where you can see your new topic listed with the ARN value.

 

Make note of this ARN; it will be needed in the next step. From here you need to allow S3—and specifically the bucket holding your messages—to publish notifications to this new topic.

 

The second item to adjust is the aws:SourceArn condition property, which should reference the S3 bucket that you created to hold your messages. Once you’ve adjusted the policy code accordingly, click the Update Policy button.

 

From here, you’ll head to the S3 bucket that you created to hold these messages.

 

Click the Services menu and click the S3 service link. From the dashboard, select the bucket you created to hold your messages and then choose to make the properties of the bucket visible by clicking the Properties tab near the top right-hand corner of the screen.

 

Expand the Events section and fill out the details: give the configured event a name, select when the event should be triggered, and choose to publish the event notification to SNS and the topic you created above.

 

After entering the information in the event configuration form, click the Save button to create the event notification.

Your last step is to head back to SNS and subscribe to your new topic. From the SNS Dashboard, click Topics and select the checkbox next to the name of the topic you recently created. Click the Actions menu and choose “Subscribe to the topic.”

 

Configure the subscription option to use email as the notification method and enter an email address where you would like to receive email notifications.

 

Click the Save Subscription button. A confirmation email will be sent to the email address that you entered in this configuration; you will need to confirm the subscription before you can test your contact form again to see if you receive a notification email.
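For reference, the topic-and-subscription steps above can also be sketched with the CLI. The topic name, account ID, region, and email address below are all hypothetical:

```shell
# Create the notification topic; the command prints the topic's ARN,
# which you need for the subscription and the S3 event configuration.
aws sns create-topic --name s3-new-message

# Subscribe an email address to the topic; AWS then sends the
# confirmation email described above.
aws sns subscribe \
  --topic-arn arn:aws:sns:us-east-1:123456789012:s3-new-message \
  --protocol email \
  --notification-endpoint you@example.com
```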

 

I have found that it can take up to an hour for the subscription confirmation email to be processed. After waiting a while, give your contact form another test and you should receive an email with the subject “Amazon S3 Notification.”

 

The email content isn’t the best format, but this is a way of letting you know that a new object has been created in the S3 bucket, which is better than having to check it manually. In later blogs, you’ll explore more advanced options in SNS, including the ability to tailor these messages to be more reader-friendly.
