How to Host a Static Website on AWS (2018)

A static website is one that consists of files and web pages that do not change based on user interaction. When a user visits the website, the content displayed to them is static in nature, meaning that it doesn’t change unless the author of that content updates the files with new content to be displayed.

 

This type of website is the least complex architecture to work with since the assets that make up the site are limited to files. As you progress through future blogs, you'll see how those assets grow beyond files to include application dependencies such as web servers and databases.

 

In this scenario, you will be setting up web hosting in Amazon Web Services (AWS) for a fictional local law office. This small law firm employs fewer than five staff members but realizes the importance of having a “web presence” for potential customers to learn about their services, read past client testimonials, and get contact information for the office.

 

It is important to understand a bit more about the content and the file assets that you’ll be working with in the next several blogs. Laying this foundation will allow you to see parallels in working with your own website content and how you will be able to migrate it to AWS.

 

Website Content Overview


This hosting scenario will consist of a static website made up of five pages. You will be using a template provided by www.templated.co; this site offers an excellent selection of responsive, clean HTML/CSS templates for use with your website. The web pages that you’ll be working with are described below.

 

Home: The home page will be the landing page of the website. This page will hold basic information about the law firm, a high-level overview of services provided, and important contact information.

 

Services: The services page will hold detailed information about the services offered by the law firm.

 

Testimonials: The testimonials page will hold the best of the feedback and testimonials that the firm has collected from previous and existing clients. As you progress through this web hosting scenario, I’ll show you how you can use page templates to add new testimonials and update this page over time. This same process can be used if you want to do the same with a page for the latest firm news or blog posts (more on this later).

 

About Us: The “About Us” page will hold information about the history of the firm, the founding partners, and other firm-specific information.

 

Contact: The contact page will hold contact information for the firm. This will include the full address and an embedded map, which will illustrate how you can still use external resources even though this is a static website. In addition, you’ll see how other AWS services such as Lambda can be leveraged from your static files to process information.

 

This will be your first exposure to the power of Amazon Web Services managed services offerings. Lambda dives a bit deeper, so I’ll save it until you’ve worked through some of the other foundational services.

 

Website Asset Overview

You have an HTML page for each of the pages described in the previous section. You also have a folder to hold your CSS content files (which control the styling of your website), a folder for your image assets, and a folder for any JavaScript that will be used.

 

Relevant AWS Services

Let’s briefly go over the services that I will be introducing in this website hosting scenario. The services listed below are some of the core Amazon Web Services infrastructure service offerings.

 

AWS S3: Amazon Web Services Simple Storage Service is Amazon’s core object-based storage service, and you’ll be using it in this hosting scenario. I’ll cover all of the fundamentals and I’ll give you some tips for working with this service when hosting static web content.

 

AWS Route53: Amazon Web Services Route53 DNS service is a fully managed domain name system that resolves domain names to IP addresses. This service offers so much more, though, and while working with this hosting scenario you’ll explore some of the basics, such as domain registration and working with DNS record sets. In later blogs, you’ll extend your knowledge by using some of the more advanced features available with this service.

 

AWS Lambda: Amazon Web Services Lambda is a managed service that enables you to leverage AWS infrastructure to process compute workloads. Think of it as a processing factory. If you have work to be done, you can have it done on your dedicated web servers or you can outsource the workload to AWS Lambda.

 

A service like this comes in very handy when trying to handle simple tasks without the need to invest in full server architecture. You’ll learn a bit about this service when you use it to process data submitted by visitors on your Contact web page. It is important to note that AWS Lambda is not available in all AWS regions at the time of writing.

 

To learn more about regional service availability, go to https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/.

 

Introduction to AWS Free Tier


After setting up your AWS account, I’ll walk you through how to secure your account and I’ll give you a high-level overview of how to access your account resources. For new customers, Amazon offers a 12-month period where you can try out many of the features of AWS at minimal cost. They call this offer period the AWS Free Tier.

 

The free tier allows for a certain set of AWS resources to be run under your account at no cost. When choosing to start resources within AWS, Amazon will highlight whether the resource or service is covered under the free tier usage terms.

 

I’ll also highlight whether a service or resource is included in the free tier as you look at each service in future blogs. For a current list of what is covered within the free tier usage terms, go to https://aws.amazon.com/free/.

 

This link is also the place where you sign up for the AWS account that you’ll use throughout this blog. When you visit the link, you will see information about which resources and services are covered under the free tier usage agreement. From this page, click the “Sign in to the Console” button, which will bring you to the login page.

 

From here you can log into your account if you already have one, or you can choose the “I am a new user” radio button, enter your email address, and click the “Sign in using your secure server” button. When you’ve done that, Amazon will verify behind the scenes that this is indeed a unique email address, require you to fill out your name and email address, and have you choose a password to be used with your new account.

 

After entering your name, email address, and new password, you will walk through the New Account Setup Wizard where additional personal information will be collected and where you will specify your billing information. Although the free tier allows for a limited amount of free usage per month for a period of 12 months, a valid credit card is required to complete the new account process.

 

During the new account setup process, you will also be required to give a telephone number where you can be reached and an automated verification can be performed. This automated verification will call the phone number that you provide and will deliver a verification code that you will then enter into the New Account Setup Wizard to complete the verification process.

 

Although this sounds like a lot of information gathered, it is important to understand that Amazon Web Services is a pay-as-you-go platform that offers flexibility and control over the resources that you use. It also gives you the option of trying services without locking you into a long-term contract like you find at other web hosting services or Internet service providers.

 

As part of the setup process, you also have the opportunity to select the level of AWS support that you would like to have associated with your account. The Basic Support plan is included with all AWS accounts; however, you do have the option to pay for a higher level of support and engagement based on your operating requirements.

 

For this first web hosting scenario example, you will use a limited amount of services, and I believe that the Basic Support plan is sufficient at this time. As you start to use more resources and have a need to receive answers to your questions in a more expedited fashion, you do have the ability to upgrade the support plan associated with your AWS account at any time.

 

Once the new account setup process is complete, you will receive an email welcoming you to AWS and giving you a plethora of information about how to get started using their services. You can now return to the link above, or to http://aws.amazon.com and click the “Log in to the Console” button and, when prompted, enter your new account email address and password.

 

Once logged in, you’ll be presented with the AWS Console, the central location for accessing all of the AWS resources and services. Up next I’ll dive into a Console overview and introduce AWS IAM (Identity and Access Management), a service used to control access to your AWS Account and resources.

 

Now that you have an account and are able to log into the Amazon Web Services Console, the first thing you need to do is to protect that account. The account that you just used to sign up for AWS and to sign into the console is known as the root account. This is the administrative account and it has full access to all of Amazon’s services within AWS.

 

Although it may seem quite convenient to continue to use this account for day-to-day administration, I suggest that you follow AWS best practices and lock this account down and create a new account that will be used for creating resources within AWS. To do this, let’s take a look at your first AWS managed service, IAM.

 

Introducing IAM and Securing the Root Account


To access AWS IAM from the console screen, search for the category “Security and Identity.” Underneath this category you will see a link for IAM, so click it.

 

You will be presented with the IAM Management screen. From here you can view the IAM user sign-in link, which is a direct link to your AWS account sign-in page. This can be shared with others whom you wish to have access to your account, or bookmarked for direct login access for future visits.

 

You will notice that you currently have zero users, groups, roles, policies, and identity providers set up. These five areas cover what you can manage within IAM. You can create users, who can access resources in your AWS account. You can create groups, which are collections of users. You can create roles, which are a type of account that can be assumed by other AWS resources.

 

You can create policies, which are permissions that can be assigned to users, groups, or roles, and you can utilize identity providers to allow for integration with third-party identity providers (think corporate domain account or your Facebook login), but this is a topic outside the scope of this blog.

 

In this section, you will focus on securing your root account as best as you can and creating a new group and account for day-to-day use. The root account information should be stored away for safekeeping and only accessed if the new administrator account that you’re going to create is lost.

 

Steps to Secure Your AWS Root Account

The following steps are needed to secure your AWS root account:

  • Step 1: Implement a strong password policy
  • Step 2: Create an Administrators group
  • Step 3: Create a user and add it to the Administrators group
  • Step 4: Secure your root account with multi-factor authentication

 

In Step 1, you will implement a strong password policy to be used in your AWS account. This password policy will apply to all accounts created, which is why you want to do this step first. From the IAM home screen, click the drop-down arrow next to “Apply an IAM Password Policy” and then click the “Manage Password Policy” button. 

 

From this screen, you can customize a policy that will enforce all accounts created to meet certain minimum requirements as it relates to passwords used.

 

My recommendations for choosing settings for a strong password policy are the following:

  • Require at least one uppercase letter.
  • Require at least one lowercase letter.
  • Require at least one number.
  • Require at least one non-alphanumeric character.

 

These settings are what I recommend for the majority of accounts. For accounts that will have Administrative Account access, I also recommend adding the additional security options available on this screen to increase the level of security on the account.

 

Once you have selected the features you would like to enable on your password policy, click the “Apply password policy” button to save your changes. The dialog box at the top of the screen should indicate “Successfully updated password policy.”
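As an aside, once you install the AWS CLI (introduced later in this blog), a policy like this could also be applied from the command line. This is just a sketch, assuming the four requirements above plus an 8-character minimum length:

aws iam update-account-password-policy --minimum-password-length 8 --require-uppercase-characters --require-lowercase-characters --require-numbers --require-symbols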

 

Now you’ll move to Step 2 and create your first group to hold the user that will be created in Step 3. From the password policy screen, click the Dashboard link in the top left-hand corner; this will bring you back to the AWS IAM home screen. From here, click the Groups link from the left-hand navigation and you will go to a screen where you can click the “Create New Group” button to launch the Create New Group Wizard.

 

In the first part of this wizard, you will choose a name for your group; let’s call this first one “Administrators.” Note that this is the same way you can create other groups to hold other accounts.

 

After you enter your group name, click the Next Step button in the bottom right-hand corner to progress to the second part of group creation, attaching a policy to the group. This is an important concept to understand because the policy that you will apply to this group will be inherited by all users that are members of that group.

 

This is an easy way to manage permissions within your AWS account because you are able to assign permissions at the group level and you do not have to worry about assigning them to each user account individually.

 

You can read more about AWS account access policies at http://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html.

 

On the Attach Policy screen, you are presented with a list of prebuilt policies that make it easy to grant permissions by AWS service area. You can also create a custom policy to fit your needs;

 

for example, if you have a collaborator who needs access to upload files to your website, but you want to lock their access down to a list of specific folders and not allow them to access anything else, a custom policy can be created and applied to a group that you create for that user.

 

For your case, you’re going to choose the first option, “AdministratorAccess,” which gives administrative access to all AWS services. Place a checkmark in the box next to the policy option, and then click the Next Step button to progress to the final step of the Create New Group Wizard, which is to review your selections.

 

Review the information and click the Create Group button in the bottom right-hand corner of the screen. Your new group will now be listed on the Groups page.
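If you ever want to script this step, a rough CLI equivalent of what you just did in the console might look like the following sketch (the group name matches the one above, and AdministratorAccess is the AWS-managed policy you selected):

aws iam create-group --group-name Administrators
aws iam attach-group-policy --group-name Administrators --policy-arn arn:aws:iam::aws:policy/AdministratorAccess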

 

Now you’ll move on to the third of four steps to secure your AWS root account. Step 3 involves creating your first user and making them a member of the Administrators group that you just created. Click the Users link in the left-hand navigation and then click the “Create New User” button to start the Create User Wizard.

 

The creation of users is a pretty straightforward process: enter the username to be used and click the Create button to create the user. For your first account, I recommend using the format firstname.lastname for your user account, but feel free to use any format that you find easy to remember.

 

There is an important option in this process, which is enabled by default, to generate access keys for each user. You want to make sure that you do this for any account that will need access to AWS resources not only through the AWS Console (logging in via your IAM users sign-in URL, shown on the IAM Dashboard) but also through other tools or integration points.

 

You will be using an integration point that will require these access keys in the next blog, so be sure to download the credentials after the user is created and keep this file in a very safe place that you will be able to access when the time comes.

 

After the user is created, you will be presented with a confirmation screen. The Download Credentials button can be found in the bottom right-hand corner of this screen. Once you have downloaded the user credentials file, which will be in CSV format, you can click the Close button to exit the New User Wizard.

 

Although you have now created your user, by default the account does not yet have a password, so you’ll set one up. You should be back at the AWS IAM Users screen with the new user you just created listed; if not, just click the Users link in the left-hand navigation. From here, place a checkmark next to the user account you would like to manage, and then choose the User Actions drop-down menu near the top of the user list.

 

From the drop-down list, choose “Manage Password.” You will have the option to have AWS generate a password (based on the password policy you created in Step 1) or to assign a custom password that you will deliver to the new user. You will also have the option to force the user to change their password upon first login.

 

This is recommended because it allows the user to create their own password to be used when logging into your AWS account, but don’t worry; even the one that they generate will be forced to comply with the password policy that you created in Step 1. Select a password option and click “Apply.”
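For reference, the whole user setup could also be sketched with the CLI; jane.doe and the temporary password shown here are placeholders, not values from this scenario:

aws iam create-user --user-name jane.doe
aws iam add-user-to-group --user-name jane.doe --group-name Administrators
aws iam create-access-key --user-name jane.doe
aws iam create-login-profile --user-name jane.doe --password 'TempPassw0rd!' --password-reset-required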

 

You will want to log this password in a safe location because this first account will be the one that you will be using to do the majority of your tasks moving forward.

 

The last step in securing your AWS Root Account is to enable multi-factor authentication (MFA). To do this last step, let’s head back to the IAM Dashboard/Home screen by clicking the Dashboard link in the left-hand navigation.

 

At this point, your Security Status section should be looking much better, with four of the five items listed with a green checkbox next to them showing that they have been completed. The last item listed with a caution symbol is to enable multi-factor authentication on the AWS root account.

 

MFA by design requires more than one authentication factor to access your account. This means that in addition to a username and password, another factor of authentication will be required for you to log into your AWS root account.

 

This may seem painful, but securing your AWS root account is an important thing to do; if someone compromises this account, they have the keys to the kingdom and can start using resources and services that could end up costing you a lot of money.

 

The easier of the two ways to use and enable MFA on your root account is to use a virtual MFA device loaded on a cell phone. Google Authenticator is an MFA application that can be downloaded from the Apple or Android App Store and can be used with AWS. If you do not have a mobile device to use as a virtual MFA device, you do have the option of ordering a hardware device from Amazon.

 

In addition to these options, there are desktop applications that can be loaded on your PC to act as a virtual MFA device. Installation and configuration of specific MFA applications go beyond the scope of this blog, but you can find more information about the MFA setup options at https://aws.amazon.com/mfa.

 

After you’ve downloaded Google Authenticator on your mobile phone or installed an MFA application on your PC, click the drop-down arrow to the right of “Activate MFA on your root account” and then click the “Manage MFA” button. Choose the “A virtual MFA device” radio button and click the “Next Step” button.

 

At this point, you will be reminded that you need a compatible MFA software application on your mobile phone or computer to continue. Click the Next Step button when you’re ready to proceed. From the next screen, you can use your smartphone to scan the QR code presented or use the secret keys for manual configuration.

 

Once this information is entered into your MFA application, you’ll be presented with your first 6-digit MFA code, so enter it in the first text input field. Wait 30 seconds until the 6-digit code refreshes, enter that next code in the sequence into the second text input field, click the “Activate MFA Device” button, and then click the Finish button.

 

Once you do this, your AWS root account will now have MFA enabled; each time you log in with this account you’ll need to use your AWS root account username and password as well as the generated MFA code to gain access to the AWS Console.

 

AWS Account Access Overview

Now that you have set up your root account and an administrative user account in your AWS account, it is time to talk a bit about the access methods and concepts for using your AWS resources.

 

The first method of accessing your AWS account and resources is by logging into the AWS website and using the web interface, which is referred to as the AWS Console. You can log into the AWS console with your root account, or any other account that you create within AWS IAM.

 

As you also learned when you set up your first user account earlier in this blog, each user has not only a username and password but a set of security credentials called keys that can be used to access account resources programmatically. These keys are used to identify your user account through various tools or software integrations.

 

An example of a tool that implements the use of these keys is the AWS command-line interface (CLI). This tool can be installed on your desktop PC and can be used to interact, from the command line, with your AWS account and resources. Everything that can be done via the AWS Console can be done via command-line commands using the CLI.

 

This makes the CLI a great resource to use when performing tasks that are repetitive in nature. You can read more about the CLI in the AWS documentation at http://aws.amazon.com/cli/.

 

Third-party software is another example of an integration that will use your IAM user account security credential keys for access to resources within your AWS account. In the next blog, you will walk through setting up CloudBerry Explorer, a client interface that allows you to access AWS S3 like an FTP client or File Explorer.

 

This software will use your credential keys to impersonate your user account as it accesses your AWS S3 resources. The three methods presented here will be the main forms of access that you’ll use throughout this blog when accessing and working with your AWS account resources.

 

Amazon Web Services: S3 Overview


AWS S3 is Amazon’s highly redundant, highly available storage solution for the AWS Platform. The easiest way to think of it is as a file system that allows for the organization and storage of files. In contrast to an operating system’s file system, AWS S3 is object-based. This is an important concept to understand: S3 allows for the storage of objects, and these objects have properties that control information about them.

 

I’ll give you a good summary of these properties later in this blog and you’ll dive much deeper into using object properties in later web hosting scenarios. Objects stored in AWS S3 can be organized into folders, and folders are organized into buckets. A bucket is a collection of objects. An object can’t exist outside of a bucket, so a bucket is the top-level storage unit within AWS S3 and the first thing that you’ll create to hold your website content.

 

As mentioned, Amazon S3 is highly available, meaning that data stored in S3 is replicated across multiple facilities within its AWS region. A region is a geographical area that holds a minimum of two availability zones.

 

An availability zone can be thought of as a data center location. In AWS, some services are scoped to a given region, such as Oregon or North Virginia, but the two services that you’ve been introduced to, IAM and S3, are global in scope (an S3 bucket name, for example, is unique across all regions). When you upload data to S3, it is replicated across multiple availability zones to allow for extremely fault-tolerant storage.

 

In terms of service level agreements (SLA), Amazon promises 99.999999999% (also known as “eleven nines”) durability for S3 objects and 99.99% availability. This can be summarized by saying that the data you store in S3 will be there when you need it and loss of data is very rare.

 

Accessing AWS S3 via the Console

Now let’s see how to access S3 via the console. Earlier, you created an administrative user account that you can use for day-to-day console access and management, so go ahead and log in as that user. After you successfully log into the console, you will be presented with the main page, which lists all of the Amazon Web Services resources sorted into categories. You will find S3 listed under the “Storage and Content Delivery” category heading.

 

Click the S3 link under that category to be brought to the S3 landing page. Since you have not yet done any work in S3, the landing page will present you with an introduction to the AWS S3 service and a link to full documentation for all the features of the service.

 

Most of the AWS platform services have a similar landing page that explains the service, offers links to documentation, and gives you a call to action to get started. On this landing page, the call to action is to create your first bucket. On the main console page, a list of all services is displayed, and you can click and drag a service to the top to bookmark it for easier access in the future.

 

Creating a Bucket for Web Content

As mentioned, a bucket is the top-level organization structure for S3 content. It can hold folders and file objects within it. In this section, you’ll create a bucket to hold your static website content files and then you’ll upload your content files. You’ll also examine some of the properties and settings of file objects and S3 buckets.

 

After clicking the Create Bucket button, choose an AWS region where the bucket will be created. The home region for your bucket should be the one that is closest to the majority of your website visitors to minimize latency.

 

If you are unsure about where your visitors will be coming from, simply choose the default AWS region selection that is presented. S3 content is replicated across multiple availability zones as part of the service; there is nothing you need to do other than upload your content to get high availability and access.

 

You are going to name your bucket the exact name of the website that you want to host. I use www.nadonhosting.com; you will choose a bucket name that is unique to you across all other AWS accounts.

 

As mentioned, S3 is a global platform service, so each bucket name must be unique across the platform. This means that once any AWS account creates a bucket with a specific name, that name cannot be used by any other account for as long as the bucket exists.

 

Click the Create button once you have entered your bucket name and chosen an AWS region. Your bucket will be created and you’ll be brought to the S3 main administration page.

 

From this page you can see that the current bucket is www.nadonhosting.com and the properties of that bucket are displayed to the right-hand side of the screen. From here you can manage all aspects of this bucket you just created.

 

You can see the bucket name and the AWS region that it resides in; you can set permissions, enable logging and versioning, and much more. You’ll dig into these bucket properties soon, but for now, let’s upload your static website content. Click the bucket name to navigate to that bucket.

 

You can upload your content by clicking the Upload button, which will bring you to the Upload - Select Files and Folders wizard. S3 objects can range in size from 0 bytes up to a maximum of 5 terabytes! The wizard allows you to select files by clicking the Add Files button or by dragging them from a File Explorer window onto the wizard window.

 

I’ve included sample files with this blog that can be used to follow along. Once you have downloaded them and unzipped them to a folder on your local computer, you can use them as the content to upload to S3. I recommend the drag-and-drop method because this will allow for the upload of folders and files in the same operation. Select the folders and files, and click the Start Upload button.

 

The upload process will start; when it completes, congratulations, you now have content hosted in AWS S3! Since you took a look at S3 bucket properties earlier, let’s now investigate the object-level properties of a file you uploaded. Click the index.html file and then click the Properties tab in the top right-hand corner of the screen.

 

When an object is selected in S3 and the Properties tab view is enabled, you will see all of the information related to that object, the S3 bucket in which it resides, and additional details such as object size, last modified date, and more. This should feel pretty familiar to the information you can get from a file’s properties in an operating system’s File Explorer window.

 

One difference that you will notice is that each object has a unique link property, which is the HTTP endpoint that this specific file is available from. By default, uploaded files become objects that have no permissions and are not publicly available. As part of making these objects available on the Internet, you will perform a step in the next blog to change a property to make them “public.”

 

If you were to copy the object link property of index.html and paste it into a browser, you would not be able to resolve the page since it is currently set to “private.” In fact, you would receive an error until you enable these files to be accessed over the Internet.

 

Accessing S3 Resources via the AWS CLI

Until this point in the blog you have only accessed your AWS S3 resources via the AWS Console. Although this is the most effective way to get started using S3, once you have resources (buckets, folders, and objects) in S3 you may want to interact with these resources in other ways.

 

Some of you are very familiar with using command-line interfaces (CLIs) to complete tasks, and AWS has a CLI that can be installed on your local computer and can give you access to all of your AWS platform resources (IAM, S3, and more).

 

The installation of the CLI is a bit outside of the scope of this blog, but Amazon offers excellent documentation on how to get this handy tool installed here: http://docs.aws.amazon.com/cli/latest/userguide/installing.html.

 

Once you have installed the CLI, open a shell window or command prompt and enter the following command to verify the installation:

aws --version

 

The AWS CLI allows you to perform any action that you can perform through the AWS Console via the command line. It relies on the secret key and access ID of a given AWS IAM account to authenticate and access AWS platform resources.
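Before the CLI can authenticate, it must be configured with those keys. Running the following command prompts for your access key ID, secret access key, default region, and output format, and stores them in a local credentials file:

aws configure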

 

As you’ll remember, you created that account as an administrative account with full access to all AWS platform resources. To verify that the account is set up correctly, let’s issue a command to get a view of available buckets in your S3 account:

aws s3 ls

 

This command first calls aws, and then it states that you want to use the s3 resource. The last part of the command lists available resources. A full command reference for AWS S3 can be found at http://docs.aws.amazon.com/cli/latest/reference/s3/. You’ll focus on what I feel will be of the most use to you when you need to update your static website content.

 

In the response to that command, you’ll see a listing of the single S3 bucket that you created. If you’d like to see a listing of all files in that bucket, you can add the name of the bucket to the command as follows (replace my website name with yours):

aws s3 ls www.nadonhosting.com

 

This command will list all objects and folders in the www.nadonhosting.com bucket. Other useful options for finding out what resources are in S3 are --summarize, --recursive, and --human-readable.
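For example, the following variation (using my bucket name; substitute yours) recursively lists the bucket contents with friendly sizes and a summary of the total object count and size:

aws s3 ls s3://www.nadonhosting.com --recursive --human-readable --summarize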

 

The last command that I’d like to cover is one of the most useful commands for managing your static website content: the sync command. The command will synchronize a local folder with your S3 bucket and can be used as a very simple way to push any content changes that you’ve made to your local files up to S3 without having to log in to the AWS Console.

 

In addition, the sync command can also sync content between S3 buckets, making it an easy method for moving files around in AWS. In the following code, I have my local directory named the exact same name as my S3 bucket and I changed my working directory to be the one that has the content that I’d like to sync.

 

Doing this allows me to just pass the “.” in the command to reference the current working directory as the source of the sync process. I’ve also made an update to the about_us.html file and run the sync command with the --dryrun option. This option tells you what would happen without actually doing it; it’s good for testing the sync before performing it.

aws s3 sync . s3://www.nadonhosting.com --dryrun

 

In this CLI output, I can see that there is only one file that has been changed and needs to be updated. When I’m OK with this process happening, I can rerun the command without the --dryrun option and it will sync my content and update the object in my S3 bucket.
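For clarity, that final command is simply the same sync without the --dryrun option:

aws s3 sync . s3://www.nadonhosting.com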

 

This process makes it very easy to update your website content via a single command rather than logging into the AWS Console, navigating to the S3 service, navigating to your bucket resource, and uploading the file manually.

 

In these code examples, you were accessing the default credential profile by not specifying the profile option in the command. It is worth noting that the CLI can support multiple named profiles. Once these are set up in the credentials file, using the profile option in the command will allow you to switch between profiles.

 

This is useful when you have multiple AWS accounts that you want to manage with the CLI. Another example is if you want to switch between users within a specific account, such as one user that has read-only access vs. one that has the ability to create, update, or delete resources.

 

Accessing S3 Resources via Third-Party Applications

You’ve now learned how to access your S3 resources through the AWS Console and the command-line interface. Let’s briefly talk about another way that you can access your AWS S3 resources: third-party applications that can use AWS IAM credentials in a similar way to how the CLI uses them. One application that I have found particularly helpful for managing S3 content is CloudBerry Explorer for Amazon S3. 

 

There are two versions: a freeware version and a Pro version. My recommendation is to use the freeware version, and if you find it to be valuable, you can upgrade when you are ready. The configuration of the client software will ask you for your AWS account access keys and secret keys, so be sure to have them handy from your work in the last blog.

 

This interface should seem pretty familiar to you in that it is a graphical user interface (GUI) that feels very much like an FTP client application or File Explorer. CloudBerry has built in all of the functionality that is exposed through the AWS CLI into this interface, enabling you to create and delete resources, sync locations, and work with your resources in an easy way.

 

As you can see, although the AWS Console is an effective way to work with all of the AWS platform resources, in certain cases, such as with S3, there are many other ways to work with your resources. 

 

I also want to mention that although I have specifically talked about using AWS S3 to host your website content, the platform is a highly available, highly durable, inexpensive solution for all of your storage needs. Personally, I use this platform for backing up personal data, photos, music, and other files in addition to using it to host website content.

 

Setting Up Your Website Content and Domain


 

This blog picks up where the previous blog left off. Now that you know how to store your content in AWS S3 and you have been introduced to several ways to access that content, it is time to walk through the final steps of setting up a static website in AWS: making the bucket content public, creating policies to control access, and enabling website hosting on the S3 bucket.

 

From there I’ll show you how to set up a domain name in your third core AWS service, Route53, to front your new static content. We have quite a bit to cover to get your static website content ready for delivery to your customers, so let’s get started.

 

Making Your Content Public

The first thing that you need to do is to make your content available for your visitors to view. I mentioned that content has no permissions set on it when you upload it, and when you tried to view it in a web browser, you received an error. You’re going to fix that by using an S3 bucket policy that allows all content in the bucket to be viewed publicly.

 

When an S3 bucket is created, the only person that is granted any permission to that bucket is the account that created it. This account has full permissions on the bucket and can read the contents of the bucket (list), write to the bucket (upload), delete objects in the bucket, view objects in the bucket (via the web endpoint, once the object is made public), and edit the properties of objects within the bucket.

 

An important concept to understand is that control can be set at the bucket level and at the object level. At the bucket level, access to the resources within the bucket is controlled through the ACL (access control list). Each object within the bucket, excluding folders, can have its own object-level permissions set.

 

This allows for fine-grained access control to the content that you host within S3. There are multiple ways to grant users, visitors, and other AWS accounts permissions to S3 resources. The first way is to add additional grantees.

 

As you recall from above, a grantee can be given list, view, upload/delete, and edit permissions at the bucket level. You could make your content public by granting the Everyone grantee the list and view permissions, but doing this would allow all visitors to view a listing of your bucket and all objects within it.

 

This would be a security concern, so rather than doing it that way you will make use of setting permissions/rights to the content in your bucket via a bucket policy.

 

Bucket Policies and Permissions

A policy is a JSON-formatted document that can be applied to an AWS resource such as an S3 bucket to control access to that resource by defining actions, resources, and effects. Actions are predefined work that can be performed against a resource. For in-depth information about using S3 actions, AWS has a resource available at http://docs.aws.amazon.com/AmazonS3/latest/dev/using-with-s3-actions.html.

 

An example of an action is the ability to list an S3 bucket’s contents. Resources are AWS resources and can be things like an IAM account or, in this case, an S3 bucket. An effect is the end result of the permission or control that you are looking to enforce, such as Allow or Deny. The policy that you will apply to your bucket is shown below.
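Here is a minimal sketch of that policy; the Version value is standard policy boilerplate, the Sid is an arbitrary label of my choosing, and you will substitute your own bucket name for www.nadonhosting.com:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::www.nadonhosting.com/*"
    }
  ]
}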

 

In this policy code, you set the Effect to “Allow”; the Principal value of “*” is a wildcard meaning everyone; the Action lists the predefined method s3:GetObject, which covers requesting an S3 object; and the Resource is set to the S3 bucket name that is holding your static web content. In this example, you should update the code to list the name of the bucket that you created to host your static web content rather than www.nadonhosting.com.

 

To apply the bucket policy, select your bucket in S3 and view the bucket properties. From this page, drop down the Permissions tab and click the “Add bucket policy” button to open up the Bucket Policy Editor. In this dialog box, you will paste your edited text content from the JSON sample file.

 

Once completed, click the Save button and your bucket policy will be applied. Applying this policy will mean that everyone can perform the Get Object method against any content in the www.nadonhosting.com S3 bucket. If you click your bucket name, explore your content, and choose the link, you should now be successful in accessing the object endpoint via a web browser and no longer receive the error message you experienced previously.

 

Controlling Object Access/S3 Lifecycle Management

Since you have applied the policy at the bucket level, this means that any object within that bucket will inherit the control set forth in that policy. This is fine for your static web content because you want all the HTML files, image files, and associated content to be available to the public via the Internet, but what if you have files that you don’t want to be accessible? One way to tackle this is to create a separate bucket and apply a specific policy to that content.

 

In future blogs, I’ll cover more advanced topics on locking down S3 content, including creating links to objects that are available for just a specific amount of time and are not available after that time period has passed. I do want to show how easy it is to create a similar concept using S3’s lifecycle management features for buckets.

 

Let’s use an example: your law firm wants to offer clients a 10% discount coupon for services. Using S3 policies, you can create a bucket called “nadonhostingpromotions” (remember that S3 bucket names must be unique, which is why you’re using a very specific bucket name) and use Lifecycle rules for content that resides in that bucket.

 

S3 has an excellent Lifecycle feature that can be used with the service to archive infrequently accessed data to lower cost storage options such as AWS Glacier or, in your case, set expiration on content that is created in this folder.

 

For this example, you will apply the same bucket policy so that content within this new bucket can be accessed by everyone on the Internet and can be linked from one of your static pages. Note that the policy looks very similar to what you used on your static web content bucket, but has the updated bucket name listed in the resource key value pair. 
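As a sketch, it is the same policy as before with only the Resource value changed to reference the promotions bucket:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::nadonhostingpromotions/*"
    }
  ]
}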

 

Now that you know that the public will be able to access content created in the bucket, let’s add a Lifecycle rule to have any content that is created in this bucket expire in 30 days.

 

To do this, select the bucket in the S3 management console and be sure that the Properties tab is selected. Open the tab labeled Lifecycle and click the Add Rule button to start the Lifecycle Rules Wizard.

 

On the first page of the wizard you need to select how the rules will apply. It’s important to note that you can apply a Lifecycle rule against a subset of a bucket; the subset can be defined with the use of a folder or a prefix by selecting the “A Prefix” option. You’re going to keep things simple and apply the rule to the entire bucket.

 

After selecting the radio button to apply the rule to the whole bucket, click the Configure Rule button.

 

The second page of the wizard asks you what actions you want to perform on the objects in this bucket. You will choose the Permanently Delete option by marking the checkbox to select that option. In the “Days after the object’s creation date” option, enter the number 30.

 

The Lifecycle Rules Wizard will give you a visual representation in the form of an example of this rule and its effect on an object that was uploaded to the bucket on the current date and showing when it will be deleted if this rule is applied. Click the Review button to review the rule before applying it. Once reviewed, click the “Create and Activate Rule” button to create the Lifecycle rule. 
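If you prefer the command line, a rough equivalent of the rule you just built in the wizard could be applied with the s3api commands; lifecycle.json is a local file name I’ve chosen for illustration:

aws s3api put-bucket-lifecycle-configuration --bucket nadonhostingpromotions --lifecycle-configuration file://lifecycle.json

where lifecycle.json contains:

{
  "Rules": [
    {
      "ID": "DeletePromotionalContent",
      "Status": "Enabled",
      "Filter": { "Prefix": "" },
      "Expiration": { "Days": 30 }
    }
  ]
}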

 

Now that you have set your Lifecycle rule on this bucket, you can upload a file to verify that it is working as expected. In my case, I uploaded a file called promotionalCoupon.pdf to the nadonhostingpromotions bucket.

 

After the upload completed, when viewing the object properties, you can see that the expiration date of the content is now set 30 days after the Last Modified date and you can also see the DeletePromotionalContent rule listed as the expiration rule that applies to the content expiration date.

 

With this bucket and rule in place, you can now link to this coupon using the link endpoint from any of your static web pages, and the file will be available for the next 30 days. After day 30, the content will be deleted and that promotional coupon will no longer be available.

 

Although this example is not perfect (it would be a good idea to set a reminder to remove the link from your website on the day that the content expires so that your visitors aren’t presented with a “file not found” message), it does illustrate a way that you can have static content available for a given period of days using lifecycle management features of S3.

 

Enabling Website Hosting on S3 Buckets

I have talked quite a bit about S3 and the basics of managing content within buckets as well as how to access this content via each individual object’s web endpoint.

 

You know that your content is accessible from the Internet, but only one object at a time. Although you could link from one object to another using each public object’s endpoint, this would be a nightmare for managing the collection of content as a website. AWS S3 has a feature that makes this task easier: static website hosting. The feature can be found in the Properties screen of any S3 bucket.

 

After a brief description of this feature, you are presented with three options for configuration. By default, all S3 buckets created have this feature configuration set to “Do not enable website hosting.”

 

The third radio button, which is labeled “Redirect all requests to another hostname,” can be used to redirect website traffic bound to this S3 bucket endpoint to another location. I’ll discuss this configuration option in a bit, so let’s just move forward by choosing the second radio button option of “Enable website hosting” for your S3 bucket.

 

When you choose this option, you need to fill in at least one additional piece of information, and that is the name of your main page to be served at your S3 bucket endpoint. In my case, I’m going to enter index.html; however, if you have a different name for your landing page, you can enter it here. Once you enter your home page name in the Index Document field, click the Save button to complete the configuration.
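For reference, the same configuration could be applied with a single CLI command (substituting your own bucket name and index page):

aws s3 website s3://www.nadonhosting.com/ --index-document index.html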

 

Browsing Your Website


 

Now that you have enabled static website hosting on your S3 bucket you can open a web browser and point to the S3 bucket endpoint domain name listed in that section on the S3 bucket’s Properties page.  

 

When this endpoint is called in a browser, S3 checks the configuration that you set up and issues a GetObject method against the value that you entered into the Index Document field. In your case, this was index.html and this can be seen being returned in the browser. 

 

At this point, your static website is fully functional. HTML is being delivered by S3 and rendered in your browser. You have a URL that your visitors can access, but it is not a “friendly” URL. I don’t know about you, but I don’t want to print http://www.nadonhosting.com.s3-website-us-west-2.amazonaws.com/ on my business cards, nor do I want to say that mouthful of words when telling potential customers how to access my site. The logical next step is to set up a domain name to point to this S3 bucket.

 

Setting Up a Domain with Route53

Amazon Web Services offers a highly available DNS managed service called Route53. The name comes from port 53, the port used for DNS services. AWS backs this service with a 100% service level agreement (SLA). This means that the service is guaranteed to be up at all times, which is a very rare thing to find in the information technology industry.

 

DNS servers are used to map domain names to IP addresses and are the backbone of what allows you to type www.google.com into a web browser and have the Google website resolve before your eyes. In the background, a series of calls are made to DNS servers to find the IP address of the server that hosts the content for the given domain name.

 

Route53 is very easy to use and supports the registration of new domains, transfer of existing domains, and full DNS management of your domain name. 

 

Registering a New Domain


The most straightforward way for you to get started is to register a new domain name to use with your static website. If you recall, I had you name the S3 bucket the exact same name as your website domain name. In the example I’ve been using in this section, mine is www.nadonhosting.com.

 

To register this domain name, you’ll first log into the AWS Console, click the Services drop-down, and choose Route53. As a reminder, you could also click the Edit link at the top of the console screen and add this shortcut to your Console for easier access in the future.

 

Once you’ve clicked the Route53 service, you’ll be presented with a familiar AWS Getting Started screen that you’ll see on most services the first time that you start using them. 

 

Since you are going to first go through the steps of registering a new domain name, let’s choose the domain registration option. When you click this link you are brought to the Registered Domains page on your Route53 Dashboard.

 

You don’t have any domains registered yet, so it will be blank. At the top of the screen are buttons named Register Domain, Transfer Domain, and Domain Billing Report. For this section, you will choose the Register Domain option.

 

Clicking this button will start the Domain Registration Wizard. You will first be prompted for your domain name and top-level domain (TLD) type such as .com, .org, .edu, etc. There are different requirements for each TLD and you may want to research this first if you want to use something different than .com.

 

Once you have entered your domain name and chosen which TLD you want to register, click the Check button to see if that domain name is available. If the domain status is “Available”, click the Add to Cart button to add the domain to your shopping cart.

 

If not, you may need to search again for a more specific name or use a different TLD. Once the domain has been added to your cart, click the Continue button at the bottom of the screen to proceed with the next step in the Domain Registration Wizard.

 

The second step in the Domain Registration Wizard is where you add your contact information for the domain. Fill in all relevant details and be sure that you have access to the email address that you use here because this will be the address contacted for renewals and if any changes are made to the domain or associated DNS records. Once your information has been entered, click the Continue button to move on to the final step of the Domain Registration Wizard.

 

In the final step, you review your domain contact and order information. If all of the information looks correct, read and accept the terms by clicking the Accept Terms checkbox and then click the Complete Purchase button.

 

After you click the Complete Purchase button, a request to register the domain will be sent to the AWS Domain Registrar and your billing credit card will be charged the domain registration fee.

 

In addition, you will receive an email at the email address you specified in the domain contact details. You must acknowledge this email or the domain will go into a suspended state with the registrar and will not be available for use until resolved.

 

When the domain registration has completed, it will move from the Pending Requests screen into the Registered Domains screen. Once the domain has been registered, a default zone file with DNS records will be created. I’ll talk more about this in the “DNS Zone File Basics” section later in this blog.

 

Transferring an Existing Domain

If you already have a website, chances are you already have a custom domain name. You may have registered the domain name through one of the popular domain registrars such as GoDaddy, Network Solutions, or Register.com.

 

The domain registrar is the entity that manages your domain record and metadata on the Internet. This is the vendor that will let you know when the domain record is expiring and will collect fees to renew the use of the domain name.

 

If you want to use AWS Route53 DNS servers, you have a couple of options. The first is to keep the domain with the current registrar and update the DNS nameservers to use AWS Route53. The second option is to transfer the management of the domain from the current registrar to AWS and, once the transfer is complete, use AWS DNS services.

 

Before I go over each option below, the important point is that you don’t specifically have to change the domain registrar to use AWS Route53 to host your DNS, but you must have access to your current registrar in order to manage the DNS name servers.

 

To illustrate the two options that I’ve described above, I’ll use an existing domain called heritagematters.ca that I currently have hosted at GoDaddy.com.

 

Let’s discuss the first option that was presented above. This option keeps the domain at GoDaddy, the current registrar, and has you updating the DNS name server records to use AWS Route 53.

 

Step 1: Add the domain to your Route53 account as a hosted zone. In the AWS Console Route 53 Dashboard, you’ll see a Hosted Zones navigation link. Hosted zones are a collection of DNS records associated with your domain. 

 

For this first step you need to add a hosted zone for the heritagematters.ca domain name to Route53; doing this will create DNS name servers that will host your hosted zone and serve as the source for your DNS records for the domain. Click the Create Hosted Zone button to expand the single-step wizard to create the hosted zone.

 

Once you have entered your details, click the Create button at the bottom of the screen to create your new hosted zone.

 

When the hosted zone is created, by default AWS moves you to the detail page for the hosted zone. AWS automatically creates NS records, which are the DNS name servers that are hosting the DNS record information for your domain. This is the information that you’ll need to enter in your current registrar (recall that mine is GoDaddy for heritagematters.ca).

 

AWS Route53 uses highly available DNS name servers that are spread across the globe on their regional infrastructure. 

 

Step 2: Now that you have created the hosted zone in your AWS Route53 account, log into your current registrar to update the domain’s name server records. In my case, I logged into my GoDaddy account and browsed to my registered domains list.

 

Click the Manage DNS button to go to the Domain Detail page. Once here, click the Change button under Name Servers. This will open up an input screen allowing you to change your DNS name server settings.

 

As explained briefly earlier, name servers are used to find information specific to your domain. You set up a hosted zone in AWS Route53 and by doing so create information on AWS Route53 name servers. This allows you to update your domain name server records to point to AWS Route53 for domain-specific information.

 

Once you have entered the name servers to be used, click the Save button to update the domain information at GoDaddy. This update can take as little as an hour or as long as 48 hours. The timeframe depends on your current registrar.

 

Now that this change has been made, I can use Route53 to manage my DNS records, which will allow me to set up the records needed to point the domain name at content hosted on AWS S3. I’ll talk more about this in the “DNS Zone File Basics” section below.

 

Now that I’ve covered the first option for using the AWS Route53 Managed DNS service, which is to keep the current registrar but use AWS Route53 name servers, let’s talk about the second option: to transfer the domain control and management over to AWS.

 

As part of the AWS Route53 domain transfer process, in which you move from your current registrar to AWS, the domain is automatically renewed for another one-year period. This makes the move worth evaluating for me now, since the transfer includes the renewal of the domain name at the new registrar, AWS.

 

This also means that if your domain is still a long way from expiring at its current registrar, you may want to hold off on the transfer, since it will add another year’s renewal when one isn’t yet needed. The domain transfer process can be broken down into the following steps:

 

  1. Unlock the domain at the current registrar.
  2. Obtain the authorization code needed to transfer the domain (not required by all registrars).
  3. Submit a transfer domain request to initiate the domain transfer.
  4. Have the Domain Administrative Contact approve the domain transfer request.
  5. Finalize the domain transfer to the new registrar.

 

The full process can take as little as two days or as long as several weeks to complete. One important detail is to make sure that your domain is updated with current contact information; doing so ensures that step four in the above list can be completed as quickly as possible. At this point, my domain at my current registrar has the domain lock setting disabled.

 

Since my domain is unlocked and ready to be transferred, I can click the “Email my code” link to have the registrar email me the authorization code needed to complete the transfer process.

 

Now I’m ready to move on to step three in the above list: requesting the domain transfer from within the AWS Route53 Dashboard. Once I’ve logged into the AWS Console and after I’ve navigated to the Route53 service, I click the Registered Domains navigation link.

 

From here I can see my currently registered domains and buttons named Register Domain and Transfer Domain. Clicking the Transfer Domain button will start the Transfer Domain Wizard.
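Before starting the wizard, you can confirm that a domain is eligible for transfer with a quick AWS CLI check. Note that the Route53 domain registration commands are served out of the us-east-1 region; the domain name below is my example.

    # Route53 domain registration commands run against us-east-1
    aws route53domains check-domain-transferability \
        --region us-east-1 \
        --domain-name heritagematters.ca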

 

You will need to fill in detailed information about the domain that you want to transfer. In my case, I need to make sure that I select the correct top-level domain as well since it is a country-specific domain name. 

 

Once you review your domain information, you’ll click the Add to Cart button to continue with the transfer process. You will then be prompted to put in the authorization code for the transfer, and you will have the choice to transfer the domain but keep the name servers the same or update them to the new domain’s name servers.

 

If the website that you are transferring is currently serving customers, an outage may be unavoidable, but to minimize the risk be sure to copy over all DNS records into the Route53 hosted zone that you created earlier and then choose to use those new name servers during the transfer process.
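A quick way to make sure no records are missed is to dump the zone’s current contents with the CLI before the transfer; the hosted zone ID below is a placeholder for your own.

    # List every record set in the hosted zone so each one can be copied over
    aws route53 list-resource-record-sets \
        --hosted-zone-id Z1EXAMPLE1234 \
        --output table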

 

After entering the authorization code and your name server preferences, click the Continue button to move on to the next step of the process, which will have you confirm your contact information for the domain name.

 

Once verified, click the Finish button to finalize your order and start the domain transfer process. You will then be presented with information that sets expectations for the timeframe and next steps of the transfer process.

 

Once the transfer process has been completed and all approvals have been fulfilled, you will find that your domain moves from the “Pending Requests” section of the Route53 Dashboard to the “Registered Domains” section.

 

Now that you have full control over a domain that you have registered and transferred, let’s talk about how to manage DNS settings in Route53, the various types of records you can create, and how to point your domain name at your S3 static website content.

 

DNS Zone File Basics in Route53


An introduction to DNS is a topic that could fill a blog on its own, so I can’t go too deep into it here. However, I do want to give you a basic understanding of the main types of activities that you will be able to perform within Route53 when working with your domain.

 

As mentioned, when you register a domain name, AWS creates a hosted zone for that domain and populates it with two types of DNS records (also known as record sets in AWS Route53 terminology).

 

The first type is an SOA record, which stands for “Start of Authority.” This record contains vital information about your domain and must exist for any domain name that is hosted on a DNS name server. It has information such as the primary name server that the domain is hosted on, information about domain contacts, and default domain configuration settings.

 

The second type of record set that is automatically created when you register your domain name is the NS record, which lists the name servers that hold information about the domain. In addition to these two types of domain record sets, you may end up using a combination of one or more of the following three types:

 

1. A Record: This type of record points a domain name or sub-domain resource to an IP address. An example is nadonhosting.com > 192.168.2.100. The IP address is usually that of a web server.

 

2. CNAME Record: Also known as a canonical name, this type of record points a domain name or sub-domain resource to another name. An example is www.nadonhosting.com > nadonhosting.com (using the same example from the A record above, if both of these records exist in the same hosted zone, the www.nadonhosting.com address would resolve to nadonhosting.com, which in turn would resolve to 192.168.2.100).

 

3. MX Record: Also known as a mail exchange record, this type of record points a domain name or sub-domain to the mail server host that handles email for that domain. These records are used when you have domain-specific email addresses, such as jason@nadonhosting.com.

 

Although these are not the only types of DNS record sets that you’ll be working with, they are the most frequently used for setting up a domain name to front web content and mail services. 
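To make these record types concrete, here is a sketch of how the examples above might appear in a traditional zone file (the mail server host name is hypothetical):

    ; A record: domain name to IP address
    nadonhosting.com.        IN  A      192.168.2.100

    ; CNAME record: sub-domain to canonical name
    www.nadonhosting.com.    IN  CNAME  nadonhosting.com.

    ; MX record: mail for the domain, delivered to a mail server host
    nadonhosting.com.        IN  MX     10 mail.nadonhosting.com.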

 

In terms of AWS, there is an additional configuration option when using Route53 that allows for the routing of domain names to sources of content: the alias. I’ll talk a bit about this next as you finish setting up your domain DNS settings to point to your AWS S3 content.

 

Route53 Alias Records

Route53 allows for the use of what is referred to as an alias record to point to AWS-specific resource endpoints such as elastic load balancers, CloudFront distributions, or S3 static website content.

 

This term/functionality does not exist in standard DNS; it is available only within Route53, and it allows for simple routing within the same hosted zone. In the next section, you will use an alias record to direct website traffic for your root domain, also known as the apex, to your S3 static website endpoint.

 

Adding DNS Records to Point to the Static Website Content

From within the Route53 Dashboard, choose the Hosted Zones link from the left-hand navigation menu. Once presented with the list of hosted zones, select the one that you want to add DNS record sets to; for this example, choose the nadonhosting.com hosted zone.

 

By default, there are currently only two record sets listed in your hosted zone. To create a new record set in this hosted zone, click the Create Record Set button at the top of the screen.

 

You must first create a record set to handle the “www” sub-domain. You will need to have your S3 static website endpoint handy because you will be entering it as the alias value for your record. As you may recall, the website endpoint is found in the S3 Bucket Properties window; in this example, it is www.nadonhosting.com.s3-website-us-west-2.amazonaws.com.

 

You will enter www in the Name input box, leave the A - IPv4 Address default type of record set, and then click the Yes radio button for the Alias option. Once selected, you can then enter your alias target into the input box; this will be the value of your S3 website endpoint noted above. Review the data on the record set to be created, and click the Create button at the bottom of the screen to create the record set.
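The same alias record can be created from the AWS CLI. This is a sketch: your own hosted zone ID is a placeholder, while Z3BJ6K6RIION7M is the fixed hosted zone ID that AWS publishes for S3 website endpoints in us-west-2 (this value differs by region, so check the AWS documentation for yours).

    # Create an alias record pointing www at the S3 website endpoint
    aws route53 change-resource-record-sets \
        --hosted-zone-id Z1EXAMPLE1234 \
        --change-batch '{
          "Changes": [{
            "Action": "CREATE",
            "ResourceRecordSet": {
              "Name": "www.nadonhosting.com",
              "Type": "A",
              "AliasTarget": {
                "HostedZoneId": "Z3BJ6K6RIION7M",
                "DNSName": "s3-website-us-west-2.amazonaws.com",
                "EvaluateTargetHealth": false
              }
            }
          }]
        }'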

 

This takes care of any visitors that enter www.nadonhosting.com into a browser. They will see the S3 static website that you have hosted due to the alias record routing the traffic to that specific resource. What about those that enter the domain name without the www sub-domain?

 

Those visitors will be greeted with a DNS error. To address this, perform the following steps:

 

  1. Set up a new S3 bucket with the same name as the root domain name, nadonhosting.com.
  2. Enable static website hosting on the S3 bucket, but choose the “Redirect all requests to another hostname” radio button.

 

  3. Enter the value of the hostname to redirect to in the input field. In your example, it is www.nadonhosting.com.
  4. Take note of the newly created S3 static website hosting endpoint for this new bucket.

 

  5. Browse to Route53 and select the hosted zone that you want to edit.
  6. Create a new record set, leave the Name field blank, leave the default type as A - IPv4 Address, and select the Yes radio button for the Alias option. In the input field, enter the newly created S3 website endpoint from Step 4. Click the Create button.

 

The steps above will create a new S3 bucket with website hosting enabled, but with the option to redirect that traffic to the host/domain name of www.nadonhosting.com. In addition, you’ve created a new record set for the apex or root domain so that any traffic received there will be directed to your S3 bucket, which then redirects to www.nadonhosting.com.
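For reference, the redirect bucket can also be configured from the AWS CLI; this is a minimal sketch assuming the us-west-2 region used throughout this example.

    # Create the bucket named after the root domain
    aws s3api create-bucket \
        --bucket nadonhosting.com \
        --region us-west-2 \
        --create-bucket-configuration LocationConstraint=us-west-2

    # Enable website hosting that redirects every request to the www host
    aws s3api put-bucket-website \
        --bucket nadonhosting.com \
        --website-configuration '{"RedirectAllRequestsTo": {"HostName": "www.nadonhosting.com"}}'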

 

Next-Level Static Content Hosting


In the last blog, you finished getting your files hosted in AWS S3 and then you worked through getting your static content set behind a domain name using AWS Route53 to register a domain and to add DNS entries to point to the static web content hosted in S3.

 

In this blog, you’ll switch your focus from setting up resources in Amazon Web Services to some of the management challenges that come with hosting a website that has only static content.

 

You’ll take a look at how you can use HTML templates to give you an easy way to update parts of your static website. Next, you’ll use embedded content in your website content to give a more dynamic feel. Finally, you’ll end this blog thinking very much outside the box by looking at technologies like client-side scripting and serverless architecture that will allow you to extend your static content and interact with your website visitors.

 

Limitations of Static Content Websites

The term static describes something that does not change at regular intervals. The Internet started with only static websites. These websites were usually text-based and may have used some images, text formatting, and colors to make them more visually appealing, but they were quite limited in the type of information that could be displayed and in how visitors could interact with that information.

 

Fast-forward a bit, and you started to see websites with an element of server-side processing that allowed a web page to be assembled from static elements as well as dynamic elements (information that could be updated and read from data storage). This server-side processing gave rise to systems referred to as content management systems (CMS), which allow you to dynamically manage and display content.

 

An example of a very popular CMS platform is WordPress. I’ll cover how to host such a platform using AWS in the second section of this blog, but since this section is dedicated to the static content website, I want to focus on how you can deal with the challenges and limits of static files.

 

Static files are just that: static. The content in these files does not change at runtime; they are delivered to the website visitor as they sit on disk. So if you want to update your website, you need to update the static content on your pages.

 

If you want to update a section on each page of your website and you happen to have five web pages, this means that you must update five separate pages. I would classify this as the first limitation of a static website: the need to manually update content.

 

The time that you have to dedicate to these updates could be a key factor when deciding if you want to choose static content as your main type of website content.

 

Since there is a time commitment involved in updating the content, the likelihood of you doing it often is low. This brings me to another limitation: the chance that the content will be out of date or irrelevant. Having out-of-date information displayed on your website, in my opinion, is worse than having no website presence at all.

 

Think of the frustration and lost time that bad information can cause visitors who come to your site. Losing a potential customer’s trust is not a good way to start a long-term relationship.

A final factor that could be considered a limitation is that you may need to learn some HTML and other technologies to be able to update your content efficiently.

 

With all this said, static hosting requires no server-side processing because the rendering of the static content takes place on the client side, meaning the browser and device of the person visiting your website. This translates into lower resource requirements from a hosting perspective and, in general, lower hosting charges.

 

A static website is also the least complex of the types that I’ll be covering, and for these reasons I felt it was worth dedicating this blog to discussing how to overcome some of these limitations.

 

Extending the Boundaries of Static Content


Through the rest of this blog I am going to talk about a few ways that you can manipulate your static content to give it a more dynamic feeling. I think it is important for me to set a proper expectation before getting started: updating your website is something that your visitors want you to do and it takes effort.

 

I’ll cover a few tricks to make things easy to update, but the actual updating is fully on you. You will only get out of your website what you are willing to put into it. Now let’s talk about the first way to extend your static website to make it feel like a dynamic website: by using web page templates.

 

As you may recall, your static website is made up of five pages. The home page uses a sidebar layout on the right to highlight your firm’s latest news. Although this sidebar content looks like it has been updated recently, because each article’s date is listed along with how many comments have been made, this is a bit of an illusion.

 

This is just HTML text; when a visitor clicks one of these images or article titles, they will be taken to a static web page template that has been used to create the content.

 

Updating this part of the site with new content is a two-step process. First, use the page that you have defined as a template to create the new article content. Second, update the index.html page to add the new article on top of the existing ones. I’ve included a file called article_template.html to give you the shell of an article page that you can use or edit to fit your needs.

 

The main areas to be updated are in the sections marked Sidebar and Content. In the Sidebar section, update the link to the article template (once you’ve decided what to save that file as, described below), the article date, the small image to be used, and the short description of the article. In the Content section, update the article title, article date, and article content.

 

You may also notice that I’ve placed a holder for an image that is currently set to "visibility:hidden" so that the image won’t be displayed by default. There may be times when you want an image at the top of your article; a simple update to point the image source at the image you want to use, plus removal of the style definition, will make the image visible in the article.

 

You will want to copy this section from the page you updated earlier and then paste this new entry right on top of the existing ones. Depending on how many articles you would like to display, you may want to remove the oldest entry. Personally, I think having the three most recent articles is sufficient on the home page.

 

Once you have pasted the information for the new article page, remember to save the file. Now that you’ve updated these two files locally on your system, you’ll need to push them up to AWS S3 so that your website will use the updated content. My preference is to use the AWS S3 sync command, as described in an earlier blog, or Cloudberry Lab S3 Explorer to copy the files up to the S3 bucket.
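As a refresher, the sync command compares a local folder to the bucket and uploads only what has changed, which keeps these small content updates quick; the local path below is a placeholder for wherever you keep your site files.

    # Preview what would be uploaded without changing anything
    aws s3 sync ./website s3://www.nadonhosting.com --dryrun

    # Push the updated files (index.html and the new article page)
    aws s3 sync ./website s3://www.nadonhosting.com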

 

Another option is to use the Console to upload the updated files. Any of the choices are fine; it is really your preference for which one to use. Although this may not seem like the easiest way to update content, once you have done it a couple times you will see that it can be an effective way of keeping content fresh with minimal effort. There is another option for pulling in content from other sources to your static website: embedding content.

 

Next, I’ll talk about how you can embed your latest posts from Facebook on your homepage in that same sidebar section.

 

Embedding Content from Other Sources


In the previous section, I discussed how to manually update a section of your static website to give the appearance that this section was being updated automatically or in a dynamic fashion.

 

The process was quite manual and would fit certain use cases, such as a website that has a few content updates a month. You also have the ability to bring in content from other websites to be displayed on your static website.

 

The content from these other sources could be content that is updated frequently, which means that your static website would also have the updated content on it, ensuring that your website doesn’t feel stale. There are many methods for displaying content from other sources on an HTML-only based website.

 

JavaScript/Client-Side Scripting

JavaScript or other client-side scripting languages process code during runtime using the processing resources of the client machine. In short, this means that you can include JavaScript code in your HTML web page that will be run when the visitor to your website opens that page with their browser.

 

The benefit of client-side scripting is that it can use resources on the visitor’s computer and interact with their session in a way that feels like a lightweight, positive experience for the visitor.

 

Many web-based forms use JavaScript to validate data that is being entered into a form by a web visitor and interact with the visitor as they are entering the data. An example is validating that an email address is entered in the proper format before accepting the input.

 

In your use case, JavaScript can be used to load content from another source. To give an example of this type of content embedding, you will pull in some information from Facebook using code that is available from within the Facebook UI.

 

For your example, you will update the “Legal Issues and News” sidebar with content pulled from a public Facebook post. The important part to note here is that the post has to be public to be displayed without requiring any form of authentication.

 

You’ll use a public post that has relevant data that could be listed in your sidebar section, but if you already have a Facebook page with content on it, you may choose to link to that content instead. 

 

The first step is to open in a browser the Facebook post that has the content that you would like to display on your page. Once opened, click the Settings drop-down in the top right corner. Click the Embed option to open up a window that will provide you with the code that you need to add to your page.

 

Let’s bypass this code for the moment and click the Advanced Settings option. This will bring you to Facebook’s Code Generator wizard, which produces the embed code to add to your page. Once this code has been added to the page, the HTML page runs the JavaScript when loaded in a browser, reaches out to Facebook for the content, and displays it in your browser.

 

JavaScript can be used to do many things on your website, such as adding scripts to the page that can alternate through a set of featured images. An effect like this enhances the user’s experience on your website.

 

As you can see, embedding content using JavaScript is a viable way to add content to your static website. There is, however, a potential drawback. Since this is run within the client-side browser, if the visitor has disabled the use of JavaScript, your script will not work and the content will not be loaded.

 

There are ways to prepare for this and to show content in the place of something that doesn’t load. I have a resource in the Appendix that can be helpful in testing whether JavaScript is enabled or disabled and how to display content based on the feedback from that test.

 

Display properties like these can be controlled by setting additional attributes on the embed tag. To see a full list of supported properties for HTML tags, I highly recommend the W3 Schools website located at www.w3schools.com/.

 

Getting a Notification of a New Message

It’s great that your contact form is now working and visitors to your website can interact with you and leave you a message, but it would not be great to have to check your S3 bucket daily to see if there are new messages.

 

Thankfully the AWS Platform has yet another service that will help you out by letting you know when a new message arrives: Simple Notification Service (SNS).

 

AWS SNS is a notification service that every other service on the AWS Platform can interact with as a way to indicate when an action has occurred.

 

Notifications are sent to topics, and topics can be subscribed to by a variety of methods. To finish off this blog, I’ll walk you through setting up SNS to create a new topic and subscribing to that topic via email so that you’ll get an email message when a new message becomes available in your S3 bucket.

 

The easiest way to set this up is by heading to the SNS dashboard from within the AWS Console. Click the Services menu; under the Messaging heading, choose the SNS link.

 

Click the “Create topic” button and give your topic a name; once created, you will be brought back to the Topics screen where you can see your new topic listed with its ARN value.
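If you’d rather script this step, the equivalent AWS CLI call creates the topic and returns its ARN directly; the topic name below is just an example.

    # Create the SNS topic and note the TopicArn in the response
    aws sns create-topic --name contact-form-messages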

 

Make note of this ARN; it will be needed in the next step. From here you need to allow S3—and specifically the bucket holding your messages—to publish notifications to this new topic. Select the checkbox next to the topic name and click the Actions menu. Click the Edit Topic Policy option, and when the window opens, copy the following code into the Advanced view tab.
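Here is a sketch of what that topic policy can look like; the account ID, topic name, and bucket name below are placeholders that you’ll replace with your own values as described next.

    {
      "Version": "2012-10-17",
      "Id": "S3NotificationPolicy",
      "Statement": [
        {
          "Sid": "AllowS3ToPublish",
          "Effect": "Allow",
          "Principal": { "Service": "s3.amazonaws.com" },
          "Action": "SNS:Publish",
          "Resource": "arn:aws:sns:us-west-2:123456789012:contact-form-messages",
          "Condition": {
            "ArnLike": { "aws:SourceArn": "arn:aws:s3:::your-message-bucket" }
          }
        }
      ]
    }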

 

You will need to update this code in two spots. The first is the Resource property where you should have the ARN of your SNS topic that you just created.

 

The second is the aws:SourceArn property where you should have the name of the S3 bucket that you created to hold your messages. Once you’ve adjusted the code accordingly, click the Update Policy button.

 

From here, you’ll head to the S3 bucket that you created to hold these messages.

 

Click the Services menu and click the S3 service link. From the dashboard, select the bucket you created to hold your messages and then choose to make the properties of the bucket visible by clicking the Properties tab near the top right-hand corner of the screen.

 

Expand the Events section and fill out the details: give the configured event a name, select when the event should be triggered, and choose to publish the event notification to SNS and the topic you created above.

 

After entering the information in the event configuration form, click the Save button to create the event notification.
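For those keeping a scripted record of their setup, the same event hookup can be expressed with the AWS CLI; this sketch fires on any object creation and reuses the placeholder names from above.

    # Tell S3 to publish a notification to the topic whenever an object is created
    aws s3api put-bucket-notification-configuration \
        --bucket your-message-bucket \
        --notification-configuration '{
          "TopicConfigurations": [{
            "Id": "NewMessageNotification",
            "TopicArn": "arn:aws:sns:us-west-2:123456789012:contact-form-messages",
            "Events": ["s3:ObjectCreated:*"]
          }]
        }'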

 

Your last step is to head back to SNS and subscribe to your new topic. From the SNS Dashboard, click Topics and select the checkbox next to the name of the topic you recently created. Click the Actions menu and choose “Subscribe to topic.”

 

Configure the subscription option to use email as the notification method and enter an email address where you would like to receive email notifications.

 

Click the Save Subscription button. A confirmation email will be sent to the address that you entered in this configuration; you will need to confirm the subscription before you can test your contact form again to see whether you receive a notification email.
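The subscription can also be created from the CLI; the email address below is a placeholder, and the confirmation step works the same either way.

    # Subscribe an email address to the topic; SNS sends a confirmation email
    aws sns subscribe \
        --topic-arn arn:aws:sns:us-west-2:123456789012:contact-form-messages \
        --protocol email \
        --notification-endpoint you@example.com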

 

I have found that it can take up to an hour for the email subscription confirmation email to be processed. After waiting a while, give your contact form another test and you should receive an email with the subject “Amazon S3 Notification.”

 

The email content isn’t the best format, but this is a way of letting you know that a new object has been created in the S3 bucket, which is better than having to check it manually. In later blogs, you’ll explore more advanced options in SNS, including the ability to tailor these messages to be more reader-friendly.