Async-Await and Task

In my experience, proper use and management of asynchronous operations is one of the more frustrating tasks that web developers face. If the definitions I’ve provided don’t suffice, perhaps some real-world examples will clear things up.

 

So what are some actual examples of async tasks? Where might we come across this concept in everyday web development? There are actually quite a few operations that are asynchronous by nature. For example:

  • Handling responses to AJAX requests.
  • Reading files or blobs of data.
  • Requesting user input.
  • Passing messages between browsing contexts.

 

The most common async task involves AJAX requests. Client/server communication is understandably asynchronous. When you send a request to a server endpoint from the browser using XMLHttpRequest or fetch, the server may respond in milliseconds, or perhaps even seconds after the request has been initiated.

 

The point is, after you send the request, your application doesn’t simply freeze until the request has returned. The code continues to execute. The user continues to be able to manipulate the page. So you must register a function that is invoked by the browser once the server has properly responded to your request.
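To make this concrete, here is a minimal sketch of that pattern using XMLHttpRequest: the handler is registered up front, the request is sent, and the rest of the code carries on while the browser waits for the response. The /status URL is just a placeholder.

var xhr = new XMLHttpRequest();
xhr.open('GET', '/status');

// Registered now, invoked later, whenever the server responds.
xhr.onload = function() {
    console.log('Server responded with: ' + xhr.responseText);
};

xhr.send();

// Execution continues immediately; the response arrives at some
// unknown point in the future.
console.log('Request sent, moving on...');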


 

There is no telling exactly when the function tasked with handling the server response will be executed, and that's okay. Your application and code must be structured to account for the fact that this data will not be available until some unknown point in the future. Indeed, some logic may depend on this response, and your code will need to be written accordingly.

 

Managing a single request or asynchronous operation may not be particularly difficult. But at scale, with a number of requests and other async tasks in progress concurrently, or a series of async tasks that depend on each other, this becomes mighty hairy.

 

In a general sense, APIs benefit from the support of async operations. Consider a library that executes a provided function when a user chooses a file to be uploaded. This function can prevent a file from being uploaded by returning false. But what if the function must delegate to a server endpoint to determine whether the file is valid?

 

Perhaps the file must be hashed client-side and then the server must be checked to ensure a duplicate file does not exist. This accounts for two asynchronous operations: reading the file (to generate a hash) and then contacting the server to check for duplicates. If the library doesn’t provide support for this type of task in its API, integration is limited to some degree.
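As a rough sketch of how such a hook might be used, assume a hypothetical library that accepts a validation function and calls it with the chosen file and a callback. The uploader.register() API, the hashFile() helper, and the /files/exists endpoint below are invented purely for illustration.

uploader.register('beforeUpload', function(file, callback) {
    // Async operation #1: read and hash the file client-side.
    hashFile(file, function(hash) {
        // Async operation #2: ask the server whether this hash already exists.
        var xhr = new XMLHttpRequest();
        xhr.open('GET', '/files/exists?hash=' + encodeURIComponent(hash));
        xhr.onload = function() {
            var isDuplicate = xhr.responseText === 'true';
            // Only allow the upload if the file is not a duplicate.
            callback(!isDuplicate);
        };
        xhr.send();
    });
});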

 

Another example: a library maintains a list of contacts. The user is given the ability to delete a contact via a <button>. It is quite common to display a confirm dialog before actually deleting the contact. Our library provides a function that is called before the delete operation occurs and allows the deletion to be cancelled if that function returns false.

 

If you want a confirm dialog that stops the execution of your code until the user responds, you could use the browser's built-in confirm dialog and then return false if the user elects to cancel the operation. But the native confirm dialog is barebones and ugly. It isn't an ideal choice for most projects, so you will need to provide your own styled dialog, which will be non-blocking.

 

In other words, the library will need to account for the asynchronous nature of waiting for a user to decide whether the contact really should be deleted forever. These are just two examples of how important it may be to consider async support when building an API, but there are many more. This blog deals with both traditional and some relatively new methods for dealing with asynchronous calls, but I also cover another solution that can probably be classified as "bleeding edge."

 

The reason for the inclusion of a new specification in this blog is to provide you with an indication of how important dealing with the asynchronous nature of the web has become, and how the maintainers of JavaScript are doing their best to make a traditionally difficult concept to manage much easier.

 

Callbacks: the Traditional Approach for Controlling Async Operations


The most familiar way to provide support for asynchronous tasks is via a system of callback functions. Let's take the contacts list library example and apply a callback function to account for the fact that a beforeDelete handler function may need to ask the user to confirm the contact removal, which is an async operation (assuming we don't rely on the built-in window.confirm() dialog). Our code may look something like this:

contactsHelper.register('beforeDelete', function(contact, callback) {
    confirmModal.open(
        'Delete contact ' + contact.name + '?',
        function(result) {
            callback({cancel: result.cancel});
        });
});

 

When the user clicks the delete button next to a contact, the function passed to the “beforeDelete” handler is invoked. The contact to be deleted is passed to this function, along with a callback function. If the delete operation is to be ignored, an object with a cancel property set to true must be passed into this callback.

 

Otherwise, the callback will be invoked with a false value for the cancel property. The library will “wait” for this call before attempting to delete the contact. Note that this “waiting” does not involve blocking the UI thread, so all other code can continue to execute. I’m assuming there is a modal dialog component with an open function that displays a delete confirm dialog to the user. The result of the user’s input is passed into another callback function supplied to the open function.
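For completeness, a non-blocking confirmModal.open() might be implemented roughly along these lines. The markup and class names are assumptions; the key point is that the user's decision is delivered through the supplied callback rather than through a return value.

var confirmModal = {
    open: function(message, callback) {
        var dialog = document.createElement('div');
        dialog.className = 'confirm-dialog';
        dialog.innerHTML = '<p>' + message + '</p>' +
            '<button class="ok">OK</button>' +
            '<button class="cancel">Cancel</button>';

        dialog.querySelector('.ok').onclick = function() {
            document.body.removeChild(dialog);
            callback({cancel: false});
        };
        dialog.querySelector('.cancel').onclick = function() {
            document.body.removeChild(dialog);
            callback({cancel: true});
        };

        document.body.appendChild(dialog);
    }
};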

 

If the user clicks the Cancel button on this dialog, the result object passed to this particular callback function will contain a cancel property with a value of true. At that point, the callback passed to the "beforeDelete" handler will be invoked, indicating that the contact should not, in fact, be deleted.

 

Notice how the preceding code depends on a number of varying conventions—a number of non-standard conventions. In fact, there aren’t any standards associated with callback functions. The value or values passed to the callback are part of a contract defined by the supplier of the function.

 

In this case, the conventions are similar enough between the modal callback and the “beforeDelete” callback, but that may not always be the case. Although callbacks are a simple and well-supported way to account for async results, some of the problems with this approach may already be clear to you.

 

Node.js and the Error-First Callback


I haven’t spent a lot of time discussing Node.js, but it has come up periodically throughout this blog. The non-browsers section of blog 3 goes into a bit of detail about this surprisingly popular server-side JavaScript-based system. Node.js has long relied on callbacks to support asynchronous behavior across APIs.

 

In fact, it has popularized a very specific type of callback system: the "error-first" callback. This particular convention is very common throughout the Node.js landscape and can be found as part of the API in many major libraries, such as Express, Socket.IO, and request. It is arguably the most "standard" of all the various callback systems, though of course there is no real standard, just conventions, some of which are more popular than others.

 

Error-first callbacks require, as you might expect, an error to be passed as the first parameter to a supplied callback function. Usually, this error parameter is expected to be an Error object. The Error object has always been part of JavaScript, starting with the first ECMAScript specification published back in 1997.

 

The Error object can be thrown in exceptional situations or passed around as a standard way to describe an application error. With error-first callbacks, an Error object can be passed as the first parameter to a callback if the related operation fails in some way. If the operation succeeds, null should be passed as the first parameter instead.

 

This makes it easy for the callback function itself to determine the status of the operation. And if the related task did not fail, subsequent arguments are used to supply relevant information to the callback function.

 

Don’t worry if this is not entirely clear to you. You’ll see error-first callbacks in action through the rest of this section, and you’ll see that error-first callbacks are arguably the most elegant way to either signal an error or deliver the requested information when supporting an asynchronous task via a system of callbacks.

 

Solving Common Problems with Callbacks


 

Let’s look at a simple example of a module that asks the user for their email address (which is an asynchronous operation):

function askForEmail(callback) {
    promptForText('Enter email:', function(result) {
        if (result.cancel) {
            callback(new Error('User refused to supply email.'));
        }
        else {
            callback(null, result.text);
        }
    });
}

askForEmail(function(err, email) {
    if (err) {
        console.error('Unable to get email: ' + err.message);
    }
    else {
        // save the `email` with the user's account record
    }
});

 

Can you figure out the flow of the preceding code? An error-first callback is passed in as the sole parameter when invoking the function that ultimately asks our user for their email address. If the user declines to provide one, an Error with a description of the situation is passed as the first parameter to our error-first callback.

 

The callback logs this and moves on. Otherwise, the err argument is null, which signals to the callback function that we did indeed receive a valid response from our user—the email address— which is contained in the second argument to the error-first callback.

 

Another practical use of callbacks is to handle the result of an AJAX request. Since the very first version of jQuery, it has been possible to supply a callback function to be invoked when an AJAX request succeeds. 

$.get('/my/name', function(name) {
    console.log('my name is ' + name);
});

 

The second parameter is a success callback function, which jQuery will call with the response data if the request succeeds. But this example only handles success. What if the request fails? One way to account for both success and failure is to pass an object that contains the URL along with success and error callback functions:

$.get({
    url: '/my/name',
    success: function(name) {
        console.log('my name is ' + name);
    },
    error: function() {
        console.error('Name request failed!');
    }
});

The same section in the AJAX requests blog demonstrates making this call without jQuery. This solution for all browsers also relies on callbacks to signal success and failure:
var xhr = new XMLHttpRequest();
xhr.open('GET', '/my/name');
xhr.onload = function() {
    if (xhr.status >= 400) {
        console.error('Name request failed!');
    }
    else {
        console.log('my name is ' + xhr.responseText);
    }
};
xhr.onerror = function() {
    console.error('Name request failed!');
};
xhr.send();

The onload callback is invoked once the server has sent a response to the request. Conversely, the onerror callback is used if the request cannot be sent, or if the server fails to respond. Callbacks certainly seem to be a reasonable way to register for the result of an asynchronous task. And this is indeed true for simple cases. But a system of callbacks becomes less appealing for more complex scenarios.

 

Promises: an Answer to Async Complexity



Before I discuss an alternative to callbacks, perhaps it would be prudent to first point out some of the issues associated with a dependence on callbacks to manage async tasks. The first fundamental flaw in the callback system described in the preceding section is evident in every method or function signature that supports this convention.

 

When invoking a function that utilizes a callback to signal success or failure of an asynchronous operation, you must supply this callback as a method parameter. Any input values used by the method must also be passed as parameters. In this case, you are now passing input values and managing the method’s output all through method parameters. This is a bit non-intuitive and awkward. This callback contract also precludes any return value. Again, all work is done via method parameters.

 

Another issue with callbacks: there is no standard, only conventions. Whenever you find yourself needing to invoke a method that executes some logic asynchronously and expects a callback to manage this process, it may expect an error-first callback, but it may not. And how can you possibly know? Since there is no standard for callbacks, you are forced to refer to the API documentation and pass the appropriate callback.

 

Perhaps you must interface with multiple libraries, all of which expect callbacks to manage async results, each relying on different callback method conventions. Some may expect error-first callbacks. Others may include an error or status flag elsewhere when invoking the supplied callback. Some may not even account for errors at all!

 

Perhaps the biggest issue with callbacks becomes apparent when they are forced into non-trivial use. For example, consider a few asynchronous tasks that must run sequentially, each subsequent task depending on the result from the previous. To demonstrate such a scenario, imagine you need to send an AJAX request to one endpoint to load a list of user IDs and then a request must be made to a server to load personal information for the first user in the list.

 

After this, the user’s info is presented on-screen for editing, and finally, the modified record is sent back to the server. This whole process involves four asynchronous tasks, with each task depending on the result of the previous. How would we model this workflow with callbacks? It’s not pretty, but it might look something like this:

function updateFirstUser() {
    getUserIds(function(error, ids) {
        if (!error) {
            getUserInfo(ids[0], function(error, info) {
                if (!error) {
                    displayUserInfo(info, function(error, newInfo) {
                        if (!error) {
                            updateUserInfo(ids[0], newInfo, function(error) {
                                if (!error) {
                                    console.log('Record updated!');
                                }
                                else {
                                    console.error(error);
                                }
                            });
                        }
                        else {
                            console.error(error);
                        }
                    });
                }
                else {
                    console.error(error);
                }
            });
        }
        else {
            console.error(error);
        }
    });
}

updateFirstUser();

 

Code like the preceding is commonly referred to as callback hell. Each callback function must be nested inside of the previous one in order to make use of its result. As you can see, the callback system does not scale very well. Let’s look at another example that further confirms this conclusion.

 

This time, we need to send three files submitted for a product in three separate AJAX requests to three separate endpoints concurrently. We need to know when all requests have completed and whether one or more of these requests failed. Regardless of the outcome, we need to notify our user with the result. If we are stuck using error-first callbacks, our solution is a bit of a brain-teaser:

function sendAllRequests() {
    var successfulRequests = 0;

    function handleCompletedRequest(error) {
        if (error) {
            console.error(error);
        }
        else if (++successfulRequests === 3) {
            console.log('All requests were successful!');
        }
    }

    sendFile('/file/docs', pdfManualFile, handleCompletedRequest);
    sendFile('/file/images', previewImage, handleCompletedRequest);
    sendFile('/file/video', howToUseVideo, handleCompletedRequest);
}

sendAllRequests();

That code isn’t awful, but we had to create our own system to track the result of these concurrent operations. What if we had to track more than three async tasks? Surely there must be a better way!

 

The First Standardized Way to Harness Async


The flaws and inefficiencies associated with relying on callback conventions often prompt developers to look for other solutions. Surely some of the problems, and the boilerplate common to this async handling approach, can be solved by and packaged into a more standardized API. The Promises specification defines an API that achieves this very goal, and so much more.

 

Promises have been publicly discussed on the JavaScript front for some time. The first instance of a Promise-like proposal (that I am able to locate) was created by Kris Kowal. Dating to mid-2011, it describes "Thenable Promises". A couple of lines from the introduction provide a good glimpse into the power of promises:

 

An asynchronous promise loosely represents the eventual result of a function. A resolution can either be “fulfilled” with a value or “rejected” with a reason, corresponding by analogy to synchronously returned values and thrown exceptions respectively.

 

This loose proposal was, in part, used to form the Promises/A+ specification. This specification has a number of implementations, many of which can be seen in various JavaScript libraries, such as bluebird, Q, and rsvp.js. But perhaps the more important implementation appeared in the ECMA-262 6th Edition specification.

 

Remember from the blog that the ECMA-262 standard defines the JavaScript language specification. The 6th edition of this spec was officially completed in 2015. At the time of writing, the Promise object defined in this standard is available natively in all modern browsers, with the exception of Internet Explorer. Luckily, many lightweight polyfills are available to fill in this gap.

 

Using Promises to Simplify Async Operations


So what exactly are promises? You could read through the ECMAScript 2015 or A+ specifications, but like most formal language specifications, these are both a bit dry and perplexing. First and foremost, a promise, in the context of ECMAScript, is an object used to manage the result of an asynchronous operation. It smooths over the rough edges left in a complex application by traditional convention-based callbacks.

 

Now that the overarching goal of promises is clear, let’s take a deeper look at this concept. The first logical place to start exploring promises in more depth is through Domenic Denicola’s “States and Fates” article. From this document, we learn that promises have three states:

  • Pending: The initial state, before the associated operation has concluded
  • Fulfilled: The associated operation monitored by the promise has completed without error
  • Rejected: The associated operation has reached an error condition

Domenic goes on to define a term that groups both the “fulfilled” and “rejected” states: settled. So, a promise is initially pending, and then it is settled once it has concluded.

 

There are also two distinct “fates” defined in this document:

Resolved: A promise is resolved when it is fulfilled or rejected, or when it has been redirected to follow another promise. An example of the latter condition can be seen when chaining asynchronous promise-returning operations together. (More on that soon.)

 

Unresolved: As you might expect, this means that the associated promise has not yet been resolved. If you can understand these concepts, you are very close to mastering promises, and you will find working with the API defined in the A+ and ECMA-262 specifications much easier.
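The "redirected to follow another promise" case mentioned above is easiest to see when a fulfillment handler returns another promise: the outer promise is then resolved, but it won't settle until the inner promise does. Here is a minimal sketch; delay() is a made-up helper.

function delay(ms) {
    return new Promise(function(fulfill) {
        setTimeout(fulfill, ms);
    });
}

var outer = delay(100).then(function() {
    // Returning another promise "redirects" the chained promise:
    // it is now resolved, but not yet settled.
    return delay(500);
});

outer.then(function() {
    // Runs only after the second delay completes (~600ms total).
    console.log('both delays finished');
});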

 

The Anatomy of a Promise


A JavaScript promise is created simply by constructing a new instance of an A+ compliant Promise object, such as the one detailed in the ECMAScript 2015 specification. The Promise constructor takes one argument: a function. This function itself takes two arguments, which are both functions that give the promise a resolved “fate” (as described in the preceding section).

 

The first of these two function arguments is a “fulfilled” function. This is to be called when the associated asynchronous operation completes successfully. When the “fulfilled” function is invoked, a value related to the completion of the promissory task should be passed.

 

For example, if a Promise is used to monitor an AJAX request, the server response may be passed to this “fulfilled” function once the request completes successfully. When a fulfilled function is called, the promise assumes a “fulfilled” state, as described earlier.

 

The second argument passed to the Promise constructor’s function parameter is a “reject” function. This should be called when the promissory task has failed for some reason, and the reason describing the failure should be passed into this rejected function. Often, this will be an Error object.

If an exception is thrown inside of the Promise constructor, this will automatically cause the “reject” function to be invoked with the thrown Error passed as an argument.
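Here's a tiny illustration of that behavior; the thrown Error ends up in the rejection handler just as if reject() had been called with it.

var promise = new Promise(function(fulfill, reject) {
    // No need to call reject() explicitly here...
    throw new Error('something went wrong');
});

promise.then(null, function rejected(error) {
    console.error(error.message); // "something went wrong"
});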

 

Going back to the AJAX request example, if the request were to fail, the “reject” function should be called, passing either a string description of the result or perhaps the HTTP status code. When a reject function is called, the promise assumes a “rejected” state, the third of the promise states listed earlier.

 

When a function returns a Promise, the caller can “observe” the result a couple of different ways. The most common way to handle a promissory return value is to call a then method on the promise instance. This method takes two parameters, both functions. The first functional parameter is invoked if the associated promise is fulfilled. As expected, if a value is associated with this fulfillment (such as a server response for an AJAX request), it is passed to this first function.

 

The second function parameter is invoked if the promise fails in some way. You may omit the second parameter if you are only interested in fulfillment (though it is generally unsafe to assume your promise will succeed). Additionally, you may specify a value of null or undefined, or any value that is not considered to be “callable” as the first argument if you are only interested in promise rejection.

 

An alternative to this, which also lets you focus exclusively on the error case, is to call the catch method on the returned Promise. This catch method takes one argument: a function that is invoked when/if the associated promise errors. The ECMAScript 2015 Promise object includes several other helpful methods, but one of the more useful non-instance methods is all(), which allows you to monitor many promises at once.

 

The all method returns a new Promise that is fulfilled if all monitored promises are fulfilled, or rejected as soon as one of the monitored promises is rejected. The Promise.race() method is very similar to Promise.all(), the difference being that the Promise returned by race() settles as soon as the first monitored Promise settles, rather than waiting for all of them.

 

It does not wait for all monitored Promise instances to be fulfilled first. One use for race() could also apply to AJAX requests. Imagine you were triggering an AJAX request that persisted the same data to multiple redundant endpoints. All that is important is the success of one request, in which case Promise.race() is more appropriate and much more efficient than waiting for all requests to complete with Promise.all().
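A quick sketch of that idea, assuming a promissory persist() helper and a couple of hypothetical redundant endpoints:

function persist(url, data) {
    return fetch(url, {method: 'POST', body: JSON.stringify(data)});
}

var record = {name: 'Ray', state: 'Wisconsin'};

Promise.race([
    persist('/primary/save', record),
    persist('/backup/save', record)
]).then(
    function fulfilled() {
        // Invoked as soon as the first request completes.
        console.log('Record persisted!');
    },
    function rejected(error) {
        console.error(error);
    }
);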

 

Simple Promise Examples


If the previous section isn’t enough to properly introduce you to JavaScript promises, a few code examples should push you over the edge. Earlier, I provided a couple code blocks that demonstrated handling of async task results using callbacks.

 

The first one outlined a function that prompts the user to enter an email address in a dialog box—an asynchronous task. An error-first callback system was used to handle both successful and unsuccessful outcomes. The same example can be rewritten to make use of promises:

function askForEmail() {
    return new Promise(function(fulfill, reject) {
        promptForText('Enter email:', function(result) {
            if (result.cancel) {
                reject(new Error('User refused to supply email.'));
            }
            else {
                fulfill(result.text);
            }
        });
    });
}

askForEmail().then(
    function fulfilled(emailAddress) {
        // do something with the `emailAddress`...
    },
    function rejected(error) {
        console.error('Unable to get email: ' + error.message);
    }
);

In the preceding example rewritten to support promises, our code is much more declarative and straightforward. The askForEmail() function returns a Promise that describes the result of the “ask the user for email” task. When calling this function, we can intuitively handle both a supplied email address and an instance where the email is not provided by following a codified standard.

 

Notice that we are still assuming that the promptForText() function API is unchanged, but the code can be simplified even further if this function also returns a promise:

function askForEmail() {
    return promptForText('Enter email:');
}

askForEmail().then(
    function fulfilled(emailAddress) {
        // do something with the `emailAddress`...
    },
    function rejected(error) {
        console.error('Unable to get email: ' + error.message);
    }
);

If promptForText() returns a Promise, it should pass the user-entered email address to the fulfilled function if an address is supplied, or a descriptive error to the rejected function if the user closes the dialog without entering an email address. These implementation details are not visible above, but based on the Promise specification, this is what we can expect.
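A promise-returning promptForText() could be sketched roughly like this; showTextDialog() is a made-up helper standing in for whatever dialog machinery the library uses internally.

function promptForText(message) {
    return new Promise(function(fulfill, reject) {
        // showTextDialog() displays a text-input dialog and reports
        // the outcome via a callback (hypothetical helper).
        showTextDialog(message, function(result) {
            if (result.cancel) {
                reject(new Error('User dismissed the dialog.'));
            }
            else {
                fulfill(result.text);
            }
        });
    });
}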

 

The other example in the callbacks section demonstrates the onload and error callbacks provided by XMLHttpRequest. Just to recap, onload is called when the request completes (regardless of the server response status code), and error is invoked if the request fails to complete for some reason (such as due to CORS or other network issues).

 

The Fetch API brings a replacement for XMLHttpRequest that makes use of the Promise specification to signal the result of an AJAX request. I’ll dive into a more complex example that makes use of fetch shortly, but first, let’s write a wrapper around the XMLHttpRequest call from the callbacks section that presents a more elegant interface using promises:

function get(url) {
    return new Promise(function(fulfill, reject) {
        var xhr = new XMLHttpRequest();
        xhr.open('GET', url);
        xhr.onload = function() {
            if (xhr.status >= 400) {
                reject('Name request failed w/ status code ' + xhr.status);
            }
            else {
                fulfill(xhr.responseText);
            }
        };
        xhr.onerror = function() {
            reject('Name request failed!');
        };
        xhr.send();
    });
}

get('/my/name').then(
    function fulfilled(name) {
        console.log('Name is ' + name);
    },
    function rejected(error) {
        console.error(error);
    }
);

Although the Promise-wrapped XMLHttpRequest doesn’t simplify that code much, it gives us a great opportunity to generalize this GET request, which makes it more reusable. Also, our code that uses this new GET request method is easy to follow and magnificently readable and elegant.

 

Both the success and failure conditions are a breeze to account for, and the logic required to manage this is wrapped away inside the Promise constructor function. Of course, we could have created a similar approach without Promise, but the fact that this async task-handling mechanism is an accepted JavaScript language standard makes it all the more appealing.

 

The same exact AJAX request logic can more elegantly make use of the Promise API (for Firefox, Chrome, Opera, and Edge) by relying on the Fetch API:

function get(url) {
    return fetch(url).then(
        function fulfilled(response) {
            if (response.ok) {
                return response.text();
            }
            throw new Error('Request failed w/ status code ' + response.status);
        }
    );
}

get('/my/name').then(
    function fulfilled(name) {
        console.log('Name is ' + name);
    },
    function rejected(error) {
        console.error(error);
    }
);

 

Here we’ve been able to simplify the GET name request even further with the help of promises and fetch. If the server indicates a non-successful status in its response, or if the request fails to send at all, then the rejected handler will be hit. Otherwise, the fulfilled function handler is invoked with the text of the response (the username). A lot of the boilerplate that plagues the XHR version has been avoided entirely.

 

Fixing “Callback Hell” with Promises


Earlier, I demonstrated one of the many issues with callbacks that presents itself in a non-trivial situation where consecutive dependent async tasks are involved.

 

That particular example required retrieving all user IDs in the system, followed by retrieval of the user information for the first returned user ID, and then displaying the info for editing in a dialog, followed by a callback to the server with the updated user information. This accounts for four separate but interdependent asynchronous calls.

 

The first attempt to handle this made use of several nested callbacks, which resulted in a pyramid-style code solution— callback hell. Promises are an elegant solution to this problem, and callback hell is avoided entirely due to the ability to chain promises. Take a look at a rewritten solution that makes use of the Promise API:

function updateFirstUser() {
    getUserIds()
        .then(function(ids) {
            return getUserInfo(ids[0]);
        })
        .then(function(info) {
            return displayUserInfo(info);
        })
        .then(function(updatedInfo) {
            return updateUserInfo(updatedInfo.id, updatedInfo);
        })
        .then(function() {
            console.log('Record updated!');
        })
        .catch(function(error) {
            console.error(error);
        });
}

updateFirstUser();

That is quite a bit easier to follow! The flow of the async operations is probably apparent as well. Just in case it isn’t, I’ll walk you through it. I’ve provided a fulfilled function for each of the four then blocks to handle each successful async operation. The catch block at the end will be invoked if any of the async calls fails. Note that catch is not part of the A+ Promise specification, though it is part of the ECMAScript 2015 Promise spec.

 

Each async operation—getUserIds(), getUserInfo(), displayUserInfo(), and updateUserInfo()— returns a Promise. The fulfilled value for each async operation’s returned Promise is made available to the fulfilled function on the subsequently chained then block. No more pyramids, no more callback hell, and a simple and elegant way to handle a failure of any call in the process.

 

Monitoring Multiple Related Async Tasks with Promises


Remember the callbacks example from the start of this section that illustrated one approach to handling three separate AJAX requests to three separate endpoints concurrently? We needed to know when all requests completed and whether one or more of them failed. The solution wasn’t ugly, but it was verbose and contained a fair amount of boilerplate that could become cumbersome should we find ourselves in this situation often.

 

I surmised that there must be a better solution to this problem, and there is! The Promise API allows for a much more elegant solution, particularly with the all method, which allows us to easily monitor all three asynchronous tasks and react when they all complete successfully, or when one fails. Take a look at the rewritten Promise-ified code:

 

function sendAllRequests() {
    Promise.all([
        sendFile('/file/docs', pdfManualFile),
        sendFile('/file/images', previewImage),
        sendFile('/file/video', howToUseVideo)
    ]).then(
        function fulfilled() {
            console.log('All requests were successful!');
        },
        function rejected(error) {
            console.error(error);
        }
    );
}

sendAllRequests();

The preceding solution assumes sendFile() returns a Promise. With this being true, monitoring these requests becomes much more intuitive and lacks almost all the boilerplate and ambiguity from the callbacks example. Promise.all takes an array of Promise instances and returns a new Promise.

 

This new returned Promise is fulfilled when all the Promise objects passed to all are fulfilled, or it is rejected if one of these passed Promise objects is rejected. This is exactly what we are looking for, and the Promise API provides this support to us natively.

 

jQuery’s Broken Promise Implementation


Almost all the code in this blog has focused exclusively on the support for async tasks that is native to JavaScript. The rest of this blog is going to follow a similar pattern. This is mostly due to the fact that jQuery simply doesn’t provide much in terms of powerful async support. The ECMA-262 standard is far ahead of jQuery in this regard.

 

But because this blog aims to explain much of the web API and JavaScript to those coming from a jQuery-centric perspective, I feel it is important to at least mention jQuery in this section, since it does have support for promises—but this support has been, unfortunately, broken and completely non-standard in all released versions of jQuery up until June of 2016. While the problems with promises have been fixed in jQuery 3.0, promises have suffered from some notable deficiencies in the library for quite some time.

 

There have been at least two serious bugs in jQuery's promise implementation. Both of these deficiencies made promises non-standard and frustrating to work with. The first related to error handling. Suppose an Error is thrown inside of a promise’s fulfilled function handler, part of a first then block.

 

To catch this sort of issue, it is customary to register a rejected handler on a subsequent then block, chained to the first then block. Remember that each then block returns a new promise. Your code may look something like this:

someAsyncTask
    .then(
        function fulfilled() {
            throw new Error('oops!');
        }
    )
    .then(null, function rejected(error) {
        console.error('Caught an error: ' + error.message);
    });

Using the ECMA-262 Promise API, the preceding code will print an error log to the console that reads “Caught an error: oops!” But if the same pattern is implemented using jQuery’s deferred construct, the error will not be caught by the chained rejected handler. Instead, it will remain uncaught. I’ll leave it to you to read further if you are interested in more specifics regarding the issues with jQuery’s promise error handling and won’t spend more time on this here.

 

The second major issue with jQuery’s promise implementation is a break in the expected order of operations. The Promise specification requires fulfillment and rejection handlers to be invoked asynchronously, after the current run of code has finished, but jQuery invokes them synchronously, in the order in which they appear in the source. As a result, code written alongside jQuery’s promise handlers can execute in a different order than it would with a standards-compliant implementation.

 

This is an overly simplistic explanation, and if you'd like to read more, take a peek at Valera Rozuvan's “jQuery Broken Promises Illustrated” article. The lesson here is simple: avoid jQuery's promise implementation unless you are using a very recent version (3.0+). It has been non-standard and deficient for many years.

 

Native Browser Support


As mentioned earlier, the Promise API is standardized as part of the ECMA-262 6th edition. As of this writing, all modern browsers, with the exception of Internet Explorer, implement promises natively.

 

A number of Promises/A+ libraries are available (such as RSVP.js, Q, and Bluebird), but I prefer a small and focused polyfill to bring promises to non-compliant browsers (Internet Explorer). For this, I highly recommend the small and effective “es6-promise” polyfill by Stefan Penner.
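If you go the polyfill route, wiring it up is essentially a one-liner. For example, with the es6-promise package (assuming a module bundler or Node-style require is available), something like this should make Promise usable in browsers that lack it:

// Installs a spec-compliant Promise on the global object
// when a usable native implementation is missing.
require('es6-promise').polyfill();

// From here on, `Promise` can be used as described in this section,
// even in Internet Explorer.
var promise = new Promise(function(fulfill) {
    fulfill('it works');
});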

 

Async Functions: Abstraction for Async Tasks

The TC39 group that standardized promises in ECMA-262 6th edition worked on a related specification that builds upon the existing Promise API. The async functions specification, also known as async/await, will be part of the 8th edition of the ECMAScript specification in 2017.

 

At the writing of this blog, it is currently sitting in stage 4, which is the last stage in the TC39 specification acceptance process. This means async functions are complete and ready to be included in a future formal edition of JavaScript. There seems to be a lot of momentum and excitement surrounding async functions (and rightfully so).

 

Async functions provide several features that make handling async operations incredibly easy. Instead of getting lost in a sea of conventions or async-specific API methods, they allow you to treat asynchronous code as if it were completely synchronous. This lets you use the same traditional constructs and patterns for asynchronous code that you have already been using for your synchronous code. Need to catch an error in an asynchronous method call?

 

Simply wrap it in a try/catch block. Want to return a value from an async function? Go ahead, return it! The elegance of async functions is a bit surprising at first, and web development will benefit enormously once they become more commonly used and understood.

 

The Problem with Promises


The Promise API provides a refreshing break from callback hell and all the other inelegance and inefficiency associated with callback-based async task-handling conventions. But promises don’t mask the process of handling async.

 

Promises merely provide us with a more elegant API, one that makes managing async a bit easier than the alternatives that came before it. Let’s look at two code samples that deal with two very similar tasks: one synchronous, the other asynchronous:

function handleNewRecord(record) {
    try {
        var savedRecord = saveRecord(record);
        showMessage('info', 'Record saved! ' + savedRecord);
    }
    catch(error) {
        showMessage('error', 'Error saving!' + error.message);
    }
}

handleNewRecord({name: 'Ray', state: 'Wisconsin'});

 

Note: The implementation of showMessage() has been left out, as it is not important to the example code. It is intended to illustrate a commonly used approach to dealing with success and errors by displaying a message to the user.

 

In the preceding code, we’re given a record of some type, which is then “saved” with the help of the saveRecord function. In this case, the operation is synchronous, and the implementation doesn’t rely on an AJAX call or some other out-of-band processing. Because of this, we’re able to use familiar constructs to handle the result of the call to saveRecord. When saveRecord is called, we expect a return value that represents the saved record.

 

At that point, we may inform a user that the record was saved, for example. But if saveRecord fails unexpectedly—say it throws an Error—we have that covered too. A traditional try/catch block is all that is needed to account for such a failure. This is a basic pattern that virtually all developers are familiar with.

 

But suppose the saveRecord function was asynchronous. Suppose it did delegate to a server endpoint from the browser. Our code, using promises, would have to change to look something like this instead:

function handleNewRecord(record) {
    saveRecord(record).then(
        function fulfilled(savedRecord) {
            showMessage('info', 'Record saved! ' + savedRecord);
        },
        function rejected(error) {
            showMessage('error', 'Error saving!' + error.message);
        }
    );
}

handleNewRecord({name: 'Ray', state: 'Wisconsin'});

 

That code, rewritten to use promises due to the async nature of saveRecord, isn’t terribly difficult to follow or write, but it’s a notable departure from the familiar var savedRecord assignment and try/catch block from the previous example. The burden of directly depending on the Promise API becomes even clearer as we run into more promissory functions throughout our project.

 

Instead of simply using familiar patterns, we are continually forced to think about async. We must treat our async code completely differently from our synchronous code. That’s unfortunate. If only we could handle async tasks without thinking about the async part. . . .

 

Async Functions to the Rescue

 


The primary asset that async functions bring to the table is the almost total abstraction they offer—so much so that asynchronous promissory tasks appear to be completely synchronous. It seems like magic at first. There are some things to be aware of, lest you get sucked up in the magic and become frustrated when an async function’s dependency on promises leaks through the abstraction.

 

Let’s start with a really simple and somewhat contrived example (don’t worry, we’ll work up to the real examples from the promises section soon). First, here’s the saveRecord example that we recently discussed, written to make use of async functions:

async function handleNewRecord(record) {
    try {
        var savedRecord = await saveRecord(record);
        showMessage('info', 'Record saved! ' + savedRecord);
    }
    catch(error) {
        showMessage('error', 'Error saving!' + error.message);
    }
}

handleNewRecord({name: 'Ray', state: 'Wisconsin'});

 

Did we just assign the result of an asynchronous operation to a variable without using a then block and handle an error by wrapping that call in a try/catch block? Why, yes we did! That code looks almost exactly like the initial example where we called a completely synchronous saveRecord function. Under the covers, this is all promises, but there’s no trace of a then or even a catch block.
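It's worth noting that the promises are still there if you look for them: an async function always returns a Promise, so a caller that doesn't use await can consume the result in the usual way. A quick sketch:

async function getName() {
    return 'Ray';
}

// Even though getName() appears to return a plain string,
// callers actually receive a Promise fulfilled with that value.
getName().then(function(name) {
    console.log(name); // "Ray"
});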

 

Earlier, I demonstrated how to prevent “callback hell” with the help of the Promise API. The solution presented in that section is certainly a vast improvement over the traditional callback-based approach, but the code is still a bit unfamiliar, and of course we are clearly forced to explicitly deal with the fact that we are invoking a number of interdependent asynchronous calls. Our code must be structured to account for this reality. Not so with async functions:

async function updateFirstUser() {
    try {
        var ids = await getUserIds(),
            info = await getUserInfo(ids[0]),
            updatedInfo = await displayUserInfo(info);

        await updateUserInfo(updatedInfo.id, updatedInfo);
        console.log('Record updated!');
    }
    catch(error) {
        console.error(error);
    }
}

updateFirstUser();

 

The preceding code is markedly more succinct and elegant than the earlier version that relied on the direct use of promises. But what about the code in the next part of the promises section, where I converted the callback example that sent, managed, and monitored three files submitted for a product in three separate AJAX requests to three separate endpoints concurrently? There, I made use of the Promise.all method to simplify the code. Well, we can simplify that even further with some help from async functions.

 

But remember, as of the writing of this blog, async functions are still an ECMA-262 proposal and not yet part of any formal specification (though they will be very soon). As with many proposals, async functions have changed a bit since the initial version of the proposal. In fact, this initial version included some syntactic sugar to make it even easier and more elegant to monitor an array of promissory functions. Let’s look at a rewrite of the concurrent async tasks example, using the initial async functions proposal:

async function sendAllRequests() {
    try {
        // This is no longer valid syntax - do not use!
        await* [
            sendFile('/file/docs', pdfManualFile),
            sendFile('/file/images', previewImage),
            sendFile('/file/video', howToUseVideo)
        ];
        console.log('All requests were successful!');
    }
    catch(error) {
        console.error(error);
    }
}

sendAllRequests();

 

At one point early on in the development of the async functions proposal, await* was included as an alias for Promise.all(). Sometime after April 2014, this was removed from the proposal, apparently to avoid confusion with a keyword in the “generators” specification in the ECMAScript 6th edition standard.

 

The yield* keyword in the generators spec resembles await* in appearance, but the two do not share similar behaviors, so await* was dropped from the proposal. The appropriate way to monitor a number of concurrent promissory functions with async functions is to make use of Promise.all():

async function sendAllRequests() {
    try {
        await Promise.all([
            sendFile('/file/docs', pdfManualFile),
            sendFile('/file/images', previewImage),
            sendFile('/file/video', howToUseVideo)
        ]);
        console.log('All requests were successful!');
    }
    catch(error) {
        console.error(error);
    }
}

sendAllRequests();

 

It’s perhaps unfortunate that we still must make some direct use of promises in this one specific case, even when utilizing async functions, but this doesn’t negatively impact the readability or elegance of the solution. It’s true, async functions aren’t perfect: you still have to define functions as async, and you still must include the await keyword before any call to a function that returns a promise. But the syntax is much simpler and more elegant than the bare Promise API.

 

You can use familiar and traditional patterns to handle both async and non-async code. That’s a pretty clear win for me. This is one of many ways in which specifications are evolving quite rapidly, building upon each other, and outpacing the progression of jQuery.

 

Browser Support


Sadly, async functions are not natively supported in any browsers as of August 2016. But this is to be expected as this proposal is just that, a proposal—it is not part of any formal JavaScript standard yet.

 

That doesn’t mean you must wait for browser adoption before using async functions in your project. Since async functions offer new keywords, a polyfill is not the appropriate solution. Instead, you will have to make use of a tool that compiles your async functions at build time into something that browsers can understand.

 

There are many such tools that are able to compile async function syntax into cross-browser JavaScript. Babel is one such tool, and a number of Babel plug-ins exist to accomplish this task. Discussing Babel or any other JavaScript compilation tool is beyond the scope of this blog, but I can tell you that most plug-ins seem to compile async functions to ECMAScript 2015 generator functions. Generator functions must then be compiled down into ECMAScript 5 code if the project is browser based (since generator functions are not natively supported in all modern browsers).
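As a rough idea of what that looks like, the Babel 6-era programmatic API can be used to rewrite an async function into generator-based code; the exact plug-in names and setup may differ depending on your toolchain and Babel version, so treat this as a sketch rather than a recipe.

// Using Babel's programmatic API (babel-core) to compile a snippet
// containing an async function down to generator-based code.
// Assumes babel-core and babel-plugin-transform-async-to-generator
// are installed.
var babel = require('babel-core');

var result = babel.transform(
    'async function go() { await doSomething(); }',
    {plugins: ['transform-async-to-generator']}
);

console.log(result.code);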

 

TypeScript is another JavaScript compilation tool that performs many of the same tasks as Babel but also supports a number of non-standard language features. TypeScript currently offers native support for async functions, but only in browsers that natively support generator functions. That limitation may very well be relaxed in a future release.

 

The Future of Standardized Async Task Handling

When I began writing this blog, I intended to dedicate two entire sections to a couple additional ECMA-262 proposals. These proposals—Asynchronous Iterators and Observable—were created to further enhance JavaScript async task handling. I initially planned to dedicate a section to each of these proposals, complete with copious code examples, but I ultimately decided against it for a few reasons. First, these proposals are still fairly immature.

 

Asynchronous Iterators is a stage 2 proposal, and Observable is only at stage 1. It didn’t seem appropriate to include these proposals in a blog when they could very well change in unexpected ways at some point during the process. Even worse, one or both proposals could be withdrawn. And no complete implementations of either proposal exist at the moment.

 

That makes it difficult to actually create runnable code when attempting to demonstrate the benefits of these concepts. Even though Async Functions is also a proposal, it did make the cut due to its momentum in the JavaScript community and its advanced stage 4 status.

 

Asynchronous Iterators aim to make it simple to use familiar looping constructs, such as a for loop, to iterate over a collection of items produced by an asynchronous operation. Each item in this collection is not immediately available after the invocation of the function. Instead, as the loop executes, logic inside of the asynchronous function progressively loads new items asynchronously.

 

An intuitive example in the proposal repository demonstrates how this new concept allows us to use a for loop to print out the lines in a file. The process of reading the file is asynchronous, and our for loop only attempts to read each subsequent line as the loop requests it. If the loop terminates, so does the file reader.
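Under the current (stage 2, and therefore subject to change) proposal syntax, that example looks roughly like this; readLines() is assumed to be an async generator that yields one line at a time as the file is read.

async function printLines(filePath) {
    // for-await-of pulls the next line only when the loop asks for it;
    // breaking out of the loop stops the underlying file reader too.
    for await (const line of readLines(filePath)) {
        console.log(line);
    }
}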

 

This proposal pairs Async Functions with ECMAScript 2015 Generator Functions. Although I did cover Async Functions in this blog, I intentionally left out Generator Functions. Generator Functions are indeed useful for handling async tasks, but their use in this scenario is fairly low-level and awkward—not appropriate for this particular blog, due to the explicit complexity associated with the use of this language feature.

 

Observables are a bit better understood. A number of implementations of this pattern already exist, both in JavaScript and in other languages. RxJS is perhaps the most well-known Observable implementation, though it remains to be seen whether it matches the eventual "standard," since the Observable proposal is just that: a proposal.

 

Observables provide a standardized method for sifting through and focusing on specific data points in a stream of data. An example in the proposal repository demonstrates the use of Observables that monitor all browser keyboard events to focus on a specific combination of keys in this stream of events.
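Using only the core pieces of the proposal (the Observable constructor and subscribe), a sketch of that keyboard-monitoring idea might look like the following; the specific key combination and the keydowns() helper are invented for illustration.

// Wrap keydown events in an Observable.
function keydowns(element) {
    return new Observable(function(observer) {
        function handler(event) {
            observer.next(event);
        }
        element.addEventListener('keydown', handler);

        // Teardown logic runs when the subscription is cancelled.
        return function() {
            element.removeEventListener('keydown', handler);
        };
    });
}

// Focus on one specific data point in the stream: Ctrl+S.
var subscription = keydowns(document).subscribe({
    next: function(event) {
        if (event.ctrlKey && event.key === 's') {
            console.log('Save shortcut pressed');
        }
    }
});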

 

Although Async Iterators and Observables may be part of the future of JavaScript async task handling, I’ve already demonstrated a number of available APIs that can be used today. You no longer have to rely on conventions or proprietary solutions that are tied to a specific library. JavaScript continues to evolve to standardize intuitive solutions for complex operations. Support for asynchronous tasks is just one of many such examples.