
Event loop: microtasks and macrotasks

Delve into the inner workings of JavaScript's event loop to optimize code, craft efficient architectures, and master asynchronous programming in both browser and Node.js environments. Explore the theoretical foundations and practical applications of this essential mechanism to unlock the full potential of your JavaScript development.

Obaid Ashiq · January 11, 2024


Event Loop

The event loop operates on a simple premise: within an ongoing loop, the JavaScript engine awaits tasks, processes them in sequence, and then idles until further tasks emerge.

Here's an abstract outline of the engine's operation:

  1. Continuously check for pending tasks.
  2. Sequentially execute these tasks, starting with the oldest.
  3. Enter a dormant state until new tasks become available, then return to step 1.
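
The steps above can be sketched as a toy model (a deliberate simplification for illustration, not how a real engine is implemented):

```javascript
// A toy model of the engine's outer loop: tasks arrive in a queue
// and are processed one by one, oldest first.
const taskQueue = [];
const processed = [];

function enqueue(task) {
  taskQueue.push(task);
}

function runPendingTasks() {
  // Steps 1-2: while tasks are pending, execute the oldest first.
  while (taskQueue.length > 0) {
    const task = taskQueue.shift();
    task();
  }
  // Step 3: a real engine would now sleep until a new task arrives.
}

enqueue(() => processed.push("script"));
enqueue(() => processed.push("mousemove handler"));
runPendingTasks();
console.log(processed); // → [ 'script', 'mousemove handler' ]
```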

This process mirrors our browsing experience: the JavaScript engine remains largely inactive, only engaging when prompted by scripts, handlers, or events.

Task instances vary:

  • Loading an external script (<script src="...">) triggers its execution.
  • User actions like mouse movements generate mousemove events and execute associated handlers.
  • Scheduled setTimeout functions prompt their callbacks to run.
  • This cycle continues, with each task being queued for processing by the engine.

If a task arrives while the engine is occupied, it joins a queue known as the "macrotask queue" (in V8 terminology). Tasks wait there in the order they arrive and are processed in that same sequence.

Event loop with macrotask queue

For instance, while the engine is busy executing a script, the user may move their mouse (triggering mousemove), a setTimeout may come due, and so on. These tasks form a queue, as illustrated in the picture above.

Tasks from the queue are processed on a "first come, first served" basis. When the browser's engine is done with the script, it handles the mousemove event, then the setTimeout handler, and so on.

So far, quite simple, right?

Two more details:

  • Rendering never happens while the engine executes a task. It doesn't matter if the task takes a long time. Changes to the DOM are painted only after the task is complete.
  • If a task takes too long, the browser can't do other tasks, such as processing user events. So after a while, it raises an alert like "Page Unresponsive", suggesting that the user kill the task together with the whole page. That happens when there are a lot of complex calculations or a programming error leading to an infinite loop.

Use Cases

Use-case 1: splitting CPU-hungry tasks

When handling CPU-intensive tasks like syntax highlighting, which involves heavy analysis and manipulation of elements, it monopolizes the engine's attention. During this period, the engine becomes unable to manage other DOM operations or process user events, potentially resulting in browser slowdowns, hiccups, or even temporary freezing—undesirable outcomes.

To circumvent such issues, a strategy involves breaking down the extensive task into smaller segments. For instance, instead of highlighting an entire document in one go, the process could focus on a manageable portion, like the first 100 lines. Following this, the engine can schedule a setTimeout operation, set to zero-delay, to handle subsequent segments in a piecemeal fashion. This approach ensures that the CPU-intensive task is divided into more manageable chunks, preventing it from overwhelming the system and allowing other operations to proceed smoothly in between these smaller task segments.

To demonstrate this approach, for the sake of simplicity, instead of text-highlighting, let's take a function that counts from 1 to 1000000000.

If you run the code below, the engine will "hang" for some time. For server-side JS that's clearly noticeable, and if you are running it in-browser, then try to click other buttons on the page -- you'll see that no other events get handled until the counting finishes.

Code snippet
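
For reference, such a blocking counter might look like this (console.log stands in for the alert typically used in browser demos, so the sketch also runs server-side):

```javascript
let i = 0;
let start = Date.now();

function count() {
  // Do the whole heavy job in one uninterrupted task:
  // nothing else (events, rendering) can happen until it finishes.
  for (let j = 0; j < 1e9; j++) {
    i++;
  }
  console.log("Done in " + (Date.now() - start) + "ms");
}

count();
```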

The browser may even show a "the script takes too long" warning.

Let's split the job using nested setTimeout calls:

Code example with nested setTimeout()
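
A sketch of the split version (again with console.log instead of alert); the markers (*) and (**) are referenced below:

```javascript
let i = 0;
let start = Date.now();

function count() {
  // do a piece of the heavy job (*)
  do {
    i++;
  } while (i % 1e6 != 0);

  if (i == 1e9) {
    console.log("Done in " + (Date.now() - start) + "ms");
  } else {
    setTimeout(count); // schedule the new call (**)
  }
}

count();
```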

Now the browser interface is fully functional during the "counting" process.

A single run of count does a part of the job (*), and then re-schedules itself (**) if needed:

  1. First run counts: i=1...1000000.
  2. Second run counts: i=1000001..2000000.
  3. ...and so on.

Now, if a new side task (e.g. onClick event) appears while the engine is busy executing part 1, it gets queued and then executes when part 1 is finished, before the next part. Periodic returns to the event loop between count executions provide just enough "air" for the JavaScript engine to do something else, to react to other user actions.

The notable thing is that both variants -- with and without splitting the job by setTimeout -- are comparable in speed. There's not much difference in the overall counting time.

To make them closer, let's make an improvement.

We'll move the scheduling to the beginning of the count():

Code example, scheduling at the beginning of count()
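
The improved sketch; the only change from the previous version is that the next call is scheduled before the chunk of work, not after it:

```javascript
let i = 0;
let start = Date.now();

function count() {
  // Move the scheduling to the beginning: the next chunk is
  // queued before this one does its part of the work.
  if (i < 1e9 - 1e6) {
    setTimeout(count); // schedule the new call first
  }

  do {
    i++;
  } while (i % 1e6 != 0);

  if (i == 1e9) {
    console.log("Done in " + (Date.now() - start) + "ms");
  }
}

count();
```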

Now when we start to count() and see that we'll need to count() more, we schedule that immediately, before doing the job.

If you run it, it's easy to notice that it takes significantly less time, because browsers enforce a minimal delay of 4ms between nested setTimeout calls (after five or more levels of nesting). Even if we set 0, it's 4ms (or a bit more). So the earlier we schedule it, the faster it runs.

Finally, we've split a CPU-hungry task into parts - now it doesn't block the user interface. And its overall execution time isn't much longer.

Use case 2: progress indication

Dividing hefty tasks in browser scripts has an additional advantage: it enables the display of progress indicators. As mentioned earlier, alterations to the DOM become visible only after the ongoing task concludes, regardless of its duration.

This behavior ensures a seamless experience for visitors, preventing them from witnessing any intermediate or incomplete states as functions create elements and modify their styles.

Here's a demonstration:

Code example, no intermediate values
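
A sketch of such a function. It assumes a `<div id="progress">` in the page; outside a browser, a plain object stands in for the element so the snippet still runs:

```javascript
// Stand-in for document.getElementById('progress') outside a browser.
const progress = typeof document !== "undefined"
  ? document.getElementById("progress")
  : { innerHTML: "" };

function count() {
  for (let i = 0; i < 1e6; i++) {
    i++;
    progress.innerHTML = i;
  }
}

count();
// In the browser, the div repaints only once, after count() finishes:
// the intermediate values of i are never shown to the user.
```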

However, there are scenarios where we desire to exhibit progress during such tasks—like displaying a progress bar. By fragmenting the intensive task using setTimeout, the changes become visible in between these segments.

This approach allows us to periodically update the interface, showing progress indications at intervals as the task progresses, providing users with feedback on the ongoing operation.

Here’s the demonstration:

Code example, Intermediate values shown
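
A sketch of the split version, with the same `progress` stand-in assumption (the chunk size and total here are illustrative):

```javascript
// Stand-in for document.getElementById('progress') outside a browser.
const progress = typeof document !== "undefined"
  ? document.getElementById("progress")
  : { innerHTML: "" };

let i = 0;

function count() {
  // Do a piece of the heavy job...
  do {
    i++;
    progress.innerHTML = i;
  } while (i % 1e4 != 0);

  // ...then return to the event loop, so the browser can repaint
  // the div before the next chunk runs.
  if (i < 1e6) {
    setTimeout(count);
  }
}

count();
```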

Here <div> will show the increasing values of i (kind of a progress bar).

Use case 3: doing something after the event

In an event handler we may decide to postpone some actions until the event has bubbled up and been handled on all levels. We can do that by wrapping the code in a zero-delay setTimeout.

Code example, zero delay timeout
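
A simplified, environment-independent sketch of the idea; a plain function stands in for a real DOM event handler here:

```javascript
const order = [];

// Stands in for a DOM event handler attached somewhere in the tree.
function onMenuClick() {
  order.push("handler ran");
  // Defer the follow-up action: the zero-delay timeout fires only
  // after the current task, including all bubbling, has completed.
  setTimeout(() => order.push("deferred action"));
  order.push("bubbling finished"); // parent handlers would run here
}

onMenuClick();

setTimeout(() => {
  console.log(order);
  // → [ 'handler ran', 'bubbling finished', 'deferred action' ]
});
```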

Macrotasks and Microtasks

A macrotask is any JavaScript code scheduled to run by a standard mechanism, such as an event callback, an interval, or a timeout. "Task queue" and "macrotask queue" are the same concept.

Some examples of macro tasks are:

  1. setTimeout()
  2. setImmediate()
  3. setInterval()
  4. requestAnimationFrame()
  5. I/O
  6. UI rendering

Microtasks come solely from our code. They are usually created by promises: an execution of .then/catch/finally handler becomes a microtask. Microtasks are used "under the cover" of await as well, as it's another form of promise handling.

There's also a special function queueMicrotask(func) that queues func for execution in the microtask queue.

Some examples of micro tasks are:

  1. Promise
  2. process.nextTick()
  3. queueMicrotask()

Immediately after every macrotask, the engine executes all tasks from the microtask queue, before running any other macrotasks, rendering, or anything else.

For instance, take a look:

Code example, Order of execution for macrotasks, microtasks and general synchronous code
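
A sketch of such a snippet (a small log helper collects the messages, and console.log stands in for the alert used in browser demos):

```javascript
const order = [];
const log = (msg) => { order.push(msg); console.log(msg); };

setTimeout(() => log("timeout"));             // macrotask
Promise.resolve().then(() => log("promise")); // microtask
log("code");                                  // regular synchronous call
```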

The order here would be:

  1. code shows first, because it's a regular synchronous call.
  2. promise shows second, because .then passes through the microtask queue, and runs after the current code.
  3. timeout shows last, because it's a macrotask.

The richer event loop picture looks like this (order is from top to bottom, that is: the script first, then microtasks, rendering and so on):

Event loop example

All microtasks are completed before any other event handling or rendering or any other macrotask takes place.

That's important, as it guarantees that the application environment is basically the same (no mouse coordinate changes, no new network data, etc) between microtasks.

If we'd like to execute a function asynchronously (after the current code), but before changes are rendered or new events handled, we can schedule it with queueMicrotask.

Here's an example with "counting progress bar", similar to the one shown previously, but queueMicrotask is used instead of setTimeout. You can see that it renders at the very end. Just like the synchronous code:

Code example, queueMicrotask
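
A sketch of that variant, with the same `progress` stand-in assumption; the only change from the earlier progress-bar sketch is that queueMicrotask replaces setTimeout:

```javascript
// Stand-in for document.getElementById('progress') outside a browser.
const progress = typeof document !== "undefined"
  ? document.getElementById("progress")
  : { innerHTML: "" };

let i = 0;

function count() {
  // Do a piece of the heavy job...
  do {
    i++;
    progress.innerHTML = i;
  } while (i % 1e3 != 0);

  // Microtasks run back to back, with no rendering in between,
  // so the progress only becomes visible at the very end.
  if (i < 1e6) {
    queueMicrotask(count);
  }
}

count();
```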


A more detailed event loop algorithm (though still simplified compared to the specification):

  1. Dequeue and run the oldest task from the macrotask queue (e.g. "script").
  2. Execute all microtasks:
    While the microtask queue is not empty:
    Dequeue and run the oldest microtask.
  3. Render changes if any.
  4. If the macrotask queue is empty, wait till a macrotask appears.
  5. Go to step 1.

To schedule a new macrotask:

  • Use zero-delay setTimeout(f).

That may be used to split a big calculation-heavy task into pieces, for the browser to be able to react to user events and show progress between them.

It's also used in event handlers to schedule an action after the event is fully handled (bubbling done).

To schedule a new microtask:

  • Use queueMicrotask(f).
  • Also promise handlers go through the microtask queue.

There's no UI or network event handling between microtasks: they run immediately one after another.

So one may use queueMicrotask to execute a function asynchronously, but while the environment state is still the same.

NOTE: For long, heavy calculations that shouldn't block the event loop, we can use Web Workers.
