AngularJS Docs Performance

I noticed at a recent Meetup in New York that the performance of the AngularJS docs app was questioned, and quite rightly so. Loading up the docs app was a complete dog!


As you can see from this Chrome timeline, the page takes over four seconds (on a fairly powerful laptop with a fast broadband connection) to complete loading and rendering.

The mystery of the blocking script

What is going on here? Well, the actual loading of the various files is not too bad (though it could be better); most of the files are loaded within 1500ms. But then there is this mysterious block of script evaluation happening from 1500ms to around 4000ms. Worse, this script blocks the whole browser, so the user experience is terrible. On a phone you might have to wait ten or more seconds, looking at a pretty blank page, before you can start browsing the docs.

In the video, Brad and Misko suggest that the problem is with the Google Code Prettify plugin that we use to syntax-highlight our inline code examples. It turns out that, while this is a big old file, the real problem is elsewhere. Notice how the memory during this blocking script climbs and climbs, with a number of garbage collection cycles throughout? This points to a memory-intensive process.

Houston… we have a problem with the Lunr module

A quick look at a profile of the site loading shows us that the main offenders are found in the lunr.js file.


Lunr is the full-text search engine that we run inside our app to provide you with the excellent instant search across all the Angular docs (API, guides, tutorials, etc.). Unfortunately, we were creating the search index in the browser at application bootstrap, which blocked the rest of the application from running and stopped the page from rendering.
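To make the problem concrete, here is a sketch of the pattern that was hurting us. The names and the stand-in index builder are assumptions, not the real docs-app code; the point is simply that everything happens synchronously on the main thread during bootstrap.

```javascript
// A sketch of the blocking pattern (assumed names; `createIndex` stands in for
// the real lunr index setup). All of this runs synchronously at bootstrap.
function buildIndexBlocking(pages, createIndex) {
  var index = createIndex();
  pages.forEach(function (page) {
    index.add(page);   // thousands of docs worth of main-thread work
  });
  return index;        // nothing else runs, and nothing renders, until this returns
}
```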

I looked at various options to fix this. Initially I thought, “Let’s just generate this index on the server and download it.” Sadly, the generated index is several megabytes in size, so this is not acceptable (especially if you are on a 3G connection)! I also looked around to see if there were alternative search engines we could use, but I didn’t find anything that looked like it would be more performant than Lunr.

Getting async with Web Workers

So if this was a no-go and we must generate the index in the browser, we needed a way to do it without blocking the rendering of the page. The obvious answer is to do it in a Web Worker. These clever little friends are scripts that run on their own thread in the browser (so they do not block the main rendering thread), and we can pass messages back and forth to them. So we simply create a new worker when the application bootstraps and leave it to generate the index.

Since the messages passed between the main application and the worker must be serialized, it is not possible to pass the entire index back to the application. So instead we keep the index in the worker and use messages to send queries to it and to get the results back.
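The protocol is small: the worker builds the index, announces that it is ready, and thereafter answers query messages. Here is a sketch of the worker side (all names are assumptions; in a real Worker script `post` would be `postMessage` and the returned handler would be wired to `self.onmessage` — plain functions keep the sketch self-contained):

```javascript
// Sketch of the worker side of the protocol (assumed names, not the real
// docs-app worker script).
function createSearchWorker(buildIndex, post) {
  var index = buildIndex();            // the expensive work, on the worker's own thread
  post({ e: 'index-ready' });          // tell the app the index can now be queried
  return function onQuery(message) {   // queries arrive as messages...
    post({ e: 'query-ready', results: index.search(message.q) });  // ...results go back the same way
  };
}
```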

This all has to happen asynchronously, but that’s OK because AngularJS loves async and provides an implementation of the beautiful Q library in the form of the $q service.

The code for the docsSearch service to set up and query the index looks like this:
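(The embedded snippet has not survived in this copy of the post; below is a reconstruction from the description that follows. It is a sketch, not the real source: it substitutes native Promises and a pluggable `worker` for $q, $rootScope and a real Web Worker so that it can stand alone, and every name in it is an assumption.)

```javascript
// Sketch of the shape of the docsSearch service (assumed names).
function makeDocsSearch(worker) {
  var indexReady;   // our own deferred for the index
  var indexBuilt = new Promise(function (resolve) { indexReady = resolve; });
  var resolveResults;   // deferred for the most recent query (a single shared
                        // deferred - the race discussed in the comments below)

  worker.onmessage = function (event) {
    // In the real service this work is wrapped in $rootScope.$apply(), because
    // worker messages arrive outside Angular's digest cycle.
    if (event.data.e === 'index-ready') { indexReady(); }
    if (event.data.e === 'query-ready') { resolveResults(event.data.results); }
  };

  // The service itself: a function taking a query string, returning a promise.
  return function docsSearch(q) {
    var results = new Promise(function (resolve) { resolveResults = resolve; });
    // The query is not sent until the index promise has resolved, so early
    // queries simply queue up until the index is ready.
    indexBuilt.then(function () { worker.postMessage({ q: q }); });
    return results;
  };
}
```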

You can see that since we are interacting with an async process outside of Angular we need to create our own deferred object for the index and also use $apply to trigger Angular’s digest when things happen.

The service itself is a function which takes a query string (q) and returns a promise of the results, which will be resolved once the index has been generated and the query results have been built. Notice also that the query is not even sent to the Web Worker thread until the promise of the built index has been resolved. This makes the whole thing very robust: you don’t need to worry about making queries before the index is ready; they will just get queued up and your promises will be resolved when everything is ready.

In a controller we can simply use a .then handler to assign the results to a property on the scope:
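(Again the embedded snippet is missing here; the following is a minimal sketch of the idea, written as a plain function of its dependencies rather than registered with an Angular module so that it stands alone. The names are assumptions.)

```javascript
// Sketch of the controller side (assumed names). In the app this would be
// registered as an Angular controller with $scope and docsSearch injected.
function SearchCtrl($scope, docsSearch) {
  $scope.search = function (q) {
    // A .then handler assigns the eventual results to a property on the scope;
    // because the service resolves inside $apply, the view updates automatically.
    docsSearch(q).then(function (results) {
      $scope.results = results;
    });
  };
}
```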

The effect of this is that we now have completely removed the blocking JavaScript from the main thread, meaning that our application now loads and renders much faster.



You can see that the same block of script is still there, but the solid darker rectangle is now much smaller (only 0.5ms). This is the synchronous, blocking part of the script. The lighter, paler block is its asynchronous children, which refers to our Web Worker. Interestingly, moving this work onto a new thread makes it run faster as well, so a win all round!

But what about Internet Explorer?

There is always the question of browser support. Luckily, most browsers do support Web Workers; the outstanding problems are the older Android browsers and, of course, Internet Explorer 8 and 9. So we have to put a fallback in place for these guys. I considered using a Web Worker shim, but that would just add even more code for the browser to download and run. Instead I used the little-understood service provider pattern to offer a different implementation of the docsSearch service dynamically, depending upon the browser’s capabilities.

The webWorkerSearchFactory, which we saw above, is provided if the browser supports Web Workers. Otherwise the localSearchFactory is provided instead.
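Stripped of the Angular plumbing, the selection amounts to a capability check at wiring time. This is only a sketch: in the docs app the check lives inside a service provider, the factory names follow the text above, and everything else is an assumption.

```javascript
// Sketch of capability-based selection. `windowLike` stands in for $window.
function chooseSearchFactory(windowLike, webWorkerSearchFactory, localSearchFactory) {
  // Feature-detect rather than sniffing browser versions: if the browser can
  // construct a Worker, use the web-worker-backed implementation.
  return windowLike.Worker ? webWorkerSearchFactory : localSearchFactory;
}
```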

This local search implementation is basically what we had in the non-performant version of the docs app, but with a small user-experience improvement. We still have to block the browser at some point to generate the index, but rather than doing it immediately at application bootstrap we delay the work by 500ms, which gives the browser time to at least complete the initial rendering of the page. This gives the user something to look at while the browser locks up for a couple of seconds as it generates the index.
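A sketch of that fallback (assumed names; `schedule` stands in for $timeout/setTimeout so the delay is visible from outside, and the real service would hand back promises rather than plain arrays):

```javascript
// Sketch: the index is still built on the main thread, but only after a 500ms
// delay so the browser can finish the initial render first.
function makeLocalSearch(buildIndex, schedule) {
  var index = null;
  schedule(function () { index = buildIndex(); }, 500);  // let the page paint first
  return function localSearch(q) {
    // For brevity this returns no results until the index exists; the real
    // implementation would queue queries behind a promise instead.
    return index ? index.search(q) : [];
  };
}
```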


We could, of course, go further and chunk the index building into smaller blocks, spaced out to prevent a long period of blocking, but since the number of browsers this affects is shrinking by the day it didn’t make sense to put more development time into it.

To cache or not to cache?

Some of you might be wondering why we don’t then cache the generated index in LocalStorage. This is certainly an option, but it raises additional questions, such as:

How and when should we invalidate old cached indexes? We generally release a new version of AngularJS once a week, so the cache would likely need to be invalidated weekly. Also, people may want to look at different versions of the docs: should we cache more than one version, and how do we decide which ones to keep?

Also, LocalStorage has very variable and limited capacity, so we would have to put code in place to cope with storage running out. The indexes are rather large, so this could happen quite often.

On mobile we probably don’t want to cache multiple large indexes anyway, since storage is at more of a premium on such devices.

Given the speed and user experience improvements achieved simply by moving the index generation to a new thread it was felt that this was a good enough solution for now.

Further Performance Improvements

In addition to this refactoring of the search index, we have removed dead code, refactored other parts to lower the amount of data that must be transferred, and minified much of the remaining JavaScript. Our top man Jeff has also tweaked the server, so many more files should now be zipped up when downloading too.

If you have any other suggestions feel free to comment or even better put together a Pull Request to the repository with further improvements!

8 thoughts on “AngularJS Docs Performance”

  1. After a quick glance at Lunr, it looks like Lunr should load and work in a Worker. So you should be able to have the worker create the entire index, and then post that index back to your UI thread. It *looks* like the index would pass a structured clone, so that should work. I’m guessing, because I haven’t tried it. But if it worked it would completely offload the indexing work to your worker thread until it was done.

  2. I take that back… you can’t send the index across threads… however you might be able to let the index live in the Worker, and just post queries to it and get responses. Which, again, will offload almost all of the work.

  3. I’m not sure you need $rootScope. Doesn’t angular invoke a digest cycle when a promise from $q is resolved?

  4. Looking at the query implementation, calling the query multiple times (i.e. binding it to a keystroke event without a debounce) would mean that only the last query called would be resolved, but it would quite likely resolve with the results of one of the earlier query calls.

    Which leads me to my broader question around promises: do unresolved/unrejected promises leak memory, even if the deferred is no longer referenced?

  5. @Clark Pan – Yes, you are right, if the queries come in faster than the resolutions then we will indeed get slightly dubious async behaviour:

    Let’s consider: q1 and q2 are queries, d1 and d2 the respective deferred objects, and p1 and p2 the respective promises.

    * q1 is requested; the results var references d1 and a then handler is attached to p1.
    * q2 is requested; the results var now references d2 and a then handler is attached to p2.
    * The response to q1 arrives from the web worker, but since our handler is dumb it resolves results, i.e. d2.
    * p2 is resolved, i.e. its then handler is called.
    * The response to q2 arrives from the web worker; it also tries to resolve results, i.e. d2, but since this is already resolved nothing happens.
    * p1 is never resolved and its then handler is never called.
    * p2 is resolved with the results of q1, and the results of q2 never make it back to the client.

    As it turns out, I have not seen this in practice, since our Web Worker appears to return quickly enough that we don’t miss any queries. A more robust solution would be to keep a hash of deferred objects in the results var and pass an identifier through to the web worker when we make the query, which we can match up when it sends back the completion message.
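    That id-matching scheme can be sketched like so (assumed names; native Promises in place of $q): pending deferreds live in a map keyed by a query id, and the worker echoes the id back so each response resolves exactly the promise for the query that produced it, even out of order.

```javascript
// Sketch of the more robust, id-matched query scheme (assumed names).
function makeRobustSearch(worker) {
  var pending = {};   // id -> resolve function for that query's promise
  var nextId = 0;
  worker.onmessage = function (event) {
    var resolve = pending[event.data.id];
    if (resolve) {
      resolve(event.data.results);
      delete pending[event.data.id];
    }
  };
  return function search(q) {
    var id = nextId++;
    return new Promise(function (resolve) {
      pending[id] = resolve;
      worker.postMessage({ q: q, id: id });  // the worker must echo id back
    });
  };
}
```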

    Regarding memory leakage… I need to have a think about that. In general, failing to resolve a deferred doesn’t leak memory: once deferreds become disconnected from the rest of the application they can be garbage collected. But this could be a problem if the callback (i.e. the then handler) holds something long-lived in its closure…

  6. Pete,

    Thanks for publishing this article. I realized that I was very discomfited by the docsSearch implementation, and the question/answer with @ClarkPan identified why.

    I am passionate about promises. For several days I brooded about this issue. Like a dog gnawing on a rib bone, I could not seem to let it go.

    So many issues can be solved with well-built promise chains. Thus I decided to write a version that I hope you like even more [than the above solution]. For your perusal and thoughts:

    BTW – I have not tested this version. 😉

Comments are closed.