Category Archives: Programming

AngularJS Docs Performance

I noticed at a recent Meetup in New York – http://youtu.be/-Z-NuSklwWU?t=39m38s – that the performance of the AngularJS docs app was questioned and quite rightly so. Loading up the docs app was a complete dog!

[Image: docs-before – Chrome timeline of the docs app loading before the fix]

As you can see from this Chrome timeline, the page is taking over 4 seconds (on a fairly powerful laptop with a fast broadband connection) to complete loading and rendering.

The mystery of the blocking script

What is going on here? Well, the actual loading of the various files is not too bad (though it could be better). Most of the files are loaded within 1500ms. But then there is this mysterious block of script evaluation happening from 1500ms to around 4000ms. Worse, this script blocks the whole browser, so the user experience is terrible. On a phone you might have to wait ten or more seconds, looking at a pretty blank page, before you can start browsing the docs.

In the video Brad and Misko suggest that the problem is with the Google Code Prettify plugin that we use to syntax-highlight our inline code examples. It turns out that, while this is a big old file, the real problem is elsewhere. Notice how the memory during this blocking script goes up and up, with a number of garbage collection cycles throughout? This is due to a memory-intensive process.

Houston… we have a problem with the Lunr module

A quick look at a profile of the site loading shows us that the main offenders are found in the lunr.js file.

[Image: profile-before – CPU profile of the docs app loading before the fix]

Lunr is the full text search engine that we run inside our app to provide you with the excellent instant search to all the Angular docs (API, guides, tutorials, etc). Unfortunately, we are creating the search index in the browser at the application bootstrap, which is blocking the rest of the application from running and stopping the page from rendering.

I looked at various options to fix this. Initially I thought, “Let’s just generate this index on the server and download it”. Sadly the generated index is several megabytes in size, so this is not acceptable (especially if you are on a 3G connection)! I also looked around to see if there were alternative search engines we could use, but I didn’t find anything that looked like it would be more performant than Lunr.

Getting async with Web Workers

So if this was a no-go and we had to generate the index in the browser, we needed a way to do it without blocking the rendering of the page. The obvious answer is to do it in a Web Worker. These clever little friends are scripts that run in their own thread in the browser (so they don't block the main rendering thread), and we can pass messages back and forth between them and our application. So we simply create a new worker when the application bootstraps and leave it to generate the index.

Since the messages passed between the main application and the worker must be serialized, it was not possible to pass the entire index back to the application. So instead we keep the index in the worker and use messages to send queries to it and to get the results back.

This all has to happen asynchronously, but that's OK because AngularJS loves async and provides an implementation of the beautiful Q library in the form of the $q service.

The code for the docsSearch service to set up and query the index looks like this:
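// A sketch of the webWorkerSearchFactory that backs the docsSearch service.
// The worker script URL and the message format here are assumptions rather
// than the exact code from the docs app.
var webWorkerSearchFactory = ['$q', '$rootScope', function($q, $rootScope) {

  // Kick off the worker at bootstrap and let it build the index on its own thread.
  var worker = new Worker('js/search-worker.js');

  // Resolved once the worker reports that the index has been built.
  var index = $q.defer();

  // One deferred per outstanding query, keyed by a query id.
  var pendingQueries = {};
  var nextQueryId = 0;

  worker.onmessage = function(event) {
    // We are outside Angular here, so wrap the work in $apply to trigger a digest.
    $rootScope.$apply(function() {
      if (event.data.e === 'index-ready') {
        index.resolve();
      } else if (event.data.e === 'query-ready') {
        pendingQueries[event.data.id].resolve(event.data.results);
        delete pendingQueries[event.data.id];
      }
    });
  };

  // The service is a function taking a query string and returning a promise of results.
  return function(q) {
    var deferred = $q.defer();
    var id = nextQueryId++;
    pendingQueries[id] = deferred;

    // Queries are not sent to the worker until the index promise has resolved.
    index.promise.then(function() {
      worker.postMessage({ q: q, id: id });
    });

    return deferred.promise;
  };
}];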

You can see that since we are interacting with an async process outside of Angular we need to create our own deferred object for the index and also use $apply to trigger Angular’s digest when things happen.

The service itself is a function which takes a query string (q) and returns a promise of the results, which will be resolved when both the index has been generated and the query results have been built. Notice also that the query is not even sent to the Web Worker thread until the promise for the built index has been resolved. This makes the whole thing very robust, since you don't need to worry about making queries before the index is ready: they will just get queued up and your promises will be resolved when everything is ready.

In a controller we can simply use a .then handler to assign the results to a property on the scope:
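// A sketch of the usage; the module, controller and property names here are assumptions.
angular.module('docsApp').controller('SearchCtrl', ['$scope', 'docsSearch',
  function($scope, docsSearch) {
    $scope.search = function(q) {
      docsSearch(q).then(function(results) {
        $scope.results = results;
      });
    };
  }
]);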

The effect of this is that we now have completely removed the blocking JavaScript from the main thread, meaning that our application now loads and renders much faster.

[Image: docs-after – Chrome timeline of the docs app loading after the fix]

 

See now that the same block of script is still there, but the solid darker rectangle is much smaller (only 0.5ms). This is the synchronous blocking part of the script. The lighter, paler block is the asynchronous children, which refers to our Web Worker. Interestingly, by moving this work onto a new thread it also runs faster, so a win all round!

But what about Internet Explorer?

There is always the question of browser support. Luckily most browsers do support Web Workers; the outstanding problems are the older Android browsers and, of course, Internet Explorer 8 and 9. So we have to put a fallback in place for these guys. I considered using a Web Worker shim, but that would just add even more code for the browser to download and run. Instead I used the little-understood service provider pattern to dynamically offer a different implementation of the docsSearch service depending upon the browser's capabilities.

The webWorkerSearchFactory, which we saw above, is provided if the browser supports Web Workers. Otherwise the localSearchFactory is provided instead.
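The switch itself is tiny – something along these lines (the module name is an assumption), with support for window.Worker as the capability test:

angular.module('docsApp')

.provider('docsSearch', function() {
  // Pick the implementation once, when the injector first asks for the service.
  this.$get = window.Worker ? webWorkerSearchFactory : localSearchFactory;
});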

This local search implementation is basically what we had in place in the non-performant version of the docs app but with a small user experience improvement. We still have to block the browser at some point to generate the index but rather than doing it immediately at application bootstrap we delay the work for 500ms, which gives the browser time to at least complete the initial rendering of the page. This gives the user something to look at while the browser locks up for a couple of seconds as it generates the index.
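A sketch of that fallback, assuming a getPagesData() helper for the raw page data and illustrative lunr field names (the 500ms delay is the one described above):

var localSearchFactory = ['$q', '$timeout', function($q, $timeout) {

  var index = $q.defer();

  // Wait 500ms so the browser can finish the initial render before we block
  // the main thread building the index.
  $timeout(function() {
    var idx = lunr(function() {
      this.ref('path');
      this.field('titleWords', { boost: 50 });
      this.field('bodyWords');
    });
    angular.forEach(getPagesData(), function(page) {  // getPagesData() is an assumed helper
      idx.add(page);
    });
    index.resolve(idx);
  }, 500);

  // Same signature as the Web Worker version: a query string in, a promise of results out.
  return function(q) {
    return index.promise.then(function(idx) {
      return idx.search(q);
    });
  };
}];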

 

We could, of course, go further with this and chunk up the index building into smaller blocks that are spaced out to prevent a long period of blocking, but since the number of browsers that this affects is shrinking by the day it didn't make sense to put more development time into this.

To cache or not to cache?

Some of you might be wondering why we are not caching the generated index in LocalStorage. This is certainly an option, but it raises additional questions, such as:

How and when should we invalidate old cached indexes? We generally release a new version of AngularJS once a week, so it is likely that the cache would need to be invalidated weekly. Also, people may want to look at different versions of the docs: should we cache more than one version, and how do we decide which ones to keep?

Also, LocalStorage has very variable and limited capacity, so we would have to put code in place to cope with the storage running out. The indexes are rather large, so this could happen quite often.

On mobile we probably don't want to be caching multiple large indexes anyway, since storage is more at a premium on such devices.

Given the speed and user experience improvements achieved simply by moving the index generation to a new thread it was felt that this was a good enough solution for now.

Further Performance Improvements

In addition to this refactoring of the search index, we have also removed dead code, refactored other bits to lower the amount of data that must be transferred, and minified much of the rest of the JavaScript. Our top man Jeff has also tweaked the server, so we should be getting many more files zipped up when downloading now too.

If you have any other suggestions feel free to comment or even better put together a Pull Request to the repository with further improvements!

AngularJS Scoping (by value – by ref)

It is important to remember that you are basically working with JavaScript. No one would expect the following code to update the outer variable name:
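// Purely illustrative: a string is passed into a function and reassigned there.
var name = 'Bob';

function setName(name) {
  // The parameter is a copy of the string, so this only changes the copy.
  name = 'Alice';
}

setName(name);
console.log(name); // still 'Bob'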

This is because strings are passed by value in JavaScript. So this is equivalent to:
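// Again illustrative: the call above behaves just like assigning to a second variable.
var name = 'Bob';
var innerName = name;  // copies the string value

innerName = 'Alice';   // only the copy changes
console.log(name);     // still 'Bob'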

Objects, on the other hand, are passed by reference, so this does work:
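// Illustrative: the object itself is shared, so mutating it inside the function
// is visible outside.
var user = { name: 'Bob' };

function setName(user) {
  user.name = 'Alice';
}

setName(user);
console.log(user.name); // 'Alice'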

Which is equivalent to:
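// Illustrative: copying the reference, not the object.
var user = { name: 'Bob' };
var innerUser = user;

innerUser.name = 'Alice';  // both variables point at the same object
console.log(user.name);    // 'Alice'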

Passing Services to Controllers

When using N2 with MVC you may want to use some N2 services in your controller. This is really easily achieved by simply creating a constructor for your controller with a parameter for each service you require; N2 will kindly wire up the controller for you.

Here is a very simple and artificial example:
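// An artificial example: IGreetingService is a made-up service that has been
// registered with N2's engine; the point is just the constructor parameter.
public class HelloController : Controller
{
    private readonly IGreetingService greetingService;

    // N2 resolves IGreetingService and calls this constructor for us.
    public HelloController(IGreetingService greetingService)
    {
        this.greetingService = greetingService;
    }

    public ActionResult Index()
    {
        ViewData["Greeting"] = greetingService.GetGreeting();
        return View();
    }
}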

And here is a real example from the UserRegistrationController of the MVC templates project. This controller needs to be able to send emails to users who register and log errors.
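Its constructor is shaped roughly like this – the exact interface names in the templates project may differ, so treat IMailSender and IErrorHandler here as stand-ins:

public class UserRegistrationController : Controller
{
    private readonly IMailSender mailSender;
    private readonly IErrorHandler errorHandler;

    // Both dependencies are N2 services, wired up by the engine.
    public UserRegistrationController(IMailSender mailSender, IErrorHandler errorHandler)
    {
        this.mailSender = mailSender;
        this.errorHandler = errorHandler;
    }

    // ... the registration actions use mailSender to email the new user
    // and errorHandler to log anything that goes wrong ...
}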

Numeric Operators

Ruby allows built-in numeric types to co-exist seamlessly with user-defined types. To the client of a non-standard library, the interactions between the library and the built-in numeric types are fairly invisible. This allows you to define a new type and use instances of it as parameters to built-in methods. As an example, consider that we have implemented a class called MyNumber.

class MyNumber

end

If the right methods have been implemented on MyNumber then the following code will work – hopefully with the correct semantics.

x = MyNumber.new

30 + x
54.4 / x
0x80000000 << x

There are two strategies employed in the Numeric classes to achieve this:

Convert to Integer/Float
In some cases it is necessary that the parameter passed in is actually of the same type as the Numeric. An example is the bit shift operators: it doesn't make sense to shift a value by a non-integer amount (or does it?). In these cases the Numeric class attempts to convert the parameter to the right type – in this case an Integer (or, to be more precise, a Fixnum). This is done either directly, as in the case of Float, or indirectly by calling either to_i or to_int on the parameter. So to allow your class to be used with the Fixnum#<< operator all you have to do is implement to_i. There is a similar to_f method you can implement to convert to Float.
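So, for example, something along these lines is enough to make the shift example above work (the wrapped value is purely illustrative):

class MyNumber
  def initialize(value = 3)
    @value = value
  end

  # Fixnum#<< converts its argument by calling to_int / to_i, so implementing
  # these lets a MyNumber be used as the shift amount.
  def to_int
    @value
  end
  alias to_i to_int
end

x = MyNumber.new
0x80000000 << x   #=> 17179869184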

Value Coercion
Sometimes it is simply necessary to ensure that the receiver of the method and the parameter have the same type. The aim of coercion is to convert the two operands to a common type, and it is used frequently to allow Fixnum and Bignum objects to interact with Floats. For instance, in the following code the Fixnum gets coerced up to a Float before the / operator is invoked, so it is the Float#/ operator that actually does the calculation.

3 / 4.3

This is achieved by implementing the coerce method, which attempts to convert its parameter to the same type as its receiver. The clever bit comes inside the Numeric class when an operator is called with a parameter it doesn't know about.
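For example, Float#coerce converts the other operand to a Float and hands back both values as Floats:

4.3.coerce(3)   #=> [3.0, 4.3]
1.coerce(2.5)   #=> [2.5, 1.0]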

Consider the following code:

class MyNumber

end

x = MyNumber.new
3 / x

The Fixnum#/ operator is invoked, but it doesn't know what to do with a parameter of type MyNumber. Instead it calls MyNumber#coerce, passing in the self object (in this case the Fixnum 3) as the parameter. The / operator is then invoked on the values returned from coerce.
So, assuming the MyNumber class implements both MyNumber#coerce (which coerces Fixnums into MyNumber instances) and MyNumber#/, the above code will work – again, hopefully with reasonable semantics.
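A minimal sketch of such a class might look like this (what the value means, and what / should do with it, are of course up to you):

class MyNumber
  attr_reader :value

  def initialize(value)
    @value = value
  end

  # Called by Fixnum#/ when it meets a MyNumber: wrap the Fixnum up as a
  # MyNumber and return the pair, so that MyNumber#/ ends up being invoked.
  def coerce(numeric)
    [MyNumber.new(numeric), self]
  end

  def /(other)
    MyNumber.new(value / other.value)
  end
end

x = MyNumber.new(2)
(3 / x).value   #=> 1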

IronRuby Implementation
This is all well and good but what about implementing this mechanism in the IronRuby libraries?

The convert-to-Integer implementation is fairly straightforward: we can call Protocols.ConvertToInteger and let it do the work of invoking to_i or to_int accordingly. E.g.

[RubyMethod("|")]
public static object BitwiseOr(CodeContext/*!*/ context, int self, object other) {
    other = Protocols.ConvertToInteger(context, other);
    return self | Protocols.CastToFixnum(context, other);
}

In the case of coercion I wrote a bit of helper code in the Numeric class that will do the coercion and then call the original operator or method:

public static object CoerceAndCall(CodeContext/*!*/ context, object self, object other, DynamicInvocation invoke) {
    RubyArray coercedValues;
    try {
        // Swap self and other around to do the coercion.
        coercedValues = NumericSites.Coerce(context, other, self);
    } catch (MemberAccessException x) {
        throw MakeCoercionError(context, self, other, x);
    } catch (ArgumentException x) {
        throw MakeCoercionError(context, self, other, x);
    }
    // But then swap them back when invoking the operation
    return invoke(context, coercedValues[0], coercedValues[1]);
}

This method invokes coerce on the other object, passing in self, and then invokes the method that you passed in, which corresponds to the original method called.

The delegate defining what can be passed in is fairly generic:

public delegate object DynamicInvocation(CodeContext/*!*/ context, object self, object other);

This allows you to do the following for the Fixnum#% operator, when the other parameter is not of a known type:

[RubyMethod("%")]
public static object ModuloOp(CodeContext/*!*/ context, object self, object other) {
    return Numeric.CoerceAndCall(context, self, other, NumericSites.ModuloOp);
}

Operator Aliasing
In the case where an operator has been aliased, such as Fixnum#modulo and Fixnum#%, we have to be careful. Ruby insists that the same operator/method is invoked after the coercion, regardless of aliasing. This means that, despite Fixnum aliasing modulo and % to the same implementation, MyNumber could implement the two separately, even with different semantics.

Therefore, when implementing Fixnum in IronRuby, it is necessary for modulo and % to each have their own method when the parameter is of arbitrary type.

[RubyMethod("%")]
public static object ModuloOp(CodeContext/*!*/ context, object self, object other) {
    return Numeric.CoerceAndCall(context, self, other, NumericSites.ModuloOp);
}

[RubyMethod("modulo")]
public static object Modulo(CodeContext/*!*/ context, object self, object other) {
    return Numeric.CoerceAndCall(context, self, other, NumericSites.Modulo);
}

They can of course share implementation for overloads that have known parameters:

[RubyMethod("%"), RubyMethod("modulo")]
public static object ModuloOp(int self, int other) {
    RubyArray divmod = DivMod(self, other);
    return divmod[1];
}

Note the multiple RubyMethod attributes in this method where other is int (Fixnum).

Method overloading

In IronRuby you can create a number of different implementations of a method that each take different parameters, much in the same way that you can in C#. For example, in BignumOps there is a * (multiply) method of the form self * other. Here are the overloads:

[RubyMethod("*")]
public static object Multiply(BigInteger self, BigInteger other)

[RubyMethod("*")]
public static object Multiply(BigInteger self, double other)

[RubyMethod("*")]
public static object Multiply(CodeContext/*!*/ context, BigInteger self, object other)

As you can see, if Multiply is called with a BigInteger (or an int, for that matter) as the “other” parameter, the first method is executed. If it is a double, which maps to Float in Ruby, then the second is executed, and any other type invokes the third implementation (the context parameter is a hidden support mechanism that provides access to things such as the dynamic invocation machinery).

In other words the DLR is responsible for selecting the correct method overload to call at runtime based on the number and type of parameters.

This is different to how overloading is achieved in the standard Ruby implementation, where there is just one method and that method works out for itself which implementation it should execute. This is the equivalent C function in the C Ruby implementation:

VALUE
rb_big_mul(x, y)
    VALUE x, y;
{
    long i, j;
    BDIGIT_DBL n = 0;
    VALUE z;
    BDIGIT *zds;

    if (FIXNUM_P(x)) x = rb_int2big(FIX2LONG(x));
    switch (TYPE(y)) {
      case T_FIXNUM:
        y = rb_int2big(FIX2LONG(y));
        break;

      case T_BIGNUM:
        break;

      case T_FLOAT:
        return rb_float_new(rb_big2dbl(x) * RFLOAT(y)->value);

      default:
        return rb_num_coerce_bin(x, y);
    }

    j = RBIGNUM(x)->len + RBIGNUM(y)->len + 1;
    z = bignew(j, RBIGNUM(x)->sign == RBIGNUM(y)->sign);
    zds = BDIGITS(z);
    while (j--) zds[j] = 0;
    for (i = 0; i < RBIGNUM(x)->len; i++) {
        BDIGIT_DBL dd = BDIGITS(x)[i];
        if (dd == 0) continue;
        n = 0;
        for (j = 0; j < RBIGNUM(y)->len; j++) {
            BDIGIT_DBL ee = n + (BDIGIT_DBL)dd * BDIGITS(y)[j];
            n = zds[i + j] + ee;
            if (ee) zds[i + j] = BIGLO(n);
            n = BIGDN(n);
        }
        if (n) {
            zds[i + j] = n;
        }
    }

    return bignorm(z);
}

You can see here that there is a lot of type checking and conversion going on. I think that the IronRuby/DLR way is a much more pleasant and maintainable way of developing.

One thing that did worry me, though, was what happens if someone monkey patches your class. For instance, what if I did the following in a Ruby program?

class Bignum
  def *(other)
    puts "hello"
    self + other
  end
end

In the C Ruby implementation, the whole C function is overridden and so all that complex type conversion goes out the window and you are left with the simple method given:

>> 0x80000000 * 2
hello
=> 2147483650
>> 0x80000000 * 0.3
hello
=> 2147483648.3

What would happen in the IronRuby case?

I felt it could have gone either way. At first I expected that it would just add a method with the equivalent of the following signature:

[RubyMethod("*")]
public static object Multiply(CodeContext/*!*/ context, BigInteger self, object other)

In actual fact, it appears that all the overloaded methods are wiped out. This is perhaps common sense: the DLR now has no type information on the parameters to distinguish the new Ruby method from those written in C#, so it just blats them all.

Even in cases where different overloads accepted different numbers of parameters, so you could distinguish based on the number of parameters alone, those get wiped out too. You just get the one method, and calls to it with a different number of parameters cause an ArgumentError.

So all is good and you don’t have to worry about spreading your code out into multiple overloaded methods to make it easier to read.