Sunday, May 4, 2014

Meteor, load balancing and sticky sessions

To support the DDP protocol, Meteor clients establish a long-lived connection with the server that is uniquely identified by a session identifier.  DDP allows Meteor clients to make RPC calls and also lets the server keep the client updated with changes to data, i.e., Mongo documents.  Meteor uses SockJS, which provides a cross-browser, WebSocket-like API that falls back to long polling when WebSockets are not available.
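
As a rough sketch of the kind of traffic DDP carries over that connection (the collection, publication, and method names below are made up purely for illustration):
// server side: a publication and an RPC-style method, both served over DDP
Tasks = new Meteor.Collection('tasks');

if (Meteor.isServer) {
  Meteor.publish('tasks', function () {
    return Tasks.find();          // changes to these documents are pushed to subscribers
  });
  Meteor.methods({
    addTask: function (text) {    // client-invokable RPC
      return Tasks.insert({ text: text, createdAt: new Date() });
    }
  });
}

// client side: subscribe to live data and invoke the method over the same connection
if (Meteor.isClient) {
  Meteor.subscribe('tasks');
  Meteor.call('addTask', 'hello', function (err, id) {
    // the result (or error) arrives over the long-lived DDP connection
  });
}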

Meteor with SockJS long polling

Consider the following setup: Nginx acts as a load balancer in front of two or more Meteor servers.  The Nginx server configuration looks like this:
upstream meteor_server_lp {
    server localhost:3000;
    server localhost:3001;
}
server {
        listen       8084;
        server_name  localhost;

        location / {
            proxy_pass  http://meteor_server_lp;
        }

}
This Nginx configuration does not support WebSockets, so the Meteor clients will fall back to long polling.  Because long polling re-establishes connections periodically as they time out, such a configuration requires sticky sessions to ensure that a client is always directed to the server that holds its existing SockJS session.  A SockJS request that the Nginx load balancer directs to the wrong server fails with a 404 Not Found.  The solution is to compile Nginx with the sticky module and make the upstream sticky, like this:
upstream meteor_server_lp {
    sticky;
    server localhost:3000;
    server localhost:3001;
}
server {
        listen       8084;
        server_name  localhost;

        location / {
            proxy_pass  http://meteor_server_lp;
        }

}
When this setup sits behind an AWS load balancer, sticky sessions also need to be enabled on the load balancer itself: configure application-controlled stickiness keyed on the "route" cookie that Nginx's sticky module sets.

Meteor with WebSockets

Since version 1.3.13, Nginx supports WebSockets using the HTTP/1.1 protocol switching (Upgrade) mechanism. The configuration looks like this:
upstream meteor_server {
    server localhost:3000;
    server localhost:3001;
}
server {
        listen       8082;
        server_name  localhost;

        location / {
            proxy_pass  http://meteor_server;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
        }
}
This works as is, without any need for sticky sessions. The WebSocket connection between the client and one of the two servers is established when the client first connects, and is re-established on reconnect or if one of the servers goes down.  AWS load balancers don't support WebSockets with HTTP listeners, but the setup works with a TCP listener.  This does mean, however, that any SSL termination must happen on the instances being load balanced.
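
For reference, a minimal sketch of what terminating SSL on such an instance might look like, combined with the WebSocket proxy settings above (the server name and certificate paths are placeholders, not part of the original setup):
server {
        listen       443 ssl;
        server_name  example.com;                                # placeholder

        ssl_certificate      /etc/nginx/ssl/example.com.crt;     # placeholder path
        ssl_certificate_key  /etc/nginx/ssl/example.com.key;     # placeholder path

        location / {
            proxy_pass  http://meteor_server;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
        }
}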