Anyone who spends a lot of time at conferences, forums, and professional exhibitions knows how unreliable the network at such events can be: Wi-Fi connections become virtually unusable. The reasons are well known: large crowds of people, lots of devices, and a limited access channel. The unfortunate consequences are frequent interruptions, failed connections, and long delays. Showing a demo over an overloaded link is a difficult task that requires a lot of patience.

After several IT exhibitions we decided to optimize our protocol to work under such uncomfortable and ineffective conditions, where the data to transfer is large and the channel is small. If there is no way to widen the channel or to use multiple channels, we have to reduce the load on the channel itself.

During one such IT event we recorded a network packet dump with DeviceHive running, so that we could analyze it in detail later and fix the problems we found.

What problems did we see after analyzing the dump, and how did we fix them?

DNS caching

First of all, we noticed prolonged DNS name resolution: frequent repeated requests and growing waiting times. Each HTTP request is preceded by a DNS query for the server address. Under laboratory conditions this step is transparent and almost instant, but under "heavy" exhibition loads the probability of losing a UDP packet is very high (DNS queries normally use UDP). If a packet is lost, the resolver sends the DNS request again and again with a 5-second timeout, so the total request time grows.
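On Linux with glibc, that 5-second retry timeout is the resolver's default and can be lowered through resolv.conf. This does not fix packet loss, but it shortens the stall each lost UDP packet causes. A sketch of such a tweak (the values here are illustrative, not what we shipped):

```
# /etc/resolv.conf (glibc resolver; see resolv.conf(5))
# timeout:  seconds to wait before retrying a query (default 5)
# attempts: number of tries before giving up (default 2)
options timeout:1 attempts:3
```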

For name resolution we used the standard getaddrinfo() function. On Windows platforms it works fine: the system caches resolved server addresses and does not send a new DNS request for a name it already knows.

It should be noted that on some Linux platforms the situation is the opposite: the system sends a DNS request every time, as if it were a fresh query. So we decided to cache DNS names manually in the application itself.

Now, once a server address has been obtained, it is stored in a cache, and all subsequent requests for that server name are made without new DNS queries.
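A minimal sketch of such in-application caching (the names here are ours, not DeviceHive's): the first lookup for a host goes through getaddrinfo(), and the result is kept for the lifetime of the process, so later requests to the same server need no DNS traffic. For brevity the cache is keyed on the host name only and ignores TTLs; a production client would also key on the service and expire entries.

```c
#include <netdb.h>
#include <stdlib.h>
#include <string.h>

/* One cache entry: host name -> resolved addrinfo list. */
struct dns_entry {
    char host[256];
    struct addrinfo *ai;
    struct dns_entry *next;
};

static struct dns_entry *dns_cache;

/* Drop-in wrapper around getaddrinfo() with process-lifetime caching. */
int cached_getaddrinfo(const char *host, const char *service,
                       const struct addrinfo *hints, struct addrinfo **res)
{
    for (struct dns_entry *e = dns_cache; e != NULL; e = e->next) {
        if (strcmp(e->host, host) == 0) {
            *res = e->ai;          /* cache hit: no DNS query sent */
            return 0;
        }
    }
    int rc = getaddrinfo(host, service, hints, res);  /* cache miss */
    if (rc == 0) {
        struct dns_entry *e = malloc(sizeof *e);
        if (e != NULL) {
            strncpy(e->host, host, sizeof e->host - 1);
            e->host[sizeof e->host - 1] = '\0';
            e->ai = *res;          /* the entry owns the addrinfo list now */
            e->next = dns_cache;
            dns_cache = e;
        }
    }
    return rc;
}
```

Cached entries are never freed with freeaddrinfo(), which is acceptable for a small, fixed set of server names that live as long as the process does.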

Query size reduction

We applied this principle in our previous project, where the program fills a Rainbow Cube of 64 LEDs with different colors. Originally, the command carried information about all 64 points, with 6 parameters per LED (its XYZ position and RGB color). This format allowed us to control any number of LEDs in the Rainbow Cube at the same time, but the resulting command was about 3-4 KB per packet.

As the demo only filled parts of the cube with a single color and did not use complex animations, we decided to simplify the command format. Reducing the command to 6 parameters in total (the size of the cube region DX DY DZ and the color RGB) minimized the packet size and also sped up command processing in the program.
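To make the saving concrete, here is a rough sketch of the two layouts as C structs. This is our illustration, not the actual DeviceHive wire format: real commands travel as JSON over REST, which is presumably where the 3-4 KB figure comes from, so actual sizes were larger than these raw payloads.

```c
#include <stdint.h>

/* Old format: one 6-byte entry per LED (position XYZ + color RGB),
 * 64 entries in every command. */
struct led_entry   { uint8_t x, y, z, r, g, b; };
struct old_command { struct led_entry leds[64]; };    /* 64 * 6 = 384 bytes */

/* New format: fill one region of the cube with a single color. */
struct fill_command { uint8_t dx, dy, dz, r, g, b; }; /* 6 bytes */
```

A full-cube fill thus shrinks from 64 per-LED entries to a single 6-byte command, a 64x reduction in raw payload before any protocol overhead.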

As we later observed, this reduction in command size cut the load on the Wi-Fi channel several times over.

Protocol optimization

DeviceHive 1.2 includes a protocol optimization that avoids data redundancy.

We used the updated DeviceHive to change the REST protocol in use: the old exchange was convoluted and slowed down responses from the server. How did it look before?

The client sends a command -> the server returns a copy of it -> the device receives the command and executes it -> the device then sends a status update -> the server returns a copy of the command -> the client receives a status notification with a copy of the command attached.

With the current version of the REST protocol, the server responds with the standard status code ‘204 No Content’ if no changes have taken place. Thus we avoid unnecessary traffic.
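The client-side branch can be sketched as follows (the function and its contract are our illustration, not the DeviceHive client API): when the server answers 204, there is no body to download or parse, which is exactly where the traffic saving comes from.

```c
#include <stddef.h>

/* Hypothetical poll-response handler.
 * Returns 1 if there is a payload to parse,
 *         0 if the server reported no changes (204),
 *        -1 on an unexpected response. */
int handle_poll_response(int status, const char *body, size_t body_len)
{
    if (status == 204)                 /* 204 No Content: nothing changed, */
        return 0;                      /* and no payload was transferred   */
    if (status == 200 && body != NULL && body_len > 0)
        return 1;                      /* new data: hand the body to the parser */
    return -1;                         /* anything else is an error */
}
```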

We’ve already tested the service on a “bad” network simulator, and can’t wait for it to be field tested.

These techniques can certainly help you improve your Wi-Fi communications at crowded events. The optimization code used in DeviceHive is available in its repository.