While playing around with low-level Witty, I ran into a serious performance issue: one of the operations we were doing was taking a relatively long time (about 1s) to complete. This didn’t make sense, as the operation itself was not particularly onerous.
The operation involved receiving a file over HTTP. The file was then split into blocks; each block had every byte transformed, a CRC16 checksum computed over it, and was finally encoded with Base64. The entire file was then returned as a response over HTTP.
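The per-block processing can be sketched as follows. This is only an illustration: the block size, the per-byte transform (a XOR here), and the CRC16 variant (CRC-16/CCITT-FALSE) are assumptions, not the actual parameters used.

```python
import base64

CHUNK_SIZE = 4096  # assumed block size, not the actual value used
XOR_KEY = 0x5A     # hypothetical per-byte transform

def crc16_ccitt(data: bytes, crc: int = 0xFFFF) -> int:
    """CRC-16/CCITT-FALSE, one plausible CRC16 variant."""
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if (crc & 0x8000) else (crc << 1)
            crc &= 0xFFFF
    return crc

def process_file(data: bytes) -> bytes:
    """Chunk, transform each byte, checksum each block, Base64-encode."""
    out = []
    for i in range(0, len(data), CHUNK_SIZE):
        block = bytes(b ^ XOR_KEY for b in data[i:i + CHUNK_SIZE])
        crc = crc16_ccitt(block)
        # Append the 2-byte CRC to the block before encoding (an assumption
        # about how the checksum travels with the data).
        out.append(base64.b64encode(block + crc.to_bytes(2, "big")))
    return b"".join(out)
```

Even for several hundred KB, a pipeline like this is cheap: a few passes over the data, nothing that should take anywhere near a second.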
Testing it out with curl resulted in a request time of about 1000ms, as measured by Witty itself. At first I thought it might be due to the computations. However, even raw echoing of the data, without any processing, took about 1s.
So I thought it might be due to the size of the data being sent, which was on the order of several hundred KB. This didn’t make any sense either, but just to test things out I tried different file sizes, which resulted in only minor changes in the time: still about 1s.
So I thought that this might be due to the underlying structure of Witty. However, when I performed other operations instead of this one, I got response times in the range of 10-20ms, about 100 times faster.
So, there was something else amiss. The computations took hardly any time, and the data transfer, while certainly affecting the time taken, did not make much of a dent in it. So, where was this 1s overhead coming from?
Further investigation showed that the size of the upload does affect things tremendously. Any upload above 1KB takes about 1s to process, while uploads below 1KB take less than 1ms. That’s a 1000x difference, which is HUGE.
This warrants further investigation if it turns out to be a bottleneck.
Update: It turns out that this is a problem with the upload, on the browser end. The request was using HTTP/1.1 and, in particular, the “Expect: 100-continue” mechanism: the client sends the headers first and waits for the server to reply with “100 Continue” before transmitting the body; if no interim response arrives, it gives up after a timeout (commonly about 1s) and sends the body anyway, which matches the delay observed here. curl, for one, only sends this header for request bodies larger than about 1KB, which would also explain the threshold. Switching to plain old HTTP/1.0, which has no 100-continue mechanism, brings the response time down to 10-20ms as expected.
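The difference between the two requests can be sketched by building them as raw bytes. This is a minimal illustration, not how any particular client constructs its requests; the path and host are hypothetical.

```python
def build_upload_request(body: bytes, use_expect: bool) -> bytes:
    """Build a raw HTTP POST request, with or without Expect: 100-continue.

    With the Expect header, the client sends only the header block first,
    waits for the server's "100 Continue" interim response, and falls back
    to sending the body after a timeout (commonly ~1s) if none arrives.
    Without it, headers and body go out together in one shot.
    """
    version = b"HTTP/1.1" if use_expect else b"HTTP/1.0"
    headers = [
        b"POST /upload " + version,            # hypothetical path
        b"Host: localhost:8080",               # hypothetical host
        b"Content-Length: " + str(len(body)).encode("ascii"),
    ]
    if use_expect:
        headers.append(b"Expect: 100-continue")
    return b"\r\n".join(headers) + b"\r\n\r\n" + body
```

With curl specifically, the header can also be suppressed while staying on HTTP/1.1 by passing `-H 'Expect:'`, and a server can avoid the client-side timeout by replying with “100 Continue” promptly.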