Improving CS2D’s File Transfer

CS2D is a relatively small indie game, but one with impressive tenure: its first public version was released around 2003, developed at the time by a single person, and it's maintained to this day by a small team of developers, yours truly among them. Because of this, the dev team finds itself working with true legacy code. Although there was a total rewrite in mid-2008, that still means parts of the codebase are almost 10 years old. One area that has been a point of contention for a long time now is the file transfer code.

Originally, the file transfer code was relatively simple, because servers didn't need to transfer all that many files to players, except in the case of maps with many custom resources, like role play and adventure maps. At that point, the transfer rate was very low: around 10 KB/s was a realistic number for most servers. The game added native support for HTTP transfer, but because most server hosts were amateurs and enthusiasts, it was configured extremely rarely; the returns were deemed too low for the investment. This status quo remained in place for a few years.

However, the situation began changing after Lua modding became an important part of the CS2D gameplay experience. Suddenly, many more resources were being transferred to players, and the transfer speeds simply would not do. A lot of effort went into improving CS2D's data transfer protocol, and speeds increased: 20 KB/s became the default low setting, and transfer rates of 200 KB/s and above became possible. At that point the speed would often be limited by the network hardware of the PCs and laptops running many of CS2D's servers. This made the server join process a lot smoother and more pleasant as far as large files (like music and other audio) were concerned, but one important chokepoint remained: lots of small files being transferred back-to-back.

[Image: total number of files downloaded via CS2D. A moderately active CS2D gfx/ directory shows just how many (small) files are sent to the client by different servers.]

Fast-forward to the present day. Lua modding has become ubiquitous. Many of the popular servers run extremely heavy modifications to the game, and most of the others run administration and convenience mods of some sort. Most importantly, the majority of them depend on many small resource files, all of which have to be transferred to the client. This creates serious network congestion for clients, and it's not uncommon on popular servers to experience some lag due to newly joining players. We hoped to find more avenues to increase the protocol's performance, but by now we have hit its limit.

The general spirit of the solution is fairly obvious: if your algorithm is good with large data but bad with lots of small pieces of data, then combine those small pieces into larger ones and work with those. The actual implementation, however, is more complicated, especially considering the legacy codebase we're working with. The BlitzMax platform is obscure and at this point very nearly obsolete, and when applied to a relatively mainstream (for the platform, anyway) product like CS2D, it's also quite fragile. We looked to the usual suspects, the gzip and lzma libraries, but because of BlitzMax's arcane nature, those were difficult to apply to the situation, and a little overkill as well.
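To make the intuition concrete, here's a rough back-of-envelope model in Python. The per-file overhead figure is an assumption picked purely for illustration (the real per-file cost depends on the protocol's acknowledgement round-trips, and I'm not quoting actual measurements here), but it shows why a couple hundred tiny files cost so much more than one blob of the same total size:

```python
# Back-of-envelope model of why many small files transfer slowly.
# The numbers are illustrative assumptions, not measurements from
# CS2D's actual protocol.

RATE = 25 * 1024          # bytes/second (the default 25 KB/s cap)
PER_FILE_OVERHEAD = 0.25  # seconds of per-file protocol overhead (assumed)

def transfer_time(file_sizes, per_file_overhead):
    """Total time when each file pays its own fixed protocol overhead."""
    return sum(size / RATE + per_file_overhead for size in file_sizes)

# ~200 small sprites of ~2 KB each, similar to the test described below
sprites = [2 * 1024] * 200

file_by_file = transfer_time(sprites, PER_FILE_OVERHEAD)
bundled = transfer_time([sum(sprites)], PER_FILE_OVERHEAD)  # one big blob

print(f"file-by-file: {file_by_file:.1f} s")  # overhead paid 200 times
print(f"bundled:      {bundled:.1f} s")       # overhead paid once
```

With these made-up numbers the fixed overhead dominates the file-by-file case, and bundling wins by roughly the same factor we later saw in testing.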

However, BlitzMax offers its own compression implementation, which appears to be an implementation of gzip, and CS2D has been using it since the first improvements to the transfer protocol, so we decided to go with that. We quickly developed a homegrown archive format for the game and started experimenting with bundling game resources together to transfer them to clients. CS2D has always targeted the lower end of the normal distribution curve as far as player hardware is concerned, and connections are no exception: we're acutely aware that much of our playerbase often has access only to metered connections. This is why the client still "picks and chooses" the files it will receive based on their size. The check is a little skewed towards the lower end, because the size calculations are done on uncompressed files, but less data used is better than more.
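For illustration, here's a minimal Python sketch of the general shape such a bundle format can take. The field layout (a file count, then a u16 name length and u32 size per entry) is entirely my own assumption for this example, not CS2D's actual on-wire format, and zlib stands in for BlitzMax's built-in compression:

```python
# A minimal sketch of a "bundle many small files, compress once" format.
# The header layout is a hypothetical illustration, not CS2D's format.

import struct
import zlib

def pack_bundle(files: dict[str, bytes]) -> bytes:
    """files maps relative paths to raw bytes; returns one compressed blob."""
    out = bytearray(struct.pack("<I", len(files)))  # file count
    for name, data in files.items():
        encoded = name.encode("utf-8")
        # per-file header: u16 name length, u32 uncompressed file size
        out += struct.pack("<HI", len(encoded), len(data))
        out += encoded
        out += data
    # one compression pass over the whole bundle
    return zlib.compress(bytes(out))

def unpack_bundle(blob: bytes) -> dict[str, bytes]:
    raw = zlib.decompress(blob)
    (count,) = struct.unpack_from("<I", raw, 0)
    pos = 4
    files = {}
    for _ in range(count):
        name_len, size = struct.unpack_from("<HI", raw, pos)
        pos += 6
        name = raw[pos:pos + name_len].decode("utf-8")
        pos += name_len
        files[name] = raw[pos:pos + size]
        pos += size
    return files

def client_wants(uncompressed_size: int, limit: int) -> bool:
    # Client-side filter: sizes are checked before compression, so the
    # check is conservative and slightly skewed towards refusing files.
    return uncompressed_size <= limit
```

A side benefit of compressing the whole bundle in one pass is that many similar small files (say, two hundred weapon sprites) share a compression context, which per-file compression can't exploit.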

The video above demonstrates the new algorithm versus the old file-by-file transfer on around 200 small files (weapon sprites). We were stunned by the improvement we saw after a test run of the new algorithm. At the default 25 KB/s transfer speed, the time to join decreased by a factor of about 4-5, and we expect an even steeper decrease at higher transfer speeds. We experimented with splitting the packages into smaller subpackages, but that approach didn't result in further improvements. It may, however, be prudent to include the splitting technique to guard against interrupted transfers.
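If we do keep the splitting technique for resilience, the idea would look something like the sketch below. The chunk size and the acknowledgement bookkeeping are assumptions for illustration, not the actual protocol:

```python
# Sketch of splitting a compressed bundle into fixed-size subpackages
# so an interrupted transfer can resume from the last acknowledged
# chunk instead of restarting. Chunk size is an assumed value.

CHUNK_SIZE = 16 * 1024  # bytes per subpackage (illustrative)

def split_bundle(blob: bytes, chunk_size: int = CHUNK_SIZE) -> list[bytes]:
    return [blob[i:i + chunk_size] for i in range(0, len(blob), chunk_size)]

def resume_from(chunks: list[bytes], acked: int) -> list[bytes]:
    """Return only the chunks the client has not yet confirmed receiving."""
    return chunks[acked:]

# After a dropped connection, the client reports how many chunks it
# received and the server resends just the remainder.
```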

We're still figuring this out and doing further experimentation for stability. You will likely see this implemented in 1.0.0.9, at first as one of the file transfer modes (selected by default) alongside the old file-by-file transfer, but eventually the legacy transfer algorithm may disappear entirely.

Published by

EngiN33R

Developer, linguistics enthusiast, amateur teacher. All opinions are my own.
