If you’re reading this, you’re probably using a version of the Transmission Control Protocol, or TCP, the system that regulates internet traffic to prevent congestion. It works, and it’s getting better all the time. But it was a system made by puny humans; surely our machine overlords can do better.

Yes, and possibly as much as two or three times better, say the MIT researchers behind Remy, a system that spits out congestion-control algorithms.

To use Remy, an Internet-goer plugs in answers to a few questions (How many people will use this connection? How much bandwidth will they need?) and picks a metric for measuring performance (Is throughput, the amount of data getting through, most important? Or is it delay, how long that data takes to travel?).
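To make that concrete, here is a minimal sketch in Python of what those inputs might amount to. Everything here is illustrative: the names (NetworkAssumptions, score) and the log-based way of folding throughput and delay into one number are assumptions for this example, not Remy's actual interface.

```python
import math
from dataclasses import dataclass

@dataclass
class NetworkAssumptions:
    """Hypothetical stand-in for the inputs Remy asks for (not its real API)."""
    num_senders: int        # how many people will use this connection
    link_speed_mbps: float  # how much bandwidth they will share
    delay_weight: float     # how much delay matters relative to throughput

def score(throughput_mbps: float, delay_ms: float, delay_weight: float) -> float:
    """Fold both metrics into a single number to optimize: more throughput
    is better, more delay is worse. The log form is one common choice; it
    is an assumption here, not necessarily Remy's exact objective."""
    return math.log(throughput_mbps) - delay_weight * math.log(delay_ms)
```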

The system then starts testing algorithms to determine which works best for your situation. Testing every possible algorithm would be impractical, so Remy prioritizes, searching for the small tweaks that produce the largest jumps in performance. (Even this “quicker” process takes four to 12 hours.)
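In spirit, that prioritized search resembles greedy hill-climbing: try small tweaks to a candidate algorithm, keep whichever tweak most improves the simulated score, and repeat. The sketch below is a hypothetical illustration of the idea only; simulate() is a stand-in for Remy's network simulator, and the parameter names are made up.

```python
def simulate(params: dict) -> float:
    """Hypothetical stand-in for Remy's simulator: scores a candidate
    algorithm's parameters. A real version would run simulated traffic."""
    return -sum((v - 1.0) ** 2 for v in params.values())

def greedy_search(params: dict, rounds: int = 100) -> dict:
    """Repeatedly try small tweaks to each parameter and keep whichever
    single tweak improves the simulated score the most."""
    best_score = simulate(params)
    for _ in range(rounds):
        candidates = []
        for key in params:
            for delta in (-0.1, 0.1):
                tweaked = {**params, key: params[key] + delta}
                candidates.append((simulate(tweaked), tweaked))
        top_score, top_params = max(candidates, key=lambda c: c[0])
        if top_score <= best_score:
            break  # no tweak helps anymore; stop early
        best_score, params = top_score, top_params
    return params

# Example: start from a deliberately bad guess and let the search improve it.
print(greedy_search({"window_increment": 0.3, "backoff_factor": 2.0}))
```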

The resulting rules that the system generates are more complicated than those in most TCP implementations, according to Remy’s inventors: while a typical TCP program operates on a handful of rules, Remy works out algorithms with more than 150 if-x-then-y rules. The simulation results sound impressive: doubled throughput and two-thirds less delay on a computer connection, and a 20 to 30 percent increase in throughput for a cell network, along with 25 to 40 percent less delay.
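Those if-x-then-y rules can be pictured as a lookup table: the sender observes the connection’s state, and the first matching rule says how to adjust its behavior. Here is a toy version with three rules keyed on round-trip time; the real generated algorithms use richer state and far more rules, so everything below is illustrative.

```python
# Toy rule table in the spirit of Remy's if-x-then-y rules. The real rules
# map richer connection state to actions, and there are 150+ of them.
RULES = [
    # (condition on measured round-trip time, action on the send window)
    (lambda rtt_ms: rtt_ms < 50,  lambda window: window + 2),   # link looks idle: speed up
    (lambda rtt_ms: rtt_ms < 150, lambda window: window + 1),   # moderate load: grow gently
    (lambda rtt_ms: True,         lambda window: max(1, window // 2)),  # congested: back off
]

def adjust_window(rtt_ms: float, window: int) -> int:
    """Apply the first rule whose condition matches the observed state."""
    for condition, action in RULES:
        if condition(rtt_ms):
            return action(window)
    return window

# Example: at 40 ms the window grows; at 300 ms it is cut in half.
print(adjust_window(40, 10), adjust_window(300, 10))  # -> 12 5
```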

But so far that makes Remy impressive only on paper. The researchers haven’t yet tested it on the wide-open Internet, which presents a whole new set of variables to account for. It may well turn out, as the researchers told PC World, that Remy simply gives people a new way to look at the problem rather than a solution in itself.
