A student recently decided to create an algorithm, similar to Google's, for indexing web pages. After several iterations and months of hard work, the code still did not work as well as Google's, so he decided to apply the fruits of his labor to a more restricted field: he limited his crawler to finding BitTorrent files. He then checked those torrents to eliminate the deadwood and the fakes, and put the ones that contained sufficient information up on a server. Here is his description of what he did and…
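As a rough illustration of the pipeline described above (not the student's actual code), the following minimal Python sketch crawls from a set of seed pages, collects links to .torrent files, and keeps only those whose metadata looks complete. The seed URLs, the looks_legitimate heuristic, and all function names are illustrative assumptions.

```python
# Hypothetical sketch of a crawl-and-filter pipeline for .torrent files.
# Everything here (seeds, heuristics, names) is assumed for illustration.
import re
import urllib.request
from urllib.parse import urljoin

SEED_URLS = ["http://example.com/"]  # hypothetical starting points


def fetch(url):
    """Download a URL and return its raw bytes (empty bytes on failure)."""
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.read()
    except Exception:
        return b""


def extract_links(base_url, html):
    """Pull href values out of a page and resolve them against the base URL."""
    hrefs = re.findall(rb'href=["\']([^"\']+)["\']', html)
    return [urljoin(base_url, h.decode("utf-8", "ignore")) for h in hrefs]


def looks_legitimate(torrent_bytes):
    """Crude heuristic: a bencoded torrent with an info dict, piece hashes,
    and a name is treated as 'sufficient'; anything else as deadwood or fake."""
    return (torrent_bytes.startswith(b"d")
            and b"4:info" in torrent_bytes
            and b"6:pieces" in torrent_bytes
            and b"4:name" in torrent_bytes)


def crawl(seeds, max_pages=100):
    """Breadth-first crawl that returns the torrent URLs worth publishing."""
    seen, queue, keepers = set(), list(seeds), []
    while queue and len(seen) < max_pages:
        url = queue.pop(0)
        if url in seen:
            continue
        seen.add(url)
        if url.endswith(".torrent"):
            data = fetch(url)
            if data and looks_legitimate(data):
                keepers.append(url)  # candidate to put up on the server
        else:
            queue.extend(extract_links(url, fetch(url)))
    return keepers


if __name__ == "__main__":
    for torrent_url in crawl(SEED_URLS):
        print(torrent_url)
```

In a real deployment the filtering step would be more involved (checking trackers, seed counts, and known-fake signatures), but the structure is the same: crawl, validate metadata, publish the survivors.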