Alo Sarv
lead developer

Developer's Diary

Friday, December 10, 2004

Range API rewrite complete, preparing for PartData rewrite

Just to get it out of the way: the new Range API. It isn't fully documented yet, but it should give an idea to those curious ones wanting to read code. As you can see, the new implementation is significantly smaller than the old range.h and rangepol.h. As mentioned earlier, I dropped a lot of things that went beyond the scope of the Range API's responsibilities, which simplified things a lot.

What I've been pondering about, however, is PartData. The current implementation of PartData has a multitude of problems that need to be addressed. Namely, it should be easy to learn and use, and it should provide protection mechanisms against programmer errors. The major source of errors tends to be forgetting to unlock/free used ranges, unlocking/freeing them twice, and so on. This could most likely be addressed by introducing Lock objects (similar to the mutex locks in most threading libraries), which free the ranges they refer to during destruction. What makes it complex, though, is that in threading APIs the locks are meant to be held for short periods of time, usually within a scope, and can thus be stored on the stack; in our case, we need to keep the locks active for long periods of time (e.g. until the data has been downloaded, or the source dropped).

Another (even more) important aspect of PartData is the choice of which parts to download. While PartData does not control downloading itself at all, it can indirectly steer it by giving out specific ranges, since that choice is up to PartData. The original system had a lot of flaws in that area, which resulted in heavy range fragmentation and incomplete chunks lingering until the very end of the download. The new PartData implementation must do better here.
It's very important to fully randomize the part selection (except perhaps for the first/last chunks), because if we download sequentially from beginning to end, or completely randomly (disregarding chunks), we cause large-scale problems on the networks if Hydranode is used a lot. For example, if we download sequentially, the ends of files become very rare, since people tend to un-share files after completing them. Alternatively, if we download random ranges, disregarding chunk-hash boundaries, we won't be able to share the hashed parts until near the end of the download due to chunk fragmentation (since we are missing pieces of the chunks, we can't hash them, and thus can't share them), which pretty much breaks a lot.

Madcat, ZzZz

PS: This blog engine also has comment capabilities (although you have to bear with a nagscreen asking for login/pass - but there IS a "post anonymously" link there), so - comments/thoughts are always welcome :)

I just heard about this project at the start of this week, and it looks very promising.
Every morning I read your Developer's Diary... it's great to know what problems you're facing and how you're dealing with them.
Still needing some free time to grab the sources and take a look -- I'm sure there's a lot to be learned from them.
Great work, MadCat. Keep it up.
Yea... You are right. Madcat rulez, and he is doing outstanding work, not counting how many days/hours/beers/nights he has spent on the project :)

Hydranode will rule the p2p apps, I'm quite sure of it.

I've got to say that all of this sounds very nice. I've read all documents that are available on your website.

Those documents are very interesting. Especially the documentation about the code structure and the ed2k-network are like a little treasure.

I'm reading your webblog every day and like it very much. It's exciting to read how hydranode is evolving every day.

About the project itself I'm a bit sceptical. The concept of Hydranode is well thought out, and the separation of the core functions and the network plugins is a step in the right direction. But I'm not really convinced by multi-network clients (Shareaza, MLDonkey, etc.). Very often they support a lot of networks but the implementations are very poor, due to the lack of devs and time. You can't test every network implementation as thoroughly as it should be tested. MLDonkey is a good example of such a multi-network client with problems.
Anyway, we'll see how you handle this problem.

I'm wondering how you are going to create the GUI for Hydranode because this seems to be another delicate point.
Generally a GUI should be intuitive and simple, so that new users can adapt to it in a short amount of time. But with the support of multiple networks, the task of creating a simple GUI becomes, in my eyes, very difficult.

Anyway keep up the good work but it seems to me like you should rest more :)

cya Skyw4lker

