Alo Sarv, lead developer

Developer's Diary

Tuesday, January 18, 2005

WorkThread API, hydranode log analyzer

Implemented the new generic WorkThread API, which can be used to submit arbitrary jobs to a secondary thread for processing. The main reason for this API is to localize multi-threading and, more importantly, to serialize disk I/O. For example, hashing a file while simultaneously moving another file to the incoming dir (in case temp/incoming are on different disks) would mean duplicate disk load, which slows everything down significantly. Any long-running task can now be performed in the secondary thread, and the API is open to modules as well. The API needs some more touches, but should generally stay as is. It is implemented in workthread.h / workthread.cpp, and the corresponding regress-test is located in the tests/test-workthread directory. (Sorry for no links; too tired to re-generate the doxygen docs and upload them right now.)

In other news, I was wondering why we're losing a lot of sources on ed2k. The main problem in investigating this is that there isn't any useful UI, so it's hard to track where and why sources are dropped. Eventually I resorted to writing a small shell script that parses hydranode.log (which quickly grows to several MBs of data) and gathers some statistics. Here's the output from one of my test runs (run the script in the same dir as hydranode.log, ~/.hydranode by default):
525 sources received from server(s).
Connection established to 200 HighID clients, dropped 59 (22%)
Connection established to 229 LowID clients, dropped 37 (13%)
Sent StartUploadReq to 352 (67%) sources
Received queue ranking from 313 sources (59%)
Received AcceptUpload from 23 sources (4%)
Received NoFile from 24 sources (4%)
173 (32%) sources lost before sending StartUploadReq
As seen, 32% of sources are still dropped. 22% of direct connection attempts failed (sources gone offline / changed IP?), and 13% of LowID callbacks failed (same reason?). You might notice that out of 352 requests we got 313 QRs + 23 accepts + 24 NoFiles = 360; the reason is that some of these sources re-sent AcceptUploadReq later during the download process, which causes this anomaly in the statistics.

In any case, <10% CPU usage while downloading from 20 sources at 60 kb/s in a full debug/trace build. It could use some more optimization later on, but generally that doesn't sound too bad. Now if I could only get the new PartData API going, we'd be set :)

Madcat, ZzZz

There are many companies paid to disrupt p2p traffic; if you're not using a filter list you might get lots of invalid or random sources (IANA reserved IPs etc.)

Blocklists in PeerGuardian format are available here

eMule ipfilter.dat format

There is no need to use the complete list; Anti-P2P, Merged IANA/Bogon List, Master Exclusions and fake MLdonkey 0.25 are enough

A system to update the block list is recommended, as it is updated often