Developer's Diary

Friday, September 30, 2005

Deploying ClientManager

Yesterday morning, the ClientManager code was checked into svn, with support for various views via hnsh, and integration with the ed2k module. Today the http module also implemented support for the BaseClient API. The hnsh 'vc' command can be used for the various views, although I had to disable the uploadqueue-for-file and uploadlist-for-file views, since hnsh lacks proper shared-files-listing support.

The next thing that needs to be added to that subsystem is the actual upload management, i.e. opening/closing upload slots and such; however, I doubt anything interesting will happen in that area before the beginning of next week - I'm taking the weekend off development, since some real-life topics need my attention.

Madcat.


Wednesday, September 28, 2005

Progressing on ClientManager

The BaseClient API is in its first draft state now, viewable here. Basically, the interface consists of three sections: virtual functions that derived classes can override to provide various information to user interfaces (client software, nicknames and such); an interface to get and set various state flags, such as requested files, connection state etc., which affect ClientManager lookups; and utility functions, which help reduce duplicate code across plugins (such as request generation). I'm also considering integrating A4AF handling into that class, but there are some caveats in that area that I'm not comfortable with (it might not be portable across networks). The corresponding implementation can be seen here.

The fun magic, however, starts at ClientManager. The public interface can be seen here. Here you can see "smart data structures and dumb code work a lot better than vice versa" in action - 80 lines to define the data structure, and then one-liners to perform lookups there. That large multi_index_container allows me to get various views on the clients listing, with queries such as:
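The actual definitions aren't reproduced here, but a minimal sketch of the pattern (illustrative names and indices, not the real ClientManager ones) looks something like this:

#include <boost/multi_index_container.hpp>
#include <boost/multi_index/ordered_index.hpp>
#include <boost/multi_index/mem_fun.hpp>
#include <boost/multi_index/tag.hpp>

class PartData; // a download in progress (hncore)

// Illustrative stand-in for the real BaseClient.
class BaseClient {
public:
    BaseClient() : m_connected(false), m_source(0) {}
    bool      isConnected() const { return m_connected; }
    PartData* getSource()   const { return m_source; }
private:
    bool      m_connected;
    PartData *m_source;
};

namespace mi = boost::multi_index;
struct ByConnected {};
struct BySource {};

typedef mi::multi_index_container<
    BaseClient*,
    mi::indexed_by<
        mi::ordered_non_unique<mi::tag<ByConnected>,
            mi::const_mem_fun<BaseClient, bool, &BaseClient::isConnected> >,
        mi::ordered_non_unique<mi::tag<BySource>,
            mi::const_mem_fun<BaseClient, PartData*, &BaseClient::getSource> >
    >
> ClientContainer;

// Once the structure is in place, the lookups really are one-liners:
//   clients.get<ByConnected>().equal_range(true); // all connected clients
//   clients.get<BySource>().equal_range(file);    // all clients for a download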
Additional indices can be added easily when needed, although I think the above covers all our needs (640kb ought to be enough for everyone, right?).

Madcat, ZzZz


Tuesday, September 27, 2005

Designing Client Subsystem

It's becoming clear that the Client Subsystem will be the most important addition to the core since the original completion of the hncore APIs; a LOT of code will be depending on this subsystem, so it's critical that it gets done right the first time, since writing it twice would take 5x the time (5 modules depending on it). Hence, I'm not rushing into coding it until I have a clear understanding of the subsystem's responsibilities, requirements and design.

The Client subsystem will be composed of two classes - a BaseClient class, from which plugins shall derive their customized Clients, and a ClientManager singleton, which will provide various lookups and views into the clients listing. Those two classes will need to serve three purposes:
  1. Reduce duplicate code across plugins in related areas, such as chunk requests.
  2. Provide information about clients to user interfaces.
  3. Provide various views of the clients listing (possibly spanning tens of thousands of objects).
The first bullet wasn't originally planned for the subsystem, but when I realized that the same chunk-request generation code has been copied verbatim from ed2k to bt to http, it became clear this must be generalized. A4AF handling, chunkmaps and similar should also be centralized into this BaseClient class to further reduce duplicate code across modules.

The second bullet is covered by const virtual methods, providing access to various client data, including nicknames, software, protocol and so on. Plugins simply override the virtual functions to return their information.
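A minimal sketch of the shape of that interface (method names are illustrative; the real draft is the one linked above):

#include <string>

// Illustrative sketch only, not the actual hncore header.
class BaseClient {
public:
    virtual ~BaseClient() {}

    // UI-visible data; plugins override whichever of these they can answer.
    virtual std::string getNick()     const { return std::string(); }
    virtual std::string getSoft()     const { return std::string(); } // client software
    virtual std::string getProtocol() const { return std::string(); }
};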

The third could be considered the hardest to implement. It will need either multiple different containers, or a single multi_index container, for the various views; the views must be kept up to date (which might impose some limitations on the BaseClient class API). It must be possible to look up all clients for a specific download, all connected clients, all clients with specific client software, all clients which have requested a specific file and so on. This will allow a lot of interesting and custom views that user interfaces can then implement. Also, it should be considered that the listing may contain tens of thousands of objects, hence any kind of looping over the listing is completely out of the question.

An iterator-based interface for the ClientManager class could be a nice and clean solution; however, I'm slightly concerned that if I expose multi_index_container and its iterators in the public interface, some of the testers on slower machines will be unable to compile Hydranode anymore - we ran into a similar issue some 4 months ago, when I was using multi_index containers in many places and exposed them in header files, and memory and time requirements during compilation skyrocketed. To add to the problem, the derived Client classes are most often the largest classes in a module (the ed2k one is 2500 lines, bt is 800), and they usually include a large set of hnbase and hncore headers already... Lately, I've used wrapper iterators (ed2k/downloadlist.h, DownloadList class), but this approach doesn't scale well. The alternative - using a combination of STL containers - is way too much of a nightmare; been there, done that, really don't want to go there anymore.
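For reference, the wrapper-iterator shape is roughly this (a header-only sketch with illustrative names; the method bodies, and the heavy container itself, stay in the .cpp, and the per-iterator allocation is exactly why the approach scales poorly):

// downloadlist.h - nothing heavy is included here, so user code
// compiles fast; cf. ed2k/downloadlist.h for the real thing.
class Download;

class DownloadList {
public:
    class Iter {
    public:
        Download* operator*() const;
        Iter&     operator++();
        bool      operator!=(const Iter&) const;
    private:
        friend class DownloadList;
        struct Impl;   // holds the real container iterator, defined in .cpp
        Impl *m_impl;  // allocated per iterator
    };
    Iter begin();
    Iter end();
};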

Edit: Actually, the above only affects the ClientManager class, not the BaseClient class; and ClientManager will only be needed by UI modules (hnsh) and cgcomm module(s), so it's acceptable to expose multi_index_containers in the public API. The BaseClient API will be very lightweight on compiler / compile times.

Madcat, ZzZz


Monday, September 26, 2005

Fixes and improvements

Yap, it's Monday evening (according to my schedule, anyway), and I don't have the new Client API yet. Instead, I have some fixes, some improvements, and some additional designs on the Core/GUI comm topic.

To get the usual listing out of the way, here it is:

Now, with that out of the way, we can get back to the interesting stuff. I realized I went all wrong with the cgcomm library - it was just hacked together w/o proper design. What must be done is a full API design for the library; it should act and feel like a direct link to the core, completely hiding the fact that the core is actually a separate process. Basically, it would duplicate much of the hncore API (but not hnbase), providing a proper iterator-based interface (we all love iterators, don't we? :)), signals on updates, and so on and so forth.

On top of that, we will have Qt Model classes (derived from QAbstractItemModel or similar), which wrap around the cgcomm library interface; and on top of that, we will have Qt View classes (QAbstractItemView and derived), which finally expose the entire thing to the user. This design fits nicely into the Qt Model/View Programming architecture, and decouples GUI handling from the data, allowing any of the components to vary independently; here we even have two layers - the cgcomm library, which allows user interfaces to vary, and the Qt Model classes on top of that, which allow views to vary (we can provide different views of the same data structure very easily).
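A minimal sketch of that middle layer, assuming a hypothetical cgcomm::DownloadList (the real cgcomm API is yet to be designed, so all names here are placeholders):

#include <QAbstractTableModel>
#include <QString>
#include <QVariant>
#include <vector>

// Hypothetical cgcomm-side types, for illustration only.
namespace cgcomm {
    struct Download { QString name; quint64 size; quint32 speed; };
    typedef std::vector<Download> DownloadList;
}

// Qt Model wrapping the cgcomm data; Views (QTreeView etc.) attach on top.
class DownloadModel : public QAbstractTableModel {
public:
    explicit DownloadModel(const cgcomm::DownloadList *list, QObject *parent = 0)
        : QAbstractTableModel(parent), m_list(list) {}

    int rowCount(const QModelIndex &parent = QModelIndex()) const {
        return parent.isValid() ? 0 : int(m_list->size());
    }
    int columnCount(const QModelIndex& = QModelIndex()) const { return 3; }
    QVariant data(const QModelIndex &idx, int role = Qt::DisplayRole) const {
        if (!idx.isValid() || role != Qt::DisplayRole) return QVariant();
        const cgcomm::Download &d = (*m_list)[idx.row()];
        switch (idx.column()) {
            case 0: return d.name;
            case 1: return d.size;
            case 2: return d.speed;
        }
        return QVariant();
    }
private:
    const cgcomm::DownloadList *m_list;
};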

On the GUI topic itself, I figured that since I don't have the designer resources right now to accomplish the most ambitious goals, I'll have to scale back the requirements and (at least initially) come up with a simpler UI that's mainly based on native/default controls; since all of hydranode is so damn modular and so on and so forth, we can swap out user interfaces, or parts of them, very easily at any point down the road, when better resources become available.

Madcat, ZzZz



Sunday, September 25, 2005

The never-ending GUI-topic

I happened to read a few nice GUI-design-related research / blog posts a few days ago, so I took some time to look over our past and present GUI concepts, and came up with some new ones as well. One of the fundamental ideas is understanding what a user does with a P2P client. What is the most frequent activity the user performs when interacting with the interface? For example, in the case of a web browser, the main activity and focus is the content (hence the content page in a web browser is the largest area of the UI). In the case of a P2P client, 95% of the time the user interacts with the UI, he's searching. All else is secondary - watching your downloads progress, releasing files, or looking at statistics - those are all mainly passive activities, but searching is the main active activity the user does with the UI. Thus, the UI should focus on making searching very easy and accessible. All kinds of searches should be possible - p2p networks, ftp sites, local media library, web searches, rss feeds etc.

Another thing I realized is that GUI standards on Windows have risen considerably over the past few years. A few years ago, a UI made out of native controls was completely acceptable, while today it's completely out of the question. It's not about skinning support directly (although I assume it's a bonus), but rather about creating all kinds of new controls based on old ones. Heh, even in Firefox you see a few custom controls (the Google search box, for example). Icons also have a big role in modern UI designs. I've tried to create some test UIs w/o icons, and they look really ugly until you throw in some custom colors or icons.

All in all, it seems Photoshop skills are essential to the modern programmer, at least at a minimum level, to be able to visualize concepts (perhaps to give to designers for improvement). In the case of a self-contained programmer who has no designers around for whatever reason, master-level Photoshop skills seem to be needed to create modern applications. Let's hope it won't be the case with this project, otherwise we'd just lose 3-4 weeks while I learn Photoshop...

Anyway, I'm near to completing the Client / Upload / Source-management API design; there are a few quirks still not figured out, but I hope I can get to implementing it on Monday at the latest. Ideally, I'd like to ship the 0.2 version around October 15th, with BT support, and then move the focus completely to GUI things.

Madcat, ZzZz


Friday, September 23, 2005

Testing

It's been mainly a testing day, focused on the win32 port. From what I can tell, everything works on win32 as well as it does on Linux. Interesting note though - hydranode uses around 2-3 times less CPU on Windows than on Linux. And that's when built with GCC - rebuilding with MSVC adds another 30+% performance improvement :).

A thing about those speed-o-meters... we'll need to lower the resolution - they're too sensitive for anything right now. Currently they're set to 100ms precision with 1-sec history, but this results in a lot of rate fluctuation, since in the real world network transfers are never stable - you get 5kb now, then 2kb half a second later, and so on - so basically the trick is to average everything down to some level where it "stabilizes". For example, the statistics lines printed every 10 seconds (yes, you guessed it - 10-sec averages) show rather steady rates, while the rates on the "statusbar" fluctuate wildly. Anyway, I think we should go somewhere along the lines of 2 or 3-sec averages, with 50ms precision. This would mean 20*3*4 = 240 bytes of history data stored for each speedmeter - no mentionable overhead.
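As a sketch of the math, a 3-sec / 50ms meter could be as simple as this (illustrative, not the actual SpeedMeter class):

#include <vector>
#include <numeric>
#include <stdint.h>

// 60 buckets of 50ms each = 3-second window; 60 * 4 bytes = 240 bytes history.
class SpeedMeter {
public:
    SpeedMeter() : m_history(60, 0), m_pos(0) {}

    // advance to the next 50ms bucket (driven by the event loop timer)
    void nextTick() { m_pos = (m_pos + 1) % m_history.size(); m_history[m_pos] = 0; }

    // record bytes transferred in the current bucket
    void addData(uint32_t bytes) { m_history[m_pos] += bytes; }

    // average rate in bytes/sec over the whole 3-second window
    uint32_t getSpeed() const {
        uint64_t sum = std::accumulate(m_history.begin(), m_history.end(), uint64_t(0));
        return uint32_t(sum / 3);
    }
private:
    std::vector<uint32_t> m_history; // one bucket per 50ms
    size_t m_pos;                    // current bucket
};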

Other than that, I've been thinking about how to design/implement the client/upload management in the core. Basically, first we need some kind of BaseClient structure that will incorporate various details about the client (network, ip address, client software et al). This will also be visible to user interfaces. Then we need some kind of UploadManager, with which modules wishing to upload to other peers register themselves, and the UploadManager will then open up upload slots (based on module priorities, current transfer rates et al) as needed. UploadManager will also need some kind of method for closing upload slots (when there are too many of them, or dead slots, or similar). DownloadManager, in turn, will need to act as a global cross-reference table for file <-> client (again, needed by user interfaces). Then again, we might just merge UploadManager and DownloadManager into a central ClientManager, which handles everything (in the core).

Madcat, ZzZZ

PS: Updated builds are available at http://hydranode.bytez.org/r1942/
Edit: The win32 zip was broken, and has now been re-uploaded. Re-download if it failed to unpack.


Thursday, September 22, 2005

SpeedMeters, fixes and allocation moved to io-thread

Yesterday ... well, that wasn't a good day. Everyone knows one shouldn't drink and code. And everyone knows premature optimization is the root of all evil. And also, everyone knows that programmers are the last people to know about the bottlenecks in their apps. And yet everyone still ignores those simple rules. That's exactly what I did the night before as well - namely, the Scheduler optimizations and the getTick() optimization. The getTick() one broke the hnanalyze.pl script (and caused chemical to almost lose his entire statistics history); the Scheduler patch introduced a huge memory leak (~60mb leaked in 12 hrs), at least two segfaults, and gave a near-zero performance boost. In addition to that, the added FileRating support in searchresults was broken as well; the reality is that servers just send the tag right now, but no clients yet provide this data to servers, so the servers just send nulls. Sigh.

Recovering from that setback, and starting fresh today, here are the new accomplishments of tonight's coding session:
Madcat, ZzZz


Wednesday, September 21, 2005

Why don't people listen to other people?

Duh, last night's patchset broke stuff. It's strange really - everyone knows premature optimization is the root of all evil, and that programmers are the last people to know where the bottlenecks are in their apps, and STILL they go optimizing. *sighs*. Bottom line - I reverted the getTick() patch and the scheduler "optimization" - the latter broke way too much and didn't provide the expected amount of speedup.

No other interesting news though; I had a network outage for most of the night, and it's hard to dev a p2p client w/o net :)

Madcat, ZzZz


Tuesday, September 20, 2005

Preparations for next feature-set

The blog entry posted on Sept 11 outlined the current TODO listing. Today, we're near to completing it - PartialTorrent internals were redesigned a few days ago, chunk requests are re-arranged near the end of a download (although some optimizations could still be made there), and today's latest addition is a generic SpeedMeter class, which hides the speed calculation behind an easy API. Interface, Implementation.

Now, before we can add the SpeedMeter to each and every socket (and thereby create a signals/slots system to connect socket speeds to files/clients/plugins), we need some optimizations in the Scheduler to compensate for the added complexity. Two specific optimizations were made: UploadReq / DownloadReq objects (internal Scheduler objects) are now reused, instead of re-constructed on each request; and a clock_gettime() variant of Utils::getTick() was added, which seems to be slightly faster than raw gettimeofday() calls (on Unix).
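The clock_gettime() variant boils down to something like this (a sketch; the real one lives behind Utils::getTick()):

#include <time.h>
#include <stdint.h>

// CLOCK_REALTIME is the direct gettimeofday() equivalent; CLOCK_MONOTONIC
// would additionally be immune to wall-clock adjustments. Needs -lrt on
// older glibc.
inline uint64_t getTick() {
    timespec ts;
    clock_gettime(CLOCK_REALTIME, &ts);
    return uint64_t(ts.tv_sec) * 1000 + ts.tv_nsec / 1000000;
}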

The next steps now are to add the SpeedMeter to each socket, and to add a getSpeed signal to the PartData, SharedFile and ModuleBase classes, to which various sources connect their SpeedMeters. For example, the Scheduler will be responsible for connecting all sockets belonging to a module to ModuleBase's getSpeed signal, and downloading clients will be responsible for connecting their socket (while data transfer is in progress) to the corresponding PartData et al.

When that's done, we get to a somewhat more complex area that has been ignored until now: namely, Client management, and a generic interface for Upload management. Currently, clients and uploads are managed module-specifically, and they are not available to the core/gui comm module. What we need is a ClientList API in the core lib, which will make the client data available to user interfaces. If developed properly, it could (perhaps) also implement some kind of support for upload management (or at least simplify/generalize it). Because, for example, the current ed2k upload management is quite a mess, and I don't want to create another such mess in the BT module.

In related news, I added support for the filerating tag in ed2k searchresults (a lugdunum 17.6 feature, which will probably be included in eMule 0.46d). The value is an integer, and my tests showed it ranging from 0 to over 1000, so it's not yet known how to interpret the value properly. But I guess we'll see as soon as eMule 46d is released - for now, I'm just displaying it in hnshell as a raw integer.

Other miscellaneous changes tonight include a StopWatch::reset() method, the addition of the current ID to the ed2k/serverlist/stat command, and a fix for asking for sources on newly-added downloads (which got broken a few days ago with the pausing/stopping patch).

Madcat, ZzZz


Saturday, September 17, 2005

Minor updates

There have been some distractions on the Real Life side that prevented me from spending vast amounts of time on this project, so not many updates during the past days (since the last post). However, there have been some minor ones.

Cyberz found that he can use my ed2k parser engine for Kademlia parsing, with a trivial change. It was originally planned that the ed2k parser engine would be generic enough to use with other networks as well, and this shows that that intention was (at least partially) achieved. All it took was moving the packet-header parsing code into a Policy class, and there we go.
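The pattern, roughly (illustrative names, not the actual parser engine):

#include <string>
#include <stdint.h>

// The template does the generic buffering/looping; a small policy class
// supplies the network-specific packet-header parsing.
template<typename HeaderPolicy>
class PacketParser {
public:
    virtual ~PacketParser() {}

    void feed(const std::string &data) {
        m_buffer += data;
        uint8_t opcode;
        uint32_t len;
        // the policy returns the full packet length, or 0 if more data is needed
        while ((len = HeaderPolicy::parseHeader(m_buffer, &opcode)) != 0) {
            onPacket(opcode, m_buffer.substr(0, len));
            m_buffer.erase(0, len);
        }
    }
protected:
    virtual void onPacket(uint8_t opcode, const std::string &packet) = 0;
private:
    std::string m_buffer;
};

// Swapping the policy gives a Kademlia parser with no other changes:
// class KadParser : public PacketParser<KadHeaderPolicy> { ... };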

On the bittorrent side, I re-wrote the virtual file wrappers implementation, now based on the RangeList engine. The resulting code is a lot cleaner and less error-prone (and possibly even slightly faster). See for yourself.

Among other things, the above patch fixed a rather bad bug in BT downloading - namely, the last chunk hash for each file was actually done AFTER the file was completed (after the final rehash, but before moving - IF the transfer rate was high enough). This no longer happens.

Madcat.


Thursday, September 15, 2005

Downloads pausing/resuming, and more

Since I have to wake up in 3 hours to resume coding, I'll be brief tonight. A rather usual fix / improvement set this time, with some new features. The list:
Why the last two? That code hasn't changed in a long time, but as it turns out, it seems to be the strongest generic component we have in the hnbase library. It is already used in PartData for completed/locked/verified et al range bookkeeping (the original purpose), for the PartData chunk-map components, and for IPFilter management (an ip address is really just a 32-bit integer). And now, I figure I can use those classes for one more purpose - BT file bookkeeping. That would replace the current, hand-crafted (and rather buggy) handling of sub-files in a torrent. The current implementation uses a map of files, keyed by begin offset, and does some really weird lookups there. However, I figure if I wrap a sub-file into something like this:

class InternalFile : public Range64 {
public:
    InternalFile(uint64_t begin, uint64_t end, SharedFile *file)
        : Range64(begin, end), m_file(file) {} // assumes a Range64(begin, end) ctor
    SharedFile* getFile() const { return m_file; }
private:
    SharedFile *m_file; // the sub-file occupying this byte range
};

Then this allows me to wrap the above into a RangeList, which lets us simply do m_children.getContains(offset) to find which file an offset belongs to. Cleaner code, fewer bugs, and all is happy :)
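Usage would then be along these lines (hypothetical - the exact RangeList / getContains() signatures may differ):

RangeList<InternalFile> m_children;

SharedFile* findFile(uint64_t offset) {
    InternalFile *f = m_children.getContains(offset); // which sub-file owns this offset?
    return f ? f->getFile() : 0;
}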

Madcat, ZzZz


Wednesday, September 14, 2005

Request management isn't fun

Well, I'm trying to solve the request-management problem. The thing is, near the end of the download you run out of requests, and then have two options: request the same data from multiple clients, or drop the client (that ran out of requests) completely. Naturally, dropping clients results in the last chunks of the file coming from some 28.8kbps modem, which isn't good. However, when requesting a chunk from multiple clients, you also need to start canceling requests as the data arrives, otherwise you just waste everyone's bandwidth on data you don't need anymore.

The idea was simple: whenever clients make requests, push them into a central requests structure in the corresponding Torrent, in a sequenced manner, so that the first request pushed is the bottom-most request in the structure. Now, when a client runs out of requests, it just takes the bottom-most request in the Torrent (since that request has been waiting unanswered the longest), and sends that request instead.
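In code, the idea really is about this simple (an illustrative sketch, not the actual Torrent class):

#include <deque>
#include <stdint.h>

struct ChunkRequest { uint32_t index, begin, length; };

class Torrent {
public:
    // every request a client sends is also recorded here, in order
    void recordRequest(const ChunkRequest &req) { m_requests.push_back(req); }

    // a starved client re-issues the oldest outstanding request; it is
    // rotated to the back so it stays tracked until the data arrives
    // (caller must check that the queue isn't empty)
    ChunkRequest takeSharedRequest() {
        ChunkRequest r = m_requests.front();
        m_requests.pop_front();
        m_requests.push_back(r);
        return r;
    }

    // once the data arrives, drop the request (and Cancel it elsewhere)
    void completed(const ChunkRequest &req) {
        for (std::deque<ChunkRequest>::iterator i = m_requests.begin();
             i != m_requests.end(); ++i) {
            if (i->index == req.index && i->begin == req.begin) {
                m_requests.erase(i);
                break;
            }
        }
    }
private:
    std::deque<ChunkRequest> m_requests; // oldest request at the front
};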

At the idea level it all sounded nice, until I got to implementing it. Now we have a whole bunch of new problems: fast clients cause the shared requests to rotate too quickly, which means they compete with each other for the same requests. And to make it worse, as soon as I send a single Cancel message to a client, it stops sending any new chunks at all - it sends the chunks that were requested prior to the cancel message, but requests made after that remain unanswered.

So far, my attempts to understand the Azureus source code, or find any kind of logic or structure in it, have failed (and Azureus accounts for something like 95% of BT users), so I'm kinda lost (again). The Shareaza and libtorrent code didn't give any hints toward that end either.

Madcat, ZzZz


Sunday, September 11, 2005

TODO listing ...

Hey, stop yelling at me for missing blog posts (I know you're doing that, I can feel it). I'm still only human, and sometimes need some time off as well.

Anyway, I've compiled a list of things that need to be done in the short term. Hence, for your viewing (and commenting) pleasure, here's the most current TODO listing (in no particular order):
And to add to that, I have to come up with some more miracle optimizations, because some people insist on running hydranode on their 120mhz Pentiums and complain that hydranode is using 60% cpu :o

And then I have a ton of ideas related to ... well, stuff, that I can't (or won't) disclose currently, but I assure you, you'll see them soon enough...

Madcat, ZzZz


Friday, September 09, 2005

Finally fixed a very annoying off-by-one bug

God, I hate those off-by-one bugs. For the past several days, we've had a problem with 1-byte gaps in BT downloads. As I finally found out, the bug was in the BT module (as I expected), in the virtual files wrapper; namely, when data was written to a child object, the parent object's completed map was incorrectly updated (off by one), which resulted in the broken behaviour.

Also, I reverted the IPV4Addr patch today - it caused more problems than it solved. The ed2k module does a lot of black magic with ips/ids, and that patch broke all of it; we don't have the time or resources right now to handle such things, otherwise we'd just end up hacking the same modules for years to come, w/o ever moving on. You can't make such a fundamental API change this late into the project.

The rest of the day went to hardcore memory debugging with valgrind - I discovered a small number of problems (two uninitialized variables and broken socket deletion), which were fixed.

Madcat, ZzZz


Thursday, September 08, 2005

Bugfixing

What did I say about ed2k being operational again after that ipv4addr mess? Wrong alert. In fact, I went as far as to completely revert the ipv4addr change, the rationale being that it broke too much (the ed2k module does a ton of black magic with ips/ids), that you can't make such a fundamental API change this long into the project, and that, besides, having the ip in network byte order in the ipv4addr class (as it was before) even makes sense.

A rather serious memory leak was discovered (and fixed) today as well - namely, disconnected sockets were never actually deleted! The thing is, sockets have private destructors, and are supposed to be deleted by the SocketWatcher (which performs the polling / event multiplexing). However, only connected / listening sockets are registered with the SocketWatcher. Hence, when user code did sock->disconnect(); sock->destroy();, the socket was never actually deleted, since it had been removed from the SocketWatcher prior to the destroy() call. *duh*. This is now fixed, and a socket object counter was added, which reports the number of live socket objects on shutdown. Prior to this patch, roughly half of the created sockets (and that means thousands) weren't deleted; now it's down to ~10-30 lost sockets, a considerable improvement. How much effect, if any, this has on long-term memory usage is yet to be determined.
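The shape of the fix, roughly (an illustrative sketch - the real SocketWatcher API differs):

class Socket;

class SocketWatcher {
public:
    static bool contains(Socket *s); // is the socket still being polled?
    friend class Socket;
};

class Socket {
public:
    // deletion always goes through destroy(), since the destructor is private
    void destroy() {
        if (SocketWatcher::contains(this)) {
            m_toDelete = true; // watcher deletes it after the next poll
        } else {
            delete this;       // watcher no longer knows about us
        }
    }
private:
    ~Socket() {}
    friend class SocketWatcher;
    bool m_toDelete;
};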

Madcat, ZzZz


Tuesday, September 06, 2005

BT improvements - downloading/hashing works now.

Due to an IPV4Address class API change a few days ago (the endianness setting for the ip field was changed), things broke in the ed2k module (as was expected), so yesterday went to debugging and testing the ed2k plugin against that change. From what I can tell, everything's operational again.

Today was a rather interesting and productive day. It started with cyberz's patch adding the Utils::timedCallback() method, which can be used to do standalone timed callbacks (formerly, an event table was needed for this). In the Hydranode shell module, the trace command was updated to be more informative and less confusing (thanks to chemical for pointing this out). Following that, there's a number of BT-related patches:
As a result, BT downloading (using the bget utility) works rather well - I was able to complete three out of ~12 mp3s in the test torrent file. There seems to be some kind of +1-offset problem; under some conditions, Locks are given at +1 offset from the UsedRange beginning (and it also seems they sometimes end at -1 offset from the end of the parent UsedRange), which causes 2-byte gaps near the end of the download.

Madcat, ZzZz


Monday, September 05, 2005

IPV4Address fixed; ChunkSelector optimizations

The IPV4Address class was fixed by cyberz; it now always keeps its internal data in host endianness, removing a lot of rather inconvenient handling of ip addresses in plugins. The BT module also saw progress: there were a number of issues in files.cpp (the PartData/SharedFile wrappers), forwarding calls were still broken, and the custom hasher class got a few fixes as well (although a lot of chunks still seem to fail verification). On the bright side, I did notice PartData claiming a few times that it successfully verified chunks... whether or not that is actually true is yet to be determined.

On the ChunkSelector topic - I realized that we don't have to write a new ChunkSelector after all - you can't really start writing everything from scratch w/o even trying to work it out with the existing solution. So, I threw a number of optimizations into the existing chunk-selector; they shouldn't be visible from the end-user perspective, but they were necessary to speed some things up. For example, when requesting a chunk for downloading from a range of chunks (the client is a partial source), prior to today this meant that a complete RangeList64 (a list of 64-bit begin/end offsets) was generated from the passed boolean list (where each boolean indicated whether the client has the chunk or not), and the actual checks were done against that RangeList64. Now that step is completely eliminated, and we do some simple integer math directly on the passed vector, which is considerably faster.
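The integer-math version boils down to something like this (an illustrative sketch of the idea, not the actual ChunkSelector code):

#include <vector>
#include <stdint.h>

// Does the source have every chunk overlapping the candidate byte range?
// begin/end are inclusive offsets; no RangeList64 is built anymore.
inline bool sourceHasRange(
    const std::vector<bool> &chunkMap, // which chunks the client has
    uint64_t begin, uint64_t end, uint64_t chunkSize
) {
    for (uint64_t i = begin / chunkSize; i <= end / chunkSize; ++i) {
        if (i >= chunkMap.size() || !chunkMap[i]) return false;
    }
    return true;
}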

Also, in the same area, we formerly didn't make any distinction between "partially downloaded" and "not downloaded" chunks; the former is a chunk that already has some data downloaded, while the latter hasn't been downloaded at all. This caused the first round of chunk selection (which prefers partially-downloaded chunks in an attempt to complete and hash them) to walk the ENTIRE chunkmap (74 steps for a 700mb ed2k download). This no longer happens, and a usable chunk is generally found within 1-4 steps. The above also caused some issues with the rarest-selector, which should also work more effectively now.

View entire change list

Madcat, ZzZz


Saturday, September 03, 2005

CPU usage fixed, ipv4addr is broken, but what about that chunkselector?

The performance hog was discovered - it seems the statusbar (the very last line in the console that displays speeds in real time) started consuming a lot more resources recently. This can be explained by the increased number of events being processed, since the statusbar was updated in real time (during event loops). Now the statusbar is only updated once per 100ms - fast enough to give accurate information, but no longer consuming so many cpu resources. CPU usage dropped roughly 2x (if the statusbar was enabled - this patch has no effect when running with the -b {background} or --disable-status flags).

Another issue identified today relates to the IPV4Address class - apparently it expects a host-endian port value, but expects (and stores) the ip address in network byte order. This causes a lot of strange messes (for example, I was parsing compact tracker responses last night {apparently not so optional after all - some trackers reject clients who don't support them}, and was wondering why on earth it was sending the ip in one byte ordering and the port in the other).

The new chunkselector(tm) isn't progressing though - I've given it a lot of thought today, but with little progress. There are a lot of issues to be addressed - performance, memory usage and effectiveness are the primary concerns, and balancing between those three is tricky. The ideal solution would:
The original approach that attempted to address these targets relied on per-byte chunks, i.e. each chunk takes 16 bytes of memory (all size management in hydranode uses 64-bit variables). However, for BT this could quickly lead to 80kb of memory being stored for a chunkmap (plus various overheads), which is unacceptable.

The really tricky business here is rarest-chunk selection. If we were dealing with a single layer, e.g. choosing the rarest from one layer only - for example, BT always choosing the rarest BT chunk, disregarding any ed2k chunk information - it would be simple. But what I'd like to achieve is that if a 9500kb chunk is completely missing on the ed2k network, the BT module would prioritize that one, and vice versa. This would lead to the most effective chunk selection in multi-network downloads, with each network selecting whichever chunk is rarest across all networks. How to actually implement this, and even further - implement it so that it scales well even with thousands of chunks - still eludes me.

Madcat, ZzZz


Friday, September 02, 2005

Debugging, and we need a new ChunkSelector(tm)

Yesterday, I finished testing some of the patches that might have had side effects on the ed2k module, namely changes in the PartData and SharedFile classes which were required by the BT wrapper classes (basically just making some methods protected/virtual, but also some function splitting and added signals). So far, no regressions have been detected, so the patches are now in SVN, and the bget utility should be fully compilable/runnable.

Today I spent quality time further debugging and improving the BT module. The .torrent file parser got some fixes; it now properly handles files in subdirectories, and recognizes various other hashes for the files as well (ed2k, sha1, md4, md5). Also, in files.cpp, the forwarders were rather broken in some cases, which was fixed today as well.

However, BT exposed a weakness in the PartData api, namely in the ChunkSelector(tm), which, as the name says, decides which chunk should be downloaded next. The problem is the performance of the ChunkSelector - it was built while implementing ed2k, and performed sufficiently fast for <100 chunks. However, in BT we're dealing with thousands of chunks - even a 150mb torrent with a 256kb chunksize is already nearly 600 chunks. Hence, we need a LOT faster chunk selector. And performance isn't the only concern - the new BT code also exposed that the chunkselector isn't actually working as well as was thought - the selection of chunks it makes isn't what I had in mind.

Also, I found one tracker that tells me to "upgrade your client". There's no info yet on what exactly it didn't like about my Client <-> Tracker communication implementation, but I'm guessing some things mentioned as "optional" in the protocol specs aren't so "optional" after all.

In other news, some of you might have noticed a considerable increase in CPU usage between the 0.1.1 and 0.1.2 releases (my personal tests show ~2x higher cpu usage in the latter version). I was somewhat concerned about it, but couldn't pinpoint its location. Today I did some tracing, and tracked the problem down to the write() syscall, whose share of time increased from 38% to 87% between 0.1.1 and 0.1.2. My first guess was the PartData::flushBuffer() method, most likely the recently-added explicit fsync() call in there, but that doesn't seem to be the case. So the issue is still open, but we're one step closer to actually getting it fixed.

Madcat, ZzZz

