Developer's Diary
irc.hydranode.com/#hydranode

Sunday, December 19, 2004

Taking a break until Jan 3rd

Ok, this is it guys. It's been seven months of 24/7 development (since May 15th), and we have accomplished a lot during that time. The overall codebase amounts to over 30'000 lines, and I've also received a lot of positive feedback on the documentation and so on. When we started, nobody expected this app to become this complex, nor this time-consuming. However, for each complex problem, a solution has always been found. But we forgot one thing - madcat is still only human. Fact is, during the past week I've felt completely empty, all out of energy (which explains the slower development pace last week).

After discussing this with our ProjectExpert(tm), we decided it was best if I take a short but complete break from all development, in order to regain the energy that once allowed me to show the development speeds you've all gotten used to by now.

As of this moment, this project is stalled until January 3rd. I'm completely throwing anything related to this project out of my head (ok, I'll try at least), and will be returning exactly on January 3rd, two weeks from now, to continue the project.

This also means I will most likely not be online on IRC during that time, but I can still be reached via e-mail if needed.

Madcat, signing off.


Friday, December 17, 2004

Deploying new EHS

Well, the transition to the new event system went rather smoothly - it took a few hours to convert the old code to use the new system, but no major problems occurred ... except for one.

For some reason, some of the submitted events never reached the handlers. After some tracking, I found out that multiple event tables for the same source types existed at the same time, and events were posted to one instance while handlers were registered to another instance. Further investigation showed that every source file that instantiates the EventTable template class singleton got a separate copy of the table, instead of referencing the same copy. At first it made sense, since what we're dealing with here are still templates, and it would make sense to get a local copy (although when I wrote the thing I thought the linker should take care of that). So there I am, rather pissed at the situation, considering possible solutions, and figured I'd test the engine on linux/gcc just to see if it behaves the same way ...

And what do I discover? That the engine works exactly as intended on linux/gcc - events are submitted to the right tables and correctly handled (i.e. handlers also go to the right tables). Now I'm REALLY pissed at MSVC - first its crap editor with no syntax highlighting, then the bloated MSVC environment ... and now this. *doh*

So anyway, I committed the new system to CVS for testing. It's not heavily tested on linux, but generally seems to be working. I'm currently researching possibilities for working around the MSVC issue, with one alternative being dropping MSVC support completely if the code runs ok on win32/gcc - I'm REALLY not motivated to work in a crappy environment with a crappy compiler for the ~5% performance boost that MSVC gives over mingw-gcc.
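For reference, the pattern that misbehaves looks roughly like this - my own reconstruction for illustration, not the actual event.h code:

// a singleton instance held in a function-local static of a class template;
// every translation unit using EventTable<Src, Evt>::instance() is supposed
// to see one shared object, but MSVC apparently emitted a separate copy per unit
template<typename Src, typename Evt>
class EventTable {
public:
    static EventTable& instance() {
        static EventTable table;   // in theory: one per instantiation, program-wide
        return table;
    }
private:
    EventTable() {}
};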

Madcat, ZzZz


Thursday, December 16, 2004

Some improvements in EHSv2

For quite a while I was cursing and fighting with MSVC, which didn't seem to perform too well at template function overload resolution. It handles it correctly in simpler cases, but fails miserably when things get complex. The exact example:
template<typename Source, typename Event>
void postEvent(Source src, Event evt) { ... }
template<typename Source, typename Event>
void postEvent(Source *src, Event evt) { ... }
For pointer types, the second version is considered "more specialized" by the standard, and should be chosen - which is done correctly by both gcc and MSVC. However:
template<typename Source, typename Event, typename T>
void addHandler(Source src, T *obj, void (T::*ha)(Source, Event)) { ... }
template<typename Source, typename Event, typename T>
void addHandler(Source *src, T *obj, void (T::*ha)(Source*, Event)) { ... }
Same situation, although more complex - gcc 3.4 handles it correctly, but MSVC gets confused and gives an ambiguity error :(

However, after numerous attempts to get past that (while avoiding explicit template argument specification for those functions - automatic parameter deduction was the entire point of those methods), I rethought the engine and came up with a significantly smaller solution. After throwing out roughly 150 lines of code from the API, it got simpler and much cleaner. Here are the files:
event.h
test-event.cpp

It handles smart-ptr-wrapped objects automatically now, without template specializations this time (EHSv1 used full specialization for that, which created duplicate code and was error-prone).
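The rough idea (a sketch, not the exact event.h code) is an extra overload that unwraps the smart pointer and forwards to the raw-pointer version:

#include <boost/shared_ptr.hpp>

template<typename Source, typename Event>
void postEvent(Source *src, Event evt) {
    // ... look up the table for (Source*, Event) and queue the event ...
}

// overload for shared_ptr-wrapped sources: unwrap and forward
template<typename Source, typename Event>
void postEvent(boost::shared_ptr<Source> src, Event evt) {
    postEvent(src.get(), evt);
}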

Event handler removal is still a rather grey area ... it can be done explicitly (as was done in the old implementation), however that's very error-prone - forget to remove your handler in the destructor, and you crash the app. The Boost.Signals library offers a partial solution - boost::signals::trackable-derived objects automatically get disconnected from the signals they are connected to ...
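In use that looks roughly like this - a minimal sketch with made-up handler/signal names, using plain Boost.Signals rather than the EHS wrappers:

#include <boost/signal.hpp>
#include <boost/bind.hpp>

// deriving from trackable makes every connection bound to this object
// disconnect itself automatically when the object is destroyed
struct Handler : public boost::signals::trackable {
    void onEvent(int code) { /* react to the event */ }
};

void demo() {
    boost::signal<void (int)> sig;
    {
        Handler h;
        sig.connect(boost::bind(&Handler::onEvent, &h, _1));
        sig(1);   // calls h.onEvent(1)
    }             // h goes out of scope - the connection is severed for us
    sig(2);       // safe: no dangling handler gets invoked
}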

Madcat, ZzZz



Wednesday, December 15, 2004

Introducing Event Handling Subsystem version 2 (beta)

'lo and behold - EHSv2 (beta) is here. For those curious ones, here's a set of links to get started:
Note: those are temporary links, don't store 'em anywhere

For those not so curious, the most important requirements for the new version of EHS have been fulfilled - it's completely decoupled from the event source, and is MUCH simpler to use. The test driver shows how events posted from within the object and from the outside world can mix w/o any problems, and it's also possible to post events of any type this way.

The new implementation is based on the Boost.Signals library as mentioned earlier, and is much simpler too - in the original implementation it took ages to get the main loops running ok, while in this case the gory details are handled by the respective Boost libraries (keeping in mind the 2nd commandment :P)
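To give an idea of the general direction, here's a heavily simplified sketch of a Boost.Signals-backed event table - it dispatches immediately and skips the main-loop queuing and smart-ptr handling, so treat it as an illustration rather than the real implementation:

#include <boost/signal.hpp>
#include <boost/bind.hpp>

template<typename Source, typename Event>
class EventTable {
public:
    // register obj->ha as a handler for (Source, Event) events
    template<typename T>
    void addHandler(T *obj, void (T::*ha)(Source, Event)) {
        m_sig.connect(boost::bind(ha, obj, _1, _2));
    }
    // deliver an event to all registered handlers
    void postEvent(Source src, Event evt) {
        m_sig(src, evt);
    }
private:
    boost::signal<void (Source, Event)> m_sig;
};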

I didn't get to test smart-ptr-wrapped pointers yet, but I believe they can be handled with this new approach (I already have a few helper methods towards that end in the implementation).

On other notes, I'm beginning to get the feel of win32-based development. The Visual Assist X extension for MSVC gives at least some syntax highlighting capabilities, although in complex code it still falls short of what Kate had to offer :(

Madcat, ZzZz

PS: Someone please tell me there's a free C++ to HTML converter available for win32? E.g. for doing HTML exports with syntax highlighting, for posting code on websites/blogs? Doxygen does it, but that means the entire doxygen output must be uploaded; CBuilderX does it, but includes its own name in the output (ok, fine, I can remove it, but that's not the point). Anything else?

PS2: *doh* For those who haven't heard already, check http://respectp2p.org ... 2 major ed2k sites taken down :(



Tuesday, December 14, 2004

Rewriting events

Madcat begins some strange incantations...
Madcat utters the words 'optimize desktop'
Madcat's spell backfires, and he squeals in surprise.
Madcat smirks as his desktop explodes in his face.
Madcat curses and swears for a long time.
Madcat mumbles 'god I hate when that spell backfires'


(Sorry, inside joke :P)

Anyway, I've been getting reacquainted with Windows IDEs and have been cursing and swearing at them most of the time. One IDE has a nice editor, another has better compiler integration, a third is simply perfect except for a bunch of bugs ... *curses*. Beginning to remember what it meant to develop things on win32 ... amazing how fast you forget this stuff.

So anyway, as noted previously, the major obstacle in hydranode/win32 is the event subsystem, which uses a 10-year-old library as its backend. So I headed down to the event subsystem and rewrote the main engine using a Boost.Signals backend. That was the (relatively) easy part - although it still needs more testing (and deployment over the existing codebase *cough* *cough*), that's nothing hard. What's complex is figuring out how to make the new system better than the old one (which was the second major reason for rewriting it anyway).

The main issue here is that I'd like to decouple the event source and the event table. In legacy event systems, event sources are tightly coupled with the event tables, and as such define how events can be processed. For example, the event source defines what kind of events it can emit. However, sometimes I don't want to emit the event FROM the event source class, but from outside. For example:
HashWork *hw = new HashWork(myfile);
HashWork::getEventTable().postEvent(hw, NEW_JOB);

In this case, the event isn't emitted from the HashWork class itself, but from somewhere outside. The current Event Subsystem implementation allows this, and the new one (already) allows it too; however, it's still not as flexible as needed, since the HashWork object in this case still needs to declare an event table with a predefined event handler prototype. What if I wanted to post a std::string type event instead? I couldn't, since HashWork's event table wouldn't accept the event.
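To illustrate - what I'd like to be able to write (the string content is just for the sake of the example), but can't with the coupled design:

HashWork *hw = new HashWork(myfile);
HashWork::getEventTable().postEvent(hw, std::string("doh"));
// ^ rejected - the table only accepts its predefined event type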

The idea is to use free functions for posting events and adding handlers, then use singleton event tables, which those free functions access, one per object type. This way, we can write:
HashWork *hw = new HashWork(myfile);
postEvent(hw, NEW_JOB); // instantiates the EventTable<HashWork*, int> event table and posts
postEvent(hw, "Doh"); // instantiates the EventTable<boost::shared_ptr<HashWork>, const char*> event table and posts

This is already achieved, however one last problem remains. How do we handle the situation where the object is wrapped in a Boost.SmartPtr wrapper? My current tests have shown that it's not achievable (in a generic manner) to use boost::shared_ptr-wrapped objects with event tables; boost::intrusive_ptr-wrapped ones, however, are theoretically possible. The problem then is, how do we mix events posted from inside the object with events posted from outside? Because when posting from inside, it would go to the EventTable<object*, EventType> table (since it uses this to acquire the pointer to itself), while when posting from outside, it would go to the EventTable<boost::intrusive_ptr<object>, EventType> table instead ... *ponders*
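The mismatch, spelled out (this illustrates the problem, not a solution; HashWork stands in for any intrusive_ptr-managed object):

boost::intrusive_ptr<HashWork> hw(new HashWork(myfile));
postEvent(hw, NEW_JOB);   // goes to EventTable<boost::intrusive_ptr<HashWork>, int>
// ... meanwhile, inside HashWork itself:
postEvent(this, NEW_JOB); // goes to EventTable<HashWork*, int> - a different table,
                          // so handlers registered on one never see the other's events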

Madcat.


Sunday, December 12, 2004

System reinstall ...

Not much to say - I finally decided to reinstall my development box. I'd been delaying it for ages, but some recent things forced me into it now. Anyway, I realized I've been getting too linux-biased in development orientation, which isn't what we want, since hydranode must run just as well on all platforms. So I spent all night reinstalling my system to be windows-based, and by now I have the basic stuff up, but I'm still missing a lot of dev tools and such.

In any case, I needed a change and refreshment - and radically switching between Linux and Windows every few months is very refreshing. I also needed to take my mind off the code issues for a day, or perhaps even two - every concept I've thrown at PartData handling so far has become overly complex (and I've tried several approaches already), and that's what I'd like to avoid ...

Madcat, ZzZz


Saturday, December 11, 2004

Ideas and requirements for new PartData

I've been thinking about what the new PartData internals should look like, as well as the requirements for it. It was discussed on IRC several times after the last blog entry, and by now I have a general understanding of what it should provide.

The most important aspect of PartData is choosing which ranges to download, as mentioned previously. It should always prefer the least-available chunk. A chunk, in this case, is defined as a part of the file for which we have a hash (so we can verify the data). This means introducing an availability-o-meter into PartData ... or perhaps even into SharedFile, because SharedFile would need this information too (later, during uploading - in order to better spread files, we'd only announce the rarest chunks we have at any time ... optionally of course). I went through several different concepts for storing source masks ... one idea is that we keep a map keyed on chunksize, and in that, keep a vector of integers. I would show the implementation, but the blog screws up the less-than / greater-than symbols needed to express template constructs :(

If we key the outer map with chunksize, the inner vector indicates the availability of specific chunks. Next, when a module asks us for a random chunk, we look the size up in the map, and if we recognize anything from there, choose the lowest-counted entry from the vector. When adding a source mask, we simply add +1 to the inner vector at every position the source has; when removing, the same thing reversed. The actual implementation needs to be somewhat more complex, since we don't want to perform linear searches through the vector all the time - Boost.MultiIndex library to the rescue - there we can do multiple different types of lookups on the same container.
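Spelling it out anyway, angle-bracket troubles notwithstanding - a rough sketch with made-up names; the real thing would sit behind Boost.MultiIndex as noted:

#include <map>
#include <vector>
#include <cstddef>

// outer key: chunk size; inner vector: per-chunk availability counts
typedef std::map<unsigned int, std::vector<int> > AvailMap;

// apply one source's chunk mask; delta is +1 when a source is added, -1 when removed
void updateAvail(AvailMap &avail, unsigned int chunkSize,
                 const std::vector<bool> &mask, int delta)
{
    std::vector<int> &counts = avail[chunkSize];
    if (counts.size() < mask.size()) {
        counts.resize(mask.size());
    }
    for (std::size_t i = 0; i < mask.size(); ++i) {
        if (mask[i]) {
            counts[i] += delta;
        }
    }
}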

This leads to the next thing - using/locking chunks/parts in PartData. Using this approach, we can easily mark full parts as "used", e.g. the ed2k module would "use" 9500kb parts at a time, BT would "use" [file-part-size] parts etc. The main logic would go roughly like this (a rough sketch follows the list):
  1. Select the lowest-available chunk that is incomplete. Prefer half-complete chunks over completely empty chunks (we want to complete a chunk before starting the next one).
  2. Check how much of it has been downloaded already, and how many times it has been marked "used" already.
  3. If the chunk's "used" count has already hit the cap (some constant, say CHUNKSIZE / 5 - i.e. allow at most 5 concurrent downloads of the same chunk), move on to the next-rarest chunk; otherwise grant the "use".
  4. PartData needs to keep references to granted chunks for bookkeeping, and some other fancy stuff (more on this later).
  5. Next up, for example in the case of the ed2k module, it splits the given part into 180k chunks and requests them, and also locks the first 180k chunk. After that it starts writing data ... same as before.
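Here's that sketch, stripped down to steps 1-3 (it ignores the half-complete preference and treats the cap simply as a maximum number of concurrent users per chunk - assumptions for illustration, not the final logic):

#include <vector>
#include <cstddef>

// pick the rarest incomplete chunk whose "used" count is still under the cap;
// returns -1 when every candidate is complete or already fully booked
int pickChunk(const std::vector<int> &avail,      // per-chunk source counts
              const std::vector<bool> &complete,  // per-chunk completion flags
              const std::vector<int> &useCount,   // per-chunk concurrent "use" grants
              int maxUse)
{
    int best = -1;
    for (std::size_t i = 0; i < avail.size(); ++i) {
        if (complete[i] || useCount[i] >= maxUse) {
            continue;
        }
        if (best == -1 || avail[i] < avail[best]) {
            best = static_cast<int>(i);
        }
    }
    return best;
}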
What we really want here is to make the "used" parts objects, and keep a pointer back to the granted parts. This is probably best implemented using the Boost.SmartPtr library, using shared pointers. Why is this important? Because:

PartData knows what parts it has given out. It also has access to those parts after giving them out. Now, if multiple sources start downloading the same part, what will PartData do? It'll split the first given-out part in half, and grant the second half to the second downloader. The splitting is done on the incomplete portion only. And when a third source wants to use the same chunk (also possible), split it again.

Now, at some point the first source will hit the end of its originally given chunk. In that case, it'll request a new part. Here's where it gets interesting. If we keep the construction time of all "used" chunks, we can easily calculate the download rate for those chunks. Based on that download rate, PartData can decide (without knowing anything else about the source) which sources are fast and which are slow. Now, if the one that completed the chunk was a fast source (relative to the others), and we don't have any other chunks to give to it (say we're near the end of the file), we'll just kick the slower of the remaining two and give the chunk to the fast source. This way we perform optimally when we have fast sources available, and slow sources don't block us when we're dealing with small files and/or file endings (we all know how painful the last <180kb can sometimes get in emule ... )
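One possible shape for the used-range-as-object idea - all names hypothetical, but it shows the shared_ptr ownership and the construction-time bookkeeping described above:

#include <boost/shared_ptr.hpp>
#include <ctime>

// a granted ("used") range; PartData keeps a shared_ptr back to each grant
class UsedRange {
public:
    UsedRange(unsigned long long begin, unsigned long long end)
        : m_begin(begin), m_end(end), m_created(std::time(0)) {}

    unsigned long long begin() const { return m_begin; }
    unsigned long long end()   const { return m_end; }

    // rough transfer rate for this grant: bytes completed / seconds held
    double speed(unsigned long long completed) const {
        std::time_t held = std::time(0) - m_created;
        return held > 0 ? static_cast<double>(completed) / held : 0.0;
    }
private:
    unsigned long long m_begin, m_end;
    std::time_t m_created;
};

typedef boost::shared_ptr<UsedRange> UsedRangePtr;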

Madcat, ZzZz

PS: Uhh... I write too much *doh*
PS2: Thanks for the comments on the last blog entry - it was really cool to wake up in the ... evening and find some positive feedback for a change :) I've been getting the "hydranode? bah, yet another mldonkey?" stuff all week *grr*. I mean no disrespect to mldonkey developers, but sadly, nowadays saying "core/gui separated, modular, p2p client" is almost equal to saying "yet another mldonkey clone", and we all know what the general public thinks of mldonkey :(


Friday, December 10, 2004

Range API rewrite complete, preparing for PartData rewrite

Just to get it out of the way -> the new Range API. Not fully documented yet, but it should give the idea to those curious ones wanting to read code. As you can see, the new implementation is significantly smaller than the old range.h and rangepol.h. As mentioned earlier, I dropped a lot of things that went beyond the scope of the Range API's responsibilities, which simplified things a lot.

What I've been pondering, however, is PartData. The current implementation of PartData has a multitude of problems that need to be addressed. Namely, it should be easy to learn/use, and provide protection mechanisms against programmer errors. The major source of errors tends to be forgetting to unlock/free used ranges, unlocking/freeing them twice and so on. This could most likely be addressed by introducing Lock objects (similar to most threading libraries with mutexes), which free the ranges they refer to during destruction. What makes it complex, though, is that in threading APIs the locks are meant to be held for a short period of time, usually within scopes, and thus can be stored on the stack - whereas in our case, we need to keep the locks alive for long periods of time (e.g. until the data has been downloaded, or the source dropped).
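To make the lock-object idea concrete, here's a rough sketch assuming hypothetical PartData lockRange/unlockRange members (not the current API); since these locks must outlive a single scope, they'd be handed around via shared_ptr rather than kept on the stack:

#include <boost/shared_ptr.hpp>
#include <boost/utility.hpp>   // boost::noncopyable

// hypothetical PartData interface, stubbed down to the two calls the lock needs
class PartData {
public:
    void lockRange(unsigned long long begin, unsigned long long end);
    void unlockRange(unsigned long long begin, unsigned long long end);
};

// RAII guard: the range is freed when the last reference to the lock goes away,
// so forgetting an explicit unlock (or unlocking twice) becomes impossible
class RangeLock : boost::noncopyable {
public:
    RangeLock(PartData &pd, unsigned long long begin, unsigned long long end)
        : m_pd(pd), m_begin(begin), m_end(end)
    {
        m_pd.lockRange(m_begin, m_end);
    }
    ~RangeLock() {
        m_pd.unlockRange(m_begin, m_end);
    }
private:
    PartData &m_pd;
    unsigned long long m_begin, m_end;
};

typedef boost::shared_ptr<RangeLock> RangeLockPtr;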

Another (even more) important aspect of PartData is the choice of which parts to download. While PartData does not control downloading itself at all, it can indirectly control it by giving out specific ranges, since that's up to PartData to decide. The original system has a lot of flaws in that area, which resulted in a lot of range fragmentation and incomplete chunks until the very end of the download. The new PartData implementation must do better here:
It's very important to fully randomize the part selection (except perhaps for the first/last chunks), because if we download sequentially from beginning to end, or completely randomly (disregarding chunks), we cause large-scale problems on the networks if hydranode is used a lot. For example, if we download sequentially, the ends of files will become very rare, since people tend to un-share files after completing them. Alternatively, if we download random chunks, disregarding chunkhash boundaries, we won't be able to share the hashed parts until near the end of the file due to chunk fragmentation (since we're missing pieces of the parts, we can't hash them, and thus can't share them), which pretty much breaks a lot.

Madcat, ZzZz

PS: This blog engine also has comment capabilities (although you have to bear with a nag screen asking for login/pass - but there IS a "post anonymously" link there), so - comments/thoughts are always welcome :)



Thursday, December 09, 2004

Rewriting Range Management Subsystem

As mentioned in an earlier blog entry, the key to finishing this app is simplifying. Writing the ed2k module showed where the flaws in the core design lay, and now it's time to fix 'em. You know a subsystem is doing its job, and doing it well, when you completely forget its existence (the same applies to any kind of computer software - or almost anything else for that matter). There are two subsystems that constantly remind me of their existence though - the range management subsystem and the event subsystem. Both are rather fundamental parts of the core and used heavily, so they need to be convenient.

Starting with the Range Management Subsystem, I spent yesterday and the better part of today figuring out the public interfaces and overall design of the new engine, and now I'm rather satisfied with how simple the thing became. The original system relied heavily on policy classes (hey, I had just learned the policy-class design concept when I first wrote it, and I clearly overused it there) - and policy classes change the type, so in the end I had to convert ranges from one policy to another manually everywhere - tedious, error-prone etc. The new engine gets away without any policy classes, and I also threw out a lot of things that were beyond the scope of the RMS - namely the FullRangeList concept (which did min/max border checking), and DataRange (which, in complex cases, also performed data buffer splitting/merging - with memcpy/malloc/realloc - *uff*). While it's not a target as such, I estimate that the new subsystem will be <500 lines of code including documentation, compared to the old 2000-line monster.
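Just to give a feel for the direction - this is my illustration of "no policy classes", not the actual new range.h:

// a plain value-type range; no policies, no type changes when converting
template<typename T>
class Range {
public:
    Range(T begin, T end) : m_begin(begin), m_end(end) {}

    T begin() const { return m_begin; }
    T end()   const { return m_end; }

    bool contains(const Range &r) const {
        return r.m_begin >= m_begin && r.m_end <= m_end;
    }
    bool overlaps(const Range &r) const {
        return r.m_begin <= m_end && r.m_end >= m_begin;
    }
private:
    T m_begin, m_end;
};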

However, while the public interfaces are figured out, I still need to re-implement most of the internals (the old subsystem's internals could partially be reused, but I believe I had some bugs in there - which resulted in the corrupt downloading capabilities we're seeing in CVS right now), so I'm writing the internals again from scratch.

After that's done, I need to review PartData and possibly simplify the systems there to make it safer/more usable too. If successful, the overall maintenance and future modules' development time should be lowered considerably by these improvements, so it's well worth spending time on right now.

Madcat, ZzZz



Tuesday, December 07, 2004

Simplification is the key

Today it finally hit me - the key to stabilizing this thing is simplification. Seems I forgot the 13th commandment: "Perfection (in design) is achieved not when there is nothing more to add, but rather when there is nothing more to take away." - Antoine de Saint-Exupéry. It's painfully obvious that this is what we need.

Following this rationale, I rewrote the Client::DownloadInfo chunk handling code, replacing it with a significantly simpler implementation - got rid of over 100 lines of error-prone code and replaced it with a nice simple 5-line function. Now downloading on the ed2k module side is pretty stable. Which, however, can't be said about the core side.

After fixing a few bugs in the range management subsystem (you know - the 1600-line header-only engine which is accompanied by a 1000-line regression test and acts as the backend for PartData), it's clear to me now that the PartData implementation is too complex and error-prone. It currently has SIX different internal data structures, storing various rangelists and things. Keeping all that in sync is a constant headache and highly error-prone, which explains why I haven't managed to stabilize it thus far. The entire using/locking mechanism needs to be reviewed and simplified to the point where it becomes maintainable again; I definitely over-engineered that thing.

Another thing that's hitting me again and again is the damn events. The event subsystem has been scheduled for a rewrite for quite some time now, but other things are already waiting for that rewrite - multi-handler calling isn't working with the current implementation. The interface also needs a re-design, since it's error-prone (forgetting to remove your handlers?), has non-intuitive syntax, and is too coupled with the underlying event object. The new implementation should be completely decoupled from the object an event table refers to - this is needed to allow smart-ptr-based event tables and generally more flexible usage.

You might think - rewrite events? You nuts? Isn't it the base of the entire app? Yes, it is. But that's the beauty of this thing - I can rewrite any part of it without much trouble. Another thing - you don't truly understand a problem until the first time you implement a solution. Most subsystems in hydranode have gone through at least two implementations before reaching their final versions - and events are still in their first incarnation.

On other news (besides the fact that I have a terrible headache again ... stupid winter), I figured the build system really needed an upgrade, so I delved into the GNU autotools system. The new system is now deployed, works and is DAMN cool :) Those using CVS versions, refer to the commit message for instructions on how to build it. Note that you now need GNU Autoconf 2.57 too (it doesn't work with lower versions). A note about debian woody - it has gcc 2.95 (which is unsupported) and autoconf 2.53 (also unsupported). While we could rather easily lower the autoconf requirement down to 2.53, it wouldn't help much, since gcc 2.95 would remain unsupported. So - those who have upgraded their gcc on woody can imho just as well upgrade the rest of the build tools, so I see no point in downgrading the autoconf requirement for now (unless someone gives me a good reason ... )

Also - if you have ideas for the hydranode GUI, feel free to drop them in this forum thread.

Madcat, ZzZz



Monday, December 06, 2004

Will it ever end?

*sighs*

*sighs some more*

I hereby grant thee the right to SHOOT me the next time I say "oh, let's write the ed2k module first, it's easy and I know the protocol".

Sure, yes, the protocol. Normally one would expect a protocol to be something that has been agreed upon as a common communication medium between two or more applications. What happens in the ed2k network, though, is that EVERY client speaks its own slang of the protocol. Sometimes only the word (i.e. packet) formats differ, but often even the sentence structures (i.e. the order in which packets are sent) differ.
I've run into quite a few such cases thus far. The result is that I have to write workarounds and compensate for every such situation, which takes ages, and every such workaround opens up more holes in the entire system (since more things need to be checked), destabilizing the system every time I think I've got it stable again. *pffft*

Today did bring its share of accomplishments, but all in all it feels like I'm throwing tons and tons of fixes into the ed2k communication system and nothing gets anywhere - like throwing fixes into an endless pit that never gets filled. The most frustrating thing is that if the protocol were consistent across clients, I would've been nearly done with the ed2k module by now, but ... *sighs some more*

Madcat, ZzZz



Sunday, December 05, 2004

Lots and lots of lots and lots

Uzzaa... well, now I'm beginning to like hydranode too ... ok fine, I liked it before too, but if something you've worked on for ... 7 months now ... is actually doing something and not crashing over every problem, you just have to love it :)

I'll get right to the point so I can get to sleep:
[Note]: Actually I had to disable the Event Subsystem side of multi-handler calling (which is required by this feature). Multiple handlers for events from the same source were in the original Event Subsystem design, but somehow got forgotten during implementation. And when I attempted to enable them in the current implementation, all hell broke loose, so I had to disable them for now. While it would be possible to work around the hell via temporary lists (similar to what is done in SocketWatcher - but there it's really needed), I don't think it's worth the effort anymore, since the Event Subsystem is scheduled for a complete internal reimplementation (which will also result in several API changes) in the short term (the win32 port requires a new Event Subsystem implementation, as we all know).

There has been a lot more activity on CVS than the above, of course, but I omitted the less important and/or partial changes from the above list. You probably don't know, but there's also an RSS feed available for CVS updates. I registered hydranode's CVS with the CIA Open Source Notification System a few days ago, and it provides an RSS feed (in addition to the cool CIA bot on our IRC channel, which provides real-time updates on CVS changes). The XML feed, as well as a bunch of CVS statistics, is available at http://cia.navi.cx/stats/project/hydranode. So if you're into RSS and want real-time notifications of CVS updates - that's the way to fly.

In theory it's possible to really download stuff with hydranode now, although it still needs more work. For those wishing to test it, here's what you do (in the hydranode shell, available on port 9999 by default):
hnsh$ search [string]
... search results come in, all numbered ...
hnsh$ download [resultnum]
HydraNode starts downloading the file. Temp files are stored in $(HOME)/.hydranode/temp, incoming files are put in $(HOME)/.hydranode/incoming. Note that both the search and download commands may be abbreviated down to s and d to save typing. The abbreviation code is experimental and probably not final, but it works to some extent in simple cases like this.

Madcat, ZzZz

PS: We just exceeded 29'000 source code lines today. Just 1000 more ... uuuzaa... :D



Saturday, December 04, 2004

Transfer rates speed testing

While the transfer code still needs some tuning, we have the results of the first transfer rate tests. In order to test the various engines' overall performance, I was transferring data between aMule and HydraNode on localhost, so as to figure out the maximum possible transfer rates, as well as to detect possible bottlenecks in the networking/scheduler/ed2kparser/partdata engines.

The maximum aMule -> HydraNode transfer rate I managed to achieve today was 1177 kbytes/second, which is slightly over 1 mbyte/sec. The speed was still rising at that point (aMule raises upload speed gradually), and I was at 122mb of the test file, but hydranode-side problems stopped me from testing any further. In the next few days, I hope to be able to test hydranode <-> hydranode transfer rates as well, to fully measure both the uploading and downloading code speeds. As a side note, CPU usage during the 1177kb/s data transfer was roughly 15%.

Madcat, ZzZz



Friday, December 03, 2004

New sources handling code, website upgrade plans

Yet another development day has reached its end. It's 5am, by now I'm rather drunk, and frankly I have no idea what to write in today's blog entry. But since this is still a "daily" diary, I guess I'll have to cook up something.

From the code point of view, a handful of upgrades went in, the new sources handling code among them. The tricky open issue is temp file reloading.
Regarding temp file reloading, the current problem is that we need to construct the PartData object passing the correct file size (it is used to initialize PartData's internal FullRangeList objects); however, at the time of loading a temp file from disk, we do not know the file size until we actually construct PartData, which in turn reads the file size from the reference file. This leads to the FullRangeLists being initialized with the wrong file size (detected from the actual size on disk - which is wrong), and breaks the entire thing. Adjusting the variables in the FullRangeLists after we get the right file size is not allowed by FullRangeList logic (resizing a pending download??). So we possibly need some PartData reference file changes to be able to read the size from the file before constructing the PartData object.

On other news, wubbla wanted to help with upgrading the main website (with the upgrades to the blog and forum, the main page now looks like crap compared to those two), and arlekin was already doing some gfx stuff for the web, so I brought them together. Hopefully they can come up with a cool new website, one being good with web coding, the other being a talented designer.

As for myself, I'm dead tired and require sleep, since I'm still only hum ... er cat.

Madcat, ZzZz


Thursday, December 02, 2004

Hm. Upgrades.

Well, I don't know what hit me, but I'm dead tired, and it's only midnight ... bah. Well, call it a day then. Today hydranode learned a few new tricks.
Other than that, I've been working on separating and handling the Client::DownloadInfo and Client::SourceInfo objects. While originally Client::DownloadInfo managed everything, now it will only be loaded when actual data transferring occurs, and the rest of the time only the Client::SourceInfo object is kept alive. This is cleaner, and saves a lot of memory when many sources are being handled. However, the new implementation is not yet complete (and I'm too tired to complete it today), so hopefully it can be deployed tomorrow.

Port news
The Windows port uses, as we know, MSVC. And the function objects library the Event Subsystem uses is too old and breaks on modern MSVC. While I originally thought this only applied to function calls into modules, it also breaks on function calls within the core. The Event Subsystem has been scheduled for a structural redesign anyway, to be based on Boost.Signals instead, so...

For the OSX port, module loading/handling seems to be working fine now, but a couple of problems remain.
On other news, it seems my eDonkey2000 Protocol Specification has drawn some interest - today it was linked from the eMule forums :)

Madcat, ZzZz

PS: HydraNode uses port 4663 for ED2K right now, to allow co-existence with existing ED2K clients. If you're getting a LowID with hydranode, open up TCP port 4663 in your firewall, or modify the TCP Port setting in the $(HOME)/.hydranode/ed2k/ed2k.ini file.



Wednesday, December 01, 2004

Welcome to the new blog :)

Well, here it is - the new and improved blog engine. This thing is powered by Blogger (by Google). It has a nice and cool WYSIWYG editor, which allows all kinds of cool formatting tricks. The old Nucleus engine broke the damn layout as soon as I tried using even simple things like preformatted text or lists (even though it allowed full HTML). So - hopefully the blog posts will now be nicer to read and so on and so forth.

The graphics in this blog's header and the forum headers were made by arlekin, who's hopefully becoming our official graphics designer. He's currently working on various hydranode logo concepts and other design/layout-related things.

On the code side, it's still mainly a bugfixing state - the downloading capabilities are stressing the existing components more than anything ever did before, and remember, those components had only been tested under lab conditions (i.e. the testsuite). So far the bugs I've encountered have been shallow and easy to fix, but it still takes time to stabilize the entire thing. I haven't added any fancy new stuff to the downloading code yet, but will, shortly. I've got several ideas on how to improve the related APIs, which you'll see in the next few days.

Until then, enjoy the new blog layout (which is now IE-compatible :D) and the new forum.

Madcat, ZzZz


