Madcat begins some strange incantations... Madcat utters the words 'raise compiler'. GCC rises from the ashes and compiles again! MSVC rises from the ashes and ... falls back to ashes.
Well, thanks to help from the Boost.MultiIndex library author, we tracked down the problem with some GCC versions crashing on partdata.cpp - all that was needed was a minor syntactic change, and voilà - it now compiles again on OSX, and should also compile with the reported GCC 3.3.5 on SuSE 8.2.
While on the topic - after some testing and some fixes (related to OSX's dynamic loader handling things differently), hydranode should now be 99% compatible with Mac OS X 10.3 (I don't have access to older OSX versions, so I don't know if it works there). There are some minor irritations - it seems some layouts are slightly screwed up in hnshell - but other than that, it works, downloads, uploads, searches - i.e. everything it does on x86. On second thought, I don't know how it really behaves on 64-bit Macs - native 64-bit support is planned to be implemented a few weeks from now, so right now, I just don't know ...
Anyway, on the general code side, the big topic has been timeouts. The thing is, when some sockets never become readable/writable again, hydranode doesn't do anything with them, so they just sit there, doing nothing. The problem is, soon we hit our (currently) hardcoded connection limit of 300, and then everything starts slowing down. Those "dead" connections would be detected if someone attempted to explicitly read from or write to them; however, since no events are emitted from them, nothing triggers anything, so ... I implemented full timeout support in the Socket API - client code can call setTimeout(milliseconds), and if no events whatsoever happen during that time, the Socket API emits a SOCK_TIMEOUT event.
I also implemented LowID callback timeouts, which had been planned for quite some time. This was yet another thingie that never got triggered - when we request a callback, we expect that sooner or later the remote client "calls back" to us. However, there are many reasons why this may fail, and the remote client never connects to us, so again we end up with "dead" clients hanging around. After implementing it, I ran into a different kind of problem though. Naturally, I implemented the timeouts using delayed events, which are emitted (and handled) by the Client class. However, when dealing with volatile event sources and delayed events, the problem arises that if a delayed event gets emitted after the source object has already been destroyed, we have a problem (e.g. a segfault).
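For illustration, the delayed-event approach could look roughly like this - a toy event loop with virtual time stands in for the real one, and every name here is invented. Note the lambda capturing `this`: that's exactly the dangling-source hazard the next paragraph deals with.

```cpp
#include <cstdint>
#include <functional>
#include <queue>
#include <vector>

// Invented sketch of delayed events for LowID callback timeouts.
struct Delayed {
	std::uint64_t due;                 // virtual time when it fires
	std::function<void()> handler;
	bool operator>(const Delayed &o) const { return due > o.due; }
};

class EventLoop {
	std::priority_queue<Delayed, std::vector<Delayed>,
	                    std::greater<Delayed>> m_q;
public:
	void postDelayed(std::uint64_t delay, std::uint64_t now,
	                 std::function<void()> h) {
		m_q.push({now + delay, std::move(h)});
	}
	// Fire everything whose deadline has passed.
	void runUntil(std::uint64_t now) {
		while (!m_q.empty() && m_q.top().due <= now) {
			auto ev = m_q.top();
			m_q.pop();
			ev.handler();
		}
	}
};

class Client {
public:
	bool calledBack = false;
	bool dropped = false;
	void requestCallback(EventLoop &loop, std::uint64_t now) {
		// If the remote never connects back within 60 "seconds", give up.
		// Capturing `this` is unsafe if the Client dies first!
		loop.postDelayed(60, now, [this] {
			if (!calledBack) dropped = true;
		});
	}
};
```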
To compensate for that, I implemented the Trackable concept, similar in many ways to boost::signals::trackable. When a Trackable-derived object is destroyed, it invalidates all pending events emitted from it. This applies not only to delayed events - it also includes events already sitting in the main event queue. So even if you post an event from your object's destructor (kinda stupid, but hey - who knows), it won't get emitted, because the source dies. The entire system is completely non-intrusive, optional, and implemented using compile-time type-checking algorithms, so it adds nearly no runtime overhead. Special thanks go to Xaignar (from the amule team) for providing some useful thoughts on this.
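A runtime sketch of the invalidation principle - the real implementation uses compile-time type checking, so take this only as an illustration of the idea, with every name except Trackable invented:

```cpp
#include <functional>
#include <set>
#include <vector>

class Trackable;
static std::set<const Trackable*> s_alive; // currently-living sources

// Deriving from Trackable ties pending events to the object's lifetime.
class Trackable {
public:
	Trackable()  { s_alive.insert(this); }
	~Trackable() { s_alive.erase(this); } // invalidates pending events
};

struct PendingEvent {
	const Trackable *source;       // who emitted it
	std::function<void()> handler;
};

static std::vector<PendingEvent> s_queue;

void postEvent(const Trackable *src, std::function<void()> h) {
	s_queue.push_back({src, std::move(h)});
}

// Main loop: silently drop events whose source died before dispatch.
void processEvents() {
	for (auto &ev : s_queue) {
		if (s_alive.count(ev.source)) {
			ev.handler();
		}
	}
	s_queue.clear();
}
```

This is how the destructor-posted event from above gets swallowed: by the time the queue is processed, the source is no longer in the alive set.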
Next up, we need some kind of Speed-o-Meter type class. The thing is, we need to calculate speeds in many places - for example, PartData would like to report its download speed, SharedFile might want to show its upload speed, the Client object might want to show its download/upload speeds, the Scheduler must keep track of global up/down speeds, and so on. The current speed-o-meter in the scheduler is somewhat flawed - when I upgraded it a while back from real-time to 100ms resolution, I left in one problem: when no data is transmitted during a 100ms period, it doesn't include the "nulls" in the calculations, so all our speed calculations right now are slightly higher than they should be - explaining why I haven't managed to get the 10s averages and 1s averages in sync. So this needs some thought on the implementation side.
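One way to avoid the missing-nulls problem would be to push a bucket on every 100ms tick whether data moved or not, so idle periods pull the average down as they should. A minimal sketch, with all names invented:

```cpp
#include <cstddef>
#include <deque>
#include <numeric>

// Hypothetical speed meter: fixed window of 100ms buckets, where idle
// ticks still contribute a zero bucket to the average.
class SpeedMeter {
public:
	explicit SpeedMeter(std::size_t windowTicks) : m_window(windowTicks) {}

	// Called from the transfer path whenever bytes move.
	void addBytes(std::size_t n) { m_current += n; }

	// Called once per 100 ms by a timer, whether or not data moved -
	// this is the part the current scheduler meter skips.
	void tick() {
		m_buckets.push_back(m_current);
		m_current = 0;
		if (m_buckets.size() > m_window) {
			m_buckets.pop_front();
		}
	}

	// Average speed in bytes/sec over the window (each bucket is 0.1s).
	double speed() const {
		if (m_buckets.empty()) {
			return 0.0;
		}
		std::size_t total = std::accumulate(
			m_buckets.begin(), m_buckets.end(), std::size_t(0)
		);
		return total * 10.0 / m_buckets.size();
	}
private:
	std::size_t m_window;
	std::size_t m_current = 0;
	std::deque<std::size_t> m_buckets;
};
```

With a 10-bucket instance for 1s averages and a 100-bucket one for 10s averages, the two would stay in sync, since both see the same zero buckets.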
PS: Sorry about missing last night's post... I was so busy handling some internal project stuff that I don't even want to go into discussing here .. that I completely forgot about the blog post - was so tired. I know there are many ppl reading this blog, so I'll try to avoid such "blanks" ...