Alo Sarv
lead developer

Developer's Diary
irc.hydranode.com/#hydranode

Sunday, December 11, 2005

A lot of useless ideas?

A few days ago, chemical pointed out to me (again) that the context-sensitive commands in hnshell sounded like a good idea at first, but are completely unusable in reality. The fact that you have to type four commands in order to change a server is simply stupid. For those readers who haven't used hnshell and/or haven't figured out how this works, here's what you'd need to do to change a server:
$ cd modules
$ cd ed2k
$ cd serverlist
$ connect Razorback
A similar command sequence is required to view the current server connection status. Yet initially, everyone was so excited about the innovative and cool idea of context-sensitive commands. Which brings up the point - many ideas sound cool and useful as ideas, but fail miserably in the real world and/or have very limited usability, so they affect only a very small part of the userbase. Unfortunately, Hydranode is full of such ideas. Basically Hydranode is a huge collection of such ideas, all stuffed together into a single application.

When big corporations start considering a feature or idea for a piece of software, they spend a lot of money on research before starting any development, simply because it's a lot cheaper to find out an idea isn't worth the time/effort/money during research than to spend millions on a feature that only 0.1% of the userbase actually uses. Yet in the open source world, this is not done, since the general direction of thinking is that software should include features for everybody. This is actually inherent in the fundamental idea of open source - everyone can submit patches - and developers rarely say no, resulting in feature/idea bloat.

Hydranode was started based on a lot of ideas - actually, all of Hydranode's ideas were developed / designed during the first 2-3 months of the project (summer 2004); since then no ideological changes have been made, and few, if any, new concepts have been introduced. Looking back, Hydranode was actually started because xMule (where I was maintainer at the time) was an awful codebase, lacked core/gui separation - which was, again at the time, the "holy grail" - and only supported one network. The ShareDaemon project was created to address those concerns, and after the failure of that, Hydranode was started, based on the same ideas plus quite a lot of new ones. Yet all those ideas are niche features which add very little value to the final software, while each one of them exponentially increases the development time.

Taking them one by one - cooperative multi-network downloads is perhaps the most fundamental idea of Hydranode. Or, on a wider scale, multi-network downloads in general. Yet how many successful multinetwork applications do you know? Both mldonkey and Shareaza have shown extremely bad network behaviour on all networks they support. It isn't because they suck at coding or lack the skills - the Shareaza code, for example, is very high quality. It's because the fundamental idea of a multi-network client is flawed at its roots. When you develop a single-network client, you spend all your effort to support that net, and it usually takes 4-6 months to become a considerably good client for ONE network, and about a year to win the market. In a multi-network environment (leaving aside the added complexity of multi-network handling itself), the effort is automatically split between multiple tasks, leading to inherently worse performance on all supported networks. It's not something the developers intend, but it's inevitable.

To add to that, the idea of true cooperative multinetwork downloads is also a very weak one. With the recent addition of the BT module, Hydranode is capable (under certain conditions) of downloading files cooperatively from ed2k and BT. Yet the number of torrents that actually carry hashes for both networks is tiny, and even if there were a DHT backend which supplied those hashes, the actual use for such a feature would be very limited - all networks are self-contained, and the fact that you can leech off two networks can actually be harmful for both networks involved, while giving very little speed / reliability increase. Yet another feature that sounded good as an idea but is useless in practice.

Another idea - core/gui separation. It has very little use for local usage (shutting down the GUI and leaving the core running - maybe some 10-15 people might find a use for it). And for remote usage, it only makes sense for people who actually have TWO computers at home, possibly one running some unix variant. How large a share of users actually has that? Considering that maybe 5% of the userbase actually use linux, and perhaps 0.5% use both linux and windows in a multi-computer environment... the remaining 99.5% of users suffer from the added performance penalty of inter-process communication. It is also a major hit on overall development time (the cgcomm code isn't a trivial task), it adds bugs (more code == more bugs), and, as practice has shown, it means it takes longer for the project to attract a user/tester base due to the lack of user interfaces in the early/middle stages of the project. Such large penalties for a feature that a VERY small group needs.

So based on that, isn't Hydranode just a big bucket of ideas that have little or no value at all?

Madcat.



Comments:
Hi madcat, you seem to be in a bad mood today:

well, let's point out 2 facts:

1) cooperative multinetwork downloads:
don't you think it's pretty hard to find someone today who shares a file in two places, if no application exists that supports it?
What should CERN have thought when they made the WWW?
Well, who is ever going to write an HTML page? Where are the pages??

2) core/gui separation:
is it harder to make? yes
is it harder to bring in testers? yes
is it cleaner, and later much easier to develop? I think yes
I develop all my projects with this separation.
Well, maybe not with interprocess communication as you do, but layering the application is the only way to make things manageable!
and in this I think you've done a wonderful job!

but in the end all of this is bullshit. What matters is:
is this idea worth it for you?
in open source, whoever develops is god :)
and we have a monotheistic religion here ;)

And I for one welcome our new "cooperative multinetwork downloads" overlord (ok ok, too much /. ;) )

Good work again Madcat!
 
Layering I agree with and am a strong supporter of; it is the only way to make things manageable, as you said. However, interprocess communication is what I'm debating here. The overheads, both in performance and in the potential amount of bugs, do not seem to be justified by the very limited usefulness of the feature.

Madcat.
 
(DISCLAIMER: I've never made a real program with interprocess communication, so maybe what I say is crap)

this is a question of testers...
whoever is here is probably not exactly a newbie, so the usefulness of this feature, especially remote control, is probably not small.

but in the end, the question is: with your design of a modular inter-network program, why did you choose this road?

for example Azureus has a lot of plugins; what road did they take to accomplish that?

if the simple reason is: it's cool and I love to play with cool tech...

well, if that made you do this wonderful work, it was worth it :)

any other choice would have made for less buggy code, because it would not have made you code at all ;)

(btw, I think that for a user, being able to crash the GUI but not lose his downloads and queue is not a small accomplishment... but again, I'm not a normal user.)
 
Must be one of those days today...

3 points - I disagree with all of them.
1. Context sensitive commands - I have used them on Cisco routers and loved them. The whole of Linux is basically like that from the command line. Do we throw it away? The key is shortcuts, tab auto-completion, aliases. A layered command then just becomes the equivalent of a normal command-line command with many switches.
2. Multinetwork downloads - come on! Hashes don't exist because who does that properly apart from the two you mentioned? Set the tone, Madcat, and soon we'll have them all over the place!
3. Core/GUI - you would have to separate them anyway. There is simply NO other way unless you want to end up with a monolithic mess like eMule. If you are worried about performance, who says we have to use sockets? We could use shared-memory IPC. In fact that's what Windows messages are, and, I believe, so are Windows localhost sockets (I don't know about Linux - perhaps there they are true networked sockets, but I doubt it). Worst case scenario: cgcomm and the lib could share an implementation where a global function delivers messages from one to the other as a normal method invocation = no speed loss (see the sketch after this comment).
GC
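
A minimal sketch of the idea GC describes, with invented names throughout (Message, Listener and Transport are placeholders, not the actual cgcomm API): the GUI talks to an abstract transport, so a local build can deliver messages by a plain method call while a remote build serializes them to a socket.

#include <string>

// Hypothetical message type; the real cgcomm protocol is different.
struct Message { std::string payload; };

// Whoever wants to receive messages (e.g. the core) implements this.
struct Listener {
    virtual void onMessage(const Message &msg) = 0;
    virtual ~Listener() {}
};

// The GUI only sees this interface, so it cannot tell which transport is in use.
struct Transport {
    virtual void send(const Message &msg) = 0;
    virtual ~Transport() {}
};

// Local build: delivery is a direct virtual call, no serialization or copies.
struct InProcessTransport : Transport {
    InProcessTransport(Listener *core) : m_core(core) {}
    void send(const Message &msg) { m_core->onMessage(msg); }
    Listener *m_core;
};

// Remote build: the same message gets serialized and written to a socket
// (socket code left out of this sketch).
struct SocketTransport : Transport {
    void send(const Message &msg) { /* serialize msg and write it to the socket */ }
};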
 
About context sensitive commands: I like them, they just need to be implemented smarter.

Instead of:

$ cd modules
$ cd ed2k
$ cd serverlist
$ connect Razorback

only this should be needed:

connect ed2k Razorback

or:

connect ed2k put-the-address-here

A shell command line is for geeks and unix people, but it's one of the reasons why I love hnsh! Am I a geek? Probably, but geeks need smart things like hn ;)

Think about intuitive context commands instead of context commands that are only there because they're cool :)

About the problem of supporting a lot of networks with only one being really useful: find good maintainers for each network, and develop the low-level stuff and do the fixing while those guys are being trained for HN development ;)
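
A rough sketch of how the flat form suggested above could be routed internally, using made-up names (Module, Ed2kModule and dispatch are illustrations only, not the real Hydranode module API): the shell splits the line, looks up the module named by the second word, and hands it the command and the remaining arguments.

#include <iostream>
#include <map>
#include <string>
#include <vector>

// Placeholder module interface; the real module API differs.
struct Module {
    virtual void command(const std::string &cmd, const std::vector<std::string> &args) = 0;
    virtual ~Module() {}
};

struct Ed2kModule : Module {
    void command(const std::string &cmd, const std::vector<std::string> &args) {
        if (cmd == "connect" && !args.empty())
            std::cout << "connecting to server " << args[0] << std::endl;
    }
};

// "connect ed2k Razorback" -> look up "ed2k", call its "connect" with "Razorback"
void dispatch(std::map<std::string, Module*> &modules, const std::vector<std::string> &words) {
    if (words.size() < 2)
        return;
    std::map<std::string, Module*>::iterator it = modules.find(words[1]);
    if (it == modules.end()) {
        std::cout << "no such network: " << words[1] << std::endl;
        return;
    }
    std::vector<std::string> args(words.begin() + 2, words.end());
    it->second->command(words[0], args);
}

With something along these lines, a single typed line reaches the right module directly, without the user having to "cd" into it first.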
 
1. cooperative multi network downloads

doesn't really work unless you have lots of bandwidth, as people forget that not only is this complex due to conflicting hashes, it also requires uploading to both networks

2. core/gui separation

just the memory savings of not having to run a GUI make this a good option, as well as being nice for people running the core on routers and dedicated unix/linux servers that don't have GUIs or that are hosted somewhere

3. Context sensitive commands

these just need aliases and shortcuts, as mentioned in previous comments; not all networks even need or support connecting to a server, for example

look at the mldonkey command-line UI for lots of network-specific commands
 
Maybe it is a little more complex to write this stuff the way you do, but it's simply better than anything I have encountered before when it comes to P2P.
I've followed the project ever since ShareDaemon was started and check this blog as often as I have the time. Until now I only watched and used it, but it seems someone really needs some cheering up now. :)
I would say that I'm kind of your fan - you're my programming 'idol', Madcat. Since I saw your code, I have had an idea of what really good coding style is. I am far from being a good coder, but I think I picked up one or two tricks from here. I sometimes wish I had some more skills, so I could help out here a little.
All I can say to the problems you mentioned: maybe the "average guy" doesn't have a use for some of the features of hydranode, but I know enough people that do not have "average needs". I can tell you: I've been searching around for a piece of software that fits my needs and never found anything better than Hydranode. And I don't think I'm the only one. I've been using hn for more than half a year, 24/7, with fantastic results, especially on my power bill (eMule on a 150 MHz laptop without a cooler does not work very well). I just love the project and hope it has a good future.
Greets

Majin Sniper

P.S. btw, I really didn't figure out how to change the server until today ;)
 
"So based on that, isn't Hydranode just a big bucket of ideas that have little or no value at all?"

I really don't think so. Hydranode is multi-platform, multi-network and has a separate core/GUI. You (and the users) just get lots of ideas. It isn't easy to realize them all, especially because there are currently not that many people working on Hydranode.
 
First off, you're right about many things you said, madcat. HN is a bucket of insane ideas not useful to many. I so respect what you're doing here and, as someone said, you're my programming idol too.
1) Multi-network download is pretty useless for gaining download speed. All the torrents I download are rared into small chunks. All the ed2k downloads I do are full length and most often uncompressed. It doesn't help with download speed. But I want it pretty badly in HN. One program to rule them all. One console I can start every download in.
2) core/gui separation
One console where I start the download in the evening on my server, shut down the main computer and let the silent laptop download what I want - ed2k, bt, http, everything. A must, and something I'm looking forward to pretty badly as well!!
3) I wasn't aware of the context sensitive commands until now. I always wondered what the cd command might do. I guess I'll start to take a deeper look into this.

Madcat, you're doing something great here. Every day you post in this blog and get like 0-2 comments. This post shows me that I'm not the only one reading the blog and being interested in HN. And this is great, as it shows there are a lot of people standing behind you and supporting you. If we can ever do something for you, just call out and we'll be there with whatever we can do.
 
Hi madcat, I've been reading your blog posts each day since mid 2004. I think you have embarked on a massive project here that will be the one P2P to bind them all. :-)

For me, this project is the one I'm waiting for to kick mldonkey off my server. I like the idea of core/gui separation, as it allows my ClarkConnect server to run 24x7, looking after my home network as well as P2P. The separation also promotes better P2P use: rather than firing up a windows client, downloading a file and shutting the client down, the core runs all the time, helping out other network participants.

Your idea of compiling both a pretty windows version and a linux version will cater for both types of users, and with the linux version becoming more user friendly, more people will have it installed as a core on a linux box.

Broadband "always on" connections around the world are increasing at a blindingly fast rate, and joe public is becoming more savvy with this technology and what it can offer. What I'm trying to say is that "always on" linux servers will increase, and Hydranode can end up in many of the standard packages.

You just need to contain all these ideas and get the core and a plug-in out there in a stable released form. Then incrementally introduce all the other ideas.

You are close, please don't lose the faith now!
 
First of all, thanks for the great work you are doing.
As was said above, your code is an example of good programming style for many people.
Please don't give up. I'm using hydranode mainly because it has a separate core/gui, and because of the multinetwork capabilities.
Many of my friends (not geeks) like these features as well.
The idea behind your project is brilliant: a client which can be used by normal users looking for user-friendly programs, but that can be enjoyed by geeks as well. So you are able to make hydranode a popular application and, at the same time, you can get a nice group of would-be beta testers (and hopefully developers).
I think context sensitive commands are a good idea; maybe they should be improved, but there is more urgent stuff right now.
Thanks again and best regards
 
Hello madcat,

I personally think that hydranode is the only usable linux/unix client for p2p networks.

mldonkey, which has the only other core/gui separation as far as I know, is lacking on the speed and usability side.

all other clients need an X server to run, which mostly isn't available on a computer running 24/7/365.

I think core/GUI communication does not need to be slow just because of the "overhead". I personally think that with the right caching functions for the communication it could be nearly as fast as without separation.

About the usability: even my wife is capable of using hydranode to download the things she wants; it is not difficult to explain that in order to download something via ed2k, for example, you need to type search mydownload and after that type do #.

For usability improvements it may be useful to introduce a feature like a path in hnsh where all the standard plugins are included. Additionally, for better usage I think you should enable all networks by default, because then you don't need to explain that you need http for downloading a torrent ;-).

Even if many users don't use mixed environments, most users have a dedicated "server" running where they use the p2p client of their choice. For all other programs you need some kind of VNC, which slows down the performance and the usability of the tool quite a lot, because you cannot define a central p2p/download manager for your whole network. That, IMHO, is the greatest usability advantage of hn over eMule once there is a gui available.

Keep up the great work!
 
Context sensitive commands are what we have in every shell; they are known (at least in linux shells) to be very powerful and fast.
/modules/ed2k/serverlist/connect Razorback
Including autocomplete makes things very comfortable. For this example we would only need to type 11 chars (including tabs) for the complete line.
For those for whom that would still be too long, user-defined aliases are imho the way to go.

Thx for this wonderful tool!
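
Roughly what the tab completion described above would have to do, sketched with an invented node type (nothing below is taken from the actual hnsh code): at each path segment, expand the typed prefix to the single child name that matches it.

#include <map>
#include <string>

// Invented tree node: each shell "directory" knows its children by name.
struct Node {
    std::map<std::string, Node*> children;
};

// Return the full child name if exactly one child starts with the prefix,
// otherwise return the prefix unchanged (unknown or still ambiguous).
std::string complete(const Node &node, const std::string &prefix) {
    std::string match;
    int hits = 0;
    std::map<std::string, Node*>::const_iterator i = node.children.begin();
    for (; i != node.children.end(); ++i) {
        if (i->first.compare(0, prefix.size(), prefix) == 0) {
            match = i->first;
            ++hits;
        }
    }
    return hits == 1 ? match : prefix;
}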
 
Wow... I didn't think there were so many people reading this blog... :)

nice to know I'm not the only one here who likes madcat's work ;)
 
Great work so far...
Why give up?

Hydranode is probably the only headless linux app that can do both emule and bt and not kill my computer.

Also, I think you should enable the http module by default.
 
I have chosen a multi-network client not to swarm downloads through the networks, but to get the advantage of a single application centralizing all the downloads and having overall bandwidth/connection limits.

The GUI and remote control are useful to guys or families having a single 24h/24, 7d/7 server up and running.

consider my mldonkey mulus's client being used by 6 guys, 3 internal and 3 external (friends from work with no connection at home), through ssh.

Advanced remote control is requested by administrators with particular competence, but you have to remember that small percentages, when the population is millions (of p2p users), mean tens of thousands of users who'll be happy.

Personally I think you only have to completely define and implement the GUI protocol, and let Sancho's developer implement it in his GUI. You'll see your beta tester base grow instantly.

Fabtar
 
Madcat, hydranode is a great application. I have learned lots about software engineering from you (and HN :)), and... it's not all about performance. As I understand you, the reason for being sad about HN's future is not technical?
Hey, see how many people are using it, and how many of us are actually beta testing it! Maybe the project needs some marketing :) something like uploading it to SourceForge. While downloading rare files, most of the time HN is the beautiful solution to the 4-5 p2p programs eating my RAM. I also don't much like waiting for hashing at startup when X needs a restart or the aMule GUI simply crashes :) But I can bear it... While Shareaza and MLDonkey also have problems, tons of people use them.
Nothing is perfect. Remember the old unix law about the 90% solution?
I send all my love to you, Alo...
Please don't lose the passion (and excuse my English :))
P.S. My girlfriend also likes hydranode for a strange reason: she is learning "programmers' English" from the blog :)) while downloading "the hackers way" in the shell. She is proud of you, being a non-native speaker!
 
Any program can be improved, no matter how good it already is; don't give up now with this good work, MadCat. Here are my ideas:

1) "Context sensitive commands"
I too think that they make the program harder to use.

With the shell: net ed2k up OR net ed2k down OR net all up OR net all reconnect OR net ed2k connect IP/ServerName, or something like that. With the GUI, just proper options that can be implemented in different ways; it doesn't matter which, as long as they're comfortable.


2) "Cooperative multinetwork downloads"
That should be up to the user. When on a slow connection, use a single network; with fast net access, various networks. With the core you may use some simple and intuitive commands, as anonymous said. GUI users may have an option with as many checkboxes as networks they have, and configure whether to connect automatically to new networks when new plugins are added. Some graphical assistant may be added for the first GUI run.


3)"Core/GUI separation"
As Madcat said before, the bigger problems are in Linux/Unix. Under Windows/MAC you can create a single interface for it, and you could bind everything in a single graphical file, and modules to use networks, web interfaces and remote administrations. Under Windows and MAC there is no problem, because always run in graphical interface. For me eMule is very good and is very transparent.

Maybe the question is: a bound core/GUI for all platforms, or for all except linux/unix?

Perhaps you could offer the windows/mac binaries with everything bound together, and offer a separate core/GUI for linux/unix users, because as you said there are different libraries and you would have to create various interfaces for the same linux (I hope this will be solved with Portland, the new desktop project). You hold the answer: depending on how much this would increase the development/testing time and effort, you may think it is not worth it and that hydranode is not going to work the same way for windows/linux.
 
No, it is not useless.
I'm one of those guys with 5 servers on 5 lines running 16h/365d.
I have had stuff online since 2000.
I'm the one you get the really old but interesting stuff from when all the "desktop GUI users" have loooong since deleted it.

And I'm _very_ interested in a gui/core separated multi-network client.
Not because it is faster to download, but because it is much easier to share on two or three p2p networks with one client.
So, *please* continue your fine work.
 