This document describes the packet scheduler used in HydraNode core. The scheduler performs fair packet queueing and bandwidth management between multiple networking modules with varying priorities.
Contents

1. Priority scores (PC)
2. Packet and connection scheduling
1. Priority scores (PC)
Every request, be it a connection or a packet, has a priority score (PC) attached to it. The base score starts out at 0. The lowest possible PC is -100, the highest is +100. Scores that exceed this range are truncated to fit within it.
The following modifiers apply to PC:
1.1 Module priority

This is set by the module itself upon loading, within the range -100 to +100. P2P modules are required to start out at PC 0; custom modules may use different scores. For example, hnshell will start out at PC 100 (since we want the shell to be fast).
1.2 Socket priority

By default, socket priority is set to 0, but it can be adjusted at runtime if needed. This functionality is provided for maximum customizability and flexibility - it is possible that a module would want a high-priority socket for some communication while using lower-priority sockets for other kinds of communication. This priority also ranges from -100 to +100. Thus a -100 priority module could at best get a 0 priority socket (-100 + (+100) = 0).
1.3 Packet priority

Individual packets can also be assigned a priority. This is not a free-form score - two types of packets are defined: data packets and overhead packets. Overhead packets get an extra score modifier of +10 to bring them slightly ahead of the others. Data packets are used for transmitting actual file data, while overhead packets are used for everything else. As a general rule, overhead packets are smaller and more useful in the overall picture (e.g. searching for more sources, responding to other clients' requests, etc.), so they should be slightly prioritized. However, this shouldn't affect them too much, otherwise we'll end up sending only overhead packets and no real data. Thus +10 seems like a good modifier.
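To make the interaction of the first three modifiers concrete, here is a minimal C++ sketch of how an effective packet score could be combined and truncated to the -100..+100 range. The names (PacketType, effectiveScore, clampScore) are illustrative assumptions, not the actual HydraNode API:

    #include <algorithm>

    // Illustrative sketch only - not the actual HydraNode API.
    enum class PacketType { Data, Overhead };

    // PC is always truncated to the -100 .. +100 range.
    int clampScore(int score) {
        return std::clamp(score, -100, 100);
    }

    // Combine the module, socket and packet-type modifiers described above.
    int effectiveScore(int modulePriority, int socketPriority, PacketType type) {
        int score = modulePriority + socketPriority;
        if (type == PacketType::Overhead)
            score += 10;          // overhead packets get a small boost
        return clampScore(score);
    }

    // Example: a -100 priority module with a +100 priority socket sending an
    // overhead packet ends up at clamp(-100 + 100 + 10) = +10.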
1.4 Usage ratio

The basic concept is to figure out how "useful" a module has been to us. Modules with a higher download/upload ratio should also get higher priority. Thus, first calculate what percentage of the total upload and total download bandwidth the module has used:
totalup% = module_upload_bytes * 100 / total_upload_bytes
totaldn% = module_dload_bytes * 100 / total_dload_bytes

This gives us two percentages that are independent of the actual amounts of data transmitted. If we subtract totalup% from totaldn%, we get a ratio in the range -100 to +100 - exactly where we want it. Thus:
Total upload: 250 MB, total download: 500 MB.

A module with 50 MB upload and 100 MB download usage gets:
    totalup% = 50 * 100 / 250 = 20%
    totaldn% = 100 * 100 / 500 = 20%
    ratio = 20 - 20 = 0.0

A module with 100 MB upload and 50 MB download usage gets:
    totalup% = 100 * 100 / 250 = 40%
    totaldn% = 50 * 100 / 500 = 10%
    ratio = 10 - 40 = -30.0

A module with 100 MB upload and 350 MB download usage gets:
    totalup% = 100 * 100 / 250 = 40%
    totaldn% = 350 * 100 / 500 = 70%
    ratio = 70 - 40 = 30.0

Check: 0.0 + (-30.0) + 30.0 = 0.0 -> correct

This score is added to the base score. The result is that modules that have used the given bandwidth more effectively to our advantage get higher priority. This also ensures that new modules start out in a relatively fair state compared to old modules. If we didn't involve percentages in our calculations, new modules would have no chance of competing for bandwidth.
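A short C++ sketch of the ratio calculation, assuming hypothetical names (Counters, usageRatio) rather than the real HydraNode data structures, might look like this:

    #include <cstdint>

    // Illustrative sketch only - field and function names are assumptions.
    struct Counters {
        std::uint64_t uploaded;     // bytes uploaded by this module
        std::uint64_t downloaded;   // bytes downloaded by this module
    };

    // Returns the "usefulness" ratio in the range -100 .. +100.
    double usageRatio(const Counters &mod, std::uint64_t totalUp, std::uint64_t totalDn) {
        if (totalUp == 0 || totalDn == 0)
            return 0.0;                                   // no traffic yet - neutral
        double upPct = mod.uploaded   * 100.0 / totalUp;  // totalup%
        double dnPct = mod.downloaded * 100.0 / totalDn;  // totaldn%
        return dnPct - upPct;
    }

    // With total upload 250 MB and total download 500 MB, a module that
    // uploaded 100 MB and downloaded 350 MB gets 70 - 40 = +30, matching
    // the worked example above.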
2. Packet and connection scheduling
2.1 Connection scheduling

The frontend requests a new connection from the scheduler. The scheduler checks whether any free connection slots are open right now. If there are, it grants the request. If no free connections are available at this time, the request is marked pending. Every time a connection is lost (disconnected), the pending-requests queue must be scanned and the highest-ranking request granted.
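One straightforward way to keep the pending requests ordered is a priority queue keyed on the request's PC. The sketch below is an assumption about how this could look; ConnRequest, onConnectionLost and grantConnection are made-up names, not the actual HydraNode interfaces:

    #include <cstdint>
    #include <queue>
    #include <vector>

    // Illustrative sketch only - not the actual HydraNode data structures.
    struct ConnRequest {
        int           score;   // priority score (PC) of the request
        std::uint32_t id;      // identifies the requesting module/connection
    };

    // Order so that the highest-scoring request sits on top of the queue.
    struct ByScore {
        bool operator()(const ConnRequest &a, const ConnRequest &b) const {
            return a.score < b.score;
        }
    };

    using PendingQueue =
        std::priority_queue<ConnRequest, std::vector<ConnRequest>, ByScore>;

    // Called whenever an existing connection is lost: if requests are pending,
    // grant the highest-ranking one (grantConnection is a hypothetical call).
    void onConnectionLost(PendingQueue &pending) {
        if (!pending.empty()) {
            ConnRequest best = pending.top();
            pending.pop();
            // grantConnection(best);
        }
    }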
2.2 Downstream scheduling
Whenever incoming data is detected on one of the scheduled sockets, the scheduler must verify that there is indeed free bandwidth to receive the data sent to us. If there is no free bandwidth at the time, the socket must be inserted into the readable-sockets queue. Whenever additional bandwidth frees up, the data is read from the socket and buffered internally within the scheduler. After that, a notification is submitted to the owner of the socket that the data is ready to be retrieved. Basically, we will have a method that is called during each event loop and performs the following operations:
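As a rough illustration of what such a per-event-loop method could do, here is a hedged C++ sketch; Scheduler, downstreamTick, m_readable and m_bwIn are assumed names, and the Socket type is a simplified stand-in rather than the real HydraNode socket:

    #include <algorithm>
    #include <cstddef>
    #include <deque>
    #include <iostream>
    #include <string>
    #include <vector>

    // Illustrative sketch only - simplified stand-ins, not the HydraNode API.
    struct Socket {
        int         id = 0;
        std::string pending;   // data the peer has already sent us
    };

    struct Scheduler {
        std::deque<Socket*>      m_readable;  // sockets waiting for free bandwidth
        std::size_t              m_bwIn = 0;  // bytes we may still read this tick
        std::vector<std::string> m_buffers;   // data buffered inside the scheduler

        // Called once per event loop.
        void downstreamTick() {
            while (!m_readable.empty() && m_bwIn > 0) {
                Socket *sock = m_readable.front();
                if (sock->pending.empty()) {          // nothing to read right now
                    m_readable.pop_front();
                    continue;
                }
                // Read only as much as the remaining bandwidth budget allows.
                std::size_t n = std::min(m_bwIn, sock->pending.size());
                m_buffers.push_back(sock->pending.substr(0, n));
                sock->pending.erase(0, n);
                m_bwIn -= n;
                // Notify the socket owner that buffered data can be retrieved.
                std::cout << "socket " << sock->id << ": " << n << " bytes ready\n";
                if (sock->pending.empty())
                    m_readable.pop_front();           // fully drained this tick
            }
        }
    };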
2.3 Upstream scheduling

Whenever a request to send data is submitted to the scheduler, the packet is inserted into the scheduler's internal buffer, and the function returns immediately to the caller. During each event loop, the packet queue is scanned for pending packets, and the following is done:
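Again as a hedged sketch rather than the actual implementation, the upstream pass could buffer packets on send() and flush the highest-priority ones within the outbound budget during each event loop; QueuedPacket, UpstreamScheduler and the field names are assumptions:

    #include <algorithm>
    #include <cstddef>
    #include <string>
    #include <utility>
    #include <vector>

    // Illustrative sketch only - not the actual HydraNode API.
    struct QueuedPacket {
        int         score;   // effective priority score of the packet
        std::string data;
    };

    struct UpstreamScheduler {
        std::vector<QueuedPacket> m_queue;     // scheduler-internal packet buffer
        std::size_t               m_bwOut = 0; // bytes we may still send this tick

        // Called by modules: buffer the packet and return to the caller at once.
        void send(QueuedPacket pkt) { m_queue.push_back(std::move(pkt)); }

        // Called once per event loop: scan the queue and transmit what fits.
        void upstreamTick() {
            // Highest-priority packets first.
            std::sort(m_queue.begin(), m_queue.end(),
                      [](const QueuedPacket &a, const QueuedPacket &b) {
                          return a.score > b.score;
                      });
            std::size_t sent = 0;
            while (sent < m_queue.size() && m_queue[sent].data.size() <= m_bwOut) {
                m_bwOut -= m_queue[sent].data.size();
                // ... write m_queue[sent].data to the corresponding socket ...
                ++sent;
            }
            // Remove the packets that were sent; the rest wait for the next tick.
            m_queue.erase(m_queue.begin(),
                          m_queue.begin() + static_cast<std::ptrdiff_t>(sent));
        }
    };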