RFC: HydraNode Networking Scheduler (v1)

Abstract

This document describes the packet scheduler used in HydraNode core. The scheduler performs fair packet queueing and bandwidth management between multiple networking modules with varying priorities.

Contents

1. Priority scores
2. Packet and connection scheduling

1. Priority scores (PC)

Every request, be it a connection or a packet, has a priority score attached to it. The base score starts at 0. The lowest possible PC is -100 and the highest is +100; scores outside this range are clamped to it.
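As an illustration, the clamping rule could be expressed with a small helper along the lines of the sketch below; the names PC_MIN, PC_MAX and clampScore are hypothetical and not taken from the HydraNode sources.

    #include <algorithm>

    // Allowed priority score range, as described above.
    const int PC_MIN = -100;
    const int PC_MAX = +100;

    // Clamp a raw score into [-100, +100]; for example, a raw score of
    // +150 becomes +100, and -230 becomes -100.
    inline int clampScore(int raw) {
        return std::max(PC_MIN, std::min(PC_MAX, raw));
    }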

The following modifiers apply to PC:

2. Packet and connection scheduling

2.1 Connection scheduling

The frontend requests a new connection from the scheduler. The scheduler checks whether any free connection slots are currently available; if so, it grants the request. If no free connections are available at the time, the request is marked pending. Every time a connection is lost (disconnected), the pending connections queue must be scanned and the highest-ranking pending request granted.
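As a non-normative illustration, the sketch below shows one way the slot accounting and the pending queue could be arranged; ConnRequest, ConnectionScheduler and the grant callback are hypothetical names, not the actual HydraNode interfaces.

    #include <functional>
    #include <queue>
    #include <vector>

    struct ConnRequest {
        int pc;                        // priority score, clamped to [-100, +100]
        std::function<void()> grant;   // invoked when a connection slot is handed out
    };

    // Order pending requests so that the highest PC is served first.
    struct ByScore {
        bool operator()(const ConnRequest &a, const ConnRequest &b) const {
            return a.pc < b.pc;
        }
    };

    class ConnectionScheduler {
    public:
        explicit ConnectionScheduler(unsigned slots) : m_freeSlots(slots) {}

        // Frontend asks for a connection: grant immediately if a slot is
        // free, otherwise mark the request pending.
        void requestConnection(const ConnRequest &req) {
            if (m_freeSlots > 0) {
                --m_freeSlots;
                req.grant();
            } else {
                m_pending.push(req);
            }
        }

        // Called whenever a connection is lost: hand the freed slot to the
        // highest-ranking pending request, if any.
        void connectionLost() {
            ++m_freeSlots;
            if (!m_pending.empty()) {
                ConnRequest next = m_pending.top();
                m_pending.pop();
                --m_freeSlots;
                next.grant();
            }
        }

    private:
        unsigned m_freeSlots;
        std::priority_queue<ConnRequest, std::vector<ConnRequest>, ByScore> m_pending;
    };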

2.2 Downstream scheduling

Whenever incoming data is detected on one of the scheduled sockets, the scheduler must verify that there is indeed free bandwidth to receive the data sent to us. If there is no free bandwidth at the time, the socket must be inserted into the readable-sockets queue. Whenever additional bandwidth frees up, the data is read from the socket and buffered internally within the scheduler; after that, a notification is submitted to the owner of the socket that the data is ready to be retrieved. In essence, there will be a method, called during each event loop, that performs the read pass described below:

The pass repeats until either all readable sockets have been read from or the spare bandwidth is exhausted. Note that ideally we should never have to abort the loop because of running out of spare bandwidth, since the bandwidth is always divided fairly between all sockets, so each pending socket should get at least some data read. However, it is possible that the scheduler is running at a very high resolution and the amounts of data involved are too small to be divided evenly (for example, 3 bytes between 5 sockets). It is up to the implementation to deal with that situation, but it is recommended that the scheduler not run at such high resolutions, because of the inherent performance hit.
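The sketch below illustrates one possible shape of that per-event-loop read pass. Socket, readFromSocket() and notifyOwner() are placeholders for whatever socket abstraction and notification mechanism the real implementation provides.

    #include <algorithm>
    #include <cstddef>
    #include <cstdint>
    #include <deque>
    #include <map>
    #include <vector>

    struct Socket;                                        // opaque socket handle (assumed)
    // Assumed non-blocking read; appends at most 'max' bytes to 'out' and
    // returns the number of bytes actually read.
    std::size_t readFromSocket(Socket *sock, std::vector<std::uint8_t> &out, std::size_t max);
    // Assumed "data is ready to be retrieved" notification to the socket owner.
    void notifyOwner(Socket *sock);

    class DownstreamScheduler {
    public:
        // Socket became readable while no bandwidth was free: queue it.
        void markReadable(Socket *sock) { m_readable.push_back(sock); }

        // Called once per event loop with the download bandwidth that has
        // freed up since the last pass.
        void tick(std::size_t freeBandwidth) {
            std::size_t toVisit = m_readable.size();      // one attempt per socket per pass
            for (std::size_t i = 0; i < toVisit && freeBandwidth > 0; ++i) {
                // Divide the remaining quota among the sockets not yet visited;
                // very small quotas (e.g. 3 bytes for 5 sockets) cannot be split
                // evenly, hence the minimum share of one byte.
                std::size_t share =
                    std::max<std::size_t>(1, freeBandwidth / (toVisit - i));

                Socket *sock = m_readable.front();
                m_readable.pop_front();

                std::size_t got = readFromSocket(sock, m_buffers[sock], share);
                freeBandwidth -= std::min(got, freeBandwidth);
                if (got > 0) {
                    notifyOwner(sock);                    // owner may now retrieve the data
                }
            }
        }

    private:
        std::deque<Socket*> m_readable;                    // sockets waiting for bandwidth
        std::map<Socket*, std::vector<std::uint8_t>> m_buffers; // data buffered in the scheduler
    };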

2.3 Upstream scheduling

Whenever a request to send out data is submitted to the scheduler, the packet is inserted into the scheduler's internal buffer and the call returns immediately. During each event loop, the packet queue is scanned for pending packets and each pending packet is given an appointed share of the available upload bandwidth. It is possible that not all of the appointed amount can be transmitted, either because the receiver cannot receive that fast or because of local networking problems. In that case, the remainder of the bandwidth must be re-scheduled between the remaining packets.
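A comparable sketch of the upstream pass is given below; Socket and sendToSocket() are again placeholders, and the queue handling shown is only one possible way to realize the re-scheduling rule described above.

    #include <algorithm>
    #include <cstddef>
    #include <cstdint>
    #include <deque>
    #include <vector>

    struct Socket;                                        // opaque socket handle (assumed)
    // Assumed non-blocking send; transmits at most 'len' bytes and returns
    // the number of bytes actually accepted by the network layer.
    std::size_t sendToSocket(Socket *sock, const std::uint8_t *data, std::size_t len);

    struct OutPacket {
        Socket *sock;
        std::vector<std::uint8_t> data;
        std::size_t offset;                               // bytes transmitted so far
    };

    class UpstreamScheduler {
    public:
        // The caller hands the packet over and returns immediately; the
        // actual transmission happens later, during the event loop.
        void send(Socket *sock, std::vector<std::uint8_t> data) {
            m_queue.push_back(OutPacket{sock, std::move(data), 0});
        }

        // Called once per event loop with the upload bandwidth available
        // for this pass.
        void tick(std::size_t freeBandwidth) {
            std::size_t toVisit = m_queue.size();         // one attempt per packet per pass
            for (std::size_t i = 0; i < toVisit && freeBandwidth > 0; ++i) {
                // Divide what is left of the quota among the packets not yet
                // visited, so bandwidth unused by earlier packets (slow
                // receiver, local networking problems) is re-scheduled.
                std::size_t share =
                    std::max<std::size_t>(1, freeBandwidth / (toVisit - i));

                OutPacket pkt = std::move(m_queue.front());
                m_queue.pop_front();

                std::size_t want = std::min(share, pkt.data.size() - pkt.offset);
                std::size_t sent =
                    sendToSocket(pkt.sock, pkt.data.data() + pkt.offset, want);

                pkt.offset += sent;
                freeBandwidth -= std::min(sent, freeBandwidth);

                if (pkt.offset < pkt.data.size()) {
                    m_queue.push_back(std::move(pkt));    // still pending; keep it queued
                }
            }
        }

    private:
        std::deque<OutPacket> m_queue;                    // packets buffered by the scheduler
    };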