Monday, 23 April 2007

Cost of Time

Network protocols have a heavy dependency on time: when should a packet be resent? Will I ever receive this packet? Is the other party still running? The PGM protocol defines many timers in the receiver for determining packet state: NAK_RB_IVL, NAK_RPT_IVL and NAK_RDATA_IVL. There are also many different methods of reading the current time, from POSIX gettimeofday() & clock_gettime() and Windows QueryPerformanceCounter() & _ftime() to Intel's RDTSC & RDTSCP instructions. The GLib suite defines GTimer to provide some abstraction, but it uses doubles and hence potentially expensive floating-point math.
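
As a rough illustration of the floating-point versus integer difference (this is not the benchmark behind the graphs below, just a minimal sketch), one can time a loop of g_timer_elapsed() calls against a loop of raw clock_gettime() calls:

/* Minimal sketch: compare the per-call cost of GLib's g_timer_elapsed(),
 * which returns a gdouble, against POSIX clock_gettime(), which works in
 * integer nanoseconds.  Build with:
 *   gcc bench.c $(pkg-config --cflags --libs glib-2.0) -o bench
 */
#include <glib.h>
#include <stdio.h>
#include <time.h>

#define ITERATIONS 1000000

int main (void)
{
    GTimer *timer = g_timer_new ();
    struct timespec ts;
    volatile double elapsed_fp = 0.0;  /* volatile: keep calls from being optimised away */
    volatile long   elapsed_ns = 0;

    /* Cost of GTimer: each call involves floating-point arithmetic. */
    clock_gettime (CLOCK_MONOTONIC, &ts);
    long start_ns = ts.tv_sec * 1000000000L + ts.tv_nsec;
    for (int i = 0; i < ITERATIONS; i++)
        elapsed_fp += g_timer_elapsed (timer, NULL);
    clock_gettime (CLOCK_MONOTONIC, &ts);
    long gtimer_cost = (ts.tv_sec * 1000000000L + ts.tv_nsec) - start_ns;

    /* Cost of raw clock_gettime(): integer arithmetic only. */
    clock_gettime (CLOCK_MONOTONIC, &ts);
    start_ns = ts.tv_sec * 1000000000L + ts.tv_nsec;
    for (int i = 0; i < ITERATIONS; i++) {
        clock_gettime (CLOCK_MONOTONIC, &ts);
        elapsed_ns += ts.tv_nsec;
    }
    clock_gettime (CLOCK_MONOTONIC, &ts);
    long posix_cost = (ts.tv_sec * 1000000000L + ts.tv_nsec) - start_ns;

    printf ("g_timer_elapsed: %ld ns total (%.1f ns/call)\n",
            gtimer_cost, (double) gtimer_cost / ITERATIONS);
    printf ("clock_gettime  : %ld ns total (%.1f ns/call)\n",
            posix_cost, (double) posix_cost / ITERATIONS);

    g_timer_destroy (timer);
    return 0;
}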

So one question is: what kind of overhead can one expect from GLib timers? Here is a graph of a test run with timers enabled:

[Graph: test results with GLib timers enabled]
Now removing the timers completely and re-running gives the following results:

[Graph: test results with timers removed]
The test series "sequence numbers in jumps" causes generation of many NAKs, each requiring its own timestamp for expiry detection; 68% of the processing is spent simply getting the current time!
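
A common mitigation, not described in the post itself, is to amortise the clock call: read the time once per batch of work and stamp every NAK generated in that pass with the cached value. Here is a minimal sketch of the idea; pgm_nak_t, generate_naks_for_gap() and the 50 ms interval are hypothetical names and values chosen purely for illustration:

/* Hedged sketch: one clock_gettime() per batch instead of one per NAK.
 * pgm_nak_t and generate_naks_for_gap() are hypothetical. */
#include <time.h>

typedef struct {
    unsigned long   sequence;   /* missing sequence number */
    struct timespec expiry;     /* when NAK_RB_IVL elapses */
} pgm_nak_t;

#define NAK_RB_IVL_NS 50000000L /* example back-off interval: 50 ms */

static void
generate_naks_for_gap (pgm_nak_t *naks, unsigned long first, unsigned long count)
{
    struct timespec now;
    /* One system call for the whole batch ... */
    clock_gettime (CLOCK_MONOTONIC, &now);

    for (unsigned long i = 0; i < count; i++) {
        naks[i].sequence = first + i;
        /* ... every NAK in this pass shares the cached timestamp. */
        naks[i].expiry.tv_sec  = now.tv_sec;
        naks[i].expiry.tv_nsec = now.tv_nsec + NAK_RB_IVL_NS;
        if (naks[i].expiry.tv_nsec >= 1000000000L) {
            naks[i].expiry.tv_sec++;
            naks[i].expiry.tv_nsec -= 1000000000L;
        }
    }
}

The trade-off is a small loss of precision, since every NAK in the batch carries the same nominal time, in exchange for removing the dominant per-NAK cost measured above.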
