I’ve favored the fairer XX, I’ve had the data collected, and I’ve crunched the numbers. For periods of a few days each, I applied several different traffic-shaping policies, collected the individual measurements, watched the overall trends, and applied my critical faculties. I’ve settled on what this is going to look like.
It turns out any one connection needs to be severely limited, but with reasonable bursts allowed, and that this is a matter of both filtering and scheduling. There’s a seemingly acceptable balance to be struck between allowing a certain bandwidth per individual connection, and scheduling all traffic over that threshold collectively.
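The post doesn’t spell out the mechanism, but a per-connection limit with bursts is classically a token bucket: the connection may burst up to the bucket size, then settles to the sustained rate, and anything over the threshold gets handed to a shared scheduler. A minimal sketch of that idea (the class name, parameters, and the `allow` helper are mine, for illustration; on a real box this would typically live in the kernel, e.g. Linux `tc`, not in application code):

```python
from time import monotonic

class TokenBucket:
    """Per-connection limiter: sustained `rate` bytes/s, burst up to `burst` bytes."""

    def __init__(self, rate, burst):
        self.rate = rate        # refill rate, bytes per second
        self.burst = burst      # bucket capacity, bytes
        self.tokens = burst     # start full so an idle connection can burst
        self.last = monotonic()

    def allow(self, nbytes, now=None):
        """True if `nbytes` fits the per-connection budget; False means
        the traffic is over the threshold and goes to the shared scheduler."""
        now = monotonic() if now is None else now
        # Refill proportionally to elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True
        return False
```

An idle connection accumulates a full bucket, so it can burst `burst` bytes immediately; a sustained sender drains the bucket and is then held to `rate`, which matches the burst-then-limit behavior described above.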
Today, we apply a bandwidth limitation to less than 8% of all traffic.
You can see that at 11:00, I apply a policy that may reasonably be considered slightly too restrictive.
I “recover” from this at 11:15, and bursts start kicking in (not unlike the one just before 13:00).
Sustained connections with high bandwidth consumption will, however, consume less of the total bandwidth available. This is what I would call fair.
The very few users hogging most of the bandwidth, whom I referred to in my previous blog post, have been at this for quite some time. Yet the amount of data coming in doesn’t seem to correspond to the amount of bandwidth being spent (re-)distributing that data.