Global rate-limiting
Apr. 17th, 2013 09:04 am

Let's say we wanted to stop spam by letting each IP only send a certain number of mails a second. How could we do that if we accepted relying on a single server to do the limiting?
Well, it would look like this:
- The mail program sends a request to the mail server to send mail.
- The mail server gives the mail program a one-time token and says "get the time server to sign that it has bound you for five seconds, on this token".
- The mail program sends the token to the time server. The time server says "please wait five seconds".
(If the mail program tries to send another token before five seconds has passed, the time server says "you've already got something waiting, hold.")
- Five seconds later, the time server returns this token, signed.
- The mail program sends this token to the mail server, is permitted to send mail, and sends the mail.
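The exchange above can be sketched in a few lines. This is a toy model, not a real protocol: the names (`TimeServer`, `MailServer`), the shared-secret HMAC standing in for a proper signature, and the explicit `now` clock parameter (so the delay can be simulated without actually sleeping) are all my assumptions.

```python
import hmac, hashlib, secrets

DELAY = 5.0  # seconds a sender must wait per mail

class TimeServer:
    """Toy time server: holds at most one pending token per client,
    and signs it only once DELAY seconds have elapsed."""
    def __init__(self, key: bytes):
        self._key = key              # stands in for a real signing key
        self._pending = {}           # client_id -> (token, ready_at)

    def submit(self, client_id: str, token: bytes, now: float) -> str:
        if client_id in self._pending:
            return "you've already got something waiting, hold"
        self._pending[client_id] = (token, now + DELAY)
        return "please wait five seconds"

    def collect(self, client_id: str, now: float):
        token, ready_at = self._pending.get(client_id, (None, None))
        if token is None or now < ready_at:
            return None              # nothing pending, or not ready yet
        del self._pending[client_id]
        return hmac.new(self._key, token, hashlib.sha256).digest()

class MailServer:
    """Toy mail server: issues one-time tokens and only accepts mail
    accompanied by a valid time-server signature on that token."""
    def __init__(self, key: bytes):
        self._key = key              # shared with the time server in this toy
        self._issued = set()

    def request_to_send(self) -> bytes:
        token = secrets.token_bytes(16)
        self._issued.add(token)
        return token

    def send_mail(self, token: bytes, signature: bytes) -> bool:
        expected = hmac.new(self._key, token, hashlib.sha256).digest()
        if token in self._issued and hmac.compare_digest(signature, expected):
            self._issued.discard(token)   # tokens are one-time
            return False or True
        return False
```

A run of the protocol: the sender gets a token, submits it, is refused a second submission, collects the signature after five simulated seconds, and sends exactly one mail with it.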
There's a bit of subtlety if one wants to have it work with chains of mail servers: the server has to ask each server down the chain to get a token to represent the transaction, then ask the sender to have the time server sign them all. But other than that, it would work.
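One way to read "sign them all" is a single signature over the whole set of hop tokens, which any relay in the chain can then verify for itself. A hypothetical sketch (again HMAC with a shared key standing in for a real signature; the function names are mine):

```python
import hmac, hashlib

def sign_chain(key: bytes, tokens: list[bytes]) -> bytes:
    # The time server signs the whole set of hop tokens at once,
    # so a single five-second wait covers the entire relay chain.
    # Sorting makes the signature independent of token order.
    return hmac.new(key, b"".join(sorted(tokens)), hashlib.sha256).digest()

def hop_accepts(key: bytes, own_token: bytes, all_tokens: list[bytes],
                signature: bytes) -> bool:
    # Each relay checks that its own token is among those signed.
    return own_token in all_tokens and hmac.compare_digest(
        signature, sign_chain(key, all_tokens))
```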
... except for the simple matter that nobody would accept having an utterly centralized service of this kind; and that zombies make it hard to find a delay that's both fair to legitimate senders and long enough to make spamming uneconomical.
The first problem might be solvable by making use of Byzantine fault tolerance, however, while the second problem would require some kind of trust network. Both of these are interesting problems in themselves.
In a way, the blacklists, most commonly the DNSBL and RBL types, serve as very crude trust networks. They only have two states (accepted or rejected) and although their contents are distributed, the admission process itself is centralized: the maintainers decide who goes on the blacklist. If we could set up a fully distributed trust network, that would be better, because it would have fewer points of failure and would respond more quickly. But doing that... not going to be simple.
A rate-limiting service would be useful not just in the context of spam, but in any other context where one may not want too many requests to be sent at once. Using it to limit TCP flooding is probably overkill, but if it's quick enough, it could be used to limit chat flooding in a system without servers.
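The limit itself, as opposed to the hard problem of enforcing it without a central server, is just the classic token bucket: each sender accrues permission at a fixed rate, up to a burst cap. A minimal local sketch (with an explicit `now` parameter so it can be tested without real time passing):

```python
class TokenBucket:
    """Allow at most `rate` sends per second, with bursts up to `capacity`."""
    def __init__(self, rate: float, capacity: float, now: float = 0.0):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity       # start full: a fresh sender can burst
        self.last = now

    def allow(self, now: float) -> bool:
        # Refill in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

In a serverless chat, each peer would run a bucket per sender and simply drop messages that arrive faster than the agreed rate; the open question the post raises is how to stop a sender from lying about its own clock.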