I had a workshop at the offices of a major international backbone provider today, on the subject of security logging/monitoring/response and DDoS defense for large companies.
In the course of providing managed security services, part of the problem these guys face is the proliferation of custom exploits and attacks, up 59% this year according to their research (I’d link to the PDF, but since I’m under NDA with the client I was representing, I’m not entirely sure it’d be appropriate. You figure it out.) As it turns out, part of their mitigation strategy for quickly identifying new attacks is to set up all the non-allocated netblocks they own as gigantic honeynets, for research purposes.
I asked one of their sales guys why they didn’t allocate a small portion of these IPs as tarpits. It’s not a new technology by anyone’s estimation, and given the sheer scale of IPs they’re using as honeypots, it wouldn’t be a big sacrifice: a single LaBrea installation can trap hundreds of worms and stop older beasties from clogging up the tubes. His response? “Our mission isn’t to save the Internet.”
Honestly though, it should be: it’d be in everyone’s interest, including their own, to minimize the capacity consumed by worms and bots, bandwidth that could be put to productive use elsewhere. This gave me an idea. I hadn’t been aware of the huge amount of unused IP space available to providers of this size, and I also recall various organizations and individuals having excessive IP allocations revoked and thrown back into the pool (a good move, considering that the exhaustion of the IPv4 address space has been predicted with clockwork regularity every year since at least 2000).
So, why not have IANA and the RIRs make it a condition of allocation that 5% of any unused IP space an organization holds must be given over to the public good, i.e. sticky honeynets / tarpits?
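For a sense of how cheap this is to run: LaBrea itself works at the packet level, answering ARP for unused addresses and then replying to SYNs with a tiny (eventually zero) TCP window so the scanner can’t even transmit. A minimal sketch of the simpler accept-and-ignore variant, in Python, looks like the following; the port and connection limit are arbitrary, and this is an illustration of the idea, not LaBrea’s actual mechanism.

```python
import socket
import threading

def run_tarpit(host="127.0.0.1", port=0, max_clients=1):
    """Minimal connection tarpit: accept TCP connections and hold
    them open without ever sending a byte, so each scanning worm or
    bot wastes a socket (and its timeout budget) on us. Returns the
    listening socket, the list of trapped peers, and the acceptor
    thread. port=0 lets the OS pick a free port."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen(max_clients)
    trapped = []

    def accept_loop():
        while len(trapped) < max_clients:
            conn, addr = srv.accept()
            trapped.append((conn, addr))  # never read, never reply

    t = threading.Thread(target=accept_loop, daemon=True)
    t.start()
    return srv, trapped, t
```

Each trapped connection costs the operator a file descriptor and essentially nothing else, which is why sacrificing a sliver of dead IP space for this is such a cheap win.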
2. Stopping DDoS at the Edge
We started discussing DDoS mitigation strategies for big customers; this particular ISP’s offering is a cloud-based solution. Which brings me back to an idea: since counter-attack is kind of silly when you’re fighting 100,000+ hosts (plus, do you really want companies / governments deciding whom they get to actively knock off the net?), why don’t we unearth the concept of stopping DDoS at the edge?
This has been around as a concept for some time; it would require a fair amount of coordination on the part of ISPs and, potentially, governments, considering the magnitude of the attacks suffered by Estonia in 2007, as well as China’s and North Korea’s burgeoning military / government-sponsored cyberwar capabilities. Still, it seems to me that stopping the baddies on the way in is a far more sensible goal than stopping them on the way out.
Remotely Triggered Black Hole Routing is a reasonably fresh approach to this, allowing the remote reconfiguration of BGP routes to drop malicious DDoS traffic. Older concepts include Diadem, a combination of software and dedicated hardware.
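To make the RTBH mechanism concrete, here’s a toy sketch (my own, in Python with made-up field names, not any vendor’s API): the trigger router announces a host route for the victim whose next hop every edge router statically points at a discard interface, so the flood dies at the edge instead of congesting the core. The 192.0.2.1 next hop and 65535:666 community are conventional choices in blackholing setups, not anything this particular ISP necessarily uses.

```python
def rtbh_announcement(victim_ip, blackhole_next_hop="192.0.2.1",
                      blackhole_community="65535:666"):
    """Build the parameters a trigger router would announce via BGP
    to black-hole traffic towards a victim. Edge routers carry a
    static route pointing blackhole_next_hop at their discard
    interface (Null0), and/or map the community to a drop policy,
    so any packet destined for the victim prefix is discarded at
    the first router that sees it."""
    return {
        "prefix": f"{victim_ip}/32",       # host route for the victim
        "next_hop": blackhole_next_hop,    # statically routed to Null0
        "community": blackhole_community,  # tag edge routers act on
        "no_export": True,                 # keep the blackhole inside our AS
    }
```

The obvious downside, which the points below circle around, is that the blackhole completes the attacker’s job for them: the victim address goes dark everywhere the route propagates.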
There are a few major problems with the idea:
a) It would require a massive amount of CPU power.
Considering how much time and money Cisco and the like are investing in deep packet inspection technologies (not always for good purposes; caveat: the linked article is very one-sided, but the point holds), I don’t think it would be such an insane development goal to create and deploy dedicated edge routers capable of on-the-fly reconfiguration.
b) It would require large amounts of co-operation and coordination
Well, yes. The Internet is supposed to work on the basis of this.
c) It has the potential to knock innocuous sites offline
This is the biggie; using spoofed source origin IPs, it’s conceivable that anyone could be falsely accused of launching a DDoS and in turn be knocked off. The ISP I spoke with today actually had the tools to deal with this; they worked on the basis of source IP trust levels for customer IPS deployments. Roughly speaking, every dodgy-looking connection is checked by the provider against a dynamic database of activity emanating from that IP / IP range over its backbone network. Connections (not contents) are archived for around two weeks; the security management service informs customers of the trust level of that IP.
This could conceivably be adapted to prevent false accusations of DDoS traffic — for example, by placing modified, heavily audited IDS probes at company networks’ egress points which match “real” traffic emanating from the company with the ISP’s database of traffic claiming to come from this company — bogus connections would thus not be attributed to the innocent victim. It’d be quite tricky with individual users, but it would make it fairly difficult for a botnet to indirectly knock a company or edge ISP offline by getting backbone carriers to drop its traffic due to spoofing.
d) Would you trust a consortium of governments / ISPs to decide whom to blackhole?
Obviously this is a problem, but I believe that the distributed nature of the Internet is proof against such shenanigans.