Yesterday the latest D2 exploit pack for CANVAS was released. In it was an exploit module for the local file include vulnerability in TP-LINK's TL-WR841N Wireless Router (CVE-2012-6276). I'd been looking at some data in SWARM that had a high concentration of TP-LINK routers, so in the back of my mind I thought it'd be interesting to see if I could find vulnerable routers with a high degree of confidence.
For those of you playing at home, SWARM is Immunity's answer to distributed and parallel CANVAS. If you need to scale CANVAS to hundreds of thousands of IPs per hour, then SWARM is the solution. One of the things that SWARM is really good at is recon on a massive scale, which can give you perspective on how much you should care about a particular vulnerability. Sure enough, today another interesting router vulnerability was disclosed by the folks at www.s3cur1ty.de, and they were nice enough to include some tips on finding these routers. Having looked at well over 100,000,000 IPs with SWARM at this point, I decided to see how many of these routers were really out there.
We learned a few things:
1) We had much better results parsing the webserver's index page than relying on server banners, because of #2
2) There are a variety of server headers associated with this product. We saw: Mathopd, "Linux, HTTP/1.1, DIR-[3|6]00", obviously spoofed headers, and even blank server headers
3) Sometimes you get the firmware version in the HTTP Server header (2.x series); in other cases you have to parse the server's index.html (1.x), and occasionally you wouldn't get it at all
4) Some configurations will leak what appear to be internal host names and MAC addresses to unauthenticated users (name=$HOSTNAME&key=$MAC_ADDR is the string if you're curious). They didn't look like SSIDs but without more context it's hard to tell.
5) There aren't that many easily found routers of this type with their web servers exposed to the internet: out of our data set we found ~1500
6) We found only a handful of routers which met the version criteria to be considered vulnerable
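The fingerprinting logic in points 2 through 4 can be sketched roughly as follows. This is a minimal illustration, not SWARM's actual module code; the regexes and field names are assumptions about what such a check might look like.

```python
import re

def fingerprint(server_header, index_html):
    """Illustrative sketch: pull firmware version and leaked
    name/key pairs out of a router's HTTP response."""
    result = {"server": server_header or "(blank)", "firmware": None, "leaks": []}

    # 2.x firmware versions sometimes show up right in the Server header
    m = re.search(r"\b(2\.\d+(?:\.\d+)*)\b", server_header or "")
    if m:
        result["firmware"] = m.group(1)

    # 1.x firmware has to be scraped out of the index page instead
    if result["firmware"] is None:
        m = re.search(r"\b(1\.\d+(?:\.\d+)*)\b", index_html or "")
        if m:
            result["firmware"] = m.group(1)

    # Some configurations leak name=$HOSTNAME&key=$MAC_ADDR to
    # unauthenticated users; collect those pairs if present
    for name, key in re.findall(r"name=([^&\s\"']+)&key=([0-9A-Fa-f:]{12,17})",
                                index_html or ""):
        result["leaks"].append((name, key))

    return result
```

The fallback ordering mirrors what we saw in the wild: trust the Server header when it carries a version, and only then go digging through index.html.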
A subtle thing people forget with scans of any real magnitude is that they offer a rolling snapshot of the IP space. The results you get for a particular IP reflect only the state of that IP at the time the request was sent. This may seem obvious, but if your scan takes two days to run you can't consider the output to be "Thursday's results": half your requests went out on Thursday, the other half on Friday. For environments where hosts are statically mapped this isn't as big of an issue, but when you look at big networks it becomes a headache. It's fun to see the exact same host pop up at multiple IPs within the same set of results. It's also possible you'll miss hosts entirely if they jump from the end of the IP range to the beginning while the scan is running.
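One cheap way to keep the "rolling snapshot" property honest is to timestamp every probe individually rather than stamping the scan as a whole. A minimal sketch, where scan() is a hypothetical stand-in for the real probe function:

```python
import time

def scan(ip):
    # Placeholder probe standing in for the real scanner logic
    return {"ip": ip, "alive": True}

def scan_with_timestamps(ips):
    """Attach a per-request timestamp to each result, so a multi-day
    run isn't mistaken for a single point-in-time snapshot."""
    results = []
    for ip in ips:
        record = scan(ip)
        record["ts"] = time.time()  # when THIS request went out
        results.append(record)
    return results
```

With per-result timestamps you can at least reason about which half of a two-day run a given answer belongs to, even if you can't fix the host-hopping problem itself.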
Going through this exercise gave me another idea for SWARM. Many American ISPs now bundle anti-virus with their broadband offerings, and security is playing a bigger part in the marketing narrative. I think a neat application of SWARM that fits with this is for your ISP to notify you if your router has a public vulnerability. At some set interval the ISP can kick off a scan that finds all the routers with their webservers exposed to the internet, then filter the results through some Python to pick out models with public bugs (which is all of them). Easy, automated, helpful.
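That filtering step might look something like the sketch below. The model names and version checks here are made up for illustration; a real deployment would populate the table from public advisories.

```python
# Hypothetical table mapping router models to a check for
# known-vulnerable firmware. Entries are illustrative only.
VULNERABLE = {
    "TL-WR841N": lambda fw: fw is not None and fw.startswith("3.13"),
    "DIR-300":   lambda fw: True,  # e.g. a bug affecting all firmware
}

def flag_vulnerable(scan_results):
    """Yield (ip, model) pairs for scan results matching a public bug."""
    for r in scan_results:
        check = VULNERABLE.get(r.get("model"))
        if check and check(r.get("firmware")):
            yield r["ip"], r["model"]
```

The ISP-side job then reduces to: scan, fingerprint, run the results through flag_vulnerable(), and send the notifications.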