2008-03-01
Abstract
Morton Swimmer reports on two security conferences of a more hands-on nature: the 24th Chaos Communication Congress in Berlin, Germany, and Black Hat DC in Washington, DC, USA.
Copyright © 2008 Virus Bulletin
Recently I had the privilege of attending two security conferences of a more hands-on nature: the 24th Chaos Communication Congress in Berlin, Germany, and Black Hat DC in Washington, DC, USA. While I was new to the Black Hat series of conferences, I was certainly familiar with the sort of material presented, ranging from network security issues to embedded device security. In contrast, the CCC conferences always contain something unexpected and unfamiliar. I’m not ashamed to say that I learned a lot from both conferences, including OS fingerprinting, code mutations, a new OS tracing facility popping up in many Unix derivatives, how to phish phishers, web application forensics, measuring botnet sizes using DNS, and measuring the size of the Storm botnet.
The venerable Chaos Computer Club of Germany organized its 24th conference in late December – although ‘organized’ should be taken with a pinch of salt, with the conference on the whole emanating an air of haphazardness (the suspicion, however, is that these days that is by design).
In terms of content, the conference is always a mixed bag, and you won’t know what you will be getting until you are at the event. There are quite a few talks on current events and policy, others focus on old-school hacking pure and simple (and often bordering on art), and then there are the security-related talks.
The political/policy talks included: experiences of being under constant surveillance because a household member was suspected of being a terrorist; experiences of being an MI5 whistle-blower; hacking ideologies; and electronic voting. Old-school hacking topics included: building a steam-powered telegraph; DIY survival; building with microcontrollers; reverse engineering embedded devices; and electronic documents. Of course, the topics that interested me most were those from the field of security: DNS rebinding attacks; the TOR network; the Storm bot; Mac OS X kernel and Windows security issues; hacking barcodes; web application security; and new ways of port scanning. Let me give you a sample of these.
The always indomitable Dan Kaminsky talked us through DNS rebinding attacks – an oldish vulnerability that had nevertheless snuck back into the browser stack. Building on earlier work by Martin Johns, he was able to show how DNS rebinding can be used to gain access to an intranet through the firewall. Though it is not an easy attack to set up, it is entirely feasible if the browser and its various plugins have not been patched to the latest level.
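To make the mechanics concrete, here is a minimal sketch – not Kaminsky's actual tooling, and with illustrative addresses from the documentation and RFC 1918 ranges – of the resolver side of such an attack. The attacker's name server answers the first lookup for its hostname with the attacker's public web server, and later lookups (invited by a zero TTL) with an intranet address, while the browser's same-origin policy still treats both as the same host:

```python
import socket
import struct

# Illustrative addresses only (documentation and RFC 1918 ranges).
PUBLIC_IP = "203.0.113.10"    # attacker's own web server
INTRANET_IP = "192.168.1.1"   # device behind the victim's firewall

def build_response(query: bytes, ip: str, ttl: int = 0) -> bytes:
    """Answer a single-question DNS A query with the given address.

    A TTL of 0 discourages caching, so the browser re-resolves the
    attacker's hostname and can be 'rebound' to an intranet address.
    """
    txid = query[:2]
    # question section: name labels terminated by 0x00, then QTYPE + QCLASS
    end = query.index(b"\x00", 12) + 5
    question = query[12:end]
    # flags 0x8180: standard response, recursion available
    header = txid + struct.pack(">HHHHH", 0x8180, 1, 1, 0, 0)
    # answer: compression pointer to the name at offset 12, type A, class IN
    answer = struct.pack(">HHHIH", 0xC00C, 1, 1, ttl, 4) + socket.inet_aton(ip)
    return header + question + answer

def rebinding_answers(query: bytes):
    """First reply points at the attacker's server, all later ones inside."""
    yield build_response(query, PUBLIC_IP)
    while True:
        yield build_response(query, INTRANET_IP)
```

In a real attack these responses would be served over UDP port 53 for a domain the attacker controls; the sketch stops at packet construction.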
By far the most interesting talk for me was Thorsten Holz’s on the Storm botnet. By actively infiltrating the P2P network that the Storm bot creates, he and his team were able to estimate how large the network actually is, and came to the conclusion that it is not as large as originally suspected. In October 2007, they observed a minimum of 30,000 infected nodes and 5,000–6,000 control nodes, with an upper limit of 45,000–80,000 nodes that could be considered infected. Furthermore, they have not seen any network partitioning using the recently discovered keying included in the Storm bot. Lastly, Thorsten went into mitigation strategies, although no silver bullet emerged.
Jonathan Weiss talked about the security of Ruby on Rails (RoR) web applications. RoR is often behind new Web 2.0 and social networking sites, e.g. Twitter, and is used because it facilitates rapid design. Luckily for us, Jonathan demonstrated that RoR has a reasonable level of security out of the box, though there are certain facets of RoR that the programmer must take into account to avoid compromising his application. On the other hand, Jonathan also showed how RoR applications leak information that may be useable in an attack.
A related talk by ‘kuza55’ gave a broader overview of web application security issues, covering browser-specific attacks, e.g. those involving the browser cache and form pre-fill functionality. He also demonstrated various ways in which sessions can be manipulated so that the session ID remains fixed beyond the normal login period of the user (session fixation).
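The standard counter to session fixation is to discard any pre-login session ID and issue a fresh one at the moment of authentication, so that an ID planted by an attacker becomes worthless. A minimal sketch, assuming a simple in-memory session store (all names are illustrative, not from the talk):

```python
import secrets

sessions = {}  # hypothetical session store: session ID -> username

def login(presented_sid: str, user: str) -> str:
    """Authenticate a user, defeating session fixation.

    Whatever session ID the client presented (possibly one an attacker
    fixed in the victim's browser before login) is invalidated, and an
    unguessable replacement is issued.
    """
    sessions.pop(presented_sid, None)      # discard the pre-login session
    fresh_sid = secrets.token_urlsafe(32)  # cryptographically random ID
    sessions[fresh_sid] = user
    return fresh_sid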
Now that exploitable vulnerabilities in operating systems are becoming rare, we need to look elsewhere for vulnerabilities that may be used in attacks. Luke Jennings talked about Windows access tokens, which are used for single sign-on and other forms of authentication in Windows. He covered the use of these tokens for impersonation and privilege elevation. The main issue with tokens, he stated, is that a single system compromise can lead to the compromise of many other systems through their security tokens.
Not all presentations were of high quality, but one was particularly amusing in its ineptness. Marcell Dietl (aka ‘Skyout’) gave a talk about the Virus Underground (VX) and left us in stitches. His intention was to present a survey of the virus-writing scene, so he listed a few dead or semi-dead virus-writing groups and stressed over and over again the importance of e-zines. He tried to get scientific by describing virus properties, but missed many of the finer points of virus classification – perhaps because many of these emerged well before he was born. After he stated at the end of his talk that viruses are an art form and are peaceful, I came to the conclusion that his interest in viruses stems not from a philosophical position, but from a desire to align his persona with something he considers elite. What is sad is that he was not able to justify his interest in viruses in a convincing manner. I could probably have done a better job if motivated, despite not being a virus writer. Dietl did admit that the VX scene is in crisis and that there are perhaps only 50 of his sort left. We should probably consider it a good thing that the rest of those 50 are likely to be equally moronic.
Black Hat DC kicked off with a keynote speech from Jerry Dixon, former director of the National Cyber Security Division at the US Department of Homeland Security, and Andy Fried, former special agent to the US treasury, looking at the state of security on the Internet. While website defacement and malware for the sake of fame are behind us, we now have to contend with the much more severe threats coming from criminals. One of Dixon’s main gripes was that, when confronted with a threat, many organizations do not know exactly how their infrastructure works or where their data resides. They lack a map telling them how things are interconnected, which is often due to the way corporate divisions are managed. Each division has its own priorities and signing power and tends to grow its own infrastructure, so the company lacks an all-encompassing network cognisance. He touched on the subjects of P2P botnets, DDoS extortion and the fact that we make it so easy for ID thieves.
Fried described his activities defending the IRS and its customers from IRS phish. Unsurprisingly, the threats the IRS has to deal with encompass the entire palette: malware, 419 schemes, vishing (defined by Fried as pretext calling), tax rebate and e-file scams (e-file is the US electronic tax filing system). Fried’s main gripes were that the perpetrators are out of reach in Eastern Europe, that it takes too long to take a malicious site down and that anti-virus software just doesn’t work from his point of view. He also expressed the fear that backdoor systems may make phishing obsolete in the future.
Chuck Willis continued in the Web App track, talking about Cross Site Request Forgery (CSRF) attacks and defence. He started by stating that CSRF had not been seen in the field so far, though it remains very likely, and went over the mechanisms of the attack and the Netflix case study. The problem we face is that web programmers are not actively trying to prevent these attacks, though luckily the frameworks they use often correct such problems eventually. A big concern with CSRF is in forensics: CSRF can pollute the web history and cache both on the client and on the gateway, and a naive forensic analyst who does not consider the possibility of a CSRF attack may wrongly incriminate a suspect.
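For readers unfamiliar with the mechanics: a CSRF page simply auto-submits a form to the target site, which the browser sends along with the victim's ambient cookies, and the usual defence is a per-session token that a cross-site attacker can neither read nor forge. A minimal sketch of the token side – names and details are illustrative, not from the talk:

```python
import hashlib
import hmac
import secrets

# The attack itself needs nothing more than a hostile page such as:
#   <form action="https://bank.example/transfer" method="POST">
#     <input type="hidden" name="to" value="attacker">
#   </form>
#   <script>document.forms[0].submit()</script>
# The browser attaches the victim's cookies, so the request authenticates.

SERVER_SECRET = secrets.token_bytes(32)  # per-deployment secret (illustrative)

def csrf_token(session_id: str) -> str:
    """Derive a token bound to the session; embedded in every legitimate form."""
    return hmac.new(SERVER_SECRET, session_id.encode(), hashlib.sha256).hexdigest()

def check_token(session_id: str, submitted: str) -> bool:
    """Reject any state-changing request whose token does not match."""
    return hmac.compare_digest(csrf_token(session_id), submitted)
```

Since the attacker's page cannot read the victim's pages (same-origin policy) and cannot compute the HMAC without the server secret, the forged request fails the check.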
I switched over to the Wireless track to hear Adam Laurie talk about RFID. After covering various attacks against simple RFID tags that essentially store an ID and don’t offer a lot of resistance to attacks like cloning, he went on to cover smart RFID. These devices establish a proper dialogue with the reader and Laurie looked at RFID in passports as an example. While there is strong cryptographic authentication in these chips, they are not proof against brute force attacks given enough time with the passport. Equally worrying, even though there is no straightforward way of determining the nationality of the passport holder without authenticating, each country’s RFID implementation is unique, allowing Laurie to create reliable profiles of national passports.
Back in the Web App track, Nitesh Dhanjani and Billy Rios presented a look at phishers from the inside out. After seeding their search with Google’s safebrowsing blacklist, they examined various phishing sites and eventually were able to enter into a dialogue with some of the phishers. Perhaps it is not surprising to find that a good deal of the phishers they found were nearly clueless – comparable to the script kiddies of yesteryear. On examining various phishing kits, they discovered that many of these little-phishers were themselves being phished by the authors of the kits – the little-phishers would customize the standard settings but leave the block of obfuscated code untouched, with the result that the kits’ authors would be able to see anything that the little-phishers could see.
Only slightly off topic was Dhanjani and Rios’s rather disturbing report on ATM skimmers (hardware for obtaining ATM card credentials) and getting around browser blacklisting. They concluded with the advice that we can’t expect users to help us too much in this effort and that companies must be much more proactive.
Nathan McFeters talked about URI misuse in operating systems. URIs may lead to Cross Site Scripting attacks involving local data on the client PC, but could also be used in stack overflow attacks. Especially problematic are the browser/operating system (and sometimes browser-OS-browser) interactions when it comes to URI handling, in particular under Windows. There are also inconsistencies between URI and file extension handling that can lead to unwanted results.
Shreeraj Shah presented a collection of tools and techniques for web application analysis. He showed us various attack scenarios (several flavours of XSS as well as CSRF) and how to detect and analyse them with his toolset. He also went into the fairly new field of SOA web services analysis and mashups. While it is great to have such tools, he is the first to point out that one should always exercise due diligence and check the code by hand as well, as some things may be too heavily obfuscated for tools to handle.
The next day saw me in the Defense track learning from Tiller Beauchamp and David Weston about DTrace, a tracing framework for various UNIX-based systems. Originally pioneered by Sun for Solaris, it has recently been included in Mac OS X Leopard as well as some versions of Linux. It is a fantastic tool for tracing code across an entire system, not just in a single process (as, for example, with ktrace). The architecture includes low-level probes, high-level interfaces and a non-Turing-complete language for scripting simple tasks. While it also supports things like performance monitoring, it shines in reverse engineering code. This is a tool that will see much use if Mac OS X malware increases significantly in numbers, and it provided me with my first good reason to upgrade my Mac from Tiger to Leopard.
Brian Chess and Jacob West then talked about using taint propagation to detect security flaws in software during the software testing process. This I didn’t find too inspiring, mainly because I never get involved at that stage in the process and instead get presented with the problem after the fact. However, I agree with their thesis and the approach through taint propagation analysis, and would encourage any software development team to take their advice to heart.
We then got a survey of stack protection mechanisms from Shawn Moyer. For me, it was a great recap of the state of the subject, and it was encouraging to see that over the last seven years much of this technology has gone mainstream. Moyer took us through the defence measures and counter-attacks, showing that even though the field is mainstream, it is not completely mature yet.
Next I switched to the Hardware/Embedded track to see Felix ‘FX’ Lindner’s talk on Cisco IOS Forensics. Mostly people are just happy that their network infrastructure works, but a good case can be made that the routers we use could tell us more about the attacks going on than they currently do. FX decided to see if it would be possible to tickle more security-relevant information out of the Cisco IOS and homed in on a method of producing core dumps for off-router analysis on a regular basis. He also talked about the fact that routers are hackable, mainly because system admins rarely update the IOS. Certainly food for thought for all network administrators.
Back in the Defense track, we looked at measuring botnet sizes with Christopher Davis and David Dagon. Ignoring P2P botnet systems, their thesis is that we can use DNS metrics to measure botnet sizes. In case you want to try this at home, keep in mind that doing vast DNS cache queries is considered impolite at best, so a significant amount of their time was spent trying to get permission from the various ISPs.
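The talk did not hand out a recipe, but the core DNS trick – usually called cache snooping – can be sketched briefly. A query with the recursion-desired (RD) bit cleared asks a resolver to answer only from its cache, so a positive reply reveals that some client behind that resolver recently looked up the name; asked of many resolvers, this yields a lower bound on how widely a botnet's command-and-control domain is being resolved. The domain below is hypothetical:

```python
import struct

def snoop_query(domain: str, txid: int = 0x5353) -> bytes:
    """Build a DNS A query with the recursion-desired bit cleared.

    A caching resolver receiving this will answer only if the record is
    already in its cache, i.e. some client behind it resolved the name
    recently -- the signal used to estimate botnet populations.
    """
    # flags = 0x0000: standard query, RD bit off
    header = struct.pack(">HHHHHH", txid, 0x0000, 1, 0, 0, 0)
    qname = b"".join(bytes([len(label)]) + label.encode()
                     for label in domain.split("."))
    # terminate the name, then QTYPE=A (1), QCLASS=IN (1)
    return header + qname + b"\x00" + struct.pack(">HH", 1, 1)
```

As the speakers stressed, actually sending such probes at scale is impolite at best, so the sketch deliberately stops at packet construction rather than querying anyone's resolver.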
Black Hat DC was certainly of a different flavour from the Black Hat Japan event that I attended last year – the DC event was much more intense. There was a greater number of governmental delegates, as you would expect in DC, but some of these were not from the US. I met a few European officials whom I frankly wasn’t expecting to see at a stateside conference. Despite being in DC, there was no ‘corporate’ or ‘policy’ track as I would have expected – though that would probably not be quite true to the nature and origins of Black Hat.
One has to say that in order to get as much out of these conferences as possible, you need to do your homework upfront, but this could be said of many speciality conferences, including VB. The speakers at Black Hat and CCC assume you already know a lot about security and that you want them to take you to another level – and they certainly try.
After that, though, there is a cultural difference. CCC is less restrained, but you have a hard time finding the speakers after the talk to discuss the topic with them. Black Hat goes to great lengths to make the speakers accessible by providing a room after the talks where the speakers can be grilled. Also, with far fewer attendees (at Black Hat DC there were perhaps 150–300, while at CCC there were perhaps 2,000–3,000), it was much easier to locate the people you wanted to meet.
Another common aspect of Black Hat and CCC is that both embrace the philosophy of full disclosure. Not only will the speakers divulge all the details you need to replicate the attacks – sometimes even code – but the talks are often made available as audio, video and PDF after the conference. For the past few years the CCC has even streamed the event live. However, this is no replacement for attending the conferences and meeting the speakers and other delegates. There is nothing like discussing the finer points of CSRF over a pint of beer.