Yesterday I attended BASC 2011 and had a pretty good time. There were some particularly interesting talks and demos, and I'd be lying if I said I didn't learn a thing or two about web and mobile application security. I'm far from an expert, of course, but it's always good to immerse myself in some knowledge from time to time in the hopes that some useful bits will stick.
In an effort to retain some of it, I figured I'd write a short post on what I experienced while I was there...
8:30 - Breakfast
Sponsored by Rapid7. Good selection of bagels and spreads, lots to drink. Good stuff.
9:00 - Keynote session by Rob Cheyne.
Great stuff. Not only was he an engaging speaker, but he had a lot of good insights on information security in the enterprise. Not just from a technical perspective, but more importantly from a corporate-mindset perspective. (On a side note, he looked a lot like the bad guy from Hackers. Kind of funny.)
My favorite part was a story he told about groupthink. There was an experiment conducted a while back on primate social behavior involving 5 monkeys in a closed cage. (Not a small cage or anything like that, there was plenty of room.) In the center of the cage was a ladder and at the top was a banana. Naturally, the monkeys tried to climb the ladder to retrieve it. But any time one of them was just about to reach the banana, they were all sprayed with ice cold water. So very quickly the monkeys learned that trying to get the banana was bad.
One of the monkeys was removed and replaced with a new monkey. From this point on the water was never used again. Now, the new monkey naturally tried to reach the banana. But as soon as he began trying, the other monkeys started beating on him to stop him. He had no idea what was going on, but they had learned that if anybody went for the banana then they'd all be sprayed. The new monkey never knew about the water, but he did quickly learn that reaching for the banana was bad.
One by one, each monkey was replaced. And each time the results were the same. The new monkey would try to get the banana and the others would stop him. This continued until every monkey had been replaced. So, in the end, there was a cage full of monkeys who knew that trying to get the banana was bad. But none of them knew why. None of them had experienced the cold water. All they knew was that "this is just how things are done around here." (Reminds me of the groupthink at a financial institution for which I used to work. They've since gone bankrupt and no longer exist as a company.)
It's a great story for any corporate environment, really. How many places have you worked where that story can apply? From an information security perspective it's particularly applicable. Most enterprises approach security in a very reactive manner. They get burned by something once, they create some policy or procedure to address that one thing, and then they go on living with that. Even if that one thing isn't a threat anymore, even if they didn't properly address it in the first place, even if the threat went away on its own but they somehow believe that their actions prevent it... The new way of doing things gets baked into the corporate culture and becomes "just the way we do things here."
A number of great quotes and one-liners came from audience participation as well. One of the attendees said, "Security is an illusion, risk is real." I chimed in with, "The system is only as secure as the actions of its users." All in all it was a great talk. If for no other reason than to get in the right mindset, enterprise leaders (both technical and non-technical) should listen to this guy from time to time.
10:00 - "Reversing Web Applications" with
Andrew Wilson from
Trustwave SpiderLabs.
Pretty good. He talked about information gathering for web applications, reverse engineering to discern compromising information, etc. Not as engaging, but actually filled with useful content. I learned a few things here that seem obvious in terms of web application security in hindsight, but sometimes we just need someone to point out the obvious for us.
11:00 - "The Perils of JavaScript APIs" with
Ming Chow from the
Tufts University Department of Computer Science.
This one was... an interesting grab bag of misinformation. I don't know if it was a language barrier, though the guy seemed to have a pretty good command of English. But the information he was conveying was in many cases misleading and in some cases downright incorrect.
For example, at one point he was talking about web workers in JavaScript. His PowerPoint slide noted the restriction that web workers will only run if the page is served over HTTP. That is, if you just open the file locally with "file://" then they won't run. Seems fair enough. But he said it as "web workers won't run on the local file system, they have to be run from the server." Somebody in the group asked, "Wait, do they really run on the server? That is, does the page actually kick off a server task for this? It doesn't run on the local computer?" He responded with "Um, well, I don't really know. But I do know that it won't run from the local file system, it runs from the server." Misleading at best, downright incorrect at worst. (For the record: web workers run entirely in the client's browser. The restriction is about where the page is loaded from, not where the code executes.)
He spent his time talking about how powerful JavaScript has become and that now with the introduction of HTML5 the in-browser experience is more powerful than ever. He said that it's come a long way from the days of simple little JavaScript animations and browser tricks and now we can have entire applications running just in the browser. During all of this, he kept indicating that it's become "too powerful" and that it introduces too many risks. Basically he was saying that client-side code can now become very dangerous because of these capabilities and if the application is client-side then anybody can hack into it.
At one point he said, "Now we can't trust data from the client." Now? Seriously? We've never been able to trust data from the client. This is nothing new. Is this guy saying that he's never felt a need to validate user input before? That's a little scary. Most of the insights he had on the state of JavaScript were insights from a couple of years ago. Most of his opinions were misguided. Most of his information was incorrect. I shudder to think what he's doing to his students at Tufts.
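For what it's worth, the defense against untrustworthy client data hasn't changed since long before HTML5: re-validate everything on the server, no matter what the browser-side JavaScript already checked. A minimal sketch in plain Node-style JavaScript (the field names and limits here are hypothetical, just to illustrate the principle):

```javascript
// Server-side validation sketch: never trust what the client sends,
// even if the page already validated it in JavaScript. An attacker
// can skip the form entirely and talk to the server directly.
// Field names and business rules below are made up for illustration.

function validateTransfer(input) {
  const errors = [];

  // Re-check types and ranges: the client can send anything at all,
  // not just what the form would have allowed.
  const amount = Number(input.amount);
  if (!Number.isFinite(amount) || amount <= 0 || amount > 10000) {
    errors.push("amount must be a number between 0 and 10000");
  }

  // Whitelist allowed characters rather than trying to blacklist
  // "dangerous" ones -- the whitelist fails safe.
  if (typeof input.account !== "string" ||
      !/^[A-Za-z0-9-]{1,32}$/.test(input.account)) {
    errors.push("account must be 1-32 alphanumeric characters");
  }

  return { ok: errors.length === 0, errors };
}

// A well-behaved browser form only ever sends the first shape;
// a hostile client can send the second.
console.log(validateTransfer({ amount: "50", account: "acct-123" }));
console.log(validateTransfer({ amount: "1e99", account: "x' OR 1=1--" }));
```

The point is simply that the server owns the rules. Client-side validation is a UX convenience, not a security boundary, and that was just as true before HTML5 as after.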
(By the way, the JavaScript image slideshow on the Tufts University Department of Computer Science homepage is broken at the time of this writing, at least in Safari. Loading the images blocks the loading of the rest of the page until they're complete; the images cycle through very rapidly (within about a second total) on the initial load, and then they no longer cycle at all. I wonder if this guy wrote it.)
12:00 - Lunch
Sponsored by Source. Sandwiches and wraps. Tons of them. Not bad, pretty standard. Good selection, which is important. And there were plenty left for people to snack on throughout the rest of the day.
1:00 - "
OWASP Mobile Top 10 Risks" with
Zach Lanier of
Intrepidus Group.
Good speaker, good information. The talk basically covered the proposed OWASP top ten list of mobile security threats, which people are encouraged to evaluate and propose changes to. He explained the risks and the reasons behind them very well. I don't know much about mobile development, but this list is exactly the kind of thing that should be posted on the wall next to any mobile developer. Any time you write code that's going to run on a mobile platform, refer back to the list and make sure you're not making a mistake that's been made before.
2:00 - "Don't Be a Victim" with
Jack Daniel from
Tenable.
This man has an epic beard. So, you know, that's pretty awesome. He's also a fun speaker and makes the audience laugh and all that good stuff. But I don't know, I didn't really like this presentation. It was lacking any real content. To his credit, he warned us about this at the beginning. He told us the guy in the other room had more content and that this talk was going to be very light-hearted and fun. I understand that, I really do. But I think there should at least be some content. Something useful. Otherwise don't bother scheduling a room for it; just make it something off to the side for people to take the occasional break.
The whole talk was delivered through metaphor. I can see what he was trying to get at, but he never wrapped it up. He never brought the metaphor back to something useful. Imagine if Mr. Miyagi never taught Daniel how to fight, he just kept having him paint fences and wax cars. The metaphor still holds, and maybe Daniel will one day understand it, but the lesson kind of sucks. The premise was stretched razor-thin. The entire hour felt like an introduction to an actual presentation.
It was mostly a slideshow of pictures he found on the internet, stories about his non-technical exploits (catching snakes as a kid in Texas, crap like that), references to geek humor, and the occasional reference to the fact that he was wearing a Star Trek uniform shirt during the presentation. Was he just showing off his general knowledge and his geekiness?
Don't get me wrong. He seemed like a great guy. He seemed fun. I'm sure he knows his stuff. I'm sure he has plenty of stories about how he used to wear an onion on his belt. But this seemed like a waste of time.
3:00 - "Binary Instrumentation of Programs" with
Robert Cohn of Intel.
This was one of the coolest things I've ever seen. He demonstrated a tool he wrote called Pin, which performs dynamic binary instrumentation: it can rewrite and inject instructions in a running binary. He didn't write it for security purposes, but its implications in that field are pretty far-reaching. (Its implications in aspect-oriented programming also came up, which is certainly of more interest to me. Though this is a bit more machine-level than my clients would care to muck with.)
A lot of the talk was over my head when he got into binaries, instruction sets (the guy is a senior engineer at Intel, I'm sure he knows CPU instruction sets like the back of his hand), and so on. But when he was showing some C++ code that uses Pin to inject instructions into running applications, that's where I was captivated. Take any existing program, redefine library calls (like malloc), add pre- and post-instruction commands, etc. Seriously bad-ass.
Like I said, the material doesn't entirely resonate with me. It's a lot closer to the metal than I generally work. But it was definitely impressive and at the very least showed me that even a compiled binary isn't entirely safe. Instructions can be placed in memory from anything. (Granted, I knew this of course, but you'd be surprised how many times a client will think otherwise. That once something is compiled it's effectively sealed and unreadable. This talk makes a great example and demonstration against that kind of thinking.)
4:00 - "Google & Search Hacking" with
Francis Brown of
Stach & Liu.
Wow. Just... wow. Great speaker, phenomenal material. Most of the time he was mucking around in a tool called Search Diggity. We've all seen Google hacking before, but not like this. In what can almost be described as "Metasploit style," they aggregated all the useful tools of Google hacking and Bing hacking into one convenient package. And "convenient" doesn't even begin to describe it.
The first thing he demonstrated was hacking into someone's Amazon EC2 instance. In under 20 seconds. He searched for a specific regular expression via Google Code, which found tons of hits. Picking a random one, he was given a publicly available piece of source code (from Google Code, Codeplex, GitHub, etc.) which contained hard-coded authentication values for someone's Amazon EC2 instance. He then logged into their EC2 instance and started looking through their files. One of the files contained authentication information for every administrative user in that company.
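The search itself is just pattern matching over publicly indexed code. A toy version of the idea in JavaScript, run against a single in-memory string instead of the whole internet (the "leaked" credentials below are the fabricated examples AWS uses in its own documentation, and the patterns are the commonly published shapes of AWS keys, used here purely for illustration):

```javascript
// Toy credential scanner: the same idea Search Diggity applies at
// internet scale via Google/Bing search results, applied locally.
// The "leaked" key below is AWS's own fabricated documentation example.

const sourceFile = `
// config.js -- imagine someone checked this into a public repo
const config = {
  awsAccessKeyId: "AKIAIOSFODNN7EXAMPLE",
  awsSecretAccessKey: "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",
};
`;

// AWS access key IDs have a well-known shape: "AKIA" followed by 16
// uppercase alphanumerics. Secret keys are 40 base64-style characters.
const patterns = {
  accessKeyId: /AKIA[0-9A-Z]{16}/g,
  secretKey: /[A-Za-z0-9/+=]{40}/g,
};

function findLeaks(text) {
  const hits = [];
  for (const [name, re] of Object.entries(patterns)) {
    for (const match of text.match(re) || []) {
      hits.push({ type: name, value: match });
    }
  }
  return hits;
}

console.log(findLeaks(sourceFile));
```

Swap the local string for a code-search API and a database, and you can see how quickly "a specific regular expression" turns into a list of live credentials. That's the whole trick: the data was already public; the tool just makes it findable.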
Seriously, I can't make this stuff up. The crowd was in awe as he jumped around the internet randomly breaking into things that are just left wide open through sloppy coding practices. People kept asking if it was live; they just couldn't believe it.
One of the questions was kind of a moral one. Someone asked why he would help create something like this when its only use is for exploitation. He covered that very well: that's not its only use. The link above to the tool can also be used to find their "defense" tools, which use the same concepts. Together they provide a serious set of tools for someone to test their own domains, monitor the entire internet for exploits to their domains (for example, if an employee or a contractor leaks authentication information, this would find it), monitor the entire internet for other leaked sensitive data, etc. By the end of the talk he was showing us an app on his iPhone which watches a filtered feed that their tool produces by monitoring Google/Bing searches and maintaining a database of every exploit it can find on the entire internet. Filter it for the domains you own and you've got an IT manager's dream app.
Another great thing about this tool is that it doesn't just look for direct means of attack. Accidental leaks are far more common and more of a problem, and this finds them. He gave one example of a Fortune-100 company that had a gaping security hole that might otherwise have gone unnoticed. The company owned tons of domains within a large IP range, and he was monitoring it. One of the sites he found via Google for that IP range stuck out like a sore thumb. It was nothing business-related; it was a personal site for a high school reunion for some class from the 70s.
Apparently the CEO of this company (I think he said it was the CEO, it was definitely one of the top execs) was using the company infrastructure to host a page for his high school reunion. Who would have ever noticed that it was in the same IP range as the rest of the company's domains? Google notices, if you report on the data in the right way. Well, apparently this site had a SQL injection vulnerability (found by a Google search for SQL error message text indexed on that site). So, from this tiny unnoticeable little website, this guy was able to exploit that vulnerability and gain domain admin access to the core infrastructure of a Fortune-100 company. (Naturally he reported this and they fixed the problem.)
The demo was incredible. The tools demonstrated were incredible. The information available on the internet is downright frightening. Usually by this time in the day at an event like this, people are just hanging around for the raffles and giveaways and are tuning out everything else. This presentation was the perfect way to end the day. It revitalized everyone's interest in why we were all at this event in the first place. It got people excited. Everybody should see this presentation. Technical, non-technical, business executives, home users, everybody.
5:00 - Social Time
Sponsored by Safelight. Every attendee got two free beers (or wine, or various soda beverages) while we continued to finish the leftovers from lunch. And not crappy beers, either. A small but interesting assortment of decent stuff, including Wachusett Blueberry Ale, which tastes like blueberry pancake syrup but not as heavy.
5:30 - Wrap-Up
OWASP raffled off some random swag, which is always cool. One of the sponsors raffled off an iPad, which is definitely the highlight of the giveaways. For some reason, though, the woman who won it seemed thoroughly unenthused. What the hell? If she doesn't want it, I'll take it. My kids would love an iPad.
6:00 - Expert Panel
Admittedly, I didn't stay for this. I was tired and I wanted to get out of there before everybody was trying to use the bathroom, the elevators, and the one machine available for paying for garage parking. So I left.
All in all, a great little conference. I'm definitely going to this group's future events, and I'd love to work with my employer on developing a strategic focus on application security in our client projects. (Again, I'm no expert. But I can at least try to sell them on the need to hire experts and present it to clients as an additional feature, which also means more billable hours. So it's win-win.)
One thing I couldn't help but notice throughout the event was a constant series of issues with the projectors. This is my third or fourth conference held at a Microsoft office and this seems to be a running theme. It always takes a few minutes and some tweaking to get projectors in Microsoft offices to work with Windows laptops. Someday (hopefully within a year or two) I'm going to be speaking at one of these local conferences (maybe Code Camp or something of that nature), and I'm going to use my Mac. Or my iPad. Or my iPhone. And it's going to work flawlessly. (Note that one person was using a Mac, and the projector worked fine, but he was using Microsoft PowerPoint on the Mac and that kept failing. I'll be using Keynote, thank you very much.)