Posts Tagged ‘vulnerability assessment’

The Human Vulnerability

It seems to us that one of the biggest threats businesses face today is the socially augmented malware attack. These attacks have an extremely high degree of success because they target and exploit the human element. Simply put, it doesn’t matter how many protective technology layers you have in place if the people you’ve hired are putting you at risk, and they are.

Case in point: the “here you have” worm, which propagates predominantly via e-mail and promises the recipient access to PDF documents or even pornographic material. This specific worm compromised major organizations such as NASA, ABC/Disney, Comcast, Google, Coca-Cola, etc. How much money do you think those companies spend on security technology over a one-year period? How much good did it do at protecting them from the risks introduced by the human element? (Hint: none.)

Here at Netragard we have a unique perspective on the issue of malware attacks because we offer pseudo-malware testing services. Our pseudo-malware module, when activated, authorizes us to test our clients with highly customized, safe, controlled, and homegrown pseudo-malware variants. To the best of our knowledge we are the only penetration testing company to offer such a service (and no, we’re not talking about Meterpreter).

Attack delivery usually involves attaching our pseudo-malware to emails or binding it to PDF documents or other similar file types. In all cases we make it a point to pack (or crypt) our pseudo-malware so that it doesn’t get detected by antivirus technology (see this blog entry on bypassing antivirus). Once the pseudo-malware is activated, it establishes an encrypted connection back to our offices and provides us with full control over the victim computer. Full control means access to the software and hardware, including but not limited to the keyboard, mouse, microphone, and even the camera. (Sometimes we even deliver our attacks via websites like this one by embedding attacks into links.)

So how easy is it to penetrate a business using pseudo-malware? Well, in truth, it’s really easy. Just last month we finished delivering an advanced external penetration test for one of our more secure customers. We began crafting an email that contained our pseudo-malware attachment and accidentally hit the send button without any message content. Within 45 seconds of sending that otherwise blank email, we had 15 inbound connections from 15 newly infected client computer systems. That means that at least 15 employees tried to open our pseudo-malware attachment despite the fact that the email was blank! Imagine the degree of success that is possible with a well-crafted email.

One of the computer systems that we were able to compromise was running a service with domain admin privileges. We were able to use that computer system (an impersonation attack was involved) to create an account for ourselves on the domain (which happened to be the root domain). From there we were able to compromise the client’s core infrastructure (switches, firewalls, etc.) thanks to a password file that we found sitting on someone’s desktop (thank you for that). Once that was done, there really wasn’t much left for us to do; it was game over.

The fact of the matter is that there’s nothing new about taking advantage of people who are willing to do stupid things. But is it really stupidity, or is it just that employees don’t have a sense of accountability? Our experience tells us that in most cases it’s a lack of accountability that’s the culprit.

When we compromise a customer using pseudo-malware, one of the recommendations that we make to them is that they enforce policies by holding employees accountable for violations. We think that the best way to do that is to require employees to read a well-crafted policy and then take a quiz based on that policy. Once they pass the quiz, they should be required to sign a simple agreement stating that they have read the policy, understood the policy, and agree to be held accountable for any violations of it.

In our experience there is no better security technology than a paranoid human who is afraid of being held accountable for doing anything irresponsible (aka violating the policy). When people are held accountable for something like security, they tend to change their overall attitude toward anything that might negatively affect it. The result is a significantly reduced attack surface. If all organizations took this strict approach to policy enforcement, then worms like the “here you have” worm wouldn’t be nearly as successful.

Compare the cost and benefit of enforcing a strict, carefully designed security policy to the cost and benefit of expensive (and largely ineffective) security technologies. Which do you think will do a better job of protecting your business from real threats? It’s much more difficult to hack a network that is managed by people who are held accountable for its security than one that is protected by technology alone.

So in the end there’s really nothing special about the “here you have” worm. It’s just another example of malicious hackers exploiting the same human vulnerability with an ever-so-slightly different malware variant. Antivirus technology certainly won’t save you, and neither will other expensive technology solutions, but a well-crafted, cost-effective security policy just might do the trick.

It’s important to remember that well-written security policies don’t only affect human behavior; they generally result in better management of systems, which translates to better technological security. The benefits are significant, and by comparison the overall cost is not.


Professional Script Kiddies vs Real Talent

The Good Guys in the security world are no different from the Bad Guys; most of them are nothing more than glorified Script Kiddies. The fact of the matter is that if you took all of the self-proclaimed hackers in the world and subjected them to a litmus test, very few would pass as actual hackers.

This is true for both sides of the so-called Black and White hat coin. In the Black Hat world, you have script kiddies who download programs written by other people and then use those programs to “hack” into networks. The White Hats do the exact same thing; only they buy the expensive tools instead of downloading them for free. Or maybe they’re actually paying for the pretty GUI, who knows?

What is pitiable is that in just about all cases these script kiddies have no idea what the programs actually do. Sometimes that’s because they don’t bother to look at the code, but most of the time it’s because they just can’t understand it. If you think about it, that is scary. Do you really want to work with a security company that launches attacks against your network with tools that it does not fully understand? I sure wouldn’t.

This is part of the reason why I feel it is so important for any professional security services provider to maintain an active research team. I’m not talking about doing market research and pretending that it’s security research, like so many security companies do. I’m talking about doing actual vulnerability research and exploit development to help educate people about risks for the purposes of defense. After all, if a security company can’t write an exploit, then what business does it have launching exploits against your company?

I am very proud to say that Everything Channel recently released the 2010 CRN Security Researchers list and that Netragard’s Kevin Finisterre was on it. The other people included on the list are people for whom I have the utmost respect. As far as I am concerned, these are some of the best guys in the industry (clearly the list is not all-inclusive and in no way includes everyone who deserves credit for their contributions and/or talent):

  • Dino Dai Zovi
  • Kevin Finisterre
  • Landon Fuller
  • Robert Graham
  • Jeremiah Grossman
  • Larry Highsmith
  • Billy Hoffman
  • Mikko Hypponen
  • Dan Kaminsky
  • Paul Kocher
  • Nate Lawson
  • David Litchfield
  • Charles Miller
  • Jeff Moss
  • Jose Nazario
  • Joanna Rutkowska

In the end I suppose it all boils down to what the customer wants. Some customers want to know their risks; others just want to put a check in the box. For those who want to know what their real risks are, you’ve come to the right place.


Social Engineering: It’s Nothing New

With all the recent hype about Social Engineering, we figured we’d chime in and tell people what’s really going on. The fact is that Social Engineering is nothing more than a Confidence Trick carried out by a Con Artist. The only difference between the two terms is that Social Engineering is predominantly used in relation to technology.

So what is it really? Social Engineering is the act of exploiting a person’s natural tendency to trust another person or entity. Because the vulnerability exists within people, there is no truly effective method for remediation. That is not to say that you cannot protect your sensitive data, but it is to say that you cannot always prevent your people or even yourself from being successfully conned.

The core ingredients required to perform a successful confidence trick are no different today than they were before the advent of the Internet. The con artist must have the victim’s trust, and must then trick the victim into performing an action or divulging information. The Internet certainly didn’t create the risk, but it does make it easier for the threat to align with the risk.

Before the advent of the Internet, the con artist (threat) needed to contact the victim (risk) by telephone, in person, through snail mail, etc. Once contact was made, a good story needed to be put into place and the victim’s trust needed to be earned. That process could take months or even years, and even then success wasn’t guaranteed.

The advent of the Internet provided the threat with many more avenues through which it could successfully align with the risk. Specifically, the Internet enables the threat to align with hundreds or even thousands of risks simultaneously. That sort of shotgun approach couldn’t be done before, and it significantly increases an attacker’s chances of success. One of the most elementary examples of this shotgun approach is the email-based phishing attack.

The email-based phishing attack doesn’t earn the trust of its victims; it steals trust from existing relationships. Those relationships might exist between the victim and their bank, a family member, a co-worker, an employer, etc. In all instances the email-based phishing attack hinges on the attacker’s ability to send emails that look like they are coming from a trusted source (exploitation of trust). From a technical perspective, email spoofing and phishing are trivial (click here for a more sophisticated attack example).

The reason an attacker can steal trust from a victim instead of earning it is that “face to face” trust isn’t portable to the Internet. For example, most people trust their spouse. Many people talk to their spouse on AIM, MSN, Yahoo, Skype, etc. while at work. How do they know that they are really chatting with their spouse and not a hacker?

So how do you protect against the social risks and prevent the threat from successfully aligning with them? The truth is that you can’t. Con artists have been conning people since the dawn of man. The better question is: what are you doing to protect your data from the hacker who does penetrate your IT infrastructure?


The ROI of Good Security

The cost of good security is a fraction of the cost of damages that usually result from a single successful compromise. When you choose the inexpensive security vendor, you are getting what you pay for. If you are looking for a check in the box instead of good security services, then maybe you should re-evaluate your thinking because you might be creating a negative Return on Investment.

Usually a check in the box means that you comply with some sort of regulation, but that doesn’t mean that you are actually secure. As a matter of fact, almost all networks that contain credit card information and are successfully hacked are PCI compliant (a real example). That goes to show that compliance doesn’t protect you from hackers; it only protects you from auditors and the fines that they can impose. What’s more, those fines are only a small fraction of the cost of the damages that can be caused by a single successful hack.

When a computer system is hacked, the hacker doesn’t stop at one computer. Standard hacker practice is to perform Distributed Metastasis and propagate the penetration throughout the rest of the network. This means that within a matter of minutes the hacker will likely have control over most or all of the critical aspects of your IT infrastructure and will also have access to your sensitive data. At that point you’ve lost the battle… but you were compliant, you paid for the scan, and now you’ve got a negative Return on that Investment (“ROI”).

So what are the damages? It’s actually impossible to determine the exact cost in damages that results from a single successful hack, because it’s impossible to be certain of the full extent of the compromise. Nevertheless, here are some of the areas to consider when attempting to calculate damages:

  • Man hours to identify every compromised device
  • Man hours to reinstall and configure every device
  • Man hours required to check source code for malicious alterations
  • Man hours to monitor network traffic for signs of malicious traffic or access
  • Man hours to educate customers
  • Penalties and fines
  • The cost of downtime
  • The cost of lost customers
  • The cost of a damaged reputation
  • etc.

(The damages could *easily* exceed half a million dollars on a network of only ~50 computers.)

Now let’s consider the Return on Investment of *good* security. An Advanced Penetration Test against a small IT infrastructure (~50 computers in total) might cost somewhere around $16,000 to $25,000 for an 80-hour project. If that service is delivered by a quality vendor, then it will enable you to identify and eliminate your risks before they are exploited by a malicious hacker. The ROI of the quality service would be equal to the cost in damages of a single successful compromise minus the cost of the services. Not to mention you’d be compliant too…

(Note: the actual cost of services varies quite a bit depending on what needs to be done, etc.)
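To make that return-on-investment arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python. It simply restates the illustrative figures used in this post (roughly half a million dollars in damages for a ~50-computer network, $16,000 to $25,000 for an 80-hour engagement, and a $500 scan); the exact numbers are assumptions for illustration, not quotes.

```python
# Back-of-the-envelope ROI sketch using the illustrative figures from this post.
# Every number below is a hypothetical assumption, not a quote.

estimated_breach_damages = 500_000  # rough damages for a ~50-computer network (see the list above)
quality_test_cost = 25_000          # high end of the ~$16,000 to $25,000 advanced penetration test
checkbox_scan_cost = 500            # automated "check in the box" vulnerability scan

# If the quality engagement lets you find and fix your risks before a hacker exploits them,
# the return is the damages you avoided minus what you paid for the service.
roi_quality = estimated_breach_damages - quality_test_cost

# If the cheap scan leaves you just as vulnerable, you eventually pay for the scan *and* the breach.
roi_checkbox = -(checkbox_scan_cost + estimated_breach_damages)

print(f"Quality testing ROI:   ${roi_quality:,}")    # $475,000
print(f"Checkbox scanning ROI: ${roi_checkbox:,}")   # $-500,500
```

Even at the high end of the service cost, the avoided damages dwarf the fee, while the cheap scan simply adds its own cost on top of the eventual breach.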

So why is it that some vendors will do this work for $500.00 or $2,000.00? It’s simple: they are not delivering the same quality of service as the quality vendor. When you pay $500.00 for a vulnerability scan, you are paying for something that you could do yourself for free (go download Nessus). Nevertheless, when you pay $500.00 you are really only paying for about 5 minutes of manual labor; the rest of the work is automated and done by the tools. (If you broke that down to an hourly rate, you’d be paying something like $6,000.00 an hour, since you’re paying $500.00 per 5 minutes.) In the end you might end up with a check in your compliance box, but you’ll still be just as vulnerable as you were in the beginning.
