Archive for January, 2009

A Quality Penetration Test

Someone on the pen-testing mailing list asked me to write an entry about the difference between vulnerability scanning (and services that rely on it) and Real Time Dynamic Testing™. This entry is a sanitized description of a real Advanced External Penetration Test that our team delivered to a customer. Many details were left out, and our customer’s information was removed or altered to protect their identity. Our customer did approve this entry.

Our team (Netragard, LLC.) was hired to perform an Advanced External Penetration Test as a follow-up engagement to a pen-test that was delivered by a different vendor. This might seem unusual, but we get these types of engagements more and more frequently. This test was no different from most of them: we found significant exploitable vulnerabilities that the other vendor missed entirely, which unfortunately seems all too common.

When we deliver Advanced services we expose our customers to a specific type of threat. Our goal is to create a threat that is a few levels higher than what they would likely face in the real world. Testing our customers at a lesser threat level would do nothing to help them defend against the actual threat. Our services are not the product of automated vulnerability scanners and scripts; they are the product of human talent.

During this particular engagement we were authorized to perform Distributed Metastasis, Covert Testing, Social Engineering, Malware Deployment, ARP Poisoning, etc. All targets were also authorized and included Web Servers that were hosted by third parties, Web Servers that were hosted locally, VPN end points, FTP servers, IDS systems, DNS servers, Secure Email Servers like tumbleweed and so on. We were not given a list of IP addresses to target, we had to identify them and request approval.

We began the engagement by performing covert social and technical reconnaissance. Reconnaissance is the military term for the collection of intelligence about an enemy prior to attacking the enemy; in this case our customer was the “enemy”. Our philosophy is that we cannot produce an accurate threat level without first understanding some details about our target’s political structure, social behavior, and technology infrastructure. We might not use all of the information that we collect while testing, but more often than not it provides us with a good idea of what will be effective, and what will not.

During reconnaissance we focused on two separate target groups. The first target group was the social structure of the client’s employees that we felt was of interest. As such we collected information about those employees that included office locations, telephone extensions, email addresses, relationships to other employees, friends outside of work, etc. Our secondary set of targets for reconnaissance was technical. Those targets included the identification of servers used by the client, vendor identification, partner identification, the identification of IP addresses belonging to the client, the internal IP addressing scheme, operating system information, patch frequency information, etc.

We were able to use the information collected during reconnaissance to begin performing vulnerability identification through analysis. Because this service was an advanced service and required covert testing, vulnerability identification was mostly done with manual testing (Real Time Dynamic Testing™) and during reconnaissance. As testing progressed we increased our noise level until we received notification from the customer that we’d been detected. This enabled us to identify what level of testing was considered “flying below the radar” and what level was “tagged”. (Knowing this enables us to help our customers retune their IDS technologies so that they are more difficult to evade. In most cases IDS technologies are not tuned properly, and yes this includes IPS and Correlation Systems too.)

Once we were finished with vulnerability identification we built a target matrix that was organized by probability of penetration. This matrix is used as a guide for the team and enables us to test the most probable points of entry first, and the least probable points of entry last. In the case of this particular customer we identified three probable points of entry along with a few other basic vulnerabilities like Cross-site Scripting, etc. (While Cross-site Scripting is useful for performing Social Engineering based attacks, we won’t go into the details about how we used it here.) The other vendor, even with basic scanning services, should have detected most, if not all, of these vulnerabilities, but they didn’t.

The first point of attack that we focused on was the customer’s corporate website. This website was being hosted by a third party and was using a Content Management System (“CMS”) that was created by a vendor that we’ll call the Noname Group. This particular CMS was written entirely in PHP, was closed source, and had no security functionality to speak of. There were multiple points where unchecked variables were passed directly to SQL statements or other critical internal application components. We were able to use those unchecked variables to penetrate into our customer’s Web Server and take control of it.
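The unchecked-variable flaw class is easiest to see side by side. The sketch below is illustrative only (Python with an in-memory sqlite3 database standing in for the PHP/MS-SQL stacks involved; the schema, table, and values are invented, not taken from the engagement), contrasting raw string concatenation with a parameterized query:

```python
import sqlite3

# Toy schema standing in for the CMS database; names are invented.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice'), (2, 'bob')")

def lookup_unsafe(user_id: str):
    # Unchecked input concatenated straight into SQL -- the pattern
    # we kept finding in the CMS.
    return conn.execute(
        f"SELECT name FROM users WHERE id = {user_id}").fetchall()

def lookup_safe(user_id: str):
    # Parameterized query: the driver treats the input as data, not SQL.
    return conn.execute(
        "SELECT name FROM users WHERE id = ?", (user_id,)).fetchall()

# A crafted "id" turns the unsafe query into a full table dump.
print(lookup_unsafe("1 OR 1=1"))  # both rows leak
```

The same crafted input bound as a parameter matches nothing, which is the whole point of parameterization.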

Upon accessing that web server we found customer data that was stored in the database in clear text. This information contained names, addresses, account numbers, social security numbers, etc. In some cases the information was from users requesting information, in other cases it was users looking to sign up. As a proof of concept we wrote a ruby script that would automatically dump the contents of the database when executed. That script was submitted to the customer. Because this server was not hosted within our Customer’s IT Infrastructure it did not provide us with a platform from which we could perform Distributed Metastasis.

The next target lined up for testing was another Web Application, this time hosted from within our customer’s infrastructure. Again, the application suffered from a basic SQL Injection vulnerability that could be triggered by a back-tick. We used the vulnerability to fingerprint the application’s backend database and learned that it was an MS-SQL database. We also learned that it was hosted on a separate server from the Web Server. We then tested for “xp_cmdshell” access and found that the “sa” user had no password set; as a result we could execute arbitrary commands against the database server with administrator privileges.
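Error-based fingerprinting of a backend can be sketched roughly as follows. This is a simplified illustration: the signature list is a tiny invented sample (real testing uses many more fragments against live HTTP responses), though the quoted error strings are typical of their respective databases.

```python
# Known database error fragments; seeing one in the response to a
# metacharacter probe (back-tick, quote, etc.) suggests unsanitized
# input is reaching the backend, and identifies which backend.
ERROR_SIGNATURES = {
    "mssql": ["Unclosed quotation mark", "ODBC SQL Server Driver"],
    "mysql": ["You have an error in your SQL syntax"],
    "postgres": ["unterminated quoted string"],
}

def fingerprint_backend(probe_response: str):
    """Return the likely backend database, or None if nothing matched."""
    for backend, fragments in ERROR_SIGNATURES.items():
        if any(f in probe_response for f in fragments):
            return backend
    return None

# Simulated response to a request with a stray quote appended:
page = "500 Error: Unclosed quotation mark after the character string ''."
print(fingerprint_backend(page))  # -> mssql
```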

Once we gained control over the database server we began to examine other systems within proximity to our new point of control (Distributed Metastasis). That was when we learned that we’d compromised a key server that was deep within the customer’s IT Infrastructure and had clear access to other critical systems. We also noticed that the server that we were controlling contained multiple databases that contained a wide variety of highly sensitive information including customer banking information, social security numbers, etc. In addition, while performing network probes we identified a secondary database server. Ironically, this second database server was running on the web server with the SQL Injection vulnerability that we’d just attacked.

When we tried to connect to the second database server from the internal server we were unable to access it, because this time the “sa” password was set and we didn’t know what it was set to. We did however know which system accessed that database server, as a result of the Social Engineering efforts that were mixed into our Social Reconnaissance. The system with access was also the third system in our targeting matrix and contained another vulnerable Web Application. This time, due to the configuration of the application, SQL Injection capabilities were limited. We did however manage to find an arbitrary file read vulnerability and were able to use it to read the application’s configuration file, which contained the “sa” password.

This enabled us to go back to the previously inaccessible database and access it using the “sa” password. This also gave us access to the xp_cmdshell function, which in turn allowed us to execute arbitrary commands against the system. At this point in the test we’d managed to penetrate into both the DMZ and the corporate LAN, which also allowed us to connect to any other system within proximity without issue. In other words, there was no internal segmentation in the form of VLANs or physical isolation. The networks were flat.

The server that we penetrated in the LAN contained a SAM file. We were able to crack 90% of the passwords in that SAM file with rainbow tables, including the Administrator password. Once we had that password we were able to use RDP to access the Active Directory server and it was technically game over. If we had not discovered the SAM we were prepared to perform ARP Poisoning to collect data and possibly in-transit credentials. Our penetration of the AD server concluded the penetration test.
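Unsalted SAM hashes fall quickly because the hash-to-plaintext mapping can be precomputed. Here is a toy version of that idea (MD5 over an invented wordlist so the sketch runs anywhere; real SAM files hold LM/NT hashes, and real rainbow tables compress the table with hash chains rather than storing every pair):

```python
import hashlib

def toy_hash(password: str) -> str:
    # Stand-in for the unsalted LM/NT hashes stored in a SAM file.
    # Because there is no per-user salt, one precomputed table works
    # against every account on every machine.
    return hashlib.md5(password.encode()).hexdigest()

wordlist = ["password", "letmein", "Summer2008", "admin123"]

# Precompute hash -> plaintext once, look up stolen hashes instantly.
table = {toy_hash(w): w for w in wordlist}

def crack(stolen_hash: str):
    return table.get(stolen_hash)  # None if the hash isn't in the table

print(crack(toy_hash("letmein")))  # recovered with a single lookup
```

Per-user salting defeats this approach, which is why unsalted password stores are such soft targets.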

It is important to note that this is not a complete description of all of the testing that we did for the customer. As with any engagement we produce a deliverable that outlines all discovered points of risk with their respective methods for remediation. In this particular case our report identified 47 risks and provided 47 methods for remediation. Remember that this customer had just completed a penetration test from a different vendor; how did that vendor miss 47 risks? Their services certainly did not protect our customer from hackers.


Network Vulnerability Scanning Doesn’t Protect You

Vulnerability scanning can have a detrimental impact on the security posture of your IT infrastructure if used improperly. This negative impact is due to a perception issue that has been driven by the vendors who sell vulnerability scanning services or the vulnerability scanners themselves. The hard facts prove that vulnerability scanners cannot protect your IT Infrastructure from malicious hackers. (My team penetrates “scanned” networks on a regular basis during customer engagements.) That is not to say that vulnerability scanners are useless, but it is to say that people need to readjust their perception of what vulnerability scanning really is.

While there are various types of vulnerability scanners, they all suffer from the same disease that most security technologies suffer from. That disease is that they are reactive to hackers and will never be proactive. The fact is that vulnerability scanners cannot detect vulnerabilities unless someone has first identified the vulnerability and created a signature for its detection. This process can take quite a while, and it is often not an ethical one. So here is how it works…

A hacker decides to perform research against a common technology like your firewall. That hacker might spend minutes, months, or even years doing research just for the purpose of identifying an exploitable security vulnerability. Once that vulnerability is identified, the hacker has an ethics based decision to make. Does he notify the vendor of his discovery and release a formal advisory, or does he use his discovery to hack networks, steal information, and profit?

If the hacker decides to notify the vendor and release an advisory, then there is usually a wait period of 1-3 months before the vendor releases a patch. This lag time means that the vendor’s customers will remain vulnerable for at least another 1-3 months, and most probably longer. What’s even more interesting is that this vulnerability may have been discovered previously by a different researcher who didn’t notify the vendor. If that’s the case, then the vulnerability has probably been in use as a tool to break into networks for a while. Who knows; it could have been discovered months or even years ago. That type of unpublished vulnerability is known as a 0day and is the favorite weapon of the malicious hacker.

At some point the vulnerability does become public knowledge. It’s also at this point that the vendors who make the vulnerability scanning technology become aware of the new risk. When they do learn about the new risk, they need to develop a signature or script for their scanning technology so that it can detect the risk. That development process can take anywhere from a few days to a few weeks depending on the complexity of the risk. As a result, the customers that rely on vulnerability scanning are in the dark until the vendor can publish a working and tested signature… but the hackers don’t need to wait at all. They can use the vulnerability almost immediately.

So in summary, there is a large risk window between the point of discovery of a vulnerability and the point at which a vulnerability scanner can detect it. This risk and exposure window is rarely smaller than a few months, and can be as large as several years. During that time there is a very good chance that malicious hackers will be using your undiscovered risks to penetrate into your infrastructures. What’s worse is that you’ll have no idea that you’ve been hacked, because like vulnerability scanning technology, Intrusion Detection technology also can’t identify threats if it doesn’t know what to look for. Moreover, most Intrusion Detection technologies aren’t configured properly and as such don’t work properly.

Unfortunately the story doesn’t end there. Vulnerability scanners also suffer from significant issues with accuracy. In all cases where I’ve used (various) vulnerability scanners, the best results that I’ve ever achieved were about 30% accurate. This means that most of the vulnerabilities that were detected during my various scans weren’t actually vulnerabilities but instead were false alarms, also called false positives. More frightening is the number of vulnerabilities that I discovered while performing Real Time Dynamic Testing (manual hacking) that were entirely missed by the vulnerability scanner. If you don’t believe me then go download a free vulnerability scanner, test your network and verify the results yourself.

This inaccuracy is partially due to the architecture of the vulnerability scanners and the fact that no two networks are alike. Vulnerability scanners use static signatures or scripts that are only capable of detecting a vulnerability if the check’s syntax matches exactly and the target responds in a way that the scanner can understand. If, however, the target (let’s say it’s a computer system) is configured in a custom way, then it may not respond in a way that the scanner will understand (how many of you keep the default configuration?). This communication barrier is a large part of what causes false positives and false negatives.
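That brittleness is easy to demonstrate with a contrived example. The signature and banners below are invented, but the pattern is real: a static check matches an exact string, so a customized response hides a flaw that is still present.

```python
# A static banner signature from a hypothetical scanner plugin for a
# hypothetical vulnerable server version.
VULN_SIGNATURE = "Apache/2.2.3"

def scanner_check(banner: str) -> bool:
    # Scanners match exact, pre-written patterns against responses.
    return VULN_SIGNATURE in banner

# The same vulnerable server, two configurations:
default_banner = "Server: Apache/2.2.3 (Unix)"
hardened_banner = "Server: WebServer"  # admin rewrote the banner string

print(scanner_check(default_banner))   # True: detected
print(scanner_check(hardened_banner))  # False: a false negative; the
                                       # flaw is still there, just unseen
```

A human tester probes behavior rather than matching strings, which is why manual testing keeps finding what scanners miss.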

A note about false positives and false negatives: some vendors claim that their vulnerability scanners have low rates of false positives. As with Intrusion Detection, if the low false positive rates are real, then it’s usually reasonable to say that the technology has high rates of false negatives. You can think of it as a sliding scale from 1 to 10, where 1 is 100% false positives and 10 is 100% false negatives. As you move up and down the scale you inevitably end up with more of one or the other; you can never eliminate both. With that said, it’s my opinion that more false positives are better than more false negatives.
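The sliding scale can be shown with a toy detection threshold. The scores and ground-truth labels below are invented; the point is only that moving the threshold trades one error type for the other, never eliminating both.

```python
# Toy alert scores: higher means "more suspicious". The boolean marks
# which events are real attacks (invented ground truth).
events = [(0.2, False), (0.4, False), (0.5, True),
          (0.6, False), (0.8, True), (0.9, True)]

def rates(threshold: float):
    """Count false positives and false negatives at a given threshold."""
    fp = sum(1 for score, attack in events
             if score >= threshold and not attack)
    fn = sum(1 for score, attack in events
             if score < threshold and attack)
    return fp, fn

# A loose threshold alerts on everything (false positives); a strict
# one stays quiet and misses real attacks (false negatives).
for t in (0.3, 0.55, 0.85):
    print(t, rates(t))
```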

If vulnerability scanners aren’t the right way to protect yourself, then what is? You should protect yourself by exposing your business to an accurate and controlled reproduction of the threat by using a quality security provider. It is important to remember that no single hacker, good or bad, has access to all of the 0-days in the world. As such, it is entirely possible for a team of ethical hackers to accurately reproduce the threat that unethical hackers can create. Testing at that level enables you to identify weaknesses in your defenses that would not otherwise be detected by testing at lesser levels. What good would a penetration test or a vulnerability assessment do if malicious hackers will test you harder?

One of the many advantages of using a team of talented hackers for security testing instead of relying on automated vulnerability scanners is that those hackers can and should perform research against unique technologies that they encounter during a security test. I practice what I preach by the way. When our team delivers an Advanced Penetration Test to a customer we always perform our own research against interesting targets. Those targets can be Web Applications, Web Services, or even custom daemons running on systems. In the end, if we find something new we’ll write an exploit (proof of concept) for the customer and include that in the final deliverable.

In closing, I am not suggesting that network vulnerability scanners are bad because they do have their place and they do serve a purpose. They are particularly useful in the hands of a skilled security expert especially when performing reconnaissance against large networks. In that scenario the scanner enables the expert to save time and to rapidly collect intelligence about targets given that the engagement is non-stealth in nature. With that said, I wouldn’t rely on scanners for anything more than just reconnaissance, at least not yet.

Note: (Thank you to minoo for pointing out a few mistakes in my previous revision of this entry. I hope that this entry is as clear as I intend it to be. There is no one team that is the best, but there are only a few good ones. If this isn’t clear enough or if it needs more revision please comment.)


Finding The Quality Security Vendor (Penetration Testing, Vulnerability Assessments, Web Application Security, etc)

While I’ve written several detailed white-papers on the subject of identifying quality security vendors, I still feel compelled to write more about the subject. It is my opinion that choosing the right security vendor is critical to the health and safety of a business. Choosing the wrong vendor can leave you with a false sense of security that in the end might result in significant damages. Often those damages can’t be fully measured and appreciated, especially when they involve the tarnishing of a good name.

This problem of identifying quality isn’t new but it does take on a new importance when it involves the safety of your trade secrets, source code, or otherwise critically sensitive information.  When you trust a security provider to test your IT Infrastructure, your people, physical security, etc. you are relying on them to identify risks that malicious hackers might otherwise discover.  If the provider does not test you at the same threat level as the malicious hackers then their service is almost useless. 
If that doesn’t compel you to want quality security services then go ahead and take the risk. I suppose the question really is: how much is your network (and its data) worth? If it’s worth more than $500,000.00 then it’s probably worth spending money on a quality security vendor to protect it, right?
So how do you know which providers are quality and which ones are frauds?
The first rule of thumb is to watch out for vendors that produce deliverables that are the product of vulnerability scanners. There are two reasons for this. The first is that you don’t need to pay anyone to run an automated scan when you can do it yourself for much less, or for free; you can choose from a variety of free tools like nessus, or you can go out and buy a license for a vulnerability scanner.
The second is that vulnerability scanners do not produce accurate results. In fact most vulnerability scanners produce results that contain anywhere from 40-90% false positives, with an unknown rate of false negatives. While these tools are useful for reconnaissance they should not be used as the primary method for security testing.
Watch out for the vendor that tells you that they will run a vulnerability scan against your network and then “vet” the results.  Vetting doesn’t mean that they are going to do additional discovery. Vetting only means that the vendor will check the results of the vulnerability scan and eliminate the false positives. The quality of the end product is then only as good as the accuracy of the vulnerability scanner. Would you bank on that?
When you are choosing the vendor make sure to ask them specific questions.  Questions that I find helpful are realistic but based on theoretical architectures.  For example you could ask a vendor the following question:
“Suppose you are confronted with an architecture that consisted of 10 desktops behind a single firewall.  That firewall has properly configured IPS capabilities and there are no ports forwarded from the internet to any system behind that firewall. How would you [the vendor] penetrate into that network? Once you penetrate how would you perform Distributed Metastasis?” Email me for the answer if you don’t know it already. 
You can also ask the vendor how they would use a directory traversal vulnerability to penetrate into a network.  This is a bit of a trick question but if they know what they are doing then they will be able to answer it properly.  The short answer is that you need to inject code into the web-server’s error log and then use the directory traversal vulnerability to render the code. (Again, if you need the complete answer email me and I’ll get it to you.)
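The traversal half of that trick can be sketched as follows (the log-poisoning half happens earlier, by requesting a bogus URL whose payload gets written into the error log). Paths and function names below are illustrative, not from any real application; the sketch shows both the naive path handling that enables traversal and the normalization check that defeats it:

```python
from pathlib import Path

WEB_ROOT = Path("/var/www/html")  # hypothetical web root

def resolve_naive(requested: str) -> Path:
    # Vulnerable: user input is joined straight under the web root,
    # so "../" sequences walk right back out of it.
    return WEB_ROOT / requested

def resolve_checked(requested: str) -> Path:
    # Normalize first, then verify the result is still inside the root.
    root = WEB_ROOT.resolve()
    candidate = (root / requested).resolve()
    if not candidate.is_relative_to(root):
        raise ValueError("directory traversal attempt")
    return candidate

# The payload walks out of the web root to the error log, where
# attacker-supplied code was previously planted via a crafted request.
payload = "../../log/apache2/error.log"
print(resolve_naive(payload))  # escapes the web root unnoticed
```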
 
Another good rule is to only choose security vendors who also perform Vulnerability Research and Development (“R&D”). That is to say that the vendor must frequently perform security research against technology, identify vulnerabilities in that technology, create exploits for those vulnerabilities, and release formal security advisories. If they don’t, then chances are they don’t know how to do it. But why is R&D important?
R&D enables the vendor to keep its penetration testing skills honed (so long as the research is done by the penetration testers). Penetration Testers who do not perform this kind of research are literally Script Kids (sorry guys). Script Kids are people who download tools and use those tools to penetrate into networks; in almost all cases they don’t have any understanding of how the tools work. If you think about it, that’s like giving a loaded gun to a 3 year old.
You can also ask the vendor how they collect their threat intelligence. Threat intelligence is a critical aspect of delivering quality security services. If the vendor doesn’t have current intelligence about the threat, then how will they help you to defend yourself against it? While I won’t tell you how my team collects this intel, I will tell you that it’s not from the news, and most certainly not from public forums alone.
In closing, my recommendation to you is that you do your homework before you choose a vendor. Research the components required for delivering a quality service, then use your research to question the provider. As an example, if you were going to get a Web Application Penetration Test, ask the vendor to define the term “Penetration Test”. Ask the vendor what the difference is between a Penetration Test and a Vulnerability Assessment. Also ask them to explain RFI, LFI, XSS, SQL Injection, Blind SQL Injection, etc. Remember, you are going to spend money on security, so you might as well make it worthwhile. If you don’t, then you’re just adding that money to the damages from the hack that you’ll suffer in the end.
If you have any questions please feel free to leave me a comment or send me an email.  You might also want to check out the white papers that I’ve linked at the upper right hand corner of this blog.  Those papers go into more detail about how to choose a good security vendor and how to select the right service. 

Followup to my last Brian Chess – Fortify Software post.

Recently I published a post about Fortify Software’s Brian Chess because of some outlandish claims that he made in an article about penetration testing being “Dead by 2009”. The off-line and on-line comments that resulted from that post were mostly in favor of what I’d written, and one of those comments really caught my eye. So here is a post dedicated to Rafal in response to his comment on my article about Brian Chess.

Comment by Rafal shown below, verbatim:

“If I may call a sanity timeout here folks – while I don’t agree with Brian’s assertions necessarily – if you combine a few factors you could conceivably come to the conclusion that penetration testing will start to dwindle (just not as quickly as 2009).”

It’s only conceivable for those who do not know what Penetration Testing is, and many self-proclaimed security gurus don’t. So let’s start with some (partial) definitions here:

Vulnerability Assessment:
(Assessment: the act of assessing; appraisal; evaluation.)
A Vulnerability Assessment is a service that evaluates a particular target, or set of targets for the purpose of identifying points of exposure that are open to assault. A Vulnerability Assessment does not attempt to compromise or penetrate into a target once a point of exposure is identified, it only aims at assessing the target for points of risk. Vulnerability Assessments by their very nature are prone to False Positives and False Negatives as the findings are never validated via Penetration or Exploitation.

Vulnerability Assessment Tools include:

  • WebInspect for Web Application Vulnerability Assessments
  • Nessus for Network Vulnerability Scanning
  • Fortify for Web Application Vulnerability Assessments
  • Retina for Network Vulnerability Scanning
  • etc… you get the idea.


Penetration Test:
(Penetration: the act or power of penetrating.)
A Penetration Test is a service that evaluates a particular target, or a set of targets, for the purpose of identifying points of exposure that are open to assault. A Penetration Test differs from a Vulnerability Assessment in that it attempts to penetrate into the target by exploiting any discovered points of risk and exposure. A Penetration Test, when done properly, will result in an accurate deliverable that contains no false positives. This is possible because exploitation of a risk or point of exposure is either successful or not. Penetration Tests can include theoretical findings, but those should not be reported as positives.

Penetration Testing Tools include (I’d recommend these):


You can use a Vulnerability Assessment or a Penetration Test against any type of target not just technology based targets. At Netragard we perform physical penetration tests, wireless penetration tests, network penetration tests, social engineering based penetration tests, web application penetration tests, etc. Likewise we can deliver vulnerability assessments against the same set of targets if penetration testing is too aggressive.

(I get the feeling that both Rafal and Brian Chess think that Penetration Testing is a Web Application only service)

“Here’s my logic – feel free to scrutinize. For the record I work for HP (the SPIDynamics acquisition) so you guys can feel free to rip on the fact that our marketing folks I’m sure make interesting claims as well… but I digress. Here are some things to consider:

Actually we’ve got quite a bit of interesting history with HP, but that’s a different story. With respect to SPIDynamics and the WebInspect tool, I’m sorry that HP ever acquired SPIDynamics. WebInspect was a reasonable tool for doing preliminary reconnaissance against Web Applications during non-covert services. Once HP acquired the technology, its quality went down the tubes. Not only that, but the process of acquiring a license from HP is excruciatingly painful at best. Whatever happened to being able to purchase the product online? /end rant

“1. When you do penetration testing, what are you really testing? Are you testing the system or the intelligence and skill of the pen tester? This is a very tough question to answer.”

Why is that a difficult question to answer? If you’ve built your penetration testing team properly, then your team will be able to expose its targets to the same or greater threat level than that which they will likely face in the real world. The fact of the matter is that the more secure the infrastructure, the more challenging the test. And yes, it’s impossible to know everything, but it’s not impossible to do a great job.

“2. Pursuant to #1 above, and the business’ (living in reality land here) need to do lowest-cost vendors… what value do you suppose that the 90%+ of companies that go lowest-cost (outsourced to India, China, Mexico, etc) are getting?”

Businesses do not “need to do the low-cost vendors”; they choose to because in most cases they are making uneducated decisions. Mind you, the lack of education on their part is not their fault; it’s the fault of the poor quality vendors. Poor quality vendors advertise their services as if they are the same quality as the high quality vendors, thereby causing confusion. When a business compares the two services they don’t see the difference, and so they choose the less expensive one.

“3. With every point-and-click testing tool there is a double-edged sword… here’s why 3a. Tools make you more efficient BUT”

I only partially agree. When the tool spits out over 2,000 false positives (like WebInspect did the last time we used it) with only 3 real positives, it is doing very little to increase the efficiency of a team. Other tools that produce fewer false positives and more accurate results are very useful for time savings, but their results should not be used to create an end product. Automated tools are not dynamic by nature and as such cannot identify the same risks as talented penetration testers.

“3b. Tools can make you less “hands-on” when it comes to writing low-level exploits or code…”

Tools are also the root cause of the fraudulent security experts. I’m not saying that tools don’t have their place, because they certainly do. But they allow people to become lazy and as such breed “experts” that are for all intents and purposes no better than script kids (which, might I add, are very dangerous because they don’t know what they are doing).

“4. Penetration testing is an after-the-fact requirement… which is too late. You have to use tools to augment and empower your developers to write better code at the grass-roots otherwise you’re hosed.”

You’re partially wrong. The tools that you speak of are derived from attacks that were created by Penetration Testers (aka: hackers). With respect to the world of Web Applications, do you think that a tool discovered the first SQL Injection vulnerability and created a method for exploitation? Of course not! Tools will always be a few steps behind the capabilities of a real hacker, regardless of that hacker’s ethical bias. The fact of the matter is that, as hackers, we perform research and identify new methods for penetration that were not previously discovered, and your tools cannot and will not ever be able to defend against that.

“So – to summarize, penetration testing isn’t going to be “dead” in this year of 2009, but it may start to dwindle down some depending on how good the marketing machines of the tools vendors are. Brian’s statement is a self-fulfilling prophecy… he is making a statement that he hopes will incite people to make that statement come true.”

I disagree, and again, you are working for a vendor that makes these tools. It’s in your best interest to suggest that somehow Penetration Testing will be less of a requirement because of the tools that you create. The reality of it is that if people drink that kool aid they will become more vulnerable, not more secure.

When our military tests the armor of its M1A2 Abrams Tank, they test it against the real threat. So why aren’t we pushing our customers to do the same thing? It makes perfect sense. In our case the real threat is always going to be the malicious hacker, not the software vendor making pretty and easy to use tools. The tools do have a place, but they will only ever identify the low hanging fruit. It takes a professional hacker/penetration tester to actually test an infrastructure properly. Let’s see your tools perform Social Engineering or drop USB sticks in parking lots.



ROI of good security.

The cost of good security is a fraction of the cost of damages that usually result from a single successful compromise. When you choose the inexpensive security vendor, you are getting what you pay for. If you are looking for a check in the box instead of good security services, then maybe you should re-evaluate your thinking because you might be creating a negative Return on Investment.

Usually a check in the box means that you comply with some sort of regulation, but that doesn’t mean that you are actually secure. As a matter of fact, almost all networks that contain credit card information and are successfully hacked are PCI compliant (a real example). That goes to show that compliance doesn’t protect you from hackers; it only protects you from auditors and the fines that they can impose. What’s more, those fines are only a small fraction of the cost of the damages that can be caused by a single successful hack.

When a computer system is hacked, the hacker doesn’t stop at one computer. Standard hacker practice is to perform Distributed Metastasis and propagate the penetration throughout the rest of the network. This means that within a matter of minutes the hacker will likely have control over most or all of the critical aspects of your IT infrastructure and will also have access to your sensitive data. At that point you’ve lost the battle… but you were compliant, you paid for the scan, and now you’ve got a negative Return on that Investment (“ROI”).

So what are the damages? It’s actually impossible to determine the exact cost in damages that result from a single successful hack because it’s impossible to be certain of the full extent of the compromise. Nevertheless, here are some of the areas to consider when attempting to calculate damages:

  • Man-hours to identify every compromised device
  • Man-hours to reinstall and configure every device
  • Man-hours to check source code for malicious alterations
  • Man-hours to monitor network traffic for signs of malicious activity or access
  • Man-hours to educate customers
  • Penalties and fines
  • The cost of downtime
  • The cost of lost customers
  • The cost of a damaged reputation
  • etc.

(The damages could *easily* cost well over half a million dollars on a network of only ~50 computers.)

Now let’s consider the Return on Investment of *good* security. An Advanced Penetration Test against a small IT infrastructure (~50 computers in total) might cost somewhere between $16,000 and $25,000 for an 80-hour project. If that service is delivered by a quality vendor, then it will enable you to identify and eliminate your risks before they are exploited by a malicious hacker. The ROI of the quality service would be equal to the cost in damages of a single successful compromise minus the cost of the services. Not to mention you’d be compliant too…

(Note: the actual cost of services varies quite a bit depending on what needs to be done, etc.)

So why is it that some vendors will do this work for $500.00 or $2,000.00? It’s simple: they are not delivering the same quality of service. When you pay $500.00 for a vulnerability scan you are paying for something that you could do yourself for free (download Nessus). Nevertheless, when you pay $500.00 you are really only paying for about 5 minutes of manual labor; the rest of the work is automated and done by the tools. (If you broke that down to an hourly rate, you’d be paying something like $6,000.00 an hour, since you’re paying $500.00 per 5 minutes.) In the end you might end up with a check in your compliance box, but you’ll still be just as vulnerable as you were in the beginning.
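The back-of-the-envelope math above can be sketched as follows (all figures are the illustrative estimates used in this post, not real engagement data):

```python
# Illustrative figures from the post; estimates only, not real engagement data.
damages = 500_000        # plausible damages from one compromise (~50-host network)
advanced_test = 20_000   # mid-range cost of an 80-hour Advanced Penetration Test
cheap_scan = 500         # commodity automated vulnerability scan

# ROI of the quality service: damages avoided minus what you paid for it.
roi = damages - advanced_test
print(f"ROI of the quality test: ${roi:,}")

# Effective hourly rate of a $500 scan backed by ~5 minutes of human labor:
# $500 every 5 minutes means 12 such intervals per hour.
hourly_rate = cheap_scan * 60 / 5
print(f"Effective rate of the cheap scan: ${hourly_rate:,.0f}/hour")
```

Under these assumptions the quality test nets roughly $480,000 in avoided damages, while the cheap scan works out to an eye-watering $6,000 per hour of actual human effort.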
