Archive for the ‘penetration test’ Category

How much should you spend on penetration testing services?

The question we are asked most often is “how much will it cost for you to deliver a penetration test to us?”.  Rather than responding to that question each time with the exact same answer, we thought it best to write a detailed yet simple blog entry on the subject.  We suspect that you’ll have no trouble understanding the pricing methods described herein because they’re common sense.

The price for a genuine penetration test is based on the amount of human work required to deliver the test successfully.  The amount of human work depends on the complexity of the infrastructure to be tested, and that complexity depends on the configuration of each individual network-connected device.  A network-connected device is anything from a server, switch, or firewall to a telephone.  Each unique network-connected device provides different services that serve different purposes, and because each service is different, each requires a different amount of time to test correctly.  It is for this exact reason that a genuine penetration test cannot be priced based on the number of IP addresses or the number of devices.  It does not make sense to charge $X per IP address when each IP address requires a different amount of work to test properly.  Instead, the only correct way to price a genuine penetration test is to assess the workload and derive the price from there.

At Netragard the workload for an engagement is based on science and not an arbitrary price per IP.  Our pricing is based on something that we call Time Per Parameter (TPP).  The TPP is the amount of time that a Netragard researcher will spend testing each parameter.  A parameter is either a service provided by a network-connected device or a testable variable within a web application.  Higher-threat penetration tests have a higher TPP while more basic penetration tests have a lower TPP.  This makes sense because the more time we spend trying to hack something, the higher the chances of success.  Netragard’s base LEVEL 1 penetration test is our most basic offering and allows for a TPP of 5 minutes.  Our LEVEL 2 penetration test is far more advanced than LEVEL 1 and allows for a TPP of up to 35 minutes.  Our LEVEL 3 penetration test is possibly the most advanced threat penetration test offered in the industry and is designed to produce a true nation-state level threat (not that APT junk).  Our LEVEL 3 penetration test has no limit on TPP or on offensive capabilities.
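The exact methodology is proprietary, but the basic arithmetic behind TPP-based pricing can be sketched in a few lines.  The tier minutes below come from the figures above; everything else (the hourly rate and the parameter count) is an illustrative assumption made up for demonstration, not Netragard’s actual numbers:

```python
# Illustrative sketch of Time Per Parameter (TPP) pricing.
# LEVEL tier minutes are taken from the article; the hourly rate
# and parameter count are hypothetical, for demonstration only.

TPP_MINUTES = {"LEVEL 1": 5, "LEVEL 2": 35}  # LEVEL 3 has no TPP limit

def engagement_hours(parameter_count, level):
    """Total tester hours: one TPP allotment per testable parameter."""
    return parameter_count * TPP_MINUTES[level] / 60.0

def engagement_price(parameter_count, level, hourly_rate=150.0):
    """Price derived from workload, not from a count of IP addresses."""
    return engagement_hours(parameter_count, level) * hourly_rate

# Example: 40 testable parameters (services plus web-app variables)
hours = engagement_hours(40, "LEVEL 2")  # 40 * 35 / 60, about 23.3 hours
price = engagement_price(40, "LEVEL 2")
```

The point of the sketch is that two engagements with the same number of IP addresses can have wildly different parameter counts, and therefore wildly different prices.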

The details of the methodology that we use to calculate TPP are something that we share with our customers but not our competitors (sorry guys).  What we will tell you is that the count-based pricing methodology used by our competition is a far cry from our TPP-based pricing.  Here’s one example of how our pricing methodology saved one of our customers $49,000.00.

We were recently competing for a Penetration Testing engagement for a foreign government department.  This department had received a quote for a Penetration Test from another penetration testing vendor, one that also creates software used by penetration testers.  When we asked the department how much the competing quote came in at, they told us roughly $70,000.00.  When we asked them if that price was within their budget they said yes.  Our last question was about the competing pricing methodology.  We asked the department, “did the competitor price based on how many IP addresses you have or did they do a detailed workload assessment?”.  The department told us that the competitor had priced based on the number of IP addresses, and that the number was 64.

At that moment we understood that we were competing against a vendor that was offering a Vetted Vulnerability Scan and not a Genuine Penetration Test.  If a vendor prices an engagement based on the number of IP addresses involved then that vendor is not taking actual workload into consideration.  For example, a vendor that charges $500.00 per IP address for 10 IP addresses would price the engagement at $5,000.00.  What happens if those 10 IP addresses require 1,000 man-hours of work to test because they are exceedingly complex?  Will the vendor really find a penetration tester to work for $5.00 an hour?  Of course not.  The vendor will instead deliver a Vetted Vulnerability Scan and call it a Penetration Test.  They will scan the 10 IP addresses, vet the results produced by the scanner and exploit things where possible, then produce a report.  Moreover, they will call the process of vetting “manual testing”, which is a blatant lie.  Any vendor that does not properly evaluate workload requirements must use a Vetted Vulnerability Scan methodology to avoid running financially negative on the project.

The inverse of this (which is far more common) is what happened with the foreign government department.  While our competitor priced the engagement at $1,093.75 per IP for 64 IPs, which equates to $70,000.00, we priced at $21,000.00 for 11 IPs (each of which offered between 2 and 6 moderately complex Internet-connectable services).  More clearly, our competitor wanted to charge the department $57,968.75 for testing 53 IP addresses that were not in use, which equates to charging for absolutely nothing!  When we presented our pricing to the department we broke our costs down to show the exact price that we were charging per Internet-connectable service.  Needless to say, the customer was impressed by our pricing and shocked by our competitor’s, and we won the deal.
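The arithmetic behind this comparison is simple enough to check yourself; here it is in a few lines of Python, using only the figures quoted above:

```python
# Reproducing the numbers from the foreign government engagement above.
competitor_per_ip = 1093.75
quoted_ips = 64
live_ips = 11  # IPs that actually offered testable services

competitor_total = competitor_per_ip * quoted_ips             # $70,000.00
dead_ips = quoted_ips - live_ips                              # 53 unused IPs
charged_for_nothing = competitor_per_ip * dead_ips            # $57,968.75

our_price = 21000.00
customer_savings = competitor_total - our_price               # $49,000.00
```

In other words, under count-based pricing more than 80% of the competitor’s quote paid for testing addresses with nothing on them.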

While we wish that we could tell you that being charged for nothing is a rare occurrence, it isn’t.  If you’ve received a penetration test then you’ve probably been charged for nothing.  Another recent example involves a small company that was in need of Penetration Testing for PCI.  They approached us telling us that they had already received quotes from other vendors and that the quotes were all in the thousands of dollars.  We explained that we would evaluate their network and determine the workload requirements.  When we did, we found that they had zero responding IP addresses and zero Internet-connectable services, which equates to zero seconds of work.  Instead of charging them anything, we simply issued them a certificate stating that as of the date of testing no attack surface was present.  They were so surprised by our honesty that they wrote us this awesome testimonial about their experience with us.

Finally, our TPP-based pricing doesn’t need to be expensive.  In fact, we can deliver a Penetration Test to any customer with any budget.  This is because we will adjust the engagement’s TPP to match your budget.  If your budget only allows for a $10,000.00 spend then we will reduce the TPP to bring the project cost in line with your budgetary requirements.  Just remember that reducing the TPP means that our penetration testers will spend less time testing each parameter, and increasing the TPP means that they will spend more.  The more time, the higher the quality.  If we set your TPP at 10 minutes but encounter services that only require a few seconds to test, then we will allocate the leftover time to other services that require more time to test.  Doing this ensures that complex services are tested very thoroughly.
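The leftover-time reallocation described above can be sketched as a simple shared pool of minutes.  This is an illustrative sketch, not our actual scheduling logic, and the service names and minute values in the example are hypothetical:

```python
def allocate_time(services, tpp_minutes):
    """Give each service up to its TPP allotment of minutes; unused
    minutes go into a pool that complex services can then draw from.
    `services` maps service name -> minutes that service actually needs."""
    pool = 0.0
    allocation = {}
    # First pass: simple services return their unused minutes to the pool.
    for name, needed in services.items():
        used = min(needed, tpp_minutes)
        allocation[name] = used
        pool += tpp_minutes - used
    # Second pass: complex services draw extra minutes from the pool.
    for name, needed in services.items():
        if needed > allocation[name] and pool > 0:
            extra = min(needed - allocation[name], pool)
            allocation[name] += extra
            pool -= extra
    return allocation

# Hypothetical engagement: TPP of 10 minutes across three services.
# The trivial SMTP and SSH checks free up minutes for the web app.
alloc = allocate_time({"smtp": 1, "ssh": 2, "webapp": 30}, tpp_minutes=10)
```

The total time budget (number of parameters times TPP) never changes; it is only shifted toward whatever turns out to be complex.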

 

 


Whistleblower Series – The real problem with China isn’t China, it’s you.

Terms like China, APT and Zero-Day are synonymous with Fear, Uncertainty and Doubt (FUD).  The trouble is that, in our opinion anyway, these terms and their respective news articles detract from the actual problem.  For example, in 2011 only 0.12% of compromises were attributed to zero-day exploitation while 99.88% were attributed to known vulnerabilities.  Yet, despite this fact, the media continued to write about the zero-day threat as if it were a matter of urgency.  What they really should have been writing about is that the majority of people aren’t protecting their networks properly.  After all, if 99.88% of all compromises were the result of exploiting known vulnerabilities, then someone must not have been doing their job.  Moreover, if people are unable to protect their networks from the known threat, then how are they ever going to defend against the unknown?

All of the recent press about China and their Advanced Persistent Threat is the same; it detracts from the real problem.  More clearly, the problem isn’t China, Anonymous, LulzSec, or any other FUD-ridden buzzword.  The problem is that networks are not being maintained properly from a security perspective, and so threats are aligning with risks to achieve successful penetration.  A large part of the reason these networks are such soft targets is that their maintainers are sold a false sense of security from both the services and the technology perspective.

In this article we’ll show you how easy it was for us to hack into a sensitive government network that was guarded by industry technologies and testing best practices.  Our techniques deliberately mimicked those used by China.  You’ll notice that the techniques aren’t particularly advanced (despite the fact that the press calls them Advanced) and are in fact based largely on common sense.  You’ll also notice that we don’t exploit a single vulnerability other than the one that exists in a single employee (the human vulnerability).  Because this article is based on an actual engagement we’ve altered certain aspects of our story to protect our customer and their identity.  We should also mention that since the delivery of this engagement our customer has taken effective steps to defeat this style of attack.

Here’s the story…

We recently (relatively speaking anyway) delivered a nation state style attack against one of our public sector customers.  In this particular case our testing was unrestricted and so we were authorized to perform Physical, Social and Electronic attacks.  We were also allowed to use any techniques and technologies that were available should we feel the need.

Let’s start off by explaining that our customer’s network houses databases of sensitive information that would be damaging if exposed.  They also carry significantly more political weight and authority than most of the other departments in their area.  The reason they were interested in receiving a nation-state style Penetration Test is that another department within the same area had come under attack by China.

We began our engagement with reconnaissance.  During this process we learned that the target network was reasonably well locked down from a technical perspective.  The external attack surface provided no services or applications into which we could sink our teeth.  We took a few steps back and found that the supporting networks were equally protected.  We also detected three points where Intrusion Prevention was taking place: one at the state level, one at the department level, and one functioning at the host level.

(It was because of these protections and minimalistic attack surface that our client was considered by many to be highly secure.)

When we evaluated our customer from a social perspective we identified a list of all employees published on a web server hosted by yet another department.  We were able to build micro-dossiers for each employee by comparing employee names and locations against various social media sites and studying their respective accounts.  We quickly constructed a list of employees who were not active on specific social networking sites and lined them up as targets for impersonation, just in case we might need to become someone else.

We also found a large document repository (not the first time we’ve found something like this) that was hosted by a different department.  Contained within this repository was a treasure trove of Microsoft Office files with respective change logs and associated usernames.  We quickly realized that this repository would be an easy way into our customer’s network.  Not only did it provide a wealth of metadata, but it also provided perfect pretexts for Social Engineering… and yes, it was open to the public.

(It became glaringly apparent to us that our customer had overlooked their social points of risk and exposure and were highly reliant on technology to protect their infrastructure and information.  It was also apparent that our customer was unaware of their own security limitations with regards to technological protections.  Based on what we read in social forums, they thought that IPS/IDS and Antivirus technologies provided ample protection.)

We downloaded a Microsoft Office document whose change log indicated that it was being passed back and forth between one of our customer’s employees and an employee in a different department.  This was ideal because a trust relationship had already been established and was ripe for the taking.  Specifically, it was “normal” for one party to email this document to the other as a trusted attachment.  That normalcy provided the perfect pretext.

(Identifying a good pretext is critically important when planning Social Engineering attacks.  A pretext is a plausible but false reason for doing something.  A good pretext often causes a victim to perform actions that they otherwise would not.)

Once our document was selected we had our research team build custom malware specifically for the purpose of embedding it into the Microsoft Office document.  As is the case with all of our malware, we built in automatic self-destruct features and expiration dates that result in clean deinstallation when triggered.  We also built in covert communication capabilities to help avoid detection.

(Building our own malware was not a requirement but it is something that we do.  Building custom malware ensures that it will not be detected, and it ensures that it will have the right features and functions for a particular engagement.)

Once the malware was built and embedded into the document we tested it in our lab network.  The malware functioned as designed and quickly established connectivity with its command and control servers.   We should mention that we don’t build our malware with the ability to propagate for liability and control reasons.  We like keeping our infections highly targeted and well within scope.

When we were ready to begin our attack we fired off a tool designed to generate a high rate of false positives in Intrusion Detection / Prevention systems and Web Application Firewalls alike.  We ran that tool against our customer’s network through hundreds of different proxies to force the appearance of multiple attack sources.  Our goal was to hide any real network activity behind a storm of false alarms.

(This demonstrates just how easily Intrusion Detection and Prevention systems can be defeated.  These systems trust network data and generate alerts based on that data.  When an attacker forges network data, that attacker can also forge false alerts.  Generate enough false alerts and you disable one’s ability to effectively monitor the real threat.)

While that tool was running we forged an email to our customer’s employee from the aforementioned trusted source.  That email contained our infected Microsoft Office document as an attachment.  Within less than three minutes our target received the email and opened the infected attachment, thus activating our malware.  Once activated, our malware connected back to its command and control server and we had successfully penetrated our customer’s internal IT infrastructure.  We’ll call this initial point of Penetration T+1.  Shortly after T+1 we terminated our noise generation attack.

(The success of our infection demonstrates how ineffective antivirus technologies are at preventing infections by unknown types of malware.  T+1 was running a current and well-respected antivirus solution.  That solution did not interfere with our activities.  The success of our penetration at this point further demonstrates the risk introduced by the human factor.  We did not exploit any technological vulnerability but instead exploited human trust.)

Now that we had access we needed to ensure that we kept it.  One of the most important aspects of maintaining access is monitoring user activity.  We began taking screenshots of T+1’s desktop every 10 seconds.  One of those screenshots showed T+1 forwarding our email off to their IT department because they thought that the attachment looked suspicious.  While we were expecting to see rapid incident response, the response never came.  Instead, to our surprise, we received a secondary command and control connection from a newly infected host (T+2).

When we began exploring T+2 we quickly realized that it belonged to our customer’s head of IT Security.  We later learned that he had received the email from T+1 and scanned it with two different antivirus tools.  When the email came back clean he opened the attachment and infected his own machine.  Suspecting nothing, he continued to work as normal… and so did we.

Next we began exploring the file system of T+1.  When we looked at the directory containing users’ batch files we realized that their Discretionary Access Control Lists (DACLs) granted full permissions to anyone (not just domain users).  More clearly, we could read, write, modify and delete any of the batch files, so we decided to exploit this condition to commit a mass compromise.

To make this a reality, our first task was to identify a share that would allow us to store our malware installer.  This share had to be accessible in such a way that when a batch file ran it could read and execute the installer.  As it turned out, the domain controllers were configured with their entire C:\ drives shared and the same wide-open DACLs.

We located a path where other installation files were stored on one of the domain controllers.  We placed a copy of our malware into that location and named it “infection.exe”.  Then using a custom script we modified all user batch files to include one line that would run “infection.exe” whenever a user logged into a computer system.  The stage was set…

In no time we started receiving connections back from desktop users including but not limited to the desktops belonging to each domain admin.  We also received connections back from domain controllers, exchange servers, file repositories, etc.  Each time someone accessed a system it became infected and fell under our control.

While exploring the desktops of the domain administrators we found that one admin kept a directory called “secret”.  Within that directory were backups for all of the network devices including intrusion prevention systems, firewalls, switches, routers, antivirus, etc.  From that same directory we also extracted Visio diagrams complete with IP addresses, system types, departments, employee telephone lists, addresses, emergency contact information, etc.

At this point we decided that our next step would be to crack all passwords.  We dumped passwords from the domain controller and extracted passwords from the backed up configuration files.  We then fed our hashes to our trusted cracking machine and successfully cracked 100% of the passwords.

(We should note that during the engagement we found that most users had not changed their passwords since their accounts had been created.  We later learned that this was a recommendation made by their IT staff, based on an article written by a well-respected security figure.)

With this done, we had achieved an irrecoverable and total infrastructure compromise.  Had we been a true foe, there would have been no effective way to remove our presence from the network.  We achieved all of this without encountering any resistance, without exploiting a single technological vulnerability, without detection (other than the forwarded email of course) and without the help of China.  China might be hostile, but then again so are we.

 

 


Whistleblower Series – Don’t be naive, take the time to read and understand the proposal.

In our last whistleblower article, we showed that the vast majority of Penetration Testing vendors don’t actually sell Penetration Tests. We did this by deconstructing pricing methodologies and combining the results with common sense. We’re about to do the same thing to the industry average Penetration Testing proposal. Only this time we’re not just going to be critical of the vendors, we’re also going to be critical of the buyers.

A proposal is a written offer from seller to buyer that defines what services or products are being sold. When you take your car to the dealer, the dealer gives you a quote for work (the proposal). That proposal always contains an itemized list for parts and labor as well as details on what work needs to be done. That is the right way to build a service-based proposal.

The industry average Network Penetration Testing proposal fails to define the services being offered.  Remember, to ‘define’ means to state the exact meaning of something.  When we read a network penetration testing proposal and have to ask ourselves “so what is this vendor going to do for us?”, the proposal has clearly failed to define its services.

For example, just recently we reviewed a proposal that talked about “Ethos” and offered optional services called “External Validation” and “External Quarterlies” but completely failed to explain what “External Validation” and “External Quarterlies” were. We also don’t really care about “Ethos” because it has nothing to do with the business offering. Moreover, this same proposal absolutely failed to define methodology and did not provide any insight into how testing would be done. The pricing section was simply a single line item with a dollar value, it wasn’t itemized. Sure the document promised to provide Penetration Testing services, but that’s all it really said (sort of).

This is problematic because Penetration Testing is a massively dynamic service that contains a potentially infinite number of techniques (attacks and tests) for penetration attempts.  Some of those techniques are higher threat than others; some are higher risk than others.  If a proposal doesn’t define the tests that will be done, how they will be done, what the risks are, and so on, then the vendor is free to do whatever they want and call it a day.  Most commonly this means doing the absolute minimum amount of work while making it look like a lot.

Here’s some food for thought…

Imagine that we are a bulletproof vest Penetration Testing Company. It’s our job to test the effectiveness of bulletproof vests for our customers so that they can guarantee the safety of their buyers. We deliver a proposal to a customer that is the same quality as the average Network Penetration Testing proposal and our customer signs the proposal.

A week later, we receive a shipment of vests for testing. We hang those vests on dummies made up of ballistics gel in our firing range. We then take our powerful squirt guns, stand ten feet down range and squirt away. After the test is complete, we evaluate the vests and determine that they were not penetrated and so passed the Penetration Test. Our customer hears the great news and begins selling the vest on the open market.

In the scenario above, both parties are to blame. The customer did not do their job because they failed to validate the proposal, to demand clear definitions, to assess the testing methodology, etc. Instead they naively trusted the vendor. The vendor failed to meet their ethical responsibilities because they offered a misleading and dishonest service that would do nothing more than promote a false sense of security. In the end, the cost in damages (loss of life) will be significantly higher than the cost of receiving genuine services. In the end, the customer will suffer as will their own customers.

Unfortunately, this is what is happening with the vast majority of Network Penetration Tests. Vendors are perceived as experts by their customers and are delivering proposals like the ones described above. Customers then naively evaluate proposals assuming that all vendors are created equal and make buying decisions based largely on cost. They receive services (usually not a genuine penetration test), put a check in the box and move onto the next task. In reality, the only thing they’ve bought is a false sense of security.

How do we avoid this?

While we can’t force Network Penetration Testing firms to hold themselves to a higher standard, their customers can. If customers took the time to truly evaluate Network Penetration Testing proposals (or any proposal for that matter) then this problem would be eradicated. The question is do customers really want high quality testing or do they just want a check in the box? In our experience, both types of customers exist but most seem to want a genuine and high-quality service.

Here are a few things that customers can do to hold their Network Penetration Testing vendor to a higher standard.

  • Make sure the engagement is properly scoped (we discussed this in our previous article)
  • Make sure the proposal uses terms that are clearly defined and make sense. For example, we saw a proposal just one week before writing this article that was for “Non-intrusive Network Penetration Testing.” Is it possible to penetrate into something without being intrusive? No.
  • Make sure that the proposal defines terms that are unique to the vendor. For example, the proposal that we mentioned previously talks about “External Quarterlies” but fails to explain what that means. Why are people signing proposals that make them pay for an undefined service? Would you sign it if it had a service called “Goofy Insurance”?
  • Make sure the vendor can explain how they came to the price points that are reflected in the proposal. Ask them to break it down for you and remember to read our first article so that you understand the differences between count based pricing (wrong) and attack surface based pricing (right).
  • (We’ll provide more points in the next article).

As the customer, it is up to you to hold a vendor’s feet to the fire (we expect it). When you purchase poor quality services that are mislabeled as “Penetration Tests” then you are enabling the snake-oil vendors to continue. This is a problem because it confuses those who want to purchase genuine and high-quality services. It makes their job exceedingly difficult and in some cases causes people to lose faith in the Network Penetration Testing industry as a whole.

If you feel that what we’ve posted here is inaccurate and can provide facts to prove the inaccuracy then please let us know. We don’t want to mislead anyone and will happily modify these entries to better reflect the truth.


Whistleblower Series – Finding a genuine Penetration Testing vendor.

There’s been a theme of dishonesty and thievery in the Penetration Testing industry for as long as we can remember.  Much in the same way that merchants sold “snake-oil” as a cure-all for what ails you, Penetration Testing vendors sell one type of service and brand it as another, thus providing little more than a false sense of security.  They do this by exploiting their customers’ lack of expertise about penetration testing and make off like bandits.  We’re going to change the game; we’re going to tell you the truth.

Last week we had a new financial services customer approach us.  They’d already received three proposals from three other well-known and trusted Penetration Testing vendors.  When we began to scope their engagement we quickly realized that the IP addresses they’d been providing were wrong.  Instead of belonging to them, they belonged to an e-commerce business that sold beer-making products!  How did we catch this when the other vendors didn’t?  Simple: we actually take the time to scope our engagements carefully because we deliver genuine Penetration Testing services.

Most other penetration testing vendors use what is called count-based pricing, which we think should be a major red flag to anyone.  Count-based pricing simply says that you will pay X dollars per IP address for Y IP addresses.  If you tell most vendors that you have 10 IP addresses they’ll come back and quote you around $5,000.00 for a Penetration Test ($500.00 per IP).  That type of pricing is not only arbitrary but fraught with serious problems.  Moreover, it’s a solid indicator that the services are going to be of very poor quality.

Scenario 1: The Overcharge (Too much for too little)

If you have 10 IP addresses but none of them are running any connectable services then there are zero seconds’ worth of work to be done.  Do you really want to pay $5,000.00 for zero seconds’ worth of work?  Moreover, is it ethical for the vendor to charge you $5,000.00 for testing targets that are not really testable?  While we don’t think it’s ethical, we’ve seen many, many vendors do this very thing.

Scenario 2: The Undercharge (Too little money for too much work)

What if those 10 IP addresses were serving up medium-complexity web applications?  Let’s assume that each web application would take 100 hours to test, totaling 1,000 hours of testing time (not including the reporting, presentation, etc.).  If you do the math, that equates to an absolutely absurd hourly rate of $5.00 per hour for the Penetration Tester!  Of course, no penetration tester is going to work for that little money, so what are you really paying $5,000.00 for?

Well, let’s assume that the very-low-end cost of a penetration tester is around $60.00 per hour (it’s actually higher than that).  In order to deliver those 1,000 hours of work at $5,000.00, the test would need to be roughly 92% automated, leaving only about 83 hours of human work at an effective rate of $60.24 per hour.  Do you really want to pay $5,000.00 for a project that is 92% automated?  Moreover, is that even a Penetration Test?
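The automation math above is easy to verify; the $60.24 effective rate is the one quoted in the scenario:

```python
# Checking the automation math from the $5,000.00 scenario above.
budget = 5000.00
total_hours = 1000.0   # 10 apps * 100 hours each
tester_rate = 60.24    # effective hourly rate quoted in the scenario

human_hours = budget / tester_rate            # about 83 hours of real work
automated_fraction = 1 - human_hours / total_hours
print(round(automated_fraction * 100, 1))     # prints 91.7 (percent automated)
```

Everything beyond those 83 hours has to come from a scanner, because no one is paying a human for it.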

The terms Penetration Test and Vulnerability Scan only have one correct definition each.  The definition of Penetration Test is a test that is designed to identify the presence of points where something can make its way into or through something else.  Penetration Tests are not assessments (best guesses) of any kind.  A Penetration Test either successfully penetrates or it does not; there is no grey zone and there are no false positives.  (This is why we can guarantee the quality of our penetration testing services.)

A Vulnerability Assessment, on the other hand, is a best guess or an educated guess as to how susceptible something is to risk or harm.  Because it is an assessment and not a test there is room for error (guessing wrong), and so false positives should be expected.  A Vulnerability Scan is similar to a Vulnerability Assessment, only instead of a human doing the guesswork a computer program (with a much higher margin of error) does the guesswork.

So, if roughly 92% of a service is based on Vulnerability Scanning then how is it that Penetration Testing vendors can label such a service a Penetration Test?  They should call it what it really is, which is a Vetted Automated Vulnerability Scan; and a Vetted Automated Vulnerability Scan is about as effective as Penetration Testing a bulletproof vest with a squirt gun.  We’re not sure about you, but we wouldn’t want to wear that vest into battle.  These types of services provide little more than a false sense of security.

Back on track…

The question becomes how should a vendor price their services and build their proposals? While we won’t disclose our methodology here because we don’t want to enable copycats, we will provide you with some insight through an analogy.

Your car breaks down and you call a random mechanic.  You tell the mechanic what sort of car you drive and how many miles are on it, but never provide any details as to what happened.  The mechanic then quotes you $300.00 to fix your car (without ever diagnosing it) and $50.00 for a tow.  Would you bring your car to that mechanic? How can he afford to fix your car for $300.00 and make a profit?  To accomplish that he must have an arsenal of junk parts and gnome slaves working for peanuts, right?  Would you really trust the quality of his work? That is count-based pricing.

Fortunately automobile mechanics (most of them anyway) are more ethical than that (and gnome slaves don’t really exist anyway).  Most of them won’t deliver a quote until after they’ve evaluated your car and successfully diagnosed the problem.  Once diagnosed they’ll provide you with an itemized quote that includes parts, labor, taxes, and a timeframe for service delivery.  They won’t negotiate much on price because in most cases you are getting what you pay for.

Genuine Penetration Testing vendors are no different than genuine mechanics. All Penetration Testing vendors should be held to the same standard (including us).  What you pay for in services should be a direct reflection of the amount of work that needs to be done. The workload requirement should be determined through a careful, hands-on assessment. This means that when pricing is done right there is no room to adjust pricing other than to change workload. Any vendor that offers a lower price if you close before the end of the month is either offering arbitrary pricing or they are padding their costs.

What if a vendor truly needs to discount services? When we deliver Penetration Testing services we don’t charge for automation.  If we automate 10% then our services are discounted 10%.  If we automate 100% then our services are delivered to our customers free of charge. (Yes, that’s right, we’ll scan you for free while other vendors might charge thousands of dollars for that). Why charge for automation when it takes less than 3 minutes of our time to kick off a scan?  Just because we can charge for something doesn’t mean that it’s ethically right.

Anyway, this article is one of many to come in our whistle blower series. Please feel free to share or contact us with any questions.

If you feel that what we’ve posted here is inaccurate and can provide facts to prove the inaccuracy then please let us know.  We don’t want to mislead anyone and will happily modify these entries to better reflect the truth.


The 3 ways we owned you in 2012

Here are the top 3 risks that we leveraged to penetrate our customers’ networks in 2012. Each of these has been used to effect an irrecoverable infrastructure compromise during multiple engagements across a range of different customers. We flag a compromise as “irrecoverable” when we’ve successfully taken administrative control over 60% or more of the network-connected assets. You’ll notice that these risks are more human-oriented than technology-oriented, demonstrating that your people are your greatest risk. While we certainly do focus on technological risks, they don’t fall into the top three categories.

The general methodology that we follow to achieve an irrecoverable infrastructure compromise is depicted below at a high-level.

  1. Gain entry via a single point (one of the 3 referenced below)
  2. Install custom backdoor (RADON, our safe, undetectable, home-grown pseudo-malware)
  3. Identify and penetrate the domain controller (surprisingly easy in most cases)
  4. Extract and crack the passwords (we have rainbow tables and access to a GPU cracker)
  5. Propagate the attack to the rest of the network (Distributed Metastasis)

 

Social Engineering

Social Engineering is the art of manipulating people into divulging information or performing actions usually for the purpose of gaining access to a computer system or network connected resource. It is similar to fraud, but the attacker very rarely comes face-to-face with his or her victims. Today, Social Engineering is used to help facilitate the delivery of technological attacks like the planting of malware, spy devices, etc.

During an engagement in 2012, Netragard used Social Engineering to execute an irrecoverable infrastructure compromise against one of its healthcare customers. This was done through a job opportunity that was posted on our customer’s website. Specifically, our customer was looking to hire a Web Application Developer who understood how to design secure applications. We built an irresistible resume and established fake references, which quickly landed us an on-site interview. When we arrived, we were picked up by our contact and taken to his office. While sitting there, we asked him for a glass of water and he promptly left us alone in his office for roughly 2 minutes. During that time, we used a USB device to infect his desktop computer with RADON (our pseudo-malware). When he returned we thanked him for the water and continued on with the interview. In the end, we were offered the job but turned it down (imagine if we had accepted it).

If you aren’t subjecting your staff to real social engineering then you aren’t receiving a realistic penetration test. While the example above represents an elevated threat, we test at a variety of different threat levels. The key is to keep it realistic.

Malware

The word malware is derived from “Malicious Software.”  Any software that was written for the purpose of being malicious is by definition malware. This includes but is not limited to trojans, worms, and viruses.  Today malware is evolving and becoming harder and harder to detect. Most people are under the impression that their antivirus software will protect them from malware when in reality their antivirus software is borderline useless. Antivirus software can only detect malware if it knows what to look for. This means that new, never-before-seen variants of malware go undetected so long as they avoid the known behavior patterns that would trigger heuristic detection. What’s more interesting is that we found antivirus software often can’t even detect known malware if that malware has been packed.
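The failure mode is easy to illustrate. In the toy sketch below a “signature” is just a hash of the file’s bytes (real antivirus engines are considerably more sophisticated, and the sample bytes are obviously a placeholder), but the principle holds: packing the very same program produces entirely different bytes, so a byte-pattern signature no longer matches.

```python
import hashlib
import zlib

# Toy "signature database": hashes of known-bad files.
known_sample = b"...bytes of a known malicious program..."
signature_db = {hashlib.sha256(known_sample).hexdigest()}

def is_detected(sample: bytes) -> bool:
    """Byte-signature check: does this file's hash match a known-bad hash?"""
    return hashlib.sha256(sample).hexdigest() in signature_db

# "Pack" the same program -- here, trivially, with zlib compression.
packed_sample = zlib.compress(known_sample)

print(is_detected(known_sample))   # True  -- matches the known signature
print(is_detected(packed_sample))  # False -- same program, different bytes
```

Real packers also encrypt and add an unpacking stub, but the effect on naive signature matching is the same as shown here.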

We used RADON, our home-brew “safe” malware, in nearly every engagement in 2012. RADON is designed to enable us to infect customer systems in a safe and controllable manner. Safe means that every strand is built with an expiration date that, when reached, results in RADON performing an automatic and clean self-removal. Safe also means that we have the ability to tell RADON to uninstall at any point during the engagement should a customer request it. RADON is 100% undetectable and will completely evade all antivirus software, intrusion prevention / detection systems, etc. Why did we build RADON? Because we need the same capabilities as the bad guys; if we didn’t have them, our testing wouldn’t be realistic.

One last thing that you should know about malware is that it doesn’t usually exploit technological vulnerabilities to infect systems. Instead, it exploits human gullibility, and we all know that humans are far easier to exploit than technology! The “ILOVEYOU” worm is a prime example. The worm would email itself to a victim with a subject of “I LOVE YOU” and an attachment titled “LOVE-LETTER-FOR-YOU.txt.vbs,” which was actually a copy of the worm. When a person attempted to read the attachment, they would inadvertently run the copy and infect their own computer. Once infected, the worm would begin the process again and email copies of itself to the first 50 email addresses in the victim’s address book. This technique of exploiting gullibility was so successful that in the first 10 days, more than 50 million infections were reported. Had people spent more time educating each other about the risks of socially augmented technical attacks then the impact may have been significantly reduced.

Configuration Vulnerabilities

Configuration Vulnerabilities are most commonly created when third parties deploy software on customer networks.  They often fail to remove setup files, default accounts, default credentials, etc.  As a result, hackers scan the internet for common configuration vulnerabilities and often use them to penetrate affected systems.  Sometimes configuration vulnerabilities aren’t useful to an attacker for anything other than destruction.

For example, our team was recently performing an Advanced External Penetration Test for a small bank on the southern part of the east coast. This bank had an online banking web application that was deployed by a vendor. The portal worked fine, but the vendor’s consultant didn’t clean up the setup files after configuration. During our engagement, we performed directory enumeration against the poorly configured target using wfuzz (standard practice during any penetration testing engagement). Our enumerator identified a directory called “AppSetup,” and when it requested the page from that directory we received a message that read “Database Initialized Successfully.” Yes, that’s right: simply visiting the page wiped out one of the bank’s customer databases. What’s scarier is that this was done from the internet, required no authentication, and would have happened to anyone who simply accessed that specific URL.
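Directory enumeration of the kind wfuzz automates is conceptually simple: request a list of likely directory names and note which ones respond. Below is a minimal stdlib-only Python sketch; the wordlist and target are hypothetical placeholders, and as the story above shows, even a bare GET can be destructive on a misconfigured server, so this must only ever be run against systems you are explicitly authorized to test.

```python
from typing import Optional
from urllib.parse import urljoin
from urllib.request import urlopen
from urllib.error import HTTPError, URLError

# Hypothetical wordlist of common leftover setup/admin directories.
WORDLIST = ["admin", "backup", "AppSetup", "install", "setup"]

def candidate_urls(base: str, words: list) -> list:
    """Build the URLs a directory enumerator would request."""
    return [urljoin(base, word + "/") for word in words]

def probe(url: str) -> Optional[int]:
    """Return the HTTP status code for a URL, or None if unreachable."""
    try:
        with urlopen(url, timeout=5) as resp:
            return resp.status
    except HTTPError as err:
        return err.code  # 403/404 responses still carry useful status info
    except URLError:
        return None

if __name__ == "__main__":
    # example.com is a placeholder -- only probe hosts you may legally test.
    for url in candidate_urls("https://example.com/", WORDLIST):
        print(url, probe(url))
```

Tools like wfuzz add threading, large curated wordlists, and response filtering on top of this same loop.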

Another common Configuration Vulnerability exists within networks of all sizes and has everything to do with Active Directory and the account lockout after login failure setting. When delivering a Penetration Test to two separate customers in 2012 we inadvertently created a company-wide denial of service. In both cases, the reason for the outage was that the customer had bad configurations established in Active Directory. Specifically, their lockout after 3 login failures was set to 24 hours. During both engagements, we were authorized to perform password guessing attacks against all external points of authentication, and in both we managed to lock out nearly every user account for a 24-hour period. One of the customers (a large bank) couldn’t remember their master domain admin password, and so their outage extended to nearly 24 hours. In this particular case, setting a lockout to 24 hours creates a critical availability-impacting vulnerability. We really don’t see any point in setting your lockout to anything greater than 5 minutes (if you want to know why, feel free to leave a comment).
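The reasoning behind a short lockout is easy to quantify. Under a simple model (an assumption, not a full threat model) where an attacker gets at most the threshold number of guesses per lockout window, the lockout duration caps the online guessing rate:

```python
# Rough upper bound on online password guesses per account per day,
# given a lockout policy of `threshold` failures followed by a
# lockout lasting `lockout_seconds`.
def max_guesses_per_day(threshold: int, lockout_seconds: int) -> int:
    windows_per_day = 86400 // lockout_seconds
    return threshold * windows_per_day

# 5-minute lockout: an attacker gets at most ~864 guesses per account
# per day, while a locked-out legitimate user waits only 5 minutes.
print(max_guesses_per_day(3, 5 * 60))     # 864
# 24-hour lockout: only 3 guesses/day, but 3 bad attempts (an
# attacker's, or a typo-prone user's) deny that account for a full day.
print(max_guesses_per_day(3, 24 * 3600))  # 3
```

864 online guesses a day is still far too slow to brute-force any reasonable password, which is why the 5-minute setting trades away essentially no security while eliminating the day-long denial-of-service condition described above.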

In closing, configuration vulnerabilities are the product of unsafe configurations. What is unfortunate is that these vulnerabilities are almost always identified through triggering, which results in an outage or damage of some sort. It’s important to remember that it’s not our fault if your systems are not configured properly.


83% of businesses have no established security plan (but they’ve got Kool-Aid)

I (Adriel) read an article published by Charles Cooper of c|net regarding small businesses and their apparent near total lack of awareness with regards to security.  The article claims that 77% of small- and medium-sized businesses think that they are secure yet 83% of those businesses have no established security plan.  These numbers were based on a survey of 1,015 small- and medium-sized businesses that was carried out by the National Cyber Security Alliance and Symantec.

These numbers don’t surprise me at all; in fact, I think this false sense of security is an epidemic across businesses of all sizes, not just small-to-medium.  The question that people haven’t asked is why this false sense of security exists in such a profound way. Are people really OK with feeling safe when they are in fact vulnerable?  Perhaps they are being lied to and are drinking the Kool-Aid…

What I mean is this.  How many software vendors market their products as secure only to have someone identify all sorts of critical vulnerabilities in it later?  Have you ever heard a software vendor suggest that their software might not be highly secure?  Not only is the suggestion that all software is secure an absurd one, but it is a blatant lie.  A more truthful statement is that all software is vulnerable unless it is mathematically demonstrated to be flawless (which by the way is a near impossibility).

Very few software vendors hire third-party vulnerability discovery and exploitation experts to perform genuine reviews of their products. This is why I always recommend using a third-party service (like us) to vet the software from a security perspective before making a purchase decision.  If the software vendor wants to be privy to the results then they should pay for the engagement because in the end it will improve the product. Why should you (their prospective customer) pay to have their product improved?  Shouldn’t that be their responsibility?  Shouldn’t they be doing this as a part of the software development lifecycle?

Security vendors are equally responsible for promoting a false sense of security.  For example, how many antivirus companies market their technology in such a way that it might be perceived as an end-all, be-all solution to email threats, viruses, trojans, etc.? Have you ever heard an antivirus software vendor say anything like “we will protect you from most viruses, worms, etc.”?  Of course not. That level of honesty would leave doubt in the minds of their customers, which would impede sales.  The truth is, their customers should have doubt because antivirus products are only partially effective and can be subverted, as we’ve demonstrated before.  Despite this fact, uninformed people still feel safe because they use antivirus software.

Let’s not only pick on antivirus software companies, though; what about companies that are supposed to test the security of networks and information systems (like us, for example)?  We discussed this a bit during our “Thank You Anonymous” blog entry.  Most businesses that sell penetration testing services don’t deliver genuine penetration tests despite the fact that they call their services penetration testing services.  What they really sell is the manually vetted product of an automated vulnerability scan.  Moreover, they call this vetting process “manual testing,” and so their customers believe they’ve received a quality penetration test when in fact they are depending on an automated program like Nessus to find flaws in their networks.  This is the equivalent of testing a bulletproof vest with a squirt gun and claiming that it’s been tested with a .50 caliber rifle.  Would you want to wear that vest into battle?

It seems to me that security businesses are so focused on revenue generation that they’ve lost sight of the importance of providing clear, factual, complete and balanced information to the public.  It’s my opinion that their competitive marketing methodologies are a detriment to security and actually help to promote the false sense of security referenced in the c|net article above.  Truth is that good security includes the class of products that I’ve mentioned above but that those products are completely useless without capable, well-informed security experts behind them.  Unfortunately not all security experts are actually experts either (but that’s a different story)…


Selling zero-day’s doesn’t increase your risk, here’s why.

The zero-day exploit market is secretive. People as a whole tend to fear what they don’t understand and substitute speculation for fact.  While very few facts about the zero-day exploit market are publicly available, many facts about zero-days themselves are.  When those facts are studied it becomes clear that the legitimate zero-day exploit market presents an immeasurably small risk (if any), especially when viewed in contrast with known risks.

Many news outlets, technical reporters, freedom of information supporters, and even security experts have used the zero-day exploit market to generate Fear, Uncertainty and Doubt (FUD).  While the concept of a zero-day exploit seems ominous, the reality is far less menacing.  People should be significantly more worried about vulnerabilities that exist in the public domain than about those that are zero-day.  The misrepresentations about the zero-day market create a dangerous distraction from the very real issues at hand.

One of the most common misrepresentations is that the zero-day exploit market plays a major role in the creation of malware and malware’s ability to spread.  Not only is this categorically untrue, but the Microsoft Security Intelligence Report (SIRv11) provides clear statistics showing that malware almost never uses zero-day exploits.  According to SIRv11, less than 6% of malware infections are attributed to the exploitation of vulnerabilities at all.  Of those successful infections, nearly all target known, not zero-day, vulnerabilities.

Malware targets and exploits gullibility far more frequently than technical vulnerabilities.  The “ILOVEYOU” worm is a prime example.  The worm would email itself to a victim with a subject of “I LOVE YOU” and an attachment titled “LOVE-LETTER-FOR-YOU.txt.vbs”. The attachment was actually a copy of the worm.  When a person attempted to read the attachment they would inadvertently run the copy and infect their own computer.  Once infected, the worm would begin the process again and email copies of itself to the first 50 email addresses in the victim’s address book.  This technique of exploiting gullibility was so successful that in the first 10 days over 50 million infections were reported.  Had people spent more time educating each other about the risks of socially augmented technical attacks then the impact may have been significantly reduced.

The Morris worm is an example of a worm that did exploit zero-day vulnerabilities to help its spread.  The Morris worm was created in 1988 and proliferated by exploiting multiple zero-day vulnerabilities in various internet-connected services.  The worm was not intended to be malicious, but ironically a design flaw caused it to malfunction, resulting in a Denial of Service condition on infected systems.  The Morris worm existed well before the zero-day exploit market was even a thought, proving that both malware and zero-day exploits will exist with or without the market.  In fact, there is no evidence of any relationship between the legitimate zero-day exploit market and the creation of malware; there is only speculation.

Despite these facts, prominent security personalities have argued that the zero-day exploit market keeps people at risk by preventing the public disclosure of zero-day vulnerabilities. Bruce Schneier wrote, “a disclosed vulnerability is one that – at least in most cases – is patched”.  His opinion is both assumptive and erroneous, yet shared by a large number of security professionals.  The reality is that when a vulnerability is disclosed it is unveiled to both ethical and malicious parties, and those responsible for applying patches don’t respond as quickly as those with malicious intent.

According to SIRv11, 99.88% of all compromises were attributed to the exploitation of known (publicly disclosed) and not zero-day vulnerabilities.  Of those vulnerabilities over 90% had been known for more than one year. Only 0.12% of compromises reported were attributed to the exploitation of zero-day vulnerabilities. Without the practice of public disclosure or with the responsible application of patches the number of compromises identified in SIRv11 would have been significantly reduced.

The Verizon 2012 Data Breach Investigations Report (DBIR) also provides some interesting insight into compromises.  According to DBIR 97% of breaches were avoidable through simple or intermediate controls (known / detectable vulnerabilities, etc.), 92% were discovered by a third party and 85% took two weeks or more to discover. These statistics further demonstrate that networks are not being managed responsibly. People, and not the legitimate zero-day exploit market, are keeping themselves at risk by failing to responsibly address known vulnerabilities.  A focus on zero-day defense is an unnecessary distraction for most.

Another issue is the notion that security researchers should give their work away for free.  Initially it was risky for researchers to notify vendors about security flaws in their technology.  Some vendors attempted to quash the findings with legal threats, and others would treat researchers with such hostility that it would drive them to the black market.  Some vendors remain hostile even today, but most will happily accept a researcher’s hard work provided that it’s delivered free of charge.  To us the notion that security researchers should give their work away for free is absurd.

Programs like ZDI and what was once iDefense (acquired by VeriSign) offer relatively small bounties to researchers who provide vulnerability information.  When a new vulnerability is reported these programs notify their paying subscribers well in advance of the general public.  They do make it a point to work with the manufacturer to close the hole but only after they’ve made their bounty.  Once the vendors have been notified (and ideally a fix created) public disclosure ensues in the form of an email-based security advisory that is sent to various email lists.  At that point, those who have not applied the fix are at a significantly increased level of risk.

Companies like Google and Microsoft are stellar examples of what software vendors should do with regards to vulnerability bounty programs.  Their programs motivate the research community to find and report vulnerabilities back to the vendor.  The existence of these programs is a testament to how seriously both Google and Microsoft take product security. Although these companies (and possibly others) are moving in the right direction, they still have to compete with prices offered by other legitimate zero-day buyers.  In some cases those prices offered are as much as 50% higher.

Netragard is one of those entities. We operate the Exploit Acquisition Program (EAP), which was established in early 2000 as a way to provide ethical security researchers with top dollar for their work product. In 2011 Netragard’s minimum acquisition price (payment to researcher) was $20,000.00, which is significantly greater than the minimum payout from most other programs.  Netragard’s EAP buyer information, as with any business’ customer information, is kept in the highest confidence.  Netragard’s EAP does not practice public vulnerability disclosure for the reasons cited above.

Unlike VUPEN, Netragard will only sell its exploits to US based buyers under contract.  This decision was made to prevent the accidental sale of zero-day exploits to potentially hostile third parties and to prevent any distribution to the Black Market.  Netragard also welcomes the exclusive sale of vulnerability information to software vendors who wish to fix their own products.  Despite this, not a single vendor has approached Netragard with the intent to purchase vulnerability information.  This seems to indicate that most software vendors are still more focused on revenue than they are on end-user security.  This is unfortunate because software vendors are the source of vulnerabilities.

Most software vendors do not hire developers that are truly proficient at writing safe code (the proof is in the statistics). Additionally, very few software vendors have genuine security testing incorporated into their Quality Assurance process.  As a result, software vendors literally (and usually accidentally) create the vulnerabilities that are exploited by hackers and used to compromise their customers’ networks. Yet software vendors continue to inaccurately tout their software as being secure when in fact it isn’t.

If software vendors begin to produce truly secure software then the zero-day exploit market will cease to exist or will be forced to make dramatic transformations. Malware however would continue to thrive because it is not exploit dependent.  We are hopeful that Google and Microsoft will be trend setters and that other software vendors will follow suit.  Finally, we are hopeful that people will do their own research about the zero-day exploit markets instead of blindly trusting the largely speculative articles that have been published recently.



Thank You Anonymous

We (Netragard) have been meaning to say Thank You to Anonymous for a long time now. With that said, Netragard does not condone the actions of Anonymous, nor the damage they have caused.   What Anonymous has demonstrated, and continues to demonstrate, is just how poorly most network infrastructures are managed from a security perspective (globally, not just within the USA).  People need to wake up.

If you take the time to look at most of the hacks done by Anonymous, you’ll find that their primary points of entry are really quite basic.  They often involve the exploitation of simple SQL Injection vulnerabilities, poorly configured servers, or even basic Social Engineering.  We’re not convinced that Anonymous is talentless; we just think that they haven’t had to use their talent because the targets are so soft.

What Anonymous has really exposed here are issues with the security industry as a whole and with the customers that are being serviced. Many of Anonymous’s victims use third party Penetration Testing vendors and nightly Vulnerability Scanning services.  Many of them even use “best of breed” Intrusion Prevention Systems and “state of the art” firewalls.  Despite this, Anonymous still succeeds at Penetration with relative ease, without detection and by exploiting easy to identify vulnerabilities.  So the question becomes, why is Anonymous so successful?

Part of the reason is that regulatory requirements like PCI DSS 11.3 (and others) shift businesses from wanting Penetration Tests for security reasons to needing Penetration Tests in order to satisfy regulatory requirements.  What is problematic is that most regulatory requirements provide no minimum quality standard for Penetration Testing.  They also provide no incentive for quality testing.  As a result, anyone with an automated vulnerability scanner and the ability to vet results can deliver bare minimum services and satisfy the requirement.

We’ll drive this point home with an analogy.  Suppose you manufacture bulletproof vests for the military.  Regulations state that each version of your vest must pass a Penetration Test before you can sell it to the military.   The regulations do not define a quality standard against which the vests should be tested.  Since your only goal is to satisfy regulations, you hire the lowest bidder.  They perform Penetration Testing against your bulletproof vest using a squirt gun.  Once testing is complete you receive a report stating that your vests passed the test.  Would you want to wear that vest into battle?  (Remember, Anonymous uses bullets not water).

This need to receive the so-called “Passing” Penetration Test has degraded the cumulative quality of Penetration Testing services.  There are far more low-quality testing firms than there are high-quality.  Adding to the issue is that the low-quality firms advertise their services using the same language, song and dance as high-quality firms.  This makes it difficult for anyone interested in purchasing high-quality services to differentiate between vendors.  (In fact, we’ve written a white paper about this very thing).

A possible solution to this problem would be for the various regulatory bodies to define a minimum quality standard for Penetration Testing.  This standard should force Penetration Testing firms to test realistically.  This means that they do not ask their customers to add their IP addresses to the whitelist for their Intrusion Prevention Systems or Web Application Firewalls.  It means that they require Social Engineering (including but not limited to the use of home-grown pseudomalware), it means that they test realistically, period.   If a vendor can’t test realistically then the vendor shouldn’t be in the testing business.

What this also means is that people who need Penetration Tests will no longer be able to find a low-cost, low-quality vendor to provide them with a basic check in the box.  Instead, they will actually need to harden their infrastructures or they won’t be compliant.

We should mention that not all businesses are in the market of purchasing low-quality Penetration Testing services.  Some businesses seek out high-quality vendors and truly want to know where their vulnerabilities are.  They not only allow but expect their selected Penetration Testing vendor to test realistically and use the same tactics and talent as the actual threat.  These types of businesses tend to be more secure than average and successfully avoid Anonymous’s victim list (at least so far). They understand that the cost of good security is a fraction of the cost in damages of a single successful compromise, but unfortunately not everyone understands that.

So is it really any surprise that Anonymous has had such a high degree of success?  We certainly don’t think so.  And again, while we don’t condone their actions, we can say thank you for proving our point.


Netragard on Exploit Brokering

Historically, ethical researchers would provide their findings free of charge to software vendors for little more than a mention.  In some cases vendors would react and threaten legal action, citing violations of poorly written copyright laws including but not limited to the DMCA.  To put this into perspective, this is akin to threatening legal action against a driver for pointing out that the brakes on a school bus are about to fail.

This unfriendliness (among various other things) caused some researchers to withdraw from the practice of full disclosure. Why risk doing a vendor the favor of free work when the vendor might try to sue you?

Organizations like CERT help to reduce or eliminate the risk to security researchers who wish to disclose vulnerabilities.  These organizations work as mediators between the researchers and the vendors to ensure safety for both parties.  Other organizations like iDefense and ZDI also work as middlemen but, unlike CERT, earn a profit from the vulnerabilities that they purchase. While they may pay a security researcher an average of $500–$5,000 per vulnerability, they charge their customers significantly more for their early-warning services.  It’s also unclear (to us, anyway) how quickly they notify vendors of the vulnerabilities that they buy.

The next level of exploit buyers are the brokers.  Exploit brokers may cater to one or more of three markets: National, International, or Black.  While Netragard’s program only sells to National buyers, companies like VUPEN sell internationally.  Also unlike VUPEN, Netragard will sell exploits to software vendors willing to engage in an exclusive sale.  Netragard’s Exploit Acquisition Program was created to provide ethical researchers with the ability to receive fair pay for their hard work; it was not created to keep vulnerable software vulnerable.  Our bidding starts at $10,000 per exploit and goes up from there.

 

It’s important to understand what a computer exploit is and is not.  It is a tool or technique that makes full use of and derives benefit from vulnerable computer software.  It is not malware, despite the fact that malware may contain methods for exploitation.  The software vulnerabilities that exploits make use of are created by software vendors during the development process.  The idea that security researchers create vulnerabilities is absurd.  Instead, security researchers study software and find the flaws that already exist.

The behavior of an exploit with regards to malevolence or benevolence is defined by the user and not the tool.  Buying an exploit is much like buying a hammer in that they can both be used to do something constructive or destructive.  For this reason it’s critically important that any ethical exploit broker thoroughly vet their customers before selling an exploit.  Any broker that does not thoroughly vet their customers is operating irresponsibly.

What our customers do with the exploits that they buy is none of our business, just as what you do with your laptop is none of its vendor's business.   That being said, any computer system is far more dangerous than any exploit.  An exploit can only target one very specific thing in a very specific way and has a limited shelf life. It is not entirely uncommon for vulnerabilities to be accidentally fixed, thus rendering a 0-day exploit useless.  A laptop, on the other hand, has an average shelf life of 3 years and can attack anything that's connected to a network.   In either case, it's not the laptop or the exploit that represents danger; it's the intent of the user.

Finally, most of the concerns about malware, spyware, etc. are not only unfounded and unrealistic, but absolutely absurd.  Consider that businesses like VUPEN want to prevent vendors from fixing vulnerabilities.  If VUPEN were to provide an exploit to a customer for the purpose of creating malware, then that would guarantee the death of the exploit.  Specifically, when malware spreads, antivirus companies capture and study it.  They would most certainly identify the method of propagation (the exploit), which in turn would result in the vendor fixing the vulnerability.


Hacking the Sonexis ConferenceManager

Netragard’s Penetration Testing services use a research based methodology called Real Time Dynamic Testing™. Research based methodologies are different in that they focus on identifying both new and known vulnerabilities, whereas standard methodologies usually, if not always, identify only known vulnerabilities. Sometimes when performing research based penetration testing we identify issues that not only affect our customer but also have the potential to impact anyone using a particular technology. Such was the case with the Sonexis ConferenceManager.

The last time we came across a Sonexis ConferenceManager we found a never before discovered Blind SQL Injection vulnerability.  This time we found a much more serious (also never before discovered) authorization vulnerability. We felt that this discovery deserved a blog entry to help make people aware of the issue as quickly as possible.

What really surprised us about this vulnerability was its simplicity and the fact that nobody (not even us) had found it before.  Discovery and exploitation required no wizardry or special talent. We simply had to browse to the affected area of the application and we were given the keys to the kingdom (literally). Even scarier, this vulnerability could lead to a mass compromise if automated with a specialized Google search (but we won't give more detail on that here, yet).

So let's dig in…

All versions of the Sonexis ConferenceManager fail to check whether users attempting to access the “/admin/backup/settings.asp”, “/admin/backup/download.asp”, or “/admin/backup/upload.asp” pages are authorized. Because of this, anyone can browse to one of those pages without first authenticating.  When they do, they'll have full administrative privileges over the respective Sonexis ConferenceManager pages.  A screen shot of the “settings.asp” page is provided below.
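As a rough illustration, a defender could probe their own installation for the missing authorization check with a short script like the one below. The host URL, timeout, and the "does the response look like an admin page" heuristic are our own assumptions for the sketch, not documented Sonexis behavior:

```python
import urllib.request

# The three backup pages reported to lack any authorization check.
UNPROTECTED_PAGES = [
    "/admin/backup/settings.asp",
    "/admin/backup/download.asp",
    "/admin/backup/upload.asp",
]

def candidate_urls(base_url):
    """Build the full URLs to probe for a given ConferenceManager host."""
    return [base_url.rstrip("/") + page for page in UNPROTECTED_PAGES]

def looks_vulnerable(status_code, body):
    """Heuristic: an anonymous request that is served the page outright
    (HTTP 200, no login form) suggests the authorization check is missing."""
    return status_code == 200 and "login" not in body.lower()

def probe(base_url):
    """Request each page without credentials; return the URLs served anyway."""
    exposed = []
    for url in candidate_urls(base_url):
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                body = resp.read().decode(errors="replace")
                if looks_vulnerable(resp.status, body):
                    exposed.append(url)
        except Exception:
            pass  # unreachable host or error response; not confirmed
    return exposed
```

An empty result from `probe()` is not proof of safety, of course; it only means the naive heuristic found nothing, so patching remains the real fix.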

The first thing that we noticed when we accessed the page was that the fields were filled out for us.  This made us curious, especially since the credentials appeared to belong to our customer's domain.  When we looked at the document source we found that we had not only the “User ID:” but also the “Password:” in clear text.
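For illustration, pre-filled form values like these can be recovered straight from the page source with a few lines of Python. The field names below ("UserID", "Password") are hypothetical stand-ins for whatever the real form uses, and the deliberately naive regex assumes each input's name attribute precedes its value attribute:

```python
import re

def extract_prefilled_fields(html):
    """Pull the value attribute out of each <input> element, keyed by its
    name attribute. Naive regex parsing: assumes name= appears before
    value= within the tag, which holds for this illustrative sketch."""
    fields = {}
    for match in re.finditer(
        r'<input[^>]*name="([^"]+)"[^>]*value="([^"]*)"', html, re.IGNORECASE
    ):
        name, value = match.groups()
        fields[name] = value
    return fields
```

Running it over a page containing `<input name="Password" value="s3cret">` would hand back the cleartext password without the attacker ever touching the rendered form.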

As it turned out, compromising our customer's IT infrastructure was as simple as using the disclosed credentials to VPN into their network.  Once in, we used the same credentials to access Active Directory and to create a second domain administrator account called “netragard”.  We also downloaded the entire password table from Active Directory and began cracking it with hashcat.  While that was cracking, we used our new domain admin account to access any resource that authenticated against Active Directory.  Suffice it to say, we had used the Sonexis ConferenceManager vulnerability to compromise the entire IT infrastructure.

But the vulnerabilities didn’t stop there…

As it turns out, we could also download the Sonexis ConferenceManager Microsoft SQL database in its entirety.  This was done by changing the backup configuration in the “settings.asp” page.  Once we pointed it at a location we controlled (after configuring Samba locally), we were able to download the database.


After we downloaded the file (in zip format), we decompressed it.  Decompression revealed the following contents:

Once decompressed, we loaded the files into our local Microsoft SQL database and began to explore the contents.  Not only did we have audio recordings, configuration settings, and other sensitive data, but the administrative password for the Sonexis ConferenceManager was also stored in plain text, as shown in the screen shot below. This in and of itself is yet another vulnerability.

We were able to use the credentials to login to the Sonexis ConferenceManager without issue…

Last but not least…

We found that it was also possible to insert a backdoor into our local copy of the Sonexis ConferenceManager database.  Once the backdoor was created, we could re-zip the files and upload the “infected” Microsoft SQL database back to the Sonexis ConferenceManager.  Once loaded, the backdoor would activate, allowing the attacker to regain entry to the system.

Regarding vendor notification…

Sonexis was notified on 1/31/2012 about the authorization vulnerabilities disclosed in this article.  Sonexis responded once (with a less than friendly, non-cooperative response) on 2/1/2012 and a second time (with a very friendly cooperative response) on 2/6/2012.  We replied to the second response providing the full details of our research to Sonexis.  Sonexis took the information and had a quick fix ready for their customers the next day on 02/07/2012!  They notified their customers that same day with the following email.

We'd like to thank Sonexis for taking the time to be receptive and for working with us.  Not only is that the right thing to do, but it showed us that Sonexis takes their customers' security very seriously.  As it turned out, the initial (less than friendly) response was due to a miscommunication (someone not paying attention to what we were telling them).  The second response came from someone else who was a great pleasure to work with.  Suffice it to say that, in the end, we were really quite impressed with how quickly Sonexis pushed out a fix (and yes, it works; we verified that).

If you are a Sonexis ConferenceManager user we strongly urge you to update your system now.

Updated on: 02-16-2012

This vulnerability can be exploited from the Internet.   The image below shows a small sample of Sonexis ConferenceManager users who are vulnerable.  This sample was identified using a combination of Ruby and a specially crafted Google search.
