Archive for the ‘Exploits’ Category

Whistleblower Series – The real problem with China isn’t China, it’s you.

Terms like China, APT and Zero-Day are synonymous with Fear, Uncertainty and Doubt (FUD).  The trouble is that, in our opinion anyway, these terms and the news articles that use them distract from the actual problem.  For example, in 2011 only 0.12% of compromises were attributed to zero-day exploitation while 99.88% were attributed to known vulnerabilities.  Yet despite this fact, the media continued to write about the zero-day threat as if it were a matter of urgency.  What they really should have been writing about is that the majority of people aren’t protecting their networks properly.  After all, if 99.88% of all compromises were the result of the exploitation of known vulnerabilities, then someone must not have been doing their job.  Moreover, if people are unable to protect their networks from the known threat, how are they ever going to defend against the unknown?

All of the recent press about China and their Advanced Persistent Threat is the same; it distracts from the real problem.  More clearly, the problem isn’t China, Anonymous, LulzSec, or any other FUD-ridden buzzword.  The problem is that networks are not being maintained properly from a security perspective, and so threats are aligning with risks to successfully effect penetration.  A large part of the reason why these networks are such soft targets is that their maintainers are sold a false sense of security from both the services and the technology perspective.

In this article we’ll show you how easy it was for us to hack into a sensitive government network that was guarded by industry technologies and testing best practices.  Our techniques deliberately mimicked those used by China.  You’ll notice that the techniques aren’t particularly advanced (despite the fact that the press calls them Advanced) and in fact are based largely on common sense.  You’ll also notice that we don’t exploit a single vulnerability other than those that exist in a single employee (the human vulnerability).  Because this article is based on an actual engagement, we’ve altered certain aspects of our story to protect our customer and their identity.  We should also mention that since the delivery of this engagement our customer has taken effective steps to defeat this style of attack.

Here’s the story…

We recently (relatively speaking anyway) delivered a nation-state style attack against one of our public sector customers.  In this particular case our testing was unrestricted, so we were authorized to perform Physical, Social and Electronic attacks.  We were also allowed to use any techniques and technologies that were available should we feel the need.

Let’s start by explaining that our customer’s network houses databases of sensitive information that would be damaging if exposed.  They also have significantly more political weight and authority than most of the other departments in their area.  The reason they were interested in receiving a nation-state style Penetration Test is that another department within the same area had come under attack by China.

We began our engagement with reconnaissance.  During this process we learned that the target network was reasonably well locked down from a technical perspective.  The external attack surface provided no services or applications into which we could sink our teeth.  We took a few steps back and found that the supporting networks were equally protected from a technical perspective.  We also detected three points where Intrusion Prevention was taking place: one at the state level, one at the department level, and one functioning at the host level.

(It was because of these protections and minimalistic attack surface that our client was considered by many to be highly secure.)

When we evaluated our customer from a social perspective we identified a list of all employees published on a web server hosted by yet another department.  We were able to build micro-dossiers for each employee by comparing employee names and locations against various social media sites and studying their respective accounts.  We quickly constructed a list of which employees were not active within specific social networking sites and lined them up as targets for impersonation, just in case we might need to become someone else.

We also found a large document repository (not the first time we’ve found something like this) that was hosted by a different department.  Contained within this repository was a treasure trove of Microsoft Office files with respective change logs and associated usernames.  We quickly realized that this repository would be an easy way into our customer’s network.  Not only did it provide a wealth of metadata, but it also provided perfect pretexts for Social Engineering… and yes, it was open to the public.
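Harvesting usernames and revision history from such a repository requires no special tooling.  OOXML formats like .docx are simply ZIP archives, and the author fields live in docProps/core.xml.  Here is a minimal sketch of that idea; it builds a stand-in document in memory so the example is self-contained, and all names in it are hypothetical:

```python
import io
import zipfile
import xml.etree.ElementTree as ET

# XML namespaces used by OOXML core properties.
DC = "{http://purl.org/dc/elements/1.1/}"
CP = "{http://schemas.openxmlformats.org/package/2006/metadata/core-properties}"

def office_metadata(doc_bytes):
    """Return author-related metadata from an OOXML document (e.g. .docx)."""
    with zipfile.ZipFile(io.BytesIO(doc_bytes)) as zf:
        root = ET.fromstring(zf.read("docProps/core.xml"))
    return {
        "creator": root.findtext(DC + "creator"),
        "last_modified_by": root.findtext(CP + "lastModifiedBy"),
    }

# Build a minimal stand-in "document" so the sketch runs without a real file.
core_xml = (
    '<cp:coreProperties '
    'xmlns:cp="http://schemas.openxmlformats.org/package/2006/metadata/core-properties" '
    'xmlns:dc="http://purl.org/dc/elements/1.1/">'
    '<dc:creator>alice</dc:creator>'
    '<cp:lastModifiedBy>bob</cp:lastModifiedBy>'
    '</cp:coreProperties>'
)
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("docProps/core.xml", core_xml)

print(office_metadata(buf.getvalue()))
# → {'creator': 'alice', 'last_modified_by': 'bob'}
```

In a real repository, every downloadable document leaks a creator/reviser pair like this, which is exactly the kind of trust relationship an attacker looks for.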

(It became glaringly apparent to us that our customer had overlooked their social points of risk and exposure and was highly reliant on technology to protect their infrastructure and information.  It was also apparent that our customer was unaware of their own security limitations with regard to technological protections.  Based on what we read in social forums, they thought that IPS/IDS and antivirus technologies provided ample protection.)

We downloaded a Microsoft Office document whose change log indicated that it was being passed back and forth between one of our customer’s employees and an employee in a different department.  This was ideal because a trust relationship had already been established and was ripe for the taking.  Specifically, it was “normal” for one party to email this document to the other as a trusted attachment.  That normalcy provided the perfect pretext.

(Identifying a good pretext is critically important when planning Social Engineering attacks.  A pretext is a believable but false reason for doing something.  Providing a good pretext often causes a victim to perform actions that they should not perform.)

Once our document was selected we had our research team build custom malware specifically for the purpose of embedding it into the Microsoft Office document.  As is the case with all of our malware, we built in automatic self-destruct features and expiration dates that would result in clean deinstallation when triggered.  We also built in covert communication capabilities to help avoid detection.

(Building our own malware was not a requirement but it is something that we do.  Building custom malware ensures that it will not be detected, and it ensures that it will have the right features and functions for a particular engagement.)

Once the malware was built and embedded into the document we tested it in our lab network.  The malware functioned as designed and quickly established connectivity with its command and control servers.   We should mention that we don’t build our malware with the ability to propagate for liability and control reasons.  We like keeping our infections highly targeted and well within scope.

When we were ready to begin our attack we fired off a tool designed to generate a high rate of false positives in Intrusion Detection / Prevention systems and Web Application Firewalls alike.  We ran that tool against our customer’s network through hundreds of different proxies to force the appearance of multiple attack sources.  Our goal was to hide any real network activity behind a storm of false alarms.

(This demonstrates just how easily Intrusion Detection and Prevention systems can be defeated.  These systems trust network data and generate alerts based on it.  When an attacker forges network data, that attacker can also forge false alerts.  Generate enough false alerts and you destroy the defender’s ability to effectively monitor the real threat.)

While that tool was running we forged an email to our customer’s employee from the aforementioned trusted source.  That email contained our infected Microsoft Office document as an attachment.  Within three minutes our target received the email and opened the infected attachment, thus activating our malware.  Once activated, our malware connected back to its command and control server and we had successfully penetrated our customer’s internal IT infrastructure.  We’ll call this initial point of Penetration T+1.  Shortly after T+1 we terminated our noise generation attack.

(The success of our infection demonstrates how ineffective antivirus technologies are at preventing infections by unknown types of malware.  T+1 was using a current and well-respected antivirus solution.  That solution did not interfere with our activities.  The success of our penetration at this point further demonstrates the risk introduced by the human factor.  We did not exploit any technological vulnerability but instead exploited human trust.)

Now that we had access we needed to ensure that we kept it.  One of the most important aspects of maintaining access is monitoring user activity.  We began taking screenshots of T+1’s desktop every 10 seconds.  One of those screenshots showed T+1 forwarding our email to their IT department because they thought that the attachment looked suspicious.  While we were expecting rapid incident response, the response never came.  Instead, to our surprise, we received a secondary command and control connection from a newly infected host (T+2).

When we began exploring T+2 we quickly realized that it belonged to our customer’s head of IT Security.  We later learned that he had received the email from T+1 and scanned it with two different antivirus tools.  When the email came back clean he opened the attachment and infected his own machine.  Suspecting nothing, he continued to work as normal… and so did we.

Next we began exploring the file system on T+1.  When we looked at the directory containing users’ batch files we realized that their Discretionary Access Control Lists (DACLs) granted full permissions to everyone (not just domain users).  More clearly, we could read, write, modify and delete any of the batch files, and so we decided to exploit this condition to commit a mass compromise.

To make this a reality, our first task was to identify a share that would allow us to store our malware installer.  This share had to be accessible in such a way that when a batch file ran it could read and execute the installer.  As it turned out, the domain controllers were configured with their entire C:\ drives shared and the same wide-open DACLs.

We located a path where other installation files were stored on one of the domain controllers.  We placed a copy of our malware into that location and named it “infection.exe”.  Then using a custom script we modified all user batch files to include one line that would run “infection.exe” whenever a user logged into a computer system.  The stage was set…
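The underlying misconfiguration (logon scripts that anyone can modify) is straightforward to audit for defensively.  Below is a minimal sketch of such an audit, using POSIX permission bits as a stand-in for Windows DACLs; a real Windows audit would inspect each file’s DACL with icacls or the Win32 security APIs, and every path here is hypothetical:

```python
import os
import stat
import tempfile

def world_writable(paths):
    """Flag files whose permission bits let any user modify them.

    POSIX sketch of the audit idea; the "other" write bit plays the
    role of an Everyone:Full Control entry in a Windows DACL.
    """
    flagged = []
    for path in paths:
        mode = os.stat(path).st_mode
        if mode & stat.S_IWOTH:  # writable by any user on the system
            flagged.append(path)
    return flagged

# Demonstration with two temporary stand-in "logon scripts".
tmp = tempfile.mkdtemp()
safe = os.path.join(tmp, "safe.bat")
loose = os.path.join(tmp, "loose.bat")
for p in (safe, loose):
    with open(p, "w") as f:
        f.write("@echo off\n")
os.chmod(safe, 0o644)   # owner-writable only
os.chmod(loose, 0o666)  # writable by anyone: the condition abused above

print(world_writable([safe, loose]))  # flags only the loose script
```

Running a check like this across logon-script directories and shared drives would have surfaced the condition we exploited long before we did.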

In no time we started receiving connections back from user desktops, including those belonging to each domain admin.  We also received connections back from domain controllers, exchange servers, file repositories, etc.  Each time someone accessed a system it became infected and fell under our control.

While exploring the desktops of the domain administrators we found that one admin kept a directory called “secret”.  Within that directory were backups for all of the network devices including intrusion prevention systems, firewalls, switches, routers, antivirus, etc.  From that same directory we also extracted Visio diagrams complete with IP addresses, system types, departments, employee telephone lists, addresses, emergency contact information, etc.

At this point we decided that our next step would be to crack all passwords.  We dumped password hashes from the domain controller and extracted passwords from the backed-up configuration files.  We then fed our hashes to our trusted cracking machine and successfully cracked 100% of the passwords.

(We should note that during the engagement we found that most users had not changed their passwords since their accounts had been created.  We later learned that this was a recommendation made by their IT staff, based on an article written by a well-respected security figure.)

With this done, we had achieved an irrecoverable and total infrastructure compromise.  Had we been a true foe, there would have been no effective way to remove our presence from the network.  We achieved all of this without encountering any resistance, without exploiting a single technological vulnerability, without detection (other than the forwarded email, of course) and without the help of China.  China might be hostile, but then again so are we.

Selling zero-days doesn’t increase your risk, here’s why.

The zero-day exploit market is secretive.  People as a whole tend to fear what they don’t understand and substitute speculation for fact.  While very few facts about the zero-day exploit market are publicly available, many facts about zero-days themselves are.  When those facts are studied it becomes clear that the legitimate zero-day exploit market presents an immeasurably small risk (if any), especially when viewed in contrast with known risks.

Many news outlets, technical reporters, freedom of information supporters, and even security experts have used the zero-day exploit market to generate Fear, Uncertainty and Doubt (FUD).  While the concept of a zero-day exploit seems ominous, the reality is far less menacing.  People should be significantly more worried about vulnerabilities that exist in the public domain than about those that are zero-day.  The misrepresentations about the zero-day market create a dangerous distraction from the very real issues at hand.

One of the most common misrepresentations is that the zero-day exploit market plays a major role in the creation of malware and malware’s ability to spread.  Not only is this categorically untrue, but the Microsoft Security Intelligence Report (SIRv11) provides clear statistics showing that malware almost never uses zero-day exploits.  According to SIRv11, less than 6% of malware infections are attributed to the exploitation of vulnerabilities at all.  Of those successful infections nearly all target known, not zero-day, vulnerabilities.

Malware targets and exploits gullibility far more frequently than technical vulnerabilities.  The “ILOVEYOU” worm is a prime example.  The worm would email itself to a victim with a subject of “I LOVE YOU” and an attachment titled “LOVE-LETTER-FOR-YOU.txt.vbs”.  The attachment was actually a copy of the worm.  When a person attempted to read the attachment they would inadvertently run the copy and infect their own computer.  Once infected, the worm would begin the process again and email copies of itself to the first 50 email addresses in the victim’s address book.  This technique of exploiting gullibility was so successful that over 50 million infections were reported in the first 10 days.  Had people spent more time educating each other about the risks of socially augmented technical attacks, the impact might have been significantly reduced.

The Morris worm is an example of a worm that did exploit zero-day vulnerabilities to help it spread.  The Morris worm was created in 1988 and proliferated by exploiting multiple zero-day vulnerabilities in various Internet-connected services.  The worm was not intended to be malicious, but ironically a design flaw caused it to malfunction, which resulted in a Denial of Service condition on the infected systems.  The Morris worm existed well before the zero-day exploit market was even a thought, thus proving that both malware and zero-day exploits will exist with or without the market.  In fact, there is no evidence of any relationship between the legitimate zero-day exploit market and the creation of malware; there is only speculation.

Despite these facts, prominent security personalities have argued that the zero-day exploit market keeps people at risk by preventing the public disclosure of zero-day vulnerabilities.  Bruce Schneier wrote, “a disclosed vulnerability is one that – at least in most cases – is patched”.  His opinion is both assumptive and erroneous, yet it is shared by a large number of security professionals.  The reality is that when a vulnerability is disclosed it is unveiled to both ethical and malicious parties, and those who are responsible for applying patches don’t respond as quickly as those with malicious intent.

According to SIRv11, 99.88% of all compromises were attributed to the exploitation of known (publicly disclosed) and not zero-day vulnerabilities.  Of those vulnerabilities over 90% had been known for more than one year. Only 0.12% of compromises reported were attributed to the exploitation of zero-day vulnerabilities. Without the practice of public disclosure or with the responsible application of patches the number of compromises identified in SIRv11 would have been significantly reduced.

The Verizon 2012 Data Breach Investigations Report (DBIR) also provides some interesting insight into compromises.  According to DBIR 97% of breaches were avoidable through simple or intermediate controls (known / detectable vulnerabilities, etc.), 92% were discovered by a third party and 85% took two weeks or more to discover. These statistics further demonstrate that networks are not being managed responsibly. People, and not the legitimate zero-day exploit market, are keeping themselves at risk by failing to responsibly address known vulnerabilities.  A focus on zero-day defense is an unnecessary distraction for most.

Another issue is the notion that security researchers should give their work away for free.  Initially it was risky for researchers to notify vendors about security flaws in their technology.  Some vendors attempted to quash the findings with legal threats, and others would treat researchers with such hostility that it would drive them to the black market.  Some vendors remain hostile even today, but most will happily accept a researcher’s hard work provided that it’s delivered free of charge.  To us the notion that security researchers should give their work away for free is absurd.

Programs like ZDI and what was once iDefense (acquired by VeriSign) offer relatively small bounties to researchers who provide vulnerability information.  When a new vulnerability is reported these programs notify their paying subscribers well in advance of the general public.  They do make it a point to work with the manufacturer to close the hole but only after they’ve made their bounty.  Once the vendors have been notified (and ideally a fix created) public disclosure ensues in the form of an email-based security advisory that is sent to various email lists.  At that point, those who have not applied the fix are at a significantly increased level of risk.

Companies like Google and Microsoft are stellar examples of what software vendors should do with regards to vulnerability bounty programs.  Their programs motivate the research community to find and report vulnerabilities back to the vendor.  The existence of these programs is a testament to how seriously both Google and Microsoft take product security. Although these companies (and possibly others) are moving in the right direction, they still have to compete with prices offered by other legitimate zero-day buyers.  In some cases those prices offered are as much as 50% higher.

Netragard is one of those entities.  We operate the Exploit Acquisition Program (EAP), which was established in early 2000 as a way to provide ethical security researchers with top dollar for their work product.  In 2011 Netragard’s minimum acquisition price (payment to the researcher) was $20,000.00, which is significantly greater than the minimum payout from most other programs.  Netragard’s EAP buyer information, as with any business’s customer information, is kept in the highest confidence.  Netragard’s EAP does not practice public vulnerability disclosure, for the reasons cited above.

Unlike VUPEN, Netragard will only sell its exploits to US-based buyers under contract.  This decision was made to prevent the accidental sale of zero-day exploits to potentially hostile third parties and to prevent any distribution to the Black Market.  Netragard also welcomes the exclusive sale of vulnerability information to software vendors who wish to fix their own products.  Despite this, not a single vendor has approached Netragard with the intent to purchase vulnerability information.  This seems to indicate that most software vendors are still more focused on revenue than on end-user security.  This is unfortunate because software vendors are the source of vulnerabilities.

Most software vendors do not hire developers who are truly proficient at writing safe code (the proof is in the statistics).  Additionally, very few software vendors have genuine security testing incorporated into their Quality Assurance process.  As a result, software vendors literally (and usually accidentally) create the vulnerabilities that are exploited by hackers and used to compromise their customers’ networks.  Yet software vendors continue to inaccurately tout their software as being secure when in fact it isn’t.

If software vendors begin to produce truly secure software then the zero-day exploit market will cease to exist or will be forced to make dramatic transformations. Malware however would continue to thrive because it is not exploit dependent.  We are hopeful that Google and Microsoft will be trend setters and that other software vendors will follow suit.  Finally, we are hopeful that people will do their own research about the zero-day exploit markets instead of blindly trusting the largely speculative articles that have been published recently.



Thank You Anonymous

We (Netragard) have been meaning to say Thank You to Anonymous for a long time now. With that said, Netragard does not condone the actions of Anonymous, nor the damage they have caused.   What Anonymous has demonstrated, and continues to demonstrate, is just how poorly most network infrastructures are managed from a security perspective (globally, not just within the USA).  People need to wake up.

If you take the time to look at most of the hacks done by Anonymous, you’ll find that their primary points of entry are really quite basic.  They often involve the exploitation of simple SQL Injection vulnerabilities, poorly configured servers, or even basic Social Engineering.  We’re not convinced that Anonymous is talentless; we just think that they haven’t had to use their talent because the targets are so soft.
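To make the “basic” nature of these entry points concrete, the classic SQL Injection flaw comes down to building a query string out of untrusted input.  Here is a minimal sketch using Python’s built-in sqlite3 module, shown purely for illustration alongside the parameterized fix (table and names are hypothetical):

```python
import sqlite3

# A throwaway in-memory database standing in for a real backend.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")

def lookup_unsafe(name):
    # VULNERABLE: attacker-controlled input is pasted into the SQL string.
    return conn.execute(
        "SELECT name FROM users WHERE name = '%s'" % name
    ).fetchall()

def lookup_safe(name):
    # SAFE: the driver binds the value; input is never parsed as SQL.
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)
    ).fetchall()

payload = "x' OR '1'='1"
print(lookup_unsafe(payload))  # returns every row: the injection worked
print(lookup_safe(payload))    # returns nothing: payload treated as data
```

The fix has been known for well over a decade, which is exactly the point: these are not advanced attacks, they succeed against unmaintained code.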

What Anonymous has really exposed here are issues with the security industry as a whole and with the customers that are being serviced. Many of Anonymous’s victims use third party Penetration Testing vendors and nightly Vulnerability Scanning services.  Many of them even use “best of breed” Intrusion Prevention Systems and “state of the art” firewalls.  Despite this, Anonymous still succeeds at Penetration with relative ease, without detection and by exploiting easy to identify vulnerabilities.  So the question becomes, why is Anonymous so successful?

Part of the reason is that regulatory requirements like PCI DSS 11.3 (and others) shift businesses from wanting Penetration Tests for security reasons to needing Penetration Tests in order to satisfy regulatory requirements.  What is problematic is that most regulatory requirements provide no minimum quality standard for Penetration Testing.  They also provide no incentive for quality testing.  As a result, anyone with an automated vulnerability scanner and the ability to vet results can deliver bare minimum services and satisfy the requirement.

We’ll drive this point home with an analogy.  Suppose you manufacture bulletproof vests for the military.  Regulations state that each version of your vest must pass a Penetration Test before you can sell it to the military.   The regulations do not define a quality standard against which the vests should be tested.  Since your only goal is to satisfy regulations, you hire the lowest bidder.  They perform Penetration Testing against your bulletproof vest using a squirt gun.  Once testing is complete you receive a report stating that your vests passed the test.  Would you want to wear that vest into battle?  (Remember, Anonymous uses bullets not water).

This need to receive the so-called “Passing” Penetration Test has degraded the cumulative quality of Penetration Testing services.  There are far more low-quality testing firms than high-quality ones.  Adding to the issue, the low-quality firms advertise their services using the same language, song and dance as the high-quality firms.  This makes it difficult for anyone interested in purchasing high-quality services to differentiate between vendors.  (In fact, we’ve written a white paper about this very thing.)

A possible solution to this problem would be for the various regulatory bodies to define a minimum quality standard for Penetration Testing.  This standard should force Penetration Testing firms to test realistically.  This means that they do not ask their customers to add their IP addresses to the whitelist for their Intrusion Prevention Systems or Web Application Firewalls.  It means that they require Social Engineering (including but not limited to the use of home-grown pseudomalware), it means that they test realistically, period.   If a vendor can’t test realistically then the vendor shouldn’t be in the testing business.

What this also means is that people who need Penetration Tests will no longer be able to find a low-cost, low-quality vendor to provide them with a basic check in the box.  Instead, they will actually need to harden their infrastructures or they won’t be compliant.

We should mention that not all businesses are in the market for low-quality Penetration Testing services.  Some businesses seek out high-quality vendors and truly want to know where their vulnerabilities are.  They not only allow but also expect their selected Penetration Testing vendor to test realistically and use the same tactics and talent as the actual threat.  These types of businesses tend to be more secure than average and have successfully avoided Anonymous’s victim list (at least so far).  They understand that the cost of good security is a fraction of the cost in damages of a single successful compromise, but unfortunately not everyone understands that.

So is it really any surprise that Anonymous has had such a high degree of success?  We certainly don’t think so.  And again, while we don’t condone their actions, we can say thank you for proving our point.


Netragard on Exploit Brokering

Historically, ethical researchers would provide their findings free of charge to software vendors for little more than a mention.  In some cases vendors would react and threaten legal action, citing violations of poorly written copyright laws including but not limited to the DMCA.  To put this into perspective, this is akin to threatening legal action against a driver for pointing out that the brakes on a school bus are about to fail.

This unfriendliness (among various other things) caused some researchers to withdraw from the practice of full disclosure. Why risk doing a vendor the favor of free work when the vendor might try to sue you?

Organizations like CERT help to reduce or eliminate the risk to security researchers who wish to disclose vulnerabilities.  These organizations work as mediators between the researchers and the vendors to ensure safety for both parties.  Other organizations like iDefense and ZDI also work as middlemen but, unlike CERT, earn a profit from the vulnerabilities that they purchase.  While they may pay a security researcher an average of $500-$5,000 per vulnerability, they charge their customers significantly more for their early warning services.  It’s also unclear (to us anyway) how quickly they notify vendors of the vulnerabilities that they buy.

The next level of exploit buyers are the brokers.  Exploit brokers may cater to one or more of three markets that include National, International, or Black.  While Netragard’s program only sells to National buyers, companies like VUPEN sell internationally.  Also unlike VUPEN, Netragard will sell exploits to software vendors willing to engage in an exclusive sale.   Netragard’s Exploit Acquisition Program was created to provide ethical researchers with the ability to receive fair pay for their hard work; it was not created to keep vulnerable software vulnerable.  Our bidding starts at $10,000 per exploit and goes up from there.

It’s important to understand what a computer exploit is and is not.  It is a tool or technique that makes full use of, and derives benefit from, vulnerable computer software.  It is not malware, despite the fact that malware may contain methods for exploitation.  The software vulnerabilities that exploits make use of are created by software vendors during the development process.  The idea that security researchers create vulnerabilities is absurd.  Instead, security researchers study software and find the already existing flaws.

The behavior of an exploit with regards to malevolence or benevolence is defined by the user and not the tool.  Buying an exploit is much like buying a hammer in that they can both be used to do something constructive or destructive.  For this reason it’s critically important that any ethical exploit broker thoroughly vet their customers before selling an exploit.  Any broker that does not thoroughly vet their customers is operating irresponsibly.

What our customers do with the exploits that they buy is none of our business, just as what you do with your laptop is none of its vendor’s business.  That being said, any computer system is far more dangerous than any exploit.  An exploit can only target one very specific thing in a very specific way and has a limited shelf life.  It is not entirely uncommon for vulnerabilities to be accidentally fixed, thus rendering a 0-day exploit useless.  A laptop, on the other hand, has an average shelf life of 3 years and can attack anything that’s connected to a network.  In either case, it’s not the laptop or the exploit that represents danger; it’s the intent of its user.

Finally, most of the concerns about malware, spyware, etc. are not only unfounded and unrealistic, but absolutely absurd.  Consider that businesses like VUPEN want to prevent vendors from fixing the vulnerabilities their exploits rely on.  If VUPEN were to provide an exploit to a customer for the purpose of creating malware, that would guarantee the death of the exploit.  Specifically, when malware spreads, antivirus companies capture and study it.  They would most certainly identify the method of propagation (the exploit), which in turn would result in the vendor fixing the vulnerability.


Hacking your car for fun and profit.

Our CEO (Adriel Desautels) recently spoke at the Green Hills Software Elite Users Technology Summit about automotive hacking.  During his presentation a number of reporters were taking photographs, recording audio, etc.  Of all of the articles that came out, one in particular caught our eye.  We made the front page of “Elektronik iNorden”, a Swedish technology magazine that focuses on hardware and embedded systems.  You can see the full article here, but you’ll probably want to translate it:

http://www.webbkampanj.com/ein/1011/?page=1&mode=50&noConflict=1

What really surprised us during the presentation was how many people were in disbelief about the level of risk associated with cars built after 2007.  For example, it really isn’t all that hard to program a car to kill the driver.  In fact, it’s far too easy due to the overall lack of security in cars today.

Think of a car as an IT infrastructure. All of the servers in the infrastructure are critical systems that control things like brakes, seat belts, door locks, engine timing, airbags, lights, the radio, the dashboard display, etc. Instead of these systems being plugged into a switched network, they are plugged into a hub-style network with no segmentation and no security to speak of. The only real difference between the car network and your business network is that the car doesn't have an Internet connection.
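To make the analogy concrete, here is a minimal sketch (in Python, with an invented arbitration ID and payload) of what a classic CAN bus frame carries. Note what the frame does not carry: any sender identity or authentication.

```python
from dataclasses import dataclass

@dataclass
class CANFrame:
    """A classic CAN 2.0 data frame: an 11-bit arbitration ID plus up
    to 8 payload bytes. Note what is missing: there is no sender field
    and no authentication of any kind."""
    arbitration_id: int  # 11-bit identifier, also the bus priority (0x000-0x7FF)
    data: bytes          # 0-8 bytes of payload

    def __post_init__(self):
        if not 0 <= self.arbitration_id <= 0x7FF:
            raise ValueError("classic CAN IDs are 11 bits")
        if len(self.data) > 8:
            raise ValueError("classic CAN payloads are at most 8 bytes")

# Any node on the bus may emit any ID. A hypothetical "door lock" ECU
# listening for ID 0x2A0 cannot tell whether this frame came from the
# body controller or from a device plugged into the OBD-II port.
# (The ID and payload here are made up for the example.)
spoofed = CANFrame(arbitration_id=0x2A0, data=b"\x01")
```

Because every node trusts every frame, "segmentation" on a car network amounts to hoping nobody hostile ever gets bus access.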

Enter the Chevrolet Volt, the first car to have its own IP address. Granted we don’t yet know how the Volt’s IP address will be protected.  We don’t know if each car will have a public IP address or if the cars will be connected to a private network controlled by Chevy (or someone else).  What we do know is that the car will be able to reach out to the Internet and so it will be vulnerable to client side attacks.

So what happens if someone is able to attack the car?

Realistically, if someone is able to hack into the car then they will be able to take full control over almost any component of it. They can apply the brakes, accelerate the car, prevent the brakes from engaging, kill (literally destroy) the engine, apply the brakes on only one side of the car, lock the doors, pretension the seat belts, etc. For those of you who think this is science fiction, it isn't. Here's one of many research papers that demonstrates the risks.

Why is this possible?

This is possible because people adopt technology too quickly and don't stop to think about the risks, but instead are blinded by the convenience that it introduces. We see this in all industries, not just automotive. IT managers, CIOs, CSOs, CEOs, etc. are always purchasing and deploying new technologies without really evaluating the risks. In fact, just recently we had a client purchase a "secure email gateway" technology… it wasn't too secure. We were able to hack it and access every email on the system because it relied on outdated third-party software.

Certainly another contributing factor is that most software developers write vulnerable and buggy code (sorry guys, but it's true). Their code isn't written to be secure; it's written to do a specific thing like handle network traffic, beep your horn, send emails, whatever. Poor code + a lack of security awareness == high risk.

So what can you do ?

Before you decide to adopt new technology make sure that you understand the benefits and the risks associated with the adoption.  If you’re not technical enough (most people aren’t) to do a low-level security evaluation then hire someone (a security researcher) to do it for you.  If you don’t then you could very well be putting yourselves and your customers at serious risk.


Bypassing Antivirus to Hack You

Many people assume that running antivirus software will protect them from malware (viruses, worms, trojans, etc), but in reality the software is only partially effective. This is true because antivirus software can only detect malware that it knows to look for. Anything that doesn’t match a known malware pattern will pass as a clean and trusted file.
Antivirus technologies use virus definition files to define known malware patterns. Those patterns are derived from real world malware variants that are captured in the wild. It is relatively easy to bypass most antivirus technologies by creating new malware or modifying existing malware so that it does not contain any identifiable patterns.
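As a toy illustration of how definition-based detection works, the sketch below flags only byte patterns it already knows about; the signature names and patterns are entirely invented, and real definition files are far richer than a dictionary of byte strings.

```python
# Toy signature database: invented names mapped to invented byte patterns.
SIGNATURES = {
    "Example.Worm.A": bytes.fromhex("deadbeef41414141"),
    "Example.Trojan.B": b"EVIL_MARKER",
}

def scan(sample: bytes) -> list[str]:
    """Return the names of all known signatures found in the sample.
    Anything that matches nothing in the database 'passes as clean'."""
    return [name for name, pattern in SIGNATURES.items() if pattern in sample]

infected = b"MZ...normal header..." + b"EVIL_MARKER" + b"...payload..."
clean = b"MZ...normal header...nothing unusual here..."

print(scan(infected))  # ['Example.Trojan.B']
print(scan(clean))     # []
```

The second sample may be every bit as malicious as the first; the scanner reports it clean simply because its bytes match no known definition.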
One of the modules that our customers can activate when purchasing Penetration Testing services from us is the Pseudo Malware module. As far as we know, we are one of the few Penetration Testing companies to actually use Pseudo Malware during testing. This module enables our customers to test how effective their defenses are against real world malware threats but in a safe and controllable way.
Our choice of Pseudo Malware depends on the target that we intend to penetrate and the number of systems that we intend to compromise. Sometimes we’ll use Pseudo Malware that doesn’t automatically propagate and other times we’ll use auto-propagation. We should mention that this Pseudo Malware is only “Pseudo” because we don’t do anything harmful with it and we use it ethically. The fact of the matter is that this Pseudo Malware is very real and very capable technology.
Once we've determined which Pseudo Malware variant to go with, we need to augment the Pseudo Malware so that it is not detectable by antivirus scanners. We do this by encrypting the Pseudo Malware binary with a special binary encryption tool. This tool ensures that the binary no longer contains patterns that are detectable by antivirus technologies.

Before Encryption: [image: scan results, with most antivirus scanners flagging the file]

After Encryption (Still Infected): [image: scan results, with every antivirus scanner reporting the file as clean]
As you can see from the scan results above, the Pseudo Malware was detected by most antivirus scanners before it was encrypted. We expected this because we chose a variant of Pseudo Malware that contained several known detectable patterns. The second image (after encryption) shows the same Pseudo Malware being scanned after encryption. As you can see, the Pseudo Malware passed all antivirus scanners as clean.

Now that we've prevented antivirus software from being able to detect our Pseudo Malware, we need to distribute it to our victims. Distribution can happen in many ways that include, but are not limited to, infected USB drives, infected CD-ROMs, phishing emails augmented by IDN homograph attacks with the Pseudo Malware attached, Facebook, LinkedIn, MySpace, binding it to PDF-like files, etc.
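As a brief aside on why IDN homograph attacks are convincing, the sketch below swaps a Latin 'a' for the visually identical Cyrillic letter (U+0430). The domain is used purely as an illustration of the encoding, not of any real attack.

```python
# The second character of 'lookalike' is CYRILLIC SMALL LETTER A
# (U+0430), not the Latin 'a' it resembles on screen.
lookalike = "p\u0430ypal.com"
genuine = "paypal.com"

print(lookalike)  # renders indistinguishably from the genuine name
assert lookalike != genuine  # ...yet it is a different string entirely

# The DNS wire form (ACE/punycode) betrays the substitution: the
# mixed-script label is encoded with an 'xn--' prefix.
print(lookalike.encode("idna"))
```

A victim reading the link sees a familiar brand; the resolver sees an unrelated `xn--` domain the attacker registered.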

Our preferred method for infection is email (or maybe not). This is because it is usually very easy to gather email addresses using various existing email harvesting technologies and we can hit a large number of people at the same time. When using email, we may embed a link that points directly to our Pseudo Malware, or we might just insert the malware directly into the email. Infection simply requires that the user click our link or run the attached executable. In either case, the Pseudo Malware is fast and quiet and the user doesn’t notice anything strange.
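A sketch of how such a lure email might be assembled using only Python's standard library. The sender, recipient, subject, filename, and payload bytes are all invented for the example, and the message is only constructed, never sent.

```python
from email.message import EmailMessage

payload = b"MZ..."  # stand-in bytes for an encoded Pseudo Malware binary

msg = EmailMessage()
msg["From"] = "it-support@example.com"   # spoofed, trusted-looking sender
msg["To"] = "victim@example.com"
msg["Subject"] = "Action required: update your VPN client"
msg.set_content("Please run the attached updater before Friday.")

# Attach the executable; the plausible filename does the social
# engineering work.
msg.add_attachment(payload,
                   maintype="application", subtype="octet-stream",
                   filename="vpn-update.exe")

# Actually sending would be a call to smtplib; the delivery details
# of a real engagement are deliberately omitted here.
```

The entire lure is a handful of lines; the hard part of the attack is the pretext, not the code.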

Once a computer is infected with our Pseudo Malware, it connects back to our Command and Control server and grants us access to the system unbeknownst to the user. Once we have access we can do anything that the user can do, including but not limited to seeing the user's screen as if we were right there, running programs, installing software, uninstalling software, activating webcams and microphones, accessing and manipulating hardware, etc. More importantly, we can use that computer to compromise the rest of the network through a process called Distributed Metastasis.
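The connect-back pattern itself can be sketched in a few lines of benign Python: the "implant" initiates an outbound TCP connection to the operator (outbound traffic typically passes perimeter firewalls, which is why calling back beats opening a listening port on the victim), receives a command, and returns a result. Hosts, ports, and the command are invented, and nothing is actually executed.

```python
import socket
import threading

result = {}  # the command output, filled in by the operator side

# Operator side: listen for the implant to call home. Binding to
# port 0 lets the OS pick a free port for this local demonstration.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
c2_port = srv.getsockname()[1]

def c2_server():
    """Accept the inbound call, issue a command, collect the output."""
    conn, _ = srv.accept()
    with conn:
        conn.sendall(b"whoami")                # issue a command
        result["response"] = conn.recv(1024)   # collect the result
    srv.close()

def implant():
    """Infected side: connect OUT to the operator and await orders."""
    with socket.create_connection(("127.0.0.1", c2_port)) as c:
        cmd = c.recv(1024)
        # A real implant would execute cmd and return its output;
        # this benign sketch just acknowledges it.
        c.sendall(b"ran: " + cmd)

t = threading.Thread(target=c2_server)
t.start()
implant()
t.join()
print(result["response"])  # b'ran: whoami'
```

From the victim network's perspective this is just one more outbound TCP session, which is precisely what makes the pattern quiet.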

Despite how easy it is to bypass antivirus technologies, we still very strongly recommend using them, as they keep you protected from known malware variants.


Exploit Acquisition Program – More Details

The recent news on Forbes about our Exploit Acquisition Program has generated a good deal of speculation, controversy, and curiosity. As a result, I've decided to take the time to follow up with this blog entry. Here I'll make a best effort to explain what the Exploit Acquisition Program is, why we decided to launch the program, and how it works.

What it is:
The Exploit Acquisition Program ("EAP") officially started in May of 1999 and is currently being run by Netragard, LLC. The EAP is specifically designed to acquire "actionable research" in the form of working exploits from the security community. The Exploit Acquisition Program is different from other programs because participants receive significantly higher pay for their work and, in most cases, the exploits never become public knowledge.
The exploits that are acquired via the EAP are sold directly to specific US-based clients that have a unique and justifiable need for such technologies. At no point does Netragard sell or otherwise export acquired exploits to any foreign entities, nor do we disclose any information about our buyers or about participating researchers.
Why did we start the EAP?

Netragard launched the EAP to give security researchers the opportunity to receive fair value for their research product. Our bidding prices start at or around $15,000 per exploit. That price is affected by many different variables.
How does the EAP Work?
The EAP works as follows:
  1. Researcher contacts Netragard.
  2. Researcher and Netragard execute a Mutual Nondisclosure Agreement.
  3. Researcher provides a verifiable form of identification to Netragard.
  4. Researcher fills out an Exploit Acquisition Form ("EAF").
  5. Netragard works with the buyer to determine exploit value based on the information provided in the EAF.
  6. Researcher accepts or rejects the price. Note: If rejected, the process stops here.
  7. Researcher submits the exploit code and vulnerability details to Netragard.
  8. Netragard verifies that the exploit works as advertised.
  9. If the exploit does not work as advertised then the researcher is given the opportunity to resolve the issue(s).
  10. If the exploit does work as advertised then the purchase agreement is delivered to the researcher.
  11. Researcher executes purchase agreement and transfers all rights and ownership of the exploit and any information related to the exploit to Netragard. At this point researcher loses all rights to the exploit and its respective information.
  12. Netragard begins the payment process.
  13. Payments are issued in three equal installments over the course of three months.

EAP Rules

  1. Netragard requires exclusivity for all exploits purchased through the EAP.
  2. Ownership of the exploit and its respective vulnerability information is transferred from researcher to Netragard at step 11 above. Prior to step 11, the exploit and its respective vulnerability information are the intellectual property of the researcher. If at any point before step 11 the researcher terminates the acquisition process, then Netragard will destroy any and all information related to the failed transaction. Termination of the sale is not possible after step 11.
  3. Netragard will not identify its buyers.
  4. Netragard will not identify researchers.
  5. All transactions between buyer, Netragard and developer are done legally and contractually. At no point will Netragard engage in illegal activity or with unknown, untrusted, and/or unverifiable sources or entities.
If you are interested in selling your exploit to us, please contact us at eap@netragard.com.