Archive for the ‘Hardware’ Category

Selling zero-days doesn’t increase your risk. Here’s why.

The zero-day exploit market is secretive. People tend to fear what they don’t understand and to substitute speculation for fact.  While very few facts about the zero-day exploit market are publicly available, there are many facts about zero-days themselves that are.  When those facts are studied it becomes clear that the legitimate zero-day exploit market presents an immeasurably small risk (if any), especially when viewed in contrast with known risks.

Many news outlets, technical reporters, freedom of information supporters, and even security experts have used the zero-day exploit market to generate Fear, Uncertainty and Doubt (FUD).  While the concept of a zero-day exploit sounds ominous, the reality is far less menacing.  People should be significantly more worried about vulnerabilities that are already in the public domain than about those that are zero-day.  The misrepresentations about the zero-day market create a dangerous distraction from the very real issues at hand.

One of the most common misrepresentations is that the zero-day exploit market plays a major role in the creation of malware and in malware’s ability to spread.  Not only is this categorically untrue, but the Microsoft Security Intelligence Report (SIRv11) provides clear statistics showing that malware almost never uses zero-day exploits.  According to SIRv11, less than 6% of malware infections are attributed to the exploitation of vulnerabilities at all.  Of those successful infections, nearly all target known rather than zero-day vulnerabilities.

Malware targets and exploits gullibility far more frequently than technical vulnerabilities.  The “ILOVEYOU” worm is a prime example.  The worm would email itself to a victim with a subject of “I LOVE YOU” and an attachment titled “LOVE-LETTER-FOR-YOU.txt.vbs”. The attachment was actually a copy of the worm.  When a person attempted to read the attachment they would inadvertently run the copy and infect their own computer.  Once a computer was infected, the worm would begin the process again and email copies of itself to the first 50 email addresses in the victim’s address book.  This technique of exploiting gullibility was so successful that over 50 million infections were reported in the first 10 days.  Had people spent more time educating each other about the risks of socially augmented technical attacks, the impact might have been significantly reduced.

The Morris worm is an example of a worm that did exploit zero-day vulnerabilities to help it spread.  The Morris worm was created in 1988 and proliferated by exploiting multiple zero-day vulnerabilities in various Internet-facing services.  The worm was not intended to be malicious, but ironically a design flaw caused it to malfunction, which resulted in a Denial of Service condition on infected systems.  The Morris worm existed well before the zero-day exploit market was even a thought, which shows that both malware and zero-day exploits will exist with or without the market.  In fact, there is no evidence of any relationship between the legitimate zero-day exploit market and the creation of malware; there is only speculation.

Despite these facts, prominent security personalities have argued that the zero-day exploit market keeps people at risk by preventing the public disclosure of zero-day vulnerabilities. Bruce Schneier wrote, “a disclosed vulnerability is one that – at least in most cases – is patched”.  His opinion rests on an assumption and is erroneous, yet it is shared by a large number of security professionals.  The reality is that when a vulnerability is disclosed it is unveiled to both ethical and malicious parties, and those who are responsible for applying patches don’t respond as quickly as those with malicious intent.

According to SIRv11, 99.88% of all compromises were attributed to the exploitation of known (publicly disclosed) vulnerabilities rather than zero-day vulnerabilities.  Of those vulnerabilities, over 90% had been known for more than one year. Only 0.12% of reported compromises were attributed to the exploitation of zero-day vulnerabilities. Without the practice of public disclosure, or with the responsible application of patches, the number of compromises identified in SIRv11 would have been significantly lower.

The Verizon 2012 Data Breach Investigations Report (DBIR) also provides some interesting insight into compromises.  According to the DBIR, 97% of breaches were avoidable through simple or intermediate controls (known / detectable vulnerabilities, etc.), 92% were discovered by a third party, and 85% took two weeks or more to discover. These statistics further demonstrate that networks are not being managed responsibly. People, and not the legitimate zero-day exploit market, are keeping themselves at risk by failing to responsibly address known vulnerabilities.  A focus on zero-day defense is an unnecessary distraction for most.

Another issue is the notion that security researchers should give their work away for free.  Initially it was risky for researchers to notify vendors about security flaws in their technology.  Some vendors attempted to quash the findings with legal threats and others would treat researchers with such hostility that it would drive the researchers to the black market.  Some vendors remain hostile even today, but most will happily accept a researcher’s hard work provided that it’s delivered free of charge.  To us the notion that security researchers should give their work away for free is absurd.

Programs like ZDI and what was once iDefense (acquired by VeriSign) offer relatively small bounties to researchers who provide vulnerability information.  When a new vulnerability is reported, these programs notify their paying subscribers well in advance of the general public.  They do make it a point to work with the manufacturer to close the hole, but only after they’ve made their bounty.  Once the vendors have been notified (and ideally a fix created), public disclosure ensues in the form of an email-based security advisory that is sent to various mailing lists.  At that point, those who have not applied the fix are at a significantly increased level of risk.

Companies like Google and Microsoft are stellar examples of what software vendors should do with regard to vulnerability bounty programs.  Their programs motivate the research community to find and report vulnerabilities back to the vendor.  The existence of these programs is a testament to how seriously both Google and Microsoft take product security. Although these companies (and possibly others) are moving in the right direction, they still have to compete with the prices offered by other legitimate zero-day buyers.  In some cases those prices are as much as 50% higher.

Netragard is one of those entities. We operate the Exploit Acquisition Program (EAP), which was established in early 2000 as a way to provide ethical security researchers with top dollar for their work product. In 2011 Netragard’s minimum acquisition price (payment to researcher) was $20,000.00, which is significantly greater than the minimum payout from most other programs.  Netragard’s EAP buyer information, as with any business’ customer information, is kept in the highest confidence.  Netragard’s EAP does not practice public vulnerability disclosure for the reasons cited above.

Unlike VUPEN, Netragard will only sell its exploits to US-based buyers under contract.  This decision was made to prevent the accidental sale of zero-day exploits to potentially hostile third parties and to prevent any distribution to the black market.  Netragard also welcomes the exclusive sale of vulnerability information to software vendors who wish to fix their own products.  Despite this, not one vendor has approached Netragard with the intent to purchase vulnerability information.  This seems to indicate that most software vendors are still more focused on revenue than they are on end-user security.  This is unfortunate because software vendors are the source of the vulnerabilities.

Most software vendors do not hire developers who are truly proficient at writing safe code (the proof is in the statistics). Additionally, very few software vendors have genuine security testing incorporated into their Quality Assurance process.  As a result, software vendors literally (and usually accidentally) create the vulnerabilities that are exploited by hackers and used to compromise their customers’ networks. Yet software vendors continue to inaccurately tout their software as being secure when in fact it isn’t.

If software vendors begin to produce truly secure software then the zero-day exploit market will cease to exist or will be forced to make dramatic transformations. Malware, however, would continue to thrive because it is not exploit dependent.  We are hopeful that Google and Microsoft will be trend setters and that other software vendors will follow suit.  Finally, we are hopeful that people will do their own research about the zero-day exploit market instead of blindly trusting the largely speculative articles that have been published recently.



Netragard’s Badge of Honor (Thank you McAfee)

Here at Netragard We Protect You From People Like Us™ and we mean it.  We don’t just run automated scans, massage the output, and draft you a report that makes you feel good.  That’s what many companies do.  Instead, we “hack” you with a methodology that is driven by hands-on research and designed to create realistic and elevated levels of threat.  Don’t take our word for it though; McAfee has helped us prove it to the world.

Through their Threat Intelligence service, McAfee Labs listed Netragard as a “High Risk” due to the level of threat that we produced during a recent engagement.  Specifically, we were using a beta variant of our custom Meterbreter malware (not to be confused with Metasploit’s Meterpreter) during an Advanced Penetration Testing engagement.  The beta malware was identified and submitted to McAfee via our customer’s Incident Response process.  The result was that McAfee listed Netragard as a “High Risk”, which caught our attention (and our customer’s attention) pretty quickly.

McAfee Flags Netragard as a High Risk

Badge of Honor

McAfee was absolutely right; we are “High Risk”, or more appropriately, “High Threat”, which in our opinion is critically important when delivering quality Penetration Testing services.  After all, the purpose of a Penetration Test (with regard to IT security) is to identify the presence of points where a real threat can make its way into or through your IT infrastructure.  Testing at less than realistic levels of threat is akin to testing a bulletproof vest with a squirt gun.

Netragard uses a methodology that’s been dubbed Real Time Dynamic Testing™ (“RTDT”).  Real Time Dynamic Testing™ is a research-driven methodology specifically designed to test the Physical, Electronic (networked and standalone) and Social attack surfaces at a level of threat that is slightly greater than what is likely to be faced in the real world.  Real Time Dynamic Testing™ requires that our Penetration Testers be capable of reverse engineering, writing custom exploits, building and modifying malware, etc.  In fact, the first rendition of our Meterbreter was created as a product of this methodology.

Another important aspect of Real Time Dynamic Testing™ is the targeting of attack surfaces individually or in tandem.  The “Netragard’s Hacker Interface Device” article is an example of how Real Time Dynamic Testing™ was used to combine Social, Physical and Electronic attacks to achieve compromise against a hardened target.  Another article titled “Facebook from the hackers perspective” provides an example of socially augmented electronic attacks driven by our methodology.

It is important that we thank McAfee for two reasons.  First, we thank McAfee for responding so quickly to our request to be removed from the “High Risk” list, because the listing was preventing our customers from being able to access our servers.  Second, and possibly more important, we thank McAfee for putting us on their “High Risk” list in the first place.  The mere fact that we were perceived as a “High Risk” by McAfee means that we are doing our job right.


Netragard’s Hacker Interface Device (HID).

We (Netragard) recently completed an engagement for a client with a rather restricted scope. The scope included a single IP address bound to a firewall that offered no services whatsoever. It also excluded the use of social attack vectors based on social networks, telephone, or email and disallowed any physical access to the campus and surrounding areas. With all of these limitations in place, we were tasked with penetrating the network from the perspective of a remote threat, and we succeeded.

The first method of attack that people might think of when faced with a challenge like this is the traditional autorun malware on a USB stick. Just mail a bunch of sticks to different people within the target company and wait for someone to plug one in; when they do it’s game over, they’re infected. That trick worked great back in the day but not so much anymore. The first issue is that most people are well aware of the USB stick threat due to the many published articles about the subject. The second is that more and more companies are pushing out group policies that disable the autorun feature on Windows systems. Those two things don’t eliminate the USB stick threat, but they certainly have a significant impact on its level of success, and we wanted something more reliable.

Enter PRION, the evil HID.


 

A prion is an infectious agent composed of a protein in a misfolded form. In our case the prion isn’t composed of proteins but of electronics, which include a Teensy microcontroller, a micro USB hub (a small one from RadioShack), a mini USB cable (we needed the ends), a micro flash drive (made from one of our Netragard USB Streamers), some home-grown malware (certainly not designed to be destructive), and a USB device like a mouse, missile turret, dancing stripper, chameleon, or whatever else someone might be tempted to plug in. When they do plug it in, they will be infected by our custom malware and we will use that point of infection to compromise the rest of the network.

For the purposes of this engagement we chose to use a fancy Logitech USB mouse as our Hacker Interface Device / attack platform. To turn our Logitech Human Interface Device into a Hacker Interface Device, we had to make some modifications. The first step of course was to remove the screw from the bottom of the mouse and pop it open. Once we did that we disconnected the USB cable from the circuit board in the mouse and put it to the side. Then we used a Dremel tool to shave away the extra plastic on the inside cover of the mouse (there were all sorts of tabs that we could sacrifice). The removal of the plastic tabs was to make room for the new hardware.

Once the top of the mouse was gutted and all the unnecessary parts removed, we began to focus on the USB hub. The first thing we had to do was extract the board from the hub. Doing that is a lot harder than it sounds because the hub that we chose was glued together and we didn’t want to risk breaking the internals by being too rough. After about 15 minutes of prying with a small screwdriver (and repeated accidental hand stabbing) we were able to pull the board out of the plastic housing. We then proceeded to strip the female USB connectors off of the board by heating their respective pins to melt the solder (being careful not to burn the board). Once those were extracted we were left with a naked USB hub circuit board that measured about half an inch long and was no wider than a small Bic lighter.

With the mouse and the USB board prepared, we began the process of soldering. The first thing that we did was take the mini USB cable and cut one of the ends off, leaving about 1 inch of wire near the connector. Then we stripped all of the plastic off of the connector and stripped a small amount of insulation from the 4 internal wires. We soldered those four wires to the USB hub board, making sure to follow the right pinout pattern. This is the cable that will plug into the Teensy’s mini USB port when we insert the Teensy microcontroller.
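For reference, the four signal wires in a standard USB 2.0 cable follow the pinout below (the wire colors are the usual convention rather than a guarantee, and pad layouts vary between hub boards, so it’s worth verifying each pad with a multimeter before soldering):

Pin 1: VBUS (+5V), typically the red wire
Pin 2: D- (data minus), typically the white wire
Pin 3: D+ (data plus), typically the green wire
Pin 4: GND (ground), typically the black wire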

Once that was finished we took the USB cable that came with the mouse and cut the circuit board connector off of the end, leaving 2 inches of wire attached. We stripped the tips of the 4 wires still attached to the connector and soldered those to the USB hub, again making sure to follow the pinout pattern mentioned above. This is an important cable as it’s the one that connects the USB hub to the mouse. If this cable is not soldered properly and the connections fail, then the mouse will not work. We then took the other piece of the mouse cable (the longer part) and soldered that to the USB hub board. This is the cable that will connect the mouse to the USB port on the computer.

At this point we have three cables soldered to the USB hub. Just to recap, those cables are the mouse connector cable, the cable that goes from the mouse to the computer, and the mini USB adapter cable for the Teensy device. The next and most challenging part is to solder the USB flash drive to the USB hub. This is important because the USB flash drive is where we store our malware. If the drive isn’t soldered on properly then we won’t be able to store our malware on the drive and the attack would be mostly moot. (We say mostly because we could still instruct the mouse to fetch the malware from a website, but that’s not covert.)

To solder the flash drive to the USB hub we cut about 2 inches of cable from the mini USB connector that we stole the end from previously. We stripped the ends of the wires in the cable and carefully soldered them to the correct points on the flash drive. Once that was done we soldered the other ends of the cable to the USB hub. At that point we had everything soldered together and had to fit it all back into the mouse. Assembly was pretty easy because we were careful to use as little material as possible while still giving ourselves the flexibility that we needed. We wrapped the boards and wires in single layers of electrical tape to avoid any shorts. Once everything was plugged in we tested the devices. The USB drive mounted, the Teensy card was programmable, and the mouse worked.

Time to give prion the ability to infect…

We learned that the client was using McAfee as their antivirus solution because one of their employees was complaining about it on Facebook. Remember, we weren’t allowed to use social networks for social engineering, but we certainly were allowed to do reconnaissance against social networks. With McAfee in our sights we set out to create custom malware for the client (as we do for any client and their respective antivirus solution when needed). We wanted our malware to be able to connect back to Metasploit because we love the functionality, and we also wanted the capabilities provided by Meterpreter, but we needed more than that. We needed our malware to be fully undetectable and to subvert the “Do you want to allow this connection” dialogue box entirely. You can’t do that with encoding…

Update: As of 06/29/2011 9AM EST: this variant of our pseudomalware is being detected by McAfee.

Update: As of 06/29/2011 10:47AM EST: we’ve created a new variant that seems to bypass any AV.

To make this happen we created a Meterpreter C array with the windows/meterpreter/reverse_tcp_dns payload. We then took that C array, chopped it up, and injected it into our own wrapper of sorts. The wrapper used an undocumented (0-day) technique to completely subvert the dialogue box and to evade detection by McAfee. When we ran our tests on a machine running McAfee, the malware ran without a hitch. We should point out that our ability to evade McAfee isn’t a reflection on McAfee in particular; we can evade any antivirus solution using similar custom attack methodologies. After all, it’s impossible to detect something if you don’t know what it is that you are looking for (it also helps to have a team of researchers at our disposal).

Once we had our malware built we loaded it onto the flash drive that we had soldered into our mouse. Then we wrote some code for the Teensy microcontroller to launch the malware 60 seconds after the start of user activity. Much of the code was based on work from Adrian Crenshaw’s website, and he deserves credit for giving us this idea in the first place. After a little bit of debugging, our evil mouse named prion was working flawlessly.

Usage: Plug mouse into computer, get pwned.

The final step was to ship the mouse to our customer. One of the most important aspects of this was to repack the mouse in its original packaging so that it appeared unopened. Then we used Jigsaw to purchase a list of our client’s employees. We did a bit of reconnaissance on each employee and found a target that looked ideal. We packaged the mouse to look like a promotional gadget, added fake marketing flyers, etc., then shipped the mouse. Sure enough, three days later the mouse called home.

 


Netragard Signage Snatching

Recently Netragard has had a few discussions with owners and operators of sports arenas, with the purpose of identifying methods by which a malicious hacker could potentially disrupt a sporting event, concert, or other large-scale and highly visible event.

During the course of these conversations, the topic of discussion shifted from network exploitation to social engineering, with a focus on compromise of the digital signage systems.  Until recently, even I hadn’t thought about how extensively network-controlled signage systems are used in facilities like casinos, sports arenas, airports, and roadside billboards.  That is, until our most recent casino project.

Netragard recently completed a Network Penetration Test and Social Engineering Test for a large west coast casino, with spectacular results. Not only were our engineers able to gain the keys to the kingdom, they were also able to gain access to the systems that had supervisory control over every single digital sign in the facility.  Some people may think to themselves, “OK, what’s the big deal with that?”  The answer is simple: customer perception and corporate image.

Before I continue, let me provide some background. Early in 2008, there were two incidents in California where on-highway digital billboards were compromised and their displays changed from the intended content.  While both of these incidents were small pranks in comparison to what they could have been, the effect was remembered by those who drove by and saw the signs.  (Example A, Example B)

Another recent billboard hack, in Moscow, Russia, wasn’t as polite as the pranksters in California.  A hacker was able to gain control of a billboard in downtown Moscow (worth noting, Moscow is the 7th largest city in the world) and, after gaining access, looped a video clip of pornographic material. (Example C) Imagine if this had been a sports organization and it had happened during a major game.

Bringing this post back on track, let’s refocus on the casino and the potential impact of signage compromise.  After spending time in the signage control server, we determined that there were over 40 unique displays available to control, some of which were over 100″ in display size.  With customer permission, we placed a unique image on a small sign for proof-of-concept purposes (go google “stallowned”).  This test, coupled with an impact audit, clearly demonstrated to the casino that securing their signage systems was nearly as important as securing their security systems, cage systems, and domain controllers.  All the domain security in the world means little to a customer if they’re presented with disruptive material on the signage during their visit to the casino.  A compromise of this nature could cause a significant loss of revenue and cause customers never to return to the casino.

I also thought it pertinent for the purpose of this post to share another customer engagement story.  This story highlights how physical security can be compromised by a combination of social engineering and network exploitation, thus opening an additional risk vector that could allow for compromise of the local network running the digital display systems.

Netragard was engaged by a large bio-sciences company in late 2010 to assess the network and physical security of multiple locations belonging to a business unit that was a new acquisition.  During the course of this engagement, Netragard was able to take complete control of their network infrastructure remotely, as is the case in most of our engagements.  Moreover, our engineers were able to use their social engineering skills to “convince” the physical site staff to grant them building access.  Once past this first layer of physical security, by combining social and network exploitation, they were subsequently able to gain access to sensitive labs and document storage rooms.  These facilities/rooms were key to the organization’s intellectual property and ongoing research.  Had our engineers been hired by a competing company or other entity, there is no doubt that the IP (research data, trials data, and so forth) could have been spirited off company property and into hands unknown.

By combining network exploitation and social engineering, we’ve postulated to the sports arena operators that Netragard has a high probability of gaining access to the control systems for their digital signage.  Inevitably, during these discussions the organizations push back, stating that their facilities have trained security staff and access control systems.  To that we respond that the majority of sports facility staff are attuned to illicit access attempts in controlled areas, but only during certain periods of operation, such as active games, concerts, and other large-scale events.  During non-public hours, there is a high probability that a skilled individual could gain entry to access-controlled areas during a private event, or through a breach of trust, such as posing as a repair technician, emergency services employee, or even a facility employee.

One area of concern for any organization, whether it be a football organization, a Fortune 100 company, or a mid-size business, is a breach of trust with its consumer base.  For a major sports organization, the level of national exposure and endearment far exceeds the exposure most Netragard customers have to the public.  Because of this extremely high national exposure, a sports organization and its arena are a prime target for those who may consider highly visible public disruption of games a key tool in furthering a socio-political agenda.  We’re hopeful that these organizations will continue to take a more serious stance to ensure that their systems and public image are as protected as possible.

– Mike Lockhart, VP of Operations


Netragard: Connect to chaos

The Chevy Volt will be the first car of its type: not because it is a hybrid electric/petrol vehicle, but because GM plans to give each one the company sells its own IP address. The Volt will have no less than 100 microcontrollers running its systems from some 10 million lines of code. This makes some hackers very excited and Adriel Desautels, president of security analysis firm Netragard, very worried.  Before now, you needed physical access to reprogram the software inside a car: an ‘air gap’ protected vehicles from remote tampering. The Volt will have no such physical defence. Without some kind of electronic protection, Desautels sees cars such as the Volt and its likely competitors becoming ‘hugely vulnerable 5000lb pieces of metal’.

Desautels adds: “We are taking systems that were not meant to be exposed to the threats that my team produces and plug it into the internet. Some 14 year old kid will be able to attack your car while you’re driving.”

The full article can be found here.


Netragard’s thoughts on Pentesting IPv6 vs IPv4

We’ve heard a bit of “noise” about how IPv6 may impact network penetration testing and how networks may or may not be more secure because of IPv6.  Let’s be clear: anyone telling you that IPv6 makes penetration testing harder doesn’t understand the first thing about real penetration testing.

What’s the point of IPv6?

IPv6 was designed by the Internet Engineering Task Force (“IETF”) to address the issue of IPv4 address space exhaustion.  IPv6 uses a 128-bit address space while IPv4 uses only 32 bits.  That means there are 2^128 possible IPv6 addresses, far more than the 2^32 addresses available with IPv4, and therefore many more potential targets for a penetration tester to focus on when IPv6 becomes the norm.

What about increased security with IPv6?

The IPv6 specification mandates support for the Internet Protocol Security (“IPSec”) protocol suite, which is designed to secure IP communications by authenticating and encrypting each IP packet. IPSec operates at the Internet layer of the Internet Protocol suite and so differs from other security systems like Secure Sockets Layer, which operates at the application layer. This is the only significant security enhancement that IPv6 brings to the table, and even this has little to no impact on penetration testing.

What some penetration testers are saying about IPv6.

Some penetration testers argue that IPv6 will make the job of penetration testing more difficult because of the massive increase in potential targets. They claim that this increase will make the process of discovering live targets impossibly time consuming, and argue that scanning every port and host in an entire IPv6 range could take as long as 13,800,523,054,961,500,000 years.  But why the hell would anyone waste their time testing potential targets when they could be testing actual live targets?
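To put those numbers in perspective, here is a rough back-of-the-envelope sketch in Python (the scan rate of one million probes per second is an assumed figure for illustration only; real-world rates vary):

# Illustrative arithmetic only: IPv6 address space vs. a brute-force scan.
ipv4_addresses = 2 ** 32                 # ~4.3 billion addresses
ipv6_addresses = 2 ** 128                # ~3.4 x 10^38 addresses
one_subnet = 2 ** 64                     # a single standard IPv6 /64 subnet

probes_per_second = 1_000_000            # assumed scan rate (hypothetical)
seconds_per_year = 60 * 60 * 24 * 365

years_per_subnet = one_subnet / (probes_per_second * seconds_per_year)
print(f"IPv4 addresses: {ipv4_addresses:,}")
print(f"IPv6 addresses: {ipv6_addresses:,}")
print(f"Years to sweep one /64 at 1M probes/sec: {years_per_subnet:,.0f}")

Even at that generous rate, sweeping a single /64 subnet would take over half a million years, which is exactly why blind scanning is the wrong approach and reconnaissance is the right one.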

The very first step in any penetration test is effective and efficient reconnaissance. Reconnaissance is the military term for the passive gathering of intelligence about an enemy prior to attacking.  There are countless ways to perform reconnaissance, all of which must be adapted to the particular engagement.  Failure to adapt will result in bad intelligence, as no two targets are exactly identical.

A small component of reconnaissance is target identification.  Target identification may or may not be done with scanning, depending on the nature of the penetration test.  Specifically, it is impossible to deliver a true stealth / covert penetration test with automated scanners.  Likewise, it is very difficult to use a scanner to accurately identify targets in a network that is protected by reactive security systems (like a well-configured IPS that supports blacklisting).  So in many cases doing discovery by scanning an entire block of addresses is ineffective.

A few common methods for target identification include social engineering, DNS enumeration, or maybe something as simple as asking the client to provide you with a list of targets.  Less common methods involve more aggressive social reconnaissance, continued reconnaissance after initial penetration, etc.  Either way, it will not take 13,800,523,054,961,500,000 years to identify all of the live and accessible targets in an IPv6 network if you know what you are doing.
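As a simple illustration of what name-based target identification can look like, here is a minimal Python sketch (the domain and hostname wordlist are hypothetical placeholders; real reconnaissance would pull candidate names from many more sources, such as DNS zone data, certificates, and search engines):

import socket

# Resolve a small wordlist of candidate hostnames to IPv6 (AAAA) records
# instead of blindly sweeping the address space.
domain = "example.com"                    # placeholder target domain
wordlist = ["www", "mail", "vpn", "dev"]  # placeholder hostname guesses

for name in wordlist:
    fqdn = f"{name}.{domain}"
    try:
        # AF_INET6 restricts the lookup to IPv6 results.
        results = socket.getaddrinfo(fqdn, None, socket.AF_INET6)
    except socket.gaierror:
        continue                          # name does not resolve; skip it
    addresses = sorted({r[4][0] for r in results})
    print(fqdn, "->", ", ".join(addresses))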

Additionally, penetration testing against 12 targets in an IPv6 network will take roughly the same amount of time as testing 12 targets in an IPv4 network.  The number of real targets is what matters, not the number of potential targets.  It would be a ridiculous waste of time to test 2^128 IPv6 addresses when only 12 addresses are live, not to mention that the increase in time would likely translate to an increase in project cost.

So in reality, for those who are interested, hacking an IPv6 network won’t be any more or less difficult than hacking an IPv4 network.  Anyone who argues otherwise either doesn’t know what they are doing or is looking to charge you more money for roughly the same amount of work.


Hacking your car for fun and profit.

Our CEO (Adriel Desautels) recently spoke at the Green Hills Software Elite Users Technology Summit about automotive hacking.  During his presentation a number of reporters were taking photographs, recording audio, etc.  Of all of the articles that came out, one in particular caught our eye.  We made the front page of “Elektronik i Norden”, a Swedish technology magazine that focuses on hardware and embedded systems.  You can see the full article here but you’ll probably want to translate:

http://www.webbkampanj.com/ein/1011/?page=1&mode=50&noConflict=1

What really surprised us during the presentation was how many people were in disbelief about the level of risk associated with cars built after 2007.  For example, it really isn’t all that hard to program a car to kill the driver.  In fact, it’s far too easy due to the overall lack of security in cars today.

Think of a car as an IT infrastructure.  All of the servers in the infrastructure are critical systems that control things like brakes, seat belts, door locks, engine timing, airbags, lights, the radio, the dashboard display, etc.  Instead of being plugged into a switched network, these systems are plugged into a hub-style network that lacks any segmentation and has no security to speak of.  The only real difference between the car network and your business network is that the car doesn’t have an Internet connection.

Enter the Chevrolet Volt, the first car to have its own IP address. Granted we don’t yet know how the Volt’s IP address will be protected.  We don’t know if each car will have a public IP address or if the cars will be connected to a private network controlled by Chevy (or someone else).  What we do know is that the car will be able to reach out to the Internet and so it will be vulnerable to client side attacks.

So what happens if someone is able to attack the car?

Realistically, if someone is able to hack into the car then they will be able to take full control over almost any component of it.  They can do anything from applying the brakes, accelerating the car, preventing the brakes from applying, or killing (literally destroying) the engine, to applying the brakes on one side of the car, locking the doors, and pretensioning the seat belts.  For those of you who think this is science fiction, it isn’t.  Here’s one of many research papers that demonstrates the risks.

Why is this possible?

This is possible because people adopt technology too quickly and don’t stop to think about the risks; instead they are blinded by the convenience that it introduces.  We see this in all industries, not just automotive. IT managers, CIOs, CSOs, CEOs, etc. are always purchasing and deploying new technologies without really evaluating the risks.  In fact, just recently we had a client purchase a “secure email gateway” technology… it wasn’t too secure.  We were able to hack it and access every email on the system because it relied on outdated third-party software.

Certainly another component that adds to this is that most software developers write vulnerable and buggy code (sorry guys, but it’s true).  Their code isn’t written to be secure; it’s written to do a specific thing like handle network traffic, beep your horn, send emails, whatever.  Poor code + a lack of security awareness == high risk.

So what can you do ?

Before you decide to adopt new technology make sure that you understand the benefits and the risks associated with the adoption.  If you’re not technical enough (most people aren’t) to do a low-level security evaluation then hire someone (a security researcher) to do it for you.  If you don’t then you could very well be putting yourselves and your customers at serious risk.


Fox 25 News Interview

Our (Netragard’s) founder and president (Adriel Desautels) was recently interviewed by the local news (Fox 25) about car hacking.  We thought that we’d write a quick entry and share this with you. Thank you to Fox 25 for doing such a good job with the interview.  A note for the AAA guy though: now that cars have IP addresses, hackers won’t need to “pull up next to you to hack [your car]”, and turning the car off is the least of the problems.  Hackers will be able to do it from their location of choice, and trust us when we say that “firewalls” don’t pose much of a challenge at all.  Anyway, enjoy the video and please feel free to comment.

http://www.myfoxboston.com/dpp/news/special_reports/could-your-car-be-a-hackers-target-20101111


That nice, new computerized car you just bought could be hackable

Link: http://news.cnet.com/8301-27080_3-20015184-245.html

Of course, your car is probably not a high-priority target for most malicious hackers. But security experts tell CNET that car hacking is starting to move from the realm of the theoretical to reality, thanks to new wireless technologies and ever more dependence on computers to make cars safer, more energy efficient, and modern.

“Now there are computerized systems and they have control over critical components of cars like gas, brakes, etc.,” said Adriel Desautels, chief technology officer and president of Netragard, which does vulnerability assessments and penetration testing on all kinds of systems. “There is a premature reliance on technology.”

Illustration for a tire pressure monitoring system, with four antennas, from a report detailing how researchers were able to hack the wireless system.

(Credit: University of South Carolina, Rutgers University (PDF))

Often the innovations are designed to improve the safety of the cars. For instance, after a recall of Firestone tires that were failing in Fords in 2000, Congress passed the TREAD (Transportation Recall Enhancement, Accountability and Documentation) Act that required that tire pressure monitoring systems (TPMS) be installed in new cars to alert drivers if a tire is underinflated.

Wireless tire pressure monitoring systems, which also were touted as a way to increase fuel economy, communicate via a radio frequency transmitter to a tire pressure control unit that sends commands to the central car computer over the Controller-Area Network (CAN). The CAN bus, which allows electronics to communicate with each other via the On-Board Diagnostics systems (OBD-II), is then able to trigger a warning message on the vehicle dashboard.

Researchers at the University of South Carolina and Rutgers University tested two tire pressure monitoring systems and found the security to be lacking. They were able to turn the low-tire-pressure warning lights on and off from another car traveling at highway speeds from 40 meters (120 feet) away and using low-cost equipment.

“While spoofing low-tire-pressure readings does not appear to be critical at first, it will lead to a dashboard warning and will likely cause the driver to pull over and inspect the tire,” said the report (PDF). “This presents ample opportunities for mischief and criminal activities, if past experience is any indication.”

“TPMS is a major safety system on cars. It’s required by law, but it’s insecure,” said Travis Taylor, one of the researchers who worked on the report. “This can be a problem when considering other wireless systems added to cars. What does that mean about future systems?”

The researchers do not intend to be alarmist; they’re merely trying to figure out what the security holes are and to alert the industry to them so they can be fixed, said Wenyuan Xu, another researcher on the project. “We are trying to raise awareness before things get really serious,” she said.

Another report in May highlighted other risks with the increased use of computers coordinated via internal car networks. Researchers from the University of Washington and University of California, San Diego, tested how easy it would be to compromise a system by connecting a laptop to the onboard diagnostics port that they then wirelessly controlled via a second laptop in another car. Thus, they were able to remotely lock the brakes and the engine, change the speedometer display, as well as turn on the radio and the heat and honk the horn.

Granted, the researchers needed to have physical access to the inside of the car to accomplish the attack. Although that minimizes the likelihood of an attack, it’s not unthinkable to imagine someone getting access to a car dropped off at the mechanic or parking valet.

“The attack surface for modern automobiles is growing swiftly as more sophisticated services and communications features are incorporated into vehicles,” that report (PDF) said. “In the United States, the federally-mandated On-Board Diagnostics port, under the dash in virtually all modern vehicles, provides direct and standard access to internal automotive networks. User-upgradable subsystems such as audio players are routinely attached to these same internal networks, as are a variety of short-range wireless devices (Bluetooth, wireless tire pressure sensors, etc.).”

Engine Control Units
The ubiquitous Engine Control Units themselves started arriving in cars in the late 1970s as a result of the California Clean Air Act and initially were designed to boost fuel efficiency and reduce pollution by adjusting the fuel and oxygen mixture before combustion, the paper said. “Since then, such systems have been integrated into virtually every aspect of a car’s functioning and diagnostics, including the throttle, transmission, brakes, passenger climate and lighting controls, external lights, entertainment, and so on,” the report said.

It’s not just that there are so many embedded computers, it’s that safety critical systems are not isolated from non-safety critical systems, such as entertainment systems, but are “bridged” together to enable “subtle” interactions, according to the report. In addition, automakers are linking Engine Control Units with outside networks like global positioning systems. GM’s OnStar system, for example, can detect problems with systems in the car and warn drivers, place emergency calls, and even allow OnStar personnel to remotely unlock cars or stop them, the report said.

In an article entitled “Smart Phone + Car = Stupid?” on the EETimes site in late July, Dave Kleidermacher noted that GM is adding smartphone connectivity to most of its 2011 cars via OnStar. “For the first time, engines can now be started and doors locked by ordinary consumers, from anywhere on the planet with a cell signal,” he wrote.

Car manufacturers need to design the systems with security in mind, said Kleidermacher, who is chief technology officer at Green Hills Software, which builds operating system software that goes into cars and other embedded systems.

“You can not retrofit high-level security to a system that wasn’t designed for it,” he told CNET. “People are building this sophisticated software into cars and not designing security in it from the ground up, and that’s a recipe for disaster.”

Representatives from GM OnStar were not available for comment late last week or this week, a spokesman said.

“Technology in cars is not designed to be secure because there’s no perceived threat. They don’t think someone is going to hack a car like they’re going to hack a bank,” said Desautels of Netragard. “For the interim, network security in cars won’t be a primary concern for manufacturers. But once they get connected to the Internet and have IP addresses, I think they’ll be targeted just for fun.”

The threat is primarily theoretical at this point for a number of reasons. First, there isn’t the same financial incentive to hacking cars as there is to hacking online bank accounts. Secondly, there isn’t one dominant platform used in cars that can give attackers the same bang for their buck to target as there is on personal computers.

“The risks are certainly increasing because there are more and more computers in the car, but it will be much tougher to (attack) than with the PC,” said Egil Juliussen, a principal analyst at market researcher firm iSuppli. “There is no equivalent to Windows in the car, at least not yet, so (a hacker) will be dealing with a lot of different systems and have to have some knowledge about each one. It doesn’t mean a determined hacker couldn’t do it.”

But Juliussen said drivers don’t need to worry about anything right now. “This is not a problem this year or next year,” he said. “It’s five years down the road, but the way to solve it is to build security into the systems now.”

Infotainment systems
In the meantime, the innovations in mobile communications and entertainment aren’t limited to smartphones and iPads. People want to use their devices easily in their cars and take advantage of technology that will let them make calls and listen to music without having to push any buttons or touch any track wheels. Hands-free telephony laws in states are requiring this.

Millions of drivers are using the SYNC system that has shipped in more than 2 million Ford cars that allows people to connect digital media players and Bluetooth-enabled mobile phones to their car entertainment system and use voice commands to operate them. The system uses Microsoft Auto as the operating system. Other cars offer less-sophisticated mobile device connectivity.

“A lot of cars have Bluetooth car kits built into them so you can bring the cell phone into your car and use your phone through microphones and speakers built into the car,” said Kevin Finisterre, lead researcher at Netragard. “But vendors often leave default passwords.”

Ford uses a variety of security measures in SYNC, including only allowing Ford-approved software to be installed at the factory and default security set to Wi-Fi Protected Access 2 (WPA2), which requires users to enter a randomly chosen password to connect to the Internet. To protect customers when the car is on the road and the Mobile Wi-Fi Hot Spot feature is enabled, Ford also uses two firewalls on SYNC, a network firewall similar to a home Wi-Fi router and a separate central processing unit that prevents unauthorized messages from being sent to other modules within the car.

“We use the security models that normal IT folks use to protect an enterprise network,” said Jim Buczkowski, global director of electrical and electronics systems engineering for Ford SYNC.

Not surprisingly, there is a competing vehicle “infotainment” platform being developed that is based on open-source technology. About 80 companies have formed the Genivi Alliance to create open standards and middleware for information and entertainment solutions in cars.

Asked if Genivi is incorporating security into its platform from the get-go, Sebastian Zimmermann, chair of the consortium’s product definition and planning group, said it is up to the manufacturers that are creating the branded devices and custom apps to build security in and to take advantage of security mechanisms provided in Linux, the open-source operating system the platform is based on.

“Automakers are aware of security and have taken it seriously…It’s increasingly important as the vehicle opens up new interfaces to the outside world,” Zimmermann said. “They are trying to find a balance between openness and security.”

Another can of security worms being opened is the fact that cars may follow the example of smart phones and Web services by getting their own customized third-party apps. Hughes Telematics reportedly is working with automakers on app stores for drivers.

This is already happening to some extent, for instance, with video cameras becoming standard in police cars and school buses, bringing up a host of security and privacy issues.

“We did a penetration test where we had a police agency that has some in-car cameras,” Finisterre of Netragard said, “and we were able to access the cameras remotely and have live audio and video streams from the police car due to vulnerabilities in the manufacturing systems.”

“I’m sure (eventually) there is going to be smart pavement and smart lighting and other dumb stuff that has the capability of interacting with the car in the future,” he said. “Technology is getting pushed out the door with bells and whistles and security gets left behind.”



Security Vulnerability Penetration Assessment Test?

Our philosophy here at Netragard is that security-testing services must produce a threat that is at least equal to the threat that our customers are likely to face in the real world. If we test our customers at a lesser threat level and they are later attacked by a higher-level threat, then they will likely suffer a compromise. If they do suffer a compromise, then the money that they spent on testing services might as well be added to the cost in damages that result from the breach.
This is akin to how armor is tested. Armor is designed to protect something from a specific threat. In order to be effective, the armor is exposed to a level of threat that is slightly higher than what it will likely face in the real world. If the armor is penetrated during testing, it is enhanced and hardened until the threat cannot defeat the armor. If armor is penetrated in battle then there are casualties. That class of testing is called Penetration Testing and the level of threat produced has a very significant impact on test quality and results.

What is particularly scary is that many of the security vendors who offer Penetration Testing services either don’t know what Penetration Testing is or don’t know the definitions of the terms. Many security vendors confuse Penetration Testing with Vulnerability Assessments, and that confusion translates to the customer. The terms are not interchangeable and they do not define methodology; they only define the class of testing. So before we can explain service quality and threat, we must first properly define the services.

Based on the English dictionary the word “Vulnerability” is best defined as susceptibility to harm or attack. Being vulnerable is the state of being exposed. The word “Assessment” is best defined as the means by which the value of something is estimated or determined usually through the process of testing. As such, a “Vulnerability Assessment” is a best estimate as to how susceptible something is to harm or attack.

Let’s do the same for “Penetration Test”. The word “Penetration” is best defined as the act of entering into or through something, or the ability to make way into or through something. The word “Test” is best defined as the means by which the presence, quality or genuineness of anything is determined. As such, the term “Penetration Test” means to determine the presence of points where something can make its way through or into something else.

Despite what many people think, neither term is specific to Information Technology. Penetration Tests and Vulnerability Assessments existed well before the advent of the microchip. In fact, the ancient Romans used a form of penetration testing to test their armor against various types of projectiles. Today, we perform Structural Vulnerability Assessments against things like the Eiffel Tower, and the Golden Gate Bridge. Vulnerability Assessments are chosen because Structural Penetration Tests would cause damage to, or possibly destroy the structure.

In the physical world Penetration Testing is almost always destructive (at least to a degree), but in the digital world it isn’t destructive when done properly. This is mostly because in the digital world we’re penetrating a virtual boundary and in the physical world we’re penetrating a physical boundary. When you penetrate a virtual boundary you’re not really creating a hole, you’re usually creating a process in memory that can be killed or otherwise removed.

When applied to IT security, a Vulnerability Assessment isn’t as accurate as a Penetration Test. This is because Vulnerability Assessments are best estimates, while Penetration Tests either penetrate or they don’t. As such, a quality Vulnerability Assessment report will contain few false positives (false findings), while a quality Penetration Testing report should contain absolutely no false positives (though it may sometimes contain theoretical findings).

The quality of service is determined by the talent of the team delivering services and by the methodology used for service delivery. A team of research capable ethical hackers that have a background in exploit development and system / network penetration will usually deliver higher quality services than a team of people who are not research capable. If a team claims to be research capable, ask them for example exploit code that they’ve written and ask them for advisories that they’ve published.

Service quality is also directly tied to threat capability. The threat in this case is defined by the capability of real world malicious hackers. If testing services do not produce a threat level that is at least equal to the real world threat, then the services are probably not worth buying. After all, the purpose for security testing is to identify risks so that they can be fixed / patched / eliminated before malicious hackers exploit them. But if the security testing services are less capable than the malicious hacker, then chances are the hacker will find something that the service missed.
