Marissa Mayer made Yahoo’s VPN famous by using it to check on the work habits of her employees. Lost amid today’s VPN conversation, however, is the fact that virtual private networks are much more than just pipelines for connecting remote employees to central work servers.
And that’s a damn shame, because VPNs can be helpful tools for protecting online privacy, and you need not be an office drone to enjoy their benefits.
A VPN, as its name suggests, is just a virtual version of a secure, physical network—a web of computers linked together to share files and other resources. But VPNs connect to the outside world over the Internet, and they can serve to secure general Internet traffic in addition to corporate assets. In fact, the lion’s share of modern VPNs are encrypted, so computers, devices, and other networks that connect to them do so via encrypted tunnels.
Why you want a VPN
You have at least four great reasons to start using a VPN. First, you can use it to connect securely to a remote network via the Internet. Most companies maintain VPNs so that employees can access files, applications, printers, and other resources on the office network without compromising security, but you can also set up your own VPN to safely access your secure home network while you’re on the road.
Second, VPNs are particularly useful for connecting multiple networks together securely. For this reason, most businesses big and small rely on a VPN to share servers and other networked resources among multiple offices or stores across the globe. Even if you don’t have a chain of offices to worry about, you can use the same trick to connect multiple home networks or other networks for personal use.
Third, if you’re concerned about your online privacy, connecting to an encrypted VPN while you’re on a public or untrusted network—such as a Wi-Fi hotspot in a hotel or coffee shop—is a smart, simple security practice. Because the VPN encrypts your Internet traffic, it helps to stymie other people who may be trying to snoop on your browsing via Wi-Fi to capture your passwords.
Fourth and finally, one of the best reasons to use a VPN is to circumvent regional restrictions—known as geoblocking—on certain websites. Journalists and political dissidents use VPNs to get around state-sponsored censorship all the time, but you can also use a VPN for recreational purposes, such as connecting to a British VPN to watch the BBC iPlayer outside the UK. Because your Internet traffic routes through the VPN, it looks as if you’re just another British visitor.
Pick your protocol
When choosing a networking protocol for your VPN, you need worry only about the four most popular ones. Here’s a quick rundown, including the strengths and weaknesses of each.
Point-to-Point Tunneling Protocol (PPTP) is the least secure VPN method, but it’s a great starting point for your first VPN because almost every operating system supports it, including Windows, Mac OS, and even mobile OSs.
Layer 2 Tunneling Protocol (L2TP) and Internet Protocol Security (IPsec) are more secure than PPTP and are almost as widely supported, but they are also more complicated to set up and are susceptible to the same connection issues as PPTP.
Secure Sockets Layer (SSL) VPN systems provide the same level of security that you trust when you log on to banking sites and other sensitive domains. Most SSL VPNs are referred to as “clientless,” since you don’t need to be running a dedicated VPN client to connect to one of them. They’re my favorite kind of VPN because the connection happens via a Web browser and thus is easier and more reliable to use than PPTP, L2TP, or IPsec.
OpenVPN is exactly what it sounds like: an open-source VPN system that’s based on SSL code. It’s free and secure, and it doesn’t suffer from connection issues, but using OpenVPN does require you to install a client since Windows, Mac OS X, and mobile devices don’t natively support it.
In short: When in doubt, try to use SSL or OpenVPN. Keep in mind that some of the services highlighted in the next section don’t use these protocols. Instead, they use their own proprietary VPN technology.
Now, let’s talk about how to create and connect to your own VPN. If you want simple remote access to a single computer, consider using the VPN software built into Windows. If you’d like to network multiple computers together quickly through a VPN, consider installing stand-alone VPN server software.
If you need a more reliable and robust arrangement (one that also supports site-to-site connections), consider using a dedicated VPN router. And if you just want to use a VPN to secure your Internet traffic while you’re on public Wi-Fi hotspots and other untrusted networks—or to access regionally restricted sites—consider subscribing to a third-party hosted VPN provider.
Set up a simple VPN with Windows
Windows comes loaded with a VPN client that supports the PPTP and L2TP/IPsec protocols. The setup process is simple: If you’re using Windows 8, just bring up the Search charm, type VPN, and then launch the VPN wizard by clicking Set up a virtual private network (VPN) connection.
You can use this client to connect securely to other Windows computers or to other VPN servers that support the PPTP and L2TP/IPsec protocols—you just need to provide the IP address or domain name of the VPN server to which you want to connect. If you’re connecting to a corporate or commercial VPN, you can contact the administrator to learn the proper IP address. If you’re running your own VPN server via Windows, you can figure out the server’s IP address by typing CMD in the Search charm, launching the Command Prompt, and typing ipconfig. This simple trick comes in handy when you’re setting up your Windows PC as a VPN server, and then connecting to it so that you can securely, remotely access your files from anywhere.
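If you'd rather script that lookup than read through ipconfig's output, a few lines of Python can make the same best-effort guess at the machine's LAN address. This is a generic sketch, not part of Windows or of any VPN client, and the probe address is an unrouted documentation-only placeholder:

```python
import socket

def local_ipv4():
    """Best-effort guess at this machine's LAN IPv4 address (what
    ipconfig would report for the active adapter)."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        # Connecting a UDP socket sends no packets; it only asks the OS
        # which local interface it would use to reach this address.
        s.connect(("192.0.2.1", 9))  # TEST-NET-1, reserved for examples
        return s.getsockname()[0]
    except OSError:
        # No usable route (e.g., offline); fall back to loopback.
        return "127.0.0.1"
    finally:
        s.close()

print(local_ipv4())
```

Remember that this reports the server's private address on the local network; clients connecting from the Internet need your router's public address instead.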
Quick note: When setting up incoming PPTP VPN connections in Windows, you must configure your network router to forward VPN traffic to the Windows computer you want to access remotely. You can do this by logging in to the router’s control panel—consult the manufacturer’s instructions—and configuring the port-forwarding or virtual-server settings to forward TCP port 1723 to the IP address of the computer you wish to access. In addition, PPTP or VPN pass-through options need to be enabled in the firewall settings, though they’re usually switched on by default.
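Once the forwarding rule is in place, you can sanity-check it with a quick TCP probe of port 1723 from outside your network. The helper below is a hypothetical convenience script, not a standard tool, and the example address is made up:

```python
import socket

def tcp_port_open(host, port=1723, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds.
    Port 1723 is PPTP's control channel; a successful connect suggests
    the router is forwarding VPN traffic to your server."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example with a hypothetical public address:
# tcp_port_open("203.0.113.10")
```

Note that a refused or timed-out connection can also mean the VPN server service isn't running yet, so test from inside the network first.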
If you’re using Windows 7 and you need to connect to a VPN or to accept incoming VPN connections in that OS, check out our guide to setting up a VPN in Windows 7.
Use third-party software to create a VPN server
If you’d like to create a VPN between multiple computers to share files and network resources without having to configure your router or to dedicate a PC to act as the VPN server, consider using third-party VPN software. Comodo Unite, Gbridge, and TeamViewer are all decent, reliable, and (most important) free.
You can also use LogMeIn Hamachi for free with five or fewer users. It’s good enough that if you have more than five PCs to link up securely—say, as part of your small-but-growing business—you should consider paying for the full service.
Go whole hog with your own VPN router
If you want to get your hands dirty while providing robust remote access to an entire network, or if you wish to create site-to-site connections, try setting up a router on your network with a VPN server and client. If you’re working on a budget, the cheapest way to set up your own dedicated VPN router is to upload aftermarket firmware that enables VPN functionality, such as DD-WRT or Tomato, to an inexpensive consumer-level router.
When you’re choosing a VPN router and third-party router firmware, make sure they support the VPN networking protocol you need for your devices. In addition, check the VPN router to verify how many simultaneous VPN users it supports.
Let a third-party VPN provider worry about it
If you merely want VPN access to cloak your Internet traffic while you’re using public Wi-Fi or another untrusted network, or to access regionally restricted sites, the simplest solution is to use a hosted VPN provider. Hotspot Shield is my favorite, as it offers both free and paid VPN services for Windows, Mac, iOS, and Android. HotSpotVPN, StrongVPN, and WiTopia are other paid services we’ve reviewed in the past.
If you want to keep your browsing activity anonymous but can’t spare the cash for a paid VPN, check out Tor, the Onion Router—a network of servers that can anonymize your Internet traffic for free. Visit the Tor website, download the latest browser bundle, and then start browsing with the Tor software enabled. The software will encrypt your connection to the Tor network before routing your Internet traffic through a randomized series of relays across the globe, slowing your browsing speed but cloaking your online activity from prying eyes.
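Onion routing's layered encryption can be sketched in a few lines: the client wraps its traffic once per relay, and each relay peels exactly one layer, so no single relay sees both who you are and where you're going. This toy version uses XOR in place of real cryptography, purely to show the structure:

```python
def xor_layer(data: bytes, key: bytes) -> bytes:
    """Toy 'encryption': XOR with a repeating key. NOT real crypto,
    just a stand-in to illustrate layering."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# Hypothetical per-relay keys along a three-hop circuit.
relay_keys = [b"entry", b"middle", b"exit"]

# The client wraps the payload once per relay, innermost layer last.
payload = b"GET /index.html"
cell = payload
for key in reversed(relay_keys):
    cell = xor_layer(cell, key)

# Each relay peels exactly one layer; only after the last hop
# does the original payload reappear.
for key in relay_keys:
    cell = xor_layer(cell, key)

print(cell == payload)
```

Real Tor uses authenticated symmetric ciphers negotiated per hop, but the peel-one-layer-per-relay structure is the same.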
No matter how you choose to go about it, start using a VPN today. It takes a bit of work up front, but spending the time to get on a VPN is one of the smartest, simplest steps you can take toward making your online activities more secure.
Eric Geier Contributor
Eric Geier is a freelance tech writer as well as the founder of NoWiresSecurity, a cloud-based Wi-Fi security service, and On Spot Techs, an on-site computer services company.
Law-Enforcement Officials Expand Use of Tools Such as Spyware as People Under Investigation ‘Go Dark,’ Evading Wiretaps
By JENNIFER VALENTINO-DEVRIES and DANNY YADRON
Updated Aug. 3, 2013 3:17 p.m. ET
Law-enforcement officials in the U.S. are expanding the use of tools routinely used by computer hackers to gather information on suspects, bringing the criminal wiretap into the cyber age.
Federal agencies have largely kept quiet about these capabilities, but court documents and interviews with people involved in the programs provide new details about the hacking tools, including spyware delivered to computers and phones through email or Web links—techniques more commonly associated with attacks by criminals.
People familiar with the Federal Bureau of Investigation’s programs say that the use of hacking tools under court orders has grown as agents seek to keep up with suspects who use new communications technology, including some types of online chat and encryption tools. The use of such communications, which can’t be wiretapped like a phone, is called “going dark” among law enforcement.
The FBI develops some hacking tools internally and purchases others from the private sector. With such technology, the bureau can remotely activate the microphones in phones running Google Inc.’s Android software to record conversations, one former U.S. official said. It can do the same to microphones in laptops without the user knowing, the person said. Google declined to comment.
The bureau typically uses hacking in cases involving organized crime, child pornography or counterterrorism, a former U.S. official said. It is loath to use these tools when investigating hackers, out of fear the suspect will discover and publicize the technique, the person said.
The FBI has been developing hacking tools for more than a decade, but rarely discloses its techniques publicly in legal cases.
Earlier this year, a federal warrant application in a Texas identity-theft case sought to use software to extract files and covertly take photos using a computer’s camera, according to court documents. The judge denied the application, saying, among other things, that he wanted more information on how data collected from the computer would be minimized to remove information on innocent people.
Since at least 2005, the FBI has been using “web bugs” that can gather a computer’s Internet address, lists of programs running and other data, according to documents disclosed in 2011. The FBI used that type of tool in 2007 to trace a person who was eventually convicted of emailing bomb threats in Washington state, for example.
The FBI “hires people who have hacking skill, and they purchase tools that are capable of doing these things,” said a former official in the agency’s cyber division. The tools are used when other surveillance methods won’t work: “When you do, it’s because you don’t have any other choice,” the official said.
Surveillance technologies are coming under increased scrutiny after disclosures about data collection by the National Security Agency. The NSA gathers bulk data on millions of Americans, but former U.S. officials say law-enforcement hacking is targeted at very specific cases and used sparingly.
Still, civil-liberties advocates say there should be clear legal guidelines to ensure hacking tools aren’t misused. “People should understand that local cops are going to be hacking into surveillance targets,” said Christopher Soghoian, principal technologist at the American Civil Liberties Union. “We should have a debate about that.”
Mr. Soghoian, who is presenting on the topic Friday at the DefCon hacking conference in Las Vegas, said information about the practice is slipping out as a small industry has emerged to sell hacking tools to law enforcement. He has found posts and resumes on social networks in which people discuss their work at private companies helping the FBI with surveillance.
A search warrant would be required to get content such as files from a suspect’s computer, said Mark Eckenwiler, a senior counsel at Perkins Coie LLP who until December was the Justice Department’s primary authority on federal criminal surveillance law. Continuing surveillance would necessitate an even stricter standard, the kind used to grant wiretaps.
But if the software gathers only communications-routing “metadata”—like Internet protocol addresses or the “to” and “from” lines in emails—a court order under a lower standard might suffice if the program is delivered remotely, such as through an Internet link, he said. That is because nobody is physically touching the suspect’s property, he added.
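The distinction is easy to see in code. Running Python's standard email parser over a made-up message, the routing headers—the "metadata" that may fall under the lower standard—separate cleanly from the body, which is "content":

```python
from email.parser import Parser

# A made-up message, just to illustrate the metadata/content split.
raw = (
    "From: sender@example.com\r\n"
    "To: recipient@example.com\r\n"
    "Subject: meeting\r\n"
    "\r\n"
    "The body of the message is content, held to the higher standard.\r\n"
)

msg = Parser().parsestr(raw)
metadata = {"from": msg["From"], "to": msg["To"]}  # routing information
content = msg.get_payload()                        # the message itself

print(metadata)
```
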
An official at the Justice Department said it determines what legal authority to seek for such surveillance “on a case-by-case basis.” But the official added that the department’s approach is exemplified by the 2007 Washington bomb-threat case, in which the government sought a warrant even though no agents touched the computer and the spyware gathered only metadata.
In 2001, the FBI faced criticism from civil-liberties advocates for declining to disclose how it installed a program to record the keystrokes on the computer of mobster Nicodemo Scarfo Jr. to capture a password he was using to encrypt a document. He was eventually convicted.
A group at the FBI called the Remote Operations Unit takes a leading role in the bureau’s hacking efforts, according to former officials.
Officers often install surveillance tools on computers remotely, using a document or link that loads software when the person clicks or views it. In some cases, the government has secretly gained physical access to suspects’ machines and installed malicious software using a thumb drive, a former U.S. official said.
The bureau has controls to ensure only “relevant data” are scooped up, the person said. A screening team goes through all of the data pulled from the hack to determine what is relevant, then hands off that material to the case team and stops working on the case.
The FBI employs a number of hackers who write custom surveillance software, and also buys software from the private sector, former U.S. officials said.
Italian company HackingTeam SRL opened a sales office in Annapolis, Md., more than a year ago to target North and South America. HackingTeam provides software that can extract information from phones and computers and send it back to a monitoring system. The company declined to disclose its clients or say whether any are in the U.S.
U.K.-based Gamma International offers computer exploits, which take advantage of holes in software to deliver spying tools, according to people familiar with the company. Gamma has marketed “0 day exploits”—meaning that the software maker doesn’t yet know about the security hole—for software including Microsoft Corp.’s Internet Explorer, those people said. Gamma, which has marketed its products in the U.S., didn’t respond to requests for comment, nor did Microsoft.
Write to Jennifer Valentino-DeVries at Jennifer.Valentino-DeVries@wsj.com and Danny Yadron at firstname.lastname@example.org
I WAS DRIVING 70 mph on the edge of downtown St. Louis when the exploit began to take hold.
Though I hadn’t touched the dashboard, the vents in the Jeep Cherokee started blasting cold air at the maximum setting, chilling the sweat on my back through the in-seat climate control system. Next the radio switched to the local hip hop station and began blaring Skee-lo at full volume. I spun the control knob left and hit the power button, to no avail. Then the windshield wipers turned on, and wiper fluid blurred the glass.
As I tried to cope with all this, a picture of the two hackers performing these stunts appeared on the car’s digital display: Charlie Miller and Chris Valasek, wearing their trademark track suits. A nice touch, I thought.
The Jeep’s strange behavior wasn’t entirely unexpected. I’d come to St. Louis to be Miller and Valasek’s digital crash-test dummy, a willing subject on whom they could test the car-hacking research they’d been doing over the past year. The result of their work was a hacking technique—what the security industry calls a zero-day exploit—that can target Jeep Cherokees and give the attacker wireless control, via the Internet, to any of thousands of vehicles. Their code is an automaker’s nightmare: software that lets hackers send commands through the Jeep’s entertainment system to its dashboard functions, steering, brakes, and transmission, all from a laptop that may be across the country.
To better simulate the experience of driving a vehicle while it’s being hijacked by an invisible, virtual force, Miller and Valasek refused to tell me ahead of time what kinds of attacks they planned to launch from Miller’s laptop in his house 10 miles west. Instead, they merely assured me that they wouldn’t do anything life-threatening. Then they told me to drive the Jeep onto the highway. “Remember, Andy,” Miller had said through my iPhone’s speaker just before I pulled onto the Interstate 64 on-ramp, “no matter what happens, don’t panic.”
As the two hackers remotely toyed with the air-conditioning, radio, and windshield wipers, I mentally congratulated myself on my courage under pressure. That’s when they cut the transmission.
Immediately my accelerator stopped working. As I frantically pressed the pedal and watched the RPMs climb, the Jeep lost half its speed, then slowed to a crawl. This occurred just as I reached a long overpass, with no shoulder to offer an escape. The experiment had ceased to be fun.
At that point, the interstate began to slope upward, so the Jeep lost more momentum and barely crept forward. Cars lined up behind my bumper before passing me, honking. I could see an 18-wheeler approaching in my rearview mirror. I hoped its driver saw me, too, and could tell I was paralyzed on the highway.
“You’re doomed!” Valasek shouted, but I couldn’t make out his heckling over the blast of the radio, now pumping Kanye West. The semi loomed in the mirror, bearing down on my immobilized Jeep.
I followed Miller’s advice: I didn’t panic. I did, however, drop any semblance of bravery, grab my iPhone with a clammy fist, and beg the hackers to make it stop.
This wasn’t the first time Miller and Valasek had put me behind the wheel of a compromised car. In the summer of 2013, I drove a Ford Escape and a Toyota Prius around a South Bend, Indiana, parking lot while they sat in the backseat with their laptops, cackling as they disabled my brakes, honked the horn, jerked the seat belt, and commandeered the steering wheel. “When you lose faith that a car will do what you tell it to do,” Miller observed at the time, “it really changes your whole view of how the thing works.” Back then, however, their hacks had a comforting limitation: The attacker’s PC had been wired into the vehicles’ onboard diagnostic port, a feature that normally gives repair technicians access to information about the car’s electronically controlled systems.
A mere two years later, that carjacking has gone wireless. Miller and Valasek plan to publish a portion of their exploit on the Internet, timed to a talk they’re giving at the Black Hat security conference in Las Vegas next month. It’s the latest in a series of revelations from the two hackers that have spooked the automotive industry and even helped to inspire legislation; WIRED has learned that senators Ed Markey and Richard Blumenthal plan to introduce an automotive security bill today to set new digital security standards for cars and trucks, first sparked when Markey took note of Miller and Valasek’s work in 2013.
As an auto-hacking antidote, the bill couldn’t be timelier. The attack tools Miller and Valasek developed can remotely trigger more than the dashboard and transmission tricks they used against me on the highway. They demonstrated as much on the same day as my traumatic experience on I-64: after narrowly averting death by semi-trailer, I managed to roll the lame Jeep down an exit ramp, re-engage the transmission by turning the ignition off and on, and find an empty lot where I could safely continue the experiment.
Miller and Valasek’s full arsenal includes functions that at lower speeds fully kill the engine, abruptly engage the brakes, or disable them altogether. The most disturbing maneuver came when they cut the Jeep’s brakes, leaving me frantically pumping the pedal as the 2-ton SUV slid uncontrollably into a ditch. The researchers say they’re working on perfecting their steering control—for now they can only hijack the wheel when the Jeep is in reverse. Their hack enables surveillance too: They can track a targeted Jeep’s GPS coordinates, measure its speed, and even drop pins on a map to trace its route.
All of this is possible only because Chrysler, like practically all carmakers, is doing its best to turn the modern automobile into a smartphone. Uconnect, an Internet-connected computer feature in hundreds of thousands of Fiat Chrysler cars, SUVs, and trucks, controls the vehicle’s entertainment and navigation, enables phone calls, and even offers a Wi-Fi hot spot. And thanks to one vulnerable element, which Miller and Valasek won’t identify until their Black Hat talk, Uconnect’s cellular connection also lets anyone who knows the car’s IP address gain access from anywhere in the country. “From an attacker’s perspective, it’s a super nice vulnerability,” Miller says.
From that entry point, Miller and Valasek’s attack pivots to an adjacent chip in the car’s head unit—the hardware for its entertainment system—silently rewriting the chip’s firmware to plant their code. That rewritten firmware is capable of sending commands through the car’s internal computer network, known as a CAN bus, to its physical components like the engine and wheels. Miller and Valasek say the attack on the entertainment system seems to work on any Chrysler vehicle with Uconnect from late 2013, all of 2014, and early 2015. They’ve only tested their full set of physical hacks, including ones targeting transmission and braking systems, on a Jeep Cherokee, though they believe that most of their attacks could be tweaked to work on any Chrysler vehicle with the vulnerable Uconnect head unit. They have yet to try remotely hacking into other makes and models of cars.
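The CAN messages themselves are tiny: a classic frame carries a numeric identifier and at most eight data bytes. As a rough illustration of what injected traffic looks like at the byte level, here is a frame packed in the Linux SocketCAN wire layout—the ID and payload are invented for illustration, not the actual commands Miller and Valasek used:

```python
import struct

def pack_can_frame(can_id, data):
    """Pack a classic CAN frame in the Linux SocketCAN layout:
    32-bit ID, 1-byte data length, 3 bytes of padding, 8 data bytes
    (16 bytes total on the wire)."""
    if len(data) > 8:
        raise ValueError("classic CAN frames carry at most 8 data bytes")
    return struct.pack("<IB3x8s", can_id, len(data), data.ljust(8, b"\x00"))

# Invented ID and payload, purely to show the frame structure.
frame = pack_can_frame(0x3E8, b"\x01\x00")
print(len(frame))
```

Because the bus has no sender authentication, any node that can put well-formed frames on the wire—including compromised head-unit firmware—speaks with the same authority as the car's own ECUs.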
After the researchers reveal the details of their work in Vegas, only two things will prevent their tool from enabling a wave of attacks on Jeeps around the world. First, they plan to leave out the part of the attack that rewrites the chip’s firmware; hackers following in their footsteps will have to reverse-engineer that element, a process that took Miller and Valasek months. But the code they publish will enable many of the dashboard hijinks they demonstrated on me as well as GPS tracking.
Second, Miller and Valasek have been sharing their research with Chrysler for nearly nine months, enabling the company to quietly release a patch ahead of the Black Hat conference. On July 16, owners of vehicles with the Uconnect feature were notified of the patch in a post on Chrysler’s website that didn’t offer any details or acknowledge Miller and Valasek’s research. “[Fiat Chrysler Automobiles] has a program in place to continuously test vehicles systems to identify vulnerabilities and develop solutions,” reads a statement a Chrysler spokesperson sent to WIRED. “FCA is committed to providing customers with the latest software updates to secure vehicles against any potential vulnerability.”
Unfortunately, Chrysler’s patch must be manually installed via a USB stick or by a dealership mechanic. That means many—if not most—of the vulnerable Jeeps will likely stay vulnerable.
Chrysler stated in a response to questions from WIRED that it “appreciates” Miller and Valasek’s work. But the company also seemed leery of their decision to publish part of their exploit. “Under no circumstances does FCA condone or believe it’s appropriate to disclose ‘how-to information’ that would potentially encourage, or help enable hackers to gain unauthorized and unlawful access to vehicle systems,” the company’s statement reads. “We appreciate the contributions of cybersecurity advocates to augment the industry’s understanding of potential vulnerabilities. However, we caution advocates that in the pursuit of improved public safety they not, in fact, compromise public safety.”
The two researchers say that even if their code makes it easier for malicious hackers to attack unpatched Jeeps, the release is nonetheless warranted because it allows their work to be proven through peer review. It also sends a message: Automakers need to be held accountable for their vehicles’ digital security. “If consumers don’t realize this is an issue, they should, and they should start complaining to carmakers,” Miller says. “This might be the kind of software bug most likely to kill someone.”
In fact, Miller and Valasek aren’t the first to hack a car over the Internet. In 2011 a team of researchers from the University of Washington and the University of California at San Diego showed that they could wirelessly disable the locks and brakes on a sedan. But those academics took a more discreet approach, keeping the identity of the hacked car secret and sharing the details of the exploit only with carmakers.
Miller and Valasek represent the second act in a good-cop/bad-cop routine. Carmakers who failed to heed polite warnings in 2011 now face the possibility of a public dump of their vehicles’ security flaws. The result could be product recalls or even civil suits, says UCSD computer science professor Stefan Savage, who worked on the 2011 study. “Imagine going up against a class-action lawyer after Anonymous decides it would be fun to brick all the Jeep Cherokees in California,” Savage says.
For the auto industry and its watchdogs, in other words, Miller and Valasek’s release may be the last warning before they see a full-blown zero-day attack. “The regulators and the industry can no longer count on the idea that exploit code won’t be in the wild,” Savage says. “They’ve been thinking it wasn’t an imminent danger you needed to deal with. That implicit assumption is now dead.”
471,000 Hackable Automobiles
Sitting on a leather couch in Miller’s living room as a summer storm thunders outside, the two researchers scan the Internet for victims.
Uconnect computers are linked to the Internet by Sprint’s cellular network, and only other Sprint devices can talk to them. So Miller has a cheap Kyocera Android phone connected to his battered MacBook. He’s using the burner phone as a Wi-Fi hot spot, scouring for targets using its thin 3G bandwidth.
A set of GPS coordinates, along with a vehicle identification number, make, model, and IP address, appears on the laptop screen. It’s a Dodge Ram. Miller plugs its GPS coordinates into Google Maps to reveal that it’s cruising down a highway in Texarkana, Texas. He keeps scanning, and the next vehicle to appear on his screen is a Jeep Cherokee driving around a highway cloverleaf between San Diego and Anaheim, California. Then he locates a Dodge Durango, moving along a rural road somewhere in the Upper Peninsula of Michigan. When I ask him to keep scanning, he hesitates. Seeing the actual, mapped locations of these unwitting strangers’ vehicles—and knowing that each one is vulnerable to their remote attack—unsettles him.
When Miller and Valasek first found the Uconnect flaw, they thought it might only enable attacks over a direct Wi-Fi link, confining its range to a few dozen yards. When they discovered the Uconnect’s cellular vulnerability earlier this summer, they still thought it might work only on vehicles on the same cell tower as their scanning phone, restricting the range of the attack to a few dozen miles. But they quickly found even that wasn’t the limit. “When I saw we could do it anywhere, over the Internet, I freaked out,” Valasek says. “I was frightened. It was like, holy fuck, that’s a vehicle on a highway in the middle of the country. Car hacking got real, right then.”
That moment was the culmination of almost three years of work. In the fall of 2012, Miller, a security researcher for Twitter and a former NSA hacker, and Valasek, the director of vehicle security research at the consultancy IOActive, were inspired by the UCSD and University of Washington study to apply for a car-hacking research grant from Darpa. With the resulting $80,000, they bought a Toyota Prius and a Ford Escape. They spent the next year tearing the vehicles apart digitally and physically, mapping out their electronic control units, or ECUs—the computers that run practically every component of a modern car—and learning to speak the CAN network protocol that controls them.
When they demonstrated a wired-in attack on those vehicles at the DefCon hacker conference in 2013, though, Toyota, Ford, and others in the automotive industry downplayed the significance of their work, pointing out that the hack had required physical access to the vehicles. Toyota, in particular, argued that its systems were “robust and secure” against wireless attacks. “We didn’t have the impact with the manufacturers that we wanted,” Miller says. To get their attention, they’d need to find a way to hack a vehicle remotely.
So the next year, they signed up for mechanic’s accounts on the websites of every major automaker and downloaded dozens of vehicles’ technical manuals and wiring diagrams. Using those specs, they rated 24 cars, SUVs, and trucks on three factors they thought might determine their vulnerability to hackers: how many and what types of radios connected the vehicle’s systems to the Internet; whether the Internet-connected computers were properly isolated from critical driving systems; and whether those critical systems had “cyberphysical” components—that is, whether digital commands could trigger physical actions like turning the wheel or activating the brakes.
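A toy version of that three-factor rating might look like the sketch below. The weights and the vehicle data are invented for illustration; they are not Miller and Valasek's actual criteria or figures:

```python
def hackability_score(num_radios, isolated, cyberphysical):
    """Higher is worse. Counts wireless entry points, penalizes a
    missing barrier between infotainment and driving systems, and
    penalizes physically actuated (cyberphysical) features.
    Weights are arbitrary, for illustration only."""
    score = num_radios               # each radio is another way in
    if not isolated:
        score += 3                   # connected computers can reach the CAN bus
    if cyberphysical:
        score += 3                   # digital commands can move the car
    return score

# Hypothetical vehicles: (radios, isolated?, cyberphysical?)
vehicles = {
    "Vehicle A": (4, False, True),
    "Vehicle B": (2, True, True),
    "Vehicle C": (1, True, False),
}
ranking = sorted(vehicles, key=lambda v: hackability_score(*vehicles[v]),
                 reverse=True)
print(ranking)
```
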
Based on that study, they rated Jeep Cherokee the most hackable model. Cadillac’s Escalade and Infiniti’s Q50 didn’t fare much better; Miller and Valasek ranked them second- and third-most vulnerable. When WIRED told Infiniti that at least one of Miller and Valasek’s warnings had been borne out, the company responded in a statement that its engineers “look forward to the findings of this [new] study” and will “continue to integrate security features into our vehicles to protect against cyberattacks.” Cadillac emphasized in a written statement that the company has released a new Escalade since Miller and Valasek’s last study, but that cybersecurity is “an emerging area in which we are devoting more resources and tools,” including the recent hire of a chief product cybersecurity officer.
After Miller and Valasek decided to focus on the Jeep Cherokee in 2014, it took them another year of hunting for hackable bugs and reverse-engineering to prove their educated guess. It wasn’t until June that Valasek issued a command from his laptop in Pittsburgh and turned on the windshield wipers of the Jeep in Miller’s St. Louis driveway.
Since then, Miller has scanned Sprint’s network multiple times for vulnerable vehicles and recorded their vehicle identification numbers. Plugging that data into an algorithm sometimes used for tagging and tracking wild animals to estimate their population size, he estimated that there are as many as 471,000 vehicles with vulnerable Uconnect systems on the road.
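The article doesn’t name the algorithm, but the classic mark-recapture technique for this kind of estimate is the Lincoln-Petersen estimator: sample a population twice and infer its total size from the overlap between the two samples. A minimal sketch, with made-up VINs standing in for tagged animals:

```python
def lincoln_petersen(first_sample, second_sample):
    """Estimate total population size from two independent samples.

    first_sample, second_sample: sets of unique IDs (e.g. VINs)
    observed in each scan.
    """
    marked = len(first_sample)                      # "tagged" in scan A
    captured = len(second_sample)                   # seen in scan B
    recaptured = len(first_sample & second_sample)  # seen in both
    if recaptured == 0:
        raise ValueError("no overlap between samples; cannot estimate")
    # N is approximately (marked * captured) / recaptured
    return marked * captured / recaptured

# Toy data: 100 VINs seen in scan A, 80 in scan B, 20 in both.
scan_a = {f"VIN{i}" for i in range(100)}
scan_b = {f"VIN{i}" for i in range(80, 160)}
print(lincoln_petersen(scan_a, scan_b))  # 100 * 80 / 20 = 400.0
```

The intuition: if 20 of the 80 vehicles in the second scan were already seen in the first, then the first scan’s 100 vehicles are roughly a quarter of everything out there.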
Pinpointing a vehicle belonging to a specific person isn’t easy. Miller and Valasek’s scans reveal random VINs, IP addresses, and GPS coordinates. Finding a particular victim’s vehicle out of thousands is unlikely through the slow and random probing of one Sprint-enabled phone. But enough phones scanning together, Miller says, could allow an individual to be found and targeted. Worse, he suggests, a skilled hacker could take over a group of Uconnect head units and use them to perform more scans—as with any collection of hijacked computers—worming from one dashboard to the next over Sprint’s network. The result would be a wirelessly controlled automotive botnet encompassing hundreds of thousands of vehicles.
“For all the critics in 2013 who said our work didn’t count because we were plugged into the dashboard,” Valasek says, “well, now what?”
Congress Takes on Car Hacking
Now the auto industry needs to do the unglamorous, ongoing work of actually protecting cars from hackers. And Washington may be about to force the issue.
Later today, senators Markey and Blumenthal intend to reveal new legislation designed to tighten cars’ protections against hackers. The bill (which a Markey spokesperson insists wasn’t timed to this story) will call on the National Highway Traffic Safety Administration and the Federal Trade Commission to set new security standards and create a privacy and security rating system for consumers. “Controlled demonstrations show how frightening it would be to have a hacker take over controls of a car,” Markey wrote in a statement to WIRED. “Drivers shouldn’t have to choose between being connected and being protected…We need clear rules of the road that protect cars from hackers and American families from data trackers.”
Markey has keenly followed Miller and Valasek’s research for years. Citing their 2013 Darpa-funded research and hacking demo, he sent a letter to 20 automakers, asking them to answer a series of questions about their security practices. The answers, released in February, show what Markey describes as “a clear lack of appropriate security measures to protect drivers against hackers who may be able to take control of a vehicle.” Of the 16 automakers who responded, all confirmed that virtually every vehicle they sell has some sort of wireless connection, including Bluetooth, Wi-Fi, cellular service, and radios. (Markey didn’t reveal the automakers’ individual responses.) Only seven of the companies said they hired independent security firms to test their vehicles’ digital security. Only two said their vehicles had monitoring systems that checked their CAN networks for malicious digital commands.
UCSD’s Savage says the lesson of Miller and Valasek’s research isn’t that Jeeps or any other vehicles are particularly vulnerable, but that practically any modern vehicle could be vulnerable. “I don’t think there are qualitative differences in security between vehicles today,” he says. “The Europeans are a little bit ahead. The Japanese are a little bit behind. But broadly writ, this is something everyone’s still getting their hands around.”
Aside from wireless hacks used by thieves to open car doors, only one malicious car-hacking attack has been documented: In 2010 a disgruntled employee in Austin, Texas, used a remote shutdown system meant for enforcing timely car payments to brick more than 100 vehicles. But the opportunities for real-world car hacking have only grown, as automakers add wireless connections to vehicles’ internal networks. Uconnect is just one of a dozen telematics systems, including GM Onstar, Lexus Enform, Toyota Safety Connect, Hyundai Bluelink, and Infiniti Connection.
In fact, automakers are thinking about their digital security more than ever before, says Josh Corman, the cofounder of I Am the Cavalry, a security industry organization devoted to protecting future Internet-of-things targets like automobiles and medical devices. Thanks to Markey’s letter, and another set of questions sent to automakers by the House Energy and Commerce Committee in May, Corman says, Detroit has known for months that car security regulations are coming.
But Corman cautions that the same automakers have been more focused on competing with each other to install new Internet-connected cellular services for entertainment, navigation, and safety. (Payments for those services also provide a nice monthly revenue stream.) The result is that the companies have an incentive to add Internet-enabled features—but not to secure them from digital attacks. “They’re getting worse faster than they’re getting better,” he says. “If it takes a year to introduce a new hackable feature, then it takes them four to five years to protect it.”
Corman’s group has been visiting auto industry events to push five recommendations: safer design to reduce attack points, third-party testing, internal monitoring systems, segmented architecture to limit the damage from any successful penetration, and the same Internet-enabled security software updates that PCs now receive. The last of those in particular is already catching on; Ford announced a switch to over-the-air updates in March, and BMW used wireless updates to patch a hackable security flaw in door locks in January.
Corman says carmakers need to befriend hackers who expose flaws, rather than fear or antagonize them—just as companies like Microsoft have evolved from threatening hackers with lawsuits to inviting them to security conferences and paying them “bug bounties” for disclosing security vulnerabilities. For tech companies, Corman says, “that enlightenment took 15 to 20 years.” The auto industry can’t afford to take that long. “Given that my car can hurt me and my family,” he says, “I want to see that enlightenment happen in three to five years, especially since the consequences for failure are flesh and blood.”
As I drove the Jeep back toward Miller’s house from downtown St. Louis, however, the notion of car hacking hardly seemed like a threat that will wait three to five years to emerge. In fact, it seemed more like a matter of seconds; I felt the vehicle’s vulnerability, the nagging possibility that Miller and Valasek could cut the puppet’s strings again at any time.
The hackers holding the scissors agree. “We shut down your engine—a big rig was honking up on you because of something we did on our couch,” Miller says, as if I needed the reminder. “This is what everyone who thinks about car security has worried about for years. This is a reality.”
Update 3:30 7/24/2015: Chrysler has issued a recall for 1.4 million vehicles as a result of Miller and Valasek’s research. The company has also blocked their wireless attack on Sprint’s network to protect vehicles with the vulnerable software.
Correction 10:45 7/21/2015: An earlier version of the story stated that the hacking demonstration took place on Interstate 40, when in fact it was Route 40, which coincides in St. Louis with Interstate 64.
Correction 1:00pm 7/27/2015: An earlier version of this story referenced a Range Rover recall due to a hackable software bug that could unlock the vehicles’ doors. While the software bug did lead to doors unlocking, it wasn’t publicly determined to be exploitable by hackers.
In this episode of “Phreaked Out,” top security researchers in the field of car hacking highlight security holes in automobile technology. In the simplest terms, hackers have discovered how to unlock a vehicle’s doors and relieve you of documents or valuables. More alarmingly, the researchers show how an experienced hacker can access a vehicle’s main computer system and take remote control of the automobile, including steering, accelerating, braking, and cutting the ignition. These exploits have largely gone unaddressed by American auto manufacturers, though they are becoming increasingly aware of the threat.
The federal government has been fighting hard for years to hide details about its use of so-called stingray surveillance technology from the public.
The surveillance devices simulate cell phone towers in order to trick nearby mobile phones into connecting to them and revealing the phones’ locations.
Now documents recently obtained by the ACLU confirm long-held suspicions that the controversial devices are also capable of recording numbers for a mobile phone’s incoming and outgoing calls, as well as intercepting the content of voice and text communications. The documents also discuss the possibility of flashing a phone’s firmware “so that you can intercept conversations using a suspect’s cell phone as a bug.”
The information appears in a 2008 guideline prepared by the Justice Department to advise law enforcement agents on when and how the equipment can be legally used.
The American Civil Liberties Union of Northern California obtained the documents (.pdf) after a protracted legal battle involving a two-year-old public records request. The documents include not only policy guidelines, but also templates for submitting requests to courts to obtain permission to use the technology.
The DoJ ironically acknowledges in the documents that the use of the surveillance technology to locate cellular phones “is an issue of some controversy,” but it doesn’t elaborate on the nature of the controversy. Civil liberties groups have been fighting since 2008 to obtain information about how the government uses the technology, and under what authority.
Stingrays go by a number of different names, including cell-site simulator, triggerfish, IMSI-catcher, Wolfpack, Gossamer, and swamp box, according to the documents. They can be used to determine the location of phones, computers using open wireless networks, and PC wireless data cards, also known as air cards.
The devices, generally the size of a suitcase, work by emitting a stronger signal than nearby towers in order to force a phone or mobile device to connect to them instead of a legitimate tower. Once a mobile device connects, the phone reveals its unique device ID, after which the stingray releases the device so that it can connect to a legitimate cell tower, allowing data and voice calls to go through. Assistance from a cell phone carrier isn’t required to use the technology, unless law enforcement doesn’t know the general location of a suspect and needs to pinpoint a geographical area in which to deploy the stingray. Once a phone’s general location is determined, investigators can use a handheld device that provides more pinpoint precision in the location of a phone or mobile device—this includes being able to pinpoint an exact office or apartment where the device is being used.
In addition to the device ID, the devices can collect additional information.
“If the cellular telephone is used to make or receive a call, the screen of the digital analyzer/cell site simulator/triggerfish would include the cellular telephone number (MIN), the call’s incoming or outgoing status, the telephone number dialed, the cellular telephone’s ESN, the date, time, and duration of the call, and the cell site number/sector (location of the cellular telephone when the call was connected),” the documents note.
In order to use the devices, agents are instructed to obtain a pen register/trap and trace court order. Pen registers are traditionally used to obtain phone numbers called and the “to” field of emails, while trap and trace is used to collect information about received calls and the “from” information of emails.
When using a stingray to identify the specific phone or mobile device a suspect is using, “collection should be limited to device identifiers,” the DoJ document notes. “It should not encompass dialed digits, as that would entail surveillance on the calling activity of all persons in the vicinity of the subject.”
The documents add, however, that the devices “may be capable of intercepting the contents of communications and, therefore, such devices must be configured to disable the interception function, unless interceptions have been authorized by a Title III order.”
Title III is the federal wiretapping law that allows law enforcement, with a court order, to intercept communications in real time.
Civil liberties groups have long suspected that some stingrays used by law enforcement have the ability to intercept the content of voice calls and text messages. But law enforcement agencies have insisted that the devices they use are not configured to do so. Another controversial capability involves the ability to block mobile communications, such as in war zones to prevent attackers from using a mobile phone to trigger an explosive, or during political demonstrations to prevent activists from organizing by mobile phone. Stingray devices used by police in London have both of these capabilities, but it’s not known how often or in what capacity they have been used.
The documents also note that law enforcement can use the devices without a court order under “exceptional” circumstances. Most surveillance laws include such provisions to give investigators the ability to conduct rapid surveillance under emergency circumstances, such as when lives are at stake. Investigators are then to apply for a court order within 24 hours after the emergency surveillance begins. But according to the documents, the DoJ considers “activity characteristic of organized crime” and “an ongoing attack of a protected computer (one used by a financial institution or U.S. government) where violation is a felony” to be exceptions as well. In other words, an emergency situation could be a hack involving a financial institution.
“While such crimes are potentially serious, they simply do not justify bypassing the ordinary legal processes that were designed to balance the government’s need to investigate crimes with the public’s right to a government that abides by the law,” Linda Lye, senior staff attorney for the ACLU of Northern California, notes in a blog post about the documents.
Another issue of controversy relates to the language that investigators use to describe the stingray technology. Templates for requesting a court order from judges advise the specific terminology investigators should use and never identify the stingray by name. They simply describe the tool as either a pen register/trap and trace device or a device used “to detect radio signals emitted from wireless cellular telephones in the vicinity of the Subject that identify the telephones.”
The ACLU has long accused the government of misleading judges in using the pen register/trap and trace term—since stingrays are primarily used not to identify phone numbers called and received, but to track the location and movement of a mobile device.
Investigators also seldom tell judges that the devices collect data from all phones in the vicinity of a stingray—not just a targeted phone—and can disrupt regular cell service.
It’s not known how quickly stingrays release devices that connect to them, allowing them to then connect to a legitimate cell tower. During the period that devices are connected to a stingray, disruption can occur for anyone in the vicinity of the technology.
Disruption can also occur from the way stingrays force-downgrade mobile devices from 3G and 4G connectivity to 2G if they are being used to intercept the content of communications.
In order for the kind of stingray used by law enforcement to work for this purpose, it exploits a vulnerability in the 2G protocol. Phones using 2G don’t authenticate cell towers, which means that a rogue tower can pass itself off as a legitimate cell tower. But because 3G and 4G networks have fixed this vulnerability, the stingray will jam these networks to force nearby phones to downgrade to the vulnerable 2G network to communicate.
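The downgrade logic can be modeled in a few lines. This is a purely illustrative sketch: the tower list, the ranking rule, and all the names are invented, and real baseband firmware is far more complex. But it shows why jamming the authenticated generations hands the phone to the loudest 2G tower:

```python
# Phones prefer the highest-generation network available, then the
# strongest signal. A rogue tower can only impersonate 2G, because
# 3G/4G networks authenticate the tower to the phone.
GEN_RANK = {"2G": 0, "3G": 1, "4G": 2}

def pick_tower(towers, jammed=()):
    """Pick the best tower: highest generation first, then strongest
    signal. towers: list of (name, generation, signal_dBm)."""
    usable = [t for t in towers if t[1] not in jammed]
    return max(usable, key=lambda t: (GEN_RANK[t[1]], t[2]))

towers = [
    ("carrier-4G", "4G", -70),
    ("carrier-2G", "2G", -85),
    ("stingray-2G", "2G", -50),  # rogue tower, transmitting loudest
]

print(pick_tower(towers)[0])                        # carrier-4G
print(pick_tower(towers, jammed={"3G", "4G"})[0])   # stingray-2G
```

With 4G intact, the phone never sees the stingray; once 3G and 4G are jammed, the unauthenticated 2G band is all that remains, and the loudest tower wins.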
“Depending on how long the jamming is taking place, there’s going to be disruption,” Chris Soghoian, chief technologist for the ACLU, has told WIRED previously. “When your phone goes down to 2G, your data just goes to hell. So at the very least you will have disruption of internet connectivity. And if and when the phones are using the stingray as their only tower, there will likely be an inability to receive or make calls.”
Concerns about the use of stingrays are growing. Last March, Senator Bill Nelson (D—Florida) sent a letter to the FCC calling on the agency to disclose information about its certification process for approving stingrays and any other tools with similar functionality. Nelson asked in particular for information about any oversight put in place to make sure that use of the devices complies with the manufacturer’s representations to the FCC about how the technology works and is used.
Nelson also raised concerns about their use in a remarkable speech on the Senate floor. The Senator said the technology “poses a grave threat to consumers’ cellphone and Internet privacy,” particularly when law enforcement agencies use them without a warrant.
The increased attention prompted the Justice Department this month to release a new federal policy on the use of stingrays, requiring a warrant any time federal investigators use them. The rules, however, don’t apply to local police departments, which are among the most prolific users of the technology and have been using them for years without obtaining a warrant.
Smartphones are vulnerable to hacks when connected to a network—whether cellular or Wi-Fi. In the third and final episode of Phreaked Out, the series examines three real-time phone hacks: man-in-the-middle attacks, the Snoopy exploit, and intercepting cellular call data using an IMSI catcher.
The National Security Agency isn’t the only group with the technology that can look into wireless data, but there are ways users can protect themselves from Snoopy.
by Sean Michael Kerner
Every day, billions of people around the globe connect wirelessly, leaving a veritable trail of identifiable breadcrumbs that can be followed, tracked and analyzed by security researchers. At the upcoming Black Hat Brazil event in November, researchers from security firm SensePost will debut an updated version of their distributed mobile tracking and analysis project, dubbed Snoopy.

Glenn Wilkinson, lead security analyst at SensePost, explained to eWEEK that Snoopy is a distributed tracking, data interception and profiling framework. SensePost researchers first built Snoopy in 2012 as a very rough proof of concept and have now rewritten the framework to be more modular and scalable.

The Snoopy system involves endpoint sensor devices that serve as data collection nodes, and a back-end infrastructure that collects and helps make sense of all the collected data. The Snoopy node software, or Drone, can run on small Linux devices, including a BeagleBone Black, and the back end can run on Linux servers.

“Snoopy can be run on multiple devices over a large area, say the entire city of London, UK,” Wilkinson said. “The Snoopy framework can then also synchronize all the data in a centralized database.”
The first iteration of Snoopy specifically looked at WiFi signals but is now being expanded to include other types of wireless signals, including Bluetooth and near-field communications (NFC). At a basic level, Snoopy is looking for any kind of signal emitted by an electronic device that can then be used to uniquely identify the device and perhaps the individual who owns it.
Snoopy collects the data by abusing functionality that is part of most WiFi stacks on mobile devices. The way that WiFi works in nearly all cases is that the system is always probing for signals from access points it has previously connected to. As a feature, that means if a user has previously connected to his or her own office access point, the device will reconnect automatically whenever it is in range of that access point.
“When your smartphone is looking for all of the access points it has previously connected to, it is revealing your wireless adapter’s MAC (Media Access Control) address,” Wilkinson said. “That’s a unique number for the device, so we can identify the device as being at a particular location at a point in time.”
So in a large-scale Snoopy deployment with nodes over a distributed area, Snoopy could track the movement of a device over time.
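The correlation step is simple in principle. The sketch below is a toy model, not Snoopy’s actual code: the sighting records, sensor names, and MAC addresses are all invented. It shows how a back end might group probe-request sightings by MAC address to reconstruct a device’s movements:

```python
from collections import defaultdict

# Each record: (mac_address, sensor_location, timestamp).
# In a real deployment these would come from Wi-Fi probe-request
# frames captured by the drone nodes; here they are made up.
sightings = [
    ("aa:bb:cc:11:22:33", "coffee-shop", "09:02"),
    ("de:ad:be:ef:00:01", "coffee-shop", "09:05"),
    ("aa:bb:cc:11:22:33", "train-station", "09:41"),
    ("aa:bb:cc:11:22:33", "office-block", "10:15"),
]

def track(sightings):
    """Group sightings by MAC address to reconstruct each device's
    path through the sensor network, ordered by time."""
    paths = defaultdict(list)
    for mac, where, when in sightings:
        paths[mac].append((when, where))
    return {mac: sorted(p) for mac, p in paths.items()}

paths = track(sightings)
print(paths["aa:bb:cc:11:22:33"])
# [('09:02', 'coffee-shop'), ('09:41', 'train-station'), ('10:15', 'office-block')]
```

One MAC address seen by three sensors at three times is already a movement profile; scale the sensor grid up to a city and the profile becomes a commute.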
Snoopy also includes the Karma attack, a wireless attack that impersonates the name of previously connected access points. In a Karma attack, when the wireless device is looking for its previously connected access points, Karma responds, identifying itself as one of those access points, and tricks the user into connecting. Once the victim has been connected to the rogue access point via Karma, Snoopy can then intercept data and also manipulate the data the user sees.
From an analysis perspective, the new Snoopy Framework makes use of the open-source Maltego data visualization project to provide a graphical front end and tools to understand all the data that the Snoopy node can collect.
Daniel Cuthbert, chief operating officer at SensePost, told eWEEK that from a business standpoint, his company is still figuring out the best license and approach for the Snoopy project. Cuthbert said he would like to emulate the approach taken by the open-source Metasploit penetration testing framework. Metasploit has a core open-source project and then layers enterprise editions with additional reporting functionality and support on top.
There are a number of things individuals can do to limit the risk of being snooped on by Snoopy. Wilkinson suggests that users flush the recently connected networks list on their mobile devices. He noted that the Karma-style attacks only work effectively for recently connected open networks.
Wilkinson also suggests that users keep WiFi turned off until such time as they need to connect.
“People are carrying devices in their pockets that are emitting signals that allow them to be uniquely identified,” Wilkinson said. “So I suspect the bigger message going forward is for people to be aware of what they are carrying that might give away some unique identifier and leak information.”
Sean Michael Kerner is a senior editor at eWEEK and InternetNews.com. Follow him on Twitter @TechJournalist.
In my October 23 blog, I mentioned that iOS 4.3.4 was susceptible to a man-in-the-middle attack that was later corrected in iOS 4.3.5. These attacks are frequently mentioned in the security literature, but many of you may still be wondering what they are exactly and how they work. With this article, I’ll explain what man-in-the-middle attacks are and how you can avoid falling prey to them.
How the Attack Works
To see how man-in-the-middle attacks work, consider the illustration below. Network traffic normally travels directly between two computers that communicate with each other over the Internet, in this case the computers belonging to User 1 and User 2.
The man-in-the-middle attack uses a technique called ARP spoofing to trick User 1’s computer into thinking that it is communicating with User 2’s computer and User 2’s computer into thinking that it is communicating with User 1’s computer. This causes network traffic between the two computers to flow through the attacker’s system, which enables the attacker to inspect all the data that is sent between the victims, including user names, passwords, credit card numbers, and any other information of interest.

Man-in-the-middle attacks can be particularly effective at cafes and libraries that offer their patrons Wi-Fi access to the Internet. In open networking environments such as these, you are directly exposed to other computers over unencrypted networks, where your network traffic can be readily snatched.
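The in-the-middle position itself can be illustrated with a toy model. This sketch doesn’t perform ARP spoofing (which operates at the network layer); the relay is made explicit, and every name in it is invented, so the interception is easy to see:

```python
# Toy model of a man-in-the-middle relay. In a real attack, ARP
# spoofing silently reroutes the victims' traffic; here the attacker
# function is called directly to make the flow visible.

intercepted = []

def attacker_relay(sender, receiver, message):
    """Traffic 'between' the victims actually passes through the
    attacker, who can read (or alter) it before forwarding it."""
    intercepted.append((sender, receiver, message))  # inspect and log
    return message  # forward unchanged; tampering is equally possible

# User 1 believes this message goes straight to User 2:
delivered = attacker_relay("user1", "user2", "password=hunter2")
print(delivered)    # the message arrives normally...
print(intercepted)  # ...but the attacker kept a copy
```

The point of the model: both victims observe a perfectly working connection, which is exactly why the attack is hard to notice without encryption.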
How to Avoid Being Attacked
In practice, ARP spoofing is difficult to prevent with the conventional security tools that come with your PC or Mac. However, you can make it difficult for people to view your network traffic by using encrypted network connections provided by HTTPS or VPN (virtual private network) technology.
HTTPS uses the secure sockets layer (SSL) capability in your browser to mask your web-based network traffic from prying eyes. VPN client software works in a similar fashion – some VPNs also use SSL – but you must connect to a VPN access point, such as your company network, if it supports VPN. To decrypt HTTPS or VPN traffic, a man-in-the-middle attacker would have to obtain the keys used to encrypt the network traffic, which is difficult, but not impossible, to do.
When communicating over HTTPS, your web browser uses certificates to verify the identity of the servers you are connecting to. These certificates are verified by reputable third-party certificate authorities like VeriSign.
If your browser does not recognize the authority of the certificate sent from a particular server, it will display a message indicating that the server’s certificate is not trusted, which means it may be coming from a man-in-the-middle attacker. In this situation you should not proceed with the HTTPS session, unless you already know that the server can be trusted – like when you or the company you work for set up the server for employees only.
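Programs, not just browsers, apply the same verification rules. As a small illustration, Python’s standard ssl module enables both checks by default; this sketch simply inspects those defaults:

```python
import ssl

# A default client context refuses servers whose certificate chain
# doesn't lead back to a trusted authority, and checks that the
# server's hostname matches its certificate, mirroring what a
# browser does for HTTPS.
ctx = ssl.create_default_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True: cert must validate
print(ctx.check_hostname)                    # True: name must match cert

# Disabling these checks (as some scripts do to silence certificate
# errors) re-opens the door to a man-in-the-middle:
# ctx.check_hostname = False
# ctx.verify_mode = ssl.CERT_NONE   # don't do this
```

The commented-out lines show exactly the shortcut that turns a warning like the browser’s “certificate not trusted” message into silent acceptance of an attacker.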