The Great Tech Panic: What You Should (and Shouldn’t) Worry About

SHOULD WE WORRY?

Technology is transforming our lives so profoundly, so quickly, that it can be scary. We asked experts to weigh in on how worried we should be about self-driving cars, rogue nuke launches, evil AI, and more.

Threat level scale:
1 = nah, you’re fine. 5 = yep, you’re screwed.

WILL A SELF-DRIVING CAR RUN ME OVER?


Relax. You can loosen your white-knuckled grip on the steering wheel. In cities where self-driving cars are being tested on public roads—San Francisco; Boston; Tempe, Arizona—there’s a trained engineer on board to make sure the nascent tech doesn’t start taking out squirrels (or pedestrians). “It’s that person’s job to pay attention to what the vehicle is doing,” says Nidhi Kalra, codirector of the RAND Center for Decision Making Under Uncertainty. Fully autonomous cars on public roads are still at least three years away, according to experts’ most optimistic estimates. That technology will never be infallible; people will still die in car crashes. But ultimately, self-driving vehicles are more likely to save lives, says Mark Rosekind, chief safety innovation officer at robotaxi startup Zoox and former head of the National Highway Traffic Safety Administration: 94 percent of crashes are attributed to human error. —Aarian Marshall

WILL HACKERS LEAK MY EMAILS?


Gaining access to your email isn’t that difficult. Phishers have grown considerably more sophisticated, as evidenced by the increasing intensity of ransomware attacks. Not that they need to be all that smart. “A cleverly composed email that says ‘I’m your tech support person and I need to know your password’ still works a shocking percentage of the time,” says Seth Schoen, senior staff technologist at the Electronic Frontier Foundation. Don’t freak out, though: When it comes to leaking those emails, the threat to the average person is quite minimal. Though attacks like the DNC leaks, the Panama Papers, and the Macron campaign hack may stoke your sense of paranoia, unless you’re a Kardashian or a Trump, your personal correspondence is likely of little interest to cyberthieves. —Henri Gendreau

ARE WE PREPARED FOR CYBERWAR?



In his 2010 book, Cyberwar, former US counterterrorism czar Richard Clarke ranked how well a handful of countries would fare in a digital conflict. According to his formula, the US placed dead last. And on top? North Korea.

The US and Russia may have the world’s best offensive hacking capabilities, Clarke figured, but North Korea has an even greater advantage: a lack of digital dependence. The hermit kingdom’s hackers can wage a scorched-earth cyberwar without jeopardizing much on the home front, because its citizens remain so disconnected. The US, meanwhile, is far more dependent on the internet than its rivals are. That’s why Clarke found America so uniquely vulnerable to what he called “the next threat to national security.”

Seven years later, it’s time to stop worrying that the era of cyberwar is coming. We need to accept that crippling digital attacks on infrastructure are inevitable—and worry instead about how we’re going to recover from them. That means dialing down our dependence on digital systems. No, not to North Korea levels. But we can do a better job of maintaining our reliable, old-fashioned, analog systems, so we can fall back on them when digital disaster strikes.

In 2015, when a team of hackers blacked out dozens of electrical substations in Ukraine (see “Lights Out,” issue 25.07), utility companies there had technicians ready to manually switch the power back on in just six hours. They were on alert because Ukraine’s Soviet-era grid is creaky on a good day. America’s modern, highly automated grids don’t break nearly as often; US institutions need to develop Ukrainian-style readiness in case of a grid attack.

Voting machines need auditable paper ballots as a backup in case of meddling. Organizations of all kinds need to keep updated, offline data backups for a quick recovery after cyberattacks such as the global WannaCry ransomware outbreak. (The prime suspect in that case? North Korea. Chalk one up for Clarke.) Google designers have long insisted that self-driving cars shouldn’t have steering wheels; from a cybersecurity standpoint, it might be worth revisiting that question.
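In practice, an “offline backup” can start as simply as the Python sketch below: a dated, compressed snapshot of a folder written to a drive you physically unplug afterward, out of ransomware’s reach. (The paths are placeholders; treat this as an illustration of the idea, not a vetted backup tool.)

```python
# A bare-bones illustration of the "keep offline backups" advice:
# snapshot a folder into a dated .tar.gz on a drive that gets
# disconnected after the copy. SOURCE and DEST are placeholders.
import shutil
from datetime import date
from pathlib import Path

SOURCE = Path.home() / "Documents"          # what to protect
DEST = Path("/mnt/offline-drive/backups")   # the drive you unplug after

DEST.mkdir(parents=True, exist_ok=True)
archive = shutil.make_archive(
    str(DEST / f"snapshot-{date.today().isoformat()}"),  # base name
    "gztar",                                             # .tar.gz format
    root_dir=str(SOURCE),
)
print(f"Wrote {archive}; now disconnect the drive.")
```

The point of the exercise is the last step: a backup that stays plugged in is just another target.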

We don’t need to give up the hyperconnected infrastructure of the future, but we need to embrace the era of the manual override. Because when hackers hijack the elevator to your high-rise apartment, you’ll be glad you can take the stairs. —Andy Greenberg


WILL HACKERS LAUNCH NUCLEAR WEAPONS?

Despite the action flicks imagining that very scenario, it’s “highly improbable,” says Bruce Bennett, a senior researcher at RAND who specializes in counterproliferation and risk management. “Nuclear weapons are not connected to the internet, making it difficult for someone to hack them.” Instead, such weapons are controlled by standalone computers and code keys distributed by human couriers, a system specifically developed and maintained with security in mind, says John Schilling, an aerospace engineer and analyst for 38 North, a Korea-focused analysis group. It may be possible to sabotage a nuclear bomb by hacking its secondary and tertiary guidance systems—a tactic the US may have used on North Korea’s missiles, according to Schilling—but there’s little chance that rogue agents could launch nukes. —Lily Hay Newman

WILL AI TURN AGAINST ME?


AI could eventually be capable of conducting science experiments, executing construction projects, and even (gulp!) developing more AI—all without human input, says Paul Christiano, a researcher at the nonprofit OpenAI. But he and his colleagues aren’t worried that evil robots will someday destroy us. (In case it tries, engineers at Google’s DeepMind unit and Oxford’s Future of Humanity Institute are collaborating to understand which types of AI systems might take actions to reduce the chance of being turned off.) They’re more concerned that, as AI progresses beyond human comprehension, the technology’s behavior may diverge from our intended goals. It’s up to researchers to build a foundation that has human values at heart. Much of that work is currently focused on refining a rewards-based training system called reinforcement learning and programming robots to ask for guidance from people when needed. In the end, AI is only as good as the data we feed it. And humans are inherently good … aren’t we? —Lexi Pandell
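For a concrete taste of that “rewards-based training,” consider the toy Python sketch below (our own illustration, not OpenAI’s or DeepMind’s actual code). It uses Q-learning, a textbook reinforcement-learning algorithm: the agent is told nothing except that the last cell of a corridor pays a reward, and from that signal alone it learns to walk there.

```python
# Toy reinforcement learning: an agent on a 5-cell corridor learns,
# purely from rewards, to walk right toward a goal it was never
# explicitly told about.
import random

N_STATES = 5             # cells 0..4; the reward waits at cell 4
ACTIONS = [-1, +1]       # step left or step right
EPSILON = 0.1            # how often the agent explores at random
ALPHA, GAMMA = 0.5, 0.9  # learning rate, discount factor

# Q[state][action_index] = learned estimate of future reward
Q = [[0.0, 0.0] for _ in range(N_STATES)]

for episode in range(200):
    state = 0
    while state != N_STATES - 1:
        # Explore occasionally; otherwise take the best-known action
        if random.random() < EPSILON:
            a = random.randrange(2)
        else:
            a = 0 if Q[state][0] > Q[state][1] else 1
        next_state = min(max(state + ACTIONS[a], 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        # Core update: nudge the estimate toward reward + discounted future
        Q[state][a] += ALPHA * (reward + GAMMA * max(Q[next_state]) - Q[state][a])
        state = next_state

print("Learned action per cell (0=left, 1=right):",
      [Q[s].index(max(Q[s])) for s in range(N_STATES - 1)])
```

Researchers’ worry, in miniature: the agent optimizes exactly the reward it is given, not the intent behind it.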

AM I BEING SPIED ON THROUGH MY MICROPHONE-EQUIPPED DEVICES?


It’s certainly possible. Cybercriminals, third-party developers, and sometimes even the companies that make smart devices may have the means to access your audio stream. “We’re always accompanied by high-quality microphones,” says Mordechai Guri, head of R&D for the Cyber Security Research Center at Ben-Gurion University in Israel. “Your smartphone or smart TV could be turned into an eavesdropping device for advertising purposes.” The same goes for smart home devices, like Amazon Echo and Google Home, with far-field, always-on microphones. More and more, even apps are asking to access smartphone microphones to feed you hyper-targeted ads. It’s unlikely that there’s a person listening in on you, says Jay Stanley, a senior policy analyst at the American Civil Liberties Union, “but increasingly there may be some form of AI that is.” One way to reduce your exposure? Check your privacy settings to see which apps have been granted microphone privileges. —Lily Hay Newman
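For the curious, here is roughly what that settings check is doing under the hood. The Python sketch below is an illustration assuming an Android phone with USB debugging enabled and the adb tool installed; it lists installed apps whose microphone permission has been granted. Your phone’s privacy settings screen shows the same information with less typing.

```python
# Rough sketch of the "which apps can use my microphone" check,
# done over adb. Assumes adb is on your PATH and a device is
# connected with USB debugging turned on.
import subprocess

def adb(*args):
    """Run an adb shell command and return its output as text."""
    return subprocess.run(["adb", "shell", *args],
                          capture_output=True, text=True).stdout

# "pm list packages" prints one "package:<name>" line per installed app
packages = [line.split(":", 1)[1].strip()
            for line in adb("pm", "list", "packages").splitlines()
            if line.startswith("package:")]

# Querying every package is slow; fine for a one-off audit
for pkg in packages:
    info = adb("dumpsys", "package", pkg)
    if "android.permission.RECORD_AUDIO: granted=true" in info:
        print(pkg, "can record audio")
```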


