OPINION

Brave New World?

The opinions expressed by columnists are their own and do not necessarily represent the views of Townhall.com.

We were promised flying cars and virtual reality indistinguishable from the real thing; what we got was a mess. The future wasn’t all it was cracked up to be, but it was never going to be. At least not in our lifetimes.

That technology has made our lives easier, and some would say better, isn’t really a matter for debate. But advances have begun to outpace common sense and, more importantly, our readiness for them.

This week, two stories highlighted just how unprepared we are for the future that resides just around the corner.

First, the “Facebook data breach” that wasn’t. 

You’ve undoubtedly heard the stories about how Cambridge Analytica, a campaign data company that worked for the Trump campaign, either stole, hacked, or through some other nefarious means mined personal information from 50 million Americans and used it to target Facebook ads at people likely to be receptive to them. Honestly, it’s a yawn of a story, and it would be ignored if a Democrat had done it. Actually, it wouldn’t be ignored, it would be praised…because it was, when the Obama campaign did essentially the same thing in 2012.

“Why not try sifting through self-described supporters’ Facebook pages in search of friends who might be on the campaign’s list of the most persuadable voters? Then the campaign could ask the self-identified supporters to bring their undecided friends along. The technique, as they saw it, could also get supporters to urge friends to register to vote, to vote early or to volunteer and donate,” wrote the New York Times in a 2013 story about President Obama’s reelection campaign.

Add to that the confession by a former Obama campaign staffer that the campaign basically took whatever it wanted from Facebook, with the company’s blessing, and you begin to think there might be a double standard at work here.

Hilariously, the pearl-clutching extended to the concept of data collection by Facebook itself. What did people think the company does? Having billions of members who join something for free isn’t exactly a pathway to riches, unless and until you do something with all the information your members voluntarily give you. It’s a trade: you get to post cat videos and vacation pictures on their website to make your friends jealous, and they get to sell advertisements based on the information you give them.

Would anyone have signed up for Facebook if it cost $50 a year but carried no ads? Maybe, but only a fraction of its current membership. Just as people wouldn’t use Gmail if they had to pay for it. But those “free” services have to be paid for somehow. The piper doesn’t play for free.

The conveniences and luxuries technology offers us come at a price. Not cash, but a tiny bit of our privacy. If you don’t like it, you don’t have to sign up. But just because no one reads the iTunes user agreement doesn’t mean you aren’t bound by it.

The details of the Cambridge Analytica story are unimportant: users agreed to the terms, and the company exploited them. If it didn’t involve the Trump campaign, it wouldn’t be a story. Since it did, there will likely be congressional hearings.

(I will say this: It’s rather amusing watching the media report this Facebook data story in outraged, “invasion of privacy” terms while downplaying the clear abuse of the FISA system to spy on Trump campaign officials. One is a private company going about its business legally; the other is the government spying on its citizens and trying to send some of them to prison. Which is worse, apparently, depends upon whom you voted for in 2016.)

Second, while we won’t have flying cars anytime soon (and thank God for that, considering how badly so many people drive on the ground), we are speeding toward driverless cars. This would be great for disabled people, and the blind in particular. However, as is always the case with major leaps forward, there are a lot of questions as to what it will really mean for people.

While many cars with no one at the wheel are being tested on roads across the country, and there have been a number of accidents (all the fault of the human drivers around them, or so we’re told), this week we saw the first fatality involving one.

A woman in Arizona was hit and killed by an Uber driverless car on Monday. What exactly happened we don’t yet know, and we also don’t know what it means for the technology. 

From the start, questions dogged this idea. “What if a car knows it can’t stop in time before hitting a group of kids getting off a bus, and the only alternative is to drive off a cliff and kill the car’s occupants?” is a common “what if” scenario programmers have to grapple with.

What isn’t being discussed, at least not yet publicly, is who is ultimately responsible when something like what happened in Arizona becomes common.

You likely haven’t thought about driverless car insurance, but the current car insurance market isn’t prepared to deal with it. Who’s responsible when no one is driving? The programmer? The manufacturer? The owner of the car? 

The actuarial tables haven’t been created to deal with the prospect of millions of cars on the road with no one at the wheel, or with how to assign responsibility when everything goes according to programming and something still goes wrong.

If a semi-truck flips on black ice, is the company whose goods it was hauling responsible for any injuries or damage to others? We don’t know. And you can’t really insure against something where blame can’t yet be ascribed. 

The rules and regulations are only just now being considered, but the possibilities they’re going to have to account for are limitless. Congress, which is going to have a heavy hand in developing them (which means you can expect the process to go as smoothly as everything else it does), has no idea how to handle this. The future may be driverless, but when the future is a Congress and a stack of state and federal bureaucracies away, it may never get here.

We’re entering a time when technological advancements risk outpacing our ability to adapt to them. Some will be exploited for political gain; others run the risk of creating a regulatory and legal mess. What could go wrong when the same people who designed and implemented Obamacare are looking to involve themselves in social media and draw the map for driverless cars? If history is any guide, pretty much everything.