Zuckerberg: I'm Confident Facebook's Artificial Intelligence Will Soon Allow Us to Block 'Hate Speech' Before It's Posted

Facebook founder and CEO Mark Zuckerberg endured two grueling days of testimony on Capitol Hill this week, during which he was peppered with questions from members of both the House and Senate -- some of whom did not evince a strong grasp of how social media works.  His testimony was deeply apologetic and deferential, and was generally well-received by many observers, as well as by the markets (even as fact-checkers picked apart some of his assertions).  He was dragged to DC by a freakout over news stories about the Trump campaign-linked data firm Cambridge Analytica exploiting Facebook-culled data during the 2016 election.  While some elements of that controversy deserve scrutiny, and while the use of Big Data can certainly be very creepy, much of the current hand-wringing does appear attributable to objections over "bad" political actors breaking, bending, or taking advantage of Facebook's rules -- as opposed to "good" actors.  Good for Obama? Brilliant and innovative.  Good for Trump? Problematic and scary.  Not every detail is exactly comparable, of course, but I don't think it's a stretch to reach that overall conclusion about the tone and volume of the media coverage.

As massive tech companies play an ever-larger role in American life, including harvesting truly enormous amounts of data on individual users, Washington must strike a balance between protecting consumers' rights and allowing the free market to flourish.  I'm as concerned as the next guy about the potential for Orwellian abuses from companies like Google, Facebook, and Amazon, but I'm also highly suspicious of big government's ability to effectively hold them in check without making a mess.  Ignorance plus power does not inspire confidence:


"Now do guns," quipped a conservative writer in response, making a useful point.  Anyway, as an advocate for the free and open exchange of ideas, one of my biggest concerns about Silicon Valley is its temptation and ability to impose its left-wing ethos on the rest of the country through viewpoint discrimination.  This fear is not unfounded.  Zuckerberg's testimony was mostly rehearsed and calibrated, but one comment he made raised some eyebrows.  Speaking about advances in technology, he boasted of 'optimism' that Facebook's algorithms and tools may eventually be able to identify and shut down "hate speech" before it's even posted:

Facebook CEO Mark Zuckerberg predicted Tuesday it will be five to 10 years before Facebook has technological tools in place to flag and remove hate speech from the platform before it is posted. “That’s a success in terms of rolling out [artificial intelligence] tools that can proactively police and enforce safety across the community,” Zuckerberg said. “Hate speech, I am optimistic that over a five to 10 year period we'll have AI tools that can get into some of the nuances, the linguistic nuances of different types of content to be more accurate in flagging things for our systems, but today we're just not there on that.”

Now, in fairness, part of the context of this answer was dealing with ISIS and other terrorist propaganda -- but Sen. Ben Sasse of Nebraska pressed Zuckerberg on who gets to define, and what constitutes, "hate speech."  Sasse referenced the growing attitude on many college campuses that offensive or hurtful speech is "hateful," and that such speech can be tantamount to violence:

Senator Ben Sasse (R., Neb.) said he worried about policies that are “less than First Amendment full-spirit embracing in my view.” “I worry about a world where when you go from violent groups to hate speech in a hurry,” Sasse told Zuckerberg. “Facebook may decide it needs to police a whole bunch of speech that I think America may be better off not having policed by one company that has a really big and powerful platform.” “Can you define hate speech?” he asked. Zuckerberg said it would be hard to pin down a specific definition, and mentioned speech “calling for violence” as something Facebook does not tolerate. “I’m worried about the psychological categories around speech,” Sasse interjected. “We see this happening on college campuses all across the country. It’s dangerous.”

Zuckerberg expressed his belief that Facebook must remain a platform for virtually all political ideas, but it's essential to ensure that Big Tech doesn't put its thumb on the cultural scale under the pretext of "safety." Recently, the pro-Trump vloggers known as "Diamond and Silk" said their Facebook community had been censored by the company:


A Facebook spokesperson told Fox News, "the message they received last week was inaccurate and not reflective of the way we communicate with our community and the people who run Pages on our platform."  What's worrying is that someone, or a whole team, inside Facebook decided to flag and punish Diamond and Silk as "unsafe."  You may find their schtick ridiculous or stupid -- I think they're brash, sassy, and sometimes funny, even though I certainly part ways with their Trump worship -- but to categorize them as dangerous in some way is ludicrous.  And it shows how political bias can creep into, or even dictate, these decisions.  The data privacy element of overseeing major tech companies is an important one, but it's hardly the only issue that needs attention.  Not only should elected officials ask tough questions (here's Marsha Blackburn taking Zuckerberg to task over the Diamond and Silk episode), but companies like Facebook and Google need to make sure that internal safeguards exist to prevent insular groupthink from imposing rigidity of thought onto users.  A wide spectrum of ideas needs to be represented inside those companies.  Diversity of thought at the decision-making level can help ensure that diversity of thought is protected at the grassroots level.  I'll leave you with this observation -- the analogy isn't perfect -- about some leftists' views on corporate speech: