Facebook needs to learn the value of a human touch in 2017

In 2016, the tech industry ran full speed toward automation. This was the year of the chatbot, when companies handed customer service over to automated responses. The battle between AI-powered virtual assistants began and will play out over the next few years. Uber introduced a fleet of self-driving cars, and algorithms reigned supreme in both the online and offline worlds. All these innovations have their place, but as Steve Griffiths says, technology should only exist to make life better. In 2016, Facebook showed how dangerous it can be to build technology that forgets about people.

Algorithms have their limits

It all started when Facebook laid off their entire editorial team in an effort to eliminate bias in trending news stories. The intentions were noble (possibly?), but the results were problematic. The social media giant cut ties with their human editors in August, and almost immediately the platform began featuring fake news stories among actual trending news, some of which had a surprisingly profound effect on the 2016 presidential election. Human engineers still oversaw the trending algorithm to weed out repetitive or non-newsworthy stories, but the machines were clearly not yet equipped to tell real information from misinformation. In this instance, a human touch was needed to guide the algorithm and ensure it made responsible decisions.

The problem becomes even more damning of Facebook’s internal decision-making when you consider that LinkedIn, another social platform that traffics in written content and news stories, doesn’t have this problem. Executive editor Daniel Roth has credited the community for self-policing and holding peers to a higher standard, one focused on the business side of the platform. It cannot be ignored, however, that LinkedIn still employs human editors who are “tasked with ‘creating, cultivating and curating,’” something Facebook clearly lacks now that they are relying almost entirely on an algorithm to do the work for them.

We can point fingers all we want, at whoever we want, but much of the blame for fake news and the spread of propaganda on Facebook lies at the feet of Mark Zuckerberg himself. Despite growing discussion of Facebook’s role as a publisher, Zuckerberg has denied that the company has any responsibility to curate what gets posted there. This has been his way of avoiding the sticky situation of monitoring and policing freedom of speech. That’s all legitimate and understandable; however, when he famously responded to fake news concerns by saying Facebook had no interest in being an “arbiter of truth,” it rang as disinterest in taking responsibility for the monster he had a hand in creating.

Now, after months of pressure and bad press, Facebook is finally taking the issue seriously. Instead of appointing an internal team, they have decided to partner with third-party fact-checking organizations that adhere to international fact-checking standards, including PolitiFact, Snopes, and the Washington Post. They are also putting power in users’ hands by making it easier to report any story they believe to be fake. While this is certainly a step in the right direction, it still relies far too heavily on outside forces to shape the content that appears on Facebook’s own platform. It feels like a system with too many moving parts, one that could be simplified by Facebook taking a more active and direct role.

Tech can be dangerous on its own

Facebook’s next problem proves that these corrective measures are not enough. Just as one crisis has finally been addressed, another is pushing its way onto the horizon, and this one is also Facebook’s fault. The social platform introduced a Safety Check feature that allows people close to dangerous incidents to mark themselves as safe on Facebook. It also offers information about the incident by automatically feeding relevant news articles directly to users. In the past, the feature had been activated for natural disasters, like hurricanes and earthquakes, as well as during the 2015 terror attacks in Paris. It was a tool designed to do some good in the world and reassure friends and loved ones in times of danger.

When Safety Check launched in 2014, it was only activated when Facebook employees deemed it necessary, which happened 39 times over two years. However, the company was criticized over how it decided which events were worthy of activation and which were not. Again, instead of taking responsibility for the technology they developed, Facebook turned to an algorithm that sifts through news stories and user mentions to find trending incidents, with a third party brought in to verify that those stories are legitimate. Since the change in June, the alert has been activated 350 times, sometimes for incidents that were wildly overblown, and sometimes for incidents that never happened at all.

When Facebook stopped using human beings to activate the alert, things turned ugly, and fake news threw a wrench into their plans. The algorithm was erroneously triggered when a fake news story turned a protest in Thailand involving firecrackers into an explosion. The third party verified that a protest was taking place nearby, but when Safety Check automatically generated a page of “related” news articles, it made things worse by including the fake story along with coverage of an unrelated bombing from the year before. Without any Facebook employees directly involved, a small incident was transformed into a terrifying event. The New York Times has an excellent breakdown that shows just how susceptible this system is to fake news.
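To make that failure mode concrete, here is a minimal, purely hypothetical sketch of a pipeline like the one described. Nothing here reflects Facebook’s actual code; the names (should_activate, related_articles, the mention threshold) are invented for illustration. The point is structural: activation hinges on chatter volume plus a single confirmation that something happened, while the “related” feed is filled by keyword match with no verification step at all.

```python
# Hypothetical illustration only -- not Facebook's actual system.
# Models the pipeline described above: a trend-detection trigger, one
# third-party check of the triggering incident, and an automatically
# generated "related articles" page that is never fact-checked.

from dataclasses import dataclass
from typing import List


@dataclass
class Story:
    headline: str
    verified: bool  # has a fact-checker confirmed this specific story?


def should_activate(mention_count: int, incident_confirmed: bool,
                    threshold: int = 1000) -> bool:
    """Activate the alert when chatter spikes and one source confirms
    that *something* happened -- not that every circulating story is true."""
    return mention_count >= threshold and incident_confirmed


def related_articles(all_stories: List[Story], keyword: str) -> List[Story]:
    """Populate the incident page by keyword match alone.
    Nothing here consults `verified`, so fake stories ride along."""
    return [s for s in all_stories if keyword.lower() in s.headline.lower()]


if __name__ == "__main__":
    stories = [
        Story("Protesters set off firecrackers in Bangkok", verified=True),
        Story("Massive explosion rocks Bangkok", verified=False),            # fake
        Story("Remembering last year's Bangkok shrine bombing", verified=True),  # old, unrelated
    ]
    if should_activate(mention_count=5000, incident_confirmed=True):
        for story in related_articles(stories, "Bangkok"):
            # The fake and outdated stories appear right alongside the real one.
            print(story.headline)
```

In this toy version, a single human reviewer between activation and publication would catch the unverified headline; without one, the fake story is amplified the moment the alert fires.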

What we learned this year

Our blind trust in algorithms proved to be one of the tech industry’s biggest mistakes this year, a year in which the industry forgot about people far too often. The tech world needs to learn that algorithms, and technology as a whole, require human guidance as part of the equation for success. Algorithms are still programmed by humans, which means they can be faulty, limited, and subject to bias, even though we tend to see computers as inherently objective.

The tech world moves at the speed of light, but moving so fast doesn’t always leave room to understand the repercussions. We have to find a better way to balance speed with quality. As they say in emergency response, “bad information is worse than no information,” and that has rarely been more true than it was here. Facebook, as a leading voice in the tech world, needs to take responsibility for hasty actions that spread disinformation and put real people in danger. In 2016, they proved that less human intervention isn’t always a good thing, because there is no replacement for human judgement.