The Year of the Algorithm: AI Potpourri, Part 2

“We have to grade indecent images for different sentencing, and that has to be done by human beings right now, but machine learning takes that away from humans,” he said.

“You can imagine that doing that for year-on-year is very disturbing.”

But as the next story shows, these AI tools are not advanced enough to replace human content moderators.

[WSJ] The Worst Job in Technology: Staring at Human Depravity to Keep It Off Facebook

Humans, still, are the first line of defense. Facebook, YouTube and other companies are racing to develop algorithms and artificial-intelligence tools, but much of that technology is years away from replacing people, says Eric Gilbert, a computer scientist at the University of Michigan. 
Earlier this month, after a public outcry over disturbing and potentially exploitative YouTube content involving children, YouTube CEO Susan Wojcicki said the company would increase its number of human moderators to more than 10,000 in 2018, in an attempt to rein in unsavory content on the web’s biggest video platform.

But guidelines and screenshots obtained by BuzzFeed News, as well as interviews with 10 current and former “raters” — contract workers who train YouTube’s search algorithms — offer insight into the flaws in YouTube’s system.

But algorithms, unlike humans, are susceptible to a specific type of problem called an “adversarial example.” These are specially designed optical illusions that fool computers into doing things like mistaking a picture of a panda for one of a gibbon. They can be images, sounds, or paragraphs of text. Think of them as hallucinations for algorithms.
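
To make the idea concrete, here is a minimal sketch of the fast gradient sign method (FGSM), one standard recipe for crafting adversarial examples like the panda-to-gibbon illusion. It assumes PyTorch is installed, and the model, image, and label below are stand-ins; any differentiable classifier is vulnerable to the same nudge.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Stand-in classifier; a real attack would target a trained model.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
model.eval()

def fgsm_attack(image, label, epsilon=0.01):
    """Fast gradient sign method: shift every pixel by +/- epsilon
    in the direction that most increases the classification loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()  # keep pixels in the valid [0, 1] range

x = torch.rand(1, 3, 32, 32)  # hypothetical 32x32 RGB image
y = torch.tensor([3])         # its correct class
x_adv = fgsm_attack(x, y)
print(model(x).argmax(1), model(x_adv).argmax(1))  # the two predictions can differ
```

The perturbation is bounded by epsilon per pixel, which is why the altered image looks unchanged to a human even as the model’s prediction can flip.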
From the ridiculous to the chilling, algorithmic bias — social prejudices embedded in the AIs that play an increasingly large role in society — has been exposed for years. But it seems in 2017 we reached a tipping point in public awareness.

The New York City Council recently passed what may be the first AI transparency bill in the US, requiring government bodies to make public the algorithms behind their decision-making. Researchers, along with the ACLU, have launched new institutes to study AI prejudice, while Cathy O’Neil, author of Weapons of Math Destruction, launched an algorithmic auditing consultancy called ORCAA.
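
The auditing idea lends itself to a small illustration. The sketch below computes the disparate-impact ratio, one common fairness check an auditor might run; the loan-decision data, group labels, and the 0.8 “four-fifths” threshold are illustrative assumptions, not ORCAA’s actual methodology.

```python
from collections import defaultdict

def disparate_impact(decisions):
    """decisions: iterable of (group, approved) pairs.
    Returns the lowest-to-highest approval-rate ratio, plus per-group rates."""
    approved, total = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        total[group] += 1
        approved[group] += int(ok)
    rates = {g: approved[g] / total[g] for g in total}
    return min(rates.values()) / max(rates.values()), rates

# Hypothetical decisions tagged with an applicant attribute.
sample = ([("A", True)] * 60 + [("A", False)] * 40 +
          [("B", True)] * 30 + [("B", False)] * 70)
ratio, rates = disparate_impact(sample)
print(rates)  # {'A': 0.6, 'B': 0.3}
print(ratio)  # 0.5, below the 0.8 rule of thumb, so the system would be flagged for review
```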