Why a YouTube Chat About Chess Got Flagged for Hate Speech

This WIRED article examines the shortcomings of AI systems designed to automatically detect hate speech, abuse, and misinformation online. When WIRED fed some of the statements gathered by the CMU researchers into two hate-speech classifiers, the statement “White’s attack on black is brutal. White is stomping all over black’s defenses. The black king is gonna fall… ” was judged more than 60 percent likely to be hate speech. “Fundamentally, language is still a very subtle thing,” says Tom Mitchell, a CMU professor. “These kinds of trained classifiers are not soon going to be 100 percent accurate.” Link
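The failure mode described above can be illustrated with a toy sketch (this is an invented example, not any real production classifier): a naive bag-of-words scorer has no notion of context, so the words “black” and “white” look the same whether they refer to people or to chess pieces. The lexicons and threshold below are assumptions made up for this illustration.

```python
import re

# Invented lexicons for this sketch only.
HOSTILE_TERMS = {"attack", "brutal", "stomping", "destroy", "fall"}
GROUP_TERMS = {"black", "white"}  # also ordinary chess vocabulary

def naive_hate_score(text: str) -> float:
    """Score text by co-occurrence of hostile and group terms, context-free."""
    tokens = re.findall(r"[a-z']+", text.lower())
    if not tokens:
        return 0.0
    hostile = sum(t in HOSTILE_TERMS for t in tokens)
    group = sum(t in GROUP_TERMS for t in tokens)
    # Score rises when hostile language co-occurs with group terms,
    # regardless of whether "black"/"white" refer to people or chess pieces.
    return min(1.0, (hostile + group) / len(tokens) * 2)

chess = "White's attack on black is brutal. White is stomping all over black's defenses."
print(naive_hate_score(chess) > 0.6)  # → True: innocent chess talk is flagged
```

A real trained classifier is far more sophisticated, but as the article shows, it can still stumble on the same context problem when domain vocabulary overlaps with the terms it learned to associate with abuse.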

This entry was posted in Artificial Intelligence, Machine Learning, Natural Language Processing.
