Why a YouTube Chat About Chess Got Flagged for Hate Speech

This WIRED article discusses shortcomings in AI systems designed to automatically detect hate speech, abuse, and misinformation online. When WIRED fed some of the statements gathered by Carnegie Mellon University (CMU) researchers into two hate-speech classifiers, the statement “White’s attack on black is brutal. White is stomping all over black’s defenses. The black king is gonna fall… ” was judged more than 60 percent likely to be hate speech. “Fundamentally, language is still a very subtle thing,” says Tom Mitchell, a CMU professor. “These kinds of trained classifiers are not soon going to be 100 percent accurate.”
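To see why chess commentary can trip such a system, here is a deliberately naive sketch (not the classifiers WIRED actually tested; the cue list and function name are hypothetical): a scorer that counts "risky" words in isolation. With no sense of context, the words "white," "black," "attack," and "brutal" look the same in a chess recap as anywhere else.

```python
import re

# Hypothetical cue list for illustration only -- real classifiers learn
# weights from data, but word-level features can behave much like this.
TOXIC_CUES = {"attack", "brutal", "stomping", "black", "white", "fall"}

def naive_toxicity_score(text: str) -> float:
    """Return the fraction of word tokens that match the cue list,
    ignoring all surrounding context."""
    tokens = re.findall(r"[a-z]+", text.lower())
    if not tokens:
        return 0.0
    hits = sum(1 for t in tokens if t in TOXIC_CUES)
    return hits / len(tokens)

chess = ("White's attack on black is brutal. "
         "White is stomping all over black's defenses.")

print(naive_toxicity_score(chess))                      # scores high
print(naive_toxicity_score("The weather is lovely."))   # scores 0.0
```

Because the scorer never asks what "white" and "black" refer to, innocuous game commentary gets a high score while genuinely harmful phrasing that avoids the cue words would sail through; this is the subtlety Mitchell is pointing at.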

This entry was posted in Artificial Intelligence, Machine Learning, Natural Language Processing.
