
Facebook AI Equated Black Men With 'Primates', Prompting Another Toothless Apology
7 Sep, 2021 / 06:40 am / OMNES Media LLC

Source: https://in.mashable.com/


Some Facebook users who recently watched a Daily Mail video depicting Black men reported seeing a label from Facebook asking if they were interested in watching more videos about "primates."

The label appeared in bold text under the video, stating "Keep seeing videos about Primates?" next to "Yes" and "Dismiss" buttons that users could click to answer the prompt. It's part of an AI-powered Facebook feature that infers users' personal interests in order to surface relevant content in their News Feed.
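To make the mechanics concrete, the sketch below imagines how such a prompt could be wired up: a classifier scores a video against topic labels, and once a score clears a confidence threshold, the top label is inserted verbatim into a user-facing prompt. Every name here (classify_topics, InterestProfile, the threshold, the scores) is hypothetical and not Facebook's actual system; the point is simply to show how a single bad label can flow straight into the UI.

```python
# Hypothetical sketch of a topic-interest prompt pipeline.
# None of this is Facebook's real code; names and values are invented
# to illustrate the failure mode described in the article.
from dataclasses import dataclass, field
from typing import Dict, Optional

PROMPT_THRESHOLD = 0.8  # assumed confidence needed to surface the prompt


@dataclass
class InterestProfile:
    """Per-user topic weights used to rank feed content (illustrative)."""
    weights: Dict[str, float] = field(default_factory=dict)

    def adjust(self, topic: str, liked: bool) -> None:
        # "Yes" boosts the topic's weight; "Dismiss" suppresses it.
        delta = 1.0 if liked else -1.0
        self.weights[topic] = self.weights.get(topic, 0.0) + delta


def classify_topics(video_id: str) -> Dict[str, float]:
    """Stand-in for a vision model scoring a video against topic labels.

    A real system would run a trained classifier; the hard-coded result
    mirrors the misfire reported here, where footage of Black men was
    scored as "Primates".
    """
    return {"Primates": 0.91, "News": 0.55}


def maybe_prompt(video_id: str) -> Optional[str]:
    scores = classify_topics(video_id)
    topic, score = max(scores.items(), key=lambda kv: kv[1])
    if score >= PROMPT_THRESHOLD:
        # The misclassification becomes user-visible here: the prompt
        # trusts the model's top label without any further checks.
        return f"Keep seeing videos about {topic}?"
    return None


profile = InterestProfile()
prompt = maybe_prompt("daily-mail-clip")
if prompt:
    print(prompt)                             # "Keep seeing videos about Primates?"
    profile.adjust("Primates", liked=False)   # user clicks "Dismiss"
```

Note that in this toy version nothing sits between the classifier's output and the text shown to users, which is exactly the gap the incident exposed.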

The video in question showed several instances of white men calling the police on Black men and the events that followed, and had nothing to do with primates. Facebook issued an apology, telling The New York Times that it was an "unacceptable error" and that it was looking into ways to prevent this from happening in the future.

The label came to Facebook's attention when Darci Groves, a former Facebook content design manager, posted it to a product feedback forum for current and former Facebook employees and shared it on Twitter. Groves said that a friend came across the label, took a screenshot, and shared it with her.
The offensive label feels particularly unacceptable given the vast database of user-uploaded photos Facebook has access to, which it could presumably use to train its recognition tools properly. While AI can always make mistakes, it is the company's responsibility to train its algorithms well, and this misstep cannot be blamed on a lack of resources.

On top of the company's mishandling of past racial justice issues, Facebook's lack of a transparent plan to address its AI problem continues to sow distrust. The apology was needed, but with no apparent actionable steps beyond disabling the feature and a vague promise to "prevent this from happening again," it doesn't cut it.

The approach is especially lackluster following Facebook's recent move to cut off researchers' access to tools and accounts used to explore user data and ad activity on the platform, citing a possible violation of a settlement with the Federal Trade Commission. The FTC has directly disputed that defense.

The combination of a vague response and reduced access to the facts makes it hard to trust that Facebook will handle this AI gaffe with any immediacy or real results. If Facebook is committed to building and using AI tools in an inclusive manner, it needs to spell out exactly how it plans to fix this issue, and it needs to do so soon.