How do you keep an AI’s behavior from becoming predictable?
Facebook hopes “deep features” keep it ahead in its arms race with abusive accounts.
A lot of neural networks are black boxes. We know they can successfully categorize things—images with cats, X-rays with cancer, and so on—but for many of them, we can't understand how they reach that conclusion. That opacity, however, doesn't stop outsiders from inferring, through trial and error, the rules the networks effectively use to sort things into categories. And that creates a problem for companies like Facebook, which hopes to use AI to get rid of accounts that abuse its terms of service.
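To make the point concrete, here is a minimal sketch (a hypothetical toy model, not anything Facebook uses) of how a network can be opaque internally yet still leak its effective rule: an outsider who can't see the weights can simply vary one input feature at a time and watch the verdict flip.

```python
import numpy as np

# Toy "black box": a tiny fixed-weight network that flags accounts.
# Its weights are opaque to an outside observer, who only sees
# inputs going in and ban/no-ban verdicts coming out.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 8))   # 3 account features -> 8 hidden units
W2 = rng.normal(size=8)        # hidden units -> a single score

def banned(features):
    """Return True if the black box flags the account."""
    hidden = np.tanh(features @ W1)
    return bool(hidden @ W2 > 0.5)

# An outsider can still probe: crank up one feature at a time
# and observe which ones trip the classifier.
base = np.zeros(3)
for i in range(3):
    probe = base.copy()
    probe[i] = 5.0
    print(f"feature {i} high -> banned: {banned(probe)}")
```

The internal representation never needs to be interpretable for this probing to work; only the input-output behavior matters.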
Most spammers and scammers create accounts in bulk, and they can easily compare the accounts that get banned against the ones that slip under the radar. Those differences let them evade automated enforcement by structuring new accounts to avoid the features that trigger bans. The end result is an arms race between the algorithms and the spammers and scammers trying to reverse-engineer their rules.
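The probing described above is cheap when the enforcement rule is fixed. As an illustration (the rule and numbers here are invented for the example), an adversary with throwaway accounts can binary-search for a rate-limit threshold and then operate just under it:

```python
# Hypothetical fixed moderation rule, assumed for illustration:
# ban any account sending more than THRESHOLD messages per hour.
THRESHOLD = 50

def gets_banned(messages_per_hour):
    return messages_per_hour > THRESHOLD

def infer_threshold(lo=0, hi=1000):
    """Adversary's view: burn test accounts at different sending
    rates and binary-search for the highest rate that survives."""
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if gets_banned(mid):
            hi = mid - 1   # this rate trips the rule
        else:
            lo = mid       # this rate slips under the radar
    return lo

safe_rate = infer_threshold()
print(safe_rate)  # → 50: the adversary now spams at exactly this rate
```

With a static rule, a handful of probes recovers the exact boundary, which is why a predictable classifier loses this arms race.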
Facebook thinks it has found a way to avoid getting involved in this arms race while still using automated tools to police its users, and this week, it decided to tell the press about it. The result was an interesting window into how to keep AI-based moderation useful in the face of adversarial behavior, an approach that could be applicable well beyond Facebook.
The problem
Facebook sees billions of active users in a month, and only a small fraction of those fall into the category that the company terms abusive: fake and compromised accounts, spammers, and