9th December 2021, 08:58 | #1
[M] Reviewer
Join Date: May 2010 | Location: Romania | Posts: 153,541
Meta's prototype moderation AI only needs a few examples of bad behavior to take action

Moderating content on today's internet is akin to a round of Whack-A-Mole, with human moderators continually forced to react in real time to changing trends, such as vaccine mis- and disinformation or intentional bad actors probing for ways around established personal conduct policies. Machine learning systems can help alleviate some of this burden by automating the policy enforcement process; however, modern AI systems often require months of lead time to properly train and deploy (time mostly spent collecting and annotating the thousands, if not millions, of necessary examples). To shorten that response time to a matter of weeks rather than months, Meta's AI research group (formerly FAIR) has developed a more generalized technology, called Few-Shot Learner (FSL), that requires just a handful of specific examples in order to respond to new and emerging forms of malicious content.

https://www.engadget.com/metas-proto...0.html?src=rss
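To make the few-shot idea concrete, here is a minimal sketch of classifying posts against a policy category from only a handful of labeled examples, using nearest-centroid matching over text embeddings. This is not Meta's actual FSL system; the encoder model, category names, example texts, and similarity threshold are all illustrative assumptions.

```python
# Minimal few-shot text classification sketch (NOT Meta's FSL):
# embed a handful of labeled examples, average them into one
# centroid per category, and match new posts by cosine similarity.
import numpy as np
from sentence_transformers import SentenceTransformer  # pip install sentence-transformers

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed general-purpose encoder

# A "handful" of labeled examples per category stands in for the
# months-long annotation effort described in the article.
few_shot_examples = {
    "vaccine_misinfo": [
        "Vaccines contain microchips that track you.",
        "The vaccine permanently rewrites your DNA.",
    ],
    "benign": [
        "I got my booster yesterday, my arm is a bit sore.",
        "Where can I find the clinic's opening hours?",
    ],
}

# One centroid (mean embedding) per category.
centroids = {
    label: model.encode(examples).mean(axis=0)
    for label, examples in few_shot_examples.items()
}

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def classify(text, threshold=0.4):
    """Return the nearest category, or 'uncertain' if nothing is close enough."""
    vec = model.encode([text])[0]
    scores = {label: cosine(vec, c) for label, c in centroids.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] >= threshold else "uncertain"

print(classify("The shot secretly alters your genes."))  # likely "vaccine_misinfo"
```

Adapting this toy setup to a new trend only means adding a few fresh example strings and recomputing the centroids, which is the appeal of the few-shot approach over retraining a dedicated classifier on millions of annotations.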