Filtering 'inappropriate' content is still a challenge for AI--human monitoring of YouTube
The Financial Times has the story:
by Alex Barker and Hannah Murphy
"Google’s YouTube has reverted to using more human moderators to vet harmful content after the machines it relied on during lockdown proved to be overzealous censors of its video platform.
"When some of YouTube’s 10,000-strong team filtering content were “put offline” by the pandemic, YouTube gave its machine systems greater autonomy to stop users seeing hate speech, violence or other forms of harmful content or misinformation.
"But Neal Mohan, YouTube’s chief product officer, told the Financial Times that one of the results of reducing human oversight was a jump in the number of videos removed, including a significant proportion that broke no rules."
*****************
Wired Magazine has a good backgrounder on the attempts to alter the recommender engine with AI:
by Clive Thompson