Science/Tech
Title: When Artificial Intelligence Judges a Beauty Contest, White People Win

Sometimes bias is difficult to track, but other times it's as clear as the nose on someone's face, like when it's a face the algorithm is trying to process and judge. An online beauty contest called Beauty.ai, run by Youth Laboratories (which lists big names in tech like Nvidia and Microsoft as partners and supporters on the contest website), solicited 600,000 entries by promising they would be graded by artificial intelligence. The algorithm would look at wrinkles, face symmetry, the amount of pimples and blemishes, race, and perceived age. However, race seemed to play a larger role than intended: of the 44 winners, 36 were white.

The tools used to judge the competition were powered by deep neural networks, a flavor of artificial intelligence that learns patterns from massive amounts of data. In this case, the algorithms would have been shown, for example, thousands or millions of photos of people who have wrinkles and people who don't. The algorithm slowly learns the similarities between different instances of wrinkles on faces, and can then identify them in new photos. But if the algorithm learns primarily from pictures of white people, its accuracy drops when it is confronted with a darker face. (The same goes for the other judged traits, each of which used a separate algorithm.)

While 75% of applicants were white and of European descent, according to Motherboard, that theoretically shouldn't matter. To the machine, these aren't people, but similar assortments of pixels. When pixels don't follow the expected pattern, they could be dropped as bad input or accidentally penalized by the algorithm's misjudgment. In other words, the beauty in the photos was being judged by an objective standard, but that objective standard was built from an aggregate of white people. {snip}

The answer to this problem is better data.
If the algorithms are shown a more diverse set of people, they'll be better equipped to recognize them later. {snip}

"If a system is trained on photos of people who are overwhelmingly white, it will have a harder time recognizing non-white faces," writes Kate Crawford, principal researcher at Microsoft Research New York City, in a New York Times op-ed. "So inclusivity matters, from who designs it to who sits on the company boards and which ethical perspectives are included. Otherwise, we risk constructing machine intelligence that mirrors a narrow and privileged vision of society, with its old, familiar biases and stereotypes."

Beauty.ai will hold another AI beauty contest in October, and though Zhavoronkov says that better data needs to be made available to the public, it's unclear whether the next contest will use a different data set.
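The training-data effect the article describes can be sketched with a toy model. Everything here is an illustrative assumption, not Beauty.ai's actual system: a single made-up "pixel statistic" stands in for a face photo, a learned threshold stands in for a deep neural network, and the group sizes only mimic the skew the article reports. The point is that a model fit mostly to one group can score well on that group while doing worse on an underrepresented one.

```python
import random

random.seed(0)

def make_samples(n, base, trait_shift=2.0):
    """Synthetic 1-D 'pixel statistic' samples for one hypothetical group.
    Alternating samples have the judged trait (label 1), which shifts the
    feature upward by trait_shift."""
    out = []
    for i in range(n):
        label = i % 2
        x = base + label * trait_shift + random.gauss(0, 0.3)
        out.append((x, label))
    return out

# Two hypothetical demographic groups with different feature baselines.
# Group A dominates the training set, echoing the ~75% skew in the article.
group_a = make_samples(200, base=0.0)
group_b = make_samples(200, base=1.0)

train = group_a[:150] + group_b[:10]  # heavily skewed training data

# Stand-in "model": a threshold halfway between the two class means
# as estimated from the (skewed) training set.
mean0 = sum(x for x, y in train if y == 0) / sum(1 for _, y in train if y == 0)
mean1 = sum(x for x, y in train if y == 1) / sum(1 for _, y in train if y == 1)
threshold = (mean0 + mean1) / 2

def accuracy(samples):
    """Fraction of held-out samples the threshold classifies correctly."""
    correct = sum((x > threshold) == bool(y) for x, y in samples)
    return correct / len(samples)

acc_a = accuracy(group_a[150:])  # majority group: near the training mix
acc_b = accuracy(group_b[10:])   # minority group: barely seen in training
print(f"majority-group accuracy: {acc_a:.2f}")
print(f"minority-group accuracy: {acc_b:.2f}")
```

Because the threshold is fit almost entirely to group A's feature distribution, it sits in the wrong place for group B's trait-free samples, and the minority group's accuracy drops. Rebalancing `train` with more group B samples, the "better data" fix the article points to, moves the threshold and closes the gap.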
#1. To: NeoconsNailed (#0)
I read of an AI that was sent out onto the Internet to learn as much as possible. It was shut down after a week. The AI became a 9/11 Truther and a Nazi, hurling anti-Semitic insults at those on the web who disagreed. The AI was rewritten to take out the freedom of thought.
AI is the basis for the Terminator flicks. Re-writing it can only doom it. ;)