
Science/Tech

Title: Researchers Get Humans to Think Like Computers
Source: [None]
URL Source: [None]
Published: Mar 26, 2019
Author: staff
Post Date: 2019-03-26 23:58:19 by Tatarewicz
Keywords: None
Views: 71

TEHRAN (FNA)- Computers, like those that power self-driving cars, can be tricked into mistaking random scribbles for trains, fences and even school buses. People aren't supposed to be able to see how those images trip up computers, but in a new study, Johns Hopkins University researchers show that most people actually can.

The findings suggest modern computers may not be as different from humans as we think, and demonstrate how advances in artificial intelligence continue to narrow the gap between the visual abilities of people and machines. The research appears today in the journal Nature Communications.

"Most of the time, research in our field is about getting computers to think like people," says senior author Chaz Firestone, an assistant professor in Johns Hopkins' Department of Psychological and Brain Sciences. "Our project does the opposite -- we're asking whether people can think like computers."

What's easy for humans is often hard for computers. Artificial intelligence systems have long been better than people at doing math or remembering large quantities of information, but for decades humans have had the edge at recognizing everyday objects such as dogs, cats, tables or chairs. Recently, however, "neural networks" that mimic the brain have approached the human ability to identify objects, leading to technological advances that support self-driving cars and facial recognition programs and help physicians spot abnormalities in radiological scans.

But even with these technological advances, there's a critical blind spot: It's possible to purposely make images that neural networks cannot correctly see. These images, called "adversarial" or "fooling" images, are a big problem: Not only could they be exploited by hackers and cause security risks, but they also suggest that humans and machines are actually seeing images very differently.

In some cases, all it takes for a computer to call an apple a car is reconfiguring a pixel or two. In other cases, machines see armadillos and bagels in what looks like meaningless television static.
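
The article doesn't describe how such fooling images are constructed. One widely used technique for building adversarial images is the fast gradient sign method, sketched below in PyTorch; the choice of model, the epsilon value, and the tensor shapes are illustrative assumptions, not details from the Johns Hopkins study.

# Minimal sketch of one common way to build an adversarial ("fooling") image:
# the fast gradient sign method (FGSM). Illustrative only; the study's images
# may have been produced by a different procedure.
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

def fgsm_attack(image, true_label, epsilon=0.03):
    """Nudge each pixel slightly in the direction that increases the loss,
    so the classifier's answer changes while the image looks unchanged."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Step each pixel by +/- epsilon along the sign of its gradient.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()

# Usage: x is a (1, 3, 224, 224) tensor in [0, 1], y a (1,) tensor holding the
# true class index; model(fgsm_attack(x, y)).argmax() often differs from y.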

"These machines seem to be misidentifying objects in ways humans never would," Firestone says. "But surprisingly, nobody has really tested this. How do we know people can't see what the computers did?"

To test this, Firestone and lead author Zhenglong Zhou, a Johns Hopkins senior majoring in cognitive science, essentially asked people to "think like a machine." Machines have only a relatively small vocabulary for naming images. So, Firestone and Zhou showed people dozens of fooling images that had already tricked computers, and gave people the same kinds of labeling options that the machine had. In particular, they asked people which of two options the computer decided the object was -- one being the computer's real conclusion and the other a random answer. (Was the blob pictured a bagel or a pinwheel?) It turns out, people strongly agreed with the conclusions of the computers.
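
As a rough illustration of that forced-choice procedure, the sketch below mocks up a single trial in Python. The function and file names are hypothetical; the actual study used its own stimulus-presentation setup, and this is only meant to show the logic of the task.

# One mock trial: show a fooling image and ask which of two labels the
# computer actually assigned. Returns True if the participant picked the
# machine's real answer rather than the randomly drawn foil.
import random

def run_trial(fooling_image_path, machine_label, foil_label):
    options = [machine_label, foil_label]
    random.shuffle(options)  # randomize which answer appears first
    print(f"Image: {fooling_image_path}")
    print(f"1) {options[0]}    2) {options[1]}")
    choice = input("Which label did the computer give? (1/2): ").strip()
    picked = options[0] if choice == "1" else options[1]
    return picked == machine_label

# Example: was the static-like blob labeled "bagel" (the machine's answer)
# or "pinwheel" (a random alternative)?
# agreed = run_trial("blob_017.png", "bagel", "pinwheel")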

People chose the same answer as computers 75 percent of the time. Perhaps even more remarkably, 98 percent of people tended to answer like the computers did.

Next, researchers upped the ante by giving people a choice between the computer's favorite answer and its next-best guess. (Was the blob pictured a bagel or a pretzel?) People again validated the computer's choices, with 91 percent of those tested agreeing with the machine's first choice.

Even when the researchers had people choose among 48 options for what the object was, and even when the pictures resembled television static, an overwhelming proportion of subjects chose what the machine chose, at rates well above random chance. A total of 1,800 subjects were tested across the various experiments.
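
To see why those agreement rates matter, compare them with guessing baselines: 1/2 for the two-option trials and 1/48 for the 48-option trials. The sketch below runs a one-sided binomial test against those baselines; the trial counts are made-up placeholders, since the article reports only percentages and the total number of subjects.

# Check whether an observed agreement rate beats chance for a given number
# of answer options. Only the chance baselines come from the article; the
# counts below are hypothetical.
from scipy.stats import binomtest

def above_chance(agreements, trials, n_options):
    result = binomtest(agreements, trials, p=1.0 / n_options, alternative="greater")
    return result.pvalue

# 75 percent agreement over a hypothetical 400 two-choice trials:
print(above_chance(300, 400, n_options=2))    # far below 0.05: well above 50% chance
# Even a much lower raw rate clears the 1-in-48 baseline of the 48-option task:
print(above_chance(80, 400, n_options=48))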

"We found if you put a person in the same circumstance as a computer, suddenly the humans tend to agree with the machines," Firestone says. "This is still a problem for artificial intelligence, but it's not like the computer is saying something completely unlike what a human would say."
