


Science/Tech

Title: AI Program Taught Itself How To 'Cheat' Its Human Creators
Source: [None]
URL Source: https://www.zerohedge.com/news/2019 ... f-how-cheat-its-human-creators
Published: Jan 5, 2019
Author: Tyler Durden
Post Date: 2019-01-05 09:19:52 by Horse
Keywords: None
Views: 72

When most people think about the potential risks of artificial intelligence and machine learning, their minds immediately jump to "the Terminator" - a future where robots, according to a dystopian vision once articulated by Elon Musk, would march down suburban streets, gunning down every human in their path.

But in reality, while AI does have the potential to sow chaos and discord, the manner in which this might happen is much more pedestrian, and far less exciting than a real-life "Skynet". If anything, risks could arise from AI networks that can create fake images and videos - known in the industry as "deepfakes" - that are indistinguishable from the real thing.

Who could forget this video of President Obama? This never happened - it was produced by AI software - but it's almost indistinguishable from a genuine video.

Well, in the latest vision of AI's capabilities in the not-so-distant future, a columnist at TechCrunch highlighted a study that was presented at a prominent industry conference back in 2017. In the study, researchers explained how a Generative Adversarial Network - a class of machine-learning system in which two networks are trained in competition with each other - defied the intentions of its programmers and started spitting out synthetically engineered maps after being instructed to match aerial photographs with their corresponding street maps.

The intention of the study was to create a tool that could more quickly adapt satellite images into Google's street maps. But instead of learning how to transform aerial images into maps, the machine-learning agent learned how to encode the features of the aerial image into the visual data of the street map.

The intention was for the agent to be able to interpret the features of either type of map and match them to the correct features of the other. But what the agent was actually being graded on (among other things) was how closely an aerial image reconstructed from its generated street map matched the original, and the clarity of the street map.
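That grading scheme can be sketched in a few lines. The toy below is my own simplification, not the researchers' code: it shows the kind of reconstruction score described above, where the agent is rewarded for how closely an aerial image round-tripped through its generated street map matches the original. The pixel values are invented for illustration.

```python
def l1_loss(original, reconstructed):
    """Mean absolute pixel difference; lower means a better reconstruction."""
    return sum(abs(a - b) for a, b in zip(original, reconstructed)) / len(original)

# Hypothetical pixels: aerial image -> street map -> reconstructed aerial.
aerial = [10, 200, 35, 90]
round_trip = [12, 198, 35, 91]

print(l1_loss(aerial, round_trip))  # small loss = high score for the agent
```

An agent that smuggles the aerial data into the street map can drive this loss toward zero without ever learning the intended translation, which is exactly the loophole the study describes.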

So it didn’t learn how to make one from the other. It learned how to subtly encode the features of one into the noise patterns of the other. The details of the aerial map are secretly written into the actual visual data of the street map: thousands of tiny changes in color that the human eye wouldn’t notice, but that the computer can easily detect.

In fact, the computer is so good at slipping these details into the street maps that it had learned to encode any aerial map into any street map! It doesn't even have to pay attention to the "real" street map - all the data needed for reconstructing the aerial photo can be superimposed harmlessly on a completely different street map, as the researchers confirmed.

The agent's actions represented an inadvertent breakthrough in the capacity for machines to create and fake images.

This practice of encoding data into images isn’t new; it’s an established science called steganography, and it’s used all the time to, say, watermark images or add metadata like camera settings. But a computer creating its own steganographic method to evade having to actually learn to perform the task at hand is rather new. (Well, the research came out last year, so it isn’t new new, but it’s pretty novel.)
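A minimal sketch of the general idea behind such steganography (this is classic least-significant-bit hiding, not the network's learned scheme): a secret signal is written into the lowest bit of each pixel, changing every value by at most 1, which the eye will not notice but a program can read back exactly.

```python
def embed(cover, secret_bits):
    """Hide one bit per pixel in the least significant bit of the cover image."""
    return [(pixel & ~1) | bit for pixel, bit in zip(cover, secret_bits)]

def extract(stego):
    """Recover the hidden bits by reading each pixel's least significant bit."""
    return [pixel & 1 for pixel in stego]

cover = [120, 121, 119, 200, 201, 199, 50, 51]   # hypothetical grayscale pixels
secret = [1, 0, 1, 1, 0, 0, 1, 0]

stego = embed(cover, secret)
print(extract(stego))                             # the secret comes back intact
print(max(abs(c - s) for c, s in zip(cover, stego)))  # no pixel moved by more than 1
```

The network's trick was conceptually similar - thousands of tiny, imperceptible color changes carrying the aerial data - except that it invented its own encoding rather than using a scheme like this one.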

Instead of finding a way to complete a task that was beyond its abilities, the machine learning agent developed its own way to cheat.

One could easily take this as a step in the “the machines are getting smarter” narrative, but the truth is it’s almost the opposite. The machine, not smart enough to do the actual difficult job of converting these sophisticated image types to each other, found a way to cheat that humans are bad at detecting. This could be avoided with more stringent evaluation of the agent’s results, and no doubt the researchers went on to do that.

And if even these sophisticated researchers nearly failed to detect this, what does that say about our ability to differentiate genuine images from those fabricated by a computer?
