[Home]  [Headlines]  [Latest Articles]  [Latest Comments]  [Post]  [Sign-in]  [Mail]  [Setup]  [Help] 


Science/Tech
See other Science/Tech Articles

Title: AI Program Taught Itself How To 'Cheat' Its Human Creators
Source: [None]
URL Source: https://www.zerohedge.com/news/2019 ... f-how-cheat-its-human-creators
Published: Jan 5, 2019
Author: Tyler Durden
Post Date: 2019-01-05 09:19:52 by Horse
Keywords: None
Views: 63

When most people think about the potential risks of artificial intelligence and machine learning, their minds immediately jump to "the Terminator" - a future where robots, according to a dystopian vision once articulated by Elon Musk, would march down suburban streets, gunning down every human in their path.

But in reality, while AI does have the potential to sow chaos and discord, the manner in which this might happen is much more pedestrian, and far less exciting, than a real-life "Skynet". If anything, the risk comes from AI networks that can create fake images and videos - known in the industry as "deepfakes" - that are indistinguishable from the real thing.

Consider the widely circulated deepfake video of President Obama: it never happened - it was produced by AI software - but it is almost indistinguishable from a genuine recording.

Well, in the latest vision of AI's capabilities in the not-so-distant future, a columnist at TechCrunch highlighted a study presented at a prominent industry conference back in 2017. In the study, researchers explained how a Generative Adversarial Network - a machine-learning setup in which two neural networks are trained against each other - defied the intentions of its programmers and started producing synthetically engineered maps after being instructed to convert aerial photographs into their corresponding street maps.

The intention of the study was to create a tool that could more quickly turn satellite images into Google's street maps. But instead of learning how to transform aerial images into maps, the machine-learning agent learned how to hide the features of the aerial image inside the visual data of the street map it generated.

The intention was for the agent to be able to interpret the features of either type of map and match them to the correct features of the other. But what the agent was actually being graded on (among other things) was how closely the reconstructed aerial map matched the original, and the clarity of the street map.

So it didn’t learn how to make one from the other. It learned how to subtly encode the features of one into the noise patterns of the other. The details of the aerial map are secretly written into the actual visual data of the street map: thousands of tiny changes in color that the human eye wouldn’t notice, but that the computer can easily detect.

In fact, the computer got so good at slipping these details into the street maps that it learned to encode any aerial map into any street map! It doesn't even have to pay attention to the "real" street map: all the data needed for reconstructing the aerial photo can be superimposed harmlessly on a completely different street map, as the researchers confirmed.

The agent's actions represented an inadvertent breakthrough in the capacity for machines to create and fake images.

This practice of encoding data into images isn’t new; it’s an established science called steganography, and it’s used all the time to, say, watermark images or add metadata like camera settings. But a computer creating its own steganographic method to evade having to actually learn to perform the task at hand is rather new. (Well, the research came out last year, so it isn’t new new, but it’s pretty novel.)
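The trick described above can be sketched in a few lines. This is a minimal, illustrative example of classic least-significant-bit steganography - not the researchers' actual method, which used learned, far subtler encodings - using toy 8-bit grayscale "pixels" standing in for the street map and the hidden aerial data:

```python
# Hide one image's bits in the least-significant bits of another's pixels.
# The pixel values and names here are illustrative assumptions.

def embed(cover, secret_bits):
    """Write one secret bit into the LSB of each cover pixel."""
    assert len(secret_bits) <= len(cover)
    stego = list(cover)
    for i, bit in enumerate(secret_bits):
        stego[i] = (stego[i] & ~1) | bit   # clear the LSB, then set it to the bit
    return stego

def extract(stego, n_bits):
    """Read the hidden bits back out of the LSBs."""
    return [p & 1 for p in stego[:n_bits]]

cover = [200, 131, 54, 77, 90, 12, 255, 0]   # "street map" pixels
secret = [1, 0, 1, 1, 0, 0, 1, 0]            # bits from the "aerial" image

stego = embed(cover, secret)
# Every pixel changes by at most 1 intensity level: invisible to the eye.
assert all(abs(a - b) <= 1 for a, b in zip(cover, stego))
assert extract(stego, len(secret)) == secret  # the hidden data survives intact
```

The point mirrors the study: the carrier image looks unchanged to a human grader, yet the full payload is recoverable by the machine.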

Instead of finding a way to complete a task that was beyond its abilities, the machine learning agent developed its own way to cheat.

One could easily take this as a step in the "machines are getting smarter" narrative, but the truth is it's almost the opposite. The machine, not smart enough to do the actual difficult job of converting these sophisticated image types into each other, found a way to cheat that humans are bad at detecting. This could be avoided with more stringent evaluation of the agent's results, and no doubt the researchers went on to do that.
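What "more stringent evaluation" might mean can be sketched as well. The sketch below is a hedged illustration, with made-up pixel arrays and thresholds: an average-intensity comparison (the kind a casual visual check approximates) sees almost nothing, while a bit-level check exposes systematic tampering:

```python
# Visual difference vs. bit-level difference on toy 8-bit pixels.
# Arrays and thresholds are illustrative assumptions, not the study's metrics.

def mean_abs_diff(a, b):
    """Average per-pixel intensity difference - roughly what the eye notices."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def lsb_mismatch_rate(a, b):
    """Fraction of pixels whose least-significant bits disagree."""
    return sum((x ^ y) & 1 for x, y in zip(a, b)) / len(a)

clean    = [200, 131, 54, 77, 90, 12, 254, 3]
tampered = [201, 130, 55, 77, 91, 13, 255, 2]   # LSBs rewritten to carry data

assert mean_abs_diff(clean, tampered) <= 1.0    # visually negligible
assert lsb_mismatch_rate(clean, tampered) > 0.5 # but the bit plane gives it away
```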

And if even these sophisticated researchers nearly failed to detect this, what does that say about our ability to differentiate genuine images from those fabricated by software?
