Title: Bypassing the Rapture to immortality

On Feb. 15, 1965, a diffident but self-possessed high school student named Raymond Kurzweil appeared as a guest on a game show called I've Got a Secret. He was introduced by the host, Steve Allen, then he played a short musical composition on a piano. The idea was that Kurzweil was hiding an unusual fact and the panelists (they included a comedian and a former Miss America) had to guess what it was. On the show (see the clip on YouTube), the beauty queen did a good job of grilling Kurzweil, but the comedian got the win: the music was composed by a computer. Kurzweil got $200.

Kurzweil then demonstrated the computer, which he had built himself: a desk-size affair with loudly clacking relays, hooked up to a typewriter. The panelists were pretty blasé about it; they were more impressed by Kurzweil's age than by anything he'd actually done. They were ready to move on to Mrs. Chester Loney of Rough and Ready, Calif., whose secret was that she'd been President Lyndon Johnson's first-grade teacher. But Kurzweil would spend much of the rest of his career working out what his demonstration meant.

Creating a work of art is one of those activities we reserve for humans and humans only. It's an act of self-expression; you're not supposed to be able to do it if you don't have a self. To see creativity, the exclusive domain of humans, usurped by a computer built by a 17-year-old is to watch a line blur that cannot be unblurred: the line between organic intelligence and artificial intelligence. That was Kurzweil's real secret, and back in 1965 nobody guessed it. Maybe not even him, not yet. But now, 46 years later, Kurzweil believes that we're approaching a moment when computers will become intelligent, and not just intelligent but more intelligent than humans.
When that happens, humanity (our bodies, our minds, our civilization) will be completely and irreversibly transformed. He believes that this moment is not only inevitable but imminent. According to his calculations, the end of human civilization as we know it is about 35 years away.

Computers are getting faster. Everybody knows that. Also, computers are getting faster faster; that is, the rate at which they're getting faster is increasing. True? True. So if computers are getting so much faster, so incredibly fast, there might conceivably come a moment when they are capable of something comparable to human intelligence. Artificial intelligence. All that horsepower could be put in the service of emulating whatever it is our brains are doing when they create consciousness: not just doing arithmetic very quickly or composing piano music but also driving cars, writing books, making ethical decisions, appreciating fancy paintings, making witty observations at cocktail parties.

If you can swallow that idea, and Kurzweil and a lot of other very smart people can, then all bets are off. From that point on, there's no reason to think computers would stop getting more powerful. They would keep on developing until they were far more intelligent than we are. Their rate of development would also continue to increase, because they would take over their own development from their slower-thinking human creators. Imagine a computer scientist that was itself a super-intelligent computer. It would work incredibly quickly. It could draw on huge amounts of data effortlessly. It wouldn't even take breaks to play Farmville. Probably.

It's impossible to predict the behavior of these smarter-than-human intelligences with which (with whom?) we might one day share the planet, because if you could, you'd be as smart as they would be. But there are a lot of theories about it.
Maybe we'll merge with them to become super-intelligent cyborgs, using computers to extend our intellectual abilities the same way that cars and planes extend our physical abilities. Maybe the artificial intelligences will help us treat the effects of old age and prolong our life spans indefinitely. Maybe we'll scan our consciousnesses into computers and live inside them as software, forever, virtually. Maybe the computers will turn on humanity and annihilate us. The one thing all these theories have in common is the transformation of our species into something that is no longer recognizable as such to humanity circa 2011. This transformation has a name: the Singularity.

The difficult thing to keep sight of when you're talking about the Singularity is that even though it sounds like science fiction, it isn't, no more than a weather forecast is science fiction. It's not a fringe idea; it's a serious hypothesis about the future of life on Earth. There's an intellectual gag reflex that kicks in anytime you try to swallow an idea that involves super-intelligent immortal cyborgs, but suppress it if you can, because while the Singularity appears to be, on the face of it, preposterous, it's an idea that rewards sober, careful evaluation.

(skipping page 2)

Then he extended the curves into the future, and the growth they predicted was so phenomenal, it created cognitive resistance in his mind. Exponential curves start slowly, then rocket skyward toward infinity. According to Kurzweil, we're not evolved to think in terms of exponential growth. "It's not intuitive. Our built-in predictors are linear. When we're trying to avoid an animal, we pick the linear prediction of where it's going to be in 20 seconds and what to do about it. That is actually hardwired in our brains."

Here's what the exponential curves told him. We will successfully reverse-engineer the human brain by the mid-2020s. By the end of that decade, computers will be capable of human-level intelligence.
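Kurzweil's point about linear versus exponential prediction is easy to make concrete. Here is a toy sketch (hypothetical numbers, not from the article) showing how quickly the two extrapolations diverge when the underlying process actually doubles each period:

```python
def linear_forecast(start, step, periods):
    # Linear intuition: assume the same absolute increase each period.
    return start + step * periods

def exponential_forecast(start, ratio, periods):
    # Exponential reality: assume the same multiplicative factor each period.
    return start * ratio ** periods

# A quantity starts at 1 and doubles each period. A linear predictor
# calibrated on the first period's increase (+1) agrees at first...
for periods in (1, 5, 10, 20):
    print(periods, linear_forecast(1, 1, periods),
          exponential_forecast(1, 2, periods))
# ...but after 20 periods the linear guess is 21, while the
# doubling process has reached 2**20 = 1,048,576.
```

The gap is the whole argument: a brain "hardwired" for linear prediction underestimates a doubling process by five orders of magnitude within 20 steps.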
Kurzweil puts the date of the Singularity (never say he's not conservative) at 2045. In that year, he estimates, given the vast increases in computing power and the vast reductions in the cost of same, the quantity of artificial intelligence created will be about a billion times the sum of all the human intelligence that exists today.

The Singularity isn't just an idea; it attracts people, and those people feel a bond with one another. Together they form a movement, a subculture; Kurzweil calls it a community. Once you decide to take the Singularity seriously, you will find that you have become part of a small but intense and globally distributed hive of like-minded thinkers known as Singularitarians. Not all of them are Kurzweilians, not by a long chalk. There's room inside Singularitarianism for considerable diversity of opinion about what the Singularity means and when and how it will or won't happen. But Singularitarians share a worldview. They think in terms of deep time, they believe in the power of technology to shape history, they have little interest in the conventional wisdom about anything, and they cannot believe you're walking around living your life and watching TV as if the artificial-intelligence revolution were not about to erupt and change absolutely everything. They have no fear of sounding ridiculous; your ordinary citizen's distaste for apparently absurd ideas is just an example of irrational bias, and Singularitarians have no truck with irrationality. When you enter their mind-space you pass through an extreme gradient in worldview, a hard ontological shear that separates Singularitarians from the common run of humanity. Expect turbulence.

In addition to the Singularity University, which Kurzweil co-founded, there's also a Singularity Institute for Artificial Intelligence, based in San Francisco.
It counts among its advisers Peter Thiel, a former CEO of PayPal and an early investor in Facebook. The institute holds an annual conference called the Singularity Summit. (Kurzweil co-founded that too.) Because of the highly interdisciplinary nature of Singularity theory, it attracts a diverse crowd. Artificial intelligence is the main event, but the sessions also cover the galloping progress of, among other fields, genetics and nanotechnology.

At the 2010 summit, which took place in August in San Francisco, there were not just computer scientists but also psychologists, neuroscientists, nanotechnologists, molecular biologists, a specialist in wearable computers, a professor of emergency medicine, an expert on cognition in gray parrots and the professional magician and debunker James "the Amazing" Randi. The atmosphere was a curious blend of Davos and UFO convention. Proponents of seasteading (the practice, so far mostly theoretical, of establishing politically autonomous floating communities in international waters) handed out pamphlets. An android chatted with visitors in one corner.

After artificial intelligence, the most talked-about topic at the 2010 summit was life extension. Biological boundaries that most people think of as permanent and inevitable, Singularitarians see as merely intractable but solvable problems. Death is one of them. Old age is an illness like any other, and what do you do with illnesses? You cure them. Like a lot of Singularitarian ideas, it sounds funny at first, but the closer you get to it, the less funny it seems. It's not just wishful thinking; there's actual science going on here.

For example, it's well known that one cause of the physical degeneration associated with aging involves telomeres, which are segments of DNA found at the ends of chromosomes. Every time a cell divides, its telomeres get shorter, and once a cell runs out of telomeres, it can't reproduce anymore and dies.
But there's an enzyme called telomerase that reverses this process; it's one of the reasons cancer cells live so long. So why not treat regular non-cancerous cells with telomerase? In November, researchers at Harvard Medical School announced in Nature that they had done just that. They administered telomerase to a group of mice suffering from age-related degeneration. The damage went away. The mice didn't just get better; they got younger.

Aubrey de Grey is one of the world's best-known life-extension researchers and a Singularity Summit veteran. A British biologist with a doctorate from Cambridge and a famously formidable beard, de Grey runs a foundation called SENS, or Strategies for Engineered Negligible Senescence. He views aging as a process of accumulating damage, which he has divided into seven categories, each of which he hopes to one day address using regenerative medicine. "People have begun to realize that the view of aging being something immutable (rather like the heat death of the universe) is simply ridiculous," he says. "It's just childish. The human body is a machine that has a bunch of functions, and it accumulates various types of damage as a side effect of the normal function of the machine. Therefore in principle that damage can be repaired periodically. This is why we have vintage cars. It's really just a matter of paying attention. The whole of medicine consists of messing about with what looks pretty inevitable until you figure out how to make it not inevitable."

Kurzweil takes life extension seriously too. His father, with whom he was very close, died of heart disease at 58. Kurzweil inherited his father's genetic predisposition; he also developed Type 2 diabetes when he was 35. Working with Terry Grossman, a doctor who specializes in longevity medicine, Kurzweil has published two books on his own approach to life extension, which involves taking up to 200 pills and supplements a day.
He says his diabetes is essentially cured, and although he's 62 years old from a chronological perspective, he estimates that his biological age is about 20 years younger. But his goal differs slightly from de Grey's. For Kurzweil, it's not so much about staying healthy as long as possible; it's about staying alive until the Singularity. It's an attempted handoff. Once hyper-intelligent artificial intelligences arise, armed with advanced nanotechnology, they'll really be able to wrestle with the vastly complex, systemic problems associated with aging in humans. Alternatively, by then we'll be able to transfer our minds to sturdier vessels such as computers and robots. He and many other Singularitarians take seriously the proposition that many people who are alive today will wind up being functionally immortal.

Comments:

George Kauffman

Genetic engineering and our further understanding of biology might make us immortal in the future. This article touched on this set of medical breakthroughs for the future. In another section of the article, it is stated that computer technology will become extensive enough to support our immortality. This is a confusion of immortality and eternal fame. Our human consciousness can only be copied to a new "platform". Once all of the information in a brain becomes functional in the platform (a computer), it will be a clone (of whomever) with an independent future. This might maintain a presence that seems to be a person to all the nearby human associates, but the original life will continue or end independently as far as they (and I) are concerned.

On the topic of exponential computer growth, the article ignores other realities that will compete with Moore's law. Moore's law is strictly based on our ever improving ability to make better and more transistors, packing them into shrinking packages.
Larger systems ultimately suffer from diminishing returns, and there is a point where the complexity will successfully sabotage further scaling of the system. I'm not saying that humans won't build a silicon-based simulation of the human brain; I'm predicting that such a machine won't enjoy an edge over the million years of brain development. The perfect memory of a computer relies on the relative simplicity of its data and retrieval methods. Once the true nature, complexity and scope of a brain's information is involved, the consciousness supported by the software will include many very human-like limitations. My two bytes (cents).

Natasha Sails

George Kauffman brought up a good point! And to further his connection to biology, it can be said that although the brain is the driving machine of the body, the body is still built with many organs, signals, and functions that make the brain work the way it does. You not only have to copy the brain's functions but also the functions of every cell in the body, because every cell and organ contributes to a human's being. The brain isn't the only organ that controls things in the body. Genetic code gives the original instructions to all the body's parts to function in specific ways. Every cell's function and ability to read genetic information, and the brain's workings altogether, would have to be decoded and coded. It sounds like a very huge process. There is much more biology involved than even just the moving of the 'soul' of a human over to a machine. The soul isn't just in the brain; it is in every part of a person's physical being.

Wallman97

Machines getting faster? Bit of misinformation. CPUs hit the limits of silicon years ago (you can only make metal so thin before it cannot carry an electrical charge). Since then computers stopped getting directly faster and started teaming up.
The new speed is from multiple cores (mini CPUs) working together. It's like getting more work done with 10 people instead of just 1 person working harder. But there are limits. Ten people cannot walk/run from point A to point B any faster than 1 person can walk/run from point A to point B. Adding cores adds speed in some ways, but it's not true speed.

Which brings us to an issue of size. Today's quad core has 800 million gates (on/off switches). It fits in the palm of your hand with silicon already at its limit (i.e. it's not getting smaller). The human brain has 86 billion (that's B) neurons in it. So we'd need about 100 quad cores to match the gate count. While the cores are small, the power supplies, memory, cooling and other parts are not. Such a computer would be the size of a small house.

Then there is one more minor issue. A gate has only 2 values, on/off. A neuron has thousands (based on number of connections times the rate at which it pulses). To get enough gates to match that kind of horsepower, and the associated hardware to power/cool/use them, we're talking about a computer the size of the sun.

I'll stop there and let others attack it from the angle of: even if you could build it, how would you program/teach it to be smarter than you? You cannot teach what you do not know.

Spikosauropod

It was once believed that flies were generated spontaneously from dung. We now know that if a fly does not lay an egg in the dung, complete with all of its genetic coding, no fly will emerge. I do not think it is necessarily impossible for consciousness to exist in a machine. However, I do not think it will emerge spontaneously. In order for consciousness to be in the machine, we will have to master consciousness, build consciousness, and implement consciousness. Currently, we know exactly nothing about consciousness.
The idea that if we build a seemingly intelligent machine, it will automatically be conscious, is just more flies from dung. That said, I think it is possible for a machine to behave intelligently, be creative, and pass a Turing test with flying colors; all of this, and never contain so much as a wisp of consciousness. It will be what philosophers call a "zombie". It may come across as being aware and quite charming, but there will be no light on inside. No one will be at home. Homework assignment: read all of these papers on consciousness: consc.net/online

[This user is an administrator]

Ted Johanson

Well, consciousness arose unaided in nature. If we can apply the same principles and structures for learning that nature uses to a machine, I think it might very well arise spontaneously. But I don't think you can program a machine to be conscious, if that's what you mean. The key word here is a learning machine. Take a look at Markram's Blue Brain project. I think there's a good chance that this approach could give rise to real consciousness. We don't need to know how it works - just that it does. And since Markram's group is copying nature's system, I think there's a good chance of success. Besides, if you have a machine that behaves in all ways possible like a conscious person - how do you know it's not?

[This user is an administrator]

Spikosauropod

If you made a perfect digital model of a pile of dung, no flies would emerge. Flies will not lay their eggs in circuitry and software. You cannot be certain that anything either is or is not conscious. Honestly, I suspect that people I have encountered, both in life and in writing, are not conscious. This observation once bothered me so much that I had to send a desperate email to one of my favorite philosophers, David Chalmers.
Here, read his Online Papers on Consciousness:
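Wallman97's intuition above (ten people cannot walk from A to B faster than one) is the informal version of what computer architects call Amdahl's law: the speedup from adding cores is bounded by the fraction of the work that must stay serial. A minimal sketch, with hypothetical numbers not taken from the thread:

```python
def amdahl_speedup(parallel_fraction, cores):
    # Amdahl's law: overall speedup when only `parallel_fraction`
    # of a task can be split across `cores` processors.
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# Even with 95% of a task parallelizable, the speedup can never
# exceed 1 / 0.05 = 20x, no matter how many cores are added.
for cores in (1, 4, 100, 10_000):
    print(cores, round(amdahl_speedup(0.95, cores), 2))
```

The cap depends only on the serial fraction, which is why "adding cores adds speed in some ways, but it's not true speed": past a point, extra cores buy almost nothing.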
#1. To: Tatarewicz (#0)
Clip of Kurzweil on "I've Got a Secret." http://www.youtube.com/watch?v=X4Neivqp2K4
ping