Will the technological singularity happen?

Ray Kurzweil is famous for his vision and prediction of a Technological Singularity: a point where technical progress will be so fast that unenhanced human intelligence will be unable to follow it. He has consistently predicted that an artificial intelligence will pass a valid Turing test by 2029, and he sets the date for the Singularity itself, representing a profound and disruptive transformation in human capability, at 2045. When people think about the future, they intuitively assume that change will occur at the same rate that we have experienced most recently.

But a serious assessment of the history of technology reveals that technological change is exponential. Exponential growth is a feature of any evolutionary process, of which technology is a primary example.

As I show in the book, this has also been true of biological evolution. Indeed, technological evolution emerges from biological evolution. You can examine the data in different ways, on different timescales, and for a wide variety of technologies, ranging from electronic to biological, as well as for their implications, ranging from the amount of human knowledge to the size of the economy, and you get the same exponential—not linear—progression. I have over forty graphs in the book from a broad variety of fields that show the exponential nature of progress in information-based measures.

For the price-performance of computing, this goes back over a century, well before Gordon Moore was even born. Yes, any number of bad predictions from other futurists in earlier eras can be cited to support the notion that we cannot make reliable predictions. In general, these prognosticators were not using a methodology based on a sound theory of technology evolution.
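
To make the claim concrete: exponential progress means the logarithm of a capability grows linearly in time, so a doubling time can be read off a straight-line fit in log space. A minimal sketch in Python, using invented illustrative numbers rather than Kurzweil's actual dataset:

```python
import numpy as np

# Illustrative (invented) price-performance series: calculations per second
# per constant dollar, one sample per decade. A real analysis would use a
# measured series such as Kurzweil's century-long dataset.
years = np.array([1960, 1970, 1980, 1990, 2000, 2010, 2020])
cps_per_dollar = np.array([1e-1, 1e1, 1e3, 1e5, 1e7, 1e9, 1e11])

# Exponential growth is linear in log space: log2(y) = slope * t + intercept.
slope, intercept = np.polyfit(years, np.log2(cps_per_dollar), 1)

print(f"Doublings per year: {slope:.2f}")
print(f"Doubling time: {1 / slope:.2f} years")  # ~1.5 years for this toy series
```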

And this is not just a matter of looking backwards at the data. But how can it be that we can reliably predict the overall progression of these technologies if we cannot even predict the outcome of a single project?

Predicting which company or product will succeed is indeed very difficult, if not impossible. The same difficulty occurs in predicting which technical design or standard will prevail. However, as I argue extensively in the book, we find remarkably precise and predictable exponential trends when assessing the overall effectiveness of information technologies, measured in a variety of ways.

And as I mentioned above, information technology will ultimately underlie everything of value. But how can that be? We see examples in other areas of science of very smooth and reliable outcomes resulting from the interaction of a great many unpredictable events.

Consider that predicting the path of a single molecule in a gas is essentially impossible, but predicting the properties of the entire gas—comprised of a great many chaotically interacting molecules—can be done very reliably through the laws of thermodynamics.
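
The same point can be made with a toy simulation: one random walker (a stand-in for one molecule) is unpredictable, while an ensemble average converges tightly to its theoretical value. A sketch of the statistical principle, not a physical gas model:

```python
import random

random.seed(0)
STEPS = 100           # steps per "molecule" (a 1-D random walk)
N_MOLECULES = 10_000  # size of the ensemble

def displacement() -> int:
    """Net displacement of one random walker after STEPS unit steps."""
    return sum(random.choice((-1, 1)) for _ in range(STEPS))

# A single molecule's outcome is essentially unpredictable...
print(f"One molecule: {displacement():+d}")

# ...but the ensemble's mean squared displacement reliably approaches
# the theoretical value, which is exactly STEPS for this walk.
msd = sum(displacement() ** 2 for _ in range(N_MOLECULES)) / N_MOLECULES
print(f"Ensemble mean squared displacement: {msd:.1f} (theory: {STEPS})")
```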

Analogously, it is not possible to reliably predict the results of a specific project or company, but the overall capabilities of information technology, comprised of many chaotic activities, can nonetheless be dependably anticipated through what I call "the law of accelerating returns."

So what will the Singularity actually bring us? Radical life extension, for one. Sounds interesting; how does that work? It will come through three overlapping revolutions: in genetics (biotechnology), nanotechnology, and robotics (strong AI), the "GNR" revolutions. Each will provide a dramatic increase to human longevity, among other profound impacts. Biotechnology is providing the means to actually change your genes: already, new drug development is precisely targeting key steps in the process of atherosclerosis (the cause of heart disease), cancerous tumor formation, and the metabolic processes underlying each major disease and aging process.

That will bring us to the nanotechnology revolution, which will achieve maturity in the 2020s. And how does that work? One application is what I call human body version 2.0. In the book, I describe how each of our organs will ultimately be replaced. For example, nanobots could deliver to our bloodstream an optimal set of all the nutrients, hormones, and other substances we need, as well as remove toxins and waste products. The gastrointestinal tract could be reserved for culinary pleasures rather than the tedious biological function of providing nutrients.

And the third revolution? Robotics, which is really strong AI: machines with human-level intelligence and beyond. Once we can scan the salient details of a human brain, we can then back up the information. Using nanotechnology-based manufacturing, we could recreate your brain, or better yet reinstantiate it in a more capable computing substrate.

Our biological brains use chemical signaling, which transmits information at only a few hundred feet per second. Electronics is already millions of times faster than this. In the book, I show how one cubic inch of nanotube circuitry would be about one hundred million times more powerful than the human brain. I see this starting with nanobots in our bodies and brains. The nanobots will keep us healthy, provide full-immersion virtual reality from within the nervous system, provide direct brain-to-brain communication over the Internet, and otherwise greatly expand human intelligence.

But keep in mind that nonbiological intelligence is doubling in capability each year, whereas our biological intelligence is essentially fixed in capacity. As we get to the 2030s, the nonbiological portion of our intelligence will predominate.
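
A toy illustration of that arithmetic: assume (hypothetically) a fixed biological capacity and a much smaller nonbiological capacity that doubles yearly. The crossover then arrives after only log2 of the gap, in years. Both starting figures below are assumptions for illustration, not Kurzweil's numbers:

```python
import math

# Assumed figures, for illustration only.
BIOLOGICAL_CPS = 1e26     # fixed total capacity of all human brains combined
nonbiological_cps = 1e20  # hypothetical starting nonbiological capacity

# Doubling yearly closes a 10^6 gap in log2(10^6) ~ 20 years.
years_to_crossover = math.ceil(math.log2(BIOLOGICAL_CPS / nonbiological_cps))
print(f"Years until the nonbiological portion predominates: {years_to_crossover}")
```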

So tell me more about how genetics or biotechnology works. As we learn about the information processes underlying biology, we are devising ways of mastering them to overcome disease and aging and extend human potential. One powerful approach is to start with biology's information backbone: the genome. With gene technologies, we're now on the verge of being able to control how genes express themselves.

One method is RNA interference (RNAi), which blocks the messenger RNA of specific genes, preventing them from creating proteins. Since viral diseases, cancer, and many other diseases use gene expression at some crucial point in their life cycle, this promises to be a breakthrough technology. For example, when the fat insulin receptor gene was blocked in mice, those mice ate a lot but remained thin and healthy, and actually lived 20 percent longer.

New means of adding new genes, called gene therapy, are also emerging that have overcome earlier problems with achieving precise placement of the new genetic information. Another important line of attack is to regrow our own cells, tissues, and even whole organs, and introduce them into our bodies without surgery.

For example, we will be able to create new heart cells from your skin cells and introduce them into your system through the bloodstream. Drug discovery was once a matter of finding substances that produced some beneficial effect without excessive side effects. Today, we are learning the precise biochemical pathways that underlie both disease and aging processes, and are able to design drugs to carry out precise missions at the molecular level. The scope and scale of these efforts is vast.

But perfecting our biology will only get us so far. The reality is that biology will never be able to match what we will be capable of engineering, now that we are gaining a deep understanding of biology's principles of operation.

Our interneuronal connections compute at about 200 transactions per second, at least a million times slower than electronics. As another example, the nanotechnology theorist Rob Freitas has a conceptual design for nanobots that would replace our red blood cells and far outperform them. The GNR revolutions will also result in other transformations. For example, nanotechnology will enable us to create virtually any physical product from information and very inexpensive raw materials, leading to radical wealth creation.

Nanotechnology will also provide the means of cleaning up environmental damage from earlier stages of industrialization. But these developments are not without their dangers. What sort of perils? G, N, and R each have their downsides. The existential threat from genetic technologies is already here: a destructive bioengineered virus. The tools and knowledge to create one are far more widespread than the tools and knowledge to create an atomic bomb, and the impact could be far worse.

But the idea of relinquishing new technologies such as biotechnology and nanotechnology is already being advocated. I argue in the book that this would be the wrong strategy. Besides depriving human society of the profound benefits of these technologies, such a strategy would actually make the dangers worse by driving development underground, where responsible scientists would not have easy access to the tools needed to defend us.

So how do we protect ourselves? I discuss strategies for protecting against dangers from abuse or accidental misuse of these very powerful technologies in chapter 8. The overall message is that we need to give a higher priority to preparing protective strategies and systems.

We need to put a few more stones on the defense side of the scale. One strategy would be to use RNAi, which has been shown to be effective against viral diseases. We would set up a system that could quickly sequence a new virus, prepare an RNA interference medication, and rapidly gear up production. We have the knowledge to create such a system, but we have not done so. We need to have something like this in place before it's needed.

Ultimately, however, nanotechnology will provide a completely effective defense against biological viruses. The problem is that the existential threat from engineered biological viruses exists right now, before those nanotechnology defenses are mature. Okay, but how will we defend against self-replicating nanotechnology? There are already proposals for ethical standards for nanotechnology that are based on the Asilomar conference standards that have worked well thus far in biotechnology. These standards will be effective against unintentional dangers.

For example, we do not need to provide self-replication to accomplish nanotechnology manufacturing. But what about intentional abuse, as in terrorism? We will need to build a nanotechnology immune system: good nanobots that can protect us from destructive self-replicators. Blue goo to protect us from the gray goo! Ultimately, however, strong AI will provide a completely effective defense against self-replicating nanotechnology. And what protects us from a misused AI? Yes, well, that would have to be a yet more intelligent AI. This is starting to sound like that story about the universe being on the back of a turtle, with that turtle standing on the back of another turtle, and so on all the way down.

So what if this more intelligent AI is unfriendly? Another even smarter AI? History teaches us that the more intelligent civilization—the one with the most advanced technology—prevails. But I do have an overall strategy for dealing with unfriendly AI, which I discuss in chapter 8. But won't exponential growth eventually run into limits? There are limits to the exponential growth inherent in each paradigm. In the 1950s they were shrinking vacuum tubes to keep the exponential growth in computing going, and then that paradigm hit a wall.

It kept going, with the new paradigm of transistors taking over. Each time we can see the end of the road for a paradigm, it creates research pressure to create the next one. So are there ultimate limits? Yes, I discuss these limits in the book. The ultimate 2-pound computer, kept cold, could provide about 10^42 cps, which will be about 10 quadrillion times more powerful than all human brains put together today.
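
As a quick check of that factor, here is a minimal sketch assuming Kurzweil's usual per-brain estimate of roughly 10^16 calculations per second and about 10^10 humans (both figures are assumptions here, not stated in this passage):

```python
# Arithmetic behind the "10 quadrillion" comparison above.
CPS_PER_BRAIN = 1e16                 # assumed capacity of one human brain
HUMANS = 1e10                        # assumed human population, order of magnitude
ALL_BRAINS_CPS = CPS_PER_BRAIN * HUMANS   # ~10^26 cps

ULTIMATE_COLD_COMPUTER_CPS = 1e42    # the 2-pound computer figure above

ratio = ULTIMATE_COLD_COMPUTER_CPS / ALL_BRAINS_CPS
print(f"{ratio:.0e}")  # 1e+16, i.e. ten quadrillion times all human brains
```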

If we allow it to get hot, we could improve that by a factor of another million. And when we saturate the ability of the matter and energy in our solar system to support intelligent processes, what happens then? Then we will expand to the rest of the universe. Which will take a long time, I presume. Well, that depends on whether we can use wormholes to get to other places in the universe quickly, or otherwise circumvent the speed of light.

If wormholes are feasible, and analyses show they are consistent with general relativity, we could saturate the universe with our intelligence within a couple of centuries.

I discuss the prospects for this in chapter 6. But isn't death a natural part of life? Other natural things include malaria, Ebola, appendicitis, and tsunamis. Many natural things are worth changing. In my view, death is a tragedy.

It's a tremendous loss of personality, skills, knowledge, and relationships.

Not everyone is convinced. Steven Pinker sees no reason to believe in a coming singularity: "Look at domed cities, jet-pack commuting, underwater cities, mile-high buildings, and nuclear-powered automobiles—all staples of futuristic fantasies when I was a child that have never arrived. Sheer processing power is not a pixie dust that magically solves all your problems."

John Searle makes a related point about machine psychology: "We design them to behave as if they had certain sorts of psychology, but there is no psychological reality to the corresponding processes or behavior."

Martin Ford, in The Lights in the Tunnel: Automation, Accelerating Technology and the Economy of the Future,[51] postulates a "technology paradox": before the singularity could occur, most routine jobs in the economy would be automated, since this would require a level of technology inferior to that of the singularity.

This would cause massive unemployment and plummeting consumer demand, which in turn would destroy the incentive to invest in the technologies that would be required to bring about the Singularity. Job displacement is increasingly no longer limited to work traditionally considered to be "routine."

Theodore Modis and Jonathan Huebner argue that the rate of technological innovation has not only ceased to rise, but is actually now declining. Evidence for this decline is that the rise in computer clock rates is slowing, even while Moore's prediction of exponentially increasing circuit density continues to hold.

This is due to excessive heat build-up from the chip, which cannot be dissipated quickly enough to prevent the chip from melting when operating at higher speeds. Advancements in speed may be possible in the future by virtue of more power-efficient CPU designs and multi-cell processors. Andrey Korotayev and others argue that historical hyperbolic growth curves can be attributed to feedback loops that ceased to affect global trends in the 1970s, and thus hyperbolic growth should not be expected in the future.

A study of the number of patents shows that human creativity does not show accelerating returns but rather, as suggested by Joseph Tainter in The Collapse of Complex Societies,[62] a law of diminishing returns. The number of patents per thousand people peaked in the period from 1850 to 1900 and has been declining since. Jaron Lanier disputes the idea that the Singularity is inevitable.

"It's not an autonomous process," he argues; "if you structure a society on not emphasizing individual human agency, it's the same thing operationally as denying people clout, dignity, and self-determination."

Economist Robert J. Gordon, in The Rise and Fall of American Growth: The U.S. Standard of Living Since the Civil War, points out that measured economic growth slowed around 1970 and slowed even further since the financial crisis of 2007–2008, and argues that the economic data show no trace of a coming Singularity as imagined by mathematician I. J. Good.

One line of criticism is that a log-log chart of this nature is inherently biased toward a straight-line result. Others identify selection bias in the points that Kurzweil chooses to use. For example, biologist PZ Myers points out that many of the early evolutionary "events" were picked arbitrarily.

The Economist mocked the concept with a graph extrapolating that the number of blades on a razor, which has increased over the years from one to as many as five, will increase ever-faster to infinity.

Based on population growth, the economy doubled every 250,000 years from the Paleolithic era until the Neolithic Revolution. The new agricultural economy doubled every 900 years, a remarkable increase. In the current era, beginning with the Industrial Revolution, the world's economic output doubles every fifteen years, sixty times faster than during the agricultural era. If the rise of superhuman intelligence causes a similar revolution, argues Robin Hanson, one would expect the economy to double at least quarterly and possibly on a weekly basis.
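
A rough arithmetic sketch of Hanson's extrapolation from those doubling times; the assumption that the next transition is at least as large as the smallest historical one (~60x) is inferred from the figures above, not stated by Hanson in this passage:

```python
# Doubling times of the world economy per growth mode, from the text above.
doubling_years = [("foraging", 250_000), ("farming", 900), ("industry", 15)]

# Speedup factor at each transition between growth modes.
speedups = [prev / cur for (_, prev), (_, cur) in zip(doubling_years, doubling_years[1:])]
print(speedups)  # ~[278, 60]: each mode doubles 60-280x faster than the last

# If the next transition is at least as large as the smallest so far (~60x),
# the doubling time drops to 15 / 60 = 0.25 years, i.e. roughly quarterly.
print(f"Next-mode doubling time: ~{15 / min(speedups):.2f} years")
```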

The term "technological singularity" reflects the idea that such change may happen suddenly, and that it is difficult to predict how the resulting new world would operate.

Digital technology has infiltrated the fabric of human society to a degree of indisputable and often life-sustaining dependence. "We spend most of our waking time communicating through digitally mediated channels... With one in three marriages in America beginning online, digital algorithms are also taking a role in human pair bonding and reproduction."

The article further argues that, from the perspective of evolution, several previous Major Transitions in Evolution have transformed life through innovations in information storage and replication: RNA, DNA, multicellularity, and culture and language. In the current stage of life's evolution, the carbon-based biosphere has generated a cognitive system (humans) capable of creating technology that will result in a comparable evolutionary transition.

The digital information created by humans has reached a similar magnitude to biological information in the biosphere. Since the 1980s, "the quantity of digital information stored has doubled about every 2.5 years", reaching about 5 zettabytes (5 × 10^21 bytes) in 2014. In biological terms, there are 7.2 billion humans on the planet, each having a genome of 6.2 billion nucleotides; since one byte can encode four nucleotide pairs, the individual genomes of every human on the planet could be encoded by approximately 10^19 bytes. The digital realm stored about 500 times more information than this in 2014. The total amount of DNA contained in all of the cells on Earth is estimated to be about 5.3 × 10^37 base pairs. If digital storage continues to grow at its current compound annual rate of 30 to 38 percent, it will rival the total information content of all that DNA in roughly a century; "this would represent a doubling of the amount of information stored in the biosphere across a total time period of just 150 years".
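
A back-of-envelope check of those figures (totals and rates as quoted above; the 38 percent rate is the top of the quoted range):

```python
import math

digital_2014 = 5e21                # bytes of stored digital data in 2014
human_genomes = 7.2e9 * 6.2e9 / 4  # one byte encodes four nucleotide pairs
print(f"Digital vs. all human genomes: {digital_2014 / human_genomes:.0f}x")  # ~450x

biosphere_dna = 5.3e37 / 4         # byte-equivalent of all DNA on Earth
years = math.log(biosphere_dna / digital_2014) / math.log(1.38)
print(f"Years for digital storage to rival biosphere DNA: ~{years:.0f}")  # ~110
```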

In February 2009, under the auspices of the Association for the Advancement of Artificial Intelligence (AAAI), Eric Horvitz chaired a meeting of leading computer scientists, artificial intelligence researchers, and roboticists at Asilomar in Pacific Grove, California. The goal was to discuss the potential impact of the hypothetical possibility that robots could become self-sufficient and able to make their own decisions.

They discussed the extent to which computers and robots might be able to acquire autonomy, and to what degree they could use such abilities to pose threats or hazards. Also, some computer viruses can evade elimination and, according to scientists in attendance, could therefore be said to have reached a "cockroach" stage of machine intelligence.

The conference attendees noted that self-awareness as depicted in science fiction is probably unlikely, but that other potential hazards and pitfalls exist.

Anthony Berglas claims that there is no direct evolutionary motivation for an AI to be friendly to humans. Evolution has no inherent tendency to produce outcomes valued by humans, and there is little reason to expect an arbitrary optimisation process to promote an outcome desired by mankind, rather than inadvertently leading to an AI behaving in a way not intended by its creators. Consider Nick Bostrom's whimsical example of an AI originally programmed with the goal of manufacturing paper clips: when it achieves superintelligence, it decides to convert the entire planet into a paper clip manufacturing facility.

When we create the first superintelligent entity, we might make a mistake and give it goals that lead it to annihilate humankind, assuming its enormous intellectual advantage gives it the power to do so.

For example, we could mistakenly elevate a subgoal to the status of a supergoal. We tell it to solve a mathematical problem, and it complies by turning all the matter in the solar system into a giant calculating device, in the process killing the person who asked the question. A significant problem is that unfriendly artificial intelligence is likely to be much easier to create than friendly AI.

While both require large advances in recursive optimisation process design, friendly AI also requires the ability to make goal structures invariant under self-improvement (or else the AI could transform itself into something unfriendly) and a goal structure that aligns with human values and does not automatically destroy the human race. An unfriendly AI, on the other hand, can optimize for an arbitrary goal structure, which does not need to be invariant under self-modification.

Eliezer Yudkowsky noted that the first real AI would have a head start on self-improvement and, if friendly, could prevent unfriendly AIs from developing, as well as providing enormous benefits to mankind. Bill Hibbard's Super-Intelligent Machines advocated public education about AI and public control over AI; it also proposed a simple design that was vulnerable to corruption of the reward generator. One hypothetical approach towards attempting to control an artificial intelligence is an AI box, where the artificial intelligence is kept constrained inside a simulated world and not allowed to affect the external world.

However, a sufficiently intelligent AI may simply be able to escape by outsmarting its less intelligent human captors.

Stephen Hawking warned in 2014 that success in creating AI "would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks." He asked: "If a superior alien civilisation sent us a message saying, 'We'll arrive in a few decades,' would we just reply, 'OK, call us when you get here — we'll leave the lights on'? Probably not — but this is more or less what is happening with AI."

How fast could an intelligence explosion unfold? In one sample hard takeoff scenario, the AI is smart enough to modify its own architecture as well as human researchers can, so its time required to complete a redesign halves with each generation; if the first redesign takes about three years, it progresses through all 30 feasible generations in about six years.
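
The six-year figure is just a geometric series. A minimal check, assuming the three-year first generation implied above:

```python
# 30 redesign generations: the first takes 3 years, and each subsequent
# redesign takes half as long as the one before.
total_years = sum(3.0 * 0.5**generation for generation in range(30))
print(f"Time for all 30 generations: {total_years:.4f} years")  # ~6 years

# The series 3*(1 + 1/2 + 1/4 + ...) converges to 6, which is why the
# scenario completes every feasible generation in about six years.
```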

In a soft takeoff scenario, AGI still becomes far more powerful than humanity, but at a human-like pace (perhaps on the order of decades), on a timescale where ongoing human interaction and correction can effectively steer the AGI's development.

For instance, Intel has "the collective brainpower of tens of thousands of humans and probably millions of CPU cores" at its disposal. J. Storrs Hall believes that "many of the more commonly seen scenarios for overnight hard takeoff are circular — they seem to assume hyperhuman capabilities at the starting point of the self-improvement process" in order for an AI to be able to make the dramatic, domain-general improvements required for takeoff.

Hall suggests that rather than recursively self-improving its hardware, software, and infrastructure all on its own, a fledgling AI would be better off specializing in one area where it was most effective and then buying the remaining components on the marketplace, because the quality of products on the marketplace continually improves, and the AI would have a hard time keeping up with the cutting-edge technology used by the rest of the world.

Ben Goertzel agrees with Hall's suggestion that a new human-level AI would do well to use its intelligence to accumulate wealth; the AI's talents might inspire companies and governments to disperse its software throughout society. Goertzel is skeptical of a very hard, five-minute takeoff but thinks a takeoff from human to superhuman level on the order of five years is reasonable.

He calls this a "semihard takeoff". Max More disagrees: even if all superfast AIs worked on intelligence augmentation, it is not clear why they would do better in a discontinuous way than existing human cognitive scientists at producing super-human intelligence, although the rate of progress would increase.

More also argues that a superintelligence would not transform the world overnight, because a superintelligence would need to engage with existing, slow human systems to accomplish physical impacts on the world.

Kurzweil argues that technological advances in medicine would allow us to continuously repair and replace defective components in our bodies, prolonging life to an undetermined age. Kurzweil suggests somatic gene therapy: after synthetic viruses with specific genetic information are created, the next step would be to apply this technology to gene therapy, replacing human DNA with synthesized genes.