Ray Kurzweil wasn't like the other nice, Jewish boys he grew up with in Queens. While they were putting baseball cards in the spokes of their bikes, Ray was writing computer programs and shaking hands with the President. Now, those other kids from the neighborhood are doctors and lawyers, and Kurzweil is a techno-prophet whose book, The Singularity Is Near: When Humans Transcend Biology, changed our discourse on technology with its bold predictions about the coming merger between man and machine.
In 2006, a year after the book came out, Kurzweil began hosting a yearly conference to mark how the past year brought us closer to the Singularity, and to debate the complex philosophical ramifications of the melding of technology and biology. And this year, this reporter will be there to cover it.
Over the weekend, I will post a number of blog entries detailing the profound, the important, and the just plain wacky ideas thrown about with abandon by intellectual heavyweights like Stephen Wolfram, David Chalmers and Kurzweil himself. Then on Monday, I'll break it all down with a larger summary of what I've learned, where we're going as a species, and which nanobots all the cool kids will compose their cells out of.
See you at the Singularity.
The approach of a "singularity" (I prefer to regard it as more akin to a phase transition) is, of course, generally accepted by those attending the summit.
They are among the tiny minority of people on this planet who have managed to grasp the very clear pattern of exponential change evidenced by the history of technology. "Moore's law" is a particularly striking example.
Unfortunately, while these folks have accepted exponentiality in technological development, a more general awareness that the pattern extends to other natural processes on this planet, particularly living systems, is not common.
Furthermore, most of the predictions that arise are strongly colored by anthropocentrism. Robots are frequently predicted to be humanoid or to acquire various human characteristics. On another tack, humans will transcend biology. Or humans will ultimately populate the galaxy. Or humans will, perhaps by developing neural networks, create a new "intelligence."
These result from the same kind of naive assumptions that, in earlier times, spawned religions, with the various deities having, to varying extents, humanoid characteristics.
Man made gods in his own behavioral image.
To properly interpret the patterns science observes in nature, it is necessary to learn the trick of stepping outside our (very natural) anthropocentric shell so that objectivity is not compromised.
Another common characteristic that weakens prediction is over-specialization.
Our world does not evolve along a single path but by multiple interrelated, networked processes.
Gross patterns need to be appreciated in order to better understand universal trends.
I have done my best to address these problems in my book "Unusual Perspectives", the electronic version of which can be freely downloaded from:
I read The Singularity Is Near about three years ago. I was blown away. I recommended it to everyone I knew, and even bought extra copies for a friend of mine and my cousin. Although I have doubts about some of the predictions, I generally agree with the book's contents, and on every page I learned so many things. My doubts stem from the fact that developments in the real world face obstacles such as societal and political ones, so it is not like a microchip's speed improving exponentially in an isolated lab. However, the concept of the law of accelerating returns described in the book is entirely logical, and eventually it brings everything into the domain of IT.
The Singularity as Kurzweil describes it is not possible and is not a necessary outcome of the Moore's-law process.
We agree that Moore's law can be extended back in time, from before it was defined, to include the evolution of living things. One interesting aspect of this is that living things' programming language never evolved; only the information stored within it "improved," letting the surviving generations (the evolved ones) adapt to current conditions. Improvements to the programming mechanisms/language would be much more time-consuming than simple changes to, and increases in, the data stored in the system, hence the very rapid rate of evolution (of the stored data that define the living thing). The amount of data stored by living systems has been increasing over millions of years, but no longer fast enough to sustain Moore's law; instead, new computing paradigms continue propelling Moore's law, as was true in the far past (evolution, which improved and increased the information stored in living organisms) and in the near past (switches, vacuum tubes, transistors, ICs, ...). It is true that in each cycle the current technology (now silicon ICs) reaches its limits and declines as a new paradigm takes over and continues the exponential growth, but does that really lead to a singularity, or is a singularity the only possible outcome? I don't think so.
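The pattern described above can be sketched in a few lines: each paradigm follows an S-curve that flattens at its own limit, while a successor paradigm with a higher limit takes over, so the envelope across paradigms keeps growing exponentially. All the numbers here (five paradigms, 10x limits, arrival every 10 time units) are illustrative assumptions, not data from the comment:

```python
import math

def s_curve(t, limit, midpoint, rate=1.0):
    """Logistic S-curve: slow start, rapid rise, saturation at `limit`."""
    return limit / (1.0 + math.exp(-rate * (t - midpoint)))

def envelope(t, paradigms):
    """Best available capability at time t across all paradigms."""
    return max(s_curve(t, limit, mid) for limit, mid in paradigms)

# Five hypothetical paradigms, each with a 10x higher limit than the last,
# whose growth midpoints arrive every 10 time units.
paradigms = [(10 ** (i + 1), 10 * i + 5) for i in range(5)]

# Sampled at each paradigm's midpoint, the envelope sits at roughly half
# that paradigm's limit, so successive samples grow ~10x: exponential
# growth overall, even though every individual paradigm stalls.
samples = [envelope(10 * i + 5, paradigms) for i in range(5)]
```

Whether that stacking of S-curves can continue indefinitely, as opposed to hitting a final limit, is exactly the question the comment raises.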
An interesting observation, which somehow goes ignored or unnoticed, is that this growth (Moore's law) eventually amounts to transforming energy into "knowledge": the amount of energy committed to "storing" information keeps increasing exponentially. Does this lead to a singularity where all the energy is transformed and organized into a form of pure knowledge (sustained by mechanisms yet unknown to us), or to a "heat death" or "big crunch", or would that be a definition of a god? Is entropy really an increase-only proposition, or does our current understanding of thermodynamics need a major overhaul? The process of organizing energy into systems that store knowledge decreases entropy as a measure of disorder, if disorder is measured as the distance from the knowledge state toward which energy allocation seems to be converging.
A singularity is possible only if unlimited energy is available. If there is only limited energy in the universe, then the singularity will not occur, and possibly there exists a greater paradigm than what we call the "universe". Nothing else can be true under this assumption.
Imagine you created a computer program capable of improving itself (rewriting itself), which learns and accumulates more and more knowledge. Initially it would probably improve itself exponentially, but very soon it would run out of resources and stop improving. This assumes it would not switch to another paradigm and start controlling resources outside of its "body". In the case of the universe, it could be that the accumulated knowledge will eventually figure out an "out-of-the-body" experience beyond the known universe. We can't possibly know whether a "heat death", a "big crunch", or a singularity is possible. Science seems to invent these syndromes to "explain" what we don't yet know.
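The thought experiment above, exponential self-improvement that stalls at a resource limit, is the classic logistic-growth pattern, and a minimal sketch makes the two phases visible. The growth rate and resource cap below are arbitrary illustrative choices:

```python
def capability(steps, r=0.5, cap=1000.0, start=1.0):
    """Logistic growth: near-exponential at first, flat near the resource cap."""
    x = start
    history = [x]
    for _ in range(steps):
        x += r * x * (1.0 - x / cap)  # growth throttled as x approaches cap
        history.append(x)
    return history

h = capability(60)

# Early on, each step multiplies capability by roughly (1 + r): exponential.
early_ratio = h[1] / h[0]
# Late in the run the ratio approaches 1: improvement has effectively stopped.
late_ratio = h[-1] / h[-2]
```

Raising the cap (the "switch to another paradigm" in the comment) just restarts the exponential phase at a higher ceiling; it never removes the ceiling entirely, which is the commenter's core objection to an unbounded singularity.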