Showing posts with label singularity. Show all posts

Friday, October 21, 2011

Is the Singularity Near? Kurzweil vs. Allen

When will humanity reach the Singularity, that now-famous point in time when artificial intelligence becomes greater than human intelligence? It is aptly called the Singularity by proponents like Ray Kurzweil: like the singularity at the center of a black hole, we have no idea what happens once we reach it. However, the debate today is not what happens after the Singularity, but when it will happen. _BigThink
In the video below, Kurzweil discusses some of his ideas about the coming singularity, including timelines and cautionary notes.

Microsoft co-founder and billionaire Paul Allen recently expressed skepticism about Kurzweil's timeline for the singularity in a Technology Review article.
Futurists like Vernor Vinge and Ray Kurzweil have argued that the world is rapidly approaching a tipping point, where the accelerating pace of smarter and smarter machines will soon outrun all human capabilities. They call this tipping point the singularity, because they believe it is impossible to predict how the human future might unfold after this point. Once these machines exist, Kurzweil and Vinge claim, they'll possess a superhuman intelligence that is so incomprehensible to us that we cannot even rationally guess how our life experiences would be altered. Vinge asks us to ponder the role of humans in a world where machines are as much smarter than us as we are smarter than our pet dogs and cats. Kurzweil, who is a bit more optimistic, envisions a future in which developments in medical nanotechnology will allow us to download a copy of our individual brains into these superhuman machines, leave our bodies behind, and, in a sense, live forever. It's heady stuff.

While we suppose this kind of singularity might one day occur, we don't think it is near.

...Kurzweil's reasoning rests on the Law of Accelerating Returns and its siblings, but these are not physical laws. They are assertions about how past rates of scientific and technical progress can predict the future rate. Therefore, like other attempts to forecast the future from the past, these "laws" will work until they don't. More problematically for the singularity, these kinds of extrapolations derive much of their overall exponential shape from supposing that there will be a constant supply of increasingly more powerful computing capabilities. For the Law to apply and the singularity to occur circa 2045, the advances in capability have to occur not only in a computer's hardware technologies (memory, processing power, bus speed, etc.) but also in the software we create to run on these more capable computers. To achieve the singularity, it isn't enough to just run today's software faster. We would also need to build smarter and more capable software programs. Creating this kind of advanced software requires a prior scientific understanding of the foundations of human cognition, and we are just scraping the surface of this. _Technology Review_Paul Allen
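Allen's point that these "laws" work until they don't is partly a point about sensitivity: a forecast like "singularity circa 2045" leans heavily on the assumed rate of progress. A minimal sketch of that sensitivity (the doubling times and the 2^20 threshold are illustrative assumptions, not figures from either article):

```python
# Sketch: how sensitive an exponential forecast is to its assumed rate.
# Suppose reaching some capability threshold requires a 2^20 (~million-fold)
# improvement. The arrival date scales linearly with the doubling time.
def years_to_threshold(doublings_needed, doubling_time_years):
    """Years until capability has doubled `doublings_needed` times."""
    return doublings_needed * doubling_time_years

for dt in (1.5, 2.0, 3.0):
    years = years_to_threshold(20, dt)
    print(f"doubling every {dt} yr -> threshold reached in {years:.0f} years")
```

Shifting the assumed doubling time from 1.5 to 3 years moves the forecast from 30 to 60 years out, which is why small errors in the extrapolated "law" compound into decades of difference in the predicted date.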
Allen goes on to discuss the "complexity brake" that the limitations of the human brain (and of human understanding of the human brain) will apply to any endeavour whose complexity begins to accelerate too quickly.

Allen's argument is remarkably similar to arguments previously put forward by Al Fin neuroscientists and cognitivists. The actual way the human brain works is very poorly understood -- even by the best neuroscientists and cognitivists. If that is true of the experts, the understanding of the brain among artificial intelligence researchers tends to be orders of magnitude poorer. If these are the people who are supposed to come up with super-human intelligence and the "uploading of human brains" technology that posthuman wannabes are counting on, good luck!

But now, Ray Kurzweil has chosen the same forum to respond to Paul Allen's objections:
Allen writes that "the Law of Accelerating Returns (LOAR). . . is not a physical law." I would point out that most scientific laws are not physical laws, but result from the emergent properties of a large number of events at a finer level. A classical example is the laws of thermodynamics (LOT). If you look at the mathematics underlying the LOT, they model each particle as following a random walk. So by definition, we cannot predict where any particular particle will be at any future time. Yet the overall properties of the gas are highly predictable to a high degree of precision according to the laws of thermodynamics. So it is with the law of accelerating returns. Each technology project and contributor is unpredictable, yet the overall trajectory as quantified by basic measures of price-performance and capacity nonetheless follow remarkably predictable paths.

...Allen writes that "these 'laws' work until they don't." Here, Allen is confusing paradigms with the ongoing trajectory of a basic area of information technology. If we were examining the trend of creating ever-smaller vacuum tubes, the paradigm for improving computation in the 1950s, it's true that this specific trend continued until it didn't. But as the end of this particular paradigm became clear, research pressure grew for the next paradigm.

...Allen's statement that every structure and neural circuit is unique is simply impossible. That would mean that the design of the brain would require hundreds of trillions of bytes of information. Yet the design of the brain (like the rest of the body) is contained in the genome. And while the translation of the genome into a brain is not straightforward, the brain cannot have more design information than the genome. Note that epigenetic information (such as the peptides controlling gene expression) does not appreciably add to the amount of information in the genome. Experience and learning do add significantly to the amount of information, but the same can be said of AI systems.

...How do we get on the order of 100 trillion connections in the brain from only tens of millions of bytes of design information? Obviously, the answer is through redundancy. There are on the order of a billion pattern-recognition mechanisms in the cortex. They are interconnected in intricate ways, but even in the connections there is massive redundancy. The cerebellum also has billions of repeated patterns of neurons.

...Allen mischaracterizes my proposal to learn about the brain from scanning the brain to understand its fine structure. It is not my proposal to simulate an entire brain "bottom up" without understanding the information processing functions. We do need to understand in detail how individual types of neurons work, and then gather information about how functional modules are connected. The functional methods that are derived from this type of analysis can then guide the development of intelligent systems. Basically, we are looking for biologically inspired methods that can accelerate work in AI, much of which has progressed without significant insight as to how the brain performs similar functions. _TechnologyReview_Ray Kurzweil
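Kurzweil's thermodynamics analogy above is easy to check numerically: simulate many particles as random walks, and while no individual endpoint is predictable, the ensemble statistics follow a tight law. A minimal sketch (the particle and step counts are arbitrary choices for illustration):

```python
import random
import statistics

random.seed(42)  # reproducible run

def random_walk(steps):
    """Final position after `steps` unit steps, each +1 or -1 at random."""
    return sum(random.choice((-1, 1)) for _ in range(steps))

STEPS, WALKERS = 1000, 5000
finals = [random_walk(STEPS) for _ in range(WALKERS)]

# Any single walker's endpoint is unpredictable, but theory says the
# ensemble mean is ~0 and the variance is ~STEPS (here, ~1000).
print("sample mean:", statistics.mean(finals))
print("sample variance:", statistics.pvariance(finals))
```

The individual trajectories are pure noise, yet the aggregate obeys a predictable statistical law -- which is exactly the structure of Kurzweil's argument for the law of accelerating returns.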
Kurzweil's attitude seems to be: "Because difficult problems have arisen and been solved in the past, we can expect that all difficult problems that arise in the future will also be solved." Perhaps I am being unfair to Kurzweil here, but his reasoning appears to be fallacious in a rather facile way.

Al Fin neuroscientists and cognitivists warn Kurzweil and other singularity enthusiasts not to confuse the cerebellum with the cerebrum, in terms of complexity. They further warn Kurzweil not to assume that a machine intelligence researcher can simply program a machine to emulate neurons and neuronal networks to a certain level of fidelity, and then vastly expand that model to the point that it achieves human-level intelligence. That is a dead-end trap, which will end up wasting many billions of dollars of research funds in North America, Europe, and elsewhere.

This debate has barely entered its opening phase. Paul Allen is ahead in terms of a realistic appraisal of the difficulties ahead. Ray Kurzweil scores points based upon his endless optimism and his proven record of skillful reductionistic analyses and solutions of previous problems.

Simply put, the singularity is not nearly as near as Mr. Kurzweil predicts. But the problem should not be considered impossible. Clearly, we will need a much smarter breed of human before we can see our way clear to the singularity. As smart as Mr. Kurzweil is, and as rich as Mr. Allen is, we are going to need something more from the humans who eventually birth the singularity.

Thursday, September 02, 2010

Singularity University Class of 2010

Singularity Hub presented a wrap-up of Singularity University's second summer session, held in 2010. The wrap-up closely followed a comprehensive video look at the full range of projects undertaken by SU's students this summer.

Students were busy working on energy projects, water projects, food projects, space projects, and "up-cycle" projects -- a focus on sustainability. It sounds as if this summer's projects may have been focused upon the problems of the third world -- which calls into question the entire idea of naming this endeavour "Singularity University", if it is only another "appropriate technology" approach to saving the third world from its own idiocratic nature.

The Singularity is about mind-blowing, futuristic advances that happen so quickly as to be unpredictable and untrackable.

Wait and see what happens next year, I suppose. The phantom hobgoblins of overpopulation doom, carbon catastrophe doom, peak oil doom, peak water doom, and all the other scary story dooms that haunt the modern airwaves and electron-ways, may be too powerful to resist -- even if totally false. If so, pack up the babies and grab the old ladies and go go go to Brother Love's Salvation of the Earth Singularity Show, because there is no escaping it now.

Tuesday, November 04, 2008

The Singularity as A Collective Hive Mind?


Utopian visions of a technological singularity occupy the individual minds of tens of thousands of Earthers. At the recent Singularity conference in San Jose, attendees were reminded that the human mind remains far and away the most capable cognitive machine in the known universe. But that did not stop them from dreaming utopian visions of all-powerful artificial intelligences, and of machines capable of "storing" human consciousness, simulating a full-sensory paradise for their uploaded minds.

From there, the idea of a collective mind, a mental "super-organism" of global proportions, arises easily -- particularly for anyone who has been exposed to the Star Trekkian vision of "The Borg." Kevin Kelly's essay on the Superorganism elaborates on the concept of consciousness evolving out of the growing internetwork of co-evolving humans and machines.

An interesting web attempt at creating an evolving, future-oriented collective intelligence is The Space Collective. Combining clear speculative thinking with crisp visual artistic values, TSC is a future-oriented website worth visiting more than once.

Our own brain-minds are an example of a hive mind of sorts, or at least a modular consciousness. Humans spontaneously organise socially as families, clans, tribes, guilds, militias, hunting parties, religions, etc. Further, had any ancient astronaut observers of human civilisation taken the time to film time-lapse documentaries of the rise and fall of human city-states and cultures, a definite social insect-like quality in the onset and decay of cultures would have been clear.

Humans are desperately in need of an "organising principle" to lead them to a more enlightened and sustainable era of safe-branching futures. While socialist economics is known to be a destructive dead-end, it continues to attract more blindly utopian and less mentally endowed humans like moths to a flame. And while capitalist economics combined with libertarian politics provides the excess wealth needed to finance evolutionary projects, the deeper level of guidance needed to help choose workable long-term directions of effort is lacking.

Hives too often degenerate into mobs, as more personally powerful individual interests assert themselves in counter-productive directions, assuming amplitudes strong enough to skew the crowd away from ordinary and more productive activities.

Religions have always failed in the end, as have philosophies and ideologies. Whatever guiding principle an enlightened group mind may adopt, it should be simple, subject to test by reality, and resistant to co-option by charismatic or ruthlessly powerful individuals.

Wednesday, June 25, 2008

The Posthumans Among Us

The recent IEEE Spectrum special on the singularity brought a lot of comment across the blogosphere. The NYT's John Tierney weighed in on the topic recently with a piece: When Do Post-Humans Show Up? Tierney gives Ray Kurzweil a chance to fight back against the "singularity deniers", and Kurzweil obliges.
These critics obviously have not read my book and have not read this chapter because they do not respond to anything I’ve written. It is as if they’ve just heard a superficial presentation of these ideas and respond without any engagement of the extensive discussion that has already taken place about these issues. _NYT
That may very well be, or it may be that Kurzweil is mentally fixed on a particular set of mechanisms and scenarios of singularity. It may be that Kurzweil's "extensive discussion that has already taken place about these issues..." is not as extensive or profound as Kurzweil imagines.

Kurzweil's discussion about how easily the human brain/mind will be emulated is particularly naive. This naivete comes naturally when a prolific and esteemed person such as Kurzweil is insufficiently familiar with the subject matter he is discussing--the genetics (and epigenetics) of the mind/brain.
I point out that the complexity of the design of the brain is at least 100 million times simpler than it appears because the design is in the genome. Even including the genetic machinery that implements the genome, the compressed genome is only about 50 million bytes (which I analyze in the book), and that is a level of complexity we can handle. We are already showing that we can develop realistic models and simulations of brain regions like the cerebellum and others. The cerebellum, for example, repeats a basic pattern a few billion times with some random variation within certain prescribed constraints. There is a lot of apparent complexity in the cerebellum but not very much unique design information, and we’re showing we can reverse-engineer it.
Of course the cerebellum is only peripherally involved in most conscious activity. It is an important "co-processor", but not the central processing center of consciousness. One can derive no comfort in the quest to understand human cognition from the apparent simplicity of cerebellar structure.

Similarly, if one supposed that the apparent simplicity of the brain genome implied a simplicity of the brain/mind itself, one would have to overlook much recent research detailing the "post-genomic", meta-genomic, and epigenetic development of central nervous system structures. These critical aspects of brain development are not well enough understood to allow useful modeling or quantification. Worse yet, even a complete understanding of how to create a human brain will not immediately put us in a place to understand how that brain works, or how it might be improved.
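The compression argument can be put in rough numbers. Taking Kurzweil's ~50-million-byte compressed genome (quoted above) against the commonly cited ~100 trillion synaptic connections, and assuming -- purely for illustration -- that uniquely specifying one connection would cost a few bytes, the implied redundancy factor is on the order of millions:

```python
# Back-of-envelope sketch; bytes_per_connection is an illustrative assumption.
connections = 100e12        # ~100 trillion synaptic connections
design_bytes = 50e6         # Kurzweil's ~50 MB compressed-genome estimate
bytes_per_connection = 4    # assumed cost to specify one connection uniquely

unique_spec_bytes = connections * bytes_per_connection  # ~4e14 bytes
redundancy_factor = unique_spec_bytes / design_bytes    # ~8 million

print(f"implied redundancy factor: ~{redundancy_factor:,.0f}x")
```

Whatever per-connection cost one assumes, the gap is so large that either massive repetition (Kurzweil's view) or information added by development and experience (the view argued here) must account for most of the brain's actual structure.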

The road to the "singularity" will not be a smooth exponential curve. It will be a fractal fracturing of boundaries and limitations that will take decades to sort out. We will have pieces of the singularity existing a hand's breadth away from other pieces, with neither recognising the other. It will be up to post-humans to put the pieces together so that they do not blow up into a Skynet or Colossus.

If western civilisation survives attacks from desert religious fanaticisms, and 19th century cloistered ghetto-inspired central planning, various critical parts of the "singularity" may achieve capabilities and versatilities that allow them to connect with other critical parts in the same place at the same time. It is up to the post-humans among us to follow the threads of accomplishment, splice them together into a self-generative, autopoietic symbiotic whole, and wrap it all in a sustainable energy/matter matrix.

In Kurzweil's vision, the singularity drives the post-human. But doesn't it make more sense the other way around?

Eventually, the biological substrate of consciousness will be outpaced by other forms of conscious cognition. Post-humans will build their world around that knowledge, so as not to be left behind. Currently, only science fiction provides the speculative power to imagine the transformations that will come from genomics, nanotechnology, advanced hyper-parallel computation, robotics, evolved machine intelligences, and any combination of the above. After science fiction, Kurzweil provides a more "connected" view of our potential. Finally, there is mainstream science, which runs a very distant third in scope and vision to SF and Kurzweil.

But if you want a realistic assessment of what is likely to happen, you need scientist/engineers trained in multiple disciplines, who are also thoroughly steeped in biology, cognitive science, history, and science fiction. Post-humans will have to be able to bridge disciplines, cultures, even civilisations.

Previously published at Al Fin