The key to a successful human out-migration into the solar system and beyond is the ability to prepare the ground ahead of the colonists. When settlers arrive on Luna, Mars, Titan, and other potential colony sites, they should find working habitats and vital infrastructure waiting for them.
Here are a few basic ideas on complex systems and autopoietic self-organisation which will likely form the foundation for more advanced techniques of seeding complex systems in hostile environments. The slides come from Rene Doursat, who lectures at Ecole Polytechnique and specialises in complex systems and morphogenetic engineering.
Complex systems and networks possess some interesting properties which distinguish them from most human-made systems and networks. Morphogenetic engineers such as Rene Doursat aim to incorporate many of the more desirable properties of complex systems into systems which could be co-created by humans.
Complex systems can arise on multiple levels, all the way from subatomic particles up to the very universe itself. These systems tend to be decentralised, emerging from a more disordered substrate by the means of self-organising principles.
The individual agents vary between complex systems, depending upon the level at which they operate. Lower-level complex systems can themselves act as agents of higher-level complex systems.
Complex systems can display an intricate and highly functional architecture -- without having had the "benefit" of an intelligent architect. This self-organising architecture of complex systems acts as an inspiration for possible architectures of future complex systems which might be evolved and co-created for use by humans.
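One classic toy illustration of intricate structure arising without an architect is Conway's Game of Life: purely local birth-and-survival rules, yet coherent, persistent "structures" emerge, such as the travelling glider below. This is only a generic sketch of self-organisation, not one of Doursat's models.

```python
from collections import Counter

def step(cells):
    """One Game of Life update on an unbounded grid; cells is a set of (x, y)."""
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth on exactly 3 live neighbours; survival on 2 or 3.
    return {c for c, n in neighbour_counts.items()
            if n == 3 or (n == 2 and c in cells)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):
    state = step(state)

# After 4 steps the pattern has moved itself one cell diagonally while
# keeping its shape -- a persistent "structure" built by local rules alone.
moved = {(x + 1, y + 1) for (x, y) in glider}
print(state == moved)  # → True
```

No cell "knows" about the glider; the architecture exists only at the level of the whole system.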
Such novel complex systems could allow humans to survive and prosper in locations that would ordinarily not support complex animals which evolved on a far more suitable world (Earth). In other words, the "seeds" of human-enabling complex systems could be delivered to hostile -- but strategically important -- locations, where they would proceed to build human-friendly micro-environments, habitats, and infrastructures.
Such intricate yet robust systems would necessarily need to be evolved and grown in a self-organised fashion, using the materials on hand. And yet these "system seeds" must be designed to serve their specialised function.
Naturally, the opposite approach must also be considered: a re-designed human animal, better suited to the hostile environments likely to be met in space and on other celestial bodies.
Rene Doursat has looked at biological design (PDF) in ways which might be applied to such a project, in the future.
Such concepts can be applied to biological bodies AND biological brains, as well. In fact, the functioning of the human brain itself is a good example of decentralisation, emergence, and self-organisation -- a prototypical complex network.
While human designs up to now have tended toward simple, straightforward architectures, humans are nonetheless capable of comprehending the nature of complex, evolved systems. The idea of incorporating the properties of complex systems into human designs is not so far-fetched -- perhaps in part because of the way our brains work.
Understanding and harnessing complex systems for human purposes is a grand project, and one worth spending a good deal of time on in the future.
Showing posts with label complexity.
Saturday, December 24, 2011
Wednesday, December 21, 2011
Los Alamos Scientists Mimic Neuron Function to Help Computers to See More Clearly
The brain has an uncanny ability to detect and identify certain things, even if they’re barely visible. Now the challenge is to get computers to do the same thing. And programming the computer to process the information laterally, like the brain does, might be a step in the right direction.

The scientists want to create computer models of human vision that are capable of picking out complex objects from a cluttered visual field, and do it as well as humans -- except faster.
...“This model is biologically inspired and relies on leveraging lateral connections between neurons in the same layer of a model of the human visual system,” said Vadas Gintautas of Chatham University in Pittsburgh and formerly a researcher at Los Alamos.
Neuroscientists have characterized neurons in the primate visual cortex that appear to underlie object recognition, noted senior author Garrett Kenyon of Los Alamos. “These neurons, located in the inferotemporal cortex, can be strongly activated when particular objects are visible, regardless of how far away the objects are or how the objects are posed, a phenomenon referred to as viewpoint invariance.” _HPCwire
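The "lateral connections between neurons in the same layer" that Gintautas describes can be caricatured in a few lines: units lend support to their neighbours, so contiguous contour fragments reinforce one another while isolated clutter gains little. This is a toy sketch of the general idea, assuming invented weights and sizes -- not the Los Alamos model.

```python
import numpy as np

x = np.zeros(20)
x[5:10] = 1.0        # five contiguous "contour" fragments
x[0] = x[15] = 1.0   # two isolated "clutter" fragments

a = x.copy()
for _ in range(5):
    lateral = 0.2 * (np.roll(a, 1) + np.roll(a, -1))  # same-layer support
    a = np.clip(x + lateral, 0.0, 2.0)                # drive + support, bounded

# Interior contour units receive support from both neighbours on every
# iteration; isolated clutter units receive almost none.
print(a[7] > 1.5, a[0] < 1.2)  # → True True
```

After a handful of iterations, the contour "pops out" of the activity profile even though feedforward drive was identical for contour and clutter units.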
To quantify the temporal dynamics underlying visual processing, we performed speed-of-sight psychophysical experiments that required subjects to detect closed contours (amoebas) spanning a range of shapes, sizes and positions, whose smoothness could be adjusted parametrically by varying the number of radial frequencies (with randomly chosen amplitudes). To better approximate natural viewing conditions, in which target objects usually appear against noisy backgrounds and both foreground and background objects consist of similar low-level visual features, our amoeba/no-amoeba task required amoeba targets to be distinguished from locally indistinguishable open contour fragments (clutter). For amoeba targets consisting of only a few radial frequencies, human subjects were able to perform at close to [perfect] accuracy after seeing target/distractor image pairs for less than 200 ms, consistent with a number of studies showing that the recognition of unambiguous targets typically requires 150-250 ms to reach asymptotic performance [22], [23], [35], here likely aided by the high intrinsic saliency of closed shapes relative to open shapes [7]. Because mean inter-saccade intervals are also in the range of 250 ms [34], speed-of-sight studies indicate that unambiguous targets in most natural images can be recognized in a single glance. Similarly, we found that closed contours of low to moderate complexity readily “pop out” against background clutter, implying that such radial frequency patterns are processed in parallel, presumably by intrinsic cortical circuitry optimized for automatically extracting smooth, closed contours. As saccadic eye movements were unlikely to play a significant role for such brief presentations, it is unclear to what extent attentional mechanisms are relevant to the speed-of-sight amoeba/no-amoeba task.

The PLoS link above allows access to the entire study.
Our results further indicate that subjects perform no better than chance at SOAs shorter than approximately 20 ms. _PLoS
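The radial-frequency "amoeba" stimuli the study describes are easy to sketch: a closed contour whose radius is modulated by a few radial frequencies with randomly chosen amplitudes. The parameter values below are illustrative guesses, not the study's actual settings.

```python
import math
import random

random.seed(1)

def amoeba(n_freqs=3, n_points=360, base_radius=1.0, max_amp=0.2):
    """Return the (x, y) points of one random radial-frequency contour."""
    amps = [random.uniform(-max_amp, max_amp) for _ in range(n_freqs)]
    phases = [random.uniform(0.0, 2.0 * math.pi) for _ in range(n_freqs)]
    points = []
    for i in range(n_points):
        theta = 2.0 * math.pi * i / n_points
        r = base_radius + sum(
            amp * math.cos((k + 1) * theta + phase)
            for k, (amp, phase) in enumerate(zip(amps, phases))
        )
        points.append((r * math.cos(theta), r * math.sin(theta)))
    return points

contour = amoeba()
# Fewer radial frequencies give smoother ("easier") amoebas; more give
# lumpier ones -- the smoothness knob the experimenters varied.
print(len(contour))  # → 360
```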
These findings provide additional insight into the unconscious nature of neural processing, touched on in the previous posting here.
Researchers are attempting to take these profound insights and use them to devise computer models which simulate various unconscious brain functions. It will not be an easy task, but by approaching the problem system by system, limited success is quite possible within a reasonable time frame.
If computers ever learn to "see" and distinguish objects within complex and dynamically changing fields, as well or better than humans, there will be a number of profitable applications waiting.
Friday, October 21, 2011
Is the Singularity Near? Kurzweil vs. Allen
When will humanity reach the Singularity, that now-famous point in time when artificial intelligence becomes greater than human intelligence? It is aptly called the Singularity by proponents like Ray Kurzweil: like the singularity at the center of a black hole, we have no idea what happens once we reach it. However, the debate today is not what happens after the Singularity, but when it will happen. _BigThink

In the video below, Kurzweil discusses some of his ideas about the coming singularity, including timelines and cautionary notes.
Microsoft co-founder and billionaire Paul Allen recently expressed skepticism about Kurzweil's timeline for the singularity, in a Technology Review article.
Futurists like Vernor Vinge and Ray Kurzweil have argued that the world is rapidly approaching a tipping point, where the accelerating pace of smarter and smarter machines will soon outrun all human capabilities. They call this tipping point the singularity, because they believe it is impossible to predict how the human future might unfold after this point. Once these machines exist, Kurzweil and Vinge claim, they'll possess a superhuman intelligence that is so incomprehensible to us that we cannot even rationally guess how our life experiences would be altered. Vinge asks us to ponder the role of humans in a world where machines are as much smarter than us as we are smarter than our pet dogs and cats. Kurzweil, who is a bit more optimistic, envisions a future in which developments in medical nanotechnology will allow us to download a copy of our individual brains into these superhuman machines, leave our bodies behind, and, in a sense, live forever. It's heady stuff.

Allen goes on to discuss the "complexity brake": the braking effect that the limitations of the human brain (and of human understanding of the human brain) will apply to any endeavour whose complexity accelerates too quickly.
While we suppose this kind of singularity might one day occur, we don't think it is near.
...Kurzweil's reasoning rests on the Law of Accelerating Returns and its siblings, but these are not physical laws. They are assertions about how past rates of scientific and technical progress can predict the future rate. Therefore, like other attempts to forecast the future from the past, these "laws" will work until they don't. More problematically for the singularity, these kinds of extrapolations derive much of their overall exponential shape from supposing that there will be a constant supply of increasingly more powerful computing capabilities. For the Law to apply and the singularity to occur circa 2045, the advances in capability have to occur not only in a computer's hardware technologies (memory, processing power, bus speed, etc.) but also in the software we create to run on these more capable computers. To achieve the singularity, it isn't enough to just run today's software faster. We would also need to build smarter and more capable software programs. Creating this kind of advanced software requires a prior scientific understanding of the foundations of human cognition, and we are just scraping the surface of this. _Paul Allen in Technology Review
Allen's argument is remarkably similar to arguments previously put forward by Al Fin neuroscientists and cognitivists. The actual way the human brain works is something that is very poorly understood -- even by the best neuroscientists and cognitivists. If that is true, the understanding of the brain by artificial intelligence researchers tends to be orders of magnitude poorer. If these are the people who are supposed to come up with the super-human intelligence and "uploading of human brains" technology that posthuman wannabes are counting on, good luck!
But now, Ray Kurzweil has chosen the same forum to respond to Paul Allen's objections:
Allen writes that "the Law of Accelerating Returns (LOAR). . . is not a physical law." I would point out that most scientific laws are not physical laws, but result from the emergent properties of a large number of events at a finer level. A classical example is the laws of thermodynamics (LOT). If you look at the mathematics underlying the LOT, they model each particle as following a random walk. So by definition, we cannot predict where any particular particle will be at any future time. Yet the overall properties of the gas are highly predictable to a high degree of precision according to the laws of thermodynamics. So it is with the law of accelerating returns. Each technology project and contributor is unpredictable, yet the overall trajectory as quantified by basic measures of price-performance and capacity nonetheless follow remarkably predictable paths.

Kurzweil's attitude seems to be: "Because difficult problems have arisen and been solved in the past, we can expect that all difficult problems that arise in the future will also be solved." Perhaps I am being unfair to Kurzweil here, but his reasoning appears to be fallacious in a rather facile way.
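Kurzweil's thermodynamics analogy is at least easy to demonstrate numerically: each particle's random walk is unpredictable, yet the ensemble's statistics are tightly predictable. The sketch below is a generic statistical illustration, not a model of technology trends themselves.

```python
import random

random.seed(0)
N, T = 10_000, 100   # particles, steps per particle

finals = []
for _ in range(N):
    pos = 0
    for _ in range(T):
        pos += random.choice((-1, 1))   # one unpredictable step
    finals.append(pos)

mean = sum(finals) / N                           # theory: 0
second_moment = sum(p * p for p in finals) / N   # theory: T = 100

# No individual final position was predictable, but the aggregates are.
print(round(mean, 2), round(second_moment))
```

Whether price-performance curves aggregate as lawfully as gas molecules is, of course, exactly what Allen disputes.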
...Allen writes that "these 'laws' work until they don't." Here, Allen is confusing paradigms with the ongoing trajectory of a basic area of information technology. If we were examining the trend of creating ever-smaller vacuum tubes, the paradigm for improving computation in the 1950s, it's true that this specific trend continued until it didn't. But as the end of this particular paradigm became clear, research pressure grew for the next paradigm.
...Allen's statement that every structure and neural circuit is unique is simply impossible. That would mean that the design of the brain would require hundreds of trillions of bytes of information. Yet the design of the brain (like the rest of the body) is contained in the genome. And while the translation of the genome into a brain is not straightforward, the brain cannot have more design information than the genome. Note that epigenetic information (such as the peptides controlling gene expression) do not appreciably add to the amount of information in the genome. Experience and learning do add significantly to the amount of information, but the same can be said of AI systems.
...How do we get on the order of 100 trillion connections in the brain from only tens of millions of bytes of design information? Obviously, the answer is through redundancy. There are on the order of a billion pattern-recognition mechanisms in the cortex. They are interconnected in intricate ways, but even in the connections there is massive redundancy. The cerebellum also has billions of repeated patterns of neurons.
...Allen mischaracterizes my proposal to learn about the brain from scanning the brain to understand its fine structure. It is not my proposal to simulate an entire brain "bottom up" without understanding the information processing functions. We do need to understand in detail how individual types of neurons work, and then gather information about how functional modules are connected. The functional methods that are derived from this type of analysis can then guide the development of intelligent systems. Basically, we are looking for biologically inspired methods that can accelerate work in AI, much of which has progressed without significant insight as to how the brain performs similar functions. _Ray Kurzweil in Technology Review
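Kurzweil's redundancy argument -- a compact design specifying an enormous repeated structure -- can be caricatured in code. The "micro-circuit" and copy count below are invented for illustration; they are not actual neuroanatomy.

```python
module = [(0, 1), (1, 2), (2, 0)]   # one 3-unit micro-circuit: the "design"
copies = 1_000_000                  # massive redundancy

def connections():
    """Lazily generate every edge of the full, expanded 'connectome'."""
    for k in range(copies):
        base = 3 * k
        for (i, j) in module:
            yield (base + i, base + j)

total_edges = copies * len(module)
design_bytes = len(repr(module))    # rough size of the specification itself

first_edge = next(connections())
print(total_edges, design_bytes, first_edge)  # 3,000,000 edges from ~24 bytes
```

The catch, as Allen would note, is that real cortical modules are nowhere near this uniform.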
Al Fin neuroscientists and cognitivists warn Kurzweil and other singularity enthusiasts not to confuse the cerebellum with the cerebrum, in terms of complexity. They further warn Kurzweil not to assume that a machine intelligence researcher can simply program a machine to emulate neurons and neuronal networks to a certain level of fidelity, and then vastly expand that model to the point that it achieves human-level intelligence. That is a dead-end trap, which will end up wasting many billions of dollars of research funds in North America, Europe, and elsewhere.
This debate has barely entered its opening phase. Paul Allen is ahead in terms of a realistic appraisal of the difficulties ahead. Ray Kurzweil scores points based upon his endless optimism and his proven record of skillful reductionistic analyses and solutions of previous problems.
Simply put, the singularity is not nearly as near as Mr. Kurzweil predicts. But the problem should not be considered impossible. Clearly, we will need a much smarter breed of human before we can see our way clear to the singularity. As smart as Mr. Kurzweil is, and as rich as Mr. Allen is, we are going to need something more from the humans who eventually birth the singularity.
Friday, July 06, 2007
Mind the Enchantment

Because our minds are "self-organised", they are subject to falling into distinctly different states at particular "bifurcations."

But the deeper you dive into the mechanisms of consciousness, the larger the number of possible mind states, so that bistability becomes tristability and so on. The single topic of synaptic plasticity alone quickly acquires enough complexity to confound most scientists.
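Bistability of this kind has a standard toy model in dynamical systems: dx/dt = x - x³, which relaxes to one of two stable states (+1 or -1) depending only on which side of the unstable point x = 0 it starts from. The sketch below is that generic model, not a model of any actual neural mechanism.

```python
def settle(x0, dt=0.01, steps=2000):
    """Integrate dx/dt = x - x**3 forward with simple Euler steps."""
    x = x0
    for _ in range(steps):
        x += dt * (x - x**3)
    return x

# Two nearby starting points on opposite sides of the bifurcation point
# fall into entirely different stable states.
print(round(settle(0.1), 3), round(settle(-0.1), 3))  # → 1.0 -1.0
```

A small nudge near x = 0 decides everything -- a crude analogue of how suggestion might "adjust the weighting" between competing states of mind.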
Hypnosis takes advantage of the inherent ambiguity of consciousness, and "adjusts the weighting" of various competing states of mind. Since mind is inherently a self-organizing, ongoing trance-like process, it is often likened to "riding the wave," or staying on the "bucking bronco." From the moment of waking to the release of sleep, that "blinking cursor" of consciousness compels us to provide answers and solutions, even to unknown or nonexistent problems.
For anyone who is curious about some of the underlying neurophilosophy of consciousness, I suggest looking over this article by Edelman and Tononi -- two prolific and respected students of consciousness. Or look over this overview of Models of Consciousness from Scholarpedia.
Understanding human consciousness is difficult enough. But a lot of people wish to create intelligence in machines. This dream goes back hundreds, if not thousands, of years. But since the dawn of the computer age in the 1940s, multiple generations of ingenious scientists of mind and computation have dashed their skulls against the wall of computational complexity (not to mention a lack of understanding of the complexity of human cognition and intentionality).

Each person experiences a consciousness of an enchanted mind. Not a mind of equations and computations. Rather a mind of metaphor and narrative. An entranced mind where real world expediencies intrude on waking dreams. Complex trances of strange attractors and slippery bistable conscious surfaces.
There would be no point in trying to emulate all of that in a machine. Not unless that is the only way we can find to create a conscious machine. Perhaps it is better to settle for machines that only seem conscious or intelligent, as viewed by a simple Turing test. After all, we are only looking for help in making better decisions and devising a better world for smarter, healthier, longer-lived people.
We may be entranced, but why burden our machines with all of that? It is our trance that we wish to enjoy far into the future, not the trance of a machine.
Originally posted at Al Fin