Tuesday, December 27, 2011

The Curse of Bad Governance Hangs Over Earth Like a Shroud

Australia's Finance News Network recently interviewed Jim Rogers by teleconference from his home in Singapore. Rogers' insights into the sad underpinnings of the ongoing global economic crisis are useful for anyone making short-, medium-, and long-term plans.

First Rogers explains his pessimistic outlook for the US and Europe, then he moves to Asia:

Lelde Smits: You say Asia will suffer, do you see signs this suffering has already begun?

Jim Rogers: Oh of course it has. If you look at China, China’s already got problems; they have inflation. They’re doing their best to calm things down in China, wisely in my view, and then China’s been hit of course with the slowdown in America and Europe. So no, everybody in Asia is slowing down.

Lelde Smits: And what about Chinese property prices. The latest government data from 70 Chinese cities shows the average price of Chinese property has fallen for the second straight month. Do you believe Chinese property is in a bubble and are we witnessing the bust?

Jim Rogers: Some parts of the Chinese real estate market have been in a bubble. Certainly urban coastal real estate in China was in a bubble - that bubble has popped. How far down it has to go, I don’t have a clue. Probably a lot further because usually when bubbles pop, a lot of people get hurt. But it’s not like in the US where in the US, people are buying four or five houses with no job, no down-payment. You know, and then the banks were taking the mortgages and jigging them up even more. China didn’t have that kind of problem. You’re going to see real estate developers go bankrupt in China - no question about that. But it’s not going to be the end of the Chinese economy as it was in the US, the UK, Spain and a few other places.

Lelde Smits: OK Jim, so we’ve spoken about US and European debt, and a potential Chinese slowdown. How do you see global markets reacting in 2012?

Jim Rogers: Well I’m not optimistic for the most part about stock markets. I don’t own many stocks anywhere in the world. The only offset, or caveat, for me is the fact that there is an election in the US, in Spain – sorry, in France – and a few other places. So whenever there are elections coming, governments spend, spend, spend; they throw money out the window to buy votes. So some people are going to be much better off in 2012. Is it enough to offset the world’s problems? I don’t think so, except in some sectors which will benefit.

So for myself, I’m short stocks around the world, I’m short American technology stocks, I’m short emerging market stocks and I’m short European stocks.

Lelde Smits: And what else are you buying Jim and why?

Jim Rogers: Well I own commodities, because if the world economy gets better Lelde, then commodities will do well because of the shortages. The lucky countries will continue to be lucky for a while. I own some currencies, which I mentioned, I’m worried about currency markets. I’m short stocks. I mentioned the things I’m short. So I anticipate problems in stock markets. If the world economy doesn’t get better, you’re not going to make money in stocks. But then central banks will print more money and when they print money Lelde, the thing to do is to own real assets. _FNN

Rogers can afford to live almost anywhere on the planet, and he has chosen Singapore. He states that his reason for choosing that location is so that his children can learn to speak fluent Mandarin Chinese, away from a polluted environment such as one might find in Beijing or Shanghai.

I suspect that Rogers is not being entirely truthful here, since he does not want to earn the enmity of the CCP government in China. Nevertheless, a person tends to vote with his feet and with his bank balances, and Rogers is likely to be particularly honest in those cases.

The nations which comprise modern western civilisation continue to innovate in science and technology. This innovation depends upon the underlying financial productivity of those nations.

In other words, the scientific and technological promise of the future must contend with the political realities of today in order to be realised. Keep that in mind when making your plans. And always watch what the "smart money" is doing.

All governments are corrupt, all ideologies are false, and everything you think you know just ain't so. Other than that, the way ahead is relatively straightforward.

Saturday, December 24, 2011

Novel Approach to Architecture Without an Architect: How to Seed the Universe in Preparation for the Human Epoch

The initial key to a successful out-migration of humans into the solar system and the larger universe is to develop the ability to prepare the ground ahead of human colonists. When human settlers arrive on Luna, Mars, Titan, and other potential colonies, they should find working habitats and vital infrastructure waiting for them.

Here are a few basic ideas on complex systems and autopoietic self-organisation which will likely form the foundation for more advanced techniques of seeding complex systems in hostile environments. The slides come from Rene Doursat, who lectures at Ecole Polytechnique and specialises in complex systems and morphogenetic engineering.
Complex systems and networks possess some interesting properties which distinguish them from most human-made systems and networks. It is the goal of morphogenetic engineers such as Rene Doursat to learn to incorporate many of the more desirable properties of complex systems into systems which could be co-created by humans.
Complex systems can arise on multiple levels, from subatomic particles all the way up to the universe itself. These systems tend to be decentralised, emerging from a more disordered substrate by means of self-organising principles.
The individual agents vary between complex systems, depending upon the level on which they operate. Low-level complex systems can themselves act as agents of higher-level complex systems.
Complex systems can display an intricate and highly functional architecture -- without having had the "benefit" of an intelligent architect. This self-organising architecture acts as an inspiration for possible architectures of future complex systems which might be evolved and co-created for use by humans.
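The emergence of global architecture from purely local rules can be illustrated with a toy example. The sketch below -- a one-dimensional cellular automaton (rule 90), not taken from Doursat's work -- shows how a single "seed" in a uniform substrate, updated by a rule that consults only each cell's two neighbours, self-organises into an intricate global pattern with no central architect:

```python
# Elementary cellular automaton (rule 90): each cell updates from purely
# local information, yet a global Sierpinski-triangle pattern emerges.
def step(cells):
    n = len(cells)
    # Each cell's next state is the XOR of its two neighbours.
    return [cells[(i - 1) % n] ^ cells[(i + 1) % n] for i in range(n)]

def run(width=63, generations=16):
    row = [0] * width
    row[width // 2] = 1          # a single seed in an otherwise blank substrate
    history = [row]
    for _ in range(generations):
        row = step(row)
        history.append(row)
    return history

if __name__ == "__main__":
    for row in run():
        print("".join("#" if c else "." for c in row))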

Such novel complex systems could allow humans to survive and prosper in locations which would ordinarily not support complex animals that evolved on Earth, a location far better suited to their needs. In other words, the "seeds" of human-enabling complex systems could be delivered to hostile -- but strategically important -- locations, where they would proceed to build human-friendly micro-environments, habitats, and infrastructures.
Such intricate yet robust systems would need to be evolved and grown in a self-organised fashion, using the materials on hand. And yet these "system seeds" must be designed to serve their specialised function.

Naturally, there is the opposite approach which must be considered: A re-designed human animal more suited for the hostile environments likely to be met in space and on other space bodies.

Rene Doursat has looked at biological design (PDF) in ways which might be applied to such a project, in the future.

Such concepts can be applied to biological bodies AND biological brains. In fact, the functioning of the human brain itself is a good example of decentralisation, emergence, and self-organisation -- a prototypical complex network.

So while human designs up until now have tended to possess rather simple and straightforward architectures, at the same time humans are capable of comprehending the nature of complex, evolved systems. The idea of incorporating the properties of complex systems into human designs is not so far-fetched, perhaps in part because of the way our brains work.

The understanding and harnessing of complex systems to human purposes is a grand project. It is worth spending a good deal of time on, in the future.

Wednesday, December 21, 2011

Los Alamos Scientists Mimic Neuron Function to Help Computers to See More Clearly

The brain has an uncanny ability to detect and identify certain things, even if they’re barely visible. Now the challenge is to get computers to do the same thing. And programming the computer to process the information laterally, like the brain does, might be a step in the right direction.

...“This model is biologically inspired and relies on leveraging lateral connections between neurons in the same layer of a model of the human visual system,” said Vadas Gintautas of Chatham University in Pittsburgh and formerly a researcher at Los Alamos.

Neuroscientists have characterized neurons in the primate visual cortex that appear to underlie object recognition, noted senior author Garrett Kenyon of Los Alamos. “These neurons, located in the inferotemporal cortex, can be strongly activated when particular objects are visible, regardless of how far away the objects are or how the objects are posed, a phenomenon referred to as viewpoint invariance.” _HPCwire
The scientists want to create computer models of human vision that are capable of picking out complex objects from a cluttered visual field, and of doing it as well as humans -- only faster.
To quantify the temporal dynamics underlying visual processing, we performed speed-of-sight psychophysical experiments that required subjects to detect closed contours (amoebas) spanning a range of shapes, sizes and positions, whose smoothness could be adjusted parametrically by varying the number of radial frequencies (with randomly chosen amplitudes). To better approximate natural viewing conditions, in which target objects usually appear against noisy backgrounds and both foreground and background objects consist of similar low-level visual features, our amoeba/no-amoeba task required amoeba targets to be distinguished from locally indistinguishable open contour fragments (clutter). For amoeba targets consisting of only a few radial frequencies, human subjects were able to perform at close to ceiling accuracy after seeing target/distractor image pairs for less than 200 ms, consistent with a number of studies showing that the recognition of unambiguous targets typically requires 150-250 ms to reach asymptotic performance [22], [23], [35], here likely aided by the high intrinsic saliency of closed shapes relative to open shapes [7]. Because mean inter-saccade intervals are also in the range of 250 ms [34], speed-of-sight studies indicate that unambiguous targets in most natural images can be recognized in a single glance. Similarly, we found that closed contours of low to moderate complexity readily “pop out” against background clutter, implying that such radial frequency patterns are processed in parallel, presumably by intrinsic cortical circuitry optimized for automatically extracting smooth, closed contours. As saccadic eye movements were unlikely to play a significant role for such brief presentations, it is unclear to what extent attentional mechanisms are relevant to the speed-of-sight amoeba/no-amoeba task.

Our results further indicate that subjects perform no better than chance at SOAs shorter than approximately 20 ms. _PLoS
The PLoS link above gives access to the entire study.
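The "amoeba" targets described in the study are closed contours built from a small number of radial frequencies. The sketch below is a hypothetical reconstruction of how such a stimulus can be parameterised -- the function name and parameter ranges are my own assumptions, not the authors' actual stimulus code:

```python
import math
import random

def amoeba_contour(num_freqs=4, num_points=360, seed=0):
    """Closed 'amoeba' contour: a circle whose radius is modulated by a
    small number of radial frequencies with random amplitudes and phases.
    Fewer frequencies -> smoother shape (the easy-to-detect targets)."""
    rng = random.Random(seed)
    amps = [rng.uniform(0.05, 0.25) for _ in range(num_freqs)]
    phases = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(num_freqs)]
    points = []
    for i in range(num_points):
        theta = 2.0 * math.pi * i / num_points
        # Base radius 1.0 plus the summed radial-frequency modulation.
        r = 1.0 + sum(a * math.cos(k * theta + p)
                      for k, (a, p) in enumerate(zip(amps, phases), start=1))
        points.append((r * math.cos(theta), r * math.sin(theta)))
    return points

# The contour is closed by construction: point 359 is adjacent to point 0.
contour = amoeba_contour()
```

Open "clutter" fragments for the no-amoeba condition could then be made by cutting such contours into locally indistinguishable arcs.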

These findings provide additional insight into the unconscious nature of neural processing, touched on in the previous posting here.

Researchers are attempting to take these insights and use them to devise computer models which simulate various unconscious brain functions. It will not be an easy task, but by approaching the problem system by system, limited success is quite possible within a reasonable time frame.

If computers ever learn to "see" and distinguish objects within complex and dynamically changing fields, as well or better than humans, there will be a number of profitable applications waiting.

Wednesday, December 14, 2011

The Ascendancy of the Unconscious Mind

Scientists are beginning to zero in on the different types of unconscious brain activity which underlie and inform ordinary conscious awareness. From mathematics to music to visual awareness to moods, our unconscious minds form the foundation and framework for whatever our conscious minds choose to build.

We have looked at Daniel Kahneman's theories of the fast mind (intuitive, automatic, unconscious) and the slow mind (conscious, deliberative) and discovered that although we cannot live without our fast intuitive minds, we cannot altogether trust them either. Regardless, since we are stuck with this mode of cognition, we had best set about understanding it as well as we can.
Today the domain of the unconscious—described more generally in the realm of cognitive neuroscience as any processing that does not give rise to conscious awareness—is routinely studied in hundreds of laboratories using objective psychophysical techniques amenable to statistical analysis. Let me tell you about two experiments that reveal some of the capabilities of the unconscious mind. Both depend on “masking,” as it is called in the jargon, or hiding things from view. Subjects look but don’t see.

Unconscious Arithmetic
The first experiment is a collaboration among Filip Van Opstal of Ghent University in Belgium, Floris P. de Lange of Radboud University Nijmegen in the Netherlands and Stanislas Dehaene of the Collège de France in Paris. Dehaene, director of the INSERM-CEA Cognitive Neuroimaging Unit, is best known for his investigations of the brain mechanisms underlying counting and numbers. Here he explored the extent to which a simple sum or an average can be computed outside the pale of consciousness. Adding 7, 3, 5 and 8 is widely assumed to be a quintessential serial process that requires consciousness. Van Opstal and his colleagues proved the opposite in an indirect but clever and powerful way...

...What’s Wrong with this Picture?
Liad Mudrik and Dominique Lamy of Tel Aviv University and Assaf Breska and Leon Y. Deouell of the Hebrew University in Jerusalem set out to test the extent to which the unconscious can integrate all the information in any one picture into a unified and coherent visual experience. Giulio Tononi and I had proposed in the last Consciousness Redux column [September/October 2011] that the ability to rapidly integrate all the disparate elements within a scene and place them into context is one of the hallmarks of consciousness.

The Israeli researchers used “continuous flash suppression,” a powerful masking technique, to render images invisible. A series of rapidly changing, randomly colored patterns was flashed into one eye while a photograph of a person carrying out some task was slowly faded into the other eye. For a few seconds, the picture is completely invisible, and the subject can see only the colored shapes. Because the images become progressively stronger, eventually they will break through, and the subject will see them. It is like Harry Potter’s cloak of invisibility fading with time and revealing what is underneath... _More at SciAm
The above short SciAm description of the two lines of research should serve as a teaser for those interested in how the brain works. Here is more information from the researchers themselves:

Van Opstal et al Rapid Parallel Semantic Processing of Numbers Without Awareness (Abs)

Full PDF article Van Opstal, Dehaene, De Lange

Integration Without Awareness: Expanding the Limits of Unconscious Processing (PDF) Mudrik, Lamy, et al

More publications from Lamy's lab

George Alvarez: Representing Multiple Objects as an Ensemble Enhances Visual Cognition (PDF) How unconscious ensemble coding helps us effortlessly keep track of multiple objects.

It is only by understanding both our strengths and our weaknesses that we can plot our path through life's challenges and obstacles. In many ways, our intuitive and automatic unconscious minds are both a strength and a weakness.

Certainly if we do not put in the effort to understand our own minds, eventually someone else will. And they are not as likely to have our best interests at heart.

Friday, December 09, 2011

Accelerating Convergence of Hand-Held Technologies


We are accustomed to using our hand-held devices for media play, reading, web search and email, telephoning, as still and video cameras, for playing games, etc. But hand-held electronic devices can also provide a range of more serious functions. The oscilloscope device pictured above is a serious instrument useful for both hobbyists and researchers. Hand-held iOS devices can also be used as scientific microscopes, EEG imaging devices, neurofeedback devices, and for a wide range of other instrumentation uses.
Let me start out by stating that this doesn’t actually compare to the high-end models as far as sampling and bandwidth. You won’t use the iMSO-104 for extremely high-speed, GHz-frequency signal applications. Honestly, for home maker use, I don’t see this being an issue for a long time. Oscium provided a scope for my review and before it even arrived I thought of my list of features to look for and try out.

So what were some of the things that I was looking for in using the scope? First, being based on an iOS device, I was looking for a simple and navigable interface. Check. The scope plots zoom just as you would expect with the common iOS finger pinches and spreads. The traces are easy to drag up and down as you would expect, as are the measurement cursors.

Measurement cursors! That was another item! In the video linked above, Collin shows how you can make measurements on an old CRT scope using the time per division and volts per division selection and visual cues. On some of the digital scopes I have used, you could bring up a cursor that would give you the time or voltage between the points. This scope includes those cursors and if you know what I am talking about it is just as easy to use and intuitive as you think. If you don’t: trust me, it is very intuitive.

The Oscium scope can also do all of those things you would expect from any scope: triggering, running measurements, the ability to freeze the display, screen shot, data capture, e-mail, and configuration saving. The unit supports a single analog probe and four digital probes, all included in the kit. You can run all five inputs at one time or select any combination to show. _Wired
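The cursor measurements described in the review reduce to simple arithmetic over two cursor positions. The following is an illustrative sketch of that readout -- the function and values are hypothetical, not Oscium's code:

```python
def cursor_measurements(t1, t2, v1, v2):
    """Given two time cursors (seconds) and two voltage cursors (volts),
    return the deltas and the frequency implied by the time interval --
    the same readout a digital scope's measurement cursors provide."""
    dt = abs(t2 - t1)
    dv = abs(v2 - v1)
    freq = 1.0 / dt if dt > 0 else float("inf")
    return {"delta_t": dt, "delta_v": dv, "frequency_hz": freq}

# Time cursors bracketing one period of a 1 kHz signal,
# voltage cursors on its negative and positive peaks:
m = cursor_measurements(t1=0.0, t2=1e-3, v1=-2.5, v2=2.5)
```

On an old CRT scope you would do the same arithmetic in your head from the divisions; the cursors simply automate it.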
The imagination is the limiting factor, as with most human activities. What is the app for that?

Thursday, December 08, 2011

Daniel Kahneman's Hard-Won Insights on Thinking

Daniel Kahneman is a retired psychologist in his late 70s, who as a boy was forced to hide out in a chicken coop to survive Hitler, and who grew up to win the Nobel Prize in economics. His latest book, "Thinking, Fast and Slow," was meant as something of a parting shot to the world, from a man who has experienced the best and the worst of it, and through it all has come to understand his species better than most. And still, understanding himself will always be beyond him.

This is not a book review -- there are dozens available on the web that focus on various aspects of Kahneman's book. It is not a summary of the book -- Google Books' preview of Thinking, Fast and Slow is your best bet for that. This is a short look at the key idea that apparently summarises Kahneman's thinking and life's work, and what that idea means for modern, between-level humans and their quest to find themselves, find the truth, and have it all.

Regular readers of Al Fin will have seen Kahneman's basic idea in various forms: The human brain does most of its work unconsciously and automatically. It requires work to maintain the focus and attention of the conscious mind to learn something new, but once it is learned it tends to revert to the automaticity of the subconscious. In other words, we are capable of conscious, deliberate, and painstakingly logical thought. But usually we flow with the river of unconsciousness, automatically.

Kahneman's claim to fame -- the work that helped win him his Nobel prize -- is exposing how error-prone our intuitive and automatic thinking is, and why. He exposed this perilous "false intuition" in a number of different areas of "expertise." The reason a psychologist won a Nobel Prize in economics, for example, is that Kahneman and Tversky revealed the "illusion of expertise and certainty" utilised by experts when making economic and investing decisions. But the delusion is operative in every area of human action.

The reason human intuition can go so badly wrong despite the strong certainty felt by the individual is that intuition draws on a faulty source of information, according to Kahneman. Intuition relies upon "whatever comes to mind most easily," or whatever is salient in memory. For consumers of the mass media, the most salient thing in memory is whatever the masters of the media decide to put on the menu. For card-carrying members of one devout orthodoxy or another, the salient thing is whatever they are soaking their minds in. Singers in echo choirs are particularly prone to the tyranny of the salient idea, which is reinforced with every chorus that is sung.

You may be thinking that Kahneman is warning against self-delusion, telling you to be careful not to fall into thinking traps or bogs of false belief. No. He is telling you that you cannot help but fall into the traps, bogs, and quagmires -- it is in your nature, and mine. And he does a better job of explaining why that is the case than most anyone else could do, in everyday language.

If you absorb the basic idea, and carry it far enough forward, it will be easy to understand why all ideology is false. It will also be easy to understand why you can never find yourself -- and what the very idea of finding yourself has in common with catching your own shadow when the light is constantly changing directions and intensity.

Previously published at Al Fin blog

Wednesday, December 07, 2011

When Will We Develop a Human Superbrain?

Pharmacological enhancers of cognition promise a bright new future for humankind: more focus, more willpower, and better memory, with applications ranging from education to military combat. Underlying such promises is a linear, more-is-better vision of cognition that makes intuitive sense. This vision is at odds, however, with our understanding of cognition’s evolutionary origins. The mind has evolved under various constraints and consequently represents a delicate balance among these constraints. Evidence of the trade-offs that have shaped cognition include (a) inverted U-shaped performance curves commonly found in response to pharmacological interventions and (b) unintended side effects of enhancement on other traits. Taking an evolutionary perspective, we frame the above two sets of findings in terms of within-task (exemplified by optimal-control problems) and between-task (associated with a gain/loss asymmetry) trade-offs, respectively. With this framework, psychological science can provide much-needed guidance to enhancement development, a field that still lacks a theoretical foundation. _Thomas Hills
The above is the abstract from a recent paper published in Current Directions in Psychological Science, a journal of the Association for Psychological Science, titled: Why Aren’t We Smarter Already: Evolutionary Trade-Offs and Cognitive Enhancements. The authors suggest that we are not likely to develop enhanced intelligence for humans anytime soon, for a variety of reasons. More:
Just as there are evolutionary tradeoffs for physical traits, Hills says, there are tradeoffs for intelligence. A baby’s brain size is thought to be limited by, among other things, the size of the mother’s pelvis; bigger brains could mean more deaths in childbirth, and the pelvis can’t change substantially without changing the way we stand and walk.

Drugs like Ritalin and amphetamines help people pay better attention. But they often only help people with lower baseline abilities; people who don’t have trouble paying attention in the first place can actually perform worse when they take attention-enhancing drugs. That suggests there is some kind of upper limit to how much people can or should pay attention. “This makes sense if you think about a focused task like driving,” Hills says, “where you have to pay attention, but to the right things—which may be changing all the time. If your attention is focused on a shiny billboard or changing the channel on the radio, you’re going to have problems.”

It may seem like a good thing to have a better memory, but people with excessively vivid memories have a difficult life. “Memory is a double-edged sword,” Hills says. In post-traumatic stress disorder, for example, a person can’t stop remembering some awful episode. “If something bad happens, you want to be able to forget it, to move on.”

Even increasing general intelligence can cause problems. Hills and Hertwig cite a study of Ashkenazi Jews, who have an average IQ much higher than the general European population. This is apparently because of evolutionary selection for intelligence in the last 2,000 years. But, at the same time, Ashkenazi Jews have been plagued by inherited diseases like Tay-Sachs disease that affect the nervous system. It may be that the increase in brain power has caused an increase in disease.

Given all of these tradeoffs that emerge when you make people better at thinking, Hills says, it’s unlikely that there will ever be a supermind. “If you have a specific task that requires more memory or more speed or more accuracy or whatever, then you could potentially take an enhancer that increases your capacity for that task,” he says. “But it would be wrong to think that this is going to improve your abilities all across the board.” _MedXpress
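The inverted U-shaped performance curve described in the abstract and excerpt above can be made concrete with a toy model. Assuming a Gaussian-shaped curve -- the shape and parameters here are illustrative, not fitted to any data -- the same drug dose that helps a low-baseline subject hurts one who is already near the optimum:

```python
import math

def performance(arousal, optimum=0.5, width=0.2):
    """Toy inverted-U: performance peaks at an optimal level of
    stimulation and falls off on either side (Gaussian shape).
    'optimum' and 'width' are illustrative placeholders."""
    return math.exp(-((arousal - optimum) ** 2) / (2 * width ** 2))

# A fixed "dose" of 0.3 shifts a subject's arousal to the right.
low_baseline  = performance(0.2)        # under-stimulated subject
boosted_low   = performance(0.2 + 0.3)  # dose moves them onto the peak: helps
high_baseline = performance(0.5)        # subject already at the optimum
boosted_high  = performance(0.5 + 0.3)  # same dose overshoots the peak: hurts
```

This is the within-task trade-off in miniature: an enhancer only "enhances" relative to where you start on the curve.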
Very disappointing, if true. But is it possible that the authors overlooked something? After all, a few million years ago, chimpanzee psychologists and philosophers must have been thinking and saying much the same about the prospects for superior chimp brains, yes?

But in fact, a chimpanzee superbrain did develop, which we call the "human brain."
Despite the minute genetic differences between human brains and their primate relatives, Homo sapiens cognitive ability is significantly more advanced, enabling us to “make complicated tools, come up with complicated culture and colonize the world,” said lead author Mehmet Somel, a postdoc studying human evolutionary genomics at the University of California, Berkeley. Because humans spend more than a decade developing into adults and learning, far more than the two or three years of chimpanzee adolescence, researchers have long suspected that developmental genes are involved in human brain evolution. “And the idea that brain gene expression profiles might be different between species was proposed 40 years ago,” Somel added. _Scientist
We are just beginning to learn the genetic and epigenetic specifics which led to the divergence of the human brain from the brain of the common ape ancestor. Fascinating changes in the details of gene expression in the brain created a whole new level of cognitive functioning. There is no reason to doubt that similar genetic and epigenetic changes could lead to even newer and higher levels of cognition.

The human brain has borrowed various hacks and kludges from brain and nerve evolution all the way back down the evolutionary tree. Some of these hacks and kludges are potentially limiting in terms of other, concurrent hacks and kludges that might otherwise be utilised. But there are potential hacks and kludges which might replace the limiting hacks, and some of these potential hacks might very well allow an entire train of further, enhancing hacks to follow.

That is a possibility that most mainstream psychologists and philosophers fail to understand -- generally because they have adopted groupthink as their modus operandi. This is a common failure of academics from the inbred world of the university culture. Perhaps that is why so many of the world-changing visionaries and billionaires of our day have been high school and college dropouts. They escaped before their brains could be gelded.

There are a number of ways in which we might approach the human superbrain. Simple pharmacologic cognitive enhancers, such as stimulants, are not likely to provide the broad-spectrum enhancement we will need. But there are a number of prosthetic enhancements for the human brain which would give us quasi-superbrain status, over time. Certainly the things that humans can do when empowered by modern computing and telecommunications tools would astound most humans of past eras.

But what we really want are superbrains that continue working even if the power goes out or the batteries run down. For that, we will need genetic and epigenetic change. So how can we go about inducing these genetic changes without running into the problems that so many highly intelligent persons and breeding groups have run into?

That will be a topic of future articles.

Sunday, December 04, 2011

Understanding Network Threat to Industrial Infrastructures

SCADA (supervisory control and data acquisition) generally refers to industrial control systems (ICS): computer systems that monitor and control industrial, infrastructure, or facility-based processes, as described below:
Industrial processes include those of manufacturing, production, power generation, fabrication, and refining, and may run in continuous, batch, repetitive, or discrete modes.
Infrastructure processes may be public or private, and include water treatment and distribution, wastewater collection and treatment, oil and gas pipelines, electrical power transmission and distribution, wind farms, civil defense siren systems, and large communication systems.
Facility processes occur both in public facilities and private ones, including buildings, airports, ships, and space stations. They monitor and control HVAC, access, and energy consumption. _Wikipedia SCADA

SCADA YouTube Tutorial

SCADA systems control large sections of modern industrial infrastructures, and are often connected to the public internet. This makes them susceptible to hackers, some of whom would like nothing better than to hold a city or a corporation hostage via its water or power supply. Even worse, in the case of international hostilities, a foreign power may threaten an opponent's vital infrastructure as a routine part of negotiations over trade treaties, economic pacts, or cooperation agreements.
A SCADA sends instructions to shopfloor machines like pumps, valves, robot arms and motors. But such systems have moved from communicating over closed networks to a far cheaper conduit: the public internet. This can give hackers a way in. Eric Luiijf of TNO Defence and his colleagues found a litany of insecure "architectural errors" in the waterworks' SCADA networks (International Journal of Critical Infrastructure Protection, DOI: 10.1016/j.ijcip.2011.08.002).

Some firms did not separate their office and SCADA networks, allowing office hardware failures, virus infections and even high data traffic to potentially "bring down all SCADA operations". While remote internet access to SCADAs is supposed to be possible only with strict security controls, the researchers found this was often not the case. And some water firms allowed third party contract engineers to connect laptops to their SCADA network with no proof they were running up-to-date antivirus software.

This was compounded by news of the hack at the Texas water plant, where on 20 November a hacker named "prof" gained access to the plant's systems using a three-character default password on an internet-accessed SCADA made by Siemens of Germany. "No damage was done to any machinery; I don't really like mindless vandalism. It's stupid and silly. On the other hand, so is connecting your SCADA machinery to the internet," he wrote on the Pastebin website.

One of PRECYSE's main approaches to securing systems will be "whitelisting", a way of ensuring only authorised users obtain access. This is the opposite of the approach used by antivirus software. "Instead of hunting for malicious code, as in an antivirus blacklist, this only lets the known good guys connect," says security engineer Sakir Sezer at Queens University Belfast in the UK. Unusual behaviour - such as attempting to extract the control codes used to drive equipment - would also mean access is blocked. Deep-packet inspection, normally used to spot copyrighted material on the net, could be harnessed to ensure no attack code is injected.

But it won't be easy. "The biggest risk we face is that of denying the legitimate user access to their SCADA because something in the security setup has changed," says Sezer. "You don't want to create a denial of service attack against yourself."

The systems have other enemies, too. The Stuxnet worm, which attacked Siemens SCADAs in Iran's uranium-enrichment facility in Natanz, wrecked 400 machines. Duqu, a relative of Stuxnet spread in Word files, is currently probing SCADA networks seeking out control instructions.

The battle for the safety of our utilities has only just begun. _NewScientist
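The whitelisting approach described above amounts to default-deny access control: only known-good users and commands get through, and everything else is blocked. A minimal sketch, with hypothetical users and commands (this is not PRECYSE's implementation):

```python
# Minimal whitelisting gate for SCADA access: known-good users and
# commands are enumerated, and anything not on the lists is denied.
ALLOWED_USERS = {"operator1", "engineer2"}
ALLOWED_COMMANDS = {"read_sensor", "open_valve", "close_valve"}

def authorize(user, command):
    """Default-deny whitelist check: the opposite of antivirus
    blacklisting, which default-allows and hunts for known-bad code."""
    return user in ALLOWED_USERS and command in ALLOWED_COMMANDS

assert authorize("operator1", "read_sensor")
assert not authorize("prof", "read_sensor")              # unknown user: denied
assert not authorize("operator1", "extract_control_codes")  # unusual request: blocked
```

As Sezer notes, the operational risk runs the other way: if the whitelist drifts out of date, the system denies legitimate operators -- a self-inflicted denial of service.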
The stealthier a hacker can be when infiltrating an industrial or civic infrastructure, the more deniable the attack becomes.

Smart industries and cities will begin designing redundant backup systems. They will also look into economical and reliable ways of taking their vital infrastructures off the public networks. Unfortunately, we are entering the age of the Idiocracy, when political correctness, affirmative action, and a faux egalitarianism are all more important than a secure foundation for building a better future.

In other words, hope for the best, but prepare for the worst.