Every 30 years there is a new wave of things that computers do. Around 1950 they began to model events in the world (simulation), and around 1980 to connect people (communication). Since 2010 they have begun to engage with the physical world in a non-trivial way (embodiment – giving them bodies).
‒ Butler Lampson, Microsoft Research
Hermann von Helmholtz, a German physicist, was one of the first, in the 19th century, to try to describe the creative process as a succession of steps, which he named saturation, incubation, and illumination.
The French mathematician Jules Henri Poincaré added a fourth phase in 1908: verification.
A ‘cyber-physical system’ is another name for modern robots, autonomous vehicles, and industrial systems that are connected to the internet: in other words, anything that moves or automatically controls something.
Modern security relies on protecting the perimeter and keeping the enemy away from critical assets using firewalls and intrusion detection systems. An analogy from the physical world would be a medieval fortress, which has tall, thick walls, narrow gates, kill zones, and sufficient stores of provisions and water to survive a blockade. Additional inner rings of fortifications isolate the most vital systems and give defenders extra time and space to respond if the outer walls are breached.
The modern security process is based on Threat Analysis and Risk Assessment (TARA), in which security experts get together and work out models to understand who is most likely to attack the system, what skills and tools they have, and what damage they could cause. This process goes back thousands of years and is rooted in the way military engineers designed fortresses. Watch the video ‘Physical Security: An Ever Changing Mission’ from 1996, which is referenced at the bottom of this article.
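At its core, a TARA exercise reduces to estimating, for each threat, how feasible the attack is and how much damage it would cause, then ranking the results so defenses can be prioritized. A minimal sketch of that scoring idea in Python (the scales and the threat entries below are illustrative assumptions, not taken from any standard):

```python
# Minimal, illustrative TARA-style risk ranking.
# The 1-5 scales and the example threats are hypothetical, not a standard.

from dataclasses import dataclass

@dataclass
class Threat:
    name: str
    attack_feasibility: int  # 1 (very hard) .. 5 (trivial)
    impact: int              # 1 (negligible) .. 5 (catastrophic)

    @property
    def risk(self) -> int:
        # Classic likelihood-times-impact scoring.
        return self.attack_feasibility * self.impact

threats = [
    Threat("CAN bus message injection via OBD-II port", 4, 5),
    Threat("Firmware tampering during OTA update", 2, 5),
    Threat("GPS spoofing of the navigation unit", 3, 3),
]

# Rank the threats from highest to lowest risk.
for t in sorted(threats, key=lambda t: t.risk, reverse=True):
    print(f"{t.risk:2d}  {t.name}")
```

The point is not the arithmetic but the discipline: every threat gets an explicit, comparable score before anyone argues about countermeasures.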
The issue with this approach is that these robotic systems are already extremely complex and their perimeter is not well understood. The approach is also passive: it relies on detecting anomalous behavior and is defensive in nature.
From the history of human warfare, we know that no fortress was able to withstand attack indefinitely; some stood longer than others, but all eventually fell.
In modern digital cybersecurity we also build analogues of walls, and they do not prevent hackers from getting inside our systems, as is evident from the daily news. What one man builds, another man will find a way to break. Cryptography is a good example of that postulate: the first known coded messages date to around 1900 BCE in ancient Egypt, where non-standard hieroglyphs were used to protect communication from prying eyes; yet no matter how clever the new schemes were, all of them had some weakness that could be exploited.
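To make that postulate concrete, consider the simplest classical cipher: a Caesar shift has only 26 possible keys, so its exploitable weakness is plain exhaustive search. A small sketch:

```python
# Breaking a Caesar cipher by brute force: the keyspace is only 26,
# so the 'weakness that could be exploited' is exhaustive search.

def caesar(text: str, shift: int) -> str:
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)
    return "".join(out)

ciphertext = caesar("attack at dawn", 7)  # "haahjr ha khdu"

# Try every possible key; pick the one that yields a known crib word.
for key in range(26):
    candidate = caesar(ciphertext, -key)
    if "attack" in candidate:
        print(key, candidate)  # prints: 7 attack at dawn
```

Stronger schemes fell to subtler flaws (frequency statistics, key reuse, implementation leaks), but the pattern is the same: the scheme's structure is the attacker's foothold.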
Presently we have to maintain an army of people to enforce Secure Development Lifecycles and to do feature and security validation. It may often take three times as long to test code as to write it, and even the most thorough testing does not guarantee that the software will have no bugs in production or that no intentional backdoors were planted. In effect, it is safe to assume that the probability of software failure equals one; that is, the software will fail 100% of the time. We have to assume that the software we write cannot be trusted.
In neuroscience there is a concept of homunculus, a neurological “map” of the anatomical divisions of the body.
If you apply a similar concept to the modern software lifecycle, you turn the following beautiful, logical V-model into an ugly process weighted towards validation and testing, activities that do not contribute to the development of features and functionality and still do not guarantee the security or safety of the system.
The main goal of security is to prevent unauthorized or unintended access, change, or destruction. In other words, we want a robotic system that does not allow changes to itself that were not approved by an authority, or that could damage it. Is there a solution for this today? Digital rights management comes closest, and we know that it does not work. We have ourselves a problem, then.
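One way to approximate "changes approved by an authority" is to require every change to carry an authorization tag that the system verifies before applying it. A minimal sketch using an HMAC (the shared key and message format here are illustrative assumptions; a real system would use asymmetric signatures and secure key storage):

```python
# Sketch: a component that refuses any configuration change that does not
# carry a valid authorization tag from a trusted authority.
# The key and the message format are illustrative assumptions only.

import hashlib
import hmac

AUTHORITY_KEY = b"key-provisioned-at-manufacture"  # hypothetical shared key

def authorize(change: bytes, key: bytes = AUTHORITY_KEY) -> bytes:
    # Performed by the authority when it approves a change.
    return hmac.new(key, change, hashlib.sha256).digest()

def apply_change(change: bytes, tag: bytes) -> bool:
    # Performed by the robot: constant-time verification before acceptance.
    if not hmac.compare_digest(tag, authorize(change)):
        return False  # unauthorized or tampered change is rejected
    # ... actually apply the change here ...
    return True

change = b"set max_speed=120"
tag = authorize(change)
assert apply_change(change, tag)                    # approved change accepted
assert not apply_change(b"set max_speed=999", tag)  # tampered change rejected
```

This handles the "not approved by an authority" half of the goal; the "could damage it" half is far harder, because it requires the system to predict the consequences of an approved change.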
Digital security evolved as a response to the specific problems the technology of the day faced. The following table from American Scientist brings defense methods into focus:
However, if we look at the effectiveness of these technologies to protect different classes of systems from Data Centers and PCs to the Industrial Systems and modern cars, the picture looks very bleak:
Clearly, alarm bells should be ringing. It does not take much to make grim conclusions by looking at this matrix.
Moonshot Security Project (Manhattan Project?)
Civilization advances by extending the number of important operations we can perform without thinking about them.
‒ Alfred North Whitehead
In the 1980s NASA worked on the STAR project: Self-Test and Repair. When you launch a vehicle to Mars or beyond Pluto and something breaks, there won't be a technician on hand to fix the problem.
Interestingly, in mechanical engineering there has been research into self-healing nanostructures for over a decade. For example, if cracks form in a bridge, ideally we would like materials that can repair themselves.
Why don't we have a nano-something in software that can self-repair? What do we have to do in hardware and software to build a technology that is aware of its own state, detects changes, and restores itself to the last known good version?
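As a toy illustration of what "aware of its own state" could mean in software, the sketch below hashes an image, detects any deviation from the recorded reference, and rolls back to the last known good copy. The data layout is an illustrative assumption; a real implementation would live in a secure boot chain, not in application code:

```python
# Sketch of self-healing storage: hash the current image, detect any
# change, and restore the last known-good copy. Illustrative only.

import hashlib

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

class SelfHealingStore:
    def __init__(self, image: bytes):
        self.image = image
        self.good_image = image           # last known-good backup
        self.good_digest = digest(image)  # recorded reference state

    def check_and_repair(self) -> bool:
        """Return True if a repair was needed and performed."""
        if digest(self.image) == self.good_digest:
            return False                  # state unchanged, nothing to do
        self.image = self.good_image      # roll back to known-good version
        return True

store = SelfHealingStore(b"firmware v1.0")
store.image = b"firmware v1.0 + malicious patch"  # simulated tampering
assert store.check_and_repair()                   # damage detected and healed
assert store.image == b"firmware v1.0"
```

The hard, unsolved parts are exactly what this sketch glosses over: protecting the reference digest and the backup from the same attacker, and deciding which changes are damage rather than legitimate updates.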
If you built a cyborg with a nervous system, how would you protect it from damage? Where would you place the gateways and firewalls? How would you collect and analyze the data? Is that even feasible for a highly complex system?
I think that the goal for the new security technology has to be ‘self-healing’; we have to build ‘pain’ and ‘fear’ into our future robots, including vehicles.
The following figure contains a telling table that compares the current and future complexity of vehicles. I'll point out that ‘low complexity’ describes existing architectures, which already have close to 100 ECUs and 100 million lines of code, and I heard recently that we should expect close to 300 million lines of code in cars by 2020/2025. It is unlikely that manual validation will even be possible for the future ‘high complexity’ systems. We have to build intelligent systems that can learn and protect themselves.
How do we get there? Can we design a system that creates a virtual view of the vehicle as it leaves the assembly line? If we create a virtual view of the entire system, the amount of data will be humongous; no single human will be able to act on it. The question is: can we create a virtual mirror image of a physical vehicle? Can we track all transitions and failures in the vehicle?
Can we use the virtual system to self-test modifications before they are pushed to the physical vehicle? Likewise, if a modification is detected in the physical vehicle, can we mirror it to the virtual image while maintaining a history of state transitions and changes? Would that allow us to build an analytics engine capable of distinguishing malicious from legitimate state changes in real time, without an external connection back to the mothership? The scary part of hardware, and especially software, updates is that it is impossible to test how a software or firmware update will affect the other systems in a vehicle. There has to be a way for the system to keep from damaging itself if something goes wrong after an update.
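A minimal sketch of such a virtual mirror might check every observed state transition against an approved set while keeping full history for later analysis. The states and the approval list below are invented for illustration; a real twin would mirror thousands of signals, not one state machine:

```python
# Sketch of a 'virtual mirror': every state transition observed in the
# physical vehicle is replayed against a virtual twin that keeps full
# history; transitions outside the approved set are flagged as suspicious.
# The state names and the approval list are illustrative assumptions.

class VirtualTwin:
    APPROVED = {
        ("parked", "driving"), ("driving", "parked"),
        ("parked", "updating"), ("updating", "parked"),
    }

    def __init__(self, state: str = "parked"):
        self.state = state
        self.history = [state]  # full record of mirrored states

    def mirror(self, new_state: str) -> bool:
        """Mirror a transition from the physical vehicle.
        Returns True if approved, False if suspicious."""
        approved = (self.state, new_state) in self.APPROVED
        self.history.append(new_state)  # record it either way
        self.state = new_state
        return approved

twin = VirtualTwin()
assert twin.mirror("driving")       # normal transition
assert not twin.mirror("updating")  # firmware update while driving: flagged
print(twin.history)                 # ['parked', 'driving', 'updating']
```

Because the twin keeps the whole history, the analytics engine can reason about sequences of changes, not just the latest one, which is what real-time anomaly detection would need.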
I always talk about this to folks at Microsoft, especially to developers. What’s the most important operating system you’ll write applications for? Ain’t Windows, or the Macintosh, or Linux. It’s Homo Sapiens Version 1.0. It shipped about a hundred thousand years ago. There’s no upgrade in sight. But it’s the one that runs everything.
‒ Bill Buxton from Microsoft Research
I play very decent chess; however, it is a futile exercise to play against a computer: you are only delaying the inevitable while trying to prolong the suffering. The human brain is exceptional at pattern recognition, and I play by constructing patterns on the board; the computer, however, knows the winning moves because it can compute the path to victory. The code that we write does not have patterns; like a game of chess, it is a set of instructions. A computer is better than the human brain at taking a human's intention (I want this data as input and such-and-such output to make up my feature) and finding the optimal way of executing those instructions. We have been reordering code in compilers and in CPUs for many years; there is no reason why we cannot go even further and have artificial intelligence analyze the executing code, fix inefficiencies, remove dead code, and rearrange the remaining instructions, without breaking the feature, to both optimize execution and remove security vulnerabilities.
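One of the transformations mentioned above, dead-code elimination, is easy to illustrate on a toy straight-line "instruction set". The intermediate representation below is a made-up example for illustration, not any real compiler's format:

```python
# Toy dead-code elimination on a tiny straight-line IR.
# Each instruction is (target, op, operands); 'ret' names the final output.
# The IR is a made-up illustration, not a real compiler's representation.

def eliminate_dead_code(program):
    """Walk backwards, keeping only instructions whose results are used."""
    live = {"ret"}
    kept = []
    for target, op, operands in reversed(program):
        if target in live:
            kept.append((target, op, operands))
            live.update(operands)  # this instruction's inputs become live
    return list(reversed(kept))

program = [
    ("a", "const", []),
    ("b", "const", []),
    ("c", "add", ["a", "b"]),
    ("d", "mul", ["a", "a"]),  # dead: 'd' is never used downstream
    ("ret", "copy", ["c"]),
]

optimized = eliminate_dead_code(program)
# The 'd' instruction is gone; the observable result is unchanged.
```

Compilers have done this mechanically for decades; the author's speculation is about pushing the same idea further, onto running code, with learned rather than hand-written analyses.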
Borrowing an analogy from quantum mechanics, I would ask: can we create a system that finds the lowest energy state of the executing code, its ‘global energy minimum’?
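That analogy maps naturally onto stochastic optimization: simulated annealing accepts occasional "uphill" moves so the search can escape local minima on its way toward a global one. A minimal sketch over a toy one-dimensional energy landscape (the landscape and the cooling schedule are illustrative assumptions):

```python
# The 'lowest energy state' analogy as stochastic optimization:
# simulated annealing over a toy 1-D energy landscape.
# The landscape and cooling schedule are illustrative assumptions.

import math
import random

def energy(x: float) -> float:
    # Toy landscape: a local minimum near x = -2.1, global minimum near x = 2.35.
    return 0.1 * x**4 - x**2 - 0.5 * x

def anneal(x: float = -2.0, temp: float = 5.0, cooling: float = 0.999,
           steps: int = 20000, seed: int = 0) -> float:
    rng = random.Random(seed)
    for _ in range(steps):
        candidate = x + rng.gauss(0, 0.5)
        delta = energy(candidate) - energy(x)
        # Always accept downhill moves; accept uphill moves with Boltzmann
        # probability, which is what lets the search escape local minima.
        if delta < 0 or rng.random() < math.exp(-delta / temp):
            x = candidate
        temp *= cooling  # gradually freeze the search
    return x
```

For executing code the "energy" would be some cost over instruction sequences rather than a smooth function, but the structure of the search, tolerating temporary regressions to avoid getting stuck, is the same.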
Peter J. Denning and Dorothy E. Denning. 2016. ‘Cybersecurity Is Harder Than Building Bridges.’ American Scientist.
Jeff Quitney. 2015. ‘Physical Security: An Ever Changing Mission.’ 1996 Federal Law Enforcement Training Center training film.
This video provides an overview of the evolution of physical security measures, due to increased criminal activity, and outlines pro-active strategies and physical security techniques used to augment the traditional reactive responses to crime.
Physical security describes security measures that are designed to deny unauthorized access to facilities, equipment and resources, and to protect personnel and property from damage or harm (such as espionage, theft, or terrorist attacks). Physical security involves the use of multiple layers of interdependent systems which include CCTV surveillance, security guards, protective barriers, locks, access control protocols, and many other techniques…
Physical security systems for protected facilities are generally intended to:
- deter potential intruders (e.g. warning signs and perimeter markings);
- detect intrusions and monitor/record intruders (e.g. intruder alarms and CCTV systems); and
- trigger appropriate incident responses (e.g. by security guards and police).
It is up to security designers, architects and analysts to balance security controls against risks, taking into account the costs of specifying, developing, testing, implementing, using, managing, monitoring and maintaining the controls, along with broader issues such as aesthetics, human rights, health and safety, and societal norms or conventions. Physical access security measures that are appropriate for a high security prison or a military site may be inappropriate in an office, a home or a vehicle, although the principles are similar.
The goal of deterrence methods is to convince potential attackers that a successful attack is unlikely due to strong defenses.
The initial layer of security for a campus, building, office, or other physical space uses crime prevention through environmental design to deter threats. Some of the most common examples are also the most basic: warning signs or window stickers, fences, vehicle barriers, vehicle height-restrictors, restricted access points, security lighting and trenches…
Alarm systems and sensors
Alarm systems can be installed to alert security personnel when unauthorized access is attempted. Alarm systems work in tandem with physical barriers, mechanical systems, and security guards, serving to trigger a response when these other forms of security have been breached. They consist of sensors including motion sensors, contact sensors, and glass break detectors.
However, alarms are only useful if there is a prompt response when they are triggered. In the reconnaissance phase prior to an actual attack, some intruders will test the response time of security personnel to a deliberately tripped alarm system. By measuring the length of time it takes for a security team to arrive (if they arrive at all), the attacker can determine if an attack could succeed before authorities arrive to neutralize the threat. Loud audible alarms can also act as a psychological deterrent, by notifying intruders that their presence has been detected. In some jurisdictions, law enforcement will not respond to alarms from intrusion detection systems unless the activation has been verified by an eyewitness or video. Policies like this one have been created to combat the 94–99 percent rate of false alarm activation in the United States.
Surveillance cameras can be a deterrent when placed in highly visible locations, and are also useful for incident verification and historical analysis. For example, if alarms are being generated and there is a camera in place, the camera could be viewed to verify the alarms. In instances when an attack has already occurred and a camera is in place at the point of attack, the recorded video can be reviewed. Although the term closed-circuit television (CCTV) is common, it is quickly becoming outdated as more video systems lose the closed circuit for signal transmission and are instead transmitting on IP camera networks…