Autonomics: A New Scientific Discipline?
I am convinced that we will increasingly encounter autonomous systems in many areas, and there are good reasons for this view. The convergence – the networking – of previously separate systems leads to the emergence of “systems of systems” that are so large, open, and heterogeneous that humans are often overwhelmed by controlling them manually: they cannot maintain an overview and cannot react quickly enough. The so-called “Smart Grid” – i.e., the new distribution system for electrical energy – illustrates this situation well: if sunshine determines whether thousands of households will feed photovoltaic energy into the grid or draw energy from it, the system must autonomously ensure that the energy balance is maintained. Humans would be overwhelmed by this task.

Acting autonomously is also required to realize certain projects economically. For example, the basic idea of Industrie 4.0 – producing mass-individualized products – requires a flexible production environment that adjusts autonomously to changed requirements; manual retrofitting by humans would probably make mass-individualized production economically unattractive in most cases.

Finally, there is the motivation of increased comfort: we do not need autonomous driving because humans are incapable of driving themselves, but because it would be quite pleasant to use driving time for more meaningful activities than focusing exclusively on road traffic.
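The Smart Grid balancing task mentioned above can be reduced to a very simple control idea: sum each household’s feed-in or draw and let a controllable resource absorb the residual. The following is a minimal sketch under that assumption; the household names, power values, and the single storage unit are hypothetical illustrations, not a real Smart Grid protocol.

```python
# Illustrative sketch of the grid-balancing task; all names and
# values are invented for this example. A real Smart Grid controller
# would involve forecasting, market signals, and far richer actuation.

def net_grid_balance(households):
    """Sum each household's feed-in (positive kW) or draw (negative kW)."""
    return sum(households.values())

def balance_with_storage(households, storage_kw):
    """Absorb a surplus into storage or cover a deficit from it, within
    the storage unit's power limit; return the residual imbalance that
    the wider grid must still handle."""
    imbalance = net_grid_balance(households)
    absorbed = max(-storage_kw, min(storage_kw, imbalance))
    return imbalance - absorbed

# Example: two photovoltaic households feeding in, one household drawing.
households = {"pv_house_a": 3.5, "pv_house_b": 2.0, "consumer_c": -4.0}
residual = balance_with_storage(households, storage_kw=1.0)
print(residual)  # 1.5 kW surplus minus 1.0 kW absorbed leaves 0.5 kW
```

With thousands of households, decisions like this must be made continuously and locally, which is precisely why manual control does not scale.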
On the one hand, the expected benefits of autonomous systems are often very high; on the other hand, by no means have all questions relating to them been answered yet. Unresolved technical challenges concern, for example, comprehensive safety and security – especially the interaction of data security and functional safety: hardly any accepted methods, and in particular no standards, exist for the joint analysis of these properties. Current standards also advise against using components that employ Artificial Intelligence in safety-critical application areas; yet techniques such as “Deep Learning” will clearly play a significant role going forward, so methods must be found to enable their use in critical applications as well. Finally, there is the question of which principles the architecture of autonomous systems must observe. In addition, there are legal as well as ethical questions. In the essays presented here, qualified experts discuss important aspects from this range of topics.
Since the beginning of industrialization, technical progress has been characterized by a succession of cross-cutting technologies that have brought about massive changes in virtually all areas of application. Mechanical engineering, which made industrialization possible, was followed in the 20th century by electrical engineering. The cross-cutting discipline of the 21st century is computer science, whose contribution is digitalization. The subsequent step could consist of realizing cross-cutting autonomy. This would require the emergence of a new discipline – the science of autonomous systems. By analogy to computer science/informatics, we could call it “autonomics”. I have the impression that we are already on the way there.
Peter Liggesmeyer has been the managing director of the Fraunhofer Institute for Experimental Software Engineering (IESE) in Kaiserslautern since 2015 and has held the professorship "Software Engineering: Dependability" at the Technical University of Kaiserslautern since 2004. From 2014 to 2017, he led the Gesellschaft für Informatik (GI e.V.) as its president. He is the founder of the Fraunhofer Embedded Systems Alliance, of which he was spokesperson from 2010 to 2013. From 2004 to 2014, Liggesmeyer was the scientific director of Fraunhofer IESE. He is the author of numerous technical articles and widely used reference books, in particular the standard work "Software Quality".