Cognitive Sciences and Command Post Organization 2/2

Cahiers de la pensée mili-Terre
Science & technology

Cognitive sciences are often dismissed as a fad, or as a matter for a few specialists. Yet they have already revolutionized many fields and are expanding rapidly. Could the army, which has successfully made the shift to digitalization, benefit from this new field of development? If so, which cognitive discoveries are likely to influence command structures in the future? These are the questions, among many others, that the authors of this article attempt to answer.



Command organization and cognitive sciences: the dream of a revolution?

  • A bit of cognitive science (fiction)

Given the potential, if not the hopes, or even the (Gascon?) promises, contained in cognitive-science research and development, nothing forbids letting the imagination conjecture the evolutions that could take place in a sufficiently distant future. Let us therefore imagine, with a hint of utopianism, that a real breakthrough occurs under the effect of a cognitive revolution of command. Let us place it far enough away, beyond 2035 in any case, and go as far as 2050, because it takes time, and distance, for reasonable probabilities to be distinguished from hypotheses that are sometimes a little fanciful.

Here is the scenario: a major military power, which we shall call, for example, Atlantis, pursues an aggressive policy of expansion. It seizes several territories within its sphere of influence, using the defence of Atlantean minorities as a pretext. In the face of this unorthodox diplomatic rhetoric, the international community is moved, the UN is out of breath, Europe is surprised, NATO is shaken, but the Atlantean president remains inflexible, playing on the quiproquo of a pro domo status quo that constitutes, de facto and stricto sensu, a casus belli. So, all negotiations having failed, a coalition led by the United States undertakes to send the Atlantean forces home.

Militarily, the well-trained and well-equipped Atlantean armed forces act according to hybrid modes of action, combining the use of robotic units with "human" and conventional forces. The coalition also operates in a hybrid mode. It has a full range of robots and platforms that exploit the most advanced technologies.

The two sides' differing conceptions of hybridity lead to the confrontation of two distinct modes of action. The Atlantean army has developed a massive force of robots, favouring a quantity of small, mobile, aggressive and autonomous systems, with the aim of overwhelming its enemy through massed use, according to a tactic close to "swarming".

The coalition, on the other hand, relies on another dimension of new technologies. Western military thinkers have given up the illusion of crushing the enemy under mechanical mass. They have replaced the accumulation of materiel with the accumulation of information, seeking to stun the adversary through the explosion of information flows and the compression of time. Victory depends, in their eyes, on the ability to sustain an ever-faster cognitive tempo. This is why efforts have been focused for several decades on the military development of the cognitive sciences.

These efforts have already borne fruit, leading to a complete transformation of force organization and command structures. The expeditionary force engaged by the coalition thus consists of two divisions, only 40% of which are robotised. Opposite them, the Atlantean forces consist of three divisions, 60% of which are robotised. Fundamentally pyramid-shaped, the latter are structured in a classic way, according to the model in force at the dawn of the 21st century: layers stacked up to coordinate and direct the action of subordinate levels. The Atlanteans, renowned for their pragmatism, have retained a fundamentally vertical mode of decision-making and information dissemination, tried and tested for centuries: component, division, brigade and regimental headquarters.

The Western coalition, for its part, has broken down these types of structure in order to restore and promote initiative from the lowest tactical levels. Initiated in France by the Scorpion doctrine, this approach has become a school of thought, proposing, for the information age, a restored form of Auftragstaktik. In strict opposition to the monolithism of verticality, it is based on the horizontality of information management. Although the organisation in metropolitan France has changed relatively little since the 2020s, the articulation of forces in operations follows a model of permanent project organisation and develops almost autonomously, on the model of a neuronal system. Each module positions itself and interacts according to the situation of the others, naturally opting for the best possible solution with regard to the desired end state. In order to move faster, obsolete military coordination tools such as boundaries, coordination lines or the phasing of action have been abandoned: the permanent exchange of information has made it possible to do away with them.

Thus, while the Atlantean forces are led by dozens of general officers, the coalition is commanded by only three generals, who succeed one another on the principle of watchkeeping. Taking the watch in turn every eight hours, they are connected to the Cognitiveshare system developed by MIT in 2030, which allows them, in just a few minutes, to acquire an intimate knowledge of the general situation. The coupling of this system with the COPD4Victory analysis software, also developed in the United States, ensures the consistency of their decisions and guarantees that they correspond to the strategic objectives set by the political level, while leaving a non-negligible part to the expression of the duty general's personality. The combination is a virtuous one: the conduct of operations reveals a succession of decisions that are perfectly in line with the aims of the war, yet always adapted to the given moment. They are also systematically tested by the now classic Hologrammic wargaming developed by MiliGoogle Corp. For the Atlantean experts, the result is an impression of irrationality that thwarts all the predictions calculated by their analyst robots, despite an extremely powerful artificial intelligence. In addition, the general on watch has direct access to the political level, which keeps the synergy between politics, public opinion and the military in strict coherence, according to the injunctions formulated by the Pentagon in a founding publication of 2035, Cognitiving Clausewitz's Trinity.

At the tactical level, this horizontal, networked operation allows coalition forces to optimise the fluidity offered by cyberspace. They operate in small, highly responsive and highly mobile modules, capable of depriving the enemy of his freedom of action despite his mass of resources. Fighting against time more than against their adversaries, the coalition members constantly re-articulate and strike the enemy as soon as he reveals a vulnerability, while he struggles to redirect his forces to regain the upper hand. This practice of war is the result of situational intelligence, in the most literal sense of the term, made possible by the cognitive sciences. The fog of war is thinned by the BattleBigdata Mil-check system which, combined with the optimization of decision-makers' cognitive faculties, has produced a lightning-fast mode of action that cannot be countered by technology alone, even the most modern.

Thus, in a few days, and despite a quantitatively equivalent balance of forces, the Atlantean system collapses and soldiers surrender en masse to the coalition. The Atlantean generals, unable to keep up with the pace imposed by the coalition, find themselves unable to command their subordinates other than piecemeal: the Atlantean command system seizes up, causing a systemic collapse of the entire force.

  • Limits to potential developments

The scenario imagined above deliberately exaggerates a break in the organization of command, in line with the idealized vision of an ultra-connected and robotized world. Idealized, because it undoubtedly carries a large part of utopia: many limitations make it inaccessible, at least until the horizon of 2050. These limits are technological, organizational, economic, legal and, finally, ethical.

  • Technological and scientific limits

However, cognitive science points to immense potential. For example, the FACETS[1] project, which brings together German researchers, is now trying to design a computer that is supposed to function like the human brain. Some scientists believe that it will be possible to do even better in the future than artificially reproducing human intelligence: digitizing a real human brain and then transferring it to a computer. In short, it would be a matter of activating the mind of an individual outside its carnal envelope.

However, this type of project remains very theoretical, and has not yet produced results that would indicate inexorable progress in reducing the still-unknown spaces of the human mind. As things stand, it is therefore futile to imagine developments by 2050, let alone 2035, that would allow us to fully model the functionalities of the human brain. These are neither quantifiable nor reducible, from a connectionist point of view at any rate, to simple formulas. Moreover, there is no consensus on the scientific definition of attention or memory, the two areas where the cognitive sciences have the most significant potential. As for objectifying personality, intuition or creativity, a strict cognitive rationalisation appears presumptuous for the time being. The global understanding of cognitive processes remains very theoretical. It has only revealed the tip of an iceberg whose submerged mass several decades will not suffice to fathom, let alone decode well enough to recode it in the form of software [2].

  • Organizational limits

Moreover, it is unlikely that such technologies will be developed on an industrial scale before a very distant horizon, even if they had reached full technological maturity, i.e. were standardised and mastered. The cost of such programmes, but also of the organisational reforms needed to integrate them into the civilian or military worlds, would probably appear disproportionate to the benefits envisaged. For example, the future supercomputers conceptualized by Bull are supposed to reach, at the start of the 2020s, a computing power of the order of an exaflops, i.e. roughly the same level of performance as the human brain. But the human brain will always remain about a million times less energy-hungry (20 to 30 megawatts for the supercomputer, compared with 30 watts for the human brain [3]). The criterion of profitability, in the wake of the 2008 crisis, is likely to weigh heavily on the first quarter of the 21st century. Yet the innovative but heavy systems that could be in service in 2035 and, to a large extent, in 2050 are being conceptualized today and would have to be put into production within the next two decades. Nor should it be forgotten that the use of such new means will inevitably require a complete overhaul of selection criteria and training courses, to enable operators to implement them and decision-makers to grasp their potential.

  • Legal and ethical limits

Finally, whatever the technological opportunities, ethical and legal limits should slow down, if not hinder, the implementation of applications in this field. This does not mean that they will interrupt the almost inexorable process of research, but they will probably postpone its industrial transformation to a horizon much more distant than 2035 or 2050. This is particularly because the development of technology is now subject to the precautionary principle, enshrined in France's Constitution in a logic of environmental preservation, of course, but applied de facto to all fields of research. Thus, at the crossroads of bioethics and technology, the exploitation of the cognitive sciences cannot be given free rein as long as its real consequences for man are not under control [4].

Moreover, it is a fact that research on human beings, whether cognitive or genetic, raises eminently ethical issues. These do not allow for the ill-considered development of technological opportunities. The fear of altering human nature by attempting to engineer a kind of superman, or of creating inequalities between those who will and will not have access to the technology, are issues that go beyond the cognitive sciences alone and subject them to the same ethical and social vigilance. The possibility that a system may escape human control seems unthinkable for societies that approach the judicialization of connected objects and robots according to the overriding principle of the "man in the loop", all the more so when it comes to conflict and the use of force. It should be noted, however, that the empowerment of machines continues, as does experimentation with autonomous cars, and that a growing acceptance of these technologies, which will eventually reach command systems, can therefore be envisaged.

Opportunities for the Army

Once the possible limits of their development between now and 2050 have been set, the useful and exploitable advances in cognitive sciences in the area of command become clearer: improving the cognitive capacities of leaders, developing human-machine interfaces to take advantage of technological advances in robotics and artificial intelligence, and optimising the organisation of headquarters.

  • Human factors

Increasing the cognitive capacities[5] of leaders can be done in different ways. The objective remains to gain lucidity (against fatigue, stress) and efficiency (speed, complexity) in decision making.

First of all, advances in medical imaging would make it possible to select leaders according to their ability to withstand stress and fatigue and to make complex decisions. This type of selection already exists, for example, to determine suitability for pilot positions.

Training will remain a matter of repetition: according to Nathalie Koulmann, a medical researcher at the Armed Forces Biomedical Research Institute (IRBA), "task-specific training is the first form of optimisation of cognitive functions" [6].

Various techniques, some of which are already known, will make it possible to increase the capacities of certain cognitive functions. Among others:

  • Transcranial direct current stimulation (tDCS): a team of researchers from the US Air Force has recently shown that electrode stimulation of the left dorsolateral prefrontal cortex increases the ability to handle several problems simultaneously [7].
  • The use of the pharmacopoeia and various products whose side effects will become increasingly well known. These include propranolol (a beta-blocker in use since 1960) to reduce stress [8], "ampakines", stimulants used to boost memory, and modafinil, used to combat narcolepsy, which improves resistance to sleep deprivation [9].
  • Behavioural and cognitive therapies (CBT), which re-situate the subject in his or her environment, may also promote learning: according to psychotherapist Jean Cottraux, "further study of cognitive schemas and their relationship with neural networks will undoubtedly lead to a theoretical breakthrough in the near future" [10].
  • Creativity could be stimulated: French advertising executive Jean-Marie Dru has proposed "disruption" [11], a method that starts with the analysis of the cognitive biases that hinder innovation.
  • Genetic manipulation could also be considered technically possible, but for Pierre-Yves Cusset, project manager at France Stratégie, "it is highly unlikely that we can significantly improve an individual's intelligence by transferring the mutation of a single gene or even a limited number of genes" [12].
  • Human-Computer Interfaces

Two main families of possible improvements in the interfaces between humans and machines are emerging.

On the one hand, brain-machine interfaces (BMIs) would not only improve performance (increased human performance), but also allow the monitoring of brain states (fatigue, stress, etc.) or the remote control of robots (like the manned-unmanned teaming, MUM-T, tested by DARPA). Described in an IRSEM study [13], the brain-machine interface is a direct link between a brain and a machine (robot, prosthesis, computer, etc.). Brain activity is measured by sensors; the data are then analyzed and the device, whether invasive (implanted electrodes) or non-invasive (helmet), sends commands to a mechanical control system. While this dual-use technology is not yet mature and feeds many fantasies (communication through thought, remote robot control, etc.), progress in this field is considerable and should continue over the next decade, particularly thanks to the much-vaunted convergence of the NBICs (nanotechnologies, biotechnologies, information technology and cognitive sciences).

On the other hand, the ergonomics of human-machine interfaces (HMIs) should help make the technology increasingly efficient and consolidate confidence in the machine, even though the user can no longer understand the complexity of its operation. Studies in this direction aim to reduce the cognitive load resulting from the use of increasingly advanced technologies, to allow the selection of the most relevant information (e.g. in digitisation, where not all information needs to be displayed, depending on the zoom level of the map), and finally to allow the visualisation of complex data: augmented reality, mind maps, geomatics. These tools, coupled with data-analysis tools, will facilitate understanding of the environment, and thus the decision-making process; for example, they will have direct implications by allowing a thorough analysis of the enemy. The scalability of the tools will also be studied, taking into account the ever-changing needs of users. It is to be hoped, though, that progress in HMI ergonomics will keep pace with new technologies, so that the cognitive load associated with learning and operating them stops increasing. It is also to be feared that, since these tools are programmed by humans, they will reproduce the biases of human reasoning and only help to consolidate certainties and preconceived ideas.

This strengthening of the man-machine link will require mutual trust, which in turn will require a clear definition of the division of authority between the responsibilities of man and machine (somewhat like the RACI). Reflections are therefore being conducted along these lines for the ballistic missile defence command and control system. In any event, confidence in and empowerment of the machine should increase, in line with what happens in civilian life (such as cars, which are becoming increasingly automated or even autonomous). Staff will therefore be in a better position to make full use of the capabilities of IT tools.

  • Systems engineering

The optimization of the internal processes of the command post (CP) contributes to the quality and speed of decisions. It is achieved by optimising group work, and thus improving the circulation of information and knowledge.

The optimization of staff work will first involve taking into account and correcting cognitive biases (confirmation bias, mental anchoring, self-serving bias, etc.). Questioning techniques exist, such as the one described by two researchers [14], which uses cognitive maps to apprehend the complexity of systems and detect errors. It is also conceivable, in line with the conclusions of a RAND Corporation report [15], that progress in analytical tools will make it possible to reduce "friction" between humans by reducing the size of headquarters.

The desire to improve the flow of information has always existed in warfare. Today it is reflected in NATO's concept of Knowledge Development [16], or in the existence of CMIs (information management cells) in headquarters, which are responsible for ensuring that information reaches the right people at the right time. General Hubin describes a future organization of reactive command [17], close to the collaborative combat sought by Scorpion, in which the units in charge of execution (the equivalent of today's SGTIA) would exchange more information among themselves, the conduct level (the GTIA) would ensure coordination, and the design level would plan and combine effects. This desire to describe a network structure more efficient than a hierarchical one is not new [18]. Command organisation is likely to become more flexible, combining a networked structure with the possibility of forming ad hoc units, but without going as far as the functioning described by General Hubin, because human relations (including command relations) will remain decisive to the effectiveness of the system; in war, only leaders make decisions, and those decisions involve life and death: hierarchy will therefore remain the determining factor.

This progress will therefore make it possible to develop techniques and technologies that are already being considered, but which today have limited applications. Through faster and more efficient data management, cognitive sciences seem to contribute to good and rapid decision-making.

Conclusion: thinking fast and better

Understanding how the brain works and modelling its dynamics means decoding the formula that transforms information into a decision. It means grasping the principles that frame and explain the way the individual interacts with and influences his or her environment. An operational decision-making cell attempts exactly the same operation through planning and conduct: the organization of command appears as a form of cognitive mimicry. A command system, and therefore any command post, reproduces the same processes of data collection and processing to arrive at prioritised options. Cognitive science research should therefore converge with the desire to improve the organisation of CPs: the better these cognitive processes are known and mastered, the more efficient the operating models that command posts will derive from them.

In view of the limitations described and presented in this study, it is probably futile to claim that we will one day arrive at an exact science of thought. As with any other science, we must probably be content to take the cognitive sciences for what they are: a methodical approach that progressively and complementarily corrects its hypotheses. A science is not intended to provide firm and definitive answers, but to decipher what it does not yet know. Descartes neither invalidated nor replaced Euclid's mathematics: he clarified and advanced it. In the same way, the cognitive sciences are not going to revolutionize the organization of command, at least probably not in the near future, but to perfect it. In this respect, they have rich potential. They shed light on successful practices in information management, and they reveal processes that can be imitated artificially all the more effectively the closer they come to the very nature of thinking.

Above all, the cognitive sciences seem to help us understand how information flows are conducted and exploited, which may have conjunctural, but to date not structural, implications for organisational models and the management of resources dedicated to decision-making. This leads us to appreciate them as factors of progress, but not as the kind of rupture of which the military are sometimes fond...

[1] http://facets.kip.uni-heidelberg.de/public/

[2] In this respect, let us note that the very nature of a "science of thought" is itself debated, which precludes any peremptory assertion about the object of the cognitive sciences, for it would first be necessary to determine whether thought is carried by words or by images. There are two schools, one realist and the other constructivist, rooted in the quarrel of universals, and it is not a matter of choosing between them here. Their opposition shows that it is impossible to decide as things stand, or to draw certainties from the hypotheses that the cognitive sciences are trying to confirm. In fact, since the Cartesian cogito, thinking has often been understood as the interaction of concepts linked together by logical sequences. This causal approach has made it possible to hypothesize a mechanical functioning of thought and to deduce from it a coding that leads to computers (George Boole stated: "the laws we need to build are those of the human mind"; such a conception introduced a bias that makes us consider the operations of the mind as mathematical operations). The constructivist school opposes this conception, because it envisages a thought that conveys image-ideas rather than symbolic ideas. Cognitive linguistics sees the association of word and idea as a representation incapable of translating the substance of thought, and therefore of reporting on it objectively.

[3] According to Horst Simon, deputy director of the Lawrence Berkeley National Laboratory, in http://itbulletin.fr/2014/11/18/quand-la-puissance-de-calcul-rejoint-celle-du-cerveau-humain/

[4] Conceptualized by Hans Jonas in 1970 with a view to environmentally responsible scientific progress, the precautionary principle is supposed to prevent the irreparable and destructive consequences of science without conscience. It entered international law at the Rio Summit and with the Maastricht Treaty in 1992. The constitutional revision of 2005 incorporated it into the Charter of the Environment, which specifies its application in Article 5: "Where the occurrence of damage, although uncertain in the state of scientific knowledge, could seriously and irreversibly affect the environment, public authorities shall ensure, by application of the principle of precaution and within their areas of competence, the implementation of risk assessment procedures and the adoption of provisional and proportionate measures to prevent the occurrence of damage."

[5] On this subject, read the article by two doctors from the Armed Forces Biomedical Research Institute (Canini & Trousselard, 2016).

[6] Koulmann, 2012

[7] Nelson, 2016

[8] http://www.ladepeche.fr/article/2015/12/21/2242406-toulouse-mise-medicament-soigner-stress-traumatique.html

[9] Koulmann, 2012

[10] Cottraux, 2006

[11] Dru, 2016

[12] Colin (dir.), "The Augmented Man: sociological reflections for the soldier", 2016.

[13] Colin, 2012

[14] Laroche & Nioche, 2006

[15] Ries, 2016

[16] RDIA-004, Knowledge Development, 2010

[17] General Hubin, 2003

[18] As early as 1998, in the wake of Operation Desert Storm, the American Admiral Cebrowski created the concept of Network Centric Warfare (NCW) (De Neve & Henrotin, 2006): he transposed to the military world the way companies organized their computer networks, increasing productivity while becoming more reactive.

A graduate of the Institut d'études politiques de Paris, holder of a DEA in international relations and a doctorate in political science from the University of Paris I, Ms Aline LEBŒUF is a researcher at the Centre for Security Studies of the French Institute of International Relations. She was an auditor at the Cours supérieur interarmes from September 2016 to January 2017.

A Saint-cyrien of the "Général de Galbert" class, Battalion Commander HOURS chose to serve in the infantry. He spent the first part of his career in the 16th Battalion of Chasseurs, as section leader, deputy officer and then unit commander. Assigned to the Infantry School from 2013 to 2016, he served as a brigadier in the Application Division.

A Saint-cyrien of the "Général de Galbert" class, Squadron Leader LARCHET served in the 1st Foreign Cavalry Regiment and in the Forces Employment Doctrine Centre. He attended the Cours supérieur interarmes from September 2016 to January 2017.

Title: Cognitive Sciences and Command Post Organization 2/2
Author(s): Ms LEBOEUF, Commandant HOURS and Chef d'escadrons LARCHET