The phenomena of emergence, entropy and synchronisation have yet to be researched comprehensively in human-machine interaction (HMI); doing so is necessary for a better understanding of the complex dynamics of these interactions.
1. Emergence in human-machine interaction (HMI):
Emergence refers to the occurrence of new, unexpected properties or behaviours in complex systems that cannot be derived directly from the properties of the individual components.
In HMI contexts:
• Emergent meaning-making: De Jaegher and Di Paolo (2007) developed the concept of ‘participatory sense-making’, which is also applicable to HMI. They argue that meaning and understanding arise in and through interaction, suggesting that in advanced HMI systems new meanings and concepts can emerge from the dynamic interplay between humans and machines [1].
• Emergent machine behaviour: Rahwan et al. (2019) discuss how AI systems can exhibit emergent behaviour that cannot be derived directly from their algorithms. This is particularly relevant for complex HMI systems, where the interplay of human and machine behaviour can lead to unexpected results [2].
• Situational emergence: Suchman (2007) argues that meaning and action in human-machine interaction emerge situationally. This points to emergent processes in which the context and the specific interaction situation play a decisive role [3].
• Distributed cognition: Hollan et al. (2000) develop the concept of distributed cognition, which implies that cognitive processes can emerge from the interaction between humans and technological artefacts. In HMI systems, this could lead to new forms of problem solving and information processing that neither humans nor machines alone could achieve [4].
• Emergent team performance in human-AI systems: Saenz, Revilla and Simón (2020) examine how the integration of AI into human teams can lead to improved performance. They emphasise the importance of designing AI systems that complement and augment human capabilities rather than replacing them. The authors argue that well-designed human-AI teams can lead to emergent capabilities that go beyond the individual strengths of humans or AI. They propose a framework that fosters collaboration, communication and mutual learning between humans and AI systems, which can lead to new, emergent forms of problem solving and decision making [5].
2. Entropy in HMI:
Entropy, a measure of disorder or uncertainty in a system, can be interpreted in different ways in HMI contexts:
• Information exchange: The entropy in communication between humans and machines can be understood as a measure of the uncertainty or information content of the messages exchanged. A reduction in entropy could indicate an improvement in communication efficiency.
• Learning processes: In human-machine learning systems, entropy can serve as a measure of the uncertainty of the system about the knowledge status of the human learner. Decreasing entropy could indicate progress in the learning process.
• Decision-making: In collaborative decision making, entropy can represent uncertainty regarding the best decision. The joint work of humans and machines often aims to reduce this entropy.
Griffiths and Tenenbaum (2006) discuss how entropy-based models can explain human learning and reasoning, which is also applicable to HMI contexts [6].
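To make the notion of entropy concrete, the following minimal Python sketch (purely illustrative; the decision distributions are assumptions, not drawn from the cited works) computes the Shannon entropy H = −Σ p·log₂ p of a system's belief over candidate decisions before and after a clarifying human input. The drop in entropy corresponds to the reduction of uncertainty described above.

```python
import math

def shannon_entropy(probs):
    """Shannon entropy H = -sum(p * log2 p) in bits; zero-probability terms contribute nothing."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Hypothetical example: the system's belief over four candidate decisions
# before and after a clarifying human input.
before = [0.25, 0.25, 0.25, 0.25]   # maximal uncertainty: H = 2.0 bits
after  = [0.70, 0.20, 0.05, 0.05]   # belief sharpened by the interaction

print(f"Entropy before interaction: {shannon_entropy(before):.2f} bits")
print(f"Entropy after interaction:  {shannon_entropy(after):.2f} bits")
```

The same measure can be applied to the messages exchanged, to the system's uncertainty about the learner, or to the space of candidate decisions, as described in the three bullet points above.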
3. Synchronisation in HMI:
Synchronisation refers to the coordination of events or states in interacting systems.
In HMI contexts:
• Brain-computer interfaces: Synchronisation between brain activity and computer output is crucial for effective BCIs. Wolpaw and McFarland (2004) discuss how this synchronisation can be achieved and improved [7].
• Affective computing: Emotional synchronisation between humans and machines, in which the system recognises the emotional states of the human and reacts to them, is a central goal of affective computing. Picard et al. (2001) investigate methods for recognising and synchronising emotions [8].
• Collaborative robotics: In human-robot interaction, the physical and temporal synchronisation of movements is crucial for effective cooperation. Dragan et al. (2013) investigate methods to improve this synchronisation [9].
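As one way of quantifying temporal synchronisation (a minimal sketch under simplifying assumptions; the signals, the simulated delay and the function names are illustrative and not taken from the cited studies), the following Python snippet estimates the lag at which a machine signal best tracks a human signal using lagged Pearson correlation:

```python
import numpy as np

def lagged_correlation(human, machine, lag):
    """Pearson correlation between human[t] and machine[t + lag]:
    a high value means the machine echoes the human 'lag' samples later."""
    if lag == 0:
        h, m = human, machine
    else:
        h, m = human[:-lag], machine[lag:]
    return float(np.corrcoef(h, m)[0, 1])

# Illustrative signals: the machine tracks the human's movement with a
# constant delay of 5 samples plus a little noise.
rng = np.random.default_rng(42)
t = np.linspace(0, 10, 500)
human = np.sin(2 * np.pi * 0.5 * t)
machine = np.roll(human, 5) + 0.05 * rng.standard_normal(len(t))

best_lag = max(range(0, 21), key=lambda L: lagged_correlation(human, machine, L))
print(f"Best lag: {best_lag} samples, "
      f"correlation: {lagged_correlation(human, machine, best_lag):.2f}")
```

In collaborative robotics or BCI settings, the same idea can be applied to movement trajectories or neural signals to track how closely, and with what delay, the two partners are coupled.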
Integration of the concepts:
These three phenomena – emergence, entropy and synchronisation – are often closely interwoven in HMI systems:
• Emergence through synchronisation: Synchronisation between humans and machines can lead to emergent behaviours or solutions that none of the participants could have generated on their own.
• Entropy and emergence: The reduction of entropy in an HMI system (e.g. through improved communication or learning progress) can favour emergent phenomena.
• Synchronisation to reduce entropy: Improved synchronisation between human and machine can lead to a reduction in entropy in the overall system by reducing uncertainty in the interaction.
• Emergence as an entropy regulator: Emergent properties in HMI systems can help to regulate the entropy of the system by generating new, more efficient interaction patterns.
Example: In an adaptive learning environment, increasing synchronisation between the learner and the system (e.g. through improved prediction of learning needs) could lead to a reduction of entropy in the learning process. This could in turn favour emergent learning patterns that were not foreseen by the learner or the system alone.
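A minimal simulation of this idea (the learner model, the signal accuracy of 0.7 and the five candidate learning needs are assumptions made for illustration, not taken from the cited literature): the system holds a belief over which learning need the learner currently has, updates it from noisy learner signals, and the entropy of that belief falls as the system's predictions come into step with the learner.

```python
import numpy as np

rng = np.random.default_rng(0)

K = 5                      # hypothetical number of candidate learning needs
true_need = 2              # the learner's actual (hidden) need
belief = np.full(K, 1 / K) # the system starts maximally uncertain
accuracy = 0.7             # probability that a learner signal points to the true need

def entropy_bits(p):
    """Shannon entropy of a probability vector, in bits."""
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

for step in range(1, 11):
    # Simulate a noisy signal from the learner (e.g. an error on a task).
    signal = true_need if rng.random() < accuracy else int(rng.integers(K))
    # Bayesian update: likelihood of the observed signal under each hypothesised need.
    likelihood = np.full(K, (1 - accuracy) / (K - 1))
    likelihood[signal] = accuracy
    belief = belief * likelihood
    belief /= belief.sum()
    print(f"step {step:2d}: entropy = {entropy_bits(belief):.2f} bits, "
          f"predicted need = {int(belief.argmax())}")
```

As the belief sharpens, the system's recommendations and the learner's actual needs become increasingly aligned, which is one concrete reading of synchronisation reducing entropy in this setting.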
Conclusion
Viewing HMI through the lens of emergence, entropy and synchronisation provides a rich framework for understanding and designing such interactions. It allows us to grasp the complex dynamics that emerge when humans and machines work together and can lead to new insights and innovations in the design of HMI systems.
Future research could focus on how these concepts can be quantified and applied in practice to optimise HMI systems and enable new forms of collaboration between humans and machines.
References
[1] De Jaegher, H., & Di Paolo, E. (2007). Participatory sense-making: An enactive approach to social cognition. Phenomenology and the Cognitive Sciences, 6(4), 485-507.
[2] Rahwan, I., et al. (2019). Machine behaviour. Nature, 568, 477-486.
[3] Suchman, L. A. (2007). Human-Machine Reconfigurations: Plans and Situated Actions. Cambridge University Press.
[4] Hollan, J., Hutchins, E., & Kirsh, D. (2000). Distributed cognition: Toward a new foundation for human-computer interaction research. ACM Transactions on Computer-Human Interaction, 7(2), 174-196.
[5] Saenz, M. J., Revilla, E., & Simón, C. (2020). Designing AI Systems With Human-Machine Teams. MIT Sloan Management Review.
[6] Griffiths, T. L., & Tenenbaum, J. B. (2006). Optimal predictions in everyday cognition. Psychological Science, 17(9), 767-773.
[7] Wolpaw, J. R., & McFarland, D. J. (2004). Control of a two-dimensional movement signal by a noninvasive brain-computer interface in humans. Proceedings of the National Academy of Sciences, 101(51), 17849-17854.
[8] Picard, R. W., Vyzas, E., & Healey, J. (2001). Toward machine emotional intelligence: Analysis of affective physiological state. IEEE Transactions on Pattern Analysis and Machine Intelligence, 23(10), 1175-1191.
[9] Dragan, A. D., Lee, K. C., & Srinivasa, S. S. (2013). Legibility and predictability of robot motion. In 2013 8th ACM/IEEE International Conference on Human-Robot Interaction (HRI) (pp. 301-308). IEEE.