[Extract from chapter 10]
Artificial superintelligence (ASI) refers to a hypothetical form of artificial intelligence that far surpasses human cognitive abilities in virtually all areas [1]. In contrast to artificial general intelligence (AGI), which is conceived as having human-level capabilities across a broad range of domains, an ASI would be able to solve problems and reach insights that lie beyond human comprehension [2].
The development of an ASI could potentially trigger an ‘intelligence explosion’, in which the AI recursively improves and optimises itself, leading to an exponential increase in its capabilities [3]. This raises important ethical and existential questions, particularly regarding the control of such a superintelligence and its alignment with human values and goals [4].
While some experts emphasise the potential benefits of ASI for scientific progress and global problem-solving, others warn of serious risks and unintended consequences [5]. The debate around ASI remains a central issue in AI ethics and futures studies.
In the 21st century, the vision of an artificial superintelligence appears as a ‘miracle machine’ (much as the computer did in the mid-20th century) that is expected to make all human impossibilities possible, but also as a threat that could wipe out humanity. To master this balancing act, the phenomenon of emergence in artificially intelligent systems must be given greater consideration and incorporated into their development. Preferably before the Singularity.
1. Bostrom, N. (2014). Superintelligence: Paths, dangers, strategies. Oxford University Press.
2. Yampolskiy, R. V. (2016). Artificial superintelligence: A futuristic approach. Chapman and Hall/CRC.
3. Chalmers, D. (2010). The singularity: A philosophical analysis. Journal of Consciousness Studies, 17(9–10), 7–65.
4. Russell, S. (2019). Human compatible: Artificial intelligence and the problem of control. Viking.
5. Tegmark, M. (2017). Life 3.0: Being human in the age of artificial intelligence. Knopf.