The mathematical limitations of Artificial General Intelligence 


Institutional Communication Service

23 March 2023

The recent popularity of ChatGPT and other language models capable of writing elaborate texts and answering users' questions has revived the debate on Artificial General Intelligence (AGI), an artificial intelligence capable of matching or surpassing all expressions of human intelligence, and on its risks. The topic was the focus of a seminar organised by the Institute of Philosophical Studies of the Faculty of Theology in Lugano, affiliated with USI, based on the essay Why Machines Will Never Rule the World (Routledge 2023) by Jobst Landgrebe and Barry Smith; the discussion also featured Tim Crane (CEU, Vienna), Emma Tieffenbach (USI, EPFL) and Stefan Wolf (USI).

The capabilities of these language models suggest to many that Artificial General Intelligence is close at hand: what separates current systems, capable of producing persuasive texts on various topics, from a programme capable of matching or even exceeding the human mind would be only a matter of computing power. Based on this idea, philosophers such as Nick Bostrom argue that the development of the so-called Singularity, a wholly autonomous and potentially dangerous artificial intelligence, is inevitable, and that the task of philosophy should be to prepare for this event.
Landgrebe and Smith's thesis, on the other hand, is that developing an artificial general intelligence is impossible, and, as they explained during their talk at the seminar, not because of technological limitations: it is a mathematical impossibility.
Systems like ChatGPT are based on deep learning and owe their effectiveness to a training phase in which they analyse a large sample of data. Smith explained that this training set must have a variance comparable with that of the data on which the artificial intelligence will later operate. But for systems such as the human mind and language, creating a training set with these characteristics is impossible, so the models will necessarily be limited. This is, as mentioned, a mathematical impossibility, similar to the impossibility of building a perpetual motion machine.
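
To make the variance argument concrete, here is a small illustrative sketch (our own, not from the seminar): a simple model fitted on data drawn from a narrow slice of the input space performs well there but degrades sharply on data outside that slice. The function, the ranges and the polynomial fit are arbitrary choices made only for the demonstration.

    import numpy as np

    rng = np.random.default_rng(0)

    # Training sample drawn from a narrow slice of the input space
    x_train = rng.uniform(0.0, np.pi, 200)
    y_train = np.sin(x_train) + rng.normal(0.0, 0.05, x_train.size)

    # Fit a cubic polynomial as a simple stand-in for a learned model
    coeffs = np.polyfit(x_train, y_train, deg=3)

    # Evaluate inside the training range and in an unseen region
    x_in = np.linspace(0.0, np.pi, 100)
    x_out = np.linspace(np.pi, 2.0 * np.pi, 100)
    err_in = np.mean((np.polyval(coeffs, x_in) - np.sin(x_in)) ** 2)
    err_out = np.mean((np.polyval(coeffs, x_out) - np.sin(x_out)) ** 2)

    print(f"error inside the training range:  {err_in:.4f}")
    print(f"error outside the training range: {err_out:.4f}")

In the same spirit, the argument goes, a model trained on a corpus whose variance does not cover the full range of human language and thought cannot be expected to behave reliably beyond that range.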

To understand this impossibility better, Landgrebe and Smith introduce an important distinction between logical and complex systems. The former are governed by a closed set of rules, and it is therefore possible, even if at times very complicated, to predict their behaviour: the motion of the planets and a combustion engine are examples of logical systems. Complex systems, on the other hand, evolve according to context-dependent interactions. Examples of complex systems are, in addition to the human mind, the global climate, the stock market and living organisms. It is possible to create predictive models for complex systems, but such models are either general or limited in time. This is the case, for example, with weather forecasts: detailed and reliable for the next few days, only general for longer periods.
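
The time-limited nature of detailed predictions for complex systems can be illustrated with a toy example (again our own, not taken from the book). The Lorenz-63 equations, a drastically simplified model of atmospheric convection, are fully deterministic, yet two trajectories that start almost identically drift apart until a detailed forecast becomes worthless, leaving only general statements about the system's overall behaviour.

    import numpy as np

    def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        # One Euler step of the Lorenz-63 equations (a toy weather model)
        x, y, z = state
        return state + dt * np.array([sigma * (y - x),
                                      x * (rho - z) - y,
                                      x * y - beta * z])

    # Two starting points differing by one part in a million
    a = np.array([1.0, 1.0, 1.0])
    b = a + np.array([1e-6, 0.0, 0.0])

    for step in range(1, 2501):
        a, b = lorenz_step(a), lorenz_step(b)
        if step % 500 == 0:
            print(f"t = {step * 0.01:4.0f}   separation = {np.linalg.norm(a - b):.6f}")

The initial separation is tiny, but after a few simulated "time units" the two runs no longer resemble each other, which is why detailed weather forecasts lose reliability beyond a horizon of days.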

Landgrebe and Smith's theory explains both the important successes achieved by artificial intelligence systems on specific tasks and the limitations that emerge when, for example, ChatGPT is put to the test in 'extreme' contexts such as scientific or philosophical language.

 
