Prof. Paweł Skruch, AGH University and Aptiv Technical Centre: Fear of AI comes from lack of knowledge

‘We need to learn to use artificial intelligence competently and reasonably, because if we use it mindlessly or take at face value whatever the chatbot suggests, it will certainly do us no good,’ says Prof. Paweł Skruch, DSc Eng, from the Faculty of Automatics, Computer Science and Biomedical Engineering at AGH University, who is also a director and chief AI/ML engineer at an Aptiv Technical Centre.

The dynamic development of language models such as the popular ChatGPT is fuelling fear of AI among experts and the general public alike. From your perspective as someone who works on artificial intelligence professionally, is there anything to be afraid of?

When I hear these clarion calls about AI threatening various aspects of our lives in the near future, I never cease to wonder where the fear comes from. The people I work with, who try to apply AI-driven methods in practical solutions, mostly treat AI as a tool that offers, and will continue to offer, a wealth of opportunities; one that will help us solve various kinds of problems and contribute to the advancement of science and technology in the broadest sense. That is why I don’t think you should see AI as something potentially ‘bad’ that can turn our world upside down. We need to learn how to use AI effectively, because all the fears seem to stem from a lack of awareness of how these algorithms – and they are nothing but algorithms – function. There is currently a great deal of interest in ChatGPT-type solutions, but from a technical standpoint, neural-network-based solutions, of which ChatGPT is an example, have existed for many years and are in widespread use in many aspects of everyday life.

What about the labour market? Here, fears are being voiced that AI will eliminate the need for certain kinds of work currently done by people.

This process is inevitable. But that is the case with every new technology that lets us carry out particular activities faster, better, with less effort, and at lower cost. I am convinced that quite soon many jobs, especially those revolving around simple tasks, will be replaced by algorithms of this kind. But I wouldn’t be afraid. New technologies make some professions disappear, even as their development opens up brand new possibilities for people.

The fears we are discussing are being expressed in the form of various restrictions on AI. Bans on using generative AI are cropping up in some companies and universities. Elon Musk, together with a group of experts, has called for a temporary pause in work on language models to create space for reflection on their regulation. In your opinion, do we need such a break in AI development?

I am against any regulation of this sort, especially where the progress of technology is concerned, because no limitation will stop that progress in any way. If we try to force a halt in the development of areas meant to increase the power of AI-based algorithms, the result may be just the opposite. Regulation is needed, but in a slightly different context. If we want to use such algorithms in, for example, making key safety decisions, then regulation might turn out to be a necessity.

I mean, for instance, the situation in which AI finds a use in autonomous vehicles, where algorithms steer the car. This is an area that concerns the safety of the passenger and other road users. Another example would be using artificial intelligence to invest our money. Here, AI will suggest some solutions, but they may not be entirely correct, because that is how these algorithms work: they approximate decisions on the basis of the knowledge they have amassed. Regulations are thus needed to establish how such solutions must be verified and tested to ensure they work correctly.

Currently, we are limiting the AI debate to ChatGPT and its newer, enhanced versions. Doesn’t discussing the potential threats of artificial intelligence, understood in this narrow sense, obscure other – perhaps more important – fields of AI use?

This is definitely the case. We now see AI through the prism of solutions such as ChatGPT and skip over other fields where AI operates smoothly and is present in our everyday lives, even if we are unaware of it. There is plenty of artificial intelligence in any given smartphone and in the applications we use. There is more and more of it in the cars we travel in. AI is also present in devices such as coffee machines, fridges, and vacuum cleaners, and somehow we are not afraid of that. Let me stress once again that fear is the product of a lack of awareness.

What should this awareness-raising look like, in your opinion? Should we start by educating children in schools for the challenges of a world with AI?

By all means, especially as the progress of new technologies is changing the way knowledge is acquired, stored, and processed. Consequently, in my view, the teaching standards from primary school to secondary school are becoming outdated compared with what the new technological solutions offer. Hence, we need to change the whole way of thinking about and teaching the young generation in this regard. This applies to adults, too. We need to learn to use artificial intelligence competently and reasonably, because if we use it mindlessly or take at face value whatever the chatbot suggests, it will certainly do us no good. And a well-grounded fear is creeping in that mindless use of AI can bring real threats, as some decisions dictated by AI may simply not be valid. If we are not aware of this, we can find ourselves up against a wall. These issues are crucial, which is why the challenges of AI are going to be a topic of debate at Krynica Forum 2023.

Artificial intelligence, the internet of things, automation, 6G networks, and the related challenges are no doubt going to be among the key focus areas of Krynica Forum 2023. Prof. Paweł Skruch is slated to take part in the discussion.
