
At a time when artificial intelligence is conquering broad areas of the world of work, healthcare and finance, questions are increasingly being asked about the benefits of these algorithms and how they are designed, particularly as AI is already being entrusted with significant fields of activity. But how can the phenomenon of artificial intelligence be reconciled with responsibility? Business information scientist Prof. Dr. Marcus Becker uses illustrative examples to work through this question systematically. Part 1 of the three-part series looks at this topic:

The lack of transparency in AI processes

Marcus Becker, Professor of Quantitative Methods in Information Systems at ISM Dortmund, has been working with machine learning methods for many years.

What surprised you when analysing the learning methods of AI?

What surprised me most was that artificial intelligence, or, as it turns out in most cases, machine learning (ML), is the pure application of maths. As a mathematician, I had always asked myself: what is all this good for? Machine learning algorithms (MLAs) combine several areas of mathematics in an extremely elegant way: optimisation and numerics, but also foundational knowledge from stochastics, analysis and linear algebra. Maths is like a language that you have to learn if you want to move confidently in the field of machine learning.
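To make this tangible, here is a minimal illustrative sketch of how these areas of mathematics interlock in practice: fitting a line by gradient descent combines analysis (the gradient), linear algebra (the vector operations) and optimisation (the iterative update) in a few lines of NumPy. All values are invented for illustration.

```python
# A minimal sketch of gradient descent for least-squares regression:
# analysis supplies the gradient, linear algebra the vector operations,
# optimisation the iterative update rule.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))              # 100 samples, 2 features
y = X @ np.array([2.0, -1.0]) + 0.1 * rng.normal(size=100)

w = np.zeros(2)                            # initial parameters
lr = 0.1                                   # learning rate (step size)
for _ in range(200):
    grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of the mean squared error
    w -= lr * grad                         # gradient descent step

print(w)  # close to the true coefficients [2.0, -1.0]
```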

What also surprises me is how easy it has become to turn complex MLAs into programming code, for example in Python. It used to take several hundred lines of code; the initialisation of an artificial neural network (ANN), for example, can now be implemented in around ten lines, as the sketch below shows. With ChatGPT, users do not even need to be proficient in a programming language: a short prompt is enough to get reasonably usable code. Of course, this can be both a blessing and a curse.
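As an illustration, one possible way to initialise a small feed-forward ANN in roughly ten lines, here using the Keras API; the library and the layer sizes are merely example choices:

```python
# A small feed-forward ANN initialised in roughly ten lines,
# here with the Keras API as one of several possible libraries.
from tensorflow import keras

model = keras.Sequential([
    keras.layers.Input(shape=(4,)),               # 4 input features
    keras.layers.Dense(16, activation="relu"),    # hidden layer
    keras.layers.Dense(3, activation="softmax"),  # 3 output classes
])
model.compile(optimizer="adam", loss="categorical_crossentropy")
model.summary()
```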


Marcus Becker, Professor of Quantitative Methods in Information Systems at ISM Dortmund, analyses how light can be shed on the inner workings of algorithms.

In circles that place great hope in AI, there is a view that AI searches for the "best" facts and combines them, following the laws of probability, in a similarly optimal way; in other words, that it draws on existing knowledge and combines it "sensibly" along the lines of human intelligence, only faster. What should we make of this idea?

First of all, it must be said that neuroscience has not yet reached a conclusive understanding of what human intelligence actually is. However, there are established concepts of how neurons process information and interact with each other. Artificial neural networks attempt to build on these concepts and translate them into program code. It is therefore fundamentally difficult to define "artificial intelligence" when we do not yet have a conclusive definition of (human) "intelligence".

But the success of artificial neural networks basically vindicates these constructs: they cannot be too far off, otherwise the performance of these models would not be so high. A general problem that arises here is the comprehensibility of such complex models. ANNs belong to the so-called black-box algorithms (BBAs). This means that we cannot determine a priori exactly how the model will decide for a given input.
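The black-box character can be illustrated in code: even for a tiny trained network, the learned parameters are plain numeric matrices that fully determine every prediction yet do not read as human-interpretable rules. A sketch using scikit-learn on the classic Iris dataset:

```python
# Sketch: even a tiny trained network is a "black box" in the sense that
# its learned parameters are plain numeric matrices, not readable rules.
from sklearn.datasets import load_iris
from sklearn.neural_network import MLPClassifier

X, y = load_iris(return_X_y=True)
clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                    random_state=0).fit(X, y)

for layer, weights in enumerate(clf.coefs_):
    print(f"layer {layer}: weight matrix of shape {weights.shape}")
# These matrices fully determine every prediction, yet inspecting them
# does not tell us *why* a given input is classified the way it is.
```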


Uncertain environment with incomplete information

To address the question posed above in more detail, we need to find out which information the algorithm identifies as the "best facts" and which "weights" it uses to do so. Depending on the model, for example in the area of self-reinforcing algorithms (reinforcement learning, RL), the decision-maker operates in an uncertain environment with incomplete information. RL algorithms learn and improve through continuous interaction with their environment. Assessing the probabilities of how this interaction with the environment will unfold in the future naturally plays a decisive role here. Probabilities are also central to the large language models (LLMs) that are currently receiving so much attention, such as ChatGPT. Calculating probabilities, however, is always about events in random experiments.
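A much-simplified sketch of the RL idea (a two-armed bandit with invented reward probabilities): the agent tries actions, observes uncertain rewards and improves its value estimates purely through interaction with its environment.

```python
# Minimal sketch of reinforcement learning: an epsilon-greedy agent
# estimating action values from noisy rewards (a two-armed bandit).
import random

true_means = [0.3, 0.7]     # unknown to the agent: expected reward per action
q = [0.0, 0.0]              # the agent's running value estimates
counts = [0, 0]
epsilon = 0.1               # probability of exploring a random action

for step in range(1000):
    if random.random() < epsilon:
        a = random.randrange(2)             # explore
    else:
        a = q.index(max(q))                 # exploit the current best estimate
    reward = 1.0 if random.random() < true_means[a] else 0.0
    counts[a] += 1
    q[a] += (reward - q[a]) / counts[a]     # incremental mean update

print(q)  # the estimates approach the true means through interaction alone
```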

These are estimated from a large number of training examples with which the algorithms are fed; the more, the better. However, there is always the risk that the training data is subject to a certain bias. To give a small example: imagine you want to develop an AI that automatically reads applications and decides whether to invite the applicant to a job interview. To do this, the AI is trained on historical data consisting of applications and decision outcomes (rejected or invited). If the past decisions, which were primarily made by humans, discriminated against a certain gender, the algorithm picks this pattern up and carries the bias into its future predictions.
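This transfer of bias can be reproduced with a toy model on synthetic data (all numbers here are hypothetical): if gender correlates with past invitation decisions, a classifier trained on that history predicts different chances for otherwise identical applicants.

```python
# Toy illustration of bias transfer: synthetic "historical" hiring data in
# which gender correlates with the past decision. The classifier has no
# notion of fairness; it simply reproduces the pattern it was trained on.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000
gender = rng.integers(0, 2, n)      # encoded as 0 or 1
skill = rng.normal(size=n)          # equally distributed across genders
# Biased historical decisions: gender shifts the odds of an invitation.
invited = (skill + 1.5 * gender
           + rng.normal(scale=0.5, size=n) > 0.75).astype(int)

X = np.column_stack([gender, skill])
clf = LogisticRegression().fit(X, invited)

same_skill = 0.0
p0 = clf.predict_proba([[0, same_skill]])[0, 1]
p1 = clf.predict_proba([[1, same_skill]])[0, 1]
print(f"P(invite | gender=0) = {p0:.2f}, P(invite | gender=1) = {p1:.2f}")
# Equal skill, different predicted chances: the historical bias is inherited.
```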
