
The final part of this three-part series addresses questions about liability for AI methods, and this is where things get tricky. Prof. Dr. Markus Becker, Head of the Master's degree programs Business Intelligence & Data Science (on-campus) and Applied Business Data Science (online program), and his team found in their analyses that the lack of transparency of algorithmic systems makes the legal assessment of liability rules more difficult, as the example of automated investment advice (robo-advisors) shows:

The problem of causality in AI applications

Anyone who asks who is at fault in the case of AI-driven investment advice quickly reaches a dead end. Why is that?

With black box algorithms, the question of how decisions are made is often difficult to answer (the so-called opacity risk). Even the developer is often no longer able to find or determine the error. This creates a causality problem, which causes major difficulties for claimants: breaches of duty cannot be proven, and it remains unclear who bears the burden of proof in such cases. This is because AI systems lack an independent legal personality. Under the current legal situation, it is therefore necessary to fall back on the misconduct of the user who employs the black box algorithm, in accordance with Section 280 (1) of the German Civil Code (BGB).

Accordingly, the user would have to bear any damage caused to him or her by the AI. On this view, however, the developer of an AI application is not liable unless it can be proven that he or she made grossly negligent errors of judgement.


"It is often no longer even possible for the developer to find or determine the error," says business IT specialist Prof. Dr. Markus Becker, summarising his research.


Has the question of liability been forgotten, neglected or gradually pushed aside in the hope of cost savings?

Technical innovation and the adaptation of case law rarely go hand in hand. As is often the case, lawyers initially try to answer the question of breach of duty using existing case law. At some point, however, the existing case law no longer clearly applies, or can at least be interpreted in contradictory ways. Only then does the legislator react with appropriate adjustments.

The draft legislation issued by the EU Commission this week, known as the EU AI Act, is a first step in the right direction. In principle, standardised AI regulations are to be welcomed, but whether they will reduce the risks associated with the use of AI remains to be seen. It will be interesting to see whether future AI systems, such as artificial general intelligence, will have an independent legal personality.

Can the creators of AI instruments in the financial asset sector provide a remedy here, or have their instruments "taken on a life of their own"?

I would not yet say that such so-called robo-advisors have taken on a life of their own. As I already pointed out in 2021, machine learning algorithms are not yet used as frequently in automated asset management as one might think (Becker, M., Beketov, M., & Wittke, M. (2021). Machine Learning in Automated Asset Management Processes 4.1. Die Unternehmung, 75(3), 411-431).

Basically, we are also moving in a highly regulated area here. According to the principles of the EU directive on the harmonisation of financial markets, MiFID II, investment advice, whether human or automated, must always comply with the "suitability" principle. This means that pensioners cannot simply be sold "junk bonds".
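
To make the suitability principle tangible, here is a minimal sketch of how such a check might be implemented; the client attributes, risk classes and thresholds are purely illustrative assumptions and are not taken from MiFID II or from Becker's research.

```python
# Hypothetical suitability filter: block products whose risk exceeds the
# client's risk capacity. Attribute names and thresholds are illustrative only.
from dataclasses import dataclass

@dataclass
class Client:
    age: int
    risk_tolerance: int      # 1 = very conservative ... 5 = speculative
    investment_horizon: int  # years

@dataclass
class Product:
    name: str
    risk_class: int          # 1 = low risk ... 5 = highly speculative

def is_suitable(client: Client, product: Product) -> bool:
    # A product is only offered if its risk class does not exceed the client's
    # stated risk tolerance and fits the remaining investment horizon.
    if product.risk_class > client.risk_tolerance:
        return False
    if client.investment_horizon < 3 and product.risk_class >= 4:
        return False
    return True

pensioner = Client(age=72, risk_tolerance=2, investment_horizon=5)
junk_bond = Product(name="high-yield bond", risk_class=5)
print(is_suitable(pensioner, junk_bond))  # False: the product is filtered out
```

A real robo-advisor would apply the regulator's product risk classifications and a far more detailed client profile; the point of the sketch is only that suitability can be enforced as an explicit, auditable rule.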

Without transparency algorithms (XAI), however, the algorithms effectively interact with one another in an uncontrolled way. This can sometimes even lead to "flash crashes", as the past has shown. To name just two examples: algorithmic trading software contributed to the stock market crash of 19 October 1987, the so-called "Black Monday".

Another example is 6 May 2010, when the Dow Jones fell by over 1,000 points within eight minutes. This crash was triggered by a simple sell order from the trading house Waddell & Reed. This cannot even be described as AI; it was simply an automated sell order that was triggered when a certain threshold was reached.

My prediction is that if opaque black box algorithms are unleashed on the financial markets without any validation, flash crashes could repeat themselves at ever shorter intervals. Fortunately, there are already laws that allow certain financial instruments to be excluded from trading (see Section 73 WpHG / Securities Trading Act). This limits the potential for price slides.

What could a solution look like for AI-controlled investment with robo-advisors?

There are a large number of pre-implemented explanatory models (such as LIME and SHAP) that are virtually universally applicable. However, companies hardly use them, partly because they are not prescribed by law.
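
To illustrate how such an explanation method can be bolted onto an existing model, here is a minimal sketch using SHAP on a hypothetical risk classifier; the features, data and model are invented for the example and are not part of the study cited above.

```python
# Minimal sketch: explaining a hypothetical robo-advisor risk classifier with SHAP.
# The features, data and model below are illustrative assumptions.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)
feature_names = ["age", "income", "loss_tolerance", "investment_horizon"]
X = rng.normal(size=(500, 4))
# Hypothetical ground truth: risk appetite depends mainly on loss tolerance and horizon.
y = (X[:, 2] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# SHAP attributes each individual prediction to the input features,
# turning the black box into a per-client explanation.
explainer = shap.Explainer(model, X, feature_names=feature_names)
explanation = explainer(X[:5])

# Per-feature contributions for the first client (exact shape depends on the SHAP version).
print(explanation[0].values)
```

The resulting per-client attributions can then be visualised (for example with shap.plots.waterfall) or handed to a compliance specialist for review.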

Given the "suitability" requirement of the MiFID II directive mentioned above, the use of transparency algorithms is, in my opinion, absolutely necessary. They would increase user confidence, reveal the associations between input and output, and lead to reliable and fair robo-advisor (RA) systems that also comply with current data protection regulations and thus protect the identity of the user.

Without the explainability provided by transparency algorithms, the verifying specialist can neither interpret nor validate the system.
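
One way such a validation step might look in practice: the specialist checks whether the direction of each feature's attribution matches economic expectations. The sketch below continues the hypothetical SHAP example above; the expected signs encoded in it are assumptions made purely for illustration.

```python
# Hypothetical plausibility check on the SHAP explanations from the sketch above:
# do the attributions point in the economically expected direction?
import numpy as np

# Expected direction of influence on a "risk-taking" classification (assumptions).
expected_sign = {"loss_tolerance": +1, "investment_horizon": +1}

# Average contribution of each feature; reduce to the positive class if SHAP
# returns one column per class.
values = np.asarray(explanation.values)
if values.ndim == 3:
    values = values[:, :, -1]
mean_attribution = values.mean(axis=0)

for name, sign in expected_sign.items():
    idx = feature_names.index(name)
    status = "plausible" if np.sign(mean_attribution[idx]) == sign else "needs review"
    print(f"{name}: {status}")
```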

In practice, the right to explanation sometimes gives way to the right to be forgotten - what do you think about this?

Our recommendations for the use of transparency algorithms are based on five quality criteria: trust, associations, reliability, fairness and identity.

  1. Trust is generated by developers minimising the number of false-positive predictions. A false-positive prediction could, for example, lead to a risk-averse investor being incorrectly categorised as a risk-taker (a minimal sketch of such a check follows this list).
  2. Associations are needed to separate plausible from implausible correlations between the various input and output variables of black box algorithms. These can then be checked for economic credibility. For example, a robo-advisor provider would have to explain why the algorithm does not increase the proportion of equity ETFs again in phases of economic upswing.
  3. Reliability of the systems ensures that the robustness of a model is tested in different data environments. For robo-advisors, this means that they also make "good" investment decisions in different market situations, i.e. in line with investors' risk/return preferences. It will be interesting to analyse how robo-advisors have navigated recent times of crisis.
  4. Every investment model should also be ethically justifiable, i.e. it should not discriminate on the basis of religion, gender, class or race when investing. That is what we call fairness.
  5. Finally, the black box algorithm should preserve the identity of the investors, i.e. it should not be possible to draw conclusions about a person from the investment decisions.
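
To make the first and fourth criteria concrete, here is a small hypothetical sketch that measures the false-positive rate of the risk classifier from the earlier SHAP example and compares it across two invented client groups; the group attribute and the interpretation are assumptions made for illustration.

```python
# Hypothetical check of two quality criteria for the classifier above:
# trust (low false-positive rate) and fairness (similar error rates across groups).
import numpy as np

pred = model.predict(X)

# Trust: how often are actually risk-averse clients (y == 0) wrongly labelled risk-taking?
false_positive_rate = np.mean(pred[y == 0] == 1)
print(f"false-positive rate: {false_positive_rate:.2%}")

# Fairness: compare the false-positive rate across a hypothetical protected attribute.
group = rng.integers(0, 2, size=len(y))  # 0/1 group membership, invented for the sketch
for g in (0, 1):
    mask = (y == 0) & (group == g)
    rate = np.mean(pred[mask] == 1) if mask.any() else float("nan")
    print(f"group {g}: false-positive rate {rate:.2%}")
```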

Is strict liability feasible?

At best, investors harmed by AI-driven investment advice can assert a claim for damages for breach of duty against the financial service provider. It is debatable whether the operator of the robo-advisor can be held responsible for a programming error. Particularly in the case of black box algorithms, which no longer rely on explicit programming logic such as simple if-then relationships, it is very difficult to identify an "actual" error in the implementation. The error could merely lie in the improper handling of the input parameters.

If liability of the manufacturer is ruled out due to a lack of infringement of legal interests, the introduction of independent strict liability for the operator of robo-advisors within the framework of product liability would be welcome.
