Rise and Regulation of Algorithms

Algorithms increasingly have an impact on our everyday lives; it will therefore become more and more important for us to understand what they actually do and to think about governance approaches.

 

By Prof. Dr. iur. Melinda Lohmann*

 

Our everyday lives are increasingly being impacted by the extensive use of artificial intelligence and self-learning algorithms. Algorithms already influence the way we form our opinions, and they take more and more decisions independently of us.

 

There is no doubt that algorithmization is a force for innovation. At the same time, the process carries considerable social risks. We are witnessing a growing digital paternalism that also poses a risk to what has been termed people’s “informational self-determination” (the German Federal Constitutional Court ruled that this encompasses the protection of the individual against unlimited collection, storage, use and disclosure of his or her personal data). It is particularly the extent to which personal data are collected and exchanged that raises questions from the perspective of data-protection law.

 

Moreover, many instances of discrimination on the basis of algorithms have come to light in recent years (so-called “algorithmic bias”). In one instance, women were shown online advertisements for highly paid jobs on news websites significantly less often than men; in another, the search for “professional hair” led to images of white women, whereas the search for “unprofessional hair” led to images of black women. It would be an oversimplification, however, to say that the search engine Google is sexist or racist: an algorithm simply reproduces and amplifies existing social bias, and its training data are themselves shaped by these prejudices. This implicit bias is especially questionable where it affects individuals adversely, e.g. in the automated processing of online loan applications or in online recruitment procedures.
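To make that mechanism concrete, the following minimal sketch (purely illustrative, using synthetic data and a hypothetical hiring scenario, not any real system) shows how a standard classifier trained on biased historical decisions learns to score otherwise identical applicants differently:

```python
# Illustrative sketch only: synthetic "historical hiring" data in which past
# human decisions favoured one group; a model trained on those decisions
# reproduces the bias as if it were a legitimate feature.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, n)        # 0 = group A, 1 = group B (hypothetical attribute)
skill = rng.normal(0, 1, n)          # a neutral qualification score

# Historical decisions favoured group A even at equal skill (the embedded bias).
hired = (skill + 0.8 * (group == 0) + rng.normal(0, 0.5, n)) > 0.8

model = LogisticRegression().fit(np.column_stack([group, skill]), hired)

# Two applicants with identical skill but different group membership:
applicants = np.array([[0, 1.0], [1, 1.0]])
print(model.predict_proba(applicants)[:, 1])  # group A receives the higher score
```

Nothing in this code is itself “sexist”; the disparity comes entirely from the historical labels the model was trained on, which is precisely why the composition of training data matters.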

 

Given these risks, there is growing pressure for algorithms to become transparent. From a legal perspective, transparency is relevant in a number of ways: if it were clear how self-learning systems work, issues of liability could be resolved more easily, and affected persons would find it easier to pursue their claims. This will become an important issue in the case of accidents involving autonomous vehicles.

 

From the perspective of data-protection law, the issue of transparency is a key factor in putting “informational self-determination” into practice, particularly with regard to algorithmic decisions. It has repeatedly been claimed that the European General Data Protection Regulation (GDPR) provides for a “right to explanation.” However, the wording of the regulation is very general and its provisions are open to interpretation. First of all, one needs to clarify what a right to explanation should consist of: it could conceivably comprise a person’s right to have the various functions of a system explained to them (ex ante or ex post), or it might include their right to have an individual decision explained (as a rule ex post). The GDPR provides solely for the former. However, as algorithmic decisions become more complex and their consequences more serious, such a general form of explanation will not be sufficient to provide adequate legal protection. In the interests of legal certainty, this lack of clarity in the GDPR should be resolved as soon as possible.

At the same time, various aspects need to be taken into consideration when discussing a potential right to explanation, in particular its impact on hubs of innovation. A comprehensive right to explanation might well hinder innovation. Compared with the United States or Asia in particular, the level of data protection potentially applied within Europe might undermine the competitiveness of European companies. Particular attention should be paid to the trade-off between the right to explanation, on the one hand, and the trade secrets and intellectual property rights of those employing the algorithms, on the other. The latter might be infringed by an obligation to disclose an algorithmic process, particularly if this had to be done in great detail. An example is the scoring algorithm used by the German private credit bureau Schufa: this algorithm is a key element of the company’s business.

 

It cannot be taken for granted that such a right to explanation could even be implemented from a technical perspective, given that self-learning systems are by their very nature constructed as “black boxes”. The machine learning (ML) community has been working on solutions to this issue for some time, for example within the framework of the “Explainable AI” (XAI) program started by the US Defense Advanced Research Projects Agency (DARPA) in 2016. Ways of making opaque ML models transparent and explainable are being discussed under the headings “Explainable AI” and “Explainable ML”. A practical solution could look like this: individual decisions would be explained to the user; the general strengths and weaknesses of the model would also be explained; and people would be provided with information enabling them to understand how the system will act in the future. One critical point with this approach is whether a balance can be achieved between the predictive accuracy of a system and its transparency. Artificial neural networks and deep learning processes, for instance, have high predictive accuracy but are (still) difficult to understand.
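By way of illustration, the sketch below (my own simplified example, not the DARPA program’s method; the model, the data and the loan-style framing are purely hypothetical) shows one widely discussed explainable-ML idea: approximating a black-box model’s behaviour in the neighbourhood of a single decision with a simple linear surrogate, whose coefficients indicate which factors pushed that particular decision up or down:

```python
# Minimal sketch of a post-hoc, per-decision explanation: query a black-box
# model around one instance and fit an interpretable surrogate to its answers.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LinearRegression

# A "black box" trained on synthetic, loan-style data (hypothetical features).
X, y = make_classification(n_samples=2000, n_features=4, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

def explain_locally(instance, n_samples=500, scale=0.3):
    """Fit a linear surrogate to the black box near a single instance.

    The surrogate's coefficients show which features locally pushed this
    particular decision up or down - an ex-post explanation of one outcome.
    """
    rng = np.random.default_rng(0)
    neighbourhood = instance + rng.normal(0, scale, (n_samples, instance.size))
    scores = black_box.predict_proba(neighbourhood)[:, 1]
    surrogate = LinearRegression().fit(neighbourhood, scores)
    return surrogate.coef_

applicant = X[0]
print("black-box score:", black_box.predict_proba([applicant])[0, 1])
print("local feature influence:", explain_locally(applicant))
```

The surrogate does not open the black box; it merely approximates its behaviour for one decision, which is why the trade-off between a model’s predictive accuracy and the faithfulness of such explanations remains an open question.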

 

Further governance approaches might include developing a testing procedure for the approval of complex algorithms used in sensitive areas; such an approval procedure could make the way algorithms work more transparent. Another option would be the introduction of a “labeling requirement” for certain algorithms. Such solutions could help to build confidence and might increase transparency, but they would come with a high administrative burden and would also restrict entrepreneurial freedom. A new digital anti-discrimination law, or the amendment of existing laws such as the German General Act on Equal Treatment (Allgemeines Gleichbehandlungsgesetz), would be a further possibility, with a regulatory authority tasked with monitoring the application of the law. In view of the uncertainties involved, however, the practicability of such an approach is questionable: drafting the details of such a law would be difficult, and the resulting law would run the risk of being outdated by the time it was implemented.

 

A promising approach is to improve algorithms at the development stage by means of organizational measures, for example by making development teams more diverse. Industry-wide self-regulation of self-learning algorithms, covering not only their development but also their use, could also prove beneficial and might gain wider acceptance.

 

From the perspective of society, we need to clarify to what extent we want to let algorithms control our lives. An information and awareness-raising campaign would be beneficial in this respect. There is also an urgent need for action within the field of education: digital literacy will be a vital ingredient in any future success. Children need to be introduced to computers and information technology at an early age, and they need to learn to use such technology in a responsible, independent and informed manner.

 

Artificial intelligence is probably one of the most transformative innovations in human history. We urgently need an informed and sober debate on the technology; otherwise there is a real danger of algorithmization turning into an algocracy.

 

 

*Professor Melinda F. Lohmann is Assistant Professor for Information Law and Director of the Research Center for Information Law (FIR-HSG) at the University of St. Gallen (Switzerland).

 

This article is based on an article published by the author in “Frankfurter Allgemeine (F.A.Z.), Einspruch” on August 22, 2018.