The Montreal Declaration for a responsible development of artificial intelligence (2018)

Initiated in 2017/18 by researchers at the Université de Montréal and developed with the participation of more than 500 citizens, scholars from the social sciences and humanities, computer science experts, and stakeholders, the Montreal Declaration is intended as a "starting point for an open and inclusive conversation surrounding the future of humanity being served by artificial intelligence technologies." In my view, it also works well as a structural template and a source of substantive ideas for the discussion that media education, too, needs to conduct intensively. The overarching goal of the Montreal Declaration is to bring expert influence to bear on the development of binding guidelines and regulations, which it regards as necessary.

"The principles of the current declaration are like points on a moral compass that will help guide the development of artificial intelligence towards morally and socially desirable ends." (S. 5)

The declaration formulates ten principles, each of which is elaborated in several sub-items: human well-being, respect for autonomy, protection of privacy, solidarity, democratic participation, equity, diversity, prudence, responsibility, and sustainable development:

  • WELL-BEING PRINCIPLE
    The development and use of artificial intelligence systems (AIS) must permit the growth of the well-being of all sentient beings.
  • RESPECT FOR AUTONOMY PRINCIPLE
    AIS must be developed and used while respecting people’s autonomy, and with the goal of increasing people’s control over their lives and their surroundings.
  • PROTECTION OF PRIVACY AND INTIMACY PRINCIPLE
    Privacy and intimacy must be protected from AIS intrusion and data acquisition and archiving systems (DAAS).
  • SOLIDARITY PRINCIPLE
    The development of AIS must be compatible with maintaining the bonds of solidarity among people and generations.
  • DEMOCRATIC PARTICIPATION PRINCIPLE
    AIS must meet intelligibility, justifiability, and accessibility criteria, and must be subjected to democratic scrutiny, debate, and control.
  • EQUITY PRINCIPLE
    The development and use of AIS must contribute to the creation of a just and equitable society.
  • DIVERSITY INCLUSION PRINCIPLE
    The development and use of AIS must be compatible with maintaining social and cultural diversity and must not restrict the scope of lifestyle choices or personal experiences.
  • PRUDENCE PRINCIPLE
    Every person involved in AI development must exercise caution by anticipating, as far as possible, the adverse consequences of AIS use and by taking the appropriate measures to avoid them.
  • RESPONSIBILITY PRINCIPLE
    The development and use of AIS must not contribute to lessen the responsibility of human beings when decisions must be made.
  • SUSTAINABLE DEVELOPMENT PRINCIPLE
    The development and use of AIS must be carried out so as to ensure a strong environmental sustainability of the planet.

As an example of how the principles are elaborated and differentiated into sub-items, here is the democratic participation principle in full:

DEMOCRATIC PARTICIPATION PRINCIPLE
AIS must meet intelligibility, justifiability, and accessibility criteria, and must be subjected to democratic scrutiny, debate, and control.

  1. AIS processes that make decisions affecting a person’s life, quality of life, or reputation must be intelligible to their creators.
  2. The decisions made by AIS affecting a person’s life, quality of life, or reputation should always be justifiable in a language that is understood by the people who use them or who are subjected to the consequences of their use. Justification consists in making transparent the most important factors and parameters shaping the decision, and should take the same form as the justification we would demand of a human making the same kind of decision.
  3. The code for algorithms, whether public or private, must always be accessible to the relevant public authorities and stakeholders for verification and control purposes.
  4. The discovery of AIS operating errors, unexpected or undesirable effects, security breaches, and data leaks must imperatively be reported to the relevant public authorities, stakeholders, and those affected by the situation.
  5. In accordance with the transparency requirement for public decisions, the code for decision-making algorithms used by public authorities must be accessible to all, with the exception of algorithms that present a high risk of serious danger if misused.
  6. For public AIS that have a significant impact on the life of citizens, citizens should have the opportunity and skills to deliberate on the social parameters of these AIS, their objectives, and the limits of their use.
  7. We must at all times be able to verify that AIS are doing what they were programmed for and what they are used for.
  8. Any person using a service should know if a decision concerning them or affecting them was made by an AIS.
  9. Any user of a service employing chatbots should be able to easily identify whether they are interacting with an AIS or a real person.
  10. Artificial intelligence research should remain open and accessible to all.

The complete paper can be downloaded as a PDF file.

On this, see also the compelling plea by Yoshua Bengio, one of the initiators.

Addendum: see also the interview with him from April 4, 2019: "The dangers of abuse are very real".