

Charlotte Högberg

Doctoral student


Stabilizing Translucencies: Governing AI transparency by standardization

Author

  • Charlotte Högberg

Summary, in English

Standards are put forward as important means to turn the ideals of ethical and responsible artificial intelligence into practice. One principle targeted for standardization is transparency. This article attends to the tension between standardization and transparency, by combining a theoretical exploration of these concepts with an empirical analysis of standardizations of artificial intelligence transparency. Conceptually, standards are underpinned by goals of stability and solidification, while transparency is considered a flexible see-through quality. In addition, artificial intelligence technologies are depicted as ‘black boxed’, complex and in flux. Transparency as a solution for ethical artificial intelligence has, however, been problematized. In the empirical sample of standardizations, transparency is largely presented as a static, measurable, and straightforward information transfer, or as a window to artificial intelligence use. The standards are furthermore described as pioneering and able to shape technological futures, while their similarities suggest that artificial intelligence translucencies are already stabilizing into similar arrangements. To rely heavily upon standardization to govern artificial intelligence transparency still risks allocating rule-making to non-democratic processes, and while intended to bring clarity, the standardizations could also create new distributions of uncertainty and accountability. This article stresses the complexity of governing sociotechnical artificial intelligence principles by standardization. Overall, there is a risk that the governance of artificial intelligence becomes overly shaped by technological solutionism, allowing the standardization of social values (or even human rights) to be carried out in the same manner as that of any other technical product or procedure.

Department/s

  • Department of Technology and Society
  • AI and Society

Publication date

2024-02-25

Language

English

Publication/Journal/Series

Big Data and Society

Volume

11

Issue

1

Document type

Journal article

Publisher

SAGE Publications

Subject

  • Sociology (excluding Social Work, Social Psychology and Social Anthropology)
  • Information Studies
  • Information Systems, Social aspects

Keywords

  • Artificial Intelligence
  • Algorithms
  • Transparency
  • Standards
  • Governance
  • Uncertainty
  • Standardization

Status

Published

Projects

  • AI in the Name of the Common Good - Relations of data, AI and humans in health and public sector
  • AIR Lund - Artificially Intelligent use of Registers

Research group

  • AI and Society

ISBN/ISSN/Other

  • ISSN: 2053-9517