Nordic Trust Model for AI Systems
The Nordic Trust Model for AI Systems is a service that supports primarily public organizations in delivering AI-enabled services ethically, responsibly, and transparently. In this beta version of the trust model, verification is primarily supported through self-assessment and analysis.
The model is developed by the Swedish Agency for Digital Government (Digg), the Norwegian Digitalisation Directorate (Digdir), and the Finnish Digital and Population Data Services Agency (DVV).
The beta version was developed as part of a project funded by the Nordic Council of Ministers.
Is there sufficient AI competence?
Assess the organization's ability to develop, procure, and use AI services, and identify any competence gaps.
Is AI used ethically and responsibly?
Consider what is required for ethical and responsible AI and clarify roles and governance.
Is there sufficient transparency about the AI system?
Ensure sufficient transparency and access to relevant information about the AI system and the responsible organization.
