Trust Model

Assess your AI system

General information

Type of AI system

Legal and ethics

Which groups will be affected by the AI system?
Will individuals and companies be informed about how the AI system works?
Is it legally clarified that you can use the data you want to use when developing, training, or using your AI system?
Do you process sensitive personal data when developing, training, or using your AI system?
Is there a risk that developers, AI trainers, or other staff handling data could influence the AI system so that its outcomes discriminate against someone on grounds protected under anti-discrimination law?
Does the AI system or service involve social scoring (ranking individuals based on their behavior, attributes, or traits)?
Does the AI system use biometric identification (e.g., facial recognition) in public places for real-time surveillance?
The AI system is used within:
Are there any licenses associated with the AI systems or AI models included in the IT system?

Data and information security

Do you know what data and information the AI system consumes and generates, from an information security perspective?

System information

Are you aware of your role and responsibilities as an organization for the AI system, under the EU AI Act?
Are there systems and software in your environment that interact with the AI system and affect the generated outcome?

Data and information security

Will collected data and/or generated results be shared with third parties?
Are processes and solutions in place to prevent data from being used or misused by others?
How is data stored?
Are there solutions to prevent misuse of information and data held in, or produced by, the AI system?
Which methods are used to protect data?
Are there processes and solutions for deleting data when needed?
Is there a risk that the data collected and used for the AI system contains bias that could lead to discrimination or unfairness? (One way to screen for this is sketched below.)
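
As a concrete illustration of how such a bias screening might look, the following is a minimal sketch in Python. It is not part of the trust model itself; the column names ("group", "outcome"), the demographic-parity metric, and the 0.1 warning threshold are all illustrative assumptions.

```python
# Minimal sketch: compare positive-outcome rates across groups in a
# dataset to flag potential bias (demographic parity gap).
# Column names and the 0.1 threshold are illustrative assumptions,
# not requirements of the trust model.
import pandas as pd

def demographic_parity_gap(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Return the largest difference in positive-outcome rate between groups."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return float(rates.max() - rates.min())

if __name__ == "__main__":
    data = pd.DataFrame({
        "group":   ["A", "A", "A", "B", "B", "B"],
        "outcome": [1,   1,   0,   1,   0,   0],  # 1 = favourable decision
    })
    gap = demographic_parity_gap(data, "group", "outcome")
    print(f"Demographic parity gap: {gap:.2f}")
    if gap > 0.1:  # illustrative threshold
        print("Warning: outcome rates differ notably between groups.")
```

A gap near zero does not prove the data is fair, but a large gap is a signal that the question above deserves closer scrutiny.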

Model and performance

Is the AI system updated automatically with new data?
Does the AI system produce the same result if it receives exactly the same input? (A minimal reproducibility check is sketched after this list.)
How do you measure whether the AI system has been trained sufficiently?
Have user tests been conducted with documented results?
Is there a systematic way to ensure traceability and monitoring of the AI system and its results?
Are there plans and processes for how to act when the AI system does not work as intended?
Is the AI system optimized after the initial training and testing?
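
To make the determinism question above concrete, here is a minimal sketch of a reproducibility check in Python. The toy numpy "model" and its seed parameter are stand-in assumptions; in practice you would call your own system's prediction interface twice with identical input and compare the outputs.

```python
# Minimal sketch: test whether a model returns identical output for
# identical input. The toy numpy "model" below is an assumption; swap in
# your own prediction call.
import numpy as np

def predict(x: np.ndarray, seed: int | None = None) -> np.ndarray:
    """Toy stochastic model: fixed linear layer plus random noise."""
    rng = np.random.default_rng(seed)  # seeded -> reproducible noise
    weights = np.array([0.5, -1.2, 0.3])
    noise = rng.normal(scale=1e-3, size=x.shape[0])
    return x @ weights + noise

x = np.array([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])

# With a fixed seed the two runs match exactly; without one they differ.
print("Seeded runs equal:  ", np.array_equal(predict(x, seed=42), predict(x, seed=42)))
print("Unseeded runs equal:", np.array_equal(predict(x), predict(x)))
```

Systems with intentionally random components (e.g., sampling temperature or dropout at inference time) will fail the unseeded comparison; documenting that behavior is part of answering the question.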

Transparency and control

Are users informed that they are interacting with an AI?
Is it disclosed that generated content was created by an AI system?
Is the AI system used for decision-making?
Is the AI system designed to provide explanations for its decisions?
Is there a process for informing users or affected individuals when AI is used in decision-making?