EU AI Act

AI Inventory

A key element of the EU AI Act is the requirement for organisations to maintain an inventory of all AI systems in use. The AI and Automation CoE is responsible for providing and maintaining the HSE’s AI Inventory.

AI Inventory Form

  1. HSE National Directors and Regional Executive Officers must work with their teams to identify all AI solutions currently in use within their respective regions, services or functions.
  2. Each AI solution must have an assigned solution owner, who is responsible for the ongoing maintenance of that solution's inventory record.
  3. Solution owners must contact the AI and Automation CoE to request access to the AI Inventory form.
  4. Once access is granted, the solution owner is responsible for registering their AI solutions and for keeping the records up to date thereafter, to maintain compliance with the EU AI Act and other relevant regulations (a sketch of the kind of information such a record might capture follows this list).
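
For illustration, the sketch below shows the kind of information an AI inventory record might capture. It is a minimal sketch using hypothetical field names and values; the actual fields are those defined in the AI and Automation CoE's AI Inventory form.

    # Illustrative sketch only: field names and values are assumptions,
    # not the actual fields of the CoE's AI Inventory form.
    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class AIInventoryRecord:
        solution_name: str        # name of the AI solution
        solution_owner: str       # person accountable for this record
        service_or_function: str  # region, service or function using it
        risk_category: str        # e.g. "high", "limited", "minimal"
        purpose: str              # what the system is used for
        last_reviewed: date       # when the record was last updated

    # Hypothetical example entry
    record = AIInventoryRecord(
        solution_name="Appointment scheduling assistant",
        solution_owner="Jane Doe",
        service_or_function="Outpatient services",
        risk_category="limited",
        purpose="AI-powered appointment scheduling for clinics",
        last_reviewed=date(2025, 1, 15),
    )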

Key points of the EU AI Act include:

  1. Risk-based approach: obligations scale with the level of risk an AI system poses, with the strictest requirements applying to high-risk uses such as biometric identification or critical infrastructure applications.
  2. Transparency: High-risk AI systems must meet transparency requirements, providing clear information to users.
  3. Accountability: Developers and users of high-risk AI must ensure accountability, including maintaining logs and providing explanations for decisions made by AI systems (a sketch of such a log entry follows this list).
  4. Human oversight: Certain AI applications, especially in high-risk categories, must have human oversight to prevent harm or misuse.
  5. Penalties: Non-compliance with the Act can result in significant penalties, including fines.
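
As an illustration of the logging point above, the sketch below shows one way a decision-log entry for a high-risk system could be recorded. It is a minimal sketch: the function, field names, and values are hypothetical assumptions, not a format prescribed by the EU AI Act.

    # Illustrative sketch only: not a format prescribed by the EU AI Act.
    import json
    from datetime import datetime, timezone
    from typing import Optional

    def log_ai_decision(system_id: str, inputs: dict, output: str,
                        explanation: str,
                        human_reviewer: Optional[str]) -> str:
        """Serialise one AI decision as a JSON log line for audit."""
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "system_id": system_id,
            "inputs": inputs,                  # data the system acted on
            "output": output,                  # decision or recommendation
            "explanation": explanation,        # rationale offered to the user
            "human_reviewer": human_reviewer,  # supports human oversight
        }
        return json.dumps(entry)

    # Hypothetical example
    print(log_ai_decision(
        system_id="triage-support-v2",
        inputs={"symptom_code": "R07.4"},
        output="recommend urgent review",
        explanation="symptom pattern matched high-acuity cohort",
        human_reviewer="on-call clinician",
    ))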

EU AI Act Risk Categories 

  1. Unacceptable Risk - AI systems that pose a clear threat to safety, rights, or fundamental values are banned under this category. E.g. social scoring, real-time remote biometric identification in public spaces.
  2. High Risk - AI systems that have a significant impact on safety or fundamental rights, requiring strict compliance with safety, transparency, and accountability standards. E.g. diagnostic tools, clinical decision support.
  3. Limited Risk - AI systems with moderate impact, requiring transparency but less stringent oversight. These systems might not directly endanger health but still require accountability. E.g. AI chatbots, AI-powered appointment scheduling.
  4. Minimal Risk - AI systems with negligible or no impact on safety or rights. These systems are encouraged to adhere to ethical guidelines but require no specific regulatory compliance. E.g. workforce scheduling, fitness apps. (A simplified sketch of these tiers follows this list.)
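
The sketch below restates the four tiers as a simple mapping from category to the broad obligation described above. It is illustrative only, with hypothetical example systems, and is not a legal classification tool.

    # Simplified illustration of the EU AI Act's risk tiers as described
    # above; not a legal classification tool.
    from enum import Enum

    class RiskCategory(Enum):
        UNACCEPTABLE = "banned outright"
        HIGH = "strict safety, transparency and accountability obligations"
        LIMITED = "transparency obligations"
        MINIMAL = "no specific obligations; ethical guidelines encouraged"

    # Hypothetical examples drawn from the list above
    examples = {
        "social scoring": RiskCategory.UNACCEPTABLE,
        "clinical decision support": RiskCategory.HIGH,
        "AI chatbot": RiskCategory.LIMITED,
        "workforce scheduling": RiskCategory.MINIMAL,
    }

    for system, category in examples.items():
        print(f"{system}: {category.name} -> {category.value}")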

The EU AI Act is designed to foster innovation while ensuring public safety, fundamental rights, and trust in AI technologies. 
