2–3 minute check: is your AI use case likely high-risk under the EU AI Act?

A quick, non-exhaustive check to help you understand where your AI use case might sit.

Does the AI influence decisions about a person or their rights/benefits?

This is the strongest single indicator of a likely high‑risk use case.

In which area does this system operate? (select 1–2)

Roughly mapped to Annex III categories for high‑risk AI systems.

Does the system use biometrics, such as facial recognition, biometric identification, or emotion recognition?

Is the AI a “safety component” of a regulated product (e.g. medical, automotive, aviation)?

Does the system run in an enterprise / regulated environment where customers expect evidence (SOC 2 / ISO / vendor questionnaires)?

Does the end user interact directly with the AI (chat/agent) or see AI‑generated content that could look human?

Does the AI operate on sensitive data (health, biometrics, children, political views) or large‑scale personal data?

Do model / prompt / tool versions change frequently (e.g. weekly) without a formal change process?

Do you have monitoring and logs (who/when/what was generated/changed) plus a basic incident response process?
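Taken together, the questions above amount to an informal scoring heuristic. A minimal sketch of how such yes/no answers could be combined into a rough risk indication follows; all question keys, weights, and thresholds here are illustrative assumptions for this sketch, not classifications defined by the EU AI Act:

```python
# Illustrative only: the weights and thresholds below are assumptions
# for this sketch, not criteria defined by the EU AI Act.
def likely_risk_level(answers: dict) -> str:
    """Map yes/no answers from the check above to a rough risk indication."""
    # Strong indicators, roughly aligned with Annex III-style triggers.
    strong = [
        "influences_decisions_about_persons",
        "uses_biometrics",
        "safety_component_of_regulated_product",
    ]
    # Softer indicators about context, data, and change management.
    soft = [
        "enterprise_regulated_environment",
        "direct_user_interaction",
        "sensitive_or_large_scale_personal_data",
        "frequent_uncontrolled_changes",
    ]
    score = 2 * sum(bool(answers.get(q)) for q in strong)
    score += sum(bool(answers.get(q)) for q in soft)
    # Monitoring, logging, and incident response lower operational risk.
    if answers.get("monitoring_and_incident_response"):
        score -= 1
    if score >= 4:
        return "likely high-risk"
    if score >= 2:
        return "possibly high-risk"
    return "likely lower risk"
```

For example, answering "yes" to both the decision-influence and biometrics questions would already push the sketch into the "likely high-risk" band. A real assessment would of course map answers to the actual Annex III categories rather than a numeric score.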

Share results and get a short follow‑up

By submitting, you agree that we may contact you about this assessment in line with our Privacy Policy.