Exclusively for military purposes
Used by public authorities for international law enforcement or judicial cooperation
Used solely for research and development
None of the above
Public real-time remote biometric identification by law enforcement
Law enforcement agents are prohibited from using AI for real-time facial recognition or other remote biometric identification in public spaces
Social scoring by public authorities
Public authorities are prohibited from scoring individuals based on behavior, socioeconomic status, or other personal characteristics
Subliminal techniques that distort human behavior
AI systems are forbidden from influencing behavior using hidden elements in sounds, pictures, or videos that are too subtle for humans to notice
Cognitive behavioral manipulation of vulnerable groups
The EU defines vulnerable groups as children, people with disabilities, migrants, asylum seekers, and refugees
To manage, operate, or manufacture safety components for public utilities
For example, to make flame sensors or to monitor combustion in gas tanks
To grant, reduce, revoke, or reclaim public benefits
For example, to determine the amount of medical reimbursements
To evaluate creditworthiness
For example, to estimate credit scores
To prioritize emergency first response dispatch
For example, to send firefighters or medical aid
To assess eligibility for public services
For example, to determine eligibility for housing subsidies
To recruit individuals for job positions
For example, to advertise vacancies, screen applications, or evaluate job interviews
To make talent management decisions
For example, to determine promotions or allocate tasks in a work setting
To determine test scores or access to education
For example, to analyze test scores to assign remedial coursework
To make individual risk assessments
For example, to predict the likelihood that an offender will reoffend
To predict the risk of future criminal offenses
For example, to create a heatmap of likely future property crimes
To detect deep fakes
For example, to identify videos that appear convincingly real but are actually fabricated by AI
To evaluate the reliability of evidence
For example, to determine the authenticity of digital evidence
To identify hidden relationships in large, complex data sets for crime analytics
For example, to analyze financial transactions to uncover links between individuals
To detect emotional or physiological responses
For example, to analyze polygraph results
To assess risks posed by individuals entering a country
For example, to determine the risk of overstaying a tourist visa
To verify the authenticity of travel documentation
For example, to determine whether a visa recommendation letter is forged
To examine applications for asylum, visas, or residence permits
For example, to prioritize visa applications or monitor complaints
To research, interpret, or apply the law
For example, to create legal briefings
Emotion recognition
For example, to automatically analyze facial expressions to gauge customer sentiment
Biometric categorization
For example, to scan human fingerprints against a database for security access
Generating or manipulating media content
For example, to create text, audio, or visual images of fake events
Direct human interaction
For example, to enable a customer service chatbot
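Taken together, these lists form a tiered taxonomy: practices that are banned outright, use cases treated as high-risk, and uses that only trigger transparency obligations. The Python sketch below is a minimal illustration of that structure as a lookup table; the tier names, the RISK_TIERS dictionary, and the classify_use_case helper are assumptions made for this summary, not terminology defined by the Act.

# Illustrative sketch only: encodes the tiers summarized above as a lookup
# table. Tier names and the classify_use_case helper are assumptions made
# for this summary, not terminology from the EU AI Act itself.

RISK_TIERS = {
    "prohibited": [
        "public real-time remote biometric identification by law enforcement",
        "social scoring by public authorities",
        "subliminal techniques that distort human behavior",
        "cognitive behavioral manipulation of vulnerable groups",
    ],
    "high_risk": [
        "safety components for public utilities",
        "public benefits and access to public services",
        "creditworthiness evaluation",
        "recruitment and talent management",
        "access to education",
        "law enforcement risk assessments and crime analytics",
        "migration, asylum, and border control",
        "research, interpretation, or application of the law",
    ],
    "transparency_only": [
        "emotion recognition",
        "biometric categorization",
        "generating or manipulating media content",
        "direct human interaction (e.g., chatbots)",
    ],
}

def classify_use_case(use_case: str) -> str:
    """Return the risk tier for a known use case, or 'unclassified'."""
    normalized = use_case.strip().lower()
    for tier, use_cases in RISK_TIERS.items():
        if any(normalized in entry for entry in use_cases):
            return tier
    return "unclassified"

# Example: a customer-service chatbot falls under transparency obligations.
print(classify_use_case("Direct human interaction"))  # transparency_only

A dictionary keyed by tier keeps the mapping easy to read and to extend as guidance around the Act evolves; a real compliance check would of course need far richer context than a string match.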