The European Union has taken a crucial step by agreeing on an Artificial Intelligence Act that ensures the safe development of AI. It aims to protect fundamental rights, democracy, the rule of law and environmental sustainability from high-risk AI, while boosting innovation and establishing Europe as a leader in the sector.
The Regulation sets out obligations for AI according to its potential risk (unacceptable, high, limited and low risk) and its level of impact. Under this law, artificial intelligence applications that infringe citizens’ rights are prohibited, such as biometric categorisation, the indiscriminate collection of facial images, and emotion recognition in workplaces and schools. The use of real-time biometric identification systems by law enforcement agencies is likewise prohibited unless prior authorisation is granted. High-risk AI systems must assess and mitigate risks, meet transparency requirements and operate under human oversight. Finally, testing and development spaces will be provided for SMEs and start-ups so that they can develop and train AI models. To ensure compliance, significant penalties have been established for non-compliant companies, demonstrating a commitment to protecting the rights and safety of citizens.
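As an illustration only (not part of the Regulation’s text), the tiered scheme described above can be sketched as a simple lookup. The tier names follow the article; the obligation strings are paraphrases, not legal language:

```python
# Illustrative sketch of the AI Act's risk tiers and the obligations the
# article describes for each. Tier names come from the Regulation; the
# obligation strings are informal paraphrases, not legal text.
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited practices (e.g. biometric categorisation)
    HIGH = "high"                  # risk assessment, transparency, human oversight
    LIMITED = "limited"            # lighter transparency obligations
    LOW = "low"                    # minimal obligations (e.g. biometric verification)


OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["prohibited from the EU market"],
    RiskTier.HIGH: [
        "assess and mitigate risks",
        "meet transparency requirements",
        "ensure human oversight",
    ],
    RiskTier.LIMITED: ["disclose that the user is interacting with an AI system"],
    RiskTier.LOW: ["no specific obligations beyond existing law"],
}


def obligations_for(tier: RiskTier) -> list:
    """Return the (paraphrased) obligations attached to a risk tier."""
    return OBLIGATIONS[tier]
```

A compliance checklist for a given system could then start from `obligations_for(RiskTier.HIGH)`; the mapping itself is the assumption here, condensed from the paragraph above.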
Biometric identity verification considered low risk
Citizens must be able to decide freely, voluntarily and unambiguously whether to use these technologies; for this reason, identity verification through biometrics has been categorised as low risk. The European Data Protection Board (EDPB), in its Guidelines 5/2020 on consent, indicates that users can freely consent when they have been informed of the risks and benefits of the processing and face no adverse consequences for giving or withholding their consent. The Artificial Intelligence Regulation highlights the active involvement of data subjects at every stage of the process, ensuring that their privacy is respected and their information is protected through advanced encryption techniques. In this way, this form of identity verification is positioned as a safe, efficient and low-risk option in the field of AI.
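A minimal sketch, using hypothetical names, of how the EDPB 5/2020 criteria cited above could gate a biometric verification flow: the user must be informed, free to refuse without adverse consequences, and give an unambiguous affirmative signal before any processing starts:

```python
# Hypothetical consent gate for biometric verification, modelled on the
# EDPB Guidelines 5/2020 criteria the article cites. The class and
# function names are illustrative, not a real product API.
from dataclasses import dataclass


@dataclass
class Consent:
    informed: bool      # user was told the risks and benefits of the processing
    freely_given: bool  # refusal carries no adverse consequences
    unambiguous: bool   # explicit affirmative action, not inferred from silence


def may_run_biometric_verification(consent: Consent) -> bool:
    """Proceed only when every consent criterion holds; otherwise abort."""
    return consent.informed and consent.freely_given and consent.unambiguous
```

The point of the design is that the check is conjunctive: failing any single criterion (for example, consent bundled with a penalty for refusal) blocks the processing entirely.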
Implementation of the AI Regulation
Implementation will be gradual, with the Regulation becoming fully applicable in 2026. Competent national authorities will be appointed to oversee the application of the rules, alongside the creation of other European bodies to regulate artificial intelligence effectively. Consent and digital identity are established as key elements in a world driven by artificial intelligence, where transparency and control are essential to building trust and safety in digital interactions. Hence the need to regulate artificial intelligence in the early stages of its development.
The EU’s AI Regulation represents a fundamental step towards a future in which technology advances rapidly in harmony with our values, finding the balance between innovation and regulation to ensure an ethical and safe use of artificial intelligence.
Facephi and its commitment to the AI Regulation
With the same objective in mind, Facephi seeks to ensure that users are fully informed and able to decide whether or not to use this technology. All our solutions are built on the principle of ethical biometrics, meaning they are designed exclusively for consensual identity verification. We guarantee their security with full respect for the right to privacy and for the data protection legislation in force in each country. For us, data privacy and the voluntary will of the user are non-negotiable.
Identity verification technologies must comply with the main standards regulating the sector, so that all information obtained with the user’s consent is used to protect what is most valuable: their digital identity.