As stated in the post on ethical biometrics, technology should be at the service of humans and not the other way around; human oversight and monitoring are essential. That is why we must use it as a tool to expand possibilities, while always remaining respectful and consistent. To build inclusive technology, we must first acknowledge the biases that exist in society and design algorithms that ensure responsible behaviour. If we do not take this into account, prejudice or discrimination may be reflected in the outcome, and we will have failed. It is not about replicating the world we live in, but about creating a new, better one.

At FacePhi we believe that artificial intelligence must treat the user as its main priority. That is why we design our algorithms to meet the requirements needed to ensure that, with accurate biometric recognition, they neither create nor reproduce bias.

To this end, the biometric data sets on which our algorithms are trained are curated to ensure a balanced distribution, with sufficient variety, quantity and quality across groups. Race, gender, age and religion, as well as technical characteristics such as the type of capture device or object positioning, must not affect the result. We verify that there is no deviation or bias that could favour or disadvantage any group.
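As a minimal illustration of what such a balance check can look like (the group labels and the minimum share threshold below are hypothetical assumptions for the example, not FacePhi's actual pipeline), one can verify that no group falls below a minimum proportion of the data set:

```python
from collections import Counter

def check_group_balance(group_labels, min_share=0.15):
    """Report each group's share of the data set and flag under-represented groups.

    group_labels: one label per sample (e.g. an age bracket or capture-device type).
    min_share: hypothetical minimum fraction each group should hold.
    """
    counts = Counter(group_labels)
    total = sum(counts.values())
    report = {}
    for group, count in counts.items():
        share = count / total
        report[group] = {
            "count": count,
            "share": round(share, 3),
            "under_represented": share < min_share,
        }
    return report

# Example with made-up capture-device labels
labels = ["phone_a"] * 500 + ["phone_b"] * 450 + ["webcam"] * 50
print(check_group_balance(labels))
```

A report like this makes an under-represented group visible before training, so that additional samples can be collected or weighting applied.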

“The data embeds the past, and not just the recent past, the dark past. So, we need to constantly monitor every process for bias.” – Cathy O’Neil, Coded Bias

However, if this inclusive technology is properly designed and used, artificial intelligence systems can help reduce existing prejudice and structural discrimination. They can also facilitate more equitable and non-discriminatory decisions, such as hiring staff or granting credit.

For this reason, artificial intelligence systems must be technically sound to ensure that the technology is fit for purpose and that false positives or false negatives do not disproportionately affect minority groups. To achieve this, we must train and test them on sufficiently representative data sets and then apply appropriate bias detection and correction measures so that any remaining disparities can be addressed.
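As a rough sketch of what such a bias check might involve (the group names, labels and data below are illustrative assumptions, not FacePhi's production tooling), one can compare false positive and false negative rates per group after evaluation:

```python
def error_rates_by_group(y_true, y_pred, groups):
    """Compute per-group false positive and false negative rates.

    y_true, y_pred: lists of 0/1 labels (1 = match accepted).
    groups: demographic or device label for each sample.
    """
    stats = {}
    for truth, pred, group in zip(y_true, y_pred, groups):
        s = stats.setdefault(group, {"fp": 0, "fn": 0, "neg": 0, "pos": 0})
        if truth == 1:
            s["pos"] += 1
            if pred == 0:
                s["fn"] += 1
        else:
            s["neg"] += 1
            if pred == 1:
                s["fp"] += 1
    return {
        g: {
            "false_positive_rate": s["fp"] / s["neg"] if s["neg"] else 0.0,
            "false_negative_rate": s["fn"] / s["pos"] if s["pos"] else 0.0,
        }
        for g, s in stats.items()
    }

# Illustrative evaluation with made-up labels and two hypothetical groups
rates = error_rates_by_group(
    y_true=[1, 1, 0, 0, 1, 0, 1, 0],
    y_pred=[1, 0, 0, 1, 1, 0, 1, 0],
    groups=["group_a", "group_a", "group_a", "group_b", "group_b", "group_b", "group_b", "group_a"],
)
print(rates)
```

If one group shows markedly higher error rates than the others, that is a signal to revisit the training data or apply a correction step before the system is deployed.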