Today, the intersection of gender and technology is reproducing, and even amplifying, the patriarchal model. Of all the professionals working on artificial intelligence (AI), only 22% are women. Google and Facebook have reported that women make up just 10% and 15% of their AI staff, respectively. In Latin America, only 38% of internet users are women, and in the UK, women who work in cybersecurity earn 16% less than men.

If we consider race, the picture is even worse. In 2018, Google reported that women held 25.7% of its technical positions, but that figure drops to 0.8% for Black women. We see the same tendency at other tech giants, such as Facebook, Apple and Microsoft. The diversity crisis within the technology and computer science ecosystems could not be more evident.

Why is this so relevant? Systems and apps that use AI to automate processes are multiplying fast. Every day more countries adopt national AI strategies to promote its use, since AI helps solve highly complex problems, eases automation and makes more efficient use of resources.

MACHINES DON'T JUST OBEY, THEY LEARN (AND CAN LEARN IT ALL WRONG)

The field of AI is extremely broad, and within it sits Machine Learning (ML): systems that learn, meaning their performance improves with experience rather than through explicit programming. ML is everywhere today. Netflix, for example, uses it to recommend personalized titles that users will probably enjoy, based on the data those users provide. That data can be explicit, like “liking” a certain movie or series, or, as usually happens, implicit, like pausing after 10 minutes or taking two days to finish a movie. All the information we hand Netflix, together with that of every other user, forms the “dataset” the system uses to constantly improve its recommendations.
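To make the idea concrete, here is a minimal sketch of how implicit signals like these can be turned into a dataset and a crude recommender. This is not Netflix's actual system; the users, titles, signal weights and function names below are invented for illustration.

```python
from collections import defaultdict

# Hypothetical viewing events: (user, title, signal). Explicit signals
# ("liked") and implicit ones ("finished", "paused_early") are mixed.
events = [
    ("ana",  "Dark",         "finished"),
    ("ana",  "Black Mirror", "liked"),
    ("luis", "Dark",         "liked"),
    ("luis", "The Crown",    "finished"),
    ("sofi", "Black Mirror", "finished"),
    ("sofi", "The Crown",    "paused_early"),
]

# Illustrative weights: an explicit "like" counts more than merely
# finishing a title; pausing early counts (weakly) against it.
SIGNAL_WEIGHT = {"liked": 2.0, "finished": 1.0, "paused_early": -0.5}

def build_preferences(events):
    """Aggregate raw events into per-(user, title) scores --
    the 'dataset' the recommender learns from."""
    prefs = defaultdict(float)
    for user, title, signal in events:
        prefs[(user, title)] += SIGNAL_WEIGHT[signal]
    return prefs

def recommend(prefs, user, candidates):
    """Rank unseen titles by how other users scored them --
    a crude stand-in for collaborative filtering."""
    def avg_score(title):
        others = [v for (u, t), v in prefs.items() if t == title and u != user]
        return sum(others) / len(others) if others else 0.0
    return sorted(candidates, key=avg_score, reverse=True)

prefs = build_preferences(events)
print(recommend(prefs, "ana", ["The Crown"]))
```

The point is that the system never asks what we want: it infers it from behavior. Whatever patterns the behavioral data carries, the recommendations will carry too.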

There are concrete cases that prove how using AI to automate processes replicates gender biases. In 2014, Amazon began developing a program that used ML to automatically evaluate hundreds of CVs. The system learned from the CVs received over the previous decade and from the performance of the people hired during that period. In 2015, the company realized that the new system ranked women lower than men for software development and other technical roles. This happened because the dataset mirrored the historical male dominance of the technology industry. Despite its efforts, Amazon could not undo what the machine had learned and was forced to scrap the program.
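A toy model shows how this happens mechanically. In the hypothetical sketch below (the CVs, tokens and outcomes are invented, not Amazon's data), a naive scorer trained on a male-dominated hiring history ends up penalizing the word "women's", much as Amazon's system was reported to do.

```python
from collections import Counter

# Hypothetical history: (tokens found in a CV, was the candidate hired?).
# Most past hires were men, so "women's" co-occurs with rejection.
history = [
    ({"python", "captain", "chess club"}, True),
    ({"java", "captain", "rugby"}, True),
    ({"python", "women's", "chess club"}, False),
    ({"java", "women's", "coding club"}, False),
    ({"python", "rugby"}, True),
]

def token_scores(history):
    """Naive per-token score: how often a token appears in hired CVs,
    minus how often it appears in rejected ones."""
    hired, rejected = Counter(), Counter()
    for tokens, was_hired in history:
        (hired if was_hired else rejected).update(tokens)
    vocab = set(hired) | set(rejected)
    return {t: hired[t] - rejected[t] for t in vocab}

scores = token_scores(history)
print(scores["women's"])  # negative: the model learned the historical bias
print(scores["rugby"])    # positive: a masculine-coded proxy is rewarded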

Joy Buolamwini and Timnit Gebru, MIT researchers, evaluated facial recognition software from IBM, Microsoft and Face++ and discovered that all three systems performed better on male faces and on white skin. When they analyzed subgroups, facial recognition of Black women turned out to have the highest error rate at all three companies, with an average error of 31%. That contrasts with less than 1% error for white men.
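Their key methodological move was disaggregation: instead of reporting one aggregate accuracy figure, they measured error rates per subgroup. The sketch below reproduces that idea with invented predictions; note how an overall accuracy of 75% would hide the fact that one group sees 50% error while another sees none.

```python
from collections import defaultdict

# Hypothetical evaluation log: (subgroup, was the prediction correct?).
results = [
    ("lighter-skinned men", True), ("lighter-skinned men", True),
    ("lighter-skinned men", True), ("lighter-skinned men", True),
    ("darker-skinned women", False), ("darker-skinned women", True),
    ("darker-skinned women", False), ("darker-skinned women", True),
]

def error_rate_by_group(results):
    """Break accuracy down per subgroup instead of averaging it away."""
    totals, errors = defaultdict(int), defaultdict(int)
    for group, correct in results:
        totals[group] += 1
        if not correct:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

for group, err in error_rate_by_group(results).items():
    print(f"{group}: {err:.0%} error")
# lighter-skinned men: 0% error
# darker-skinned women: 50% error
```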

These error rates are critical. Governments already use ML to detect supposed criminal tendencies. We know it can be used to generate reports on the probability of recidivism, and that it can influence how sentences are determined in court.

These examples show the weight AI carries in the creation of automated systems. The fact that these systems are trained on datasets generated by humans (and their life experience) means the systems will learn and preserve whatever subjectivity those datasets contain. In short: the biases that exist offline are replicated online. We need strategies that guarantee the mass adoption of new technologies without creating or deepening gender and race inequalities.

This is a scary reality, and it becomes a critical scenario when we consider that this is, and will continue to be, the technology that states prefer. It will directly influence people's freedoms and rights. It is highly probable that this is how scholarships, housing benefits and immigration permits will be decided. And it will play a key role in public safety.

At Ciudadanía Inteligente we work constantly to find ways to keep society's biases out of these systems, and even to use the systems to reverse them. Together with Women at the Table (W@tt), a civil society organization that promotes gender equality, we fight to make existing prejudices visible, to push key actors to defend transparency, and to pilot affirmative action in algorithms. We adhere to W@tt's declaration inviting people to:

  • Defend and adopt guidelines that establish transparency in algorithmic decision-making.

  • Include intersectional diversity and an equal number of women and men in the creation, design and coding of decision-making algorithms.

  • Engage in international cooperation and promote a rights-based approach in the development of these systems.

We are at a decisive moment in history. The number of automated decision-making systems is unprecedented, and they are spreading fast. We can allow technology to keep building barriers for women and underrepresented groups, or we can use it in our favor. At Ciudadanía Inteligente we will continue developing affirmative actions to correct the biases that hamper the full participation of all people in a diverse society.