
Tips for applying an intersectional framework to AI development

Kendra Gaunt
December 18, 2020

It’s now widely understood within the technology sector that the biases we, as humans, naturally hold are inevitably reflected in the artificial intelligence systems we create – systems that have grown increasingly advanced and now significantly impact our daily routines and even the choices we make.

As AI systems gain greater prominence and capability, it becomes increasingly urgent for the industry to consider questions such as: What steps can be taken to transition away from AI and machine learning models that exhibit demonstrable unfairness?

Furthermore, how can we utilize an intersectional approach to develop AI that serves all populations, recognizing that individuals experience and engage with AI differently due to the complex interplay of their various identities?

Intersectionality: What it means and why it matters

Prior to addressing complex issues, it’s crucial to establish a clear understanding of “intersectionality.” This concept, originally defined by Kimberlé Crenshaw, provides a structure for understanding how an individual’s multiple identities combine to influence their experiences and how others perceive them.

This encompasses the associated advantages and disadvantages linked to each individual identity. Many individuals possess multiple identities that place them in marginalized groups, and they are often aware of the cumulative impact when these identities intersect.

At The Trevor Project, the leading global organization dedicated to suicide prevention and crisis support for LGBTQ young people, our primary goal is to offer assistance to every LGBTQ young person in need. We recognize that young people who are transgender or nonbinary, and/or Black, Indigenous, and people of color, face distinct pressures and challenges.

Therefore, when our technology department began creating AI designed to serve and operate within this diverse community—specifically to improve the evaluation of suicide risk and ensure consistently excellent care—we needed to be mindful of preventing results that would exacerbate existing obstacles to mental health support, such as a deficiency in cultural understanding or prejudiced assumptions about someone’s gender based on their provided details.

While our organization supports a particularly varied population, inherent biases can be present in any situation and adversely affect any group. Consequently, all technology teams should strive to develop equitable, intersectional AI models, as intersectionality is essential for cultivating inclusive environments and creating tools that better serve individuals from all walks of life.

This process begins with determining the range of perspectives that will engage with your model, as well as the groups where these different identities converge. Clearly defining the problem you are addressing is the initial step, because understanding who is affected by the issue allows you to pinpoint a solution. Following this, chart the complete user experience to identify the points where individuals interact with the model. From this foundation, organizations of all sizes—startups and large enterprises alike—can implement strategies to integrate intersectionality into every stage of AI development, from training and assessment to gathering feedback.

Datasets and training

The effectiveness of a model’s performance is directly connected to the data used during its training process. Datasets can unintentionally include biases stemming from how the data was gathered, measured, and labeled – all processes influenced by human choices. For instance, research conducted in 2019 revealed that a healthcare risk-prediction algorithm exhibited racial bias due to its dependence on a flawed dataset when assessing patient need. Consequently, Black patients who were eligible received lower risk assessments compared to white patients, decreasing their chances of being chosen for intensive care management.

Developing equitable systems requires training models using datasets that accurately represent the individuals who will utilize them. It also necessitates identifying any deficiencies in your data concerning populations that may be underrepresented. However, a broader discussion is needed regarding the general shortage of data concerning marginalized communities – this is a widespread issue that requires a systemic solution, as insufficient data can hinder the ability to determine both fairness and whether the requirements of underrepresented groups are being fulfilled.

To begin evaluating this within your organization, examine the scope and origins of your data to pinpoint any inherent biases, imbalances, or errors and determine how the data can be enhanced in the future.
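As an illustration, a simple representation audit can make these imbalances visible before training begins. The sketch below is a minimal example in Python, assuming a hypothetical pandas DataFrame whose identity columns (for instance, gender_identity and race_ethnicity) stand in for however your organization records intersecting identities; it is not a prescription for any particular schema.

```python
# A minimal representation audit: count how many records fall into each
# intersection of identity columns and surface the smallest groups first.
import pandas as pd

def audit_representation(df: pd.DataFrame, identity_cols: list) -> pd.DataFrame:
    """Return the count and share of records for every intersection of identities."""
    counts = (
        df.groupby(identity_cols, dropna=False)
          .size()
          .rename("count")
          .reset_index()
    )
    counts["share"] = counts["count"] / counts["count"].sum()
    # The smallest intersections are the ones most likely to be underrepresented.
    return counts.sort_values("share").reset_index(drop=True)

# Hypothetical usage -- column names are illustrative only:
# report = audit_representation(df, ["gender_identity", "race_ethnicity"])
# print(report.head(10))
```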

Another approach to mitigating bias in datasets is to increase the weight or prominence of particular intersectional data points, as defined by your organization. Doing this early in the process shapes how the model is trained and helps keep it objective; otherwise, training can end up optimized for results that are insignificant to the people the model is meant to serve.

For example, at The Trevor Project, it may be necessary to prioritize data from demographic groups known to experience greater difficulty accessing mental health resources, or from groups with limited data samples compared to others. Without this essential step, our model could generate results that are not applicable to our users.
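One common way to implement this kind of weighting is to weight each training record inversely to the size of its intersectional group. The sketch below is a minimal example of that idea, not The Trevor Project's actual method; the group key, the feature matrix X, and the labels y are hypothetical, and the scikit-learn classifier simply illustrates passing sample weights into training.

```python
# A minimal sketch of inverse-frequency sample weighting: records from small
# intersectional groups receive proportionally larger weights during training.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def inverse_frequency_weights(groups: pd.Series) -> np.ndarray:
    """Weight each record by the inverse of its group's share of the data,
    normalized so the average weight is 1."""
    shares = groups.map(groups.value_counts(normalize=True))
    weights = 1.0 / shares
    return (weights / weights.mean()).to_numpy()

# Hypothetical usage -- the group key, X, and y are illustrative only:
# groups = df["gender_identity"] + "|" + df["race_ethnicity"]
# weights = inverse_frequency_weights(groups)
# model = LogisticRegression(max_iter=1000)
# model.fit(X, y, sample_weight=weights)
```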

Evaluation

Assessing a model’s performance is a continuous undertaking, enabling organizations to adapt to evolving circumstances. Early fairness evaluations concentrated on one factor at a time, such as race, gender, or ethnicity. The current challenge for the technology sector lies in determining the most effective methods for comparing intersecting demographic groups so that fairness can be assessed across all identities.

When quantifying fairness, it’s beneficial to identify demographic intersections that might experience disadvantage, as well as those potentially benefiting from advantages, and subsequently investigate if specific performance indicators (such as rates of false negatives) differ between these groups. What insights do these discrepancies provide? What additional methods can be employed to further investigate which groups are inadequately represented within a system and the reasons behind this? These are the key inquiries to address during this stage of development.
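To make this concrete, one straightforward check is to compute a performance indicator such as the false negative rate for every intersectional group and compare the gaps between them. The sketch below assumes hypothetical arrays of true labels and predictions plus a per-record group label built from intersecting identities; which metric matters most will depend on the application.

```python
# A minimal sketch of a per-group fairness check: false negative rate
# (missed positives / actual positives) for each intersectional group.
import pandas as pd

def false_negative_rates(y_true, y_pred, groups) -> pd.DataFrame:
    """Return the false negative rate per group, plus the gap to the best-served group."""
    df = pd.DataFrame({"y_true": y_true, "y_pred": y_pred, "group": groups})
    positives = df[df["y_true"] == 1]
    fnr = (
        positives.groupby("group")["y_pred"]
                 .agg(lambda s: (s == 0).mean())
                 .rename("fnr")
                 .reset_index()
    )
    # The gap between the best- and worst-served intersections is one simple
    # signal of which groups the model may be failing.
    fnr["gap_vs_best"] = fnr["fnr"] - fnr["fnr"].min()
    return fnr.sort_values("fnr", ascending=False).reset_index(drop=True)

# Hypothetical usage -- inputs are illustrative only:
# report = false_negative_rates(y_true, y_pred,
#                               df["gender_identity"] + "|" + df["race_ethnicity"])
# print(report)
```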

The most effective approach for organizations to attain fairness and mitigate unjust bias is to develop and oversee a model with consideration for the demographics it will serve from the beginning. Depending on the evaluation results, a subsequent action could involve intentionally prioritizing service to statistically underrepresented groups to aid in training a model that reduces unfair bias. Given that algorithms can exhibit partiality stemming from existing societal biases, proactively designing for fairness helps guarantee equitable treatment for all individuals.

Feedback and collaboration

It’s also crucial that teams include a wide range of individuals in the creation and evaluation of AI products – individuals who vary not just in backgrounds, but also in their expertise, familiarity with the product, professional experience, and other factors. Seek input from those affected by the system and relevant stakeholders to uncover potential issues and biases.

Draw on the expertise of engineers throughout the problem-solving process. When defining intersecting demographics, The Trevor Project collaborated with the teams closest to our crisis-intervention services and the people who use them, including Research, Crisis Services, and Technology. After deployment, it’s important to re-engage stakeholders and users to gather their feedback.

In reality, there isn’t a single, universally applicable method for developing AI with an intersectional lens. The Trevor Project’s team has established a process grounded in our current practices, knowledge, and the specific communities we support. This is a dynamic process, and we are committed to adapting as we gain further insights. While other organizations may employ different strategies for building intersectional AI, we all share a fundamental ethical obligation to create more equitable AI systems, given AI’s capacity to emphasize – and even amplify – existing societal biases.

The amplification of specific biases by an AI system, depending on its application and the community it operates within, can lead to negative consequences for groups already experiencing disadvantage. Conversely, AI also possesses the potential to enhance the lives of everyone when developed using an intersectional approach. The Trevor Project firmly advocates for technology teams, subject matter experts, and leaders to carefully consider establishing a core set of principles to drive widespread change – and to guarantee that future AI models accurately represent the communities they are intended to serve.