A submission by Kalinda Ukanwa

Artificial intelligence (AI) is the study and practice of intelligent agents. These agents are designed to work, learn, solve problems, and react like humans (Russell and Norvig 2002).

All types of artificial intelligence have two things in common: they run on data and algorithms.

Strong AI
(a.k.a. Artificial General Intelligence)

- Least common; possibly doesn't exist yet.
- Think of robots like R2-D2 from Star Wars, or Jarvis and Vision from the Marvel Universe.
- Strong AI systems can handle a large number of different tasks, make a variety of decisions and analyses, and think and function much like a human brain.

Weak AI
(a.k.a. Artificial Applied Intelligence)

- Most common; exists everywhere.
- This one is more familiar and includes things like personal digital assistants (Siri) or personalized content providers (Spotify).
- Weak AI systems perform a few tasks, or even a single task, very well. They usually work on automated or repetitive tasks and involve very limited decision-making.

Algorithmic bias can stem from:

- Insufficient data on certain groups, or data that mostly represents one group.
- Data that is not representative or that has errors in how it was measured.
- Data that embeds historic and/or systemic societal biases.
- An algorithm that was not designed to represent the people it impacts.
- An algorithm used in a way it was not intended to be used.
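To make the first of those data-related sources concrete, here is a minimal, hypothetical sketch: a score cutoff is "learned" from a sample that mostly represents one group and is then applied to everyone. All group names, scores, and qualification labels are invented for illustration only.

```python
# Hypothetical (score, truly_qualified) pairs. In this invented scenario,
# Group B's scores run lower than Group A's for equally qualified people.
group_a = [(80, True), (75, True), (72, True), (55, False), (50, False), (45, False)]
group_b = [(65, True), (60, True), (40, False)]

# The training sample is dominated by Group A (unrepresentative data).
train = group_a + group_b[:1]

def best_cutoff(data):
    """Pick the score cutoff with the highest accuracy on the data given."""
    candidates = sorted(s for s, _ in data)
    def accuracy(c):
        return sum((s >= c) == q for s, q in data) / len(data)
    return max(candidates, key=accuracy)

def error_rate(data, cutoff):
    """Fraction of people the cutoff misclassifies."""
    return sum((s >= cutoff) != q for s, q in data) / len(data)

cutoff = best_cutoff(train)
print("cutoff:", cutoff)                                 # -> cutoff: 65
print("Group A error rate:", error_rate(group_a, cutoff))  # -> 0.0
print("Group B error rate:", error_rate(group_b, cutoff))  # -> ~0.33
```

The cutoff looks "accurate" overall because the training data is mostly Group A, yet it misclassifies a qualified member of the underrepresented group, which is exactly the kind of unequal error rate the list above describes.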

When we don’t have the right contexts, or frames of reference, the algorithms we create not only fail to capture the full picture but often make improper decisions based on limited information.

For example, an algorithm designed for one context (e.g., designed for US English speakers) but used in a new context (e.g., same algorithm used in a city of Korean speakers) may produce unanticipated outputs that are biased in the new context.
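A minimal sketch of that context mismatch, using an invented keyword-based sentiment scorer (not any real system): it behaves as designed for English input, but collapses every Korean input to "neutral" because its word lists only cover the original context.

```python
# Hypothetical sentiment scorer built for US English speakers.
POSITIVE = {"good", "great", "love"}
NEGATIVE = {"bad", "awful", "hate"}

def sentiment(text):
    """Score text by counting known positive and negative English keywords."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(sentiment("I love this, it is great"))  # -> positive (original context)
print(sentiment("정말 좋아요"))                 # "really good" -> neutral
print(sentiment("정말 싫어요"))                 # "really dislike it" -> neutral
```

Nothing in the code is "broken," yet in the new context its outputs are systematically wrong: every Korean speaker's opinion is erased to "neutral."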

Algorithmic bias is already showing up in your life or your friends’ lives.

And if you’re Black, Indigenous, or a person of color, how this bias shows up could be more harmful for you than for others.

Artificial Intelligence

Face recognition technology (which is in use today) has much higher error rates in recognizing Black, brown, and Asian faces, as well as women's faces, than in recognizing white men's (Buolamwini and Gebru 2018).

Patient Care

Medical care software that decides whether to offer medical services to kidney patients is much more likely to offer them to white patients than to Black patients (Mullainathan and Obermeyer 2017).


Auto Insurance

Auto insurance algorithms on average charge higher prices to people living in zip codes in communities of color than in zip codes that are primarily white (Hall and Desai 2020).

Advertising

Some social media platforms are more likely to show STEM career ads to men than to women (Tucker and Lambrecht 2019).


Medical Treatment

Unpredictable correlations in data can also produce algorithmic bias. For example, a medical algorithm prioritized the least sick asthmatics for care because their data correlated with better survival rates; but for this very reason, it is the sicker asthmatics who should be prioritized for medical care.

Higher Education

A standardized test algorithm used for the A-levels in the United Kingdom (which affect which universities students get into) boosted private school students' scores and downgraded poorer public school students' scores (Allegretti 2020).


Fixing algorithmic bias can be difficult, especially in algorithms that have already been built, but it's definitely not impossible.

Additionally, there is a higher likelihood of preventing algorithmic bias when more BIPOC professionals join the teams that build and evolve algorithms and AI technology.
