Building a Conscious Culture is, in part, recognizing that some of the things we regard as “defaults” are worth revisiting. In some ways, the whole idea of Conscious Culture is about challenging the status quo that work is supposed to be something you slog through. That’s a dated paradigm, and it’s what prompted us to create this library of information and the movement around it.
In a similar way, creating a Conscious Culture should lead us to rethink biases in the technology we build. This isn’t an easy topic to discuss, but it’s a necessary one. There’s often an unseen, unspoken tilting of the tables, in ways that we overlook because we’re too busy rushing from one item on the to-do list to another.
In the process, we build products that we think appeal to and work for everyone—but end up leaving out big segments of the population. Or, to put it another way, we miss out on reaching people who might otherwise benefit from what we build.
What is bias in tech?
Bias in tech describes the way the technology systems we build reflect the biases of their creators. It is difficult to avoid because we as humans are inherently biased.
While avoiding bias entirely is almost impossible, being conscious of it is important—especially at work, where tools such as AI hiring systems and performance-measuring algorithms carry bias into everyday decisions. These systems need to be examined if you intend to build an inclusive workplace and a good company culture.
Some famous examples of bias in tech are:
- Amazon’s AI recruiting system, which was trained to vet applications based on patterns in resumes submitted to the company over a ten-year period. The AI ended up discriminating against female applicants because of the historic trend of tech being a male-dominated field.
- Twitter’s cropping algorithm, which was meant to crop picture previews for the timeline but showed a bias toward prioritizing white faces over Black faces in the crop.
- A hiring AI tool used by Unilever and Hilton that analyzes candidates’ facial movements, speaking voice, and word choice to assign an “employability score”—a design that, by its nature, penalizes non-native speakers and those with speech impediments or facial differences.
How biases in data lead to biases in tech
There are the bias issues we can see: a team where everyone looks the same, or a marketing campaign that inadvertently perpetuates negative stereotypes. But then there are the ones we cannot see right away.
Consider, for instance, the bias baked into the data sets from which products are built. If the data you are using to refine and improve products is biased, then you’re building atop a faulty foundation.
How does data become biased? Plenty of ways:
- How the data is gathered
- Who the data is gathered from
- Why the data is gathered
- How the data is presented
- Who is interpreting the data
When we don’t think about bias in data—everything from how it’s gathered to how it’s presented—we can end up making unconscious mistakes and assumptions, or just gathering faulty facts.
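To make this concrete, here is a minimal sketch (with hypothetical data and group names invented for illustration) of the core mechanism: a naive system that “learns” from historical hiring records simply reproduces whatever skew those records contain, no matter how neutral the math looks.

```python
from collections import Counter

# Hypothetical historical hires: the field was skewed toward one group,
# so the record over-represents that group regardless of merit.
historical_hires = ["group_a"] * 90 + ["group_b"] * 10

def learned_preference(history):
    """Return the share of past hires per group -- the 'pattern'
    a naive system would learn and then apply to new candidates."""
    counts = Counter(history)
    total = sum(counts.values())
    return {group: count / total for group, count in counts.items()}

# The "model" faithfully encodes the historical skew: 90% vs. 10%.
prefs = learned_preference(historical_hires)
print(prefs)
```

The arithmetic here is perfectly correct, which is exactly the point: nothing in the computation flags the 90/10 split as a problem. Only a human asking where the data came from can do that.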
The problem is that when something is presented as “data,” it comes with a veneer of seriousness and sterility—these are just numbers, right? As thinkers like Cathy O’Neil have written, numbers aren’t just numbers. In everything from credit scores to college rankings to teacher quality, O’Neil has examined the ways that data can become encoded with racism, ageism, and other troubling -isms.
Founders and creators of products should keep these matters in mind—even an algorithm doing “simple math” can reflect biased data, and ignoring those biases can lead to products that miss some consequential subset of the market.
How addressing bias in tech can be a force for good
That can all sound abstract, so let’s use a recent example to illustrate the point. In 2021, the MacArthur Foundation announced its latest “Genius Grant” winners. These are extraordinary researchers, creators, artists, scientists, and others across a range of fields who are doing amazing work. One of the people selected in 2021 was Joshua Miele, an adaptive technology designer who works at Amazon.
When he was a child, Miele was blinded and disfigured when someone poured sulfuric acid on his face. Miele turned that experience into a lifelong career and quest to help blind and visually impaired people access technology. At Amazon, he’s helped build Braille-compatible Fire tablets, Alexa’s Show and Tell feature on Echo Show devices, and VoiceView, Amazon’s screen reader. Each creation helps those with visual impairments enjoy the benefits of technology just as everyone else does.
When we look at a screen for the hundredth time in a day, it is easy to forget that many people cannot do the same. But in Miele’s experience lies a lesson: you can build adaptive technology products that expand your potential market—and widen the circle of people who can benefit from your work.
How diversity can challenge bias in tech
There are other venues packed with information and tools about the “how” of building diverse and inclusive teams. For our purposes, we can simply start by saying that the technology industry has a problem with building wide-ranging teams—and the net effect is that homogeneous workforces are far less likely to pick up on biases.
Framed differently, there’s a lot of upside to a team that includes an array of perspectives. You can find problems that you missed; you can see opportunities that are outside your immediate worldview. Whole markets—whole industries!—have been built by someone serving an unrecognized need. But often we cater so closely to the needs we personally know that we overlook biases elsewhere.
The future challenges of removing bias in tech
Then begins the next challenge: So much of the bias in our tech products has been years in the making, so removing it will likely take years as well. And many instances of discriminatory tech may be missed, because those doing the evaluations may not take their own biases into account.
There’s no simple guide here, and no one-size-fits-all answer. Every company will have to decide for themselves how to make their workforces and products more inclusive.
The point, as with much of what we write about at Conscious, is to begin by being aware of the problem. Not as a way to signal your moral superiority or to respond to the fashions of the moment—but as a way of actually building products and services that have a wider reach and remit.
One powerful way to think about diversity, in both your hiring and your product, is this: if you don’t take time to consider who you hire, what they build, and how they build it, you’re missing out on talent, customers, and genuine inclusion.
Just as every company is going to adapt our guides for meetings and recruitment in its own way, so it should adapt these ideas to fit its context. Bias in tech exists, but removing it can present opportunities. As the example of Amazon and Miele shows, remarkable things can come from building products that serve a slice of the market that is too often overlooked.
More importantly, the work we begin now compounds—and in time, these biases get chipped away. That’s conscious work, and it’s part and parcel of building a Conscious Culture.