Introduction

With not For – Episode 2

Listen to the podcast

Transcript

Manisha: Welcome to With, Not For, a podcast from the Center for Inclusive Design. I'd like to begin by acknowledging the Cammeraygal people, the traditional owners of the land on which we record this podcast today, and pay my respects to their Elders past, present and, of course, emerging. We're so excited to have you all here with us. So often things are designed for us, but here we explore the magic that happens when we design with people, not just for them. In this episode, we speak to David Dylan Thomas, the author of Design for Cognitive Bias. David's work focuses on the intersection of bias, design and social justice. His work in content strategy focuses on how it can mitigate bias, or use it for good, and help people talk to each other more clearly and treat each other more equitably. I loved speaking to David, and it's an honor to welcome him to the podcast today.

David: Thanks, it's a pleasure to be here.

M: So, a quick question for you to start with: how did you end up in this space in the first place? What put the fire in your belly in regard to inclusion, and then, more specifically, cognitive bias?

D: Well, they all sort of happened at the same time. I had been in content strategy and UX for a while, working at different firms, and I stumbled upon a talk by Iris Bohnet called ‘Gender Equality by Design’ at South by Southwest. Your listeners can find this talk on YouTube. It's fantastic, highly recommended. And she started to connect the dots for me around cognitive bias and social justice, which is to say: you might have someone who's hiring a web developer, and the image in their head when they think of a web developer might be a skinny white guy. And it's not because they explicitly think that men are better at programming than women. If you asked them, they'd say, oh, absolutely not. But the pattern they've seen throughout their lives, in movies and television and the way people talk, makes that equation. And so if they see a name at the top of a resume that doesn't quite fit, they start to give that resume the side-eye.

And so, seeing that something as terrible as a racial or gender bias could come down to something as simple, even as human, as pattern recognition, I just sort of threw myself into learning about cognitive bias. I would literally look up one cognitive bias a day and then move on to the next one, which made me insufferable in conversations. So my friends were like, Dave, please just get a podcast. And that interest kind of started to align with my day job in UX and led us to where we are now with Design for Cognitive Bias.

M: And a question about biases. When you Google cognitive bias online, there are hundreds and hundreds of biases that we are had by. And obviously, some biases are our brain's way of storing things; when they're unconscious, we're had by them. When you said that you looked up bias after bias, how did you actually organize this into something that made sense for you?

D: Well, it's funny. I kind of organized it by not organizing it. I literally went to the Wikipedia page of cognitive biases, and there's like a hundred on there, and I just went in order as they went down the page, which is sort of a mix of alphabetical order, and they're also kind of sorted by general topic, like decision-making biases or probability biases or whatever. And if you actually listen to my podcast, the secret sauce is that it's four seasons, and every season corresponds perfectly to sections on that Wikipedia page. I am totally transparent about that. I borrowed the rhyme and reason from the way that page was organized when I did the podcast. But yeah, that was how I got my head around it. What's interesting is that, even though there are lots of biases, after you look at enough of them they do generally follow a pattern of just making it easier to get through the day. Because that's all a bias really is: your mind taking a shortcut to get you through the day. And it's necessary. If we didn't have all these shortcuts, we'd have to make like a trillion decisions every day that were really carefully thought out, and we would never get anything done, right? So it's actually a good thing that we spend so much time on autopilot. The problem is when the autopilot gets it wrong, or gets it wrong in a way that causes harm. But even though there are hundreds of biases, they all boil down to one version or another of: I am really busy, please make this easier for me.

M: Right. And so when you operationalize that within an organization, so that we don't end up in the situations you're talking about, where the person in our brain is the skinny white male when it comes to technology, what do we do?

D: So, I'm glad you brought it up in terms of operationalization. There are any number of techniques or processes, which we can get into, that either mitigate the bias or use it for good. But the key is to make sure that those processes, those steps, are part of the budget. Because if you work in tech, if you work in client-facing situations, what you learn very quickly is that if it's not in the budget, it doesn't really exist. If it's not in the budget, it doesn't make it into the project plan. And once something's in the budget, and better yet, once it's in the template for the budget, because that's how a lot of organizations work: they make a template for a project type, and then every time someone wants to spin up a new project, it's embedded in it. That's really where you want to get to. So there's an exercise called red team blue team, where you bring in an outside perspective on the work you're doing to make sure that it isn't going to cause harm. That costs money. It takes time, and you have to pay the people who are doing it and all that stuff. So you need to make a budget for it. And hopefully, if it becomes part of the budget template, every time you do a project it's guaranteed there's going to be this level of ethical QA built in. But I think the operationalization part is really critical to making sure it isn't just, oh, I heard a guy give a talk about bias, it was interesting. You want to go from that to, no, really, every time we do a project, we do a red team to mitigate bias.

M: So can you talk about, I guess, a real-world example of where that has worked, and what the sort of costs are that we're talking about, and the benefit of that as well?

D: Sure. So going back to the skinny white guy example, the whole concept of anonymized resumes is: hey, why do you need to see the name at the top of the resume in the first place? What about that is actually helping you decide who to hire? If you think of it like a signal-to-noise problem, the signal, the thing you really need, is the qualifications, the experience, and the noise might be what you're reading into the name in terms of gender or race. So if I know there's this vulnerability where, if I see a male name, I'm just going to give it more credit whether I want to or not, maybe I just remove the name. And the City of Philadelphia actually did a round of anonymized hiring for a web developer position. And the cost, well, there are all these sorts of difficulties. One is that you have to print out the resume. That's actually the best way to anonymize it: have an intern who has no say in the process print it out physically and redact it, like a classified document. And then there's a little extra work you've got to put in, because if it's a web developer, one of the things you might want to do, which they wanted to do, was look at the GitHub profile of that web developer and see their portfolio, so to speak. But the second you do that, you see all their personal information, and it ruins the experiment. So they literally wrote a Chrome plugin that would anonymize all the personal information as the page loaded. And then they put that code back on GitHub, so if you want to try this yourself, you can. So the cost usually means, hey, you have to take some extra steps, and extra steps take time, and extra time costs money. So I like to say, you really have to want it. Because if you're doing it right, it will cost you; you will have to sacrifice some things.

M: And I think that's one of the things people sometimes forget: there is a cost to actually changing things, just like there's a cost to, you know, product design, or making something beautiful, or the materials we use and the finishes we have. Recruiting already costs, so if we want to do recruiting differently, that will also cost. With something like that example, have you got any information about how they saw the benefits?

D: Well, see, that's exactly the thing: what's the cost of not doing it? There's certainly a social cost, but I think there's also a benefit around what you learn. So another example: a company I used to work at, Think Company, did a round of anonymized hiring for, I think it was an apprenticeship for a web developer position. And in trying to figure out what information they needed to redact from the applications, it helped them understand what they really needed to know about who they were hiring. It's like, okay, we don't need to know their name, and it's going to distract us. We don't need to know where they went to college, because that could also distract us. It would distract us because, if they went to Harvard instead of some school I've never heard of, I might think they're better. But, you know, George Bush and Barack Obama went to Harvard, and those are two very different people, right? So that's not going to tell me anything. Even knowing the name of the company they worked at could mess me up, because I could assume, oh, they were at Facebook, they must be great. Well, wait a minute, they could work at a company you've never heard of and also be great. So what that forced them to do was find what actually matters. I do need to know what they've worked on, how they work, and what skill sets they've developed, regardless of where they've developed them. It forces you to ask the question: what am I actually hiring for? And if you do that, you'll find you'll have a much shorter resume and a much more efficient process, because I'm not asking for all this useless information. Ideally, where you want to get to is to say, I don't want to accept the given, the LinkedIn-profile approach to resumes. I just want to start from: if I built this from scratch, what would I ask?
Or, if I only had five things to ask for, what would they be? And you'll find yourself being much more efficient, and much less biased about the information you're getting.

M: Absolutely. And, you know, this is a really lovely example, so I'd like to continue with it a little bit more and think about, and I know you've spoken about this a lot in the past as well, this idea that we bring that person in, we talk about how we'd like diversity in organizations, but when the person comes in, we don't necessarily want them to behave or act differently. So we'll often see situations where people will say, well, we'd like more, say, women CEOs; however, they're just not outspoken enough, or they don't have the right leadership qualities, et cetera. Or, we'd like more people of colour in leadership, but they're not speaking out the way that we'd like them to. So in fact, sometimes one of the things that we see is that people would like diversity, but when the diversity comes into the organization, we'd also like to kind of beat them back into fit again. Can you tell me a bit more about that and your thinking around this?

D: Yeah, I think this is a big problem. And I think it comes back to what people mean when they say they want diversity. Do you want diversity, or do you want a group of people who look different from you but behave exactly as you do? Because that isn't diversity, that's assimilation. And I think that rather than look for culture fit, we should look for culture growth. If I'm bringing someone into my organization and I'm thinking about culture fit, that's exactly what I'm going to do: I'm going to assimilate them and say, this is how we drink our tea, drink it our way. And I'll feel good, because, hey, people who look like that didn't used to get to drink my tea, but now they do, so I'm being diverse. Versus saying, hey, I don't want this company to stay the same forever. In fact, that would be bad. I want it to grow. It is a living thing; it should change over time. And in order for it to change, I can't just keep hiring people who look and act like us. I need to hire people who are not like us. That's the only way we're going to change. And if that's my attitude, if that's my goal, when they come in I'm not going to say, hey, drink our tea the way we drink our tea. I'm going to say, hey, what do you drink? Is there anything we could be drinking or eating that we never thought of? Let's try that. I want our company to be different after we've hired you, versus...

M: Oh, sorry, I just interrupted you. I got quite excited, because really what you're talking about is with, not for.

D: Right, exactly. That's the whole principle of with, not for. It's this notion that if I invite you to design something with me, to build something with me, I'm doing it because I expect a better result with you there, rather than the same result faster. The analogy I like to use, which will at least work for comic book nerds, is the Avengers. If I already have Iron Man on my team, I don't need another Iron Man. I need a Thor. I need a Captain Marvel. I need a Scarlet Witch. I need somebody who has a different power set. It would be useless for me to hire a Scarlet Witch and say, okay, great, here's your Iron Man suit. No, that's not why I hired you. I hired you because you have a different power set. So I think that's how we should be approaching hiring: how do we make this team more interesting, powerful, different than it was before? Not, how do we have the same thing, bigger?

M: And that's really, you know, I think that often when we speak to organizations, that's what they want. However, when people come into an organization, it's really hard not to be assimilated, and it's really hard for the people around that person too. Unless we build a culture that is a with-not-for culture, it's quite difficult for the maverick, the person who's different, to actually sit in that space. So do you have any advice, or experience, on how people can deal with that? Because my sense is this is actually a cognitive bias issue.

D: Yeah. I mean, I think there are a lot of things, but I think that the fundamental paradigm shift that has to happen is in the metrics: we have to have a different goal than simply growing in a way where the thing that is growing stays the same and the only growth is in size. Because think about the things companies usually measure to indicate health: did we make more money last year than we made the year before? Do we have more people at the company than we did the year before? And these are things that indicate, and it's a mean analogy, cancerous growth. It's sort of like, we got bigger and bigger, and we used up the resources around us to get bigger and bigger. That's kind of what a cancer does. It's not a flattering comparison, but it's meaningful. Versus organic growth, which is to say, okay, we want to know more, we want to have more knowledge than we did last year. That's a more interesting goal. Because if I'm just concerned with metrics around how much money each division makes, that's where you get siloing from, because now I'm in competition with the different silos. But if the goal is for the whole company to be smarter, okay, silo one might have something that silo two doesn't know about, so it benefits the whole company if they share knowledge. All of a sudden, communication is incentivized. We often muck up the incentivization. But if we make the goal around how we are different at the end of the year than we were at the beginning, and what the ways are that we can be different that get us closer to our values, that is a much more interesting approach than just, did we make more money, are we bigger? I don't think you stop measuring those things, but I don't think you prioritize them in quite the same way.
It's like: are we bigger, but are we also smarter? Or did we get dumber when we got bigger? Did we get meaner when we got bigger? Did we become less trusting of our employees as we got bigger? Because that gives you a whole host of problems. I think there are ways you need to balance that.

M: And I think that's a really interesting conversation for the people listening to this podcast who happen to work in diversity and inclusion, because often the conversation is about the numbers, right? How many people of a different type, or a different mix, or how do we slice and dice people according to their categories? Rather than: how trusting are we, how safe is this, how mean is this, how comfortable are people, how much knowledge do we have? Which are far more difficult to measure in some ways, but far more interesting.

D: But I think there are ways to think about it. So another good analogy is gross domestic product, right? GDP. GDP is a very time-tested, accepted measure of how healthy a country is. If a country has a great GDP, they must be doing great, right? And by the way, this was just so amusing: my kid is learning from home, he's like 12 years old, and they were talking about GDP in social studies class. And he was throwing shade at it the whole time. They couldn't hear him, he was on mute, but he's like, I don't know if that's really a good measure. I was like, oh, I thought I was going to have to have a conversation with you about this, but you figured that out already. So, GDP, right.

M: Can I just say, your son is there already? How good is that?

D: I know, and I didn't even bring it up. I was so proud of him. But yeah, if you just look at the economic measures, you can have a whole bunch of stuff wrong with your country. We've got a pretty good GDP and we have huge inequality. I'm in the US, for those of you listening. So we have huge inequality, and you can have huge inequality and still get a great GDP out of it. You can have slavery and have a great GDP, because you're not paying for labor. So the GDP on its own isn't that awesome a measure. But if you look at places like New Zealand, where they've started to incorporate gross national happiness, and health and wellness, as a measure of success, which, again, is changing what the goal is, and then, downstream from that, changing the incentives, you start to see a different response. Now, I don't know specifically how they are measuring gross national health or gross national happiness, but they're using something, and that as a metric has been around for at least 10 years. So it's not like you have to start from scratch. There are people working on this problem. And it's just like with alternative energy: if you devote time and energy to it, you get results. It isn't an unsolvable problem.

M: Absolutely. So when we think about products and services: we've spoken a little bit about recruitment, and we've got all the way up to how companies can actually measure happiness and look at, I guess, different metrics. But closer to home, when you think about products and services that could have been designed better, designed with the end user rather than for them, what do you think about?

D: So I think a lot about Facebook, honestly. Facebook, I think, is the poster child for good intentions, and not-so-good intentions, gone even worse. Because there was a time when I was this big social media defender. There was a lot of outrage around social media, that it was making us dumber and making us meaner and all this stuff, and at the time I was clinging to real examples of things that social media was doing right. There was a story Clay Shirky used to tell about, I think it was called the Women of Ill Repute: a group of women, I believe this was in India, who were being assaulted by religious fanatics. If they were seen on the street, they would be attacked. And there was this event coming up where, if they were seen on the street on Valentine's Day, this religious group announced, we are going to attack you. And so basically they banded together on Facebook, and they created this group called the Women of Ill Repute, and they started making fun of that religious group, to the point where it was clear how many people were part of this Facebook group. Instead of a bunch of individual women being harassed, they became a constituency, a constituency to which the police now had to respond. And it got flipped to where, if that religious group were out on the street on Valentine's Day trying to attack women, the police would actually go after them. So Facebook became this vehicle for that. And there are countless examples, like how victims of abuse by the Catholic Church were able to use Facebook to mobilize. And so I had this vision of that. And it's not that Facebook didn't do that, but what I learned later, and what I understand now, is that it did it in spite of how Facebook was designed. That was a side effect, not the intent, because the intent has more to do with how Facebook makes money.
Facebook makes money on ads; ads require attention, and outrage gets attention. Bots get attention. Russia hacking the US election gets attention. It turns out to be a great way to sell ads. And so I think there are so many different ways to design, and that's what I've been learning and talking about. There are so many different ways to design places like Facebook for pro-social ends and means, and to optimize for things like the Women of Ill Repute, instead of making them an unexpected, surprising good thing. And that's what I'm preaching now. With Facebook, I can name 10 different things you could do differently, looking at things like vTaiwan, for example. That's a completely different, much more conscious approach: hey, we're making this specific design choice to add friction to trolling and incentivize nuance. There are ways to use design to do that. And that's why Facebook, to me, is sort of like: if you can fix Facebook, you can fix anything.

M: And the lessons as well, right? So when you think about that example, if we don't do it like Facebook, what does that look like?

D: Yeah. There are always two approaches in any revolution. There's: how do we fix what's there, what's accepted and institutionalized and already has momentum and scale? And: how do we just build from scratch? I forget the exact phrasing, but in systems design there's a sort of, I think it's H1 and H2. Basically, there's a type of approach to fixing something that says, okay, let's tweak what we've got now to make it less harmful. And then there's an H2 approach that goes beyond that curve into, oh no, let's start with a completely different paradigm. So I think the completely different paradigm for Facebook would be: what if we built Facebook in a socialist society? What if the goal of Facebook was not extreme, rapid growth, but pro-social ends? And I think there's precedent there. If you look at a company like DuckDuckGo: there's a great testimony that the founder of DuckDuckGo gave to the Senate, which should be required reading for any startup founder, where he basically talks about how DuckDuckGo, which is a search engine that doesn't track your data, was GDPR-compliant before there was GDPR, and compliant with California's privacy law before there was a California privacy law, because he was never trying to screw anybody over. He never wanted your data. He just wanted to give you a useful search experience in a way that wasn't going to bilk you. He started from a moral, ethical place, and that still produced a very successful business. And I think that's another paradigm shift we need to make, away from "business or ethics, you've got to choose one". That should scare us right off the bat as an attitude. Instead, no, let's start with an ethical framework,
and within that, let's talk about how we can create something that's sustainable and that will help you send your kids to college.

M: Yes, absolutely. And I think that's what we all want, right? Most people don't want to work for an unethical organization. Most people don't want to be seen as unethical. So how do we make sure that we're doing both, all the time?

D: Yeah, and how do we start from that place? I think another good place to look here is design education. Most people don't learn ethics when they learn design. I mean, most people don't learn ethics when they learn anything. And I think that year one of any practice should be: let's make sure you understand ethics. Let's make sure you know how society works. Let's make sure you understand history, and how this thing that you're learning how to do has been used well and used horribly throughout history. So if I'm teaching data science, I want people to understand the role IBM played in the Holocaust. Data can be used really destructively; data could also be used to help end pandemics. Understand that whatever practice you have is not a neutral practice. That should be the first thing you learn, and then, okay, here's how you make a wireframe, here's how to do the research. And again, for anything, I want you learning ethics first. If you're going to learn how to collect garbage, I want you to learn ethics first. That's your year one. I definitely want you to learn ethics and history first.

M: Exactly. And when we think about this, I'm going to use your book, I guess, as an example, because I think it's such a great, practical way for people to think about some of these problems that are so big but can actually be broken down into things that are very manageable. You know, it's been a couple of years. Have you seen a difference, from when you first started thinking about writing the book to where you are now, in terms of the way companies have been thinking about these issues and engaging with them?

D: So in terms of companies not so much, I mean, there's a handful of like isolated incidents. Like you have the myriad of reactions to black lives matter in the summer of 2020, which range from legit. IBM saying we just going to pursue face recognition technology anymore, which I think at least it felt like a legitimate thing cause it hurt. Right? It meant that that is probably a bunch of research. Now that that is going to be it's going to be a loss on their balance sheet, right. That's going to be, you know, versus, you know, a competition of saying, Hey black lives matter and then like firing their head ethicists because she found that their language algorithms were unethical, you know that's Google in case anyone was missing. So, I think individual companies, I'm not seeing as much difference, where I am seeing a difference is at the ground's level. Right? If you think about collective action, everything from literal unionization where medium now has their has unionized, their tech workers have unionized. So Google where they're starting to unionize. So that notion of collective action, that we're more powerful together as people who care about this and that there are things that we can do together. I think that ethos is fairly new. Like there was no talk of tech unionization in like the early two thousands or even most of the 2010. So like that's really a growing movement. And, and I'm finding like group after group, after group, I'm having trouble keeping track of them, but they're making a dedicated effort to organize around these ideas. So everything from the design justice network, the tech workers coalition, right. Or the pro-social design movement, like there's all these different groups now that are saying, we care enough about this, to organize around that and actually try to come up with approaches and strategies. 
I mean, it's, it's not unlike the black lives matter, but, but it's a sort of safe, like if we can organize together and know what we want and what we need, we can get more done than if we just individually yell on Twitter. Hey, I wish you were more ethical.

M: Absolutely, absolutely. And it's really lovely, I think, to finish on this note, because sometimes we feel like there's not enough we can do, and that these problems are way too big. But it's lovely to hear about that change, even at a time when there's so much chaos going on. So thank you so much.

D: Yeah, my pleasure.

M: Thank you for listening to this podcast. If inclusive design is something you'd like to learn more about, or you'd like to work together with us on some of the things we've spoken about today, connect with me on LinkedIn or head to our website, centerforinclusivedesign.org.au. All the links can be found in the show notes, created by Amy Melson. Thanks so much. I hope you enjoyed it.