How do we harness AI for good? How can we make it accessible and valuable for as many people as possible? A panel of industry experts, including Dona Sarkar from Microsoft, tackled these pivotal questions in a special online seminar recorded Tuesday, 27 June. Have a listen to what the future may bring on our two-part special podcast.
Transcript
Manisha: Thank you. Now, we have a lot of questions coming in through the question and answer function.
Dona: I’ve launched into a tirade. You asked me the question.
Manisha: No, no, it’s great, and I think this is what I really wanted, right? I think, to Ed’s point as well as Wayne’s point earlier, sometimes in these conversations the words we use seem very ephemeral and, you know, everyone talks about co‑design, we all talk about working together, but we don’t actually talk about what that means and so I’m really excited to have this conversation with you all.
On that note, one of the questions that has come through, Wayne, that I’d like to pass to you first is really a question around what happens when you adapt a product or service to suit one disadvantaged group using AI, but end up alienating either the majority or a sizeable group of existing customers and how do you manage that conundrum?
Wayne: That’s a very good question. I mean, a lot of the work that we’ve done around, you know, the disability sector ‑ you know, a solution for one group can be a barrier for another group within the community. The idea that disability is this homogenous group where one solution will solve everybody’s problems is not helpful at all. There are very different aspects.
But I think, you know, in prep for this discussion, I went back and I looked at Gerard Quinn’s report ‑ he’s the UN Special Rapporteur on the rights of persons with disabilities ‑ and in 2021 he gave a report to the UN specifically on AI and disability, and one of the big key takeaways from that report is that disability and human rights need to be at the centre of any development of AI products and services.
When we look at it from that perspective, there aren’t going to be people who are left out; it’s going to be inclusive of all of us, and I think that’s really the thing. You know, developing something that suits me as a blind person but disadvantages somebody who’s deaf is not an inclusive product or service, so why would we even consider developing that product or service if we know that people with different capabilities are going to be excluded? It just doesn’t make sense. It certainly doesn’t make sense in this conversation that we’re having today. You know, we all need to be included ‑ everyone from the Eds to the mes to people like Dona and people even further from the centre.
So I think it’s a really interesting question and it’s one that comes up a lot, but if we’re going to do this right with AI and new technologies, nobody gets left behind, nobody gets shut out.
Manisha: Wayne, I have a follow‑up question for you on that as well, because one of the things ‑ and we have spoken about this a lot, and we tend to end up speaking about this a lot ‑ is trying to solve for everyone at the same time. Let me reframe the question a little bit. Ten years ago I personally didn’t realise how gender sat on a spectrum the way I do today, and in the future there may be things from a disability perspective that I’ll understand in ways I don’t today ‑ you know what I mean. When we think about solving for everyone at the same time, how do you think we should approach this?
Wayne: I think it comes back to those two key pieces, you know: human rights, and working from the point of most exclusion inwards. I mean, I will say ‑ so people are probably rolling their eyes and going, “Oh, my God, what is he talking about?” I’m an idealist. You know, I’m not a technology expert, I don’t know how to code, I don’t know how to do that stuff, but I do know what I think is necessary for everyone to be included. And, you know, clearly it’s a wicked problem or we would have resolved it already.
But, yeah, I think if we come from a place of what is enabling everybody’s human rights and if we start with the people that are most marginalised and work from there in towards the middle, then we’ve got a much better chance of coming up with an outcome that will suit everybody and not just the individual pockets.
Manisha: Thank you, and I hope, Paul, that answers your question as well. I have another one here that I quite like. This is for you, Dona, and it’s actually following on from what you just spoke about, but also this notion of we need to solve for everyone and the conversation is actually around how ‑ what are Microsoft’s procedures or steps to collaborate with people with disability, deaf people especially, in that fourth role of validators?
There’s another piece here around are there any products for deaf people or actual products in the market and how do we get hold of them? And it looks like Dona might have frozen on the screen here and we’ll move her ‑ I think Dona is just hopping out. She’ll hop back in in a minute.
But while we’re waiting for Dona to come back, Ed, I have a question from Nad for you, and this one is about embedded bias in AI: how do we protect marginalised communities against societal bias being instantiated in AI, particularly around things like race and gender, as various studies have shown?
Ed: Yes, so I’m sure there’s a lot of people on this call that would be really familiar with this problem, but for many, including me, a couple of years ago it was kind of hidden.
So the basic phenomenon here is that if you’re using machine learning to do a task, right ‑ let’s say you’re using machine learning to recognise, you know, what is a person and what is a tomato, right? Your machine will not be able to do that at the outset. You have to teach it. That’s the whole learning part of machine learning. So you teach it by giving it as many different labelled pictures of people and as many different labelled pictures of tomatoes as you can, and over time it will learn to distinguish those two things.
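(To make the teaching step Ed describes concrete, here is a minimal sketch of that supervised‑learning loop using scikit‑learn; the features, data and classifier choice are illustrative assumptions, not anything the panellists built.)

```python
# Minimal sketch of supervised learning: show the model labelled
# examples and it learns to distinguish the two classes.
# Features and data are illustrative toys, not a real vision pipeline.
from sklearn.linear_model import LogisticRegression

# Each example is [roundness, redness]; label 1 = tomato, 0 = person.
X_train = [[0.9, 0.9], [0.8, 0.7], [0.2, 0.1], [0.3, 0.2]]
y_train = [1, 1, 0, 0]

model = LogisticRegression()
model.fit(X_train, y_train)          # "teaching" = fitting to labelled data

print(model.predict([[0.85, 0.8]]))  # -> [1], i.e. classified as a tomato
```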
The problem, of course, is that sometimes that data is, to use a term that you used before, you know, imperfect, right? So let’s talk about a different context. Let’s say the decision you’re trying to work out is how to teach a bank to make home loan decisions. You train it on 40 years of previous home loan decisions and it will learn what the bank sees as, you know, good customers and what it sees as bad customers. But when you think about that, if you take 40 years’ worth of previous decisions, that includes a whole bunch of decisions from the 1980s and 90s, when there was a lot of prejudice built in against people of colour, against women, against people with disability and others being seen as good loan customers.
So what I’m trying to say, in maybe a slightly convoluted way, is that problem 1 is the training data. If you’re using a machine learning system, you’ve got to assume your training data is going to be imperfect, because most real data is imperfect, and the more training data you include, the more likely it is that you’re going to improve your model in one sense but also include more imperfections as well. So you’ve got to confront that head on ‑ you’ve got to do your best to improve the quality of the data.
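(As a hedged illustration of that first problem, the sketch below trains a model on invented historical decisions that encode a prejudice and shows the model reproducing it; the columns, data and classifier are assumptions made for illustration only.)

```python
# Sketch: a model trained on prejudiced historical decisions learns
# the prejudice. Columns and data are invented for illustration.
from sklearn.tree import DecisionTreeClassifier

# Features: [income_band, group], where group (0 or 1) stands in for a
# protected attribute. In this invented history, group 1 was denied
# loans regardless of income.
X_hist = [[3, 0], [2, 0], [1, 0], [3, 1], [2, 1], [1, 1]]
y_hist = [1, 1, 0, 0, 0, 0]   # past approve/deny decisions, bias included

model = DecisionTreeClassifier().fit(X_hist, y_hist)

# Two applicants identical except for group membership:
print(model.predict([[3, 0], [3, 1]]))  # -> [1 0]: the bias is reproduced
```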
But the second point is you’ve got to assume that even having done all of that work, it’s not going to work ‑ the results are not going to be perfect, and that’s why we talk about these sociotechnical systems. If what we’re trying to do is take a process that is entirely a human decision‑making process and completely automate it in one single step, that is a really risky process, a really risky aim, especially if it’s quite a complex decision‑making situation.
So the difficulty there, I guess, is we want to make sure that when we use AI to make decisions, we’re creating decision‑making systems that allow the best insights from AI to come to the fore, but that we’re also empowering humans to weigh those bits of insight, evidence, whatever you want to call it, and ultimately make the right judgment about people.
So what we’ve found is that with complex decisions, if you wholly automate, you tend to get high rates of disadvantage, discrimination, injustice and unfairness. If you only use human decision‑making, you can also get high rates. But if you’re able to design the system really well, where you get the best of the human and the best of the machine working in a complementary way, then you’re starting to really cook with gas ‑ or electricity, I guess, these days ‑ because that’s something that can be much, much more effective and robust at making a good decision.
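(One common way to get that complementary design, sketched below under assumed threshold values and names, is to let the model decide only the cases it is confident about and route everything ambiguous to a human reviewer; this illustrates the pattern Ed describes rather than any panellist’s actual system.)

```python
# Sketch of a human-in-the-loop decision pattern: the model handles only
# high-confidence cases; ambiguous ones go to a person. The threshold
# value and the review_queue are assumptions for illustration.
def decide(model, applicant, review_queue, threshold=0.9):
    p_approve = model.predict_proba([applicant])[0][1]
    if p_approve >= threshold:
        return "approve"             # machine is confident it's a yes
    if p_approve <= 1 - threshold:
        return "decline"             # machine is confident it's a no
    review_queue.append(applicant)   # a human weighs the evidence
    return "refer to human"
```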
Manisha: I really like that, Ed, and, you know, the back of my brain is going what’s the balance, how do we get the balance right, and my sense is, Dona, some of the things you spoke about in terms of those four steps and involving people through the process are part of that as well.
So one of the questions that has been asked is really around that validation point and the person has asked the question related to people who are deaf and the process that they would have to go through in order to ‑ or the process Microsoft uses to include deaf people, but if you could talk about that not just specifically for deaf people but even just more broadly as well. And then the second part of that is ‑ we’ve heard a lot of different products mentioned today and this person would like to know where they can access some of those products based on their disability here, which is, you know, being deaf.
Dona: That’s a great question. So one thing we do a lot is have disability boards who we work with specifically for user studies from the beginning ‑ again, from the research phase ‑ saying, hey, come into our lab or our virtual lab with your device and show us how you use this.
And this might be an interesting story for some of you. Before the pandemic, we had a user study physically in our lab in Seattle where we had people with disabilities come in with their devices, and we noticed that many people had modified their devices. They’d put glue dots on certain keys, attached stickers, drilled holes in the tops and put pieces of rope through the holes, and we were wondering, like, what is happening?
It turns out it was to make it easier for people to open their laptops and find specific keys ‑ the power button, the print screen key, the join button, all of these things ‑ because especially for people with limited mobility or folks who are blind, it meant they didn’t have to mess around finding which was the top of the device or where the on button was.
So we thought that was such an interesting revelation ‑ and this was not the purpose of the exercise, which was to come in and try some new product ‑ that we decided to build the Surface Adaptive Kit, which is kind of a set of hardy stickers that you get and put on your laptop. I have them on right now to help me open this device more easily, because I have arthritis in my hands, so it’s easier for me to grab a loop and open my laptop, and because I’m dyslexic, I can’t read my keys, so I have a big raised X sticker on my print screen key, which I use a lot.
So I realised, like, wow ‑ some of the most simple solutions are co‑created with people with disabilities from the research phase. Before we even get to product development, it’s about observing how people use products in the wild and what problems they run into ‑ they’ve already come up with some interesting and genius solutions ‑ and asking how we can make those more available to everyone. Because a big problem is that when you modify your device, like drilling a hole in the top, you lose the warranty, and we don’t want that; you don’t lose your warranty if you put on a sticker or something like that. So that was a very interesting co‑design thing that we solved before the pandemic.
But to answer your question, we work with a lot of communities around the world to identify people in the blind community, low vision community, colour blind community, deaf community, hard of hearing community, limited mobility (upper body and lower body), speech, neurodiversity and mental health communities, et cetera, and we tend to work with these folks because they are deeply connected with disability groups at every phase. For example: if we were to build a sign language view in Teams, what would some must‑have scenarios be? That was a real question we asked two years ago, and we worked with deaf communities all over the world to build the Teams Sign Language View, which exists now, so that the sign interpreter will always be pinned and will always be 50% of the screen. It doesn’t matter what else is going on ‑ the deaf person can always pin the sign interpreter view on their own screen, so they will always have the interpreter up no matter what content or speakers are being shown.
That was something that we did in collaboration with the deaf community starting 2020ish, and it launched last year. It’s been received very well because it was co‑designed with the deaf community, by the deaf community, and the head of development for that product is deaf. So those are the kinds of things we do quite a bit.
And then the second part of the question was what are some deaf‑specific products and features that we have implemented. This is near and dear to me because my boss is deaf, so we think deeply about how we can make life easier and more equitable for people like her, but also for the global community of people who are either deaf or losing their hearing. The losing‑their‑hearing audience is very interesting because that is all the people who listen to music very loudly with headphones like this ‑ I’m holding up a pair of DJ‑looking headphones ‑ and who ignored their parents when they said, “You’ll lose your hearing”.
So that’s about a billion and a half people, by the way, who’ve lost their hearing in at least one ear by the age of 40. That is a big audience, not a small one. So probably our most signature product is called live captions. It’s done through our Azure service, and any business can implement live captions within their app; it can work offline or online, and they work within Microsoft products such as Windows. You can turn on Windows live captions ‑ even if you’re not connected, you can have captions on. They work within Teams, they work within PowerPoint. I always have them on in PowerPoint; I just check the box to always have them on.
I present a lot in different countries, so I’ll switch the language. I was in the Netherlands and it picked things up pretty well ‑ it was able to translate my American accent into Dutch without too much of an issue. So live captions is a big one, and the sign language view is another one.
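(For developers curious what this might look like in practice, here is a rough sketch of live captions with translation using the Azure Speech SDK for Python; the key, region and language choices are placeholders, and this is an illustrative sketch rather than how Microsoft’s own captioning features are implemented.)

```python
# Sketch: live captions with on-the-fly translation via the Azure
# Speech SDK. The subscription key, region and languages are placeholders.
import azure.cognitiveservices.speech as speechsdk

config = speechsdk.translation.SpeechTranslationConfig(
    subscription="YOUR_KEY", region="YOUR_REGION")
config.speech_recognition_language = "en-US"  # the speaker's language
config.add_target_language("nl")              # caption in Dutch

recognizer = speechsdk.translation.TranslationRecognizer(
    translation_config=config)

# Print a partial caption each time the recogniser updates its hypothesis.
recognizer.recognizing.connect(
    lambda evt: print(evt.result.translations["nl"]))

recognizer.start_continuous_recognition()
input("Captioning from the microphone... press Enter to stop.\n")
recognizer.stop_continuous_recognition()
```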
And then the third one that I think is very interesting is all of the investments we’re making into the operating system, like Windows, around hearing aid integration. So LE Audio ‑ the Bluetooth Low Energy audio protocol for hearing aids; it’s a protocol, not a connector ‑ and how we can take some of the noise cancelling technology that we’ve been putting into our devices, as well as Teams, over the last few years and put that into the various hearing aids that would like to connect to those devices. So those are the three we’re looking at.
But again, the deaf and hard of hearing population is an amazing opportunity because in the US it just became legal to sell hearing aids over the counter. So I believe we’re going to see a huge Warby Parker type situation with hearing aids take place over the next few years ‑ there is going to be a Warby Parker for hearing aids. I don’t know who’s going to build it, but there’s a billion‑dollar business in there somewhere.
Manisha: Thank you. Now, we are starting to get to the end of this session, so I have one last question for you all: if there was one takeaway for people listening to us here today, what would that be, and what can we all do to ensure the technologies we have will benefit us? I’ll start with you, Wayne.
Wayne: I was hoping you were going to say somebody else then.
Manisha: Oh, not a problem. Would you like me to start with Ed?
Wayne: It’s fine. It’s fine. I think the takeaway is really that, you know, it’s possible. It’s possible for technology ‑ new technology, and AI specifically ‑ to be harnessed for good, and it’s possible for us to develop products and services using new technologies that don’t create unintended harm or unexpected exclusion. But in order for that to happen, we all have to be on the same page, thinking about how we make this work for everyone ‑ not in the sense that every product is going to be suitable for everyone, but in the sense that nobody is excluded from technology that’s developed for various groups because we haven’t thought about it right from the beginning.
So to me the big takeaway is that, you know, it’s possible to use this new technology to ‑ it’s like an acceptance speech for Miss America ‑ make the world a better place, to bring us all together, because there are huge, huge benefits for all of us, for individuals and for society at large, from these new technologies. And if we can focus on making sure that everybody’s brought along ‑ you know, like the rising tide that lifts all boats ‑ then we’ll be in a much better place. So for me it’s that: it’s possible to harness AI for good.
Manisha: Thank you so much, Wayne. And Ed?
Ed: Yeah, I’m going to “yes, and” Wayne ‑ I agree with what he is saying, and I agree that it’s definitely possible. I often feel that at this juncture, when AI has skipped over from the laboratory into the real world and is increasingly part of, you know, our lives in quite profound ways, and also in quite non‑profound ways, we have a crossroads moment, right? We have to make some really difficult choices.
So for me, accepting that it’s possible, the question becomes: what allows us to really grasp that AI for good? And there are sort of three key things I’d identify. The first is what I think everyone has talked about, which is this idea of bringing people into the room who are not always invited into the room. But it’s not enough just to, you know, show them a seat. The real hard work is enabling people to speak enough of each other’s languages so that you can have a genuinely useful conversation that is generative of good ideas and, you know, better outcomes. That’s point 1.
Point 2 is, you know, my hobby horse here, which is to look at the law, right? Ethics are useful and they fill gaps in the law very usefully, but the vast majority of these questions are not ethical questions as much as they are legal ones. So we need to be laser focused on that, because that also makes us much more attuned to what’s truly at stake. The whole thing about these ethical statements is that if you violate them, there’s no consequence; if you violate the law, there is. And increasingly, as the regulatory ecosystem is being activated in this space, I think we’re seeing some really important enforcement of the law starting to happen.
And then the third thing, I think, is to have key people who are able to act as translators ‑ to, you know, help people who are not technically across all of the issues here reach a minimum viable understanding of these issues, so that you can draw those people into the public debate. I mean, I saw there’s a question there from Catherine Ball which touches on this; I think it’s a really good question, and she’s exactly the kind of person who does that beautifully well.
Manisha: Thank you so much, Ed. Knowing that there are so many questions in the question and answer function too ‑ I hate leaving questions unanswered ‑ what I might do is see if we can grab some of these questions and, where possible, get some answers out to people with the thank‑you email.
But Dona, I’d like to finish with you ‑ final words, what can people actually take away from this conversation, what’s one key takeaway?
Dona: I think the biggest thing people can do is get involved. I’m a control freak ‑ I’m an American, it’s what I do: get involved where I don’t need to be. But I truly believe we need more creators and less consumers in AI, and I want people with disabilities, people from marginalised backgrounds, to play a creator role. I don’t want them to just consume this technology and say, “Oh, it’s not for me.” Make it for you, get involved, build the start‑ups, join up with companies doing the work. If people aren’t willing to hire you, start your own company.
Get together with enough people to have a majority and a quorum, so that you can actually change the shape of this industry. AI is completely within our control. It is not some mystical thing that’s going to take over the world; it’s not going to enslave humans. None of these things are going to happen. It is a bunch of zeros and ones that we have 100% control over, and what I want is for people with disabilities to be involved, to make lots of money, honestly, and to go and build the world that they wish existed. And if that is what I’m going to spend the rest of my career loudly advocating for, so be it. Thank you.
Manisha: Thank you so much, Dona. Thank you, Wayne, and thank you, Ed, for that discussion. I hope people have found it as stimulating as I have. I am now going to hand over to Verity to say some final words.
Verity: And I’ll say a few final words. What a wonderful way to end ‑ more creators, less consumers. I think that’s a message of power to everyone out there to start creating.
Thank you again, everyone, for joining us today. We’ve had a huge number of people sitting in the virtual room throughout the webinar. It’s been a really fascinating conversation. Thank you to Ed, Wayne, Dona and Manisha. As I said at the beginning, this is a partnership between the City of Sydney’s Visiting Entrepreneurs Program, the Centre for Inclusive Design and the Centre for Social Justice & Inclusion here at UTS, and we’ve been delighted to put this event on.
A few little announcements before I go. Firstly, this event has been recorded and will be turned into a podcast, which will be shared with everyone who registered for the event. There will also be a fully accessible online version available through YouTube, which again will be shared with all registrants. If you’d like to continue the conversation, the Centre for Inclusive Design releases a monthly newsletter with up‑to‑date news on all things inclusion, diversity and accessibility, which you can find on their website.
So thanks again to everyone for joining. I think you’ll agree with me that was a really interesting and dynamic conversation and we’ll see you all next time.
Thank you for listening to the final part of Harnessing Artificial Intelligence for Good. We hope you enjoyed this special presentation. So until next time, I’m Manisha Amin from the Centre for Inclusive Design.