How do we harness AI for good? How can we make it accessible and valuable for as many people as possible? A panel of industry experts, including Dona Sarkar from Microsoft, tackled these pivotal questions in a special online seminar recorded Tuesday, 27 June. Have a listen to what the future may bring on our two-part special podcast.
Manisha: Artificial intelligence is certainly grabbing the headlines, but is it really safe? And is it for the better? Welcome to a special edition of With Not For. My name is Manisha Amin from the Centre for Inclusive Design. And what you’re about to hear is part one of a panel discussion, Harnessing Artificial Intelligence for Good. We recorded it earlier this year on Gadigal land in Sydney. The discussion was part of Sydney City’s Visiting Entrepreneurs Program in conjunction with the Centre for Social Justice and Inclusion at the University of Technology Sydney and, of course, the Centre for Inclusive Design.
Before we begin, an apology: the quality of the audio isn’t great, but what’s being said is really important. So welcome to part one. The first person you’ll hear is Professor Verity Firth, Pro Vice-Chancellor, Centre for Social Justice and Inclusion at UTS.
Verity: So it’s an honour today to welcome a really great set of speakers. I’ll start with Dona Sarkar. Dona Sarkar is the Director of Technology at Microsoft Accessibility. Dona’s current focus is using AI inclusively to ensure Microsoft products are usable by people with disabilities, neurodiversity and mental health concerns. She also runs her own fashion business, coaches professionals and is a published fiction and non‑fiction author. Welcome, Dona.
Ed Santow is an Industry Professor and the Director of Policy and Governance at the UTS Human Technology Institute. From 2016 to 2021, Ed was Australia’s Human Rights Commissioner, where he led major projects in the areas of new technology and human rights, LGBTI rights, refugees and migration. Welcome, Ed.
And Wayne Hawkins is the Director of Inclusion with the Australian Communications Consumer Action Network. Wayne joined ACCAN in 2010 and has led ACCAN’s work on telecommunications access for consumers with disability, broadcast television access for people with disability and emergency services. Welcome, Wayne.
And finally, I’d like to introduce our facilitator for the discussion today. Manisha Amin is the CEO at the Centre for Inclusive Design. Manisha is a thought leader in the power of thinking from the edge. She has a unique talent for seeing beyond the horizon to emerging trends, defining them and building powerful communities. She is the host of the With Not For podcast. Welcome Manisha.
I am now going to hand ‑ I’ve got my coffee. I’m going to sit back and listen with much interest to the panel and hand over to Manisha to lead our discussion.
Manisha: Thank you, Verity. It’s such a privilege to be here. It’s a privilege to be here with everyone on this call. I’d also just like to acknowledge any First Nations people on this call here today with us as well and their knowledge and the knowledge that they have brought to us from, you know, time immemorial and the knowledge that we need to look at for creating the future that we want to create as well.
And I’m going to get right into it with my first question because, you know, there have been a lot of presentations recently on AI and a lot of presentations on why AI is great and a lot of presentations on why AI is terrible. What we’d like to do today is really focus on how we can make the world a better place and what we need to do from a human perspective to make sure that AI actually works for us and with us, not against us.
I think one of the interesting things about this is, you know, when we think about AI or technology, I think it’s only as good as the humans that are wielding that technology and creating that technology. So my first question to you, Wayne, is really around when we think about this notion of harnessing AI, who do you think about and who do you want it to be good for and how do you focus your work in that area?
Wayne: Thanks, Manisha. Firstly, a quick thank you for being invited to participate. It’s really a great pleasure to be able to participate in this discussion today.
Getting to the question, I think, you know, from my professional role as a member of ACCAN, which is the consumer representative body, you know, I think primarily when I think of AI and harnessing AI for good, you know, I think in terms of all consumers and all end users, you know, so that AI benefits us as a community without having those harms that most of us are aware of, the potential harms that can come from that.
As the Director of Inclusion at ACCAN and somebody who lives with a disability, as a person who’s blind, you know, my focus is also on how we can harness technology to improve the lives of people with disability. You know, when I say that, I also mean seniors who have age‑related impairments and who may not identify as people with disability, but can benefit from the same assistive services or accessibility features.
I think there are two parts of this for me. One is about, you know, end‑user services and products, and then there’s a second piece about AI, which is a sort of broader community and society question about the decision‑making aspects of artificial intelligence. So just focusing on the assistive part of it for end users, you know, there’s such great potential with the emerging technologies for everybody, but specifically for people with disability it can be really transforming in their lives.
You know, my own experience using ‑ I use Seeing AI, which is a Microsoft‑developed app for people with vision impairment which I can use to get around, find my way, which is, you know, remarkably helpful. I also use the benefit of speech recognition software, which sometimes is helpful and sometimes I end up yelling at the machine because it doesn’t understand what I said. But, you know, the benefits of that in the context of, like, home assistants for people with disability are just profound, you know, the way that that can really change somebody’s life by having that capability embedded in their home to make their lives better.
And then I think in that context, you know, really what we need to do is make sure that the people who are developing that are really looking at co‑design, you know, so people with impairments are included in the discussions and development of the products; a human rights approach, so that they understand that making these products usable for all helps enable everybody’s human rights. And I also think that one of the key aspects of what needs to happen, and where a big focus needs to be, is education across the whole AI ecosystem, so that the developers, the coders, everybody who’s involved in creating new technologies understands that everybody has to be included ‑ not just the typical consumer, but all consumers.
So that’s what my interest is in this and, you know, I see huge possibilities, but I am also very aware that there are potential issues that need to be acknowledged and addressed if everyone is going to benefit.
Manisha: Thank you, Wayne. And Dona, from your perspective then within Microsoft, who do you actually look at when you’re thinking about harnessing AI for good and what’s the focus in your world?
Dona: First, Wayne, thank you so much for your comments because I’m sitting here nodding aggressively because I agree with everything you’re saying.
My work specifically is I head up the Inclusive AI program for Microsoft and that includes a big spectrum of things and the area that I am personally very invested in is around accessibility and that includes people with disabilities, folks who are blind, folks who are deaf, limited mobility, speech impediments, whether they’re born with it or something we age into, right, because both of these things are true and we are all going to develop a disability at some point or another ‑ through accident, age, just existing and the human condition. So that’s one thing.
The other part is looking at neurodiversity ‑ I am neurodivergent, I’m dyslexic ‑ and a very large percentage of the tech industry is neurodivergent. Folks have ADHD, autism, dyslexia ‑ very, very common, I would say it’s about 50% ‑ and then of course folks with mental health conditions, and around the pandemic it became very obvious how many of us do live with a mental health condition, whether it’s temporary or situational or permanent, whether it’s social anxiety, seasonal depression, et cetera. So the gamut of accessibility needs is way broader than what I think we think of as a society, because we rarely consider invisible disability ‑ but most disabilities, around 80%, are invisible.
So as we build products, it’s so important to keep this in mind. There are four roles that I like people to play when building AI products and this is how we do it at Microsoft, it’s how we do it on our team. The first role is actually the AI or machine learning expert. They are extremely good at identifying the models that need to be trained, training the models, fine‑tuning the models, identifying how the model works. So this is someone who’s deep in calculus, they probably have been doing this work 10, 20, 30 years. This is not a very large percentage of people in the world. I am not from this world, to be clear, but we need diversity of people with disabilities to play these kinds of roles. The good news is neurodivergent people do extremely well in these kinds of roles, so that’s good.
The second role is the business expert ‑ like, what problem are we trying to solve with our AI? For example, when you talked about Seeing AI, the business problem there is how can blind people see the world autonomously, so that is the business problem. The person who understands the problem best is a blind person, so we need to make sure that the product management teams either have blind folks on them or consult with blind folks on what some of the key scenarios are. Reading the backs of medication, looking at food labels in grocery stores ‑ these are just some ways that blind folks can live autonomously without having to rely on a sighted assistant.
The third role is the product makers, and these are the actual individual designers, developers, testers, data scientists who sit down, fingers to keyboard or voice to keyboard, and they build the products. This is where the largest percentage of Microsoft lives because we are a company of 100,000 developers and product makers and we have a very large number of people with disabilities within our product teams by design, because the more people you have with disabilities in your product team, the better that product is going to be for everyone.
And then the fourth one is a new role that is very important for AI, which is the validator, the deployer and the teacher. They’re the people who take the AI product and teach the general population how to use it. They make sure that this AI is not biased, this AI is citing correct sources, this AI is actually solving that problem we identified in step 2, and that people understand how to use it to solve their specific problem.
This fourth role I think is incredibly interesting for people with disabilities because I would rather have a blind person, such as folks on my team, teach other blind people how to use AI than me. I shouldn’t be in that business. I am better at teaching other dyslexic people how to use GPT products than I am at teaching blind people. So I really truly believe we need more representation in these kinds of roles, whether you work for a company or whether it’s something you do independently. But honestly, everyone has a role to play in the AI‑verse and it has nothing to do with whether you’re a computer scientist or a data scientist by design. You have to have a keen and passionate curiosity about how this works and be in a place where you’re ready to bring your whole self to the table to be able to make these products really and truly usable for a global community.
Manisha: Thank you, Dona, and we’re going to come back to some of those points a little bit as we go through this conversation as well.
But first I’d like to move to Ed with the same question really in terms of the work you do ‑ the work you’ve done, but also the work you’re now doing in really focusing on where that biggest gap is and the work you do in resolving that.
Ed: Thank you very much for having me, Manisha, and I’d endorse everything that has just been said by Dona and by Wayne.
So when we talk about wanting AI to be good for everyone, that too often is code for actually being just good for me and I actually mean that very specifically, I mean people who are just like me ‑ white middle‑aged men without an obvious disability who are middle class, who basically the entire world seems to be designed for. I’m very fortunate to be in that position, but it’s not very helpful, right, because when we use that kind of heuristic or shorthand, that means that people continue to be left out. So it’s taking that approach I think that Dona laid out which is being quite systematic and rigorous about how you zero in on people who are not me and bring those different perspectives into the design and development process.
I’m happy to talk more about the work that we’re currently doing and I do want to kind of take up that opportunity, but I just want to go back into recent history a little bit and reflecting a little bit on some of the work that I did two roles ago when I was a human rights lawyer, so before being Commissioner. I did a lot of work with people with disability and a couple of the cases that were particularly important that we took were cases against Australia’s two biggest supermarket chains.
So they had moved on to online shopping, which of course is very important. They saw this, rightly perhaps, as a potential boon for people with disability who couldn’t easily go physically into their supermarkets and do their shopping there. But what we found was that both of their websites were not accessible, right, for people who are blind or have a vision impairment. So, you know, we did what we like to do, which is to say, well, this is a bit like a Mafia kind of project: this is the opportunity to fix things up before we take you to court. We ended up having to go to court, that was that, and we won.
But the really interesting part of this story was that one of those big supermarket chains said, “Well, look, you know, we’ve learnt our lesson, we totally get this now and we’re going to be laser focused on making sure that we’re accessible from now and forever”, but that’s not what happened. What ended up happening was that there was a key staff member who left the team and we had to run the case all over again. It really felt like Groundhog Day because with a later upgrade of their website, it became clear that the same problems had seeped back in again.
The reason I’m telling this slightly boring law story is because it is that rigorous approach. These problems as we see them are sociotechnical. They are perhaps more socio than technical. It is about that kind of approach that in particular Dona was outlining where you need to get the right people in the right room, you need to have a really clear process to make sure that they’re heard, that you’re kind of engaging the insights and making sure that they’re real. So what I was talking about there is not fancy new AI, it was something as boring and simple as a website, but the principle is the same and we need to continue to take those lessons, learn from them and apply those principles as AI becomes kind of all around us.
Manisha: Thanks so much, Ed, and I want to continue from that point, actually, because I think it brings up one of these really interesting issues around some of the pitfalls that come not just with technology, but also with AI, and it’s not one of the questions we were thinking about asking, but it just seems to me that both yourself and Dona have touched on something really important here, which is: how do we fix something when it’s not right? It seems to me, just listening to your story as well, that often when there’s a problem, we do a whole lot to fix that problem, but then ‑ when we think about agile systems and cascading workflows and all of those people working in technology ‑ it’s very easy for the system to almost bounce back to solving for someone who looks like yourself. So what are some of the things that you think about in this area that need to be considered, Ed, when you’re looking at developing AI technologies?
Ed: I mean, happy to take the first stab at that. The starting point is that I think we’ve reached the end of the road with kind of high‑level, vague ethics principles, right? We’ve had lots of those over the last few years and they all say things that are pretty similar and are pretty unobjectionable. So they often have, you know, do no harm, respect privacy, think about diversity. All of those things ‑ as I say, I wouldn’t disagree with any of them, but they’re all hopeless in the sense that when you actually look at the empirical data, they don’t have any discernible impact, or the vast, vast majority of them have no discernible impact.
So I just want to be really clear, that’s not the way of doing it. The way of doing it is to kind of approach it like every other field of endeavour and for some reason, we sometimes put technology in a separate category. It’s not, right? And AI specifically is not. What we need to do is we need to look really carefully at how technology or a particular product or service can engage people’s human rights. You start with the big bits, but usually it boils down to a much smaller list, and then kind of ask the question starting with the law, how do we stay on the correct side of the law here, how do we make sure that what we’re doing is not going to be discriminatory on the basis of someone’s disability or their race or their age or whatever.
Those questions can be quite difficult and I guess there’s a little point I just quickly want to make here, which is that when you have those multidisciplinary teams ‑ again, as Dona was outlining ‑ they can be especially difficult. I can’t tell you the number of times that I, as a boring lawyer, have kind of irritated the kind of more technical engineer, the data scientists, the machine learning types so much by saying, “Well, look, I’m sorry, there’s a bit of grey here” because eventually every conversation ends with someone banging their shoe on the table and just saying, “Enough, stop going on about it, Ed, just tell me what is the number.”
I remember the first time I had this conversation I said, “What do you mean, what’s the number?”, and they said, “Well, if you want to make sure that 50% of our customers are male and 50% are female or X number of home loan applicants have a disability, we can do that tomorrow, we can do that like in 10 seconds, we just change the parameters”, and unfortunately it’s often not that simple. So there are some things that are absolutely clearcut, black and white, and then there are other things that actually require some careful thinking.
So if you boil all of that down, there’s ethics which has been useful, perhaps, in one thing, which is to kind of raise people’s awareness of what’s going on here; there’s the law, which we need to be much more focused on what are the actual legal requirements; and then the little caveat to that is sometimes in applying the law, particularly in the area of disability discrimination, there is some grey area and that’s not susceptible to a simple arithmetic solution. Sometimes it requires very, very careful working through in those multidisciplinary teams.
Manisha: And Dona, I’d like your view on the same question because I think what Ed has brought up is really important and this notion of how we do it and not just how we bring the right people into the room, but what happens when there’s a problem, what happens when that complaint hits and things have to be redone? What do we do then?
Dona: This is so good because this is the literal conversation we have on our team, and I’m laughing because I work with our lawyers too and this is me. I’m like, “What is the percent accuracy we need to hit? Give me the number and I’ll get you there.” They’re like, “Oh, it depends.” I say, “I don’t understand this question, what do you mean?”, because for engineers it’s like: you need a number, hit the number. Lawyers are like, “It really depends on what you’re doing and the law.” “I don’t know what you’re saying.”
So I’m a firm believer in lawmakers, but lawmakers in the US really struggle with technology and making technical laws, right, because they don’t quite understand how it’s made. So sometimes the laws come in ‑ like, you remember when Italy banned ChatGPT randomly, right? They said, “Oh, it’s banned”, and of course every Italian knew how to figure out a way to VPN and use it anyway, which is what’s going to happen. Just outright banning things does not work, especially for technical things, because young people will find a way and everyone will. I’ve seen that happen in every country that’s ever banned things.
But to answer your question, Manisha, I very dramatically wrote these four things up here and I’m going to read them to you because I wanted to make sure I didn’t forget. There are four things to keep in mind for when we are building an AI product, okay, and we revisit them over and over again throughout the product life cycle from ideating the thing, just coming up with the concept, to coming up with the first ‑ doing the research, ideating, designing the first version, developing a prototype, testing it and then going all the way back. So we go through it over and over and over again. It’s repetitive and we do this for every AI product we build at Microsoft, which is in the hundreds, probably more. So the very first one that, you know, I talked about excessively is the team, making sure we have diversity in the team of those four roles.
The second one is the core central part of AI is the data, so what data are we using, is the data clean, is it organised, is it accurate, do we have the right sources, and if we can’t say we trust these data sources to give us good information, then we’re not building a product. So for example ‑‑
Manisha: Can I ask a question around that? I’m going to interrupt you a little bit. When we think about what clean data is and when we’re talking about diverse communities and vulnerable communities, what does clean mean to you, right, because sometimes the only people with fantastic clean data ‑ and sorry, Ed, I’m going to use you now through this whole presentation ‑ sometimes the people with the clean data, the great data, are people like Ed, not necessarily the people who are on the edge, the vulnerable communities, the marginalised communities, the people who are homeless. So what does clean mean to you in that scenario?
Dona: Are you asking me? Oh, okay. So what clean means to me is that this data is accurately representative. Now, the issue we have is that the data set is not big enough. So, for example ‑ and we just went through this, so I’m so excited we’re having this conversation ‑ there’s a company called Be My Eyes, and Wayne, you may know about this company, but it’s a service where a blind person can call and say, “I need to talk to a human because I’m pointing myself at a thing and I need someone to tell me what it says”. So they work with a team of volunteers all around the world. I volunteered for this. It’s like, “We’re looking at a refrigerator, there’s a carton of milk and it expired two days ago, do not drink this”. So it’s really like a representative data set of things that blind people are interested in.
So again, it’s a data set, but it’s not a huge data set. What we need to do when we build an AI product is weight this data set higher than the general data set if we’re building a product that has to work well for people with disability, and we know how to do that ‑ we know how to say let’s prioritise this data set and let’s factor it into the next version of GPT, which we did. Be My Eyes was actually bought by OpenAI to up their disability data set, because OpenAI knew and acknowledged, oh, our blind data set is not good, it kind of sucks. So by buying Be My Eyes and their data set, they were able to get a much better representation of what blind people are looking for.
This is going to be the same thing for voice, right, people who maybe have ALS, are losing their voice, people who are dyslexic and what does the world look like for dyslexic people. So there are data sets that exist and where there are not, we have to grow those. This is where we need people with disabilities to play roles in that first area of being involved with AI machinery.
So that’s the second category: making sure your data is clean, it’s organised and it’s accurate. It does not need to be the world’s biggest data set, because we can weight it more and make it more important.
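The up‑weighting Dona describes ‑ letting a small, high‑quality specialist data set count for far more than its raw size when drawing training examples ‑ can be sketched roughly like this. This is a minimal illustration with hypothetical data set names and an arbitrary weight, not Microsoft’s or OpenAI’s actual pipeline:

```python
import random

# Hypothetical corpora: a huge general data set and a much smaller
# disability-focused one (think Be My Eyes-style descriptions).
general_data = [f"general_{i}" for i in range(10_000)]
disability_data = [f"disability_{i}" for i in range(100)]

def weighted_sample(k, specialist_weight=50, seed=0):
    """Draw k training examples, up-weighting the small specialist set
    so it is not drowned out by the general corpus."""
    rng = random.Random(seed)
    pool = general_data + disability_data
    # Each specialist example counts `specialist_weight` times as much
    # as a general example when sampling.
    weights = [1] * len(general_data) + [specialist_weight] * len(disability_data)
    return rng.choices(pool, weights=weights, k=k)

sample = weighted_sample(1000)
share = sum(s.startswith("disability_") for s in sample) / len(sample)
# With weight 50, the specialist set (1% of the pool) supplies roughly
# a third of the draws: 100*50 / (10000 + 100*50) = 1/3 of the weight mass.
print(f"specialist share of sample: {share:.2f}")
```

The weight value is the knob: the specialist data stays small, but its influence on what the model sees is deliberately inflated.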
The third thing is context. I’m a huge fan of scoping AI projects. People say, “Oh, let’s use GPT‑4” ‑ which is an enormous large language model, a foundation model ‑ “to solve all of our life’s problems.” I say, “What are you trying to do?” So if I’m building, say, an accessibility bot and all I want to do is answer accessibility questions, what I would do in code is use the prompt parameter and say: only answer accessibility questions, stay in character, do not answer questions about anything else. I would give the system that command so my bot wouldn’t be giving legal advice or medical advice or writing poetry or thinking about its existence; all it would do is answer accessibility questions. And we have that ability through the prompt mechanism as well as the temperature mechanism.
The temperature parameter runs from zero to one. One is “be creative” ‑ that’s write haikus and brainstorm with me. Temperature zero is: answer the question, cite the sources. That’s where I would keep it. So I would be very specific with the context ‑ only accessibility ‑ keep the temperature at zero, don’t get creative, answer the question.
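The prompt‑and‑temperature scoping described here can be sketched as a request builder. The payload shape mirrors common chat‑completion APIs; treat the model name and field names as illustrative assumptions rather than any vendor’s exact contract:

```python
# A scoped, low-temperature request: the system prompt keeps the bot
# in character, and temperature 0 favours factual, repeatable answers.

SYSTEM_PROMPT = (
    "You are an accessibility assistant. Only answer questions about "
    "accessibility. Stay in character. If asked about anything else "
    "(legal advice, medical advice, poetry), politely decline."
)

def build_request(user_question: str) -> dict:
    """Assemble a chat request scoped to accessibility questions only."""
    return {
        "model": "gpt-4",       # illustrative model name
        "temperature": 0,       # 0 = answer and cite, don't get creative
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_question},
        ],
    }

req = build_request("How do I write good alt text for images?")
print(req["temperature"], req["messages"][0]["role"])
```

The point of the sketch is that scoping lives in two places: the system message constrains *what* the bot will discuss, and the temperature constrains *how* it answers.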
And then the fourth thing that I think is really important and people don’t do enough is use the moderation endpoints. Now, every AI product has moderation endpoints so that once the answer is generated, you can actually use post‑processing moderation to say: should these results show up, or be adjusted in any way, before they’re presented to the audience?
Now, we had a major win recently because I’m very annoying and we were able to go and “influence” positively the AI builder people to include disability bias in the moderation endpoints for GPT‑4, and there is a difference between GPT‑3 and GPT‑4 in terms of the moderation endpoint, because it did not used to have disability bias and now it does. So you’re going to see a difference in the results with GPT‑3 and GPT‑4 when it comes to disability. That’s what it takes. It takes people like us who do live with a disability to be that last role, right ‑ the validators, the trainers, the testers ‑ to say these aren’t good answers, I’m going to need you to change this, and the way we’re going to do it is through all of these steps: the data, the actual context, as well as the moderation.
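The post‑processing moderation Dona describes ‑ scoring a generated answer against moderation categories and deciding whether to show, hold or block it before it reaches the user ‑ might look something like this. The category names (including "disability_bias") and thresholds here are hypothetical; real moderation endpoints define their own schemas and scores:

```python
# A post-processing moderation gate: decide what to do with a generated
# answer given per-category moderation scores in the range 0.0-1.0.

BLOCK_THRESHOLD = 0.8   # above this, never show the answer
FLAG_THRESHOLD = 0.4    # above this, hold for human review

def moderate(answer: str, scores: dict[str, float]) -> tuple[str, str]:
    """Return (decision, text) based on the worst category score."""
    worst = max(scores.values(), default=0.0)
    if worst >= BLOCK_THRESHOLD:
        return "block", "Sorry, I can't share that response."
    if worst >= FLAG_THRESHOLD:
        return "review", answer  # queue for a human before showing
    return "show", answer

decision, text = moderate(
    "Screen readers announce alt text to blind users.",
    {"disability_bias": 0.05, "hate": 0.01},
)
print(decision)  # low scores on every category: "show"
```

Adding a category like disability bias to the endpoint means the gate can catch a class of bad answers it previously had no score for ‑ which is exactly the GPT‑3 versus GPT‑4 difference described above.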
Manisha: Thank you for listening to part one of our special presentation, Harnessing Artificial Intelligence for Good. Until next time, I’m Manisha Amin from the Centre for Inclusive Design.