SheCanCode's Spilling The T

Navigating the Ethical Labyrinth of AI and ML: A Deep Dive into Accountability, Bias, and Diversity

SheCanCode Season 12 Episode 11

We delve into the intricate web of ethical considerations surrounding the ubiquitous integration of Artificial Intelligence (AI) and Machine Learning (ML) technologies. Join us as we confront the ethical challenges head-on, exploring the profound implications they carry for society at large. 

Charlotte Byrne, Stephanie Quan and Emily Rudolph from Capco dissect the inherent biases entrenched within training data and algorithmic decision-making processes, shedding light on how these biases perpetuate systemic inequalities, particularly among marginalized communities. We engage in a candid discussion about the urgent need for proactive measures to address these biases, both in the workplace and beyond. 

Furthermore, we interrogate the role of training programs in equipping individuals to wield AI and ML tools responsibly, and the potential ramifications on performance management when these tools are wielded without adequate understanding. 

Tune in as we navigate the ethical labyrinth of AI and ML, striving towards a future where technology empowers rather than marginalizes. 

 

SheCanCode is a collaborative community of women in tech working together to tackle the tech gender gap.

Join our community to find a supportive network, opportunities, guidance and jobs, so you can excel in your tech career.

Kayleigh Bateman:

Hello everyone. Thank you for tuning in again. I am Kayleigh Bateman, the Content Director at SheCanCode, and today we are discussing Navigating the Ethical Labyrinth of AI and ML: a deep dive into accountability, bias and diversity. We're going to delve into the intricate web of ethical considerations surrounding artificial intelligence and machine learning technologies, and we're going to confront the ethical challenges head on, exploring the profound implications they carry for society at large. I've got three incredible women with me today from Capco: Stephanie, Charlotte and Emily. We're going to dissect this topic a little bit and discuss the inherent biases of AI and ML. Welcome, ladies. Thank you so much for coming on the podcast with us today. It is a pleasure to meet you all. We have lots to unpack today, so can we kick off with a bit of background about each of you, just to set the scene for our community? So, Emily, should we start with you?

Emily Rudolph:

Sure, happy to. So yes, my name's Emily Rudolph. I'm currently a principal consultant at Capco. This is my first job not being out in industry, so I'm new to the consulting world. By training I'm a lawyer. I've been working at the intersection of law, privacy, security and technology for about 10 years, a little bit more. My last job prior to this was head of privacy for a global geospatial data and mapping company, where I was in the thick of a lot of these issues, as they were very heavily into the machine learning space and how that intersects with location, which leads to all sorts of interesting problems in terms of planning and similar topics. I'm happy to be here today to talk a bit about my experience building out programs related to AI governance.

Kayleigh Bateman:

I love it when people surprise me with backgrounds that I don't think they're going to say. When you say lawyer and that you ended up working in tech, this is going to be one of those really interesting backgrounds of how you found your way into tech, and just a really interesting viewpoint. I love it when people don't come up with the traditional route of falling into tech. So yeah, Stephanie, yourself?

Stephanie Quan:

Yes, hi, thanks, nice to meet you, and I'm glad to be on the call today. So my name is Stephanie Quan. I'm a senior consultant with Capco. I've been with Capco for almost five years now, and my background in AI really started with my second master's. I started with a business degree in undergrad and then an MBA, and then I got my Master of Management in Artificial Intelligence from the Queen's University Smith School of Business in Canada, and then my postgrad from the Harvard Business Analytics Program at Harvard University. That really helped me continue upskilling at the intersection of business, the use cases around AI and the development of AI. Most of the consulting work that I do today is around digital transformation, but there's always a discussion around how we can apply analytics and artificial intelligence as we progress through the development of, for example, data pipelines, and also the regulations that surround it, especially in the financial services industry, where it's a highly regulated environment.

Kayleigh Bateman:

Amazing, amazing. Stephanie, it's great to have you on here and to hear a very different background as well. It's going to be a great discussion if everybody's got really different backgrounds in what they studied and where they come from. So thank you for coming on and sharing your story with us today. And last but not least, we have Charlotte.

Charlotte Byrne:

Hello, hi, nice to be here. So, yeah, my name is Charlotte and I am the Gen AI lead for Capco. I recently joined, about eight weeks ago, and prior to that I worked as the AI product lead in one of the big four. So I've got a lot of experience over the past number of years of building and deploying AI and LLMs at scale in organizations and grappling with some of the things you touched on in the intro: the ethics around it, the governance, how do we make sure what we're doing is right and responsible, and also what are some of the opportunities, but also, I suppose, the greatest risks, of using AI more broadly.

Charlotte Byrne:

And I think a lot of people who know me from school are surprised at where I sit and what I do. I wasn't the best student in school, let's say, and I didn't excel when it came to exams and grades. But one of the areas that interested me most was business, and I just really got that subject, so I ended up starting out doing a diploma in business when I finished school and moved on through my degree that way, and then, I would say, fell into technology. I never considered myself a tech enthusiast, but solving business problems over the past number of years led me down that technology route. And yeah, I've now been working in AI for close to a decade, so I'm glad to be here.

Kayleigh Bateman:

Incredible, and we love that. We love people falling into tech and squiggly career routes. That's why this podcast was started. And also, eight weeks in and you're already on a podcast. I mean, it's quite a story. So thank you for volunteering to come on and have a chat with us today. Ladies, you all have very different backgrounds and really interesting roles at Capco. We have a lot of questions to get through, and I'd love to kick off, Stephanie, with: what are some of the most pressing ethical challenges you see in the widespread adoption of AI and ML technologies today?

Stephanie Quan:

So I thought I would start this question with an interesting little experiment. I ran ChatGPT-4 on my phone, it's on an iPhone and anyone can do this, and I put in the question that you were going to ask me, to see right now, in real time, what ChatGPT would say. So ChatGPT says: several pressing ethical challenges accompany the widespread adoption of AI and ML technologies. These include issues like bias in algorithms, privacy concerns, job displacement, accountability for AI decisions and the potential for misuse, such as deepfake technology or autonomous weapons.

Stephanie Quan:

Now we can see how coherent that answer is, and that's actually where I want to start with this technology, especially LLMs, which is what ChatGPT-4 is, the state of the art. If this falls into the hands of a bad actor, we can see the implications that can stem from this technology. You can just turn on the TV today and you'll notice that without the right guardrails around the LLM or the AI technologies, we can fall into serious situations with huge societal implications. And so that's where I wanted to start: for today's pressing issues around ethics in AI, it's really around the output and the decision-making that comes from the AI, especially if there are associations made in the data that was used to pre-train the algorithms that would affect the output that others would see in society, for example.
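
To make the guardrails idea concrete, here is a minimal sketch of one form a guardrail can take: a post-processing check that screens a model's reply before it reaches the user. The blocklist, function name and messages are invented for illustration; production guardrails rely on classifiers, policy engines and human escalation rather than keyword matching.

```python
# Hypothetical output guardrail: screen a model's reply before it reaches the user.
# The blocklist and function name are illustrative only, not from any real product.
BLOCKED_TOPICS = ["weapon instructions", "personal data", "medical diagnosis"]

def guardrail_check(model_output: str) -> tuple[bool, str]:
    """Return (allowed, message); real guardrails are far more sophisticated."""
    lowered = model_output.lower()
    for topic in BLOCKED_TOPICS:
        if topic in lowered:
            return False, f"Blocked: output touches on '{topic}' and needs human review."
    return True, model_output

if __name__ == "__main__":
    raw_reply = "Here is a possible medical diagnosis based on your symptoms..."
    allowed, message = guardrail_check(raw_reply)
    print(allowed, message)  # False in this toy example: flagged for human review
```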

Kayleigh Bateman:

I love the fact that you mentioned guardrails. That phrase is just so important, because people are really worried about, as you said, pre-training AI and the things that it is learning from us. I don't know if you have all seen it, but in the UK there's an advert that has come out recently, I won't name the beauty brand, but they are talking about AI, and they make a pledge that they will not put in images that are distorted or that have been changed in any way, because they want real beauty to be looked at as real beauty.

Kayleigh Bateman:

So if you were to ask AI what real women look like, it comes up with all of these pictures of women that just don't look like women: they've all been doctored and their skin looks absolutely flawless, and that's not actually what women look like. So this beauty brand has made a pledge to say, you know what, we're not going to put those types of images into AI. And when you ask it what real women look like, it's going to come up with people that have their true hair and their true skin and their true beauty, and that comes in all shapes and forms. And you are right, it's the way that those machines are trained, the data that is put in and the answers it spits out, and that has to be a true reflection of everybody in society. So I love the fact that you refer to it as guardrails, which is so important.

Stephanie Quan:

Exactly. I think that without human intervention over the associations, the data that is input into the AI models and the pre-trained weights, we don't know what associations are coming out.

Kayleigh Bateman:

But we do know that there are inherent biases in the data, whether intentional or not. These models are trained on so much data that there could be biases, and again, associations that don't necessarily reflect our true values or the values that we want to put out into society. Exactly that, I couldn't agree more. And a lot of that is down to the people who are in charge of those technologies being mindful of it, and working at companies that are mindful of diversity as well; all of that trickles into the culture of that workforce and the people working on those products. Charlotte, I wanted to ask you a little bit about biases in training data, as we were talking about there. How do biases in training data and algorithmic decision-making perpetuate systemic inequalities, particularly for marginalized groups? It's a mouthful, Charlotte, but can you provide some real-world examples?

Charlotte Byrne:

Yeah, that is definitely a mouthful, and Stephanie alluded to some of it there. But really, when we have certain groups that are underrepresented in the data sets used to train a model, it will often perform in a way that ends up being biased against those underrepresented groups. Or, if biases are reflected in historical data, societal data and publicly available information, they will also feed into the models as they're being trained. An example would be insurance or credit scoring from banks: they can inadvertently incorporate biases from, say, postcodes, which may end up impacting people of a certain race. It's not a direct race correlation, but the neighborhood information could end up feeding through to that. So it's about not having the right data and inputs. An example I've used for years is around the seatbelt: crash dummies for seatbelt testing used to be based on men and the profile of the male body, and therefore women weren't really considered when it came to safety features such as seatbelts and airbags, and ended up actually being at a higher risk of injury than men as a result. That touches on the data a model is trained on, but it's also about decision-making, because a model will learn from previous decisions. A good example here is hiring decisions. If you have a hiring algorithm, it will often favor candidates that have been selected previously; over the past 10 years the majority of those were men, which in one case resulted in the algorithm downgrading resumes that included a reference to women, such as, say, women's chess team leader. It would automatically downgrade that based on the reference to women, because it was learning from previous decisions. So it's both the training data that's fed into it and the previous decisions that have been made.

Charlotte Byrne:

I think one example of this that really brings it to life for me is a challenge I was faced with, where a large university asked me for help. They said they were having a challenge with year-on-year dropouts throughout their university years, and by the time it got to, say, year three or four, they were running unprofitably because there was such a large dropout and not that many individuals can join a course in year three or four. So they wanted to know how they could use historical data to better allocate offers to future students who would most likely complete. At a high level, it maybe sounds like a good thing to do: how can we make sure that we have the highest level of completion and that the people who are most likely to complete pass through?

Charlotte Byrne:

When you start digging down into that data, you start really carving out certain demographics who are least likely to complete, and I had to go back, and I was quite junior in my career in AI at the time, and say this is not ethical, because what it will lead to is people with deserving scores not being made an offer based on their demographics. And you know, that was challenging. Actually, it wasn't challenging for me to go and do it, because it felt ethically and morally like the right thing to do. However, that could be missed, and those are the kinds of things that we need to look out for, where historic information could seep into future decisions.
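
Charlotte's hiring and admissions examples can be made concrete with a simple audit. The sketch below is a minimal, hypothetical Python check that compares a screening model's selection rates across groups, a rough demographic-parity style test; the candidate data, group labels and the 0.8 threshold (the informal "four-fifths rule") are all invented for illustration, and a real fairness review would go much deeper.

```python
# Hypothetical audit sketch: compare a screening model's selection rates by group.
# Data, group labels and the 0.8 threshold ("four-fifths rule") are illustrative only.
from collections import defaultdict

candidates = [
    {"group": "A", "selected": True},  {"group": "A", "selected": True},
    {"group": "A", "selected": False}, {"group": "B", "selected": True},
    {"group": "B", "selected": False}, {"group": "B", "selected": False},
]

def selection_rates(rows):
    totals, picked = defaultdict(int), defaultdict(int)
    for row in rows:
        totals[row["group"]] += 1
        picked[row["group"]] += int(row["selected"])
    return {g: picked[g] / totals[g] for g in totals}

rates = selection_rates(candidates)
ratio = min(rates.values()) / max(rates.values())
print(rates, f"disparate impact ratio = {ratio:.2f}")
if ratio < 0.8:  # rough rule of thumb; a real fairness review goes much deeper
    print("Potential adverse impact: send for human review before relying on the model.")
```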

Kayleigh Bateman:

Yes, yes, especially around hiring decisions, my gosh, so important. And you're right, even having yourself and different people from different backgrounds on the team, to be able to step in and say, hey, did you notice that this was doing this, or did you notice that this will lead to future decisions that perhaps are not quite right for our company.

Charlotte Byrne:

Just even having a different mindset to step in, yeah. And I think the seatbelt example is a good one of that: there was no intended bias. It was that there was a bunch of men in the crash dummy test scenario trying and testing it. So it wasn't that they were prioritizing themselves, they just didn't have that breadth of diversity to bring the thinking to the team. And that is where having diverse teams is really, really important as we move forward.

Kayleigh Bateman:

Yes, definitely, I couldn't agree more. Somebody said that to me actually the other day: they worked at a banking firm and they were rolling out a new product in the US, and they didn't have anybody on the team that was from the US. So when they rolled it out in the US, it didn't do very well. And they said, after a while they noticed that sometimes you need a diversity of voices before you do things, before you have that terrible PR moment where you go, well, that didn't work or that was a waste of money. Actually, having somebody from the audience you're trying to reach would have meant someone could point out that this wouldn't work for that audience, or that they would hate using payments that way.

Kayleigh Bateman:

But yeah, the more diversity you have on a team, the better when those questions come up. Emily, I can see you nodding away at Charlotte there, maybe at what she was saying about hiring practices. So I wanted to ask you: should workplaces be offering specific training programmes to help employees better understand and utilise AI and ML tools, and how might this impact performance management within organizations?

Emily Rudolph:

Sure. First off, definitely; you're never going to find somebody who works in compliance who tells you to do less training. At a minimum, companies really should be updating their processes and training to account for some of these new risks and challenges associated with this explosion of interest in AI and machine learning technologies. From a diversity and discrimination perspective, a lot of that ends up coming down to a more critical way of thinking about these processes and how they impact your own perceptions and your work. You brought up hiring, and to me the problem ends up being similar to things we've seen in the past, but it has some new flavors to it. There's always been a question of, okay, is your data biased? Can the bias in that data bring in problems so that your overall model ends up being discriminatory?

Emily Rudolph:

There was a different hiring example that came out in the news a few years ago, where the use of an AI hiring tool was resulting in discrimination against people with darker skin. Yes, there's an element of bias from previous hiring decisions feeding in, but the bias can be even trickier than that. One of the things found to feed into this was the fact that the facial recognition technology being used simply didn't work as well on darker skin, and so that led to people with darker-skinned faces being scored lower, which results in recruiters and hiring managers not being shown as many dark-skinned candidates, and that starts building a bias that nobody intended but was there in the underlying data. Another one, which involves a bit of critical thinking based off this kind of inherent bias, and this is getting back near and dear to some of that location stuff in my background, is looking really critically at those data sets; sometimes that requires some creative thinking. Let's say, for example, you wanted to figure out where to site certain locations for services, maybe a branch opening or something like that, and one way you might think to do that is with traffic data or cell phone location data. But what you have to be cognizant of is, okay, is it possible for there to be a difference in representation within that data set, perhaps based on smartphone ownership and usage? And this is very similar to problems that we've seen in the past.

Emily Rudolph:

Another one that's near to my heart, as someone who lives in Chicago, is a bit of the history there of discrimination in housing and how that's led to developments over time. You had the historical practice of redlining, where neighborhoods became blighted due to a practice in the insurance and mortgage industries, and so those properties were valued less. You then had future infrastructure decisions being made partially based on that, where highways ended up cutting through those neighborhoods, which has further knock-on impacts, and even today that is affecting what cell signals might be available for a neighborhood. So one of the things you have to be careful of here, and really try to think creatively about, is, okay, how do you avoid reinventing redlining for the 21st century?

Emily Rudolph:

So, at a broader level, I think we're only beginning to understand the true scope of the impacts of this technology, but in my view, companies absolutely should be exploring how they can integrate these tools into their ways of working. But just like any other significant technological shift, the ways of thinking and working of the users, the employees, need to be updated and changed, and like every other major technology adoption, that is something that will take years. One practice that I have seen, which I think bears promise, is that there's a place for broad-based training, but there's also a place for something like an accelerator or a center of excellence that can be stood up to really explore how some of these AI use cases might be applicable within the organization, and also to give a common choke point, basically, for training and for reviews, just to make sure that the use cases aren't running afoul of any of these complicated considerations.

Kayleigh Bateman:

Yes, I love all the examples that you just gave there. So important, and again, like what Charlotte said, it's about somebody stepping in, a human stepping in, and just saying, is this okay? Or, what we're doing now is going to have a huge impact on decisions that are being made in the future. I keep thinking, though, as somebody with a background in law, you must be having an absolute field day with the fact that artificial intelligence is moving so fast. Months ago we had a conversation about AI on this podcast, and I remember thinking all of the lawyers out there are now having all of these problems of, who is responsible for something? Oh, actually, I asked ChatGPT something; well, who is responsible for the answer it spat out when somebody made a big decision based on it? So yeah, from a lawyer's point of view, your brain all the time must be thinking, oh my gosh, who is responsible for that decision? And it's moving so fast as well; the industry is moving so fast to keep up with the speed of innovation within AI. So you're absolutely right, and there were some brilliant examples in there about what teams and companies can be doing to help their people. Stephanie, how do you believe diversity within tech teams can contribute to more ethical AI development, and can you share any success stories or best practices in promoting diversity within these teams?

Stephanie Quan:

Yeah, having diversity in building these AI models and algorithms is incredibly important.

Stephanie Quan:

Well, firstly, because AI is already impacting all our lives, whether you're using Siri or Google Home, and as a result it's important for everyone to have a solid understanding of what AI is and what the implications are.

Stephanie Quan:

And, to go to Charlotte's point, it really is about having that voice in the room on how we want to shape the future of how AI is integrated into our lives.

Stephanie Quan:

And then, stemming from that, it is useful to learn how to code, but some of the state-of-the-art AI now, LLMs, can help you with the coding process using just regular day-to-day language.

Stephanie Quan:

So really, it's more about learning what the AI models or architectures are and how the system is being fed data, for example, so that we have wide representation in the data being selected to pre-train these AI algorithms, and also how they affect our decision-making through the output of these AI models.

Stephanie Quan:

I think being an informed citizen about how AI is impacting our lives today, how we want AI to continue to impact our lives going into the future, and being a proactive, informed individual requires a basic, solid understanding of what AI is and how AI affects us. And that goes into diversity and representation in teams, but also a diverse representation of what it looks like in our day-to-day lives. We're not a homogenous group of people. We need to have all voices in the room, and we need to have that reflected in our teams, and that also helps with the output and decision-making of AI models, and even with how the AI models are developed.

Kayleigh Bateman:

Yes, I couldn't agree more, especially if you're a company that is making very big decisions based on AI and ML; you need to know where that data is coming from. It's one thing for individuals to use something as simple as ChatGPT, but a company using something on a wider scale and making huge decisions really needs to know where that data is coming from and how it got there. I don't know if you happened to see when ChatGPT went down a couple of days ago, a lot of the headlines said millions forced to use brains for the day, because it was like everybody was forced to think. It is a good thing that we have things like that now, where you can put questions in and it can spit out answers, but it still requires somebody to check things over, just to make sure they are correct, and if you do use an answer from AI, to make sure somebody has looked it over, that it makes sense and that you can make a decision based on it.

Kayleigh Bateman:

So I think it's great that we can do a lot more as a society, and we can free up time to do a lot more and make more data-based decisions. But I agree, you really need to know where that data came from and why it spat out that particular answer. So important.

Stephanie Quan:

There's also a huge discussion around what they call the jagged edge, in terms of how AI is integrated into our lives and how we use it: AI can do certain things and it can't do certain things. So it's important to know what it can and can't do by actually using it. There's always talk about it, but it's a very consumable platform or product today that can be used by anybody, and knowing how it affects your life, or how it can affect your life, comes from how you interact with the AI models and whether you're able to use them. The more you know how to use them, the more you know how they can actually fit in.

Stephanie Quan:

I think there's also a landmark study coming out from Harvard and MIT about the change in the workforce, and it found that AI does help to speed up the productivity of work being completed. Consultants who hadn't used AI like GPT before, and who were previously relying on their data scientists to help them with an analytics problem or with solving a more complex problem, were able to upskill and learn to integrate ChatGPT into their daily work on a classic consulting problem they might have been given, to see if they could solve it at a faster rate, and it brought everybody up to a certain level. So it is changing our lives, and knowing how it's changing our lives will help individuals, even in the workplace, decide how they want AI to further impact or augment their life in the future and be part of that discussion.

Kayleigh Bateman:

Yes, definitely. Yeah, using your time differently and bringing you up to a certain level with the help of something like that. I've said this on this podcast before, but you just touched on it, Stephanie, about how you interact with AI. Something I do, and I don't know if everyone else does it, is that I always feel a little bit rude asking ChatGPT questions. For some reason I want to put please at the end, and thank you, and in fact, if you say thank you, it will just say, no worries, good luck with that, and if you say good morning before you start, it tells you, good morning, how are you? I'm like, oh my gosh, if I'm more polite to it, will it give me a different answer? Nothing has happened as yet, but I was thinking, why do I interact with it that way? It's very strange. Charlotte, you were going to jump in with something on that topic.

Charlotte Byrne:

Yeah. Well, first of all, if you are polite to it, it actually does adapt its performance based on the language you use with it. If you're asking it a complex question, say, take your time, breathe before you answer this, and it will actually process the information more slowly to compute the answer, and quite often it will double-check its answer rather than give you the first output. So it's actually good to say those things. Now, there are two points you touched on there that came to my mind. The first is you were talking about where the data comes from, which is obviously really important.

Charlotte Byrne:

There's an example of two lawyers in the US, I think about a year ago, way back when ChatGPT first came out, who used it as part of a legal case.

Charlotte Byrne:

They were going into court the next day.

Charlotte Byrne:

They had been very busy and didn't have time to prepare, so they used ChatGPT to cite precedent cases for them to leverage in the courtroom, and they got three of them: one was true, one was kind of true but really stretched and not accurate, and the third one was completely made up, because ChatGPT wanted to give three cases as precedent, which is what it had been asked for, but it wasn't based on any factual data.

Charlotte Byrne:

A majority of it was made up to satisfy the user and give an answer it wanted to see, and that was then presented in court, because they didn't do the human validation to ask, okay, where has this information come from, before presenting it in front of a judge. That got found out, and I think they got slapped with a fine and a bit of a telling-off. But it just highlights the importance of this: yes, you can use it to accelerate your work, but you need to check where the information has come from, because the response it gives you is not necessarily built on reliable and grounded information.

Kayleigh Bateman:

Something so important, in front of a judge as well, my gosh. You would think that somebody would check that; that's a bit of a no-brainer. But I did a similar experiment with a press release recently, because I was thinking, could I run a press release through ChatGPT and ask it to write a news story? As a journalist you think, surely that can't be possible. Surely you still have to have somebody writing credible news with sources, and even with a press release with quotes, how is this going to spit something out that is accurate and can be put out?

Kayleigh Bateman:

If I worked for a publication, I'd want to know I wasn't going to get sued because it was all incorrect. So when I asked it to write a news article, it also changed all of the quotes that were included; it rewrote all of them. So if there was a spokesperson from Capco who had said something about a particular launch, it rewrote their quotes, and I thought, if I hadn't checked that and had put it out, that spokesperson would have been in touch with me saying, I didn't say this. It literally rewrote things inside people's quote marks. So you're right, you have to double-check everything if you do intend on using anything like that, because a lot of the time it does change things up and you didn't ask it to.

Charlotte Byrne:

Yeah, it's that human-in-the-loop approach that I'm an advocate for when building solutions for our clients: accelerate what you do, but it should never be the final determined output. There has to be a human in the loop for review.
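
One way teams operationalize the human-in-the-loop pattern Charlotte describes is a simple routing rule: outputs the system is less sure about go to a reviewer instead of straight out the door. The sketch below is hypothetical, assuming the model (or a separate scorer) supplies a confidence value; the names and threshold are illustrative.

```python
# Hypothetical human-in-the-loop routing: drafts are only auto-published above a
# confidence threshold; everything else goes to a review queue for a person to check.
from dataclasses import dataclass

@dataclass
class Draft:
    text: str
    confidence: float  # assumed to come from the model or a separate scoring step

REVIEW_QUEUE: list[Draft] = []

def route(draft: Draft, threshold: float = 0.9) -> str:
    if draft.confidence >= threshold:
        return f"auto-published: {draft.text}"
    REVIEW_QUEUE.append(draft)
    return f"sent to human reviewer: {draft.text}"

print(route(Draft("Quarterly summary...", 0.95)))
print(route(Draft("Legal citations for tomorrow's hearing...", 0.60)))  # goes to review
```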

Kayleigh Bateman:

Yes, accelerate what you do. Yes, that's a nice point. Charlotte, staying with you: as organizations move from piloting AI to scaling it, what does good governance look like to make sure it's being used responsibly?

Charlotte Byrne:

Okay. So I think this is actually probably one of the most asked questions I get when I speak to clients around AI and Gen AI, particularly when it comes to moving from pilot to scale. You can try and test these things in isolation and prove that they work, but when it comes to scaling, the governance is really important. We focus on an AI-enhanced governance framework, and what we mean by that is that most organizations have existing governance frameworks and guardrails in place, and I see this as an enhancement of them rather than something that is net new and separate, because often, if you try and build it separately from your existing processes, it can cause confusion and, frankly, decrease the likelihood of successful implementation. So when I approach this, I look at it in five core pillars. The first is roles and responsibilities; I think we touched on it earlier: who is the human stepping in? Where is the ownership and accountability for the decisions and outputs? The second is accounting for regulation, so ensuring that your governance process aligns with existing regulation but is also built to be agile and adapt to incoming regulation. We have the EU AI Act, which has been passed and is going to be drip-fed in over the next few years, so how do you make sure that what you're building is adaptable for that, for example? The third is testing. How you validate and test these models before you go live is really important, but equally, if not more, important is, once they're live and in production and in use, how do you build in a continuous testing mechanism around that?

Charlotte Byrne:

The fourth is policies, which we've kind of touched on: how do you put those guardrails in place and ensure that people are using certain approved technologies and using the right data in the right places? Those policies and guardrails are really important across the organization. And then, finally, there are the tools and processes: what are you actually going to use to enable all of the four previous points, what technologies do you need to roll out and what processes need to be put in place? Those are, I suppose, the bucket-level approaches to setting the governance framework for organizations.
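
Charlotte's third pillar, continuous testing once a model is live, is often implemented as drift monitoring. The sketch below is a minimal, hypothetical example using the population stability index (PSI) to compare a feature's training-time distribution against live traffic; the data, binning and the 0.2 alert threshold are illustrative rules of thumb, not a standard from any regulator or vendor.

```python
# Hypothetical drift-monitoring sketch: population stability index (PSI) on one feature.
# Data, bin count and the 0.2 alert threshold are illustrative rules of thumb.
import math

def psi(expected: list[float], actual: list[float], bins: int = 5) -> float:
    """PSI between a training-time sample and a live sample of one numeric feature."""
    lo, hi = min(expected), max(expected)

    def shares(values):
        counts = [0] * bins
        for v in values:
            idx = int((v - lo) / (hi - lo) * bins) if hi > lo else 0
            counts[max(0, min(idx, bins - 1))] += 1
        return [max(c / len(values), 1e-6) for c in counts]  # avoid log(0)

    e, a = shares(expected), shares(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

training_scores = [0.1, 0.2, 0.25, 0.3, 0.4, 0.5, 0.55, 0.6, 0.7, 0.8]
live_scores = [0.5, 0.55, 0.6, 0.65, 0.7, 0.75, 0.8, 0.85, 0.9, 0.95]
value = psi(training_scores, live_scores)
print(f"PSI = {value:.2f}")
if value > 0.2:  # common rule of thumb for a significant shift
    print("Distribution shift detected: trigger revalidation of the live model.")
```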

Kayleigh Bateman:

Amazing, brilliant points there for organizations to lay out. Also, on regulations, you mentioned the word agile and being able to adapt. I suppose that applies to all of those points, actually, because it's just moving so fast that for companies, being agile at this point in time and being able to adapt to the changes must be so important, to make sure you're always ready for an update.

Charlotte Byrne:

Yeah, everything from your operating model onwards. Obviously, people want to update their operating model to account for Gen AI technologies, but there's going to be new technology, new products that will build into different parts of our roles, and they will come quickly. So everything needs to be adaptable, through to even the technology itself: how do you build a technology in a way that you can embed emerging models and latest versions as and when they're released, rather than that traditional static implementation? But yeah, it applies across the board: being able to move at pace, which is not necessarily what all large organizations are built to do.

Kayleigh Bateman:

No, no, inherently not built to be agile, you're absolutely right. Emily, we touched upon regulations a little bit there, and I wanted to dig into that a little more with you. In your opinion, what role do regulatory bodies and governments play in ensuring ethical AI development and deployment, and are the current regulations adequate, or do we need stronger measures?

Emily Rudolph:

So I think that regulatory bodies do have a critical role to play here, and we're already starting to see it appear. We're seeing this in the European Union with their AI Act, which is an exceptionally broad and comprehensive attempt to put in requirements around AI. We're seeing it in the US with, for example, state laws, which come in at varying levels but tend to focus more on factors around discrimination. And we're seeing it among the regulatory agencies as well, including within the financial industry, where they're starting to release guidance on the use of AI, often focused on things like impacts on models, discrimination risk and so on.

Emily Rudolph:

So there are a couple of different roles that regulatory bodies are already playing, and I think it's also a good example of how that role can vary for different bodies.

Emily Rudolph:

What you typically have, at least here in the US, is legislation setting a very high-level requirement.

Emily Rudolph:

You have to have certain processes in place, whether that's an analysis or the ability to opt out.

Emily Rudolph:

Then you also see the regulatory agencies take this to the next level and start incorporating guidance for a particular risk, which gives companies much more concrete details about what they should be doing, particularly in an area that's as fast-moving and complicated as this one, where we're already seeing guidance coming down to things like model drift, or even taking a look at things like challenger models. That ends up being both very helpful and also definitely makes implementation more complicated. As for whether the current regulations are adequate or whether we need stronger measures:

Emily Rudolph:

I think, in general, we need smarter measures and a bit more comprehensive ones. To me, this isn't a fundamentally different problem from a risk perspective than what we've seen before, and from the regulatory perspective it's all part of the same moving target that we see across security and privacy regulations, for example, where updates are constantly required as the landscape changes. So on that front, I think we're going to see more of the same: as the risks increase and adoption increases, we will continue to see an increasingly regulated landscape.

Kayleigh Bateman:

Yes, definitely. I love the fact that you phrased it as companies making that move from, should we be doing something, to, we have to be doing something. And you're absolutely right that that will continue to move as the industry moves at pace. I know there has been some panic, especially in the press recently; they'll have a good headline around AI, but how do we keep up with the pace of change, are businesses at risk, what can companies do, and do we need bodies to step in to make sure that companies have certain frameworks in place and are held to account if things are not quite what they should be? So important, I agree. We are nearly out of time, but I wanted to ask you all a quick question about looking ahead.

Kayleigh Bateman:

I'm going to throw this out to the floor, so please jump in with an answer, one by one. Looking ahead, what do you envision as the future of ethical AI development, and what steps can stakeholders take to ensure that AI technologies are developed and deployed in a responsible manner? I think, starting with: what steps can people take to deploy this in a responsible manner?

Charlotte Byrne:

I think the first one I already touched on, which was establishing an AI-enhanced governance framework. For me, that's absolutely critical and touches on the majority of points that need to be considered in this space. The second one would be to upskill your people. Stephanie mentioned it, but understanding this technology, the opportunities and the risks, is really important, and in this space knowledge is absolutely power. In terms of what I think ethical development will look like in the future, one of the big areas of focus at the moment is explainable AI. Particularly when it comes to Gen AI, that explainability piece is really challenging and it's not really there yet, and I think it will really help build trust and understanding of the technology as it rolls out. So I think that's what a lot of organisations are focusing on.
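
The explainability work Charlotte mentions often starts with simple model-agnostic techniques such as permutation importance: shuffle one input at a time and measure how much the model's accuracy drops. The sketch below is a hypothetical, library-free illustration with an invented toy model and data set; it is not how any particular Gen AI system explains itself.

```python
# Hypothetical permutation-importance sketch: shuffle one feature at a time and see
# how much a toy model's accuracy drops. Model, data and feature names are invented.
import random

random.seed(0)
# toy rows: (income, postcode_risk) -> loan approved (1) or not (0)
data = [((55, 0.2), 1), ((60, 0.1), 1), ((30, 0.8), 0), ((28, 0.9), 0),
        ((52, 0.3), 1), ((25, 0.7), 0), ((48, 0.4), 1), ((33, 0.6), 0)]

def model(features):
    income, postcode_risk = features
    return 1 if income - 40 * postcode_risk > 20 else 0  # stand-in for a trained model

def accuracy(rows):
    return sum(model(x) == y for x, y in rows) / len(rows)

baseline = accuracy(data)
print(f"baseline accuracy = {baseline:.2f}")
for i, name in enumerate(["income", "postcode_risk"]):
    column = [x[i] for x, _ in data]
    random.shuffle(column)
    shuffled = [((column[j], x[1]) if i == 0 else (x[0], column[j]), y)
                for j, (x, y) in enumerate(data)]
    print(f"{name}: accuracy drop when shuffled = {baseline - accuracy(shuffled):.2f}")
```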

Kayleigh Bateman:

Definitely, yes. And Stephanie, would you echo what Charlotte said about upskilling your people so you understand the opportunities and the risks? Is that kind of where you see the future going?

Stephanie Quan:

I do. I think the technology is in exponential growth right now, and to keep up with the pace of change, and to have every voice at the table as part of the representative body deciding what the future of integrated AI in our daily lives looks like, everyone needs to at least spend a couple of hours utilizing some form of AI in their life. We already do it with Google Home or Siri or Alexa, but even things as state-of-the-art as ChatGPT-4 and large language models, to really understand just how powerful they are and how they will augment our lives, because they will, and to be part of that conversation. I think that's the most critical thing: as long as people have the opportunity to just try it and realize how powerful it can be in impacting and augmenting our lives, they can have a say in how they want AI to continue to impact and augment our lives going into the future.

Kayleigh Bateman:

Definitely, yes. Having a say in the future of that, yeah, really important. Emily, yourself, what do you envision for the future?

Emily Rudolph:

So, for the future of development in this area, I think companies are going to have to make sure that they are very closely examining how the AI is developed, what data is being fed in and how the tool evolves over its lifetime, both in terms of what it's actually doing and how users are interacting with it. It's going to require some creativity and flexible ways of thinking to tackle these problems that we've been discussing as they come up. But I also ultimately think it's just going to become another piece of the puzzle, one that comes at you faster, more intensely and with a different threat profile, but companies have had to deal with threats of these classes before.

Emily Rudolph:

I mean, there's nothing preventing a person in the organization from engaging in unethical or discriminatory practices; companies have been dealing with data leakage, shadow IT, IP theft and so on. And, like Charlotte was mentioning earlier, to make your AI governance work, you often have to integrate it with the things the company is already doing. So they need to take a comprehensive look at their controls, their risks and their processes and make sure that AI considerations are incorporated across them, into things like their system development lifecycle, their change management and their privacy-by-design processes where they're developing and deploying AI in-house. It touches things like third-party risk, data loss prevention, IT portfolio management and so on, as more and more vendors are cramming AI into every tool that's on your system. It's very tricky to navigate that right now with that level of hype and interest, both from the vendor side and from the employee side. So a lot of this is going to fall to companies making sure that their processes are robust and comprehensive enough to be up to the task.

Kayleigh Bateman:

Yes, definitely. A lot is going to rely on organizations in the future, and you're right, if they're using AI in all aspects of their business, they need to be mindful of how they're going to integrate that and what it means for the business and for the employees. I couldn't agree more with what you said. We're nearly out of time, but I just wanted to pick your brains quickly: did any of you think when you were younger that you would be working not only in tech but in AI and machine learning? Charlotte, you mentioned that people who knew you from school would be surprised you're in this area. Emily, Stephanie, did you?

Kayleigh Bateman:

Did you know? Would you ever have dreamed that you'd have such a cool role at Capco?

Stephanie Quan:

No, the opportunity even to study AI wasn't on the table when I was in high school, and that's going back many, many years. So even the opportunities to learn and develop these skills in higher education now, yeah, it's changing a lot, and I think that perhaps also reflects the need, in schools today and just generally, for opportunities for people to learn about AI, what it really is and how it can impact your life, just to become an informed citizen and make informed decisions about how you want AI to augment your life.

Emily Rudolph:

Yeah, for me, no. I have ended up in a wildly different position than I would have ever thought as a child. As a kid I would have told you I was going to be a musician. But even when I was in law school, I wouldn't have imagined that I was going to go this far onto the tech side of things, where it's moving further and further away from typical legal practice and more and more into the operational and technical details of this stuff. But I'm loving where I'm at now, so it's not so bad.

Kayleigh Bateman:

Yeah, Charlotte, you said something similar, that you didn't envision this as a child. And also, you don't know those options are available to you until you start moving into the workplace. And Emily, you said you wanted to be a musician; I wanted to be a vampire slayer, and that didn't happen. So you never know what's going to happen as you grow up, but we all find ourselves in the tech industry somehow. Thank you all so much for coming on here and chatting about AI and machine learning with me today. It's been an absolute pleasure chatting with all of you. So thank you all so much for coming on and spilling the tea.

Emily Rudolph:

Thank you so much for having us.

Kayleigh Bateman:

And to everybody listening, as always. Thank you so much for joining us and we hope to see you again next time.
