
This episode originally aired in June 2023.


In this episode, we bring you another instalment in the occasional series where leaders at Scotiabank interview experts on an issue that resonates with them. Today, you'll hear a conversation about data ethics hosted by Grace Lee, Scotiabank’s Senior Vice President, Chief Data and Analytics Officer, with Anton Ovchinnikov, Distinguished Professor at the Smith School of Business at Queen’s University and a Scotiabank Scholar of Customer Analytics. They discuss what exactly data ethics means, why financial services are well positioned to take a leadership role in this emerging field and much more.  

 

Read Anton’s research study on AI and gender bias here.

Other episodes of the Leadership Series:  
The power of allyship in the workplace 
Challenging the narrative around women in farming

Key moments this episode:

1:59 — What exactly is data ethics? 
3:41 — How does the study of data ethics relate to traditional ideas of ethics?
6:30 — Technology seems to be evolving faster than our norms and ethics can keep up. How can we feel comfortable around things like AI?
9:04 — How can people play a positive role in implementing data ethically as more and more people have access?
11:50 — How is ethics around data evolving? 
13:47 — How can people concerned about AI ensure it's doing more good than harm?
15:53 — Why AI’s use ultimately is a business decision
16:52 — What Grace sees as the hardest part about implementing AI in a business
18:58 — Why increased awareness of AI has been helpful
20:03 — What Grace has learned from risk management that she applies to her approach to implementing AI

Transcript:


Stephen Meurice: This week we have another instalment in our occasional Leadership Series, where we hand the mic to leaders at Scotiabank to interview experts on an issue that resonates with them. And today that issue is data ethics. Hosting the conversation will be Grace Lee. Grace is Senior Vice President, Chief Data and Analytics Officer at Scotiabank. She'll be speaking with Anton Ovchinnikov, a distinguished professor at the Smith School of Business at Queen's University and a Scotiabank Scholar of Customer Analytics. Grace and Anton will talk about what exactly data ethics means in this day and age, establishing societal norms around technology like AI, why financial services are well positioned to take a leadership role in this emerging field and much more. I'm Stephen Meurice, and this is Perspectives. Now, here's Grace Lee in conversation with Anton Ovchinnikov.

Grace Lee: Anton, so pleased to have you here to talk about AI and data ethics.

Anton Ovchinnikov: Thank you so much for having me.

GL: What we're talking about today, I think, is very relevant and topical for a lot of our listeners: AI and how we should be thinking about the ethics of using data, especially now that it's exploding the way it has with recent developments like ChatGPT and other things that are hitting the media. I know that you know this, but AI's been around for a really long time, nearly seventy years since we actually started calling it artificial intelligence. And yet, I think every single day we're finding something new within this space. I know that Hollywood's done a really good job of painting a dystopian picture of AI, and there are lots of concerns arising from individuals, researchers, scientists and non-scientists around what AI means and how it's going to impact our future. Certainly, there are some really important ethical considerations to think about. But before we dive into the interesting research that's happening in this space, would you mind describing for our listeners how we are thinking about data ethics as a discipline?

AO: Alright, so what is ethics? The definition of ethics, I believe the translation from ancient Greek is 'the theory of living.' So ethics is a discipline that studies or governs how we live. Are we making good or bad decisions? Right? Ultimately, this is a discipline about decision making. How do we make decisions? Now, why all of a sudden do we talk about data ethics and AI ethics? We talk about that because decisions have historically been made by humans, and therefore ethics was framed in the context of human decision making. As more and more such decisions are made by algorithms, we need to understand how those decisions are made, right? So going back to the theory of living: how do we live in a life where many decisions are made by algorithms? And here I want listeners to understand that this spectrum of societal decisions is enormous. If our listeners wake up in the morning and start scrolling their social media feed, what they see on that feed is a decision made by an algorithm. If they ask, you know, Google Maps how to navigate from A to B, that's a decision made by an algorithm. And then, of course, we can go all the way to numerous other decisions. In a financial institution, there are numerous machine learning models that are responsible for numerous kinds of decisions, right? They impact how people get access to credit. They impact hiring decisions, promotion decisions and numerous others. It is a fact of life that today, in large, sophisticated, data-driven organizations, there are just many decisions that have algorithmic input. And that's why we need to think about those decisions, and that's where the ethics of AI, or data ethics, comes in.

GL: And when you think about data ethics versus more traditional ethics, I don't know that I can find a comparable control system for what you would describe as more traditional, human-judgment-based ethics. How have you found the corollary between how we govern ethics outside of data and the ways we're thinking about governing within a data-based system?

AO: Okay, so let's paint a little bit of a spectrum, right? On one end you've got really bad decisions. These decisions have primarily been made illegal; that is, there is legislation that makes sure those decisions are illegal. Then, over time, through the process of interacting with each other, people have developed numerous norms and expectations about what is acceptable and what good decisions are. That's the opposite end of the spectrum from those that are very bad and downright illegal. Of course, in the middle there are things that are a little bit blurry, right? There are certainly things that would be considered unethical but would nevertheless be legal. And let's admit there are also certain things that would be considered illegal given the current laws of a country or a place at a given time, but would nevertheless be considered ethical in another place, or maybe later on. But the bottom line is that with humans, it's this combination of decision governance that combines laws and various norms that have been developed over centuries, maybe even longer. And now, in this decision-making and interacting system, we have a new type of creature, call it AI, an algorithm or anything else, right, and we need to understand exactly what that is. That's why we oftentimes talk about various legislation related to AI systems as being closely tied to this question of ethics, right? We need governance around those decisions. But we don't have centuries, and we don't have millennia, of this habit formation. And therefore we do find ourselves in situations where things, I would say, go wrong. Now, let me give you a specific example. When people talk about data ethics, oftentimes we say that we want consumers to feel that they have a sense of agency, that they should not be surprised. Right? On one end, you can say, 'Oh, you know, there is a small-font agreement, and in the middle of page 41 there is this particular clause. And look, you signed it, right?' But for all practical purposes, the consumer will say, 'Well, I, of course, did not agree to that.' And that's the norm, right? People interacting with other people in more or less the same communities have historically developed those norms. The fact that something would be hidden from me in the middle of page 41 of a small-font client agreement, for example, is not something those norms evolved around. That is why this comes up as something new. But in an algorithmic world, it could be very important, because that little detail could be what the algorithm is using, and consumers will feel that they have somewhat lost their ability to govern their lives.

GL: Mm hmm. And so we're playing against the clock in a couple of ways here then, because norms and even regulations are slow, and it requires a lot of education. It requires millennia, you say, of experience for us to get the right muscle memory around it. Contrast that with the speed at which AI is continuing to evolve, and, even with AI today, how quickly algorithmic decisions can be made. What can we do to accelerate our ability to feel comfortable, not to control this technology, but to gain some comfort around it?

AO: So, being a professor and researcher, I would, of course, say that we need to start by understanding this a little bit better through research. And in fact, my research team and I have a study on that topic. The title of the study is Anti-Discrimination Laws, AI and Gender Bias.

GL: And I understand that it won an award this morning. Congratulations.

AO: Yes, yes. Thank you so much for mentioning this. It's a great pleasure for us that this study is being recognized. But let's go back to your question and to what this study is actually doing, and observe how nicely you put things together, right. There are anti-discrimination laws currently in place. They were put in place with a situation in mind where people discriminate against other people. Our study talks about laws in the United States, which date to the late 1970s. So think about the situation of the late 70s: people are making decisions that could discriminate against other people. Today, many of these decisions are made not by people but by algorithms, yet the same laws govern those decisions. So what we study in this paper is this: what do these laws, which were designed to protect humans against humans, do when the decisions are made by algorithms? We look specifically at non-mortgage lending situations, and we ask whether the inclusion or exclusion of gender data, which according to those laws is prohibited in some countries, is actually helping or hurting, in our case, women. And we find, surprise, surprise, that in fact those laws hurt the people they're supposed to protect rather than help them. We can go into the details a bit later if we find the time, and I can explain how and why this works. But bringing it back to your main point: indeed, the evolution of these laws is not catching up with the evolution of the technology, which is much, much faster, and that can create unintended situations where something is perfectly legal but would nevertheless be considered downright unethical.

GL: And so when we talk about norms and about what role we all need to play, you know, I think for a time AI and data science were the realm of a few. A few technical people, people like yourself, who have spent a lot of time and garnered a lot of expertise and experience in these technologies. But now that we've democratized so much, and now that we have to establish societal norms around how we deal with it, what would you tell our listeners they should do to educate themselves, and how can they play a positive role in thinking through and implementing the ethical use of data?

AO: Well, it's a tough question, but let me start from a little further back. I would like to highlight a word that you said, and I want our listeners to think about it carefully: the word 'democratized.' What really happened in the last few years, I would say from 2018-ish onward, is that sophisticated machine learning algorithms, systems and tools (and machine learning is what underlies AI, right?) essentially became automated, and therefore they are now accessible to a much, much larger group of people. On one hand, this is good. Why is this good? Because when only a few people deal with data innovations, those few people do not really see all the problems. They are removed from the customers, they are removed from operations, from the business, right? People who work in industry, like you, understand what the real problems are. So we want those kinds of people to be able to use AI and machine learning to address those problems and those opportunities. If only data science people do it, they will never even know those opportunities exist. So on one hand, the big positive is that we need to educate more people about the opportunities that this kind of data-driven decision making, enabled by machine learning and AI systems, provides. That way we will certainly expand the benefit. Now, of course, as more people start using it, we are also expanding the risks. I feel that an organization like yours, in fact, a large financial institution, is in a wonderful position, because financial institutions have been managing risks since the beginning of their existence. Realize that numerous other organizations were not managing risks, and so, in my opinion, it's those organizations that are potentially in bigger trouble, because they don't have the risk management culture that financial services institutions have. So maybe society at large really should be looking at financial services as a risk management model: what can other sectors of the economy, and even non-business organizations, learn from financial services firms like yours?

GL: And I'm just thinking, when we say data ethics, data ethics is so multifaceted. For a time, I think it was mostly related to privacy and to bias. And now that there's so much happening, especially in the generative space, we're looking at patent law and copyright and ownership. So how do you think data ethics will continue to evolve, given the evolution of the technology?

AO: Ah, Grace. All of these are such tough questions. [both laugh] I would say yes, data ethics per se will continue to evolve, but perhaps slowly, because more and more we are in a situation where it's not really the data per se that creates ethical problems, but rather these large models that are trained on the data. And here the interesting thing is this: with that much data to train on, almost every individual data point becomes too small to matter, right? If we remove your data point from a very, very large dataset, what the models learn from that dataset will not really change very much. So in a sense, once these models have learned something, that learning is captured in, well, the technical term is 'weights,' basically the parameters of these models. To put things in perspective, GPT-3.5, the model on which ChatGPT was initially released, had 175 billion parameters, and there are, what, seven or eight billion people in the world, right? So once these parameters have been calculated, the data on which they were calculated becomes relatively irrelevant. The new data that these systems are generating, that's what becomes relevant. As people started interacting with them, we realized that certain outputs are, let's say, unacceptable for various reasons. That now becomes very important data, new data we can use to guide these models one way or another. That's a long answer, but I'm not sure if I responded to even a fraction [both laugh] of what you asked, just because it's such an involved question.

GL: Yeah, and I'm sure we'll figure out the answer over time, but I do think that many people are concerned. That concern might have shape, or it might not have a lot of shape, but I'm sure you constantly encounter the same questions: 'How do we know? Should we not use it? How do we understand whether AI is here for good or for evil?' And I would say that irrespective of what we believe the intent is, AI is here to stay. So understanding that some of these concerns do exist, what do you recommend for individuals? What do you recommend for large institutions as to how they can put systems in place to stand in for some of those norms that are still developing, and ensure that if AI is here to stay, it's doing more good than harm?

AO: I think the first step probably would be to be very cognizant about this, so essentially to ask the questions that you are asking. Because if these questions are not asked, then we would not even be able to see what's going on. Take, for instance, this gender bias study. Realize that in the United States, where financial services institutions are prohibited by that law from collecting gender data, they cannot even properly check whether this discrimination happens in the first place. They just don't know who are men and who are women. I acknowledge that gender is non-binary, but in our data it actually was binary. So our study is done on a dataset from another part of the world, where it is allowed to collect data like that. At the very minimum, that allows us to check how biased or unbiased the systems are. So I would say the first thing is to start by asking these questions. The second thing is to put systems in place where you can answer these questions. And then what you do after that, I would say, is a third question, because organizations have to make a call about what they will and will not do. Of course, some of these calls are made for them by the law, but others are made through the ethical principles and guidelines that organizations adopt. Organizations need to decide where the boundary is and what is okay. And I think that starts with asking questions and having the systems and data to answer them.

GL: Yeah and ultimately, it's a business decision.

AO: Exactly.

GL: It can't be a decision that you ask one data scientist to make on behalf of an organization. It actually has to be established as a management decision and a business decision.

AO: Absolutely. In fact, here I hope our listeners realize that we've come full circle. This democratization can open more opportunities: more people can start adding value and more people can start thinking about this. But democratization also loops the circle the other way: more people can ask these questions, and more people can be engaged in these conversations, which I think at the end of the day allows organizations to make more cognizant decisions.

GL: It's fascinating how much the space has changed in terms of what sorts of questions we were asking versus how much we can now operationalize. We've had standards around what we believe. We've had vision statements for this. But I think the rubber really hits the road once we've actually built that into business process and business strategy, because what I've found is that the hardest part is translating that good intent into process. And that process needs to, at least in our case, reach 90,000-plus employees across the world, with different norms, different standards, different regulations. So some of the work we've done has really been around change management: how do we start to embed some of those questions into the way data scientists work, so that they not only question their own models, but we also bring the business into the discussion? We have a tool we call the Trusted Data Use Tool, which effectively asks a lot of those really basic questions. People who study this field are familiar with them and could rattle those questions off very, very quickly. But a young, aspiring data scientist fresh out of university has never had the scars that we've had from accidentally, and it's almost always accidentally, creating bias in a model. I have seen that our ability to integrate ethics into business process by design has been, I think, the most impactful thing that we've done. Of course we put out statements, of course we have great intent, but being able to codify these beliefs into something, not regulation, but our own policies and procedures, has been very instructive for our data scientists. Typically, when you add bureaucracy or process for a modeller creating an analysis, a report or a model, you get a lot of resistance. We've been very fortunate that within the organization, data and analytics both report to me, so there's a kind of shared incentive model there. But what I've found, and what I've been so grateful for, is that when you expose people to these kinds of ethical questions, they are so interested in doing the right thing. They're actually very purposeful, very collaborative and very proactive in adding to these questions, adding to the process and helping us refine it over time. And I'm really proud of the work the team has done to not only say the right things, but walk the talk too.

AO: So then, would you say that general knowledge, people's familiarity with the possibilities of AI and the opportunities it brings, is helpful?

GL: Oh, it's absolutely helpful. And one of the things I'm seeing more and more is that it's not just a community of practice among data scientists. It's actually working directly with the business to ask, 'Hey, what are some of the friction points in your process? What are some of the opportunities that you see? Where do you think we can embed some of these capabilities?' and to describe, very simply, how that will help us gain an edge or a bit of competitive advantage in a market. And, you know, one of the reasons AI matters so much to us is that we're in a service business. We are meant to provide an excellent customer experience and great advice to all of our customers, but we have millions of customers, again across various jurisdictions and many different cultures. Delivering on that brand promise to every single one of those customers is incredibly difficult.

AO: Now here again, you are the specialist on risk management, not me. So let me ask: is there anything that you learned from risk management that you apply to managing AI risks?

GL: Yeah. You know what, I think 'Do the right thing' is a cultural standard for the banking industry, and specifically at Scotiabank. That is part of the tone from the top. Everybody is responsible for risk management, and I think that has translated really well, whether it be cyber risks, AI and data risks, or credit risks. Everybody feels that level of ownership. Where AI has helped with risk management is that risk management used to be a sampling exercise: take a 5% random sample, hope that the controls prove to be effective, and then move on. With AI, with models, with better data, we can actually get to a 100% sample. We can take those same learnings and build them back into our processes. And I think that's a really nice mutual benefit that we've been able to enjoy by applying AI to risk management, but also by using AI to assess AI. We have many different ways that we are going to apply the very same things you talked about before: asking the right questions, being able to show distributions, asking whether we believe in the decisions we're making. But we can do it at scale in a way that I don't think we would have been able to do before. So the principles we've applied from risk management are: it's everybody's problem; let's use technology to ensure that the technology is safe; and make sure it's part of the process, not just the mechanics of the model building itself, but the process around it. Where did the data come from? How do we feel about the features we're engineering? How do we think about the impacts on our customers and our employees? Are we getting the right people around the table to ensure we're asking those questions, not just in front of a computer, but as part of our business strategy? So lots of parallels there.

AO: Fantastic. Fantastic. So interesting to hear.

GL: Well, Anton, thanks so much for coming here and having a terrific conversation on such a topical subject.

AO: Thank you. Thank you for having me.

SM: You've been listening to Grace Lee, Senior Vice President, Chief Data and Analytics Officer at Scotiabank. She was speaking with Anton Ovchinnikov, a distinguished professor at the Smith School of Business at Queen's University and a Scotiabank Scholar of Customer Analytics. If you liked this chat, then be sure to check out the previous instalments of our Leadership Series. There's been a conversation about allyship as well as an episode about economic resilience for newcomers to Canada. You can find those in our podcast feed and we'll link to them as well in the show notes. The Perspectives Podcast is made by me, Stephen Meurice, Armina Ligaya and our producer Andrew Norton.