A few years ago, a video of people using an automated hand soap dispenser — an otherwise mundane daily activity — went viral. The reason it caught attention: When one person put a hand under the sensor, the soap came out right away. But when a second person did it, the dispenser didn’t work at all.
The only difference was skin colour: the automated dispenser didn’t recognize the person with dark skin. When tested with a white paper towel, soap came out immediately. The video was posted on Twitter by a Facebook employee with the caption, “If you have ever had a problem grasping the importance of diversity in tech and its impact on society, watch this video.”
The so-called “racist dispenser” was likely designed by people with white skin, says Dr. Florian Martin-Bariteau, professor of law at the University of Ottawa. He likes to use this example to describe how technology can inadvertently impact society.
He cites another cautionary tale: Five years ago, a man logged into Google Photos and found that facial recognition software had placed images of him and his friends in an automated album titled “Gorillas.”
“I often say in AI — artificial intelligence — there is nothing intelligent and everything artificial,” he says.
He says algorithms have to be taught to identify differences in skin colour and language, among other things. If not, they will reproduce biases and fail to recognize diversity in our society.
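The failure mode he describes can be sketched in a few lines. The snippet below is purely illustrative, and every detail in it is an assumption rather than anything reported about the actual dispenser: a detection threshold calibrated only on high-reflectance readings (one group's data) detects every hand in the calibration group and none from the group that was absent when the threshold was set.

```python
import statistics

# Hypothetical sketch: a trigger threshold "learned" from one group's
# sensor readings alone. All numbers are invented for illustration.
group_a_readings = [0.75, 0.80, 0.85, 0.78]  # well represented in calibration
group_b_readings = [0.25, 0.30, 0.35, 0.28]  # absent from calibration

# Calibration uses only group A: set the trigger a margin below its mean.
threshold = statistics.mean(group_a_readings) - 0.3

detected_a = [r > threshold for r in group_a_readings]
detected_b = [r > threshold for r in group_b_readings]

print(sum(detected_a), "of", len(detected_a), "group-A hands detected")  # 4 of 4
print(sum(detected_b), "of", len(detected_b), "group-B hands detected")  # 0 of 4
```

The point of the sketch is that nothing in the calibration step is malicious; the exclusion happens simply because one group contributed no data.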
Dr. Martin-Bariteau and a team of researchers will examine these challenging issues as part of a new AI & Society Initiative at the University of Ottawa. The program will be supported with a $750,000 donation over four years from Scotiabank that was announced this week.
The university, already strong in interdisciplinary research and AI, aims to become a national and global leader on AI ethics and to develop the first research network of its kind in the country.
“While there are hubs across the globe, there is no hub in Canada,” says Dr. Martin-Bariteau.
The university will gather research from around the world and delve into some of the most complex and critical issues facing businesses, governments and end users of AI.
The University of Ottawa team will be led by Dr. Martin-Bariteau, a law professor and director of the university’s Centre for Law, Technology and Society (CLTS). His past research has focused on technology and intellectual property law, with a special interest in blockchain, AI and cybersecurity.
His group will operate under the CLTS and bring together disciplines across the university — from medicine, law and engineering to arts and social sciences — to look at the impact of AI on youth, women, Indigenous Peoples, LGBT+ people and minorities.
AI can impact Indigenous and Francophone communities because of language differences. In the case of women, many diagnoses and drugs, particularly older ones, were developed based on data from men, so there is already bias in some of the data that could be used to develop AI in health care.
Dr. Martin-Bariteau and his team will examine privacy issues around AI, including the use of data, and will work with industry and government to help inform policy, regulations and frameworks.
The university will create two postdoctoral fellowships: one on AI and inclusion and one on AI and regulation. It will send students and professors to centres of excellence in Israel, Brazil and Mexico to learn best practices. It will also run workshops and host international speakers.
It will conduct its own research, as well as create a network of experts from across the world.
“We are in the midst of profound disruption,” says Lora Paglia, Senior Vice President of Global Risk Management, Analytics at Scotiabank. “In order to futureproof the Bank, it’s critical to plan for the skills of the future, which begins with investments in students and research in this burgeoning area of applied ethics in technology.”
Paglia added: “On this journey towards ethical AI, there is a unique opportunity for the industry to work with regulators in formulating global governance policies and standards.”
AI is already changing how we drive cars, bank and shop and is having an impact on health care, agriculture, immigration and the justice system. It has many benefits, but also comes with risks.
Jason Millar, one of the professors who will be part of the university’s research team, will study the ethics of AI and driverless cars with the aim of working with industry on designing ethical frameworks.
As an engineer and philosopher, he’s been studying the design, ethics and governance of robotics and AI, including driverless cars, unmanned aerial vehicles and so-called “social robots” that interact with humans. In 2015, he was invited to the United Nations to give expert testimony on lethal autonomous weapons (also known as robotic weapons).
There are a number of questions that motivate his work, including with driverless cars: What types of decision-making can we delegate to machines, and how can we design ethical decision-making algorithms?
Two years ago, a 49-year-old woman was walking her bike at night in Arizona when she was killed by a self-driving Uber. There was a human being behind the wheel at the time, but the automated system was fully in control.
It was the first death of a pedestrian by a driverless car. The case was settled privately with the woman’s family but it led to broader questions about the ethics of AI.
As Harvard Magazine explained in an article last year: “What moral obligations did the system’s programmers have to prevent their creation from taking a human life? And who was responsible for [the woman’s] death? The person in the driver’s seat? The company testing the car’s capabilities? The designers of the AI system, or even the manufacturers of its onboard sensory equipment?”
It’s highly complex, as AI involves systems designed to take cues from the environment — based on those inputs and data, machines solve problems, assess risks, make predictions and act. All of which raises much bigger questions, says Dr. Martin-Bariteau.
“Should the system make the decision?” he asks. “Can we, as a society, accept the fully automated vehicle? We don’t know.”
The idea behind the university’s new initiative, he says, is to jump into these types of issues and “look at Canada but in the global [context] to learn from best practices — and how Canada could play a leadership role in the world.”