Why transparent, interpretable, and unbiased AI is more crucial than ever (VB Live)


Presented by Defined.ai


What does it mean to build responsible, ethical AI? What government policies will shape the future of AI? Join Intel’s Melvin Greer, IBM’s Noelle Silver, and Defined.ai’s Daniela Braga as they discuss how we can ensure that our AI future is a just one, in this VB Live event.

Register right here for free.


Artificial intelligence use cases are proliferating, from business applications to more and more facets of day-to-day living. And as awareness of AI grows, so do justifiable concerns about the fairness and power of machine learning algorithms, and about the effects of AI on privacy, speech, and autonomy. In the private sector, businesses must grapple with how to develop and deploy ethical AI, while in the public sphere, government policy is being crafted to ensure safe and fair AI use.

What does responsible and ethical AI look like? “Ethical” is a subjective term, says Noelle Silver, Partner, AI and Analytics at IBM, while responsibility, or being accountable for your choices, is essentially doing the right thing when it comes to implementing software.

“It’s less about what you perceive as right or wrong, and more about how you’re going to be held accountable for the outcomes of the things you build,” Silver says. “I feel like every company can move in that direction, regardless of where they are on the spectrum of ethical-ness in their AI.”

Being accountable for the outcomes is important, agrees Melvin Greer, Intel Fellow and Chief Data Scientist, Americas, but he points out that it’s not about whether the system is biased or fair, but rather whether it does what is claimed. The importance of transparency in data sets and in testing and evaluation can’t be overstated. As part of that, the focus is often on human factors, such as participatory design techniques, multi-state coding approaches, and human-in-the-loop test methods, rather than the bigger picture.

“None of these really are a panacea against the bias that’s part of a broader socio-technical perspective that connects these AI systems to societal values,” Greer says. “And I think this is where experts in the area of responsible AI really want to focus to successfully manage the risks of AI bias, so that we create not only a system that is doing something that is claimed, but doing something in the context of a broader perspective that recognizes societal norms and morals.”

He goes on to explain the broad, sometimes unintended, consequences of failing to put the necessary guardrails in place.

As Greer explains, “It could decide where we go to school, who we might marry, if we can get jobs, where we’ll live, what health care we get, what access to food we’ll have, what access to capital we’ll have. The risks are high, and they require a serious evaluation of the way that we implement them.”

The imperative for ethical guardrails

Unfortunately, many of the data scientists and business unit experts who are in the position to design, build, and implement machine learning models or algorithms are not ethicists by trade. They generally didn’t study ethics in school, or have the opportunity to learn about the concept of questioning in product design. They don’t know what questions to ask, or can’t identify what they can be held accountable for in terms of the performance or intention of their models, and the data that’s being used to train them, Silver says. And employees lower in the business hierarchy tend to assume that these ethics questions are above their pay grade.

“With every line of business now leveraging AI, we need to each take responsibility for understanding and finding a defense for why we’re using this technology and what the scope of that use is and how we’re collecting the data that creates those predictions,” she says.

Greer also points out that every person develops their own idea of what is ethical or unethical. And people building AI systems imbue those systems with their own view of ethics and ethical behavior, which may or may not align with the societal practices or values we want to propagate.

It’s critical to start pulling in more people from the social sciences, Silver says, and critical that data scientists start thinking about the human dynamic in their relationship with AI, so they don’t end up building something that hurts a person.

“That’s ultimately the biggest failure, building an AI that infringes on someone’s rights, hurts someone’s ability to do something that they would have had a right to do, but your AI models inadvertently decided against it,” she says. “That’s something most companies are battling with, how to do that well.”

Implementing responsible and ethical AI

To start on the path to ethical AI, an organization needs an AI manifesto, Silver says. Leaders need to understand what it means to be a data-driven business, and then set the intention to build it responsibly. An AI solution needs to be built with transparency and interpretability, so that someone who isn’t a data scientist can understand how the models operate.
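That kind of interpretability can be made concrete with even simple tooling. The sketch below is one illustrative approach under assumed conditions, not a method endorsed by the speakers: it assumes a scikit-learn classifier on placeholder data and uses permutation importance to produce a plain-language ranking of which inputs drive the model’s predictions.

```python
# Minimal interpretability sketch (illustrative only): the dataset and
# model here are placeholders, not anything referenced in the article.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Toy data standing in for a real business dataset.
X, y = make_classification(n_samples=1000, n_features=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance asks: how much does held-out accuracy drop when
# each feature is shuffled? The result is a ranking a non-specialist can read.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: accuracy drop when shuffled = {score:.3f}")
```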

A focus on privacy is also essential, especially when building the right data sets. It’s expensive to do that responsibly, Silver says, and it’s expensive to make sure that every constituency is represented, or at least empathically noted, in your training data. It’s where a lot of organizations struggle — but it’s worth it, as it ensures that the software is fair and equitable and avoids potential setbacks or even company catastrophes, Silver emphasizes. Ethical AI also requires a feedback loop, so that anyone working on the models can raise their hand to flag any issues or concerns.
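One illustrative way to act on that, sketched here as an assumption rather than a practice attributed to the speakers, is to audit how each constituency shows up in the training data and how often it receives the favorable outcome before any model is trained. The column names below ("group", "approved") are hypothetical.

```python
# Minimal representation-and-outcome audit sketch, assuming a pandas
# DataFrame with hypothetical "group" and "approved" columns.
import pandas as pd

def representation_report(df: pd.DataFrame, group_col: str = "group",
                          outcome_col: str = "approved") -> pd.DataFrame:
    """Show each group's share of the data and its favorable-outcome rate,
    so under-representation or outcome gaps can be flagged early."""
    report = df.groupby(group_col).agg(
        share_of_data=(outcome_col, "size"),
        favorable_rate=(outcome_col, "mean"),
    )
    report["share_of_data"] = report["share_of_data"] / len(df)
    return report

# Example usage with made-up data.
df = pd.DataFrame({
    "group": ["a", "a", "b", "b", "b", "c"],
    "approved": [1, 0, 1, 1, 1, 0],
})
print(representation_report(df))
```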

There’s also a need to go beyond the technical machinery of transparency and bias removal and drill down into how the systems are being created and what impact they will have on society, even when, on the surface, they’re good at what they do. For instance, algorithms for crime prevention and prediction have been relatively successful in helping law enforcement; at the same time, the way those algorithms are implemented has had a disproportionately negative impact on some communities.

“While as a data scientist I can tell you I’m bullish on AI and the prospects of using it for good, the fact is that because it is so focused and capable of rippling through our broader society, when it doesn’t work the way we want it to, the scale of the damage and the speed with which it can be perpetuated across the entire society are very vast and very impactful,” Greer cautions.

For more on how AI is being used for good, how to be a part of the broader efforts toward responsible and ethical AI, and where these efforts are leading companies, organizations, and society at large, don’t miss this VB Live event.


Don’t miss out!

Register for free here.


Attendees will learn:

  • How to keep bias out of data to ensure fair and ethical AI 
  • How interpretable AI aids transparency and reduces business liability
  • How impending government regulation will change how we design and implement AI
  • How early adoption of ethical AI practices will help you get ahead of compliance issues and costs

Speakers: 

  • Noelle Silver, Partner, AI and Analytics, IBM
  • Melvin Greer, Intel Fellow and Chief Data Scientist, Americas
  • Daniela Braga, Founder and CEO, Defined.ai 
  • Chris J. Preimesberger, Moderator, VentureBeat
