65% of execs can’t explain how their AI models make decisions, survey finds

Despite increasing demand for and use of AI tools, 65% of companies can’t explain how their AI models arrive at decisions or predictions. That’s according to a new survey from global analytics firm FICO and research firm Corinium, which polled 100 C-level analytics and data executives to understand how organizations are deploying AI and whether they’re ensuring it is used ethically.

“Over the past 15 months, more and more businesses have been investing in AI tools, but have not elevated the importance of AI governance and responsible AI to the boardroom level,” FICO chief analytics officer Scott Zoldi said in a press release. “Organizations are increasingly leveraging AI to automate key processes that — in some cases — are making life-altering decisions for their customers and stakeholders. Senior leadership and boards must understand and enforce auditable, immutable AI model governance and product model monitoring to ensure that the decisions are accountable, fair, transparent, and responsible.”

The study, which was commissioned by FICO and conducted by Corinium, found that 33% of executive teams have an incomplete understanding of AI ethics. While IT, analytics, and compliance staff have the highest awareness, understanding across organizations remains patchy. As a result, there are significant barriers to building support: 73% of stakeholders say they’ve struggled to get executive backing for responsible AI practices.

Implementing AI responsibly means different things to different companies. For some, “responsible” implies adopting AI in a manner that’s ethical, transparent, and accountable. For others, it means ensuring that their use of AI remains consistent with laws, regulations, norms, customer expectations, and organizational values. In any case, “responsible AI” promises to guard against the use of biased data or algorithms, providing an assurance that automated decisions are justified and explainable — at least in theory.

According to Corinium and FICO, while almost half (49%) of respondents report an increase in resources allocated to AI projects over the past year, only 39% say they’ve prioritized AI governance and just 28% say the same of model monitoring or maintenance. Potentially contributing to the ethics gap is a lack of consensus among executives about what a company’s responsibilities should be when it comes to AI. A majority of companies (55%) agree that AI used for data ingestion must meet basic ethical standards and that systems used for back-office operations must be explainable. But 43% say they have no responsibilities beyond regulatory compliance when managing AI systems whose decisions might indirectly affect people’s livelihoods.

Turning the tide

What can enterprises do to embrace responsible AI? Combating bias is an important step, but only 38% of companies say that they have bias mitigation steps built into their model development processes. In fact, only a fifth of respondents (20%) to the Corinium and FICO survey actively monitor their models in production for fairness and ethics, while just one in three (33%) have a model validation team to assess newly developed models.
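The survey doesn’t describe how the 20% of respondents who monitor production models for fairness actually do it, but as a rough sketch, a monitoring job might periodically compare favorable-outcome rates across demographic groups in recently scored traffic and alert when the gap widens. Everything below, including the group labels, the batch format, and the 0.10 alert threshold, is a hypothetical illustration rather than anything described in the report:

```python
# Illustrative sketch: a periodic fairness check on a batch of production
# decisions. Group names, data, and the 0.10 threshold are hypothetical.
from collections import defaultdict

def demographic_parity_gap(records):
    """Largest difference in favorable-outcome rate between any two groups.

    `records` is an iterable of (group, approved) pairs, where `approved`
    is True if the model's decision was favorable.
    """
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for group, approved in records:
        totals[group] += 1
        approvals[group] += int(approved)
    rates = {g: approvals[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Example batch of recently scored decisions (made-up data).
batch = [("group_a", True), ("group_a", True), ("group_a", False),
         ("group_b", True), ("group_b", False), ("group_b", False)]

gap, rates = demographic_parity_gap(batch)
if gap > 0.10:  # alert threshold is an assumption, tuned per use case
    print(f"Fairness alert: approval-rate gap {gap:.2f} across groups {rates}")
```

In practice, teams that do this tend to track several such metrics per protected attribute, alongside accuracy and drift statistics, rather than relying on a single number.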

The findings agree with a recent Boston Consulting Group survey of 1,000 enterprises, which found that fewer than half of those achieving AI at scale had fully mature, “responsible” AI implementations. The lagging adoption of responsible AI belies the value these practices can deliver. A study by Capgemini found that customers and employees will reward organizations that practice ethical AI with greater loyalty, more business, and even a willingness to advocate for them, and will punish those that don’t.

To their credit, businesses appear to understand the value of evaluating the fairness of model outcomes, with 59% of survey respondents saying they do so to detect model bias. Additionally, 55% say they isolate and assess latent model features for bias, and half (50%) say they have a codified mathematical definition of data bias and actively check for bias in unstructured data sources.
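The report doesn’t specify which mathematical definitions respondents have codified, but one widely used example is the disparate impact ratio behind the “four-fifths rule.” The sketch below, including the outcome rates, is purely illustrative:

```python
# Illustrative sketch of one commonly codified bias definition: the
# disparate impact ratio. The survey does not say which definitions
# respondents actually use; the rates below are made up.

def disparate_impact_ratio(rate_protected: float, rate_reference: float) -> float:
    """Ratio of the favorable-outcome rate for a protected group to that
    of a reference group. Values below 0.8 are a common red flag."""
    return rate_protected / rate_reference

# Hypothetical rates: 30% approvals for the protected group vs. 50%
# for the reference group.
ratio = disparate_impact_ratio(0.30, 0.50)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.60, below the 0.8 line
```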

Businesses also recognize that things need to change: the overwhelming majority (90%) agree that inefficient processes for model monitoring represent a barrier to AI adoption. Thankfully, almost two-thirds (63%) of respondents believe that AI ethics and responsible AI will become a core element of their organization’s strategy within two years.

“The business community is committed to driving transformation through AI-powered automation. However, senior leaders and boards need to be aware of the risks associated with the technology and the best practices to proactively mitigate them,” Zoldi added. “AI has the power to transform the world, but as the popular saying goes — with great power comes great responsibility.”
