Why Responsible AI Practices Are Important to an Organization
Artificial intelligence (AI) comes with big risks and big rewards.
Ignoring the risks comes at a cost: companies clinging to the “move fast, break things” mantra are increasingly becoming cautionary tales. As intelligent technologies rapidly evolve, executives are thinking twice before diving in; 72% say that their organizations will forgo generative AI due to ethical concerns.
Yet, this hesitation brings costs as well; for every dollar invested in AI, there’s an average return of $3.50. Meanwhile, 5% of organizations are generating closer to $8 in return. For most companies, the question is not whether to dive into AI, but how to do so in a way that captures this value while minimizing the risk.
To disregard the risks would be irresponsible. But is it responsible to forgo such impressive returns? Executives are under immense pressure to reconcile these risks and rewards, and they can do so with Responsible AI: a set of AI governance frameworks, policies, and controls designed to monitor and manage AI implementation and use. As companies develop these capabilities, they minimize the risks and unlock the value available in their AI projects.
In this piece, we will highlight three of the most common benefits of AI (worker productivity, personalized customer experience, and enhanced decision-making) and discuss why Responsible AI is necessary to protect the value that these benefits bring to the enterprise.
Worker Productivity: A delicate balance
AI’s potential to turbocharge worker productivity is undeniable; the promise of doing more with less has never been more tangible. Take GitHub Copilot, which helped developers complete coding tasks 55% faster than those working without it. With these productivity gains in mind, executives should unleash automated technologies across the organization, right?
Not so fast: overdependence on automation can backfire, reducing the quality of output and, ironically, necessitating more human intervention. Recent research has demonstrated that an overreliance on automated AI tools like GitHub Copilot can lead to lower-quality work. One study found that code churn (the percentage of lines of code that are reverted or updated less than two weeks after being authored) is projected to double in 2024 compared to its 2021 pre-AI baseline. So while heavily automated work may get done faster, the research suggests it often needs to be redone more frequently as well. If these mistakes are not flagged and corrected, wholesale automation can create far more problems than it solves.
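To make that metric concrete, here is a minimal sketch of how a churn rate like this could be computed. It is an illustration only: the `LineChange` structure is hypothetical, and this is not the study’s actual methodology.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

@dataclass
class LineChange:
    authored_at: datetime            # when the line was first committed
    modified_at: Optional[datetime]  # when it was next edited or reverted, if ever

def churn_rate(lines: list[LineChange], window: timedelta = timedelta(days=14)) -> float:
    """Fraction of lines reverted or updated within `window` of being authored."""
    if not lines:
        return 0.0
    churned = sum(
        1 for line in lines
        if line.modified_at is not None
        and line.modified_at - line.authored_at <= window
    )
    return churned / len(lines)
```

Tracking a rate like this per release surfaces the rework that raw velocity numbers hide.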
Enter Responsible AI: by determining what automated systems can capably handle on their own, and not exceeding those limits, companies get the productivity benefits while limiting the risks. Unilever has determined those limits for its own operations. The company has a well-defined policy for when intelligent technologies require a human in the loop: any decision that would have a significant impact on an individual’s life will not be fully automated and must instead be made by a human.
But Unilever didn’t just create a policy. It developed review and action protocols to ensure that every AI project meets this criterion (and others) before the project gets approved. To put it simply, Responsible AI frameworks like Unilever’s allow employees to unleash AI only in ways that cause minimal damage to what the organization values most. And the proof is in the productivity: when AI is used outside of its capabilities, worker productivity drops 19%; when used within its capabilities, worker productivity improves by as much as 40%.
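To illustrate how a policy like this might translate into an automated approval check, here is a minimal sketch. The decision categories and confidence threshold are our own assumptions, not Unilever’s actual protocol.

```python
from dataclasses import dataclass

# Decision types assumed (for illustration) to have a significant life impact;
# per the policy described above, these are never fully automated.
HIGH_IMPACT_DECISIONS = {"hiring", "termination", "credit_approval", "medical_triage"}

@dataclass
class ProposedAutomation:
    decision_type: str       # e.g. "hiring" or "inventory_reorder"
    model_confidence: float  # model's self-reported confidence, 0.0 to 1.0

def requires_human_in_loop(proposal: ProposedAutomation,
                           min_confidence: float = 0.9) -> bool:
    """Route high-impact or low-confidence decisions to a human reviewer."""
    if proposal.decision_type in HIGH_IMPACT_DECISIONS:
        return True  # significant life impact: a human makes the call
    return proposal.model_confidence < min_confidence

# A hiring decision is always escalated, no matter how confident the model is.
assert requires_human_in_loop(ProposedAutomation("hiring", 0.99))
assert not requires_human_in_loop(ProposedAutomation("inventory_reorder", 0.95))
```

The point is not the specific threshold but that the rule is written down, testable, and enforced before anything ships.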
Customer Experience: Balancing efficiency with the human touch
The dream of the AI-enhanced customer experience is worth pursuing; personalized attention at scale has enormous potential to convert and maintain customers. Yet, the reality often falls short, particularly with the explosion of chatbots that now leave customers begging for human assistance. This gap between automation’s promise and its delivery underscores an unmet requirement of customer-facing AI projects: discernment in automation.
For example, the banking industry relies mostly on basic chatbots that send customers preset, limited responses or route them to FAQ pages. This approach often leaves customers stuck in frustrating loops, as evidenced by the extraordinary number of complaints the Consumer Financial Protection Bureau (CFPB) has received from customers unable to access timely, straightforward answers.
Responsible AI can protect the potential value in the automated customer experience by balancing risks and benefits. For example, automating the huge number of inquiries that banks receive about their customers’ mortgage payments could dramatically cut costs, but providing a wrong answer could upend a customer’s life or result in legal consequences for the bank. By contrast, while few customers call in to check their account balance anymore, answering that question with an automated system is technically simple and presents minimal risk.
This additional layer of foresight allows companies to save big by automating low-risk scenarios while keeping humans in the loop when the potential consequences of full automation outweigh the potential gains. Responsible AI processes that proactively map out these logical flows strike a balance between the drive to cut costs by reducing human staff and the risks of alienating customers and breaking laws.
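Here is a sketch of what that mapping might look like in code, using hypothetical intents for a banking assistant. The taxonomy and routing rules are illustrative assumptions, not any bank’s actual system.

```python
# Intents where a wrong automated answer is cheap to correct.
LOW_RISK_INTENTS = {"account_balance", "branch_hours", "card_activation"}

# Intents where a wrong answer could carry serious financial or legal consequences.
HIGH_RISK_INTENTS = {"mortgage_payment_question", "fraud_claim", "hardship_request"}

def route_inquiry(intent: str) -> str:
    """Automate only where the downside of an error is small; otherwise escalate."""
    if intent in LOW_RISK_INTENTS:
        return "automated_response"
    # High-risk and unrecognized intents both default to the safe path.
    return "human_agent"

print(route_inquiry("account_balance"))            # automated_response
print(route_inquiry("mortgage_payment_question"))  # human_agent
```

Note the design choice: anything the system does not recognize falls through to a human, so new failure modes degrade toward extra cost, not toward customer harm.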
Augmenting Decision Making: Calibrating AI with reality
AI’s potential to revolutionize decision-making is well documented; automated systems can offer insights based on a quantity and diversity of datasets that humans alone could never process. This potential led some companies to create automated systems that not only recommend actions, but actually take them.
Zillow created an algorithm that predicted the value of real estate and then deployed an automated tool that made actual offers on properties the algorithm believed to be sound investments. But when the market cooled, its automated systems never adjusted: Zillow’s algorithms overestimated the value of most of the homes the company purchased, resulting in $500 million in losses across thousands of properties.
Machine learning too often assumes that the past is a reliable predictor of the future. This is a big risk, but it’s addressable with Responsible AI processes. At the most basic level, there are two critical steps that companies must take to reap the rewards of AI-driven decision-making while avoiding a catastrophe like Zillow’s:
- Companies must pre-define criteria for determining when a model is accurate, robust and unbiased enough to deploy;
- Companies must proactively build a process for continuously monitoring the efficacy of a model in the real world.
Reliable deployments need guardrails ensuring that only systems whose predictions remain aligned with reality are given the autonomy to act on their decisions.
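A minimal sketch of both steps follows, with hypothetical error thresholds; the metric and limits are our assumptions, and a real deployment would tune them to its own risk tolerance.

```python
import statistics

DEPLOY_MAX_ERROR = 0.05  # step 1: pre-defined accuracy bar a model must clear to ship
DRIFT_MAX_ERROR = 0.08   # step 2: live error level that triggers human review

def mean_abs_pct_error(predictions: list[float], actuals: list[float]) -> float:
    """Average absolute error of predictions relative to realized outcomes."""
    return statistics.mean(abs(p - a) / a for p, a in zip(predictions, actuals))

def may_deploy(val_preds: list[float], val_actuals: list[float]) -> bool:
    """Step 1: deploy only if the model meets the pre-defined criterion."""
    return mean_abs_pct_error(val_preds, val_actuals) <= DEPLOY_MAX_ERROR

def drift_detected(live_preds: list[float], realized: list[float]) -> bool:
    """Step 2: continuously compare predictions (e.g., offer prices) with
    realized outcomes (e.g., resale prices) and flag widening error."""
    return mean_abs_pct_error(live_preds, realized) > DRIFT_MAX_ERROR
```

In a Zillow-like setting, a check like drift_detected would run on every batch of closed sales, and a sustained breach would pause automated offers pending human review.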
If Zillow had had a process for monitoring model drift, it would have realized that its algorithm was not adjusting to dramatic changes in the market. And as automated systems are trusted to make more decisions in the coming years, establishing rules for when AI should be trusted to act will become increasingly important. According to Gartner’s 2024 strategic technology trends, by 2026, enterprises that apply meaningful Responsible AI controls to AI applications will consume at least 80% less inaccurate or illegitimate information that leads to faulty decision-making. When companies apply these controls, they can put more trust in their systems to seize opportunities that create, rather than destroy, value.
A Calculated Leap into AI
Viewing AI adoption as a binary choice (embracing it with all its risks or avoiding it entirely) will consistently create a gap between your potential and your actual return on AI. Responsible AI offers a systematic approach that closes this gap, enabling companies to assess and address risks methodically. By championing Responsible AI, executives can lead their companies to stop speculating and start making informed decisions about the latest intelligent technologies. And it’s past time to start making these decisions, because the risks and the benefits are too large to ignore.
Need help? We love talking about Responsible AI. As Microsoft’s AI Partner of the Year, Neudesic has been helping clients navigate the dicey waters of Responsible AI and create long-lasting frameworks to guide its implementation and use throughout organizations. Contact us to get started.