The rise of artificial intelligence marks a transformative era across multiple industries, from financial services and healthcare to media, manufacturing, and beyond. With most companies experimenting, piloting, or implementing AI, this technology’s impact will be undeniably profound.

AI has enabled faster data-driven decision-making, enhanced customer experiences, higher employee productivity and satisfaction, and increased revenue. PricewaterhouseCoopers forecasts that AI could boost global GDP by 14% by 2030 — that’s $15.7 trillion USD.

But the journey towards AI integration is not without its challenges, which range from privacy, liability, and high-quality data availability to ethics, regulations, and bias. As we are in the nascent stages of AI evolution, “Responsible AI” has emerged as critical to unlocking AI's value while mitigating its risks and limitations.

What is Responsible AI?

Responsible AI is a governance framework to guide an organization's creation and use of AI technologies in ways that promote business value while minimizing potential harms.

Typical Responsible AI frameworks minimize harm and avoid ethical dilemmas, but driving business value and a cultural shift toward innovation should also be part of a company’s framework. Those embracing that kind of Responsible AI aren’t just making a moral choice, but are also positioning themselves for strategic advantage now and in the future.

Core Principles of Responsible AI

We view the core principles that guide Responsible AI to be accountability, reliability, inclusion, fairness, transparency, privacy, sustainability, and governance. These principles are interconnected and help align AI with regulations and policies, as well as with societal and company values. Each contributes to creating AI systems that are better for organizations, their constituents, and their shareholders, including economically. It’s important to note that Responsible AI frameworks are highly specific to each company, shaped by its legal and regulatory environment, values, and organizational culture. We at Neudesic believe the following principles are the right ones for all organizations, but crafting and applying them to an individual organization requires careful, detailed work.

Let’s look at how we at Neudesic think of each one.

Accountability in Responsible AI

Because AI-infused use cases require a multitude of people to bring them to fruition, Accountability must begin at the top of the organization and include everyone involved. From executives who select and approve use cases, to data engineers who prepare training data, to data scientists who craft the algorithms, all are obligated to disclose and be held accountable for their choices and actions, both to their leadership and to those impacted by an AI system’s use. Accountability also includes a commitment to continuous improvement, or even to deprecating the system if issues can’t be adequately addressed.

This is the foundational principle for Responsible AI. The more accountability each person involved assumes, the stronger the other principles become, and the better the outcomes the AI system produces, particularly when it comes to value.

In the case of the self-driving Uber car that struck and killed an Arizona pedestrian, the National Transportation Safety Board determined that the AI model was incapable of classifying an object as a pedestrian when the person was not in or near a crosswalk. A human driver would almost certainly have recognized and avoided the pedestrian, but Uber’s backup driver was watching a video instead of performing their assigned task. The event was widely covered by the media, in part because of how avoidable it was.

Reliability in Responsible AI

In the context of Responsible AI, AI systems must behave consistently in their operations and outcomes, regardless of changes in their environment or attempts to exploit their vulnerabilities.

Reliability encompasses several key areas:

  • understanding and planning for edge cases,
  • tracking and adapting to drift in use cases or data (see the sketch after this list),
  • and preparing for potential attacks and system obsolescence.
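To make the drift point concrete, here is a minimal sketch of how a team might monitor a single feature for data drift by comparing production inputs against the training baseline with a two-sample Kolmogorov-Smirnov test. The feature, the threshold, and the alerting step are illustrative assumptions, not a prescribed implementation.

```python
# Minimal data-drift check (illustrative; the feature name and threshold are assumptions).
import numpy as np
from scipy.stats import ks_2samp

def drift_detected(baseline: np.ndarray, live: np.ndarray, alpha: float = 0.01) -> bool:
    """Flag drift when the live distribution differs significantly from the baseline."""
    _statistic, p_value = ks_2samp(baseline, live)
    return p_value < alpha

# Hypothetical example: an income feature shifts upward after deployment.
rng = np.random.default_rng(42)
training_income = rng.normal(loc=60_000, scale=15_000, size=10_000)
production_income = rng.normal(loc=72_000, scale=18_000, size=2_000)

if drift_detected(training_income, production_income):
    print("Drift detected: retrain, recalibrate, or escalate to the accountable owner.")
```

Checks like this run on a schedule against live traffic, so the team learns about drift before users, customers, or regulators do.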

AI use cases can have far more impact than typical technology would – think of mortgage loan approvals, autonomous vehicles, or medical chatbots. An unreliable AI system can erode trust, hurt users or society at large (depending on the use case), and even damage a company’s brand or reputation.

After a customer looking for parcel-delivery information on the delivery company’s website chatbot couldn’t get anything useful, he decided to explore the chatbot’s functions for his own amusement. He got the chatbot to tell a joke, write a poem, swear, and disparage the very delivery company it represented. Because this type of edge case hadn’t been considered, the system update that introduced the vulnerability was deployed without detection.

Inclusion in Responsible AI

Inclusion aims to improve usability and outcomes by ensuring a variety of racial, cultural, and experiential viewpoints are considered. This also extends to testing and refinement in real-world settings. Inclusion is critical to creating AI technologies and use cases that best meet the needs of a system’s constituents, and it's also an important step in ensuring Fairness.

Inclusion involves incorporating perspectives from a wide range of stakeholders, such as experts in ethics and diversity and the communities directly and indirectly impacted by the AI system, throughout the system’s lifecycle and particularly during its early research stages. This principle helps prevent AI applications from perpetuating existing societal biases.

Google developed an AI system to detect diabetic retinopathy to help the Thailand Ministry of Health meet its screening goal of 60%. The existing process took 10 weeks and required more retinal specialists than were available in the country. The AI system worked well in lab testing — 90% accuracy with results in just 10 minutes — but rejected about 20% of all scans when used by practitioners in real-world environments. The problem? The deep-learning algorithm was developed using high-quality image scans and was programmed to discard any that didn’t meet a specific quality standard. Because nurses often scanned many patients within an hour, frequently in suboptimal lighting, the system turned away more than 20% of the images. The team is collaborating with experts to improve results, but involving the system’s users from the start would have surfaced and resolved these issues much earlier.

Fairness in Responsible AI

Fairness ensures that the outcomes of AI systems are equitable and do not discriminate against any group, particularly those most vulnerable to the harms AI can cause. This requires careful design, implementation, and use of AI components to promote equity and reduce bias. Incorporating fairness into a Responsible AI framework takes a comprehensive approach: making sure the Inclusion principle is working well, scrutinizing training data for biases, continually testing for unfair outcomes, and modifying models to address the biases found. Because this principle demands real technical know-how, sharing best practices, tools, and datasets with the broader community, and evolving strategies for the unique challenges AI systems present, helps improve all AI systems while growing this important skillset.
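One way to operationalize the “continually testing for unfair outcomes” step is a simple group-fairness check. The sketch below computes a disparate impact ratio across groups and flags results below the commonly cited four-fifths threshold; the column names, data, and threshold are illustrative assumptions rather than a complete fairness audit.

```python
# Minimal disparate-impact check (illustrative; columns, data, and the 0.8 threshold are assumptions).
import pandas as pd

def disparate_impact_ratio(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Ratio of the lowest group's favorable-outcome rate to the highest group's."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates.min() / rates.max()

# Hypothetical loan-approval predictions for two applicant groups.
predictions = pd.DataFrame({
    "group":    ["A"] * 100 + ["B"] * 100,
    "approved": [1] * 70 + [0] * 30 + [1] * 50 + [0] * 50,
})

ratio = disparate_impact_ratio(predictions, "group", "approved")
if ratio < 0.8:
    print(f"Potential disparate impact (ratio = {ratio:.2f}); review the data and model.")
```

A single metric never settles the question, but tracking a few such metrics over time makes unfair outcomes visible rather than anecdotal.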

An AI-powered tool used by child protective services was designed to predict which children were likely to be placed in foster care after a family was investigated. The tool itself came under investigation for disproportionately flagging families with members who have mental or physical disabilities.

Transparency in Responsible AI

At Neudesic, we view Transparency as a two-sided principle, pairing the disclosure elements of transparency with explainability. Tracking and understanding how AI systems behave, including how they were created, their limitations, and their capabilities, must be combined with appropriately disclosing relevant parts of that information to those impacted by these systems and to other stakeholders. Because the potential value of an AI system can be quickly eroded by a loss of trust, carefully consider how much disclosure and what kind of explanation each stakeholder group should receive.

The Cambridge Analytica-Facebook scandal of 2018 is a stark example of a lack of transparency. In this case, Facebook user data was exploited for psychographic profiling aimed at swaying voters, which ultimately led the Federal Trade Commission to impose a $5 billion fine on Facebook. The lack of clarity and openness about how Cambridge Analytica and Facebook were using people’s data in their algorithms did major reputational harm to Facebook and caused Cambridge Analytica to cease operations, underscoring the need for transparency in Responsible AI.

Privacy in Responsible AI

Privacy in Responsible AI requires judicious and ethical management of personal data, ensuring that it is collected, stored, and used in a way that respects individual rights and complies with legal standards. Many states are establishing comprehensive privacy laws that will require some level of the following: collecting only necessary data, ensuring its quality and representativeness, maintaining transparency in data collection practices, and implementing robust security measures to protect that data. Beyond future or existing legislation, protecting people’s private data is also an important part of ensuring fairness and trust in AI systems.
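As one small illustration of these practices, the sketch below pseudonymizes a direct identifier with a keyed hash before a record is used for analytics or model training. The field names and the keyed-hash approach are assumptions for illustration; real systems should follow the requirements of their legal, security, and privacy teams.

```python
# Minimal pseudonymization sketch (illustrative; field names and key handling are assumptions).
import hashlib
import hmac

SECRET_KEY = b"load-from-a-secrets-manager-not-source-code"

def pseudonymize(value: str) -> str:
    """Replace a direct identifier with a stable, keyed, irreversible token."""
    return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()

record = {"patient_id": "123-45-6789", "age": 54, "scan_quality": "high"}
safe_record = {**record, "patient_id": pseudonymize(record["patient_id"])}
print(safe_record)  # the identifier is now a token; the analytically useful fields remain
```

Pseudonymization is not full anonymization, but it keeps raw identifiers out of training pipelines, logs, and downstream systems.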

Google took privacy seriously in its training of an AI-powered breast cancer detection system. Using anonymized mammograms, the system showed a reduction in false negatives and positives, all while keeping personally identifiable information (PII) out of the picture.

Sustainability in Responsible AI

Neudesic views Sustainability as the creation and operation of AI systems that minimize negative impacts not only on the environment but also on the people who build and run these systems, while maximizing the value generated. This involves balancing costs, including environmental impact and human effort, against business value. Sustainable practices include using high-quality data to reduce the quantity of data and the labor needed for training, selecting the most efficient models that meet the requirements, and strategically timing the operation of AI systems to align with off-peak – and typically cheaper – energy periods. It also covers other use case and architectural decisions that contain costs, and when and how human intervention is needed in the various phases of the system’s lifecycle.
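As a small illustration of the off-peak timing idea, the sketch below defers non-urgent batch inference until an assumed off-peak window; the window, the notion of “urgent,” and the scheduling interface are illustrative assumptions, not a real scheduler API.

```python
# Minimal off-peak scheduling sketch (illustrative; the window and job model are assumptions).
from datetime import datetime, time

OFF_PEAK_START = time(22, 0)  # assumed cheaper-energy window: 10 PM to 6 AM local time
OFF_PEAK_END = time(6, 0)

def is_off_peak(now: datetime) -> bool:
    t = now.time()
    return t >= OFF_PEAK_START or t < OFF_PEAK_END

def should_run_batch_job(now: datetime, urgent: bool) -> bool:
    """Run urgent jobs immediately; hold routine jobs for off-peak hours."""
    return urgent or is_off_peak(now)

print(should_run_batch_job(datetime(2024, 5, 1, 14, 30), urgent=False))  # False: defer
print(should_run_batch_job(datetime(2024, 5, 1, 23, 15), urgent=False))  # True: run now
```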

Consider that many makers of foundation models employ thousands of workers from the Global South to label harmful material as part of the model training process. These workers are often paid well below prevailing wages in their countries and handle traumatizing content with little support. Sustainability should account for the total cost (environmental, human, economic) of an AI system and help balance those costs throughout the system’s lifecycle, from inception to deprecation, not as a mere afterthought.

Governance in Responsible AI

Governance in Responsible AI is the vehicle for deploying and maintaining the Responsible AI framework, starting by aligning it with the organization’s values. Well-crafted policies and thoughtful, inclusively staffed governing bodies and ancillary panels help ensure adherence to the established Responsible AI principles, controlling risks while also fostering collaboration and innovation. Without thoughtful governance, we end up with medical chatbots recommending suicide, or objectively bad use cases like an AI designed to be a power-hungry megalomaniac.

As AI is poised to significantly impact global GDP and revolutionize multiple industries, it calls for a conscientious framework to ensure its benefits are maximized while its risks are minimized. At the organizational level, a good Responsible AI framework can help avoid blunders that create economic, reputational, or even existential harm. More importantly, Responsible AI can help companies position themselves well ahead of their peers, reaping a variety of other benefits in the process.