How business leaders can build confidence in AI
Written by Virginia Ghiara, Principal Researcher AI Ethics, Fujitsu Research of America
Board executives worldwide are grappling with a new challenge that came out of left field barely 12 months ago – the rise of generative AI.
There’s little doubt that the future changed in November 2022, when the public launch of ChatGPT revealed the seemingly unlimited possibilities of generative AI. What’s now emerging is a growing common understanding of the challenges – the consequences, uncertainties and risks to people, society and businesses.
Indeed, no one wants to be left behind when there is an opportunity to make giant strides for their stakeholders. But along with its astonishing powers, generative AI appears to come with hallucinations and bias baked in. Bias is not restricted to generative AI and LLMs alone; it can be present in any training data, model, or AI prediction. However, the risk of bias in generative AI systems has put responsible AI front and center in corporate conversations.
What major concerns do business leaders have regarding AI?
Executives must set their organization’s standard, striking the balance between what can be done with AI and what should be done with it. Developing robust corporate policies to establish accountability, ensuring AI systems align with ethical requirements, and maintaining transparency are critical first steps. Keeping a close eye on upcoming regulations is another essential task.
The EU’s AI Act, now reaching the final stages of legislation, will soon enforce unbiased and ethical use of AI, with fines of up to €35 million or 7% of global annual turnover, whichever is higher. In the UK, firms regulated by the Financial Conduct Authority (FCA) are already subject to obligations under the Senior Managers and Certification Regime (SM&CR), which holds “accountable individuals” responsible for demonstrating the explainability of decisions made by AI.
C-level executives are left grappling with fundamental questions: Can you trust AI? How do you ensure that decisions delivered by AI are impartial, conform to legislation and uphold basic human rights? And how do you do that without the additional headache of locating, evaluating and training all your people to use multiple technologies from various vendors?
Do we already have the solution?
Fortunately, many forward-thinking technology companies were already addressing the trust question before generative AI revealed the full implications and risks of AI. Discussions and research have been ongoing for several years, and some of the world’s brightest minds have been working on solutions to make trustworthy AI a reality.
Fujitsu has built tools to detect and mitigate bias in AI models and launched a private, trustworthy sandbox environment for leveraging the power of conversational generative AI. Both AI Trust and Generative AI are key areas under Fujitsu Kozuchi, which brings together seven areas of AI whose technologies can be rapidly developed, tested and implemented to deliver immediate results.
A key advantage of Fujitsu Kozuchi Generative AI is that organizations and startups can design and develop new generative AI solutions that meet their specific needs without worrying about resources: the risk of GPU scarcity is mitigated because the necessary infrastructure is already in place, while no-code natural-language inputs and consultancy from expert advisors minimize skills gaps.
The real breakthrough, however, comes with the ability to achieve a new level of trust, created with multiple technology layers presented as a holistic solution:
• A new Fujitsu technology detects hallucinations – instances in which generative AI produces incorrect or unrelated output – in conversational AI models.
• Fujitsu AI Ethics for Fairness allows the fairness of training datasets and AI decisions to be verified simply in a web browser. It includes functionality to visually assess bias mitigation techniques and select the best one.
• Fujitsu Intersectional Fairness, integrated into Fujitsu AI Ethics for Fairness, specifically addresses discrimination against groups of people defined by combinations of sensitive characteristics (e.g., gender, race, age, sexuality, religion, disability) – discrimination that traditional bias mitigation strategies fail to detect and mitigate (see the simplified sketch after this list).
• Fujitsu AI Ethics Impact Assessment provides a digitized solution for assessing the ethical impact and risks of AI systems, based on international AI ethics guidelines and previous AI incidents. It can be used to future-proof AI systems so they comply with ethics principles and legal requirements, and it can generate evidence for engaging with auditors, approvers, and stakeholders.
• A recent solution looks beyond model development to provide security technologies that protect generative AI against attacks such as disinformation and “LLM poisoning”.
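To make the intersectional problem concrete, here is a minimal, hypothetical sketch in plain Python/pandas. It is not Fujitsu’s Intersectional Fairness technology; the column names, groups and toy numbers are illustrative assumptions. A model’s approvals look perfectly balanced when gender and age are checked one attribute at a time, yet specific combinations are treated very differently:

```python
import pandas as pd

# Toy predictions: 10 people per (gender, age_group) combination.
rows = []
for gender, age, positives in [("F", "young", 9), ("F", "old", 1),
                               ("M", "young", 1), ("M", "old", 9)]:
    rows += [{"gender": gender, "age_group": age, "approved": int(i < positives)}
             for i in range(10)]
df = pd.DataFrame(rows)

# Single-attribute approval rates: each attribute looks perfectly balanced (0.5 vs 0.5).
print(df.groupby("gender")["approved"].mean())
print(df.groupby("age_group")["approved"].mean())

# Intersectional approval rates expose the hidden disparity (0.9 vs 0.1).
intersections = df.groupby(["gender", "age_group"])["approved"].mean()
print(intersections)
print("worst/best intersection ratio:", round(intersections.min() / intersections.max(), 2))
```

Checking each attribute separately reports a 0.5 approval rate for every group; only the combined view exposes the 0.9 versus 0.1 gap, which is exactly the kind of disparity that intersectional analysis is designed to surface.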
Democratizing AI
We want to realize a world in which developers everywhere can easily and securely use the latest technologies on open platforms to create new trustworthy applications and find innovative solutions to challenges facing business and society.
To that end, Fujitsu, working with the Linux Foundation, has recently released its automated machine learning and AI fairness technologies as open-source software (OSS). This gives developers access to software that automatically generates code for new machine learning models and to a technology that addresses latent biases in training data. The Linux Foundation has approved the SapientML and Intersectional Fairness AI technologies to encourage developers worldwide to experiment and innovate further with AI and machine learning.
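As a rough illustration of what “addressing latent biases in training data” can look like in practice, the sketch below applies a standard reweighing scheme (in the spirit of Kamiran and Calders) using plain pandas. It is not the API of the released OSS, and the column names and toy data are assumptions for illustration only:

```python
import pandas as pd

def reweigh(df: pd.DataFrame, group_col: str, label_col: str) -> pd.Series:
    """Per-row weight P(group) * P(label) / P(group, label), so that group
    membership and the label become independent in the weighted data."""
    p_group = df[group_col].value_counts(normalize=True)
    p_label = df[label_col].value_counts(normalize=True)
    p_joint = df.groupby([group_col, label_col]).size() / len(df)
    expected = (df[group_col].map(p_group) * df[label_col].map(p_label)).to_numpy()
    observed = p_joint.loc[list(zip(df[group_col], df[label_col]))].to_numpy()
    return pd.Series(expected / observed, index=df.index, name="sample_weight")

# Toy training data: favourable outcomes (label=1) are rarer for group "B".
train = pd.DataFrame({
    "group": ["A"] * 6 + ["B"] * 4,
    "label": [1, 1, 1, 1, 0, 0, 1, 0, 0, 0],
})
weights = reweigh(train, "group", "label")
print(train.assign(weight=weights.round(3)))
# Under-represented combinations such as ("B", 1) get weights above 1, so a
# downstream model trained with fit(..., sample_weight=weights) no longer
# learns the spurious link between group membership and the outcome.
```

The resulting weights can be passed to most scikit-learn-style estimators as sample weights, which is one simple way a latent bias detected in the data can be corrected before training.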
Without building confidence, consumers and society will reject AI
A final thought. The analogy is not perfect, but there is a distinct parallel between the need for greater confidence in AI today and the introduction of quality initiatives in manufacturing some decades ago. It’s hard to remember now, but manufacturing quality was highly variable until a new philosophy of Quality Management (QM) emerged. Japanese industry, which initially had a poor reputation for quality, embraced QM wholeheartedly, and Japanese engineering became synonymous with reliability and zero defects.
The point of the analogy is that Japanese manufacturers didn’t treat quality as an additional cost but as the value-adding core principle of their offering. That’s how organizations using and developing AI should approach trust.
AI is the technology most likely to drive business and society in the coming decades. We believe building confidence rests at the heart of the compact between business and society – especially in AI. Without this confidence, society will reject the promise of AI to transform business and improve living conditions for everyone.
To learn more about Fujitsu AI and Fujitsu Kozuchi, please visit: www.fujitsu.com/global/kozuchi