Posted by Andrew & Nada on 13th November 2024
Boards don’t need AI experts, but they must fulfil their duty

The rapid evolution of artificial intelligence (AI) is transforming industries and stirring both excitement and concern across boardrooms worldwide.

With advances like ChatGPT, organisations gain tools to elevate creativity and productivity, yet they also face heightened risks, including the potential for disinformation and data breaches. As we move through 2024, boards are navigating an increasingly complex digital landscape. This environment doesn’t demand AI experts on every board, but it does require boards to execute their governance duties effectively, with a keen focus on strategic integration, risk management, and ethical oversight.

AI’s opportunities and risks: What boards need to know
AI brings unique capabilities to businesses. Generative AI tools can produce text, images, and even art on demand, while broader AI applications streamline decision-making and enhance data processing with less human error. These technologies, however, also carry notable risks. For example, generative AI can convincingly produce fabricated news stories or even academic papers, making it an unwitting agent of disinformation.

Such potential for AI to mislead highlights the broader governance challenge: while the power of AI can accelerate business growth, it also exposes companies to ethical and reputational risks.

If unchecked, AI can enable harmful outcomes, from data breaches—such as the 2023 incident in which OpenAI’s ChatGPT inadvertently exposed user data—to privacy violations that invite regulatory scrutiny, like the Italian data protection authority’s temporary ban on ChatGPT. For boards, understanding and overseeing these risks is paramount, not as AI experts, but as stewards of their organisations’ assets and reputations.

The governance gap in AI adoption
A growing body of advice exists for boards regarding AI, but much of it is vague, failing to address actionable governance principles. AI’s role in governance goes beyond merely understanding technology—it requires boards to ask how AI can strategically fit into their organisation’s goals and business model. Directors should adopt a clear, accountable approach to integrating AI, focusing not only on operational gains but also on potential vulnerabilities.

The value of AI to the organisation
AI offers several advantages for improving operational efficiency, from faster data processing to reduced costs and minimised human error. For example, AI tools can handle complex data inputs more effectively than humans, resulting in increased accuracy and predictive power.

These advancements offer significant benefits in areas such as customer service and market analysis, expanding the organisation’s reach. However, boards must ensure these benefits align with the company’s strategy, carefully weighing the long-term advantages against potential risks.

Acknowledging the pitfalls of AI
AI’s potential to misinform cannot be ignored. Many users take AI-generated outputs at face value, even when those outputs are erroneous, presenting a unique reputational risk for organisations. As AI-driven services become more fluent and authoritative in their responses, the potential for false information to circulate grows, and boards must establish controls to mitigate such risks. Service providers, meanwhile, often place the onus of validating AI outputs on users themselves, which can lead to lapses in quality and accountability.

Boards must also contend with the “education trap”, in which the emphasis falls on educating users about AI tools rather than on the governance and strategic oversight necessary for AI’s effective integration. This can create a disconnect between understanding the technology and effectively managing its implementation within the organisation’s risk and ethical frameworks.

A four-step plan for AI governance
Boards can follow a structured approach to AI adoption, centred on four essential considerations: competitive advantage, risk, reputation, and value. This process allows boards to maximise the benefits of AI while navigating its risks and aligning with organisational goals.

  1. Competitive Advantage
    Boards must analyse AI’s potential to provide a competitive edge. By scrutinising the costs, potential gains, and strategic benefits AI can offer, directors can identify where technology can truly differentiate the organisation in the marketplace.
  2. Risk
    Boards need to assess the potential risks associated with both embracing and neglecting AI. These include technological vulnerabilities, operational disruptions, and regulatory exposure. Understanding AI-related risks allows boards to implement safeguards and make informed decisions.
  3. Reputation
    The board’s oversight role includes evaluating the reputational impact of adopting—or refraining from—AI. Missteps with AI could harm the organisation’s standing, while responsible AI integration can enhance trust and credibility.
  4. Value
    Finally, boards must determine the tangible and intangible value AI brings to the organisation, balancing competitive advantage, risk, and reputation to ensure AI adoption aligns with organisational priorities.

By adhering to these principles, boards can adopt a disciplined, evidence-based approach to AI strategy. Rather than reacting to technological trends, this approach encourages directors to ground decisions in a balanced view of AI’s benefits and challenges.

Engaging with general management: Essential for strategic implementation
For AI integration to succeed, boards need to foster communication across the organisation. While senior executives offer valuable strategic insights, general managers, with their direct knowledge of operational realities, provide essential feedback on how AI initiatives impact daily functions. This collaboration ensures AI strategies are not only theoretically sound but also practically effective.

In organisations with punitive cultures, however, general managers may hesitate to share honest insights, preferring instead to “keep the peace” by affirming executive assumptions. To counter this tendency, boards must foster an open environment, where meaningful contributions from all management levels inform strategic decisions.

AI as a growing economic force and governance challenge
Despite the risks, AI’s economic impact is undeniable. In the UK alone, AI-specific businesses added approximately £9.1 billion to the economy in 2023, while the global fraud-prevention market is projected to grow significantly alongside the rise of cybercrime. These figures underscore both AI’s potential and the importance of responsible governance to mitigate its negative effects.

The role of the board: Oversight, not expertise
Boards don’t need directors who are AI or technology experts. Rather, they need directors who understand their governance role in addressing the effects of new technologies. From blockchain to sustainability, boards should seek expert input when necessary, maintaining a balanced approach that considers competitive advantage, risk, reputation, and value.

Effective governance requires boards to engage deeply with management, seek diverse insights, and stay informed. Only through careful, evidence-based scrutiny can boards protect and enhance their organisations in an era where AI and other innovations are reshaping the corporate landscape.