
Managing Multiplying Black Swans

Organizations can adapt risk management strategies to prepare for AI-related risk and incorporate AI tools into risk management.

Simon Axon
August 20, 2024 · 4 min read

Artificial intelligence is a powerful tool for any business that seeks to manage the volatility and uncertainty of today’s business context. Predictive analytics and large language model generative AI can both be used to find patterns that help forecast and prepare for future scenarios. But unpredicted, and unpredictable, events will still occur; there will still be ‘black swan’ events that even AI struggles to foresee, explain, or react to appropriately. The ‘black-box’ nature of AI can also introduce new uncertainty and new risks. It can create its own black swan events, whether as hallucinations or through unforeseen combinations of data. So how can organizations adapt risk management strategies both to prepare for AI-related risk and to incorporate AI tools into risk management? This is a question being asked by organizations and regulators across many sectors.

Keeping up with AI

Black swan events are a useful starting point for this discussion, but the bulk of the iceberg is made up of the wide range of risks that poorly controlled AI could manifest. A recent article in the MIT Sloan Management Review found that fewer than one in five (17%) of the experts surveyed agreed that organizations were expanding risk management capabilities sufficiently to address AI-related risks. They pointed to the pace of change and the increasing complexity of models as key reasons for this failure to keep up. Ambiguity, a lack of process to identify what and where AI risks lie, and difficulties in quantifying them were also seen as barriers.

However, many AI-related risks have their roots in well-established risk drivers, including data governance, privacy, and security. Risk management is the core of good business practice and the foundation of financial services. Effective governance, reporting, and wide-ranging regulatory requirements are bread-and-butter there, but many other sectors are much less well prepared. Many of the building blocks for effective management of AI risk, those that ensure data quality and provenance, are yet to be put in place. No time should be wasted in putting these fundamentals in place ahead of the AI explosion.

AI to manage risk

Banks are already using predictive analytics to improve the accuracy of credit scoring and fraud detection; manufacturers are applying it to quality control, predictive maintenance, and enhanced R&D processes. Generative AI has the potential to go further, analyzing vast quantities of data to provide new and better insights to employees and customers alike. It could improve advice and lower the risk of sub-optimal outcomes for all parties, including spotting new risks and highlighting unknown unknowns. It can also accelerate and reimagine processes to deliver faster insights: imagine natural language queries that give accurate answers to experienced technicians dealing with unusual problems, rather than time-consuming manual searches and study of complex technical manuals. Applications like these promise significant productivity gains across industries.
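As a rough illustration of that manual-lookup idea, the minimal sketch below retrieves the most relevant manual passage for a technician’s natural language query. The manual excerpts are invented, and plain TF-IDF similarity stands in for the retrieval step a generative AI assistant would build on.

```python
# Minimal sketch: retrieving relevant manual passages for a natural
# language query. A production system would pair retrieval like this
# with an LLM; here TF-IDF similarity stands in for that step.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical manual excerpts -- illustrative only.
sections = [
    "Error E42: pump pressure drops below threshold; check valve seals.",
    "Routine maintenance: replace filter cartridges every 500 hours.",
    "Error E17: motor overheats under sustained load; inspect cooling fan.",
]

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(sections)

def answer(query: str) -> str:
    """Return the manual section most similar to the technician's query."""
    query_vec = vectorizer.transform([query])
    scores = cosine_similarity(query_vec, doc_matrix)[0]
    return sections[scores.argmax()]

print(answer("the pump is losing pressure, what should I check?"))
```

Even this crude version turns minutes of manual searching into a single question; the generative AI variant adds a fluent, context-aware answer on top of the same retrieval idea.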

But businesses should also understand and prepare for the new risks and limitations of AI itself. Bias, poor-quality data, and lack of oversight are all recognized risks that AI can amplify. And AI is not an infallible forecaster. Predictive or probabilistic AI models may not respond well to so-called ‘black swan’ events. These highly unlikely scenarios, born of almost random combinations of previously unrelated factors, are not only highly disruptive but can cause cascade effects in automated, AI-driven systems. By their nature, these ‘edge-case’ scenarios happen extremely rarely, so there is virtually no data available to train AI to manage them.
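One practical mitigation is to detect when an input falls far outside anything the model was trained on and hand the decision to a human rather than letting an automated pipeline act on it. The sketch below is a deliberately simple z-score check on invented features; it is illustrative, not a production anomaly detector.

```python
# Minimal sketch: flagging out-of-distribution inputs so an automated
# system defers to a human instead of trusting a model trained on data
# that never contained such cases. Data and threshold are illustrative.
import numpy as np

# Stand-in for historical training features.
train = np.random.default_rng(0).normal(0, 1, size=(1000, 3))
mean, std = train.mean(axis=0), train.std(axis=0)

def is_out_of_distribution(x: np.ndarray, z_limit: float = 4.0) -> bool:
    """True if any feature lies far outside the training distribution."""
    z = np.abs((x - mean) / std)
    return bool((z > z_limit).any())

candidate = np.array([0.1, -0.5, 9.0])  # an extreme, unseen combination
if is_out_of_distribution(candidate):
    print("Escalate to human review")  # do not act automatically
```

Even a crude gate like this helps break the cascade: the automated system declines to act on combinations of factors it has never seen.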

Good data governance is key

Faced with these difficulties in explaining the outputs of AI, firms and regulators are moving to focus on the inputs as well as the outputs. The black-box nature of AI decisions makes it very hard to examine why an individual decision was made the way it was. All AIs make their decisions based on data, and data can be examined. But it isn’t enough simply to look at the database to see what data was used. A more in-depth examination of the whole data pipeline that supports model building, including provenance, transformation, and preparation, is required. Understanding the rules by which AIs learn and adapt is also critical, so organizations must ensure that regulators can access all these stages in order to demonstrate why decisions are made.
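To make that concrete, here is a minimal sketch of provenance recording: each pipeline stage is logged with a content hash and a timestamp so auditors can later verify exactly what data a model was built from. The stage names and data are hypothetical; real deployments would use dedicated lineage and catalog tooling.

```python
# Minimal sketch of recording provenance for each pipeline stage, so the
# data behind a model decision can later be audited. Stages and data are
# hypothetical examples.
import hashlib
import json
from datetime import datetime, timezone

lineage_log = []

def record_stage(stage: str, data: bytes) -> None:
    """Append an auditable record: stage name, content hash, timestamp."""
    lineage_log.append({
        "stage": stage,
        "sha256": hashlib.sha256(data).hexdigest(),
        "at": datetime.now(timezone.utc).isoformat(),
    })

raw = b"customer_id,income\n101,52000\n102,61000\n"
record_stage("ingest", raw)

cleaned = raw.replace(b"\r", b"")  # example transformation step
record_stage("clean", cleaned)

print(json.dumps(lineage_log, indent=2))
```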

Transparency 

Transparency of data and models is fundamental for AI to be trusted, and to reinforce and extend tried and tested risk-management approaches to this fast-evolving space. Those organizations that have invested in open and connected data ecosystems that provide a single view of trusted data are best positioned to extend risk management to AI. By providing visibility internally, and potentially to regulators, of the data used by AI to make decisions, they can demonstrate why these decisions should be treated as fair, equitable, and safe. Critically, since data is the essential raw material of AI, it must be validated and trustworthy before any AI consumes it for training or inference.
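As a simple illustration of that last point, validation gates can sit in front of any training or inference job. The checks and column names below are invented examples of the kind of rules a governance team might enforce.

```python
# Minimal sketch: basic validation gates before data is used for training
# or inference. Checks and column names are hypothetical examples.
import pandas as pd

def validate(df: pd.DataFrame) -> list[str]:
    """Return a list of human-readable validation failures."""
    problems = []
    if df["income"].isna().any():
        problems.append("missing income values")
    if (df["income"] < 0).any():
        problems.append("negative income values")
    if df["customer_id"].duplicated().any():
        problems.append("duplicate customer ids")
    return problems

df = pd.DataFrame({"customer_id": [1, 2, 2], "income": [52000, -10, 61000]})
issues = validate(df)
if issues:
    print("Data rejected:", issues)  # block training/inference on bad data
```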

Process and people 

People and processes are important elements in managing AI risk. Risk management can never overlook the role of people: humans need to remain at the center of AI technology and have a vital role to play in managing risk. A human-centered approach reinforces ethical, value-driven standards and is an important element in limiting risk and potential harm to individuals, businesses, and brands. Responsible individuals must be in the loop to check for bias, to spot and challenge unexpected outcomes, and to enforce data governance and security standards at all times. Data features used in model development should be preserved so that, even as data scientists move on to new roles and tasks, it is possible to recreate model development.
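A lightweight way to preserve that information is a manifest stored alongside each model, recording the feature set, a fingerprint of the training data, and the hyperparameters used. The names and values below are hypothetical.

```python
# Minimal sketch: persisting the feature set and a training-data
# fingerprint alongside a model so its development can be reproduced
# after staff turnover. Names, paths, and values are hypothetical.
import hashlib
import json

features = ["income", "debt_ratio", "payment_history_months"]
training_data = b"...serialized training dataset bytes..."  # placeholder

manifest = {
    "model_version": "credit_risk_v3",
    "features": features,
    "training_data_sha256": hashlib.sha256(training_data).hexdigest(),
    "hyperparameters": {"max_depth": 6, "n_estimators": 200},
}

with open("credit_risk_v3_manifest.json", "w") as f:
    json.dump(manifest, f, indent=2)
```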

Trusted AI depends on extending existing risk management practices into this fast-evolving space. The building blocks of transparency, good data governance, and compliance with existing regulations are all there. Now is the time to deploy these assets to secure growth, value, and innovation from AI while managing risk. Learn more about Trusted AI.


About Simon Axon

Simon Axon leads the Financial Services Industry Strategy & Business Value Engineering practices across EMEA and APJ. His role is to help our customers drive more commercial value from their data by understanding the impact of integrated data and advanced analytics. Prior to his current role, Simon led the Data Science, Business Analysis, and Industry Consultancy practices in the UK and Ireland, applying his diverse experience across multiple industries to understand customers' needs and identify opportunities to leverage data and analytics to achieve high-impact business outcomes. Before joining Teradata in 2015, Simon worked for Sainsbury's and CACI Limited.

