Why AI in banking struggles despite its massive potential
Despite its vast potential, AI in banking faces significant hurdles. AI co-pilots can accelerate processes like risk assessment and fraud detection, yet they are often slowed by lengthy approval cycles and regulatory demands.
The complexity of MLOps in banking stems from legacy systems and human-in-the-loop processes. AI engineers in Asian banks have built efficient AI co-pilots for a range of tasks, only to see them held back by Model Risk Management Committee sign-off processes that can stretch to nine months. Despite these frictions, the World Economic Forum reports that AI in banking could generate hundreds of billions in value through efficiency gains and new revenue.
Banks invest heavily in debiasing techniques and fairness audits to prevent algorithmic bias, but explainability remains the harder problem for AI engineers. Regulators require clear explanations for individual model decisions, which makes explainable AI (XAI) central to resolving the conflict between speed and safety. The true latency of AI in banking isn't inference time; it's regulatory compliance.
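To make the explainability requirement concrete, here is a minimal sketch of the kind of per-feature "reason code" decomposition regulators expect alongside a credit decision. The feature names, weights, and applicant values are entirely hypothetical, and a simple linear scorecard stands in for whatever model a bank actually deploys; the point is only that each decision can be broken into auditable per-feature contributions.

```python
import math

# Hypothetical linear credit scorecard. Weights and features are
# illustrative only, not drawn from any real bank's model.
WEIGHTS = {
    "debt_to_income": -2.5,
    "years_of_history": 0.4,
    "recent_defaults": -1.8,
    "income_log": 0.9,
}
BIAS = -0.5

def score_with_reasons(features: dict) -> tuple[float, list[str]]:
    """Return an approval probability plus features ranked by how
    strongly they pushed the decision down -- the decomposition an
    adverse-action notice would draw on."""
    contributions = {name: WEIGHTS[name] * value
                     for name, value in features.items()}
    logit = BIAS + sum(contributions.values())
    prob = 1 / (1 + math.exp(-logit))
    # Most negative contribution first: the top "reason codes".
    reasons = sorted(contributions, key=contributions.get)
    return prob, reasons

applicant = {
    "debt_to_income": 0.6,
    "years_of_history": 3.0,
    "recent_defaults": 1.0,
    "income_log": 4.0,
}
prob, reasons = score_with_reasons(applicant)
# reasons[0] is the feature that hurt the applicant most
# (here "recent_defaults"), which is what a regulator asks to see.
```

For an opaque model the same contract would be met with post-hoc attribution methods (e.g. Shapley-value explainers) instead of raw coefficients, but the output shape, a score plus ranked per-feature reasons, is the same.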
Regulation acts as a friction point for AI engineering in banking, but it is vital for building safer systems. The future of AI in banking is being shaped, and ultimately strengthened, by regulation, with the goal of transparent, auditable decision engines. Regulators focus on preventing an AI-induced systemic shock, prioritizing risk-proportionate governance over unchecked innovation.