As artificial intelligence moves from experimentation into real operational workflows, the conversation in wealth management is shifting decisively from curiosity to control. At a recent Hubbis roundtable in Singapore, chaired by Andrew Chow, strategic advisor to Hubbis, senior leaders from across private banking, wealth management, family offices and financial services gathered to examine how firms should oversee AI risk as adoption broadens.
What emerged was a grounded and highly technical exchange about governance, accountability, explainability, third-party risk, cloud dependency, operating model design and the growing need for senior management to understand where AI is already embedded across their organisations, anchored by subject matter experts in AI risk management. Attendees made clear that the question for senior management is no longer whether to use AI, but how to use it with sufficient discipline, control and commercial realism to reshape the firm.
Key Takeaways
- AI adoption is accelerating, but most firms remain cautious about where and how it should be deployed.
- Senior management oversight is becoming central to meeting increasingly well-defined regulatory expectations around governance, accountability and control.
- KYC, onboarding, portfolio analytics, data consolidation and productivity workflows were cited as the most active use cases.
- Explainability remains a core concern, especially for generative AI, where achieving traditional model transparency may be unrealistic.
- Agentic AI may improve traceability if firms can break processes into bounded and testable steps.
- Cloud-based AI models and third-party tools are raising difficult questions around data lineage, vendor assurance and operational resilience.
- Smaller firms face a materially different challenge from larger institutions, especially where budgets, infrastructure and internal expertise are limited and they lack the scale to establish data lakes and internal large language models.
- The real transformation may happen at the task level rather than the job level, requiring firms to rethink operating models, skills and workforce design.
- AI adoption is not a given; much more thought needs to go into organisational redesign to encourage humans to embrace AI.
The Governance Question Is Now Front and Centre
The discussion validated that many firms are already using AI in some form, even if adoption remains uneven and often tightly bounded. Around the table, participants described activity spanning KYC, source-of-wealth reviews, reconciliation, portfolio consolidation, document production, risk screening, product material generation, internal research support and front-office productivity tools.
Through discussion, the central concern that emerged was how senior management should oversee these tools once they begin to influence workflows, outputs and risk decisions. The challenge for boards and leadership teams is to understand where AI is being used, what data is flowing through those systems, where the guardrails sit, and what happens if those models or providers fail.
AI governance is no longer a theoretical exercise or a narrow technology function but is instead a management issue, a control issue and, increasingly, a strategic issue.
Firms Are Starting with Practical, Lower-Risk Use Cases
The discussion indicated that most firms are targeting more contained and measurable areas, where productivity gains can be demonstrated and risk can be more comfortably controlled, rather than the most sensitive or business-critical applications. Productivity-led use cases are being prioritised, particularly where AI can reduce manual work, summarise data, structure internal material or speed up document-heavy processes. More sensitive areas, especially those involving client onboarding, suitability, investment decision support or compliance judgement, are being approached much more cautiously given the potential for hallucination.
One participant said the safest path is to begin with the “low-hanging fruit” and build confidence from there. Another noted that while everyone talks about AI, the real challenge is operationalising it in ways that are measurable, governable and commercially worthwhile. That means defining KPIs, understanding productivity impact, creating governance around specific deployments and ensuring the underlying data environment is robust enough to support the tools.
That gradualism was not presented as reluctance. It was framed instead as a sensible recognition that the use of AI needs to move in tandem with the maturity of risk oversight.
Reliability Is Still a Major Constraint
One of the most substantive parts of the roundtable focused on explainability. For wealth managers and private banks, this remains one of the most difficult issues in AI deployment, particularly in regulated or judgement-heavy functions.
The concern was expressed in very practical terms. Participants pointed out that generative AI can still produce invented numbers, unsupported conclusions and outputs that are difficult to trace or defend. In research-related applications, this may be irritating but manageable. In KYC, compliance or client-facing investment contexts, it becomes a much more serious issue.
As was observed in the room, the underlying issue is that when a human gets something wrong, their reasoning can be interrogated. With generative AI, that is often far less straightforward. One attendee argued that this remains one of the most important differences between an analyst and a model. Another suggested that the industry still lacks a settled understanding of what explainability should realistically mean in a generative AI context.
That prompted a more nuanced line of thinking. Rather than demanding full model explainability in the classical sense, the roundtable explored whether firms should instead focus on evaluation and testing, traceability of process, and the design of control frameworks that allow humans to understand what happened and why. Explainability may need to be treated less as a standalone technical property and more as one element of a control architecture that also includes human oversight, evaluation, testing and monitoring.
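One way to make "evaluation and testing" concrete, without attempting model introspection, is a post-hoc output check. The sketch below is illustrative only (the function, the numeric-matching rule and the example text are assumptions, not anything presented at the roundtable): it flags numbers in a generated summary that cannot be traced back to the source document, so a human reviewer knows exactly which claims to challenge.

```python
import re

def unsupported_numbers(summary: str, source: str) -> list[str]:
    """Flag numeric claims in a generated summary that cannot be traced
    back to the source text -- a crude traceability check, not explainability."""
    source_numbers = set(re.findall(r"\d+(?:\.\d+)?", source))
    return [n for n in re.findall(r"\d+(?:\.\d+)?", summary)
            if n not in source_numbers]

# Hypothetical example: the model invents "13 million" out of thin air.
source = "The portfolio returned 4.2 percent in 2024 on assets of 310 million."
summary = "The portfolio returned 4.2 percent, or roughly 13 million, in 2024."

flagged = unsupported_numbers(summary, source)
print(flagged)  # the invented figure is surfaced for human review
```

A production control would need far more sophistication (units, rounding, paraphrase), but the design principle is the same: test the output against its claimed sources rather than trying to open the model itself.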
Agentic AI May Offer More Structure Than Generative AI Alone
The discussion also drew a useful distinction between different forms of AI. One contributor suggested that firms should frame their thinking of AI not as a single category but instead break it down into three different types of AI – the AI of numbers (traditional machine learning), the AI of words (generative AI) and the AI of actions (agentic AI).
Demanding explainability from traditional machine learning models operating on structured numerical data may still be difficult, but those models are generally more bounded and more amenable to testing. Generative AI introduces a different kind of uncertainty, especially in language generation and reasoning, because hallucination is inherent to how these models produce output. Agentic AI, however, may create a more controllable pathway if its actions can be broken into steps, logged, checked and governed.
Agentic workflows may in fact support a more practical form of oversight because they allow firms to separate retrieval, summarisation, analysis and execution into distinct stages. That creates the possibility of gating, logging and validating each step rather than treating the whole output as a black box.
As one participant put it, the opportunity is not to ask the model to do everything at once. It is to deconstruct tasks, set boundaries around each capability and reduce error through structured design. For firms looking to move beyond basic chat-based tools, that may prove to be one of the most important implementation lessons.
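The staged, gated design described above can be sketched in code. The following is a minimal illustration under stated assumptions: the stage names, gate functions and stand-in lambdas are hypothetical, and in practice each stage would wrap a model or tool call with far richer validation. The point it demonstrates is structural: each capability runs separately, is validated by a gate, and is logged before the pipeline is allowed to proceed.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class StageResult:
    stage: str
    output: str
    approved: bool

@dataclass
class AgentPipeline:
    """Run an agentic task as discrete, gated, logged stages
    rather than as a single opaque model call."""
    log: list = field(default_factory=list)

    def run_stage(self, name: str, fn: Callable, data: str,
                  gate: Callable) -> str:
        output = fn(data)        # the model or tool call for this stage
        approved = gate(output)  # validation gate: schema check, human review, etc.
        self.log.append(StageResult(name, output, approved))
        if not approved:
            raise RuntimeError(f"Stage '{name}' failed its gate; halting.")
        return output

# Illustrative stand-ins for retrieval and summarisation stages.
pipeline = AgentPipeline()
doc = pipeline.run_stage("retrieve", lambda q: f"documents for: {q}",
                         "client source-of-wealth", gate=lambda o: len(o) > 0)
summary = pipeline.run_stage("summarise", str.upper,
                             doc, gate=lambda o: "DOCUMENTS" in o)
print([(r.stage, r.approved) for r in pipeline.log])
```

Because every stage leaves a logged, individually approved record, a reviewer can later reconstruct what happened at each step instead of confronting one undifferentiated output.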
Human Oversight Is Essential
Several attendees argued that the burden of overseeing AI systems cannot simply be pushed onto end users. Firms need to design systems in ways that genuinely support human review, rather than assuming that putting a human in the loop automatically solves the problem. It was noted that if an organisation relies too heavily on system outputs without training staff to question them, the control failure becomes cultural as much as technical.
The attendees agreed that oversight has to go beyond formal approval or policy language and be implemented in practice. Management needs visibility into the design of workflows, the role of human review, the escalation routes when outputs look suspicious, and the conditions under which reliance on the model is appropriate. Oversight must be built into the organisation's own operations rather than bolted on as an afterthought.
Third-Party Risk and Cloud Dependency Are Becoming Harder to Ignore
If explainability is one major challenge, third-party risk is another. The roundtable repeatedly returned to the complications introduced by cloud-based models, outsourced AI services and enterprise tools that sit outside a firm’s direct control. While these complications are not strictly an AI issue, the guidelines and regulations surrounding third-party risk will inevitably affect AI deployments.
Participants pointed out that many AI implementations today rely on providers whose infrastructure, subcontracting arrangements and data-handling architectures are not always fully transparent. That creates tension between what firms are expected to know and what is realistically obtainable from large vendors.
The difficulty is particularly acute for smaller institutions. Large global banks may be able to invest in private environments, custom controls or internal gating layers that strip out sensitive data before prompts are processed. Smaller banks, EAMs, boutiques and family offices often do not have that luxury. For them, the cost of safe deployment can quickly become disproportionate.
This led to a broader recognition that some of the hardest AI questions are not purely about AI at all. They are about cyber security, data lineage and integrity, access controls, outsourcing risk, vendor management and enterprise architecture. As was said during the discussion, once firms move into public or shared model environments, the issue is no longer just innovation. It is whether the control environment is strong enough to support it.
The Competitive Dimension Cannot Be Ignored
The roundtable also explored the strategic tension facing regulators and regulated firms alike. If AI is genuinely going to change the economics of financial services, then there is a question of whether firms subject to differing regulatory intensity will remain competitive.
One participant argued that if leading international institutions are allowed to move faster, automate more aggressively and restructure their operating models earlier, local firms may find themselves at a structural disadvantage. The concern was not expressed as a plea for deregulation, but as a recognition that supervisory approaches will shape the speed and confidence with which firms can act.
At the same time, others cautioned against assuming that speed alone will determine the winners. A more relaxed regulatory stance elsewhere may generate short-term advantages, but the sustainability of that approach remains uncertain, especially if firms remove too much human capability before they fully understand how to manage the new risks.
The tension between regulation and competition remains unresolved, but more clarity may emerge as first movers encounter challenges.
The Real Shift May Be at the Task Level
One of the most thought-provoking themes of the discussion was whether firms should think less about jobs disappearing and more about tasks being reconfigured.
That distinction is important. Jobs in financial services are made up of many smaller activities or tasks – retrieving information, checking documents, summarising content, moving data, reviewing outputs, escalating exceptions, making judgements and communicating decisions. AI may not eliminate whole roles in the near term, but it is already beginning to absorb some of these composite tasks.
Several contributors suggested that this is the level at which firms should analyse the opportunity. Instead of asking where AI fits into a department, they should examine which tasks are repetitive, rule-based, document-heavy or disliked but necessary. That creates a more practical route to identifying value, scoping controls and designing adoption.
One attendee described these as the kinds of jobs that people do not especially want to perform but which still consume time and organisational energy. Another argued that the skill sets within many existing roles are likely to change significantly, even if the roles themselves persist.
That implies a different leadership challenge. The issue is not only whether AI works. It is whether firms can redesign workflows, retrain staff, establish controls and maintain trust while absorbing the organisational consequences. This suggests that jobs, as aggregations of tasks, will persist, but that their composition will change over time.
Adoption Is Also a Human Problem
This led naturally into the final major theme of the roundtable: adoption. Several participants acknowledged that even where tools work, staff do not always use them. In some cases, people prefer to continue doing tasks manually, even if the AI can complete them faster and reasonably well.
That resistance is not irrational. In many firms, trust is earned through experience, repetition and peer validation. Asking staff to hand over parts of their work to a system can feel threatening, especially when their expertise has been built precisely around those tasks. As one contributor observed, the issue is not just technology change. It is also about human disintermediation risk.
That matters because AI transformation will not succeed through tools alone. Firms will need staff who can define problems, verify outputs, create effective guardrails, challenge system behaviour and exercise judgement around edge cases. They will also need management teams that understand the cultural and operational friction that comes with that transition.
The roundtable did not suggest that these obstacles are easily solved. But it did make one thing clear: AI implementation is not merely a technology rollout. It is an organisational redesign challenge.
A More Mature AI Conversation Is Beginning to Take Shape
What made this discussion useful was its realism. There was no suggestion that AI can be ignored, nor any assumption that it should simply be embraced wholesale. Instead, the conversation reflected a more mature phase in the industry’s thinking.
Attendees were not asking whether AI is powerful. They were asking how to govern it, how to test it, how to create boundaries, how to justify reliance on it, how to manage vendors, how to align it with risk appetite and how to adopt it without losing control of the operating model.
That is a far more consequential discussion than the generic enthusiasm that has often dominated the early AI narrative.
For wealth management firms, family offices and private banks, that shift may be the most important development of all. The challenge now is not to be seen talking about AI. It is to build the management discipline required to use it responsibly, transparently, competitively and at scale.
Links to Wider Reading / Source Material
1) Guidelines on AI Risk Management (Proposed)
MAS consultation paper setting out proposed supervisory expectations for AI risk management across financial institutions.
https://www.mas.gov.sg/-/media/mas-media-library/publications/consultations/bd/2025/final_consultation_paper_on_guidelines_on_ai_risk_management_forrelease.pdf
2) AI Model Risk Management
MAS information paper outlining key considerations around AI model risk management and supervisory thinking.
https://www.mas.gov.sg/-/media/mas-media-library/publications/monographs-or-information-paper/imd/2024/information-paper-on-ai-risk-management-final.pdf
3) AI Risk Management: Executive Handbook
Executive-focused handbook offering practical guidance on AI oversight, governance and management responsibilities.
https://www.mas.gov.sg/-/media/mas-media-library/schemes-and-initiatives/ftig/project-mindforge/mindforge-ai-risk-management-executive-handbook.pdf
4) AI Risk Management: Operationalisation Handbook
Operational handbook focused on implementation, controls and embedding AI risk management into day-to-day practice.
https://www.mas.gov.sg/-/media/mas-media-library/schemes-and-initiatives/ftig/project-mindforge/mindforge-ai-risk-management-operationalisation-handbook.pdf
5) IMDA Model AI Governance Framework for Agentic AI
IMDA framework addressing governance considerations for agentic AI and more autonomous AI systems.
https://www.imda.gov.sg/-/media/imda/files/about/emerging-tech-and-research/artificial-intelligence/mgf-for-agentic-ai.pdf
6) Andrew Chow Op-Ed: “Understanding the Complexities in Managing Risk for Artificial Intelligence: The Challenges for Senior Management”
Hubbis op-ed exploring the governance and risk management questions facing senior leadership as AI adoption accelerates.
https://www.hubbis.com/article/understanding-the-complexities-in-managing-risk-for-artificial-intelligence-the-challenges-for-senior-management
