Chris-Anthony Lafages Vitalis and Cédric Lalin (BearingPoint): Applied AI: Specialised by Design, Scaled by Governance.
Chris-Anthony Lafages Vitalis, Senior Manager, and Cédric Lalin, Senior Consultant at BearingPoint, discuss the state of AI in financial services, highlighting Luxembourg’s growing experimentation with generative and agentic AI while examining the governance, integration, and sector-specific challenges shaping its transition from pilots to scalable deployment.
What is your assessment of the current state of AI use in financial services?
Luxembourg’s business ecosystem has been actively experimenting with generative and agentic AI. Where generative AI has taken root, the first wave of impact is operational. Proofs of concept are multiplying in areas such as document-heavy processes, prospectus review, ESG reporting analysis, and operational controls. Middle- and back-office functions such as reconciliation and certain control activities are also emerging as promising areas: they combine repetitive workflows with large volumes of semi-structured and unstructured data, making them well suited to augmentation through AI-driven tools.
Yet these gains remain largely incremental: they improve efficiency at the margin without truly transforming the operating model, and we observe a more measured reality in which few of these initiatives have progressed to enterprise-wide deployment. A global BearingPoint study indicates that merely 8% of organisations are scaling AI as originally planned, while Gartner reports that at least half of generative AI projects are abandoned after the proof-of-concept stage, often due to unclear business value, weak governance frameworks, or integration complexity. The contrast between this operational reality and global narratives of acceleration and competitive urgency can be surprising.
The financial services regulatory environment further shapes this trajectory. Requirements for auditability, transparency, and robust risk management set a high threshold for AI adoption. Confidentiality and sovereignty add another layer of constraint, steering organisations toward dedicated cloud instances, on-premises deployments, and tightly controlled access models. In very specific contexts such as private equity structures, fund vehicles, or cross-border investor data, priorities often shift from model performance to secure architectural design. In such settings, “confidentiality by design” is not merely a preference; it is a foundational requirement for any production-grade implementation.
In short, maturity in generative and agentic AI is growing incrementally, while organisations still face persistent shortcomings in areas such as governance, internal capabilities, and overall organisational readiness.

"The objective is not to replace human expertise, but to codify it."

What are the main challenges in moving to the next stage of AI maturity?
The most significant tension is the transition from pilot or experimentation to production. Technical feasibility is often demonstrated quickly. However, integration into legacy systems, scheduling, monitoring, audit trail generation, and exception management introduce layers of complexity that POCs rarely address. A substantial proportion of agentic AI projects risk cancellation when governance, cost discipline, and measurable ROI are not embedded early in the design phase. In Luxembourg’s regulated environment, this gap is even more pronounced.
A telling example can be found in middle-office automation initiatives observed in the Luxembourg post-trade ecosystem. Early pilots focused on automating reconciliation tasks through AI-assisted extraction and rule-based matching. The real value, however, only emerged when the solution was embedded into operational systems, connected to data pipelines, and subjected to internal controls. This shift from tool to integrated capability illustrates a broader truth: industrialisation is not about demonstrating that AI works. It is about redesigning workflows, embedding governance, and ensuring that outputs are auditable, repeatable, and scalable.
Another hurdle revealed by experimentation is the limitation of general-purpose models in regulated financial environments. Large language models trained on vast and diverse online data do not automatically grasp the regulatory and legal nuances of financial documentation. In highly supervised jurisdictions such as Luxembourg, semantic imprecision is not a minor flaw; it is a material risk. Scaling therefore requires specialisation. This means finance-calibrated prompt engineering, domain-specific taxonomies, and the embedding of regulatory logic directly into AI workflows. The objective is not to replace human expertise, but to codify it. Regulators themselves are exploring AI-driven document analysis and compliance automation, signalling that sector-specific applications are becoming structurally embedded in financial supervision. The competitive differentiator is no longer access to models, but the depth of business calibration.
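To make this concrete, the sketch below shows one way a domain-specific taxonomy and a narrowly scoped review instruction might be embedded into a prompt before a model is called. It is a simplified, hypothetical illustration rather than a description of any BearingPoint or client implementation: the taxonomy entries, the build_review_prompt helper, and the prompt wording are assumptions made purely for the example.

# Illustrative sketch only: hypothetical taxonomy and prompt template showing how
# the firm's own definitions and regulatory logic could be injected into an LLM request.

# Hypothetical finance-specific taxonomy: terms mapped to the definitions the
# organisation actually uses, so the model cannot substitute a looser meaning.
DOMAIN_TAXONOMY = {
    "UCITS": "An EU-regulated open-ended fund compliant with Directive 2009/65/EC.",
    "NAV": "Net asset value per share, as computed by the appointed administrator.",
    "KID": "Key Information Document required under the PRIIPs Regulation.",
}

def build_review_prompt(document_text: str) -> str:
    """Assemble a prompt that embeds the firm's definitions and review rules."""
    glossary = "\n".join(
        f"- {term}: {definition}" for term, definition in DOMAIN_TAXONOMY.items()
    )
    return (
        "You are assisting with a prospectus review in a regulated fund context.\n"
        "Use ONLY the definitions below; if a term is ambiguous, flag it rather than guess.\n\n"
        f"Definitions:\n{glossary}\n\n"
        "Task: list any statement in the document that conflicts with these definitions, "
        "citing the exact sentence. Answer 'NO FINDINGS' if nothing conflicts.\n\n"
        f"Document:\n{document_text}"
    )

if __name__ == "__main__":
    print(build_review_prompt(
        "The fund publishes its NAV weekly, calculated by the portfolio manager."
    ))

The design choice the sketch illustrates is that the definitions come from the business glossary and the model is instructed to flag ambiguity rather than improvise, which is where semantic imprecision is kept under control.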
BearingPoint’s experience with an Asset Management client firm illustrates this principle. In a validated proof of concept, an agentic solution was designed to automate start-up risk scoring across approximately 50 Key Risk Indicators. Crucially, the AI did not alter the firm’s proprietary risk methodology. Instead, it was calibrated to execute the existing scoring logic faithfully, through structured prompt chains aligned with the client’s definitions, thresholds, and qualitative assessment criteria. By embedding domain-specific rules and contextual interpretation directly into the workflow, the system was able to scale the process without changing its meaning. Backtesting against historical human assessments confirmed alignment of results. The lesson is clear: verticalisation is not about creating a different model. It is about ensuring that AI understands the business the way the business understands itself and can execute it at scale without distorting its intent.
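The idea of executing an existing scoring methodology faithfully, and then backtesting it against human judgement, can also be shown with a deliberately simplified sketch. The KriDefinition class, the concentration thresholds, and the historical assessments below are hypothetical placeholders standing in for the structured prompt chains described above; the point is that the scoring bands and the benchmark of past human assessments are supplied by the business, not inferred by the AI.

# Illustrative sketch only: a hypothetical Key Risk Indicator (KRI) scored by applying
# pre-defined thresholds unchanged, then backtested against historical human scores.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class KriDefinition:
    name: str
    thresholds: List[Tuple[float, int]]  # (upper bound, score) pairs, ordered ascending

    def score(self, observed_value: float) -> int:
        """Apply the existing scoring logic as defined: first band the value falls in wins."""
        for upper_bound, band_score in self.thresholds:
            if observed_value <= upper_bound:
                return band_score
        return self.thresholds[-1][1]

# Hypothetical KRI: counterparty concentration, scored 1 (low risk) to 4 (high risk).
concentration_kri = KriDefinition(
    name="counterparty_concentration_pct",
    thresholds=[(10.0, 1), (20.0, 2), (35.0, 3), (100.0, 4)],
)

def backtest(kri: KriDefinition, history: List[Tuple[float, int]]) -> float:
    """Share of historical cases where the codified logic matches the human assessment."""
    matches = sum(1 for value, human_score in history if kri.score(value) == human_score)
    return matches / len(history)

if __name__ == "__main__":
    # Hypothetical historical assessments: (observed value, score assigned by an analyst).
    historical_assessments = [(8.0, 1), (12.5, 2), (33.0, 3), (41.0, 4), (19.9, 2)]
    print(f"Alignment with human scoring: {backtest(concentration_kri, historical_assessments):.0%}")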
What would be key guiding principles to move forward?
The organisations that succeed in scaling AI are not necessarily those with the most advanced models, but those with the clearest strategic discipline. Scaling requires a defined AI vision, prioritised high-impact use cases, and measurable business outcomes aligned with core operations. In regulated environments such as Luxembourg, governance is not an afterthought; it is the foundation. Model risk management, audit trails, data lineage, access controls, and accountability frameworks must be designed from the outset. The Commission de Surveillance du Secteur Financier has explicitly emphasised the need for robust oversight and risk controls when deploying AI within financial institutions. Without this structure, even technically successful pilots risk remaining isolated experiments.
Crucially, scaling also demands organisational transformation. Processes often need to be redesigned rather than superficially automated. Injecting AI into legacy workflows rarely delivers structural gains. Instead, firms must rethink how tasks are sequenced, how controls are embedded, and where human judgement adds value.
Strong top-management sponsorship is essential to move beyond bottom-up experimentation and to anchor AI initiatives within long-term operating models. The next competitive divide in financial services will not separate adopters from non-adopters of AI. It will distinguish those who industrialise with governance, sector expertise, and transformation discipline from those who continue to experiment cautiously at the margins.