AI is no longer a theoretical discussion within finance organizations. It is becoming embedded in daily workflows, shaping how teams approach reporting, forecasting, and audit readiness. Yet as adoption accelerates, finance leaders are confronting a more exacting question: how do you move from isolated use cases to disciplined, reliable execution that stands up under scrutiny?
During the Controllers Council and Savant Labs webinar, "Operationalizing AI in Finance: Driving Efficiency, Accuracy, and Audit Confidence Through Governance," panelists Joy Mbanugo, Renee Jewell, and Chitrang Shah offered a grounded perspective on what it takes to operationalize AI without compromising control, accuracy, or audit confidence.
AI Adoption Is Advancing, But Still Uneven
Most finance teams are no longer asking whether to use AI. They are determining how far and how fast to extend it.
As Joy Mbanugo explained, the shift has been practical rather than theoretical. “I use AI every day. It’s become [a] necessity for me just based on the size of my team and the needs of the company.”
At the same time, organizations remain at different stages of maturity. Some are experimenting with individual productivity tools, while others are building structured programs that introduce governance and oversight early in the process.
Renee Jewell described a coordinated approach that balances experimentation with control. “We have team level goals for AI. We also have built a governance framework to track use cases.”
That dual focus reflects a broader reality. AI is advancing quickly, but finance teams must move deliberately when outputs influence financial reporting.
Governance Is the Difference Between Pilots and Production
A consistent theme throughout the discussion was the distinction between using AI informally and deploying it within core financial processes.
Chitrang Shah framed the issue clearly. “The number one reason… why operationalization is hard is the trust and defensibility around AI.”
That challenge becomes more pronounced when AI moves beyond research or drafting tasks and begins producing outputs tied to reconciliations, forecasts, or financial reporting. At that point, finance leaders must ensure that results are traceable, reviewable, and consistent.
Jewell highlighted how this plays out in practice within a public company environment. “We need to be thoughtful about where we’re using AI in SOX controls. It’s going to create a different structure pattern around levels of testing.”
The implication is straightforward. AI can support finance, but it cannot bypass established control frameworks. Instead, it must be integrated into them.
Practical Use Cases Are Delivering Measurable Results
While governance remains a central concern, the panelists shared several examples where AI is already delivering tangible value.
Mbanugo pointed to audit preparation as a meaningful area of impact. By using AI to support documentation and forecasting, her team reduced delays and improved readiness. “We were able to file within the allotted timeline… using LLMs… saved us a week or two.”
Jewell described a case where AI accelerated access to operational insights. A process that previously required coordination with a business intelligence team was completed directly by a finance team member. “What used to be a couple weeks of work, now took a couple of days.”
Across both examples, the pattern is consistent. AI reduces time spent on manual assembly and allows teams to focus on interpretation and decision-making.
Trust, Skepticism, and Security Remain Essential
Despite clear benefits, the panel emphasized that finance leaders must approach AI with informed skepticism.
Jewell summarized this mindset succinctly. “Anything that AI is doing requires that a human reviews it before it should be acted upon.”
Mbanugo reinforced the point, noting that AI outputs are not inherently reliable. “They are prone to hallucinations. You must have a healthy skepticism.”
Security also surfaced as a concern, particularly as organizations experiment with new tools and agents. The combination of sensitive financial data and rapidly evolving technology requires careful oversight and clear usage policies.
In practice, this means finance teams must adopt a “trust but verify” posture, applying the same rigor they would use in any audit or review process.
Start Small, Then Build Toward Scale
When asked about practical advice, the panel returned to a simple but effective principle. Begin with manageable use cases and expand deliberately.
Jewell recommended focusing on areas with low risk and clear returns. “Pick something low risk to automate, but a high return, those are usually your quick wins.”
Mbanugo offered a complementary perspective, encouraging teams to begin using the tools available to them today. “You won’t have any success if you don’t try it.”
Over time, these incremental efforts create both familiarity and confidence, which are necessary before expanding AI into more sensitive or complex workflows.
The Path Forward for Finance Leaders
Operationalizing AI in finance is not a single initiative. It is an ongoing discipline that blends technology adoption with governance, skepticism, and continuous refinement.
The organizations making the most progress are not those moving the fastest, but those building the right foundations. They are aligning AI with existing control environments, encouraging hands-on experimentation, and maintaining accountability for every output produced.
As finance teams continue this transition, the objective remains clear. Use AI to enhance efficiency and insight, while preserving the integrity and confidence that financial reporting demands.
About the Sponsor
Savant Labs is an AI automation platform for finance and tax teams, designed to modernize manual, data-intensive workflows with greater speed, accuracy, and control. Learn more at www.SavantLabs.io.
