Generative AI is being billed as a must-have enterprise tool for its ability to produce finished assets in seconds, from creative work like logos and moodboards to comprehensive financial reports. But the path from prototype to production remains uncertain for most companies. Security fears, compliance hurdles, and cost blowouts have left many large organizations circling in proof-of-concept purgatory.
This tension is particularly pressing in the financial sector, where a misstep can result in costly regulatory penalties and broken consumer trust.
At Truist Financial, Pavan Emani, Senior Vice President and Technical Leader of Generative AI engineering, has been walking this fine line. His approach is less about flashy demos and more about building the invisible foundations (like disciplined data pipelines, multilayered guardrails, and constant attention to cost efficiency) that let generative AI applications move safely into production.
Setting Data As The Foundation Of Trust
Pavan Emani has been leading the charge on generative AI at Truist Financial, the eighth-largest bank in the United States. There, he’s been tasked with designing the framework that lets the technology accelerate internal workflows while keeping costs in check. The potential payoff is significant: reports show that financial firms that successfully deploy generative AI see operating costs drop by 45% relative to revenue and insight delivery speed up by 74%.
To accomplish this, he first treats data engineering as a critical step. Large language models (the technology behind what’s commonly referred to as AI) are only as reliable as the information they’re fed, and flawed inputs (whether mislabeled, poorly structured, or lacking the right evaluation framework) inevitably produce flawed insights. More concerning, these systems are also vulnerable to “hallucinations,” responses that sound authoritative at face value but lack any factual grounding.
As Pavan himself puts it, “If the data isn’t right, the model can give convincing but completely wrong answers.”
That reality shapes his approach at Truist. He works to ensure every application is built on validated, auditable foundations, with safeguards applied upstream in the internal data pipeline. By embedding rigorous checks before information ever reaches a model, Pavan ensures teams avoid hallucinations or incorrect information while reassuring compliance officers that they’re working with accurate and traceable outputs.
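The article doesn’t detail Truist’s actual pipeline, but the idea of validating data upstream, before it ever reaches a model, can be sketched in a few lines. The record schema, label taxonomy, and check logic below are all hypothetical illustrations of that pattern:

```python
from dataclasses import dataclass

@dataclass
class Record:
    source_id: str   # provenance tag, so every input stays auditable
    label: str
    text: str

# Hypothetical label taxonomy for illustration only
ALLOWED_LABELS = {"statement", "disclosure", "memo"}

def validate(record: Record) -> list[str]:
    """Return a list of problems; an empty list means the record may proceed."""
    issues = []
    if not record.source_id:
        issues.append("missing provenance: record cannot be audited")
    if record.label not in ALLOWED_LABELS:
        issues.append(f"unrecognized label: {record.label!r}")
    if not record.text.strip():
        issues.append("empty text: nothing for the model to ground on")
    return issues

def gate(records: list[Record]) -> list[Record]:
    """Only records that pass every check flow downstream to the model."""
    return [r for r in records if not validate(r)]
```

The point of gating at this stage is that a model never sees an unlabeled or untraceable record, so compliance reviewers can trace any output back to validated inputs.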
Establishing Proper Guardrails Against Risk
If, to Pavan, clean datasets are the floor for reliable AI, guardrails define the ceiling for safe use. In financial services, incorporating AI means feeding sensitive or legally protected details like tax documents, account numbers, and identifiers such as names and addresses into generative systems. Left unchecked, those systems could retain or “remember” sensitive data, risking the exposure of private information to third-party platforms.
And while vendors such as Amazon, Google, and Microsoft provide only baseline security features, Pavan insists enterprises need additional internal controls to prevent potential leaks. “Without enough guardrails, it will be a nightmare,” he warns.
That’s why Pavan has made containment a priority. His team layers protections ranging from access controls to automated scanners that flag and block personally identifiable information. Data is masked or anonymized so that systems retain analytical value without exposing individual records, and every request is scanned before it’s passed to a model to ensure nothing sensitive slips through. Together, these measures create a system where internal engineers can expand their capabilities without compromising security.
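A minimal sketch of the scan-and-mask step described above might look like the following. The regex patterns are simplified stand-ins, not Truist’s actual rules; a production scanner would rely on a vetted PII-detection library rather than a handful of expressions:

```python
import re

# Simplified illustrative patterns; real detection is far more thorough
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "account": re.compile(r"\b\d{10,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def mask(text: str) -> str:
    """Replace detected PII with typed placeholders before any model call."""
    for kind, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{kind.upper()}]", text)
    return text

def guarded_request(prompt: str, send):
    """Scan and mask every prompt so nothing sensitive reaches the model.

    `send` stands in for whatever model endpoint the enterprise uses.
    """
    return send(mask(prompt))
```

Masking with typed placeholders (rather than deleting the text outright) preserves analytical value: the model still sees that an account number was mentioned, just not which one.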
Aligning Technology With Real Needs
Finally, Pavan avoids the trap of believing AI can solve every conceivable enterprise problem. With hype running high, executives may assume generative models are a one-size-fits-all solution that can automate any task with no need for intervention or fine-tuning. Pavan counters that some workflows are still better served by traditional software or human judgment.
At Truist, each proposed use case is reviewed to confirm whether AI is the right fit. “First, see if AI is a good fit for your problem — not all problems need AI,” he said. This discipline protects teams from burning resources on projects destined to stall while ensuring that when companies opt to use AI, the return on investment is clear and measurable.
This goes beyond functionality. Cost observability is another major factor, since every model output carries an expense. Without careful monitoring, those costs can quickly spiral, especially when repetitive or unnecessary information is pushed into models. That’s why governance and traceability are treated as seriously as functionality from the first stage of design: he incorporates techniques like caching repeated queries to reduce redundancy, plus observability tooling to track performance and flag where intervention is needed.
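The caching-plus-cost-tracking idea can be sketched as a thin wrapper around a model call. The `send` callable and the flat per-call cost are assumptions for illustration; real pricing is usually per token:

```python
import hashlib

class CachedClient:
    """Wraps a model call with a response cache and simple cost accounting."""

    def __init__(self, send, cost_per_call: float = 0.01):
        self.send = send                    # the underlying model endpoint
        self.cost_per_call = cost_per_call  # illustrative flat rate
        self.cache: dict[str, str] = {}
        self.total_cost = 0.0
        self.calls = 0

    def ask(self, prompt: str) -> str:
        key = hashlib.sha256(prompt.encode()).hexdigest()
        if key in self.cache:   # repeated query: answer without new spend
            return self.cache[key]
        self.calls += 1
        self.total_cost += self.cost_per_call
        response = self.send(prompt)
        self.cache[key] = response
        return response
```

Because `total_cost` and `calls` are tracked on every request, an observability dashboard can watch the cache hit rate and flag workloads where redundant prompts are driving spend.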
Pavan’s Vision Of AI
As he continues working with this technology, Pavan believes the next wave of enterprise value will focus increasingly on specialized AI agents. These go beyond answering questions and can perform tasks automatically: routing service requests, updating databases, or coordinating across multiple systems in real-time.
This, if well-built and assessed, could be a game-changer. Customer-service cycles could shrink from hours to minutes as agents handle the groundwork before handing cases to actual humans. Booking travel, managing financial accounts, or solving IT issues are all scenarios that agents could step into and make complex workflows nearly seamless.
Yet Pavan is clear-eyed about the potential consequences that could come with poorly implementing this technology. Agents can misfire or even contradict one another, setting off fragile chains of automation. Observability will again be critical: knowing when an agent is underperforming and replacing it quickly to preserve business flow. For a bank bound by compliance obligations, the road to agent deployment will take careful navigation.
But Pavan Emani sees the horizon clearly, and he believes the groundwork he’s helping lay today, built on data discipline, guardrails, and a clear-eyed view of where AI belongs, will make adoption smoother as the technology matures.
Spencer Hulse is the Editorial Director at Grit Daily. He is responsible for overseeing other editors and writers, day-to-day operations, and covering breaking news.