
How Financial Services Should Prepare for Generative AI


Gen AI/LLMs have both long-term benefits and risks. But what will that look like?

It’s no surprise that ever since ChatGPT brought large language models (LLMs) to the public in November 2022, organizations in nearly every sector of modern industry have raced to capitalize on the technology, propelled by widespread fascination. Financial services is no exception.

But what might this transformation look like, from practical applications to potential risks? In this blog post, we’ll walk through what the actual capabilities of LLMs are, and the steps financial organizations should take to harness this technology effectively and safely.

How do Gen AI/LLMs “democratize” access to insights?

Compared to traditional AI models, generative AI/LLMs offer a significant uplift in tackling the wealth of unstructured data found in loan agreements, claims documents, underwriting files and the like. LLMs stand out in the following capabilities:

  • Content synthesis: Generative AI models can process huge amounts of multimodal information (text, images, video, audio, etc.) and synthesize it quickly and with reasonable accuracy.
  • Information extraction: Text generators like ChatGPT are efficient information retrievers, generating responses to specific human input such as questions or requests. What distinguishes this kind of extraction from a simple computer output is the lexical fluidity of the human-computer interaction.
  • Content generation: Image generators like DALL-E, which have gained broad public traction and direct usage, show that AI models can imitate human painting, music, video and even speech. The ability to translate content from one language to another, whether a human or systems language, represents a huge benefit to all industries, including financial services.
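To make the information-extraction capability above concrete, the sketch below pulls key fields out of an unstructured loan-agreement excerpt. In production, this step would typically be an LLM prompt such as “Extract the principal, rate and maturity date as JSON”; here a rule-based parser stands in for the model so the example runs offline, and the sample text and field names are hypothetical.

```python
import re

# Hypothetical excerpt from an unstructured loan agreement.
LOAN_AGREEMENT = """
This Loan Agreement is entered into on March 3, 2024 between
Acme Lending LLC ("Lender") and Jane Doe ("Borrower").
The Lender agrees to advance a principal amount of $250,000.00
at a fixed annual interest rate of 6.75%, with a maturity
date of March 3, 2034.
"""

def extract_loan_fields(text: str) -> dict:
    """Pull structured fields out of free-form loan text.

    A rule-based stand-in for what an LLM extraction prompt
    would return as structured output.
    """
    principal = re.search(r"principal amount of \$([\d,]+\.\d{2})", text)
    rate = re.search(r"interest rate of ([\d.]+)%", text)
    maturity = re.search(r"maturity\s+date of ([A-Za-z]+ \d{1,2}, \d{4})", text)
    return {
        "principal_usd": float(principal.group(1).replace(",", "")) if principal else None,
        "interest_rate_pct": float(rate.group(1)) if rate else None,
        "maturity_date": maturity.group(1) if maturity else None,
    }

print(extract_loan_fields(LOAN_AGREEMENT))
# → {'principal_usd': 250000.0, 'interest_rate_pct': 6.75, 'maturity_date': 'March 3, 2034'}
```

The appeal of the LLM version is that it handles wording the rules above would miss; the trade-off, as discussed below, is that its output must be validated before it feeds downstream processes.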

This is all to say that generative AI and LLMs add significant value in day-to-day use. They greatly reduce complexity for both technical and non-technical audiences, whether that means automating certain financial processes or generating a layman-readable summary of technical documents. Text generators like ChatGPT (and other AI/ML systems) fulfill the public desire for this kind of accessibility and ease of use: even a model this technically complex can be used by anyone.

Democratization also carries risk, though. 

Potential pitfalls and why governance matters

Because financial services is a heavily regulated sector, most industry organizations already have a framework and structure for AI governance and model validation. In most cases, however, these frameworks will need to be reassessed and upgraded in light of the new risks amplified by generative AI (such as hallucinations and intellectual property exposure), as well as the evolving regulatory landscape. Regulatory bodies worldwide have issued guidelines on the use of AI/ML models, with the level of prescriptive guidance varying by region and country. For example, the Biden administration recently issued an executive order (Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence) that highlights the importance of fairness, testing, security and privacy in the development and use of AI and ML models.

So what are the benefits?

There are nevertheless enduring advantages to adopting generative AI and LLMs in financial services. The technology’s novelty will likely mature into prevalence as computing resources proliferate and the costs of adoption fall. As to how pervasive generative AI might become in the infrastructure of any given organization, we can only speculate.

It’s clear that, in the short term, curiosity prevails: most financial services firms are experimenting with the technology across use cases relevant to their organization. AI has already been deployed in applications for years, but generative AI expands that domain by automating routine information review, parsing and synthesis, especially over unstructured data. And, of course, querying data by simply prompting a chatbot with instructions or questions is already possible, which puts customer assistance from AI virtual assistants right around the corner. Investment firms also benefit from more proficient generative AI data queries that extract insights about macroeconomic conditions, regulatory or company filings, and more.
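The “query data by prompting” pattern mentioned above usually works by having an LLM translate a natural-language question into SQL that runs against the firm’s data. The toy sketch below shows the shape of that pipeline; the translation step is a hard-coded lookup standing in for an LLM call so the example runs offline, and the table, columns and figures are all hypothetical.

```python
import sqlite3

# Hypothetical filings table with made-up revenue figures.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE filings (company TEXT, year INTEGER, revenue REAL)")
conn.executemany(
    "INSERT INTO filings VALUES (?, ?, ?)",
    [("Acme Corp", 2022, 1.2e9), ("Acme Corp", 2023, 1.5e9)],
)

def answer(question: str) -> list:
    """Map a natural-language question to SQL and run it.

    The dictionary below stands in for the LLM translation step.
    """
    translations = {
        "What was Acme Corp's revenue in 2023?":
            "SELECT revenue FROM filings WHERE company='Acme Corp' AND year=2023",
    }
    return conn.execute(translations[question]).fetchall()

print(answer("What was Acme Corp's revenue in 2023?"))
# → [(1500000000.0,)]
```

In a real deployment, the generated SQL would be validated and sandboxed before execution, which is exactly the kind of governance concern raised earlier.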

In the long term, generative AI will be highly cost effective, driving operational cost reductions through automation efficiencies. While caution and affordability concerns currently relegate AI to simple, almost “secondary” assistive roles, the consensus is that this short-term assistance will give way to deeper, long-term automation, with AI embedded in the very core of every business process.

The transition, though, is not as simple as it sounds.

How do we transition to generative AI?

For some, the constant discourse around the AI paradigm shift seems to be more than the usual noise. I think it is clear that AI, and LLMs in particular, are here to stay; if anything, organizations will make them an integral part of every business process. But accompanying the question of how we transition is the more pressing question of whether we are even ready for that transition, especially given how nascent generative AI is and how unprepared many data strategies are for this shift.

An AI model is only as good as the data you put into it. Investing in a strong data infrastructure addresses the preliminary problems of siloed and fragmented data. Implementing LLMs on top of a structure already plagued by incompatible data architectures, or without the talent onboarding needed to support that implementation, will only create more problems. And a lack of robust governance frameworks for privacy compliance and potential AI hallucinations will only inflate ethical and security concerns. The smoothness of the transition therefore depends on maturity or, as AI researcher Ashok Goel describes it, “muscle-building.”

Generative AI’s “disruption” is neither as loud nor as dramatic as many believe. It is not a sudden rupture of the ecosystem or a scramble to transform whole organizations overnight. Generative AI and LLMs will shift business processes in financial services, but slowly, and only if organizations first optimize their data strategies and infrastructures.

To learn more about LLMs in financial services and how Snowflake can help, check out the full interview on DCN’s channel.