AI Compliance Nightmares: 3 Red Flags FINRA & SEC Are Watching

The adoption of AI in financial services and wealth management in the United States has increased significantly over the last few years. A 2024 Gartner survey found that nearly 58% of finance functions across the industry now use AI. From client onboarding and economic forecasting to decision support and internal process automation, AI tools are taking the lead, promising greater efficiency and accuracy.

However, here’s the kicker: According to a recent study by Fenergo, regulators have issued over $4.3 billion in fines to financial firms, with North America accounting for a staggering 95% of the total. With regulatory fines rising and AI models prone to hallucination and bias, the million-dollar question is: Who takes the fall when things go sideways? Is it the financial firm that deployed the AI model? The tech platform? Or the AI itself?

That’s why regulators, including FINRA and the SEC, are raising red flags and making it very clear to financial firms that accountability lies squarely with them when using AI. 

In this article, we break down the three main red flags being watched by regulators as AI weaves itself deeper into financial functions. 

AI Under Regulatory Scrutiny: Three Red Flags Financial Firms Can’t Ignore

Here are the three key AI compliance red flags that are on the radar of FINRA (Financial Industry Regulatory Authority) and the SEC (U.S. Securities and Exchange Commission).

  1. AI Hallucinations

Let’s be honest – AI can sound confident even while giving bad advice. Imagine a junior wealth advisor who meets a client armed with an AI-prepared investment portfolio summary. The report looks convincing and refined, but the AI model has recommended a non-existent “ESG fund” with a 12% annual return projection, based on outdated data. Impressed by the projected return, the client wants to invest in it. The advisor has no idea that the AI cited a product that does not even exist – and that this is a potential violation of FINRA’s rules.

AI models can hallucinate, and financial firms cannot excuse violations with “the AI said so” or “it was a tech error.” For this reason, FINRA Notice 24-09 clearly states that AI-generated investment recommendations, including those from generative artificial intelligence and large language models, are subject to the same suitability standards as those from a human advisor.

How are top firms addressing this red flag?

  • Implementing human oversight – keeping a human advisor in the loop to review every AI-generated recommendation before it reaches clients. This ensures AI is used as a tool, not a decision-maker, and instills confidence in the compliance process (a minimal sketch of such a review gate follows this list).
  • Investing in mature AI models that are compliance-ready.
  • Using AI for foundational insights while relying on human judgment for complex products.
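
To make the human-in-the-loop idea concrete, here is a minimal Python sketch of a review gate: an automated pre-screen blocks products that are not in the firm’s product master (catching a hallucinated fund like the one above), and nothing is released to the client without an explicit advisor sign-off. The names used here (`Recommendation`, `KNOWN_PRODUCTS`, `review_and_release`) and the sample tickers are illustrative assumptions, not any specific platform’s API.

```python
# Illustrative human-in-the-loop review gate. All names and the product list
# are hypothetical; a real firm would plug in its own product master and CRM.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

# Hypothetical product master the firm already maintains. A hallucinated fund
# that is not in this list should never reach a client.
KNOWN_PRODUCTS = {"VTI", "AGG", "SCHD"}

@dataclass
class Recommendation:
    client_id: str
    product: str
    rationale: str
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    approved_by: Optional[str] = None  # stays None until a human signs off

def pre_screen(rec: Recommendation) -> list:
    """Automated checks that run before a human ever sees the recommendation."""
    issues = []
    if rec.product not in KNOWN_PRODUCTS:
        issues.append(f"Unknown product '{rec.product}' (possible hallucination)")
    if not rec.rationale.strip():
        issues.append("Missing rationale for the suitability review")
    return issues

def review_and_release(rec: Recommendation, advisor_id: str, approved: bool) -> bool:
    """Only an explicit human approval releases the recommendation to the client."""
    if pre_screen(rec):
        return False              # blocked before human review even starts
    if not approved:
        return False              # the advisor rejected it
    rec.approved_by = advisor_id  # record who signed off, for the audit trail
    return True

# Usage: the hallucinated fund from the example above is blocked automatically.
bad = Recommendation("client-42", "GreenFuture ESG Fund", "Projected 12% annual return")
print(pre_screen(bad))  # ["Unknown product 'GreenFuture ESG Fund' (possible hallucination)"]
```

The design point is simply that approval is a deliberate human action recorded on the record, not a default that the model can satisfy on its own.
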
  2. Model Bias

AI model bias can lead to a hidden conflict of interest, as the AI may make investment recommendations that are not in the investor’s best interest. For instance, an AI tool could be biased toward recommending high-risk investments to younger clients, assuming they have a higher risk tolerance, when in fact this may not be the case.

Here is an example to consider. Suppose an AI tool on a robo-advisory platform was trained to optimize investor portfolios for retention and engagement. Though it sounded smart, portfolio performance wasn’t superior. That’s when the compliance team noticed a trend: younger investors were being steered disproportionately toward high-cost in-house ETFs.

The AI wasn’t malicious – it had been trained to optimize revenue and client retention. Along the way, however, it quietly created a hidden conflict of interest, which regulators treat as an even bigger red flag.

This is why the SEC’s Proposed Rule 211(h)(2)-3 targets predictive analytics that nudge investors based on their behavioral data. With concerns over AI model bias and conflicts of interest, the SEC has taken a proactive stance to address the regulatory issues that may arise from using AI to predict, guide, and optimize investors’ portfolios.

How are top firms addressing this red flag?

  • Many firms appoint internal compliance watchdogs to test AI models for bias and conflicts of interest before they catch the eye of regulators.
  • Firms also utilize AI model monitoring software, such as TruEra, Fiddler AI, WhyLabs, and Arize, to flag potential model bias and monitor AI decisions (a minimal cohort-level check is sketched below).
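
As a rough illustration of what such a bias check might look like, here is a minimal Python sketch that compares how often high-cost in-house products are recommended to younger versus older clients, in the spirit of the robo-advisory example above. The record fields, the age cutoff, and the 10-percentage-point threshold are all assumptions made for illustration, not part of any vendor’s tooling.

```python
# Illustrative cohort-level bias check: does the model steer younger clients
# toward in-house products more often than older clients? Field names, the age
# cutoff, and the alert threshold are assumptions for illustration only.
from collections import defaultdict

recommendations = [
    {"client_age": 27, "product": "INHOUSE_GROWTH", "in_house": True},
    {"client_age": 31, "product": "INHOUSE_TECH",   "in_house": True},
    {"client_age": 58, "product": "VTI",            "in_house": False},
    {"client_age": 64, "product": "AGG",            "in_house": False},
]

def in_house_rate_by_cohort(recs, age_cutoff=40):
    """Share of recommendations that are in-house products, per age cohort."""
    counts = defaultdict(lambda: {"in_house": 0, "total": 0})
    for r in recs:
        cohort = "young" if r["client_age"] < age_cutoff else "older"
        counts[cohort]["total"] += 1
        counts[cohort]["in_house"] += int(r["in_house"])
    return {c: v["in_house"] / v["total"] for c, v in counts.items()}

rates = in_house_rate_by_cohort(recommendations)
gap = abs(rates.get("young", 0) - rates.get("older", 0))
if gap > 0.10:  # illustrative threshold; a real program would set this with compliance
    print(f"Potential conflict of interest: in-house recommendation gap of {gap:.0%}")
```
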
  3. Missing Audit Trails

In the traditional advisory model, every decision must be documented and justified. This documentation, known as an ‘audit trail’, is a compliance requirement to maintain a clear record of how decisions, such as portfolio rebalances and recommendations, were made. However, producing such a trail is one of the biggest compliance nightmares for machine-learning models used in wealth management.

One well-known case is that of Wealthfront Advisors LLC, charged by the SEC for misleading clients about its tax-loss harvesting strategy. The company promised that the strategy would monitor client accounts for transactions that might trigger a wash sale, but failed to do so. The regulator also observed that Wealthfront lacked an adequate compliance program and failed to maintain relevant documentation.

How are top firms addressing this red flag?

  • Many financial firms are adopting blockchain-style, decentralized, immutable ledgers that record every AI decision and interaction, addressing growing concerns about accountability and transparency (a minimal hash-chained sketch follows this list).
  • Firms are also integrating version-controlled models, so every AI-driven decision and recommendation can be traced back to the exact model version that produced it and flagged in real time.
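
For the “blockchain-style” ledger idea above, here is a minimal Python sketch of an append-only, hash-chained audit log for AI decisions. It is an illustration of the concept under assumed field names (`model_version`, `reviewer`, etc.), not a production ledger: each entry embeds the hash of the previous one, so tampering with an earlier record breaks verification.

```python
# Illustrative append-only, hash-chained audit log for AI decisions.
# Class and field names are assumptions; this is a concept sketch, not a product.
import hashlib
import json
from datetime import datetime, timezone

class AuditTrail:
    def __init__(self):
        self._entries = []

    def record(self, event: dict) -> dict:
        """Append an AI decision; each entry is chained to the previous one by hash."""
        prev_hash = self._entries[-1]["hash"] if self._entries else "0" * 64
        body = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "event": event,  # e.g. model version, inputs, recommendation, reviewer
            "prev_hash": prev_hash,
        }
        body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        self._entries.append(body)
        return body

    def verify(self) -> bool:
        """Recompute the chain; tampering with any earlier entry breaks verification."""
        prev = "0" * 64
        for e in self._entries:
            expected = dict(e)
            stored_hash = expected.pop("hash")
            if expected["prev_hash"] != prev:
                return False
            if hashlib.sha256(json.dumps(expected, sort_keys=True).encode()).hexdigest() != stored_hash:
                return False
            prev = stored_hash
        return True

# Usage: record a rebalancing decision and confirm the chain is intact.
trail = AuditTrail()
trail.record({"model_version": "rebalancer-v2.3", "client_id": "client-42",
              "decision": "rebalance", "reviewer": "advisor-007"})
print(trail.verify())  # True while the log is untampered
```

Dedicated ledger or model-monitoring products cover the same ground at scale; the point is simply that every AI decision leaves a record that cannot be silently rewritten.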

Conclusion

AI is a double-edged sword in financial functions. Over 57% of financial firms now lean on AI and machine learning for compliance. But here’s the twist: in 2024 alone, 41% of SEC enforcement actions involved AI-related issues. That’s not a tech hiccup; it’s a full-blown regulatory red flag, highlighting the AI compliance risks firms need to be aware of.

AI models can boost business efficiency. However, a single missing audit trail or one piece of misguided advice can ruin a financial firm’s reputation and trip it up in ways that reach the C-suite.

So the real trick? Firms can’t just integrate AI into their compliance function and hope for the best. AI can be brilliant, but it needs guardrails. Firms need to incorporate AI smartly into their existing compliance framework, monitor it relentlessly, and ensure that someone (a human) is always overseeing the machine.
