MBA2024 Session Summary Part I

AI, AVMs, and Model Management from a Regulatory Perspective

This session started off on what I thought was the proverbial wrong foot: with disclaimers. Everyone on the panel was a lawyer (nothing wrong with that), but to introduce oneself with phrases such as “We’re not AI experts” and “The opinions expressed today are my personal opinions and do not represent those of my employer”, well . . . that wasn’t the way to grab the audience’s attention. But they recovered with interesting, on-topic themes on how regulators are embracing AI, and provided sound advice for lenders that are adopting, or looking to adopt, AI-based solutions.

What is the FHFA doing with AI?

The Federal Housing Finance Agency (FHFA) issued an Advisory Bulletin in 2022 to guide the use of Artificial Intelligence (AI) and Machine Learning (ML) in lending, especially for regulated entities, though it can serve as a framework for the broader mortgage industry. This bulletin encourages a flexible, risk-based approach to AI implementation, emphasizing principles of transparency, accountability, and fairness. It introduces a common taxonomy and recognizes the anticipated growth of AI in lending, offering support for compliance and risk management.

The FHFA also emphasized that “responsible AI” is crucial for the success of AI applications in this sector. I actually wrote about this in a blog post last year under the heading of Responsible AI as a Critical Success Factor.

What are the practical challenges lenders face when implementing an AI-based solution?

Lenders face a range of challenges in implementing AI solutions, starting with data quality and availability: poor data leads to inaccurate results, and a model is only as good as the data it was trained on. Lenders must also prioritize data privacy and security by establishing controls to keep the training data clean and secure.
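To make the data-quality point concrete, here is a minimal sketch of the kind of pre-training checks a lender might run. The column names (loan_amount, annual_income) are hypothetical, and a real program would go much further:

```python
import pandas as pd

def basic_quality_checks(df: pd.DataFrame) -> dict:
    """Minimal pre-training data-quality report for a loan dataset.
    Column names here are hypothetical placeholders."""
    return {
        "row_count": len(df),
        "duplicate_rows": int(df.duplicated().sum()),
        # Share of missing values per column, a first signal of gaps.
        "missing_by_column": df.isna().mean().round(3).to_dict(),
        # Negative amounts or incomes usually indicate entry errors.
        "negative_loan_amounts": int((df["loan_amount"] < 0).sum()),
        "negative_incomes": int((df["annual_income"] < 0).sum()),
    }
```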

Addressing algorithmic bias and regularly monitoring models and their outputs are critical to demonstrating compliance with fairness laws. Additionally, lenders must consider how AI integrates with existing systems and manage the potential increase in costs and resource demands; not everyone has in-house AI expertise.
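As one concrete example of output monitoring, here is a minimal sketch of an adverse impact ratio check across groups. The 0/1 “approved” column and the grouping column are hypothetical, and real fair-lending monitoring is considerably more involved:

```python
import pandas as pd

def approval_rate_by_group(df: pd.DataFrame, group_col: str) -> pd.Series:
    """Approval rate per group; 'approved' is a hypothetical 0/1 column."""
    return df.groupby(group_col)["approved"].mean()

def adverse_impact_ratio(rates: pd.Series) -> float:
    """Ratio of the lowest to the highest group approval rate.
    Values well below 0.8 (the classic four-fifths rule of thumb)
    flag the model's outputs for closer fair-lending review."""
    return rates.min() / rates.max()
```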

Finally, model transparency and explainability are essential for understanding how AI reaches its conclusions, enabling better regulatory compliance.
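One lightweight, model-agnostic way to approach explainability is permutation importance, which measures how much a model’s validation score degrades when each input is shuffled. A minimal sketch using scikit-learn, assuming a fitted model and a held-out validation set (X_val as a DataFrame, y_val as labels) already exist; techniques such as SHAP offer finer-grained explanations:

```python
from sklearn.inspection import permutation_importance

# model, X_val, y_val are assumed to exist: a fitted estimator and a
# held-out validation set (X_val is a DataFrame so columns are named).
result = permutation_importance(model, X_val, y_val, n_repeats=10, random_state=0)

# Print features from most to least influential on the model's score.
for name, score in sorted(zip(X_val.columns, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name}: {score:.4f}")
```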

What AI issues are you currently working on?

The FHFA is currently focusing on establishing its AI framework. The first step was to appoint a Chief AI Officer and align with the White House’s executive order to understand how to use AI for public benefit. FHFA is training its staff to understand AI’s potential, to know what to look for when reviewing vendors’ AI practices, and to establish a clear chain of accountability between vendors and clients; it’s crucial that the vendors lenders engage are in full compliance with the regulations. Finally, the agency is committed to promoting responsible innovation, ensuring AI applications align with regulatory requirements (see the prior point on responsible AI).

From a legal perspective, clients are grappling with deploying AI while remaining compliant with regulations published decades ago. Key challenges include determining necessary disclosures, identifying and mitigating biases, and ensuring model transparency and explainability. There is also concern about the implications of AI outputs; for example, combining data from multiple sources may meet the definition of a consumer report. What are the regulatory requirements around that? Additionally, there are questions around potential legal liability: if I get this wrong, what happens?

How can AI be used for good?

AI offers promising opportunities to improve inclusivity in lending. The FHFA sees AI as a tool to enhance credit decisions by incorporating alternative data, such as cash flow from bank statements, in order to expand the reach to underserved communities (for example, using rent payment history to augment a borrower’s credit report).
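As a rough illustration of what “alternative data” can look like in practice, here is a hypothetical sketch that derives simple cash-flow features from bank transactions. All column names and categories are made up for the example:

```python
import pandas as pd

def cash_flow_features(txns: pd.DataFrame) -> dict:
    """Derive simple cash-flow features from bank transactions.
    Expects hypothetical columns: 'date', 'amount', 'category'."""
    txns = txns.assign(month=pd.to_datetime(txns["date"]).dt.to_period("M"))
    monthly = txns.groupby("month")["amount"].sum()
    rent = txns[txns["category"] == "rent"]
    return {
        "avg_monthly_net_flow": float(monthly.mean()),
        "months_with_positive_flow": int((monthly > 0).sum()),
        # A consistent rent-payment history can supplement a thin credit file.
        "rent_payments_observed": int(len(rent)),
    }
```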

AI can also play a role in detecting biases and assisting borrowers with limited English proficiency through translation. In compliance, underwriting, and fraud detection functions, AI can handle routine tasks, with human oversight reserved for exceptions (as MOZAIQ is doing today with our end-to-end automation platform). It could also be used to identify potential non-compliance with the Americans with Disabilities Act (ADA) by analyzing images of the property. Finally, AI can improve the accuracy of automated valuation models, and support regulatory reporting and risk management, for example by helping to report Home Mortgage Disclosure Act (HMDA) data.
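On the AVM accuracy point, valuations are typically benchmarked against subsequent sale prices. Here is a minimal sketch of two common summaries, assuming arrays of predicted values and actual sale prices:

```python
import numpy as np

def avm_accuracy(predicted: np.ndarray, sale_price: np.ndarray) -> dict:
    """Common AVM accuracy summaries: median absolute error percentage
    and PPE10, the share of valuations within 10% of the sale price."""
    ratio_error = np.abs(predicted - sale_price) / sale_price
    return {
        "median_abs_error_pct": float(np.median(ratio_error) * 100),
        "ppe10": float(np.mean(ratio_error <= 0.10)),
    }
```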

The panelists:

  • Courtenay R. Dunn – Senior Director of Government Affairs, ICE Mortgage Technology (Moderator)
  • Tracy Stephan – Chief Artificial Intelligence Officer, Federal Housing Finance Agency (FHFA)
  • Sherry-Maria Safchuk – Partner, Orrick, Herrington & Sutcliffe, LLP
  • Kevin Stevens – Senior Manager, Consumer Financial Protection Bureau (CFPB)

Full disclosure: This blog post was created by running my copious notes through ChatGPT and then editing the output. Why not use AI to write about AI? It’s the future. And I’ve embraced it.
