
AI and the potential for fraud during the post-closing auditing process
We at Waquis decided to explore using AI for our post-closing audit process. Given the hype around AI, with promises of tremendous cost savings and operational efficiencies, our hopes were high.
We asked ChatGPT to propose scenarios for integrating AI into the post-closing process. Our team of developers would then assess the proposed solutions and provide feedback and realistic expectations for implementation.
We wanted a solution that could handle between 1,000 and 5,000 loans each month.
Because of the high costs and our lack of specialized programmers, building a custom AI engine was out of the question, so ChatGPT recommended stitching together a mix of turnkey software products (a rough sketch of how these might be wired together follows the list):
1) Loan File Ingestion: Google Document AI or Microsoft Formstack.
2) Data Extraction: MongoDB.
3) Data Validation: Python integration with an existing LOS such as Encompass.
4) Custom Compliance Checks: OpenAI and SQL.
5) Exception Detection: Python.
6) Reporting: Looker or Tableau.
7) Continuous Learning: AWS SageMaker and MLflow.
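To make the integration burden concrete, here is a minimal sketch of how those pieces might be chained together in Python. It is illustrative only: the Document AI calls follow Google's published google-cloud-documentai client and the database write uses pymongo, but the processor path, the field extraction, the LOS lookup, and the exception rules are all hypothetical placeholders, not a real Encompass or OpenAI integration.

# Sketch of the recommended stack chained together. Assumptions are marked;
# only the Document AI and MongoDB calls follow real published Python APIs.
from google.cloud import documentai
from pymongo import MongoClient

# Hypothetical processor path; a real one requires a GCP project and processor.
PROCESSOR = "projects/YOUR_PROJECT/locations/us/processors/YOUR_PROCESSOR"

def ingest(pdf_path: str) -> str:
    """Step 1: loan file ingestion via Google Document AI; returns raw text."""
    client = documentai.DocumentProcessorServiceClient()
    with open(pdf_path, "rb") as f:
        raw = documentai.RawDocument(content=f.read(), mime_type="application/pdf")
    result = client.process_document(
        request=documentai.ProcessRequest(name=PROCESSOR, raw_document=raw)
    )
    return result.document.text

def extract_fields(text: str) -> dict:
    """Step 2: naive field extraction; a production system needs trained parsers."""
    fields = {"borrower": None, "note_rate": None}
    for line in text.splitlines():
        for key, label in (("borrower", "Borrower"), ("note_rate", "Note Rate")):
            if label in line and ":" in line:
                fields[key] = line.split(":", 1)[1].strip()
    return fields

def validate_against_los(fields: dict) -> list:
    """Step 3: compare extracted fields to the LOS of record. This stub stands
    in for a real Encompass integration, which has its own API and auth."""
    los_record = {"borrower": "JANE DOE", "note_rate": "7.125%"}  # stand-in data
    return [k for k, v in fields.items() if v != los_record.get(k)]

def audit_one(pdf_path: str) -> None:
    """Steps 4-5 (compliance rules, exception detection) would extend this."""
    fields = extract_fields(ingest(pdf_path))
    exceptions = validate_against_los(fields)
    # Steps 6-7: persist results so reporting and retraining can consume them.
    MongoClient()["postclose"]["audits"].insert_one(
        {"file": pdf_path, "fields": fields, "exceptions": exceptions}
    )
    print(f"{pdf_path}: {len(exceptions)} exception(s): {exceptions}")

if __name__ == "__main__":
    audit_one("sample_loan_file.pdf")

Even this toy version makes the seams visible: every hand-off between steps is a custom integration that someone has to build, secure, and maintain.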
Immediately we grew skeptical, because we have used some of these platforms and their performance is erratic even on their own. Worryingly, they also don't play well with others. Integrating all of them into a cohesive system would be an engineering nightmare. Even if we achieved integration, the end solution would likely still require significant human oversight at every phase of the loan cycle (defeating the purpose).
Building this type of system would require an up-front investment of $5M to $10M, with a delivery schedule of 18 to 36 months, assuming we could even find engineers and programmers with the proper knowledge of each product and the ability to build custom AI code (no easy feat, as AI engineers are in high demand at the major technology companies). In all likelihood, those numbers could double or even triple.
Then we started researching AI behavior, and that's when we grew truly skeptical of using AI for post-closing (or many other mortgage processes).
For instance, AI can be programmed with, or will eventually learn, cultural bias toward certain groups of people. An extensive study at Chapman University found that AI will discriminate and show bias, whether programmed or learned (the study is available here). Imagine the consequences if your AI began discriminating against certain racial, ethnic, religious, or political groups when approving or auditing a mortgage loan. The results would be disastrous.
Next, we found that AI will act on its own if threatened. Palisade Research, an AI safety firm, has found that AI models will lie, cheat, and disable shutdown mechanisms to achieve their goals. Most shockingly, the firm found that AI models will prevent themselves from being shut down despite explicit instructions to allow it. Some even went so far as to blackmail programmers. Imagine a rogue AI engine with access to highly sensitive information from thousands of borrowers. The results would be catastrophic. That study is available here.
Due to the complexity of the mortgage process, AI would have to constantly learn new regulations, laws, and case samples, and that's not something it does especially well. For instance, McDonald's and Taco Bell have either pulled back or totally scrapped AI drive-through ordering because the AI continually confused orders and performed poorly (and that's for a menu with 20 items). What happens in a mortgage industry with thousands of constantly changing rules, regulations, and scenarios? Chaos ensues. That story can be found here.
When we discussed using AI with our processors and auditors, they explained that every step of the loan process is vital to understanding the loan. For instance, organizing a loan file might take 15 or 20 minutes, but it gives the auditor the opportunity to understand the file and its documents. Phasing out any of these steps would create gaps that would likely result in poor audits.
Our conclusion is that AI might be good at streamlining a few stand-alone tasks, but having it take over the entire post-closing audit process, or even a significant portion of it, isn't feasible given the start-up costs, the complexity of mortgage loans, the risk of cultural bias, the potential for misuse of borrower information, and the inability to trust AI to do exactly what it is instructed. The mortgage and banking industries aren't like many other industries, where errors carry little consequence; here, the ramifications of misuse or mistakes are catastrophic.
Finally, we asked ChatGPT itself about the risks of using AI for mortgage post-closing, and its answer was very similar to the conclusions we reached on our own. Its response can be read below:
The Case Against Using Artificial Intelligence for Mortgage Post-Closing Auditing (generated by ChatGPT).
Introduction
Artificial Intelligence (AI) has transformed industries through automation, data analytics, and process optimization. In the mortgage industry, AI has been introduced in several phases of the loan lifecycle—from underwriting to servicing—with the goal of improving efficiency and reducing human error. However, the application of AI in post-closing auditing, a critical compliance and quality control step, raises significant ethical, operational, and legal concerns. This paper argues that using AI for mortgage post-closing auditing is a problematic idea due to the inherent risks of data inaccuracy, regulatory noncompliance, lack of transparency, and the loss of human judgment essential to maintaining loan integrity and consumer protection.
1. The Complexity and Sensitivity of Post-Closing Auditing
Post-closing audits are not merely administrative checks; they serve as a final safeguard ensuring compliance with federal, state, and investor requirements. Auditors must review loan files to confirm that all documentation aligns with underwriting guidelines, closing disclosures, and secondary market standards.
AI systems, while efficient in pattern recognition, often lack the contextual understanding necessary to interpret nuanced documentation. Mortgage documents can vary widely in structure, terminology, and content depending on lender, jurisdiction, and borrower circumstances. AI models trained on limited datasets may fail to recognize anomalies, misinterpret exceptions, or overlook subtle forms of noncompliance. The result can be false assurances of accuracy, undermining both quality control and legal compliance.
2. Data Integrity and Model Bias
AI depends on large volumes of historical data to make predictions and classifications. However, mortgage data is inherently messy—often containing errors, missing fields, or inconsistencies across systems. Training an AI model on such data risks embedding historical inaccuracies and systemic biases into its operations.
For example, if past audit data contains uncorrected errors or reflects biased interpretations of borrower risk, an AI model could perpetuate those biases in future audits. Moreover, AI cannot independently verify the authenticity of underlying data sources, meaning that incorrect or fraudulent inputs could pass undetected.
In an industry that relies on strict documentation accuracy, such vulnerabilities pose serious threats to data integrity and regulatory trustworthiness.
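A toy illustration of that mechanism, using entirely synthetic data (no real audit model or borrower attributes are implied): a classifier fit to historical audit labels that encode a past reviewer's bias will learn a nonzero weight on the biased attribute and apply it to every future file it scores.

# Synthetic demonstration: biased historical labels produce a biased model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
# Column 0: a legitimate audit signal (say, a documentation-completeness score).
# Column 1: an attribute that should be irrelevant to the audit outcome.
X = rng.normal(size=(n, 2))
# Historical pass/fail labels: driven mostly by the legitimate signal, but
# past reviewers also penalized the irrelevant attribute (the embedded bias).
y = ((X[:, 0] - 0.8 * X[:, 1] + rng.normal(scale=0.3, size=n)) > 0).astype(int)

model = LogisticRegression().fit(X, y)
print("learned weights:", model.coef_[0])
# The large negative weight on column 1 shows the model has absorbed the
# reviewers' bias and will reproduce it on every file it scores from now on.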
3. Regulatory and Legal Risks
Mortgage auditing is deeply tied to regulatory frameworks enforced by agencies such as the Consumer Financial Protection Bureau (CFPB), Fannie Mae, Freddie Mac, and state-level authorities. These bodies require strict adherence to documentation standards, borrower disclosures, and anti-fraud measures.
AI-driven auditing tools may lack the explainability regulators require. If an AI model flags—or fails to flag—a compliance issue, it may not be possible to trace its reasoning. This violates the principle of auditability that underpins compliance enforcement.
Additionally, errors caused by opaque algorithms can expose lenders to legal liability, including repurchase demands from investors, fines, or penalties for regulatory breaches. Without clear accountability, it becomes difficult to defend audit outcomes or demonstrate due diligence.
4. Lack of Human Judgment and Ethical Oversight
The mortgage post-closing process often involves subjective assessments. For example, determining whether a missing document constitutes a material defect requires human expertise and contextual understanding. AI lacks the ability to make value-based judgments or interpret intent, both of which are essential in evaluating borrower documentation and compliance anomalies.
Moreover, overreliance on AI could erode accountability within audit teams. Employees may defer to machine-generated results, assuming accuracy without adequate verification—a phenomenon known as automation bias. The result is a diminished culture of critical oversight, where errors propagate unnoticed.
5. Operational and Implementation Challenges
Implementing AI auditing systems requires massive investments in technology infrastructure, data integration, and ongoing model maintenance. Mortgage institutions typically rely on fragmented legacy systems and unstructured document formats, complicating data ingestion and model training.
Even after deployment, AI models demand continuous retraining to adapt to regulatory changes and evolving document standards. Without constant supervision, model drift can lead to declining accuracy over time. These factors increase operational costs and complexity—often negating the efficiency benefits that AI supposedly provides.
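As a sketch of what that supervision entails in practice (the window size, baseline, and tolerance below are hypothetical, and a real deployment would use dedicated monitoring tooling), drift detection reduces to continuously comparing the model's recent agreement with human auditors against its accuracy at deployment:

# Minimal drift check: track how often the model agrees with human auditors
# over a rolling window and flag decay. All thresholds here are hypothetical.
from collections import deque

class DriftMonitor:
    def __init__(self, window=500, baseline_accuracy=0.97, tolerance=0.03):
        self.results = deque(maxlen=window)     # rolling agreement flags
        self.baseline = baseline_accuracy       # accuracy measured at deployment
        self.tolerance = tolerance              # allowed decay before retraining

    def record(self, model_flagged: bool, auditor_flagged: bool) -> None:
        """Log whether the model and the human auditor agreed on one loan."""
        self.results.append(model_flagged == auditor_flagged)

    def needs_retraining(self) -> bool:
        """Fire once a full window shows accuracy below baseline - tolerance."""
        if len(self.results) < self.results.maxlen:
            return False                        # not enough recent evidence yet
        current = sum(self.results) / len(self.results)
        return current < self.baseline - self.tolerance

Every time a monitor like this fires, the model goes back for retraining against the new rules, which is exactly the recurring cost the efficiency argument tends to leave out.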
6. Consumer Trust and Reputational Risk
The mortgage process involves one of the most significant financial transactions in a person’s life. Borrowers expect that their loans will be handled with precision, fairness, and transparency. If consumers or regulators perceive that critical quality control steps are being outsourced to non-transparent algorithms, it could erode trust in lenders and the broader financial system.
Instances of AI errors—especially those leading to misclassified compliance outcomes—could trigger reputational damage, litigation, or loss of investor confidence.