How a Proposed Housing Act Change Could Affect Discriminatory AI
2019-09-30 - 4 minutes read

Since its introduction in 1968, the Fair Housing Act has shielded protected classes from discrimination. According to HUD’s website, the “Fair Housing Act protects people from discrimination when they are renting or buying a home, getting a mortgage, seeking housing assistance, or engaging in other housing-related activities.”
This protection may soon be put to the test, due to a proposed rule by the Department of Housing and Urban Development (HUD) that would change how the agency interprets the Fair Housing Act.

Photo showing the Department of Housing and Urban Development building. Photo by LunchboxLarry and licensed under CC BY 2.0.
According to an article in OneZero and documents published by Reveal, the proposed change provides a framework for combating bias claims, thereby protecting companies that use third-party algorithms to process housing and loan applications. But what exactly does this change entail?
The Existence of AI and ML Biases
To begin to answer this question, let’s provide some background on AI and ML algorithms and their potential for bias.
As we know, deep-learning-powered algorithms rely on existing data, which carry patterns of human bias. And as the author of the OneZero article, Dave Gershgorn, writes, “these deep neural networks will find hidden indicators of race, class, sex, age, or religion, and then begin to effectively discriminate along those lines.”
“The whole power of many deep learning systems is that they find correlations that are invisible to the naked eye,” says R. David Edelman, who was a senior technology and economic policy adviser to President Barack Obama and is now at MIT’s Internet Policy Research Initiative (IPRI).
“The reason it creates a potentially impossible evidentiary standard is that simply looking at the inputs is not enough… Instead, the only way to be confident is to actually interrogate the model itself: its training data, its operation, and ultimately its outputs,” Edelman continues.
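To make Edelman’s point about interrogating a model’s outputs concrete, below is a minimal sketch of one common check: comparing a model’s approval rates across groups and flagging large gaps (the “four-fifths rule” is a widely used rule of thumb). The group labels and decisions here are hypothetical, and a real audit would also examine the training data and the model’s internals.

```python
# Minimal sketch: auditing a model's *outputs* for disparate impact.
# The records below are hypothetical; in practice they would be the
# decisions an automated underwriting or screening model actually produced.

from collections import defaultdict

def approval_rates(decisions):
    """Compute the approval rate per group from (group, approved) pairs."""
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        if approved:
            approvals[group] += 1
    return {g: approvals[g] / totals[g] for g in totals}

def adverse_impact_ratio(rates, reference_group):
    """Ratio of each group's approval rate to the reference group's rate.
    A ratio below ~0.8 (the "four-fifths rule") is a common red flag."""
    ref = rates[reference_group]
    return {g: rate / ref for g, rate in rates.items()}

# Hypothetical model outputs: (group label, loan approved?)
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

rates = approval_rates(decisions)
print(rates)                                   # {'group_a': 0.75, 'group_b': 0.25}
print(adverse_impact_ratio(rates, "group_a"))  # group_b ratio is about 0.33, well below 0.8
```

Checks like this only look at outcomes; as Edelman notes, a confident conclusion also requires access to the training data and the model itself, which is exactly what the proposed rule would make harder to obtain.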
Enter the Proposed Fair Housing Act Change
This proposed change would alter how the Fair Housing Act is applied, making it harder to prove that automated programs resulted in disparate impact. To allege algorithmic bias, Gershgorn notes, plaintiffs “in a lawsuit would have to prove five separate elements to make a successful case against a company that uses algorithmic tools.” In addition, the proposal could help companies avoid liability for their algorithms, for example by relying on the testimony of a “qualified expert.”
Banks, insurers, and real estate companies could also benefit because, under this interpretation, they would not be liable for a third-party algorithm’s decisions, since they did not build the algorithm themselves.
Gershgorn states that the new interpretation could “encourage the use of biased algorithms in the housing industry, while protecting banks and real estate firms from lawsuits that might result.” The proposal has not yet been released for public comment, but if it goes into effect it would greatly affect the housing industry: it would shift the burden of proof onto those being discriminated against, while giving companies that use these algorithms possible defenses and reduced liability.
Additional Reading
Learn more by reading the full article:
And check out these related articles: