Tell HUD: Algorithms Shouldn’t Be an Excuse to Discriminate

Update 10/18: EFF has submitted its comments to HUD, which you can read here.

The U.S. Department of Housing and Urban Development (HUD) recently released a proposed rule that will have grave consequences for the enforcement of fair housing laws. Under the Fair Housing Act, individuals can bring claims on the basis of a protected characteristic (like race, sex, or disability status) when a facially neutral policy or practice results in an unjustified discriminatory effect, or disparate impact. The proposed rule makes it much harder to bring a disparate impact claim under the Fair Housing Act. Moreover, HUD’s rule creates three affirmative defenses for housing providers, banks, and insurance companies that use algorithmic models to make housing decisions. As we’ve previously explained, these algorithmic defenses demonstrate that HUD doesn’t understand how machine learning actually works.

This proposed rule could significantly impact housing decisions and make discrimination more prevalent. We encourage you to submit comments to speak out against HUD’s proposed rule. Here’s how to do it in three easy steps:

  1. Go to the government’s comments site and click on “Comment Now.”
  2. Start with the draft language below regarding EFF’s concerns with HUD’s proposed rule. We encourage you to tailor the comments to reflect your specific concerns. Adapting the language increases the chances that HUD will count your comment as a “unique” submission, which is important because HUD is required to read and respond to unique comments.
  3. Hit “Submit Comment” and feel good about doing your part to protect the civil rights of vulnerable communities and to educate the government about how technology actually works!

Comments are due by Friday, October 18, 2019 at 11:59 PM ET.

To Whom It May Concern:

I write to oppose HUD’s proposed rule, which would change the disparate impact standard for the agency’s enforcement of the Fair Housing Act. The proposed rule would set up a burden-shifting framework that would make it nearly impossible for a plaintiff to allege a claim of unjustified discriminatory effect. Moreover, the proposed rule offers a safe harbor for defendants who rely on algorithmic models to make housing decisions. HUD’s approach is unscientific and reflects a fundamental misunderstanding of how machine learning actually works.

HUD’s proposed rule offers three complete algorithmic defenses if: (1) the inputs used in the algorithmic model are not themselves “substitutes or close proxies” for protected characteristics and the model is predictive of risk or other valid objective; (2) a third party creates or manages the algorithmic model; or (3) a neutral third party examines the model and determines the model’s inputs are not close proxies for protected characteristics and the model is predictive of risk or other valid objective.

In the first and third defenses, HUD assumes that as long as a model’s individual inputs are not discriminatory, the overall model cannot be discriminatory. However, the whole point of sophisticated machine-learning algorithms is that they can learn how combinations of different inputs predict something that no individual variable predicts on its own. These combinations of variables can be close proxies for protected classes, even when the original input variables are not. Apart from combinations of inputs, other factors, such as biased or unrepresentative training data, can also give a model a discriminatory effect, which HUD does not account for in its proposed rule.
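The point about combinations of inputs can be made concrete with a minimal sketch. The data below is entirely hypothetical and is not drawn from any real housing model: each of two facially neutral binary inputs is, on its own, statistically uncorrelated with a protected attribute, yet any model free to combine the two inputs can recover that attribute exactly. Input-by-input proxy screening of the kind HUD proposes would clear this model.

```python
# Hypothetical illustration: inputs that individually pass a "close
# proxy" test can, in combination, fully determine a protected class.
import random

random.seed(0)

def correlation(xs, ys):
    """Pearson correlation between two equal-length numeric lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Two facially neutral binary inputs per applicant (made-up data).
a = [random.randint(0, 1) for _ in range(10_000)]
b = [random.randint(0, 1) for _ in range(10_000)]

# Construct a protected attribute z that equals a XOR b: neither input
# alone carries any information about z, but together they decide it.
z = [x ^ y for x, y in zip(a, b)]

print(round(correlation(a, z), 3))  # near 0: input a alone is no proxy
print(round(correlation(b, z), 3))  # near 0: input b alone is no proxy

# A model that combines the inputs reconstructs z on every applicant.
predictions = [x ^ y for x, y in zip(a, b)]
accuracy = sum(p == t for p, t in zip(predictions, z)) / len(z)
print(accuracy)
```

Because each input passes an individual correlation check while the pair jointly determines the protected attribute, screening inputs one at a time, as the first and third defenses contemplate, cannot certify that a model is free of discriminatory effect.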

The second defense will shield housing providers, mortgage lenders, and insurance companies that rely on a third party’s algorithmic model, which will be the case for most defendants. This defense removes any incentive for defendants to avoid models that produce discriminatory effects, or to pressure model makers to ensure their algorithmic models avoid discriminatory outcomes. Moreover, it is unclear whether a plaintiff could actually get relief by going after a model maker, a distant and possibly unknown third party, rather than a direct defendant like a housing provider. Accordingly, this defense could allow discriminatory effects to continue without recourse. Even if a plaintiff can sue a third-party creator, trade secrets law could prevent the public from finding out about the discriminatory impact of the algorithmic model.

HUD claims that its proposed affirmative defenses are not meant to create a “special exemption for parties using algorithmic models” and thereby insulate them from disparate impact lawsuits. But that is exactly what the proposed rule will do. Today, a defendant’s use of an algorithmic model in a disparate impact case is considered on a case-by-case basis, with careful attention paid to the particular facts at issue. That is exactly how it should work.

I respectfully urge HUD to rescind its proposed rule and continue to use its current disparate impact standard.

Author: Saira Hussain
