Tell HUD: Algorithms Shouldn’t Be an Excuse to Discriminate

Update 10/18: EFF has submitted its comments to HUD, which you can read here.

The U.S. Department of Housing and Urban Development (HUD) recently released a proposed rule that will have grave consequences for the enforcement of fair housing laws. Under the Fair Housing Act, individuals can bring claims on the basis of a protected characteristic (like race, sex, or disability status) when there is a facially-neutral policy or practice that results in unjustified discriminatory effect, or disparate impact. The proposed rule makes it much harder to bring a disparate impact claim under the Fair Housing Act. Moreover, HUD’s rule creates three affirmative defenses for housing providers, banks, and insurance companies that use algorithmic models to make housing decisions. As we’ve previously explained, these algorithmic defenses demonstrate that HUD doesn’t understand how machine learning actually works.

This proposed rule could significantly impact housing decisions and make discrimination more prevalent. We encourage you to submit comments to speak out against HUD’s proposed rule. Here’s how to do it in three easy steps:

  1. Go to the government’s comments site and click on “Comment Now.”
  2. Start with the draft language below regarding EFF’s concerns with HUD’s proposed rule. We encourage you to tailor the comments to reflect your specific concerns. Adapting the language increases the chances that HUD will count your comment as a “unique” submission, which is important because HUD is required to read and respond to unique comments.
  3. Hit “Submit Comment” and feel good about doing your part to protect the civil rights of vulnerable communities and to educate the government about how technology actually works!

Comments are due by Friday, October 18, 2019 at 11:59 PM ET.

To Whom It May Concern:

I write to oppose HUD’s proposed rule, which would change the disparate impact standard for the agency’s enforcement of the Fair Housing Act. The proposed rule would set up a burden-shifting framework that would make it nearly impossible for a plaintiff to allege a claim of unjustified discriminatory effect. Moreover, the proposed rule offers a safe harbor for defendants who rely on algorithmic models to make housing decisions. HUD’s approach is unscientific and fails to understand how machine learning actually works.

HUD’s proposed rule offers three complete algorithmic defenses if: (1) the inputs used in the algorithmic model are not themselves “substitutes or close proxies” for protected characteristics and the model is predictive of risk or other valid objective; (2) a third party creates or manages the algorithmic model; or (3) a neutral third party examines the model and determines the model’s inputs are not close proxies for protected characteristics and the model is predictive of risk or other valid objective.

In the first and third defenses, HUD indicates that as long as a model’s inputs are not discriminatory, the overall model cannot be discriminatory. However, the whole point of sophisticated machine-learning algorithms is that they can learn how combinations of different inputs might predict something that any individual variable might not predict on its own. These combinations of different variables could be close proxies for protected classes, even if the original input variables are not. Apart from combinations of inputs, other factors, such as how an AI has been trained, can also lead to a model having a discriminatory effect, which HUD does not account for in its proposed rule. 
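To see why input-by-input screening misses the problem, consider a deliberately simplified sketch (hypothetical synthetic data, not any real underwriting model): each of two inputs is statistically independent of the protected class on its own, yet their combination identifies it perfectly.

```python
import random

random.seed(0)

# Entirely hypothetical data: two inputs (think of two individually innocuous
# application answers) that are each independent of the protected class.
people = []
for _ in range(100_000):
    a = random.random() < 0.5
    b = random.random() < 0.5
    protected = (a == b)  # only the *combination* tracks class membership
    people.append((a, b, protected))

def accuracy(predict):
    """Fraction of people for whom a predictor matches protected status."""
    return sum(predict(a, b) == p for a, b, p in people) / len(people)

# Each input alone tells you nothing (roughly chance):
print(accuracy(lambda a, b: a))       # ~0.5
print(accuracy(lambda a, b: b))       # ~0.5
# But a model that learns the interaction is a perfect proxy:
print(accuracy(lambda a, b: a == b))  # 1.0
```

An auditor who checks only the individual inputs, as the first and third defenses contemplate, would find nothing objectionable here.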

The second defense will shield housing providers, mortgage lenders, and insurance companies that rely on a third party’s algorithmic model, which will be the case for most defendants. This defense gets rid of any incentive for defendants not to use models that result in discriminatory effect or to pressure model makers to ensure their algorithmic models avoid discriminatory outcomes. Moreover, it is unclear whether a plaintiff could actually get relief by going after a model maker, a distant and possibly unknown third party, rather than a direct defendant like a housing provider. Accordingly, this defense could allow discriminatory effects to continue without recourse. Even if a plaintiff can sue a third-party creator, trade secrets law could prevent the public from finding out about the discriminatory impact of the algorithmic model.

HUD claims that its proposed affirmative defenses are not meant to create a “special exemption for parties using algorithmic models” and thereby insulate them from disparate impact lawsuits. But that is exactly what the proposed rule will do. Today, a defendant’s use of an algorithmic model in a disparate impact case is considered on a case-by-case basis, with careful attention paid to the particular facts at issue. That is exactly how it should work.

I respectfully urge HUD to rescind its proposed rule and continue to use its current disparate impact standard.

Author: Saira Hussain


Massachusetts: Tell Your Lawmakers to Press Pause on Government Face Surveillance


Face surveillance by government poses a threat to our privacy, chills protest in public places, and amplifies historical biases in our criminal justice system. Massachusetts has the opportunity to become the first state to stop government use of this troubling technology, from Provincetown to Pittsfield.

Massachusetts residents: tell your legislature to press pause on government use of face surveillance throughout the Commonwealth. Massachusetts bills S.1385 and H.1538 would place a moratorium on government use of the technology, and your lawmakers need to hear from you ahead of an Oct. 22 hearing on these bills.



Concern over government face surveillance in our communities is widespread. Polling from the ACLU of Massachusetts has found that 79 percent of residents, more than three-quarters, support a statewide moratorium.

The city council of Somerville, Massachusetts voted unanimously in July to ban government face surveillance altogether, becoming the first community on the East Coast to do so. The town of Brookline, Massachusetts is currently considering a ban of its own. In California, the cities of San Francisco, Oakland, and, just this week, Berkeley have passed bans as well.

EFF has advocated for governments to stop use of face surveillance in our communities immediately, particularly in light of what researchers at MIT’s Media Lab and others have found about its high error rates—particularly for women and people of color.

Even if it were possible to lessen these misidentification risks, however, government use of face recognition technology still poses grave threats to safety and privacy. Regardless of our race or gender, law enforcement use of face recognition technology poses a profound threat to personal privacy, political and religious expression, and the fundamental freedom to go about our lives without having our movements and associations covertly documented and analyzed.

Tell your lawmakers to support these bills and make sure that the people of Massachusetts have the opportunity to evaluate the consequences of using this technology before this type of mass surveillance becomes the norm in your communities.

Author: Hayley Tsukayama

EFF Urges Congress Not to Dismantle Section 230

The Keys to a Healthy Internet Are User Empowerment and Competition, Not Censorship

The House Energy and Commerce Committee held a legislative hearing today over what to do with one of the most important Internet laws, Section 230. Members of Congress and the testifying panelists discussed many of the critical issues facing online activity, like how Internet companies moderate their users’ speech, how Internet companies and law enforcement agencies address online criminal activity, and how the law affects competition.

EFF Legal Director Corynne McSherry testified at the hearing, offering a strong defense of the law that’s helped create the Internet we all rely on today. In her opening statement, McSherry urged Congress not to take Section 230’s role in building the modern Internet lightly:

We all want an Internet where we are free to meet, create, organize, share, debate, and learn. We want to have control over our online experience and to feel empowered by the tools we use. We want our elections free from manipulation and for women and marginalized communities to be able to speak openly about their experiences.

Chipping away at the legal foundations of the Internet in order to pressure platforms to play the role of Internet police is not the way to accomplish those goals. 



Recognizing the gravity of the challenges presented, Ranking Member Cathy McMorris Rodgers (R-WA) aptly stated: “I want to be very clear: I’m not for gutting Section 230. It’s essential for consumers and entities in the Internet ecosystem. Misguided and hasty attempts to amend or even repeal Section 230 for bias or other reasons could have unintended consequences for free speech and the ability for small businesses to provide new and innovative services.” 

We agree. Any change to Section 230 risks upsetting the balance Congress struck decades ago that created the Internet as it exists today. It protects users and Internet companies big and small, and leaves open the door to future innovation. As Congress continues to debate Section 230, here are some suggestions and concerns we have for lawmakers willing to grapple with the complexities and get this right.

Facing Illegal Activity Online: Focus on the Perpetrators

Much of the hearing focused on illegal speech and activity online. Representatives and panelists mentioned examples like illegal drug sales, wildlife sales, and fraud. But there’s an important distinction to make between holding Internet intermediaries, such as social media companies and classified ads sites, liable for what their users say or do online, and holding users themselves accountable for their behavior.

Section 230 has always had a federal criminal law carve out. This means that truly culpable online platforms can already be prosecuted in federal court, alongside their users, for illegal speech and activity. For example, a federal judge in the Silk Road case correctly ruled that Section 230 did not provide immunity against federal prosecution to the operator of a website that hosted other people’s ads for illegal drugs.

But EFF does not believe prosecuting Internet intermediaries is the best answer to the problems we find online. Rather, both federal and state government entities should allocate sufficient resources to target the direct perpetrators of illegal online behavior; that is, the users themselves who take advantage of open platforms to violate the law. Section 230 does not provide an impediment to going after these bad actors. McSherry pointed this out in her written testimony: “In the infamous Grindr case… the abuser was arrested two years ago under criminal charges of stalking, criminal impersonation, making a false police report, and disobeying a court order.”

Weakening Section 230 protections in order to expand the liability of online platforms for what their users say or do would incentivize companies to over-censor user speech in an effort to limit the companies’ legal exposure. Not only would this be harmful for legitimate user speech, it would also detract from law enforcement efforts to target the direct perpetrators of illegal behavior. As McSherry noted regarding the Allow States and Victims to Fight Online Sex Trafficking Act (FOSTA):

At this committee’s hearing on November 30, 2017, Tennessee Bureau of Investigation special agent Russ Winkler explained that online platforms were the most important tool in his arsenal for catching sex traffickers. One year later, there is anecdotal evidence that FOSTA has made it harder for law enforcement to find traffickers. Indeed, several law enforcement agencies report that without these platforms, their work finding and arresting traffickers has hit a wall.

Speech Moderation: User Choice and Empowerment

In her testimony, McSherry stressed that the Internet is a better place for online community when numerous platforms are available with a multitude of moderation philosophies. Section 230 has contributed to this environment by giving platforms the freedom to moderate speech the way they see fit.

The freedom that Section 230 afforded to Internet startups to choose their own moderation strategies has led to a multiplicity of options for users—some more restrictive and sanitized, some more laissez-faire. That mix of moderation philosophies contributes to a healthy environment for free expression and association online.

Reddit’s Steve Huffman echoed McSherry’s defense of Section 230 (PDF), noting that its protections have enabled the company to improve on its moderation practices over the years. He explained that the company’s speech moderation philosophy is one that prioritizes users making decisions about how they’d like to govern themselves:

The way Reddit handles content moderation today is unique in the industry. We use a governance model akin to our own democracy—where everyone follows a set of rules, has the ability to vote and self-organize, and ultimately shares some responsibility for how the platform works.

In an environment where platforms have their own approaches to content moderation, users have the ultimate power to decide which ones to use. McSherry noted in her testimony that while Grindr was not held liable for the actions of one user, that doesn’t mean that Grindr didn’t suffer. Grindr lost users, as they moved to other dating platforms. One reason why it’s essential that Congress protect Section 230 is to preserve the multitude of platform options.


Later in the hearing, Rep. Darren Soto (D-FL) asked each of the panelists who should be “the cop on the beat” in patrolling online speech. McSherry reiterated that users themselves should be empowered to decide what material they see online: “A cardinal principle for us at EFF is that at the end of the day, users should be able to control their Internet experience, and we need to have many more tools to make that possible.”

If some critics of Section 230 get their way, users won’t have that power. Prof. Danielle Citron offered a proposal (PDF) that Congress implement a “duty of care” regimen, where platforms would be required to show that they’re meeting a legal “reasonableness” standard in their moderation practices in order to keep their Section 230 protection. She proposes that courts look at what platforms are doing generally to moderate content and whether their policies are reasonable, rather than what a company did with respect to a particular piece of user content.

But inviting courts to determine what moderation practices are best would effectively do away with Section 230’s protections, disempowering users in the process. In McSherry’s words, “As a litigator, [a reasonableness standard] is terrifying. That means a lot of litigation risk, as courts try to figure out what counts as reasonable.”

Robots Won’t Fix It

There was plenty of agreement that current moderation is flawed, but much disagreement about why. Subject-matter experts on the panel frequently described areas of moderation outside their own purview as working perfectly fine, and questioned why those techniques could not be applied to other areas.


In one disorienting moment, Gretchen Peters of the Alliance to Counter Crime Online asked the congressional committee when they’d last seen a “dick pic” on Facebook, and took their silence as an indication that Facebook had solved the dick pic problem. She then suggested Facebook could move on to scanning for other criminality. Professor Hany Farid, an expert in at-scale, resilient hashing of child exploitative imagery, wondered why the tech companies could not create digital fingerprinting solutions for opioid sales.

Many cited Big Tech’s work to automatically remove what they believe to be copyright-infringing material as a potential model for other areas—perhaps unaware that the continuing failure of copyright bots is one of the few areas where EFF and the entertainment industry agree (though we think the bots take down too much entirely lawful material, and Hollywood thinks they’re not draconian enough).

The truth is that the deeper you look at current moderation—and listen carefully to those directly silenced by algorithmic solutions—the more you understand that robots won’t fix it. Robots are still terrible at understanding context, which has resulted in everything from Tumblr flagging pictures of bowls of fruit as “adult content” to YouTube removing possible evidence of war crimes because it categorized the videos as “terrorist content.” Representative Lisa Blunt Rochester (D-DE) pointed out the consequences of having algorithms police speech: “Groups already facing prejudice and discrimination will be further marginalized and censored.” A lot of the demand for Big Tech to do more moderation is predicated on the idea that they’re good at it, with their magical tech tools. As our own testimony and long experience point out, they’re really not—with bots or without.

Could they do better? Perhaps, but as Reddit’s Huffman noted, doing so means that the tech companies need to be able to innovate without having those attempts result in a hail of lawsuits. That is, he said, “exactly the sort of ability that 230 gives us.”

Reforming 230 with Big Tech as the Focus Would Harm Small Internet Companies

Critics of 230 often fail to acknowledge that many of the solutions they seek are not within reach of startups and smaller companies. Techniques like preemptive blocking of content, persistent policing of user posts, and mechanisms that analyze speech in real time to see what needs to be censored are extremely expensive.

That means that controlling what users do, at scale, will only be doable by Big Tech. Not only is it cost-prohibitive, it also carries a high cost of liability if they get it wrong. For example, Google’s ContentID is often held up in the copyright context as a means of enforcement, but it required a $100 million investment by Google to develop and deploy—and it still does a bad job.

Google’s Katherine Oyama testified that Google already employs around 10,000 people who work on content moderation—a bar that no startup could meet—but even that appears insufficient to some critics. By comparison, a website like Wikipedia, which is the largest repository of information in human history, employs just about 350 staff for its entire operation and is heavily reliant on volunteers.

A set of rules that would require a Google-sized company to expend even more resources means that only the most well-funded firms could maintain global platforms. A minimally-staffed nonprofit like Wikipedia could not continue to operate as it does today. The Internet would become more concentrated, and further removed from the promise of a network that empowers everyone.

As Congress continues to examine the problems facing the Internet today, we hope lawmakers remember the role that Section 230 plays in defending the Internet’s status as a place for free speech and community online. We fear that undermining Section 230 would harden today’s largest tech companies from future competition. Most importantly, we hope lawmakers listen to the voices of the people they risk pushing offline.

Read McSherry’s full written testimony.

Author: Sophia Cope

Victory! Berkeley City Council Unanimously Votes to Ban Face Recognition

Berkeley has become the third city in California and the fourth city in the United States to ban the use of face recognition technology by the government. After an outpouring of support from the community, the Berkeley City Council voted unanimously to adopt the ordinance introduced by Councilmember Kate Harrison earlier this year.

Berkeley joins other Bay Area cities, including San Francisco and Oakland, which also banned government use of face recognition. In July 2019, Somerville, Massachusetts became the first city on the East Coast to ban the government’s use of face recognition.

The passage of the ordinance also follows the signing of A.B. 1215, a California state law that places a three-year moratorium on police use of face recognition on body-worn cameras, beginning on January 1, 2020. As EFF’s Associate Director of Community Organizing Nathan Sheard told the California Assembly, using face recognition technology “in connection with police body cameras would force Californians to decide between actively avoiding interaction and cooperation with law enforcement, or having their images collected, analyzed, and stored as perpetual candidates for suspicion.”

Over the last several years, EFF has continually voiced concerns over the First and Fourth Amendment implications of government use of face surveillance. These concerns are exacerbated by research conducted by MIT’s Media Lab regarding the technology’s high error rates for women and people of color. However, even if manufacturers are successful in addressing the technology’s substantially higher error rates for already marginalized communities, government use of face recognition technology will still threaten safety and privacy, chill free speech, and amplify historical and ongoing discrimination in our criminal justice system.

Berkeley’s ban on face recognition is an important step toward curtailing the government’s use of biometric surveillance. Congratulations to the community that stood up in opposition to this invasive and flawed technology and to the city council members who listened.

Author: Matthew Guariglia

Why Fiber is Vastly Superior to Cable and 5G

The United States, its states, and its local governments are in dire need of universal fiber plans. Major telecom carriers such as AT&T and Verizon have discontinued their fiber-to-the-home efforts, leaving most people facing expensive cable monopolies for the future. While much of the Internet infrastructure has already transitioned to fiber, a supermajority of households and businesses across the country still have slow and outdated connections. Transitioning the “last mile” into fiber will require a massive effort from industry and government—an effort the rest of the world has already started.

Unfortunately, arguments by the U.S. telecommunications industry that 5G or existing DOCSIS cable infrastructure is more than up to the task of substituting for fiber have confused lawmakers, reporters, and regulators into believing we do not have a problem. In response, EFF has recently completed extensive research into the currently available options for last-mile broadband, laying out what the objective technical facts demonstrate. By every measurement, fiber connections to homes and businesses are, by far, the superior choice for the 21st century. It is not even close.

The Speed Chasm Between Fiber and Other Options

As a baseline, there is a divide between “wireline” internet (like cable and fiber) and “wireless” internet (like 5G). Cable systems can already deliver better service to most homes and businesses than 5G wireless deployments because wireline service can carry signals farther with less interference than radio waves in the air. We’ve written about the difference between wireless and wireline internet technologies in the past. While 5G is a major improvement over previous generations of wireless broadband, cable internet will remain the better option for the vast majority of households in terms of both reliability and raw speed.

Gigabit and faster wireless networks have to rely on high-frequency spectrum in order to have sufficient bandwidth to deliver those speeds. But the faster the speed, and the higher the frequency, the more environmental factors such as weather or physical obstructions interfere with the transmission. Gigabit 5G uses “millimeter wave” frequencies, which can’t travel through doors or walls. In essence, the real-world environment adds so much friction to wireless transmission at high speeds that any contention that it can replace wireline fiber or cable—which contend with few of those barriers thanks to their insulated wires—is suspect.
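The frequency penalty can be made concrete with the standard free-space path loss formula, which grows with frequency even under ideal conditions, before walls, rain, or foliage add further attenuation. (The sketch below uses illustrative frequencies, not figures for any particular carrier.)

```python
import math

def free_space_path_loss_db(distance_m, frequency_hz):
    """Standard free-space path loss in dB (ideal conditions, no obstacles)."""
    c = 299_792_458  # speed of light, m/s
    return (20 * math.log10(distance_m)
            + 20 * math.log10(frequency_hz)
            + 20 * math.log10(4 * math.pi / c))

# Compare a low-band frequency with a millimeter-wave 5G frequency over the
# same 100-meter path (illustrative numbers, not a specific network):
low_band = free_space_path_loss_db(100, 600e6)  # 600 MHz
mm_wave = free_space_path_loss_db(100, 28e9)    # 28 GHz

print(round(mm_wave - low_band, 1))  # 33.4 dB more loss at the higher frequency
```

And real deployments fare far worse than this ideal-conditions gap suggests, since millimeter-wave signals cannot penetrate doors or walls at all.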

Meanwhile, fiber systems have at least a 10,000-fold (yes, ten thousand) advantage over cable systems in terms of raw bandwidth. This translates into a massive advantage in data capacity, and it’s why scientists have been able to squeeze more than 100 terabits per second (100,000 Gb/s) down a single fiber. The most advanced cable technology has achieved max speeds of around 10 Gb/s in a lab. Cable has not, and will not, come close to fiber. As we explain in our whitepaper, fiber also has significantly lower latency, fewer problems with dropped packets, and will be easier to upgrade in the future.
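The 10,000-fold figure follows directly from converting the two headline numbers to a common unit:

```python
# Figures cited above, converted to Gb/s:
fiber_record_gbps = 100_000  # >100 terabits/s demonstrated down a single fiber
cable_lab_gbps = 10          # ~10 Gb/s from the most advanced cable tech, in a lab

print(fiber_record_gbps // cable_lab_gbps)  # 10000
```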

Incumbents Favor the Status Quo Because It’s Expensive for You and Profitable for Them

The American story of broadband deployment is a tragic one, where your income level determines whether you have competition and affordable access. In the absence of national coverage policies, low-income Americans and rural Americans have been left behind. This stands to get worse absent a fundamental commitment to fiber for everyone. Our current situation and outlook for the future did not happen in a vacuum—policy decisions made more than a decade ago, at the advent of fiber deployment in the United States, have proven to be complete failures when it comes to universal access. EFF’s review of the history of those decisions in the early 2000s has shown that none of the rationales have been justified by what followed.

But it doesn’t have to be like this. There is absolutely no good reason we have to accept the current situation as the future. A fundamental refocus on competition, universality, and affordability by local, state, and the federal government is essential to get our house back in order. Policymakers doing anything short of that are effectively concluding that having slower, more expensive cable as your only choice for the gigabit future is an acceptable outcome. 

Author: Bennett Cyphers

¿Quién Defiende Tus Datos?: Four Years Setting The Bar for Privacy Protections in Latin America and Spain

Four years have passed since our partners first published Who Defends Your Data (¿Quién Defiende Tus Datos?), a report that holds ISPs accountable for their privacy policies and processes in eight Latin American countries and Spain. Since then, we’ve seen major technology companies provide more transparency about how and when they divulge their users’ data to the government. This shift has been fueled in large part by public attention in local media. The project started in 2015 in Colombia, Mexico, and Peru, joined by Brazil in 2016, Chile and Paraguay in 2017, Argentina and Spain in 2018, and Panama this year.

When we started in 2015, none of the ISPs in the three countries surveyed had published transparency reports or any aggregate data about the number of data requests they received from governments. By 2019, the larger global companies with a regional presence in the nine countries surveyed are now doing this. This is a big victory for transparency, accountability, and users’ rights.

Telefónica (Movistar/Vivo), a global company with a local presence in Spain and in 15 countries in Latin America, has been leading the way in the region, closely followed by Millicom (Tigo) with offices in seven countries in South and Central America. Far behind is Claro (America Movil) with offices in 16 countries in the region. Surprisingly, in one country, Chile, the small ISP WOM! has also stood out for its excellent transparency reporting.

Telefónica publishes transparency reports in each of the countries we surveyed, while Millicom (Tigo) publishes transparency reports with data aggregated per region. In South America, Millicom (Tigo) publishes aggregate data for Bolivia, Colombia, and Paraguay. In 2018, Millicom (Tigo) also published a comprehensive transparency report for Colombia alone. While Claro (America Movil) operates in 16 countries in the region, it has published a transparency report in only one of the countries we surveyed: Chile. Chilean ISPs such as WOM!, VTR, and Entel have also published their own transparency reports. In Brazil, however, Telefónica (Vivo) is the only company that has published one.

All of the reports still have plenty of room for improvement. The level of information disclosed varies significantly company-by-company, and even country-by-country. Telefónica usually discloses a separate aggregate number for each type of government request—such as wiretapping, metadata, service suspension, and content blocking and filtering—in its transparency reports. But for Argentina, Telefónica only provides a single aggregate figure that covers every kind of request. And in Brazil, for example, Telefónica has not published the number of government requests it accepts or rejects, although it has published that information in other countries.

Companies have also adopted other voluntary standards in the region, like publishing their law enforcement guidelines for government data demands. For example, Telefónica provides an overview of the company’s global procedure for dealing with government data requests. And four companies that operate in Chile—including the small ISP WOM! and Entel, the largest national telecom company—publish more precise guidelines adapted to that country’s legal framework.

A Breakdown by Country

Colombia and Paraguay 

In 2015, the ¿Quién Defiende Tus Datos? project showed that keeping the pressure on companies—and having an open dialogue with them—pays off. In Colombia, Fundación Karisma’s 2015 report investigated five local ISPs and found that none published transparency reports on government blocking requests or data demands. By 2018, five of seven companies had published annual transparency reports on data requests, with four providing information on government blocking requests.

Millicom’s Transparency Report stood out by clarifying the rules for government access to data in Colombia and Paraguay. Both countries have adopted draconian laws that compel Internet Service Providers to grant authorities direct access to their mobile networks. In Colombia, the law establishes hefty fines if ISPs monitor interception taking place in their systems. This is why tech companies claim they do not possess information about how often, and for what periods, communications interception is carried out on their mobile networks. In this scenario, transparency reports become irrelevant. Conversely, in Paraguay, ISPs can view the judicial order requesting the interception, and the telecom company is aware when interception occurs in its system, so it could potentially publish aggregate data about the number of such requests.

Brazil and Chile

InternetLab’s report shows progress in companies’ commitment to judicially challenge abusive law enforcement data requests and to fight back against legislation that harms users’ privacy. In 2016, four of six companies took this kind of action. For example, the mobile companies featured in the research are part of an association that challenged, before the Brazilian Supreme Court, a law that allows law enforcement agents to access users’ data without a warrant in cases of human trafficking (Law 13.344/2016). The case is still open. Claro has also judicially challenged a direct request by the police to access subscriber data. This number remained high in 2018, when five out of eight ISPs fought against unconstitutional laws, two of which also challenged disproportionate measures.

In contrast, ISPs in Chile have been hesitant to challenge illegal and excessive requests. Derechos Digitales’ 2019 report indicates that many ISPs are still failing to confront such requests in the courts on behalf of their users. The exception is Entel, which got top marks because it was the only ISP, out of the several contacted for the same information, to refuse a government request for an individual’s data.

Chilean ISPs WOM!, VTR, Claro, and Entel also make clear in their law enforcement guidelines that a prior judicial order is required before they hand content and metadata over to authorities. According to Derechos Digitales’ 2019 report, these four were the only companies of the six featured in the research to publish law enforcement guidelines. None of them had taken these steps in 2017, the project’s first year of operation in Chile.

An even more significant achievement can be seen in user notification. ISPs in the region have always been reluctant to lay out a proper procedure for alerting users of government data requests, which was reflected in Chile’s 2017 report. In the latest edition, however, WOM!, VTR, and Claro in Chile explicitly commit to user notification in their policies.


Peru

In Peru, three of five companies didn’t publish privacy policies in 2015. By 2019, only one failed to provide details on the collection, use, and processing of users’ personal data. Hiperderecho’s 2019 report also shows progress in companies’ commitment to demand judicial orders before handing over users’ data. Bitel and Claro explicitly demand warrants when the request is for content. Telefónica (Movistar) stands out by requiring a warrant for both content and metadata. In 2015, only Movistar demanded a warrant for the content of communications.

Way Forward

Despite the progress seen in Brazil, Colombia, Chile, and Peru, there’s still a lot to be done in those countries. We also need to wait for the upcoming evaluations of Argentina, Panama, Paraguay, and Spain, which were only recently included in the project. Overall, too many telecom companies, whether large or small, global or local, still don’t publish law enforcement guidelines laying out the procedures and legal requirements the government must meet to obtain users’ information. Those guidelines should be grounded in the national legal framework and the countries’ international human rights commitments.

Companies in the region likewise fall short on committing to require a judicial order before handing over metadata to authorities. Finally, ISPs in the region are still wary of notifying users when governments request their information. Notification is crucial for ensuring users’ ability to challenge a request and to seek remedies when it’s unlawful or disproportionate. The same fear keeps many ISPs from publicly defending their users in court and in Congress.


For more information, see the relevant media coverage of our partners’ reports in Colombia, Paraguay, Brazil, Peru, Argentina, Spain, Chile, Mexico, and Panama.

Go to Source
Author: Katitza Rodriguez

EFF Defends Section 230 in Congress

Watch EFF Legal Director Corynne McSherry Defend the Essential Law Protecting Internet Speech

All of us have benefited from Section 230, a federal law that has promoted the creation of virtually every open platform and communication tool on the Internet. The law’s premise is simple: if you are not the original creator of speech found on the Internet, you cannot be held liable for any harm it does. But this simple premise is under attack in Congress. If some lawmakers get their way, the Internet could become a far more restrictive space very soon.

EFF Legal Director Corynne McSherry will testify in support of Section 230 today in a House Energy and Commerce Committee hearing called “Fostering a Healthier Internet to Protect Consumers.” You can watch the hearing live on YouTube and follow along with our commentary @EFFLive.

In McSherry’s written testimony, she lays out the case for why a strong Section 230 is essential to online community, innovation, and free expression.

Section 230 has ushered in a new era of community and connection on the Internet. People can find friends old and new over the Internet, learn, share ideas, organize, and speak out. Those connections can happen organically, often with no involvement on the part of the platforms where they take place. Consider that some of the most vital modern activist movements—#MeToo, #WomensMarch, #BlackLivesMatter—are universally identified by hashtags.

McSherry also cautions Congress to consider the unintended consequences of forcing online platforms to over-censor their users. When platforms take on overly restrictive and non-transparent moderation processes, marginalized people are often silenced disproportionately.

Without Section 230—or with a weakened Section 230—online platforms would have to exercise extreme caution in their moderation decisions in order to limit their own liability. A platform with a large number of users can’t remove all unlawful speech while keeping everything else intact. Therefore, undermining Section 230 effectively forces platforms to put their thumbs on the scale—that is, to remove far more speech than only what is actually unlawful, censoring innocent people and often important speech in the process.

Finally, McSherry urges Congress to consider the unintended consequences of last year’s Internet censorship bill, FOSTA, before it further undermines Section 230.

FOSTA teaches that Congress should carefully consider the unintended consequences of this type of legislation, recognizing that any law that puts the onus on online platforms to discern and remove illegal posts will result in over-censorship. Most importantly, it should listen to the voices most likely to be taken offline.

Read McSherry’s full testimony.

Author: Elliot Harmon

Hearing Thursday: EFF’s Rainey Reitman Will Urge California Lawmakers to Balance Needs of Consumers In Developing Cryptocurrency Regulations

Whittier, California—On Thursday, Oct. 17, at 10 am, EFF Chief Program Officer Rainey Reitman will urge California lawmakers to prioritize consumer choice and privacy in developing cryptocurrency regulations.

Reitman will testify at a hearing convened by the California Assembly Committee on Banking and Finance. The session, “Virtual Currency Businesses: The Market and Regulatory Issues,” will explore the business, consumer, and regulatory issues in the cryptocurrency market. EFF supports regulators stepping in to hold accountable those engaging in fraud, theft, and other misleading cryptocurrency business practices. But EFF has been skeptical of many regulatory proposals that are vague, that are designed for only one type of technology, that could dissuade future privacy-enhancing innovation, or that might entrench existing players to the detriment of upstart innovators.

Reitman will tell lawmakers that cryptocurrency regulations should protect consumers but not chill future technological innovations that will benefit them.

WHAT: Informational Hearing of the California Assembly Committee on Banking and Finance

WHO: EFF Chief Program Officer Rainey Reitman

WHEN: Thursday, October 17, 10 am

WHERE: Rio Hondo Community College
               Campus Inn
               600 Workman Mill Rd.
               Whittier, California 90601


Author: Karen Gullo

Congressional Hearing Wednesday: EFF Will Urge Lawmakers to Protect Important Internet Free Speech Law

Washington, D.C. – On Wednesday, Oct. 16, Electronic Frontier Foundation (EFF) Legal Director Corynne McSherry will testify at a congressional hearing in support of Section 230 of the Communications Decency Act (CDA)—one of the most important laws protecting Internet speech.

CDA 230 shields online platforms from liability for content posted by users, meaning websites and online services can’t be punished in court for things that their users say online. McSherry will tell lawmakers that the law protects a broad swath of online speech, from forums for neighborhood groups and local newspapers, to ordinary email practices like forwarding and websites where people discuss their views about politics, religion, and elections.

The law has played a vital role in providing a voice to those who previously lacked one, enabling marginalized groups to get their messages out to the whole world. At the same time, CDA 230 allows providers of all sizes to make choices about how to design and moderate their platforms. McSherry will tell lawmakers that weakening CDA 230 will encourage private censorship of valuable content and cement the dominance of those tech giants that can afford to shoulder new regulatory burdens.

McSherry is one of six witnesses who will testify at the House Committee on Energy and Commerce hearing on Wednesday, entitled “Fostering a Healthier Internet to Protect Consumers.”  Other witnesses include law professor Danielle Citron, and representatives from YouTube and reddit.

House Committee on Energy and Commerce
“Fostering a Healthier Internet to Protect Consumers”

EFF Legal Director Corynne McSherry

Wednesday, Oct 16
10 am

2123 Rayburn House Office Building
John D. Dingell Room
45 Independence Ave SW
Washington, DC  20515


Author: Karen Gullo