The Ecuadorean Authorities Have No Reason to Detain Free Software Developer Ola Bini

Hours after Julian Assange was ejected from the Ecuadorean embassy in London last week, police officers in Ecuador detained the Swedish citizen and open source developer Ola Bini. They seized him as he prepared to travel from his home in Quito to Japan, claiming that he was attempting to flee the country in the wake of Assange’s arrest. In fact, Bini had booked the vacation long ago, and had publicly mentioned it on his Twitter account before Assange was arrested.

Ola’s detention was full of irregularities, as documented by his lawyers. His warrant was for a “Russian hacker” (Bini is neither Russian nor a hacker); he was not read his rights, not allowed to contact his lawyer, and not offered a translator.

The charges against him, when they were finally made public, are tenuous. Ecuador’s general prosecutor has stated that Bini was accused of “alleged participation in the crime of assault on the integrity of computer systems” and attempts to destabilize the country. The “evidence” seized from Ola’s home that Ecuadorean police showed journalists to demonstrate his guilt was nothing more than a pile of USB drives, hard drives, two-factor authentication keys, and technical manuals: all familiar property for anyone working in his field.

Ola is a free software developer who has worked to improve the security and privacy of the Internet for all its users. He has contributed to several key open source projects, including JRuby, several Ruby libraries, and multiple implementations of the secure and open communication protocol OTR. Ola’s team at ThoughtWorks contributed to Certbot, the EFF-managed tool that has provided strong encryption for millions of websites around the world.

Like many people working on distributed projects defending the Internet, Ola has no need to work from a particular location. He traveled the world, but chose to settle in Ecuador because of his love of that country and of South America in general. At the time of his arrest, he was putting down roots in his new home, including co-founding Centro de Autonomia Digital, a non-profit devoted to creating user-friendly security tools, based out of Ecuador’s capital, Quito.

One might expect the Ecuadorean administration to hold up Bini as an example of the country’s high-tech promise, and to use his expertise to help the new administration secure its infrastructure — just as the European Union made use of Ola’s expertise when developing its government-funded DECODE privacy project.

Instead, Ecuador’s leadership has targeted him for arrest as part of a wider political process to distance itself from WikiLeaks. They have incorporated Ola into a media story that claims he was part of a gang of Russian hackers who planned to destabilize the country in retaliation for Julian Assange’s ejection.

At EFF, we are familiar with overzealous prosecutors attempting to implicate innocent coders by portraying them as dangerous cyber-masterminds, as well as demonizing the tools and lifestyle of coders that work to defend the security of critical infrastructure, not undermine it. These cases are indicative of an inappropriate tech panic, and their claims are rarely borne out by the facts.

As expressed by the many technologists supporting Ola Bini in our statement of solidarity, Ecuador should drop all charges against him, and allow Ola to return home to his family and friends. Ecuador’s leaders undermine their country’s reputation abroad and the independence of its judicial system by this fanciful and unfounded prosecution.

Go to Source
Author: Danny O'Brien


Julian Assange’s Prosecution is about Much More Than Attempting to Hack a Password

The recent arrest of WikiLeaks editor Julian Assange surprised many by hinging on a single charge: a Computer Fraud and Abuse Act (CFAA) count for one unsuccessful attempt to crack a password. This might not be the only charge Assange ultimately faces. The government can add more before the extradition decision, and possibly even after that if it gets a waiver from the UK or otherwise. Yet some have claimed that, as the indictment sits now, the single CFAA charge is a sign that the government is not aiming at journalists. We disagree. This case seems to be a clear attempt to punish Assange for publishing information that the government did not want published, not merely a prosecution of a single failed attempt at cracking a password. And having watched CFAA criminal prosecutions for many years, we think that neither journalists nor the rest of us should be breathing a sigh of relief.

The CFAA grants broad discretion to prosecutors and has been used to threaten, prosecute, and civilly sue security researchers, competitors, and disloyal employees, among others. It has notoriously severe penalties, often applied out of all proportion to the offense. Here the government says the single charge of attempted, apparently unsuccessful assistance in password cracking can carry five years in prison, although under the sentencing guidelines the actual sentence would likely be lower. Remember, there is no parole in the federal judicial system. 

While we can all agree that we need some method for prosecuting malicious computer crimes, the lack of clear limits and exceptions, combined with draconian penalties, makes the CFAA a powerful hammer that prosecutors can use against those who act against the wishes of a computer owner. That’s an especially broad reach in this age of networked computers. As the tragic prosecution of our friend Aaron Swartz for downloading scientific articles demonstrated, this also isn’t the first time the CFAA has been used to bludgeon people for trying to inform the public.

Since journalists often work to provide us with information that the powerful do not want us to see, we do not believe this will be the last time we see the CFAA used to prosecute efforts central to journalism. 

Of course, breaking into computers and cracking passwords is, in many contexts, rightly illegal. When analyzing the worst abuses of the CFAA, EFF has argued that the statute should only be applied to serious attempts to circumvent technological access barriers, including passwords. But even if the government has made a sufficient claim of a ‘legitimate’ CFAA violation here, it still must prove every element beyond a reasonable doubt, and it should do so without relying on irrelevant arguments about whether WikiLeaks was truly engaged in journalism.

Whistleblower Chelsea Manning was charged in 2010 for her role in the release of approximately 700,000 military war and diplomatic records to WikiLeaks, which created front page news stories around the world and spurred significant reforms. The disclosure of classified Iraq war documents exposed human rights abuses and corruption the government had kept hidden from the public. While the disclosures riveted the globe, they also angered, embarrassed, and inconvenienced many, including the U.S. Departments of Defense and State, although no injuries or deaths were ever demonstrated as a result.

The Assange indictment, in contrast, arises from conversations between Assange and Manning about an apparently unsuccessful attempt to access other classified documents. Here’s why it seems clear to us that the government’s charge of an attempted conspiracy to violate the CFAA is being used as thin cover for attacking journalism.

First, the government spends much of the indictment referencing regular journalistic techniques that are irrelevant to the CFAA claim. The indictment includes the actual elements of the CFAA claim in paragraph 15. Here’s an attempt to translate it into plain English: pursuant to an agreement aimed at giving Assange access to secret government information, Manning gave Assange a scrambled portion of a password that would allow Manning to log into a computer in a way that would hide her identity from the government. Assange’s only alleged illegal act was trying to unscramble a portion of that password.
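For readers unfamiliar with what “unscrambling” a password involves: systems typically store a one-way hash of a password rather than the password itself, so the only general way to recover the original is to guess candidates and hash each one until something matches. Here is a minimal sketch in Python; it uses SHA-256 and a tiny lowercase-only search space purely for illustration, not the specific (Windows-specific) hash format reportedly at issue in the indictment.

```python
import hashlib
import string
from itertools import product

def crack(target_hash: str, max_len: int = 4):
    """Brute-force a SHA-256 hex digest by hashing candidate
    passwords (lowercase letters only, for brevity) until one
    matches. Returns the recovered password, or None on failure."""
    for length in range(1, max_len + 1):
        for chars in product(string.ascii_lowercase, repeat=length):
            guess = "".join(chars)
            if hashlib.sha256(guess.encode()).hexdigest() == target_hash:
                return guess
    return None

# Hashes are one-way: given only the digest, we cannot invert it,
# but we can recover a weak password by exhaustive guessing.
target = hashlib.sha256(b"dog").hexdigest()
print(crack(target))  # dog
```

Even this toy search shows why such an attempt is exactly that: an attempt at guessing, with no guarantee of success — which matches the indictment’s description of an effort that apparently failed.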

If the government wasn’t aiming further, it could have stopped there. But it didn’t. Instead it included descriptions of normal journalistic practices in the modern age: using a secure chat service, using cloud services to transfer files, removing usernames, and deleting logs to protect the source’s identity. The government includes in the indictment a cryptic comment by Assange: “curious eyes never run dry in my experience,” which it characterizes as “encouraging” violations of the law. The government’s inclusion of these facts, as well as its reference to the Espionage Act, is a strong signal that it believes these other actions should also be viewed as part of a crime.   

On top of that, as prosecutors have done since the 1990s when they want to feed the “hacker madness” narrative, they added unnecessary computer allegations to the indictment. The indictment mentions Manning’s use of the Linux operating system, darkly described as “special software . . . to access the computer file” that contained the password. It describes the use of a secure online chat service called Jabber. It even includes the fact that Manning used a “special folder” in WikiLeaks’ cloud-based file transfer system. These facts are completely irrelevant to the single CFAA claim, but they, along with the Justice Department’s press release headline trumpeting Assange’s “hacking,” appear aimed at linking and even equating journalism and the use of normal technical tools with the underlying crime.

Second, President Trump himself has blurred the distinction between what Wikileaks is accused of here and mainstream journalism. In an interview just after the arrest, Trump received a lot of scorn for saying that he did not know much about Wikileaks, an obvious lie. But what he said next should also be raising concerns about Trump’s view of the legality of normal journalistic practices: “I guess the concept is perhaps [Assange] is a reporter type and, you know, The New York Times is doing the same thing maybe and The Washington Post maybe the same thing.” Trump has made no secret of his hatred for these outlets and desire to create more liability for journalists revealing facts and news he doesn’t like to the public. His words here should give journalists pause.

Third, legally speaking, the claim in the indictment itself seems very small. The underlying act Assange is accused of—a single failed attempt to figure out a password—was not even important enough to be included in the formal CFAA charges leveled against Manning, even though it was known to prosecutors and reported long ago. The government made its CFAA case against Manning on her separate use of an “unauthorized” program (Wget) to actually access other materials she provided to WikiLeaks, in violation of the government’s terms of use. For separate reasons, this was not a legitimate use of the CFAA, as EFF argued in its amicus brief in support of Manning. The misapplication of the CFAA to Manning is still pending in the appeal of her case, which continues despite the commutation of her sentence.

In the prosecutors’ desperation to find something, anything, to charge Assange, the U.S. government had to reach beyond the acts it used to court-martial Manning into something that apparently didn’t happen. While attempted violations of the CFAA are illegal, as attempts at many other crimes are, this is still a remarkably small-potatoes violation—with no apparent harm. It’s difficult to imagine that any U.S. Attorney’s office would even investigate, much less impanel a grand jury and demand extradition over an attempted, unsuccessful effort to unscramble a single password, if it wasn’t being done to punish the later publication of other materials.

From where we sit, this prosecution feels sadly familiar. Just a few years ago, this same statute was used by federal prosecutors to find something, anything, they could use to charge our friend Aaron Swartz. Swartz angered the government, first by downloading a large set of judicial documents from the PACER system and later by downloading scientific journal articles from JSTOR. The government then continued the JSTOR prosecution even when JSTOR, the alleged victim, asked it to stop. Facing the CFAA’s draconian penalties, Swartz took his own life.

From these and other CFAA prosecutions we’ve tracked over at least the past 20 years, it’s nearly impossible to weigh the relatively narrow charge used to arrest Assange without considering the nearly decade-long effort by the U.S. government to find a way to punish WikiLeaks for publishing information vital to the public interest. Anyone concerned about press freedom should be concerned about this application of the CFAA.

Author: Cindy Cohn

Media Alert: EFF Argues Against Forced Unlocking of Phone in Indiana Supreme Court

Justices to Consider Fifth Amendment Right Against Self-Incrimination

Wabash, IN—At 10 a.m. on Thursday, April 18, the Electronic Frontier Foundation (EFF) will argue to the Indiana Supreme Court that police cannot force a criminal suspect to turn over a passcode or otherwise decrypt her cell phone. The case is Katelin Seo v. State of Indiana.

The Fifth Amendment of the Constitution states that people cannot be forced to incriminate themselves, and it’s well settled that this privilege against self-incrimination covers compelled “testimonial” communications, including physical acts. However, courts have split over how to apply the Fifth Amendment to compelled decryption of encrypted devices.

Along with the ACLU, EFF responded to an open invitation from the Indiana Supreme Court to file an amicus brief in this important case. In Thursday’s hearing, EFF Senior Staff Attorney Andrew Crocker will explain that the forced unlocking of a device requires someone to disclose “the contents of his own mind.” That is analogous to written or oral testimony, and is therefore protected under the U.S. Constitution.

Thursday’s hearing is in Indiana’s Wabash County to give the public an opportunity to observe the work of the court. Over 750 students are scheduled to attend the argument. It will also be live-streamed.

Hearing in Katelin Seo v. State of Indiana

EFF Senior Staff Attorney Andrew Crocker

April 18, 10 a.m.

Ford Theater
Honeywell Center
275 W. Market Street
Wabash, Indiana 46992


Author: Karen Gullo

Four Steps Facebook Should Take to Counter Police Sock Puppets


Despite Facebook’s repeated warnings that law enforcement is required to use “authentic identities” on the social media platform, cops continue to create fake and impersonator accounts to secretly spy on users. By pretending to be someone else, cops are able to sneak past the privacy walls users put up and bypass legal requirements that might require a warrant to obtain that same information.

The most recent example—and one of the most egregious—was revealed by The Guardian this week. The U.S. Department of Homeland Security operated a complex network of dummy Facebook profiles and pages to trick immigrants into registering with a fake college, the University of Farmington. The operation netted more than 170 arrests. Meanwhile, Customs and Border Protection issued a privacy impact assessment that encourages investigators to conceal their social media accounts.

Last fall, after the Memphis Police Department was caught using fake profiles to monitor Black Lives Matter activists, Facebook added new language to its law enforcement guidelines emphasizing that this practice was not permitted. Facebook also removed the offending accounts and sent Memphis a stern warning not to do it again. However, Facebook has proven resistant to sending warning letters to every agency caught red-handed; recently it turned down a request by EFF that it confront the San Francisco Police Department after court records revealed its use of fake accounts in criminal investigations.

This latest DHS investigation uncovered by The Guardian, as well as The Root’s report revealing other agencies that authorize undercover cops to friend people on Facebook, indicates that much more needs to be done.

EFF is now calling on Facebook to escalate the matter with law enforcement in the United States. Facebook should take the following actions to address the proliferation of fake/impersonator Facebook accounts operated by law enforcement, in addition to suspending the fake accounts.

  1. As part of its regular transparency reports, Facebook should publish data on the number of fake/impersonator law enforcement accounts identified, what agencies they belonged to, and what action was taken.
  2. When a fake/impersonator account is identified, Facebook should alert the users and groups that interacted with the account, whether directly or indirectly. These interactions include, but are not limited to, a friend request, Messenger messages, a comment, membership in a group, or being shown an advertisement. The user should know what agency operated the account and how long it was in operation. Facebook should also add a notification to the agency’s page informing the public that the agency is known to have created fake/impersonator law enforcement accounts.
  3. Facebook should further amend its “Amended Terms for Federal, State and Local Governments in the United States” to make explicitly clear that, by agreeing to the terms, an agency is agreeing not to operate fake/impersonator profiles on the platform. Facebook has the right to take action in response to violations of its terms, but when it does so, it should be fair and consistent with the Santa Clara Principles.
  4. Facebook should review law enforcement agencies’ written policies for social media use. When an agency has a written policy of operating fake/impersonator accounts in violation of the “Amended Terms for Federal, State and Local Governments in the United States,” Facebook should add a notification to the agency’s page to inform users of that policy.

Facebook’s practice of taking down these individual accounts when they learn about them from the press (or from EFF) is insufficient to deter what we believe is a much larger iceberg beneath the surface. We often only discover the existence of law enforcement fake profiles months, if not years, after an investigation has concluded. These four changes are relatively light lifts that would enhance transparency and establish real consequences for agencies that deliberately violate the rules.

Author: Dave Maass

Don’t Force Web Platforms to Silence Innocent People

The U.S. House Judiciary Committee held a hearing this week to discuss the spread of white nationalism, online and offline. The hearing tackled hard questions about how online platforms respond to extremism online and what role, if any, lawmakers should play. The desire for more aggressive moderation policies in the face of horrifying crimes is understandable, particularly in the wake of the recent massacre in New Zealand. But unfortunately, looking to Silicon Valley to be the speech police may do more harm than good.

When considering measures to discourage or filter out unwanted activity, platforms must consider how those mechanisms might be abused by bad actors. Similarly, when Congress considers regulating speech on online platforms, it must consider both the First Amendment implications and how its regulations might unintentionally encourage platforms to silence innocent people.

Again and again, we’ve seen attempts to more aggressively stamp out hate and extremism online backfire in colossal ways. We’ve seen state actors abuse flagging systems in order to silence their political enemies. We’ve seen platforms inadvertently censor the work of journalists and activists attempting to document human rights atrocities.

But there’s a lot platforms can do right now, starting with more transparency and visibility into platforms’ moderation policies. Platforms ought to tell the public what types of unwanted content they are attempting to screen, how they do that screening, and what safeguards are in place to make sure that innocent people—especially those trying to document or respond to violence—aren’t also censored. Rep. Pramila Jayapal urged the witnesses from Google and Facebook to share not just better reports of content removals, but also internal policies and training materials for moderators.

Better transparency is not only crucial for helping to minimize the number of people silenced unintentionally; it’s also essential for those working to study and fight hate groups. As the Anti-Defamation League’s Eileen Hershenov noted:

To the tech companies, I would say that there is no definition of methodologies and measures and the impact. […] We don’t have enough information and they don’t share the data [we need] to go against this radicalization and to counter it.

Along with the American Civil Liberties Union, the Center for Democracy and Technology, and several other organizations and experts, EFF endorses the Santa Clara Principles, a simple set of guidelines to help align platform moderation practices to human rights and civil liberties principles. The Principles ask platforms

  • to be honest with the public about how many posts and accounts they remove,
  • to give notice to users who’ve had something removed about what was removed, and under what rule, and
  • to give those users a meaningful opportunity to appeal the decision.

Hershenov also cautioned lawmakers about the dangers of heavy-handed platform moderation, pointing out that social media offers a useful view for civil society and the public into how and where hate groups organize: “We do have to be careful about whether in taking stuff off of the web where we can find it, we push things underground where neither law enforcement nor civil society can prevent and deradicalize.”

Before they try to pass laws to remove hate speech from the Internet, members of Congress should tread carefully. Such laws risk pushing platforms toward a more highly filtered Internet, silencing far more people than was intended. As Supreme Court Justice Anthony Kennedy wrote in Matal v. Tam (PDF) in 2017, “A law that can be directed against speech found offensive to some portion of the public can be turned against minority and dissenting views to the detriment of all.”

Author: Elliot Harmon

Join EFF and Help Guide Our International Policy Work

April 12, 2019

Do you want to help defend civil liberties around the world? Are you an expert in copyright, intermediary liability, and European lawmaking? A rare opportunity to help guide EFF in those arenas is now available—we’re hiring an International Policy Director.

EFF weighs in when international lawmaking has a huge potential impact on the Internet for everyone. That’s why we banded together with organizations around the world to stop the Trans-Pacific Partnership, whose copyright and anti-hacking measures would have changed the global Internet for the worse. It’s also why we fought to stop Article 13 in Europe, which now threatens to usher in a new era of a more highly filtered web. The policy fights that will change the Internet for everyone frequently happen in international forums.

The International Policy Director will act as a bridge between EFF’s legal strategy and our international policy work. You don’t have to be a lawyer to apply, but lawyers are highly encouraged. The Director will work closely with others across EFF and lead a small team of senior policy experts, so communication skills and management experience are essential.

EFF has highly competitive housing benefits to make living in the Bay Area a reality. We also have a warm, welcoming, and intellectually challenging workplace culture.

If you think you might be the right person for the role, please apply. Otherwise, please forward the listing on to your appropriate contacts.



Author: Elliot Harmon

Government Fights to Trap EFF’s NSA Spying Case in a Catch-22

The U.S. government admits—and, of course, it’s common knowledge—that the NSA conducts mass, dragnet surveillance of hundreds of millions of Americans’ communications. It has done so via a series of different technical strategies and legal arguments for over 18 years. Yet the Justice Department insists that our legal fight against this spying is bound by a Catch-22: no one can sue unless the court first determines that they were certainly touched by the vast surveillance mechanisms of the NSA, but the court cannot decide whether any particular person’s email, web searches, social media or phone calls were touched by the surveillance unless the government admits it. Which, of course, it will not do.

At a federal court hearing last month in Oakland, California, for our Jewel v. NSA case, we took on this circular argument. EFF Special Counsel Richard Wiebe reviewed the vast trove of direct and circumstantial evidence showing that our clients’ communications were likely swept up by the NSA dragnet surveillance—evidence that establishes legal “standing.” The interception of communications was first revealed in 2006 by Mark Klein, a whistleblower working for AT&T in San Francisco. Klein demonstrated, with expert assistance, that AT&T tapped into the high-capacity fiber optic cables that carry Internet traffic and copied all of the data flowing through those cables for the NSA. A 2009 draft NSA Inspector General’s report confirms that telecom companies including AT&T gave the NSA access to customers’ communications. Justice Department officials and government agencies have acknowledged the surveillance’s existence going back a decade. Ex-NSA contractor and whistleblower Edward Snowden leaked documents describing the spying and authenticated a key document for the court when the government refused to. And just this past year, an additional whistleblower and several other experts have submitted statements explaining that the surveillance program likely touched our clients’ communications.

We also noted that it is not necessary to absolutely establish that our clients’ communications were touched by the surveillance to prevent dismissal. We must only demonstrate that it is more likely than not that our clients’ communications were touched by the NSA’s three programs: telephone record collection, Internet metadata collection, and Internet backbone surveillance. Given the mountain of evidence we have presented and the admitted scope of the programs, there is almost no chance that our clients’ communications—like the communications of millions of innocent Americans—weren’t touched.

“Direct and circumstantial evidence are both enough for standing,” Wiebe told the court. “The public evidence, combined with classified evidence, will remove any question about standing.”

We also directly addressed the government’s state secrets claims, which were first rejected by the court in 2006 but which the DOJ continues to assert. We got a boost from a recent ruling by the U.S. Court of Appeals for the Ninth Circuit, Fazaga v. FBI, which flatly rejected the application of the state secrets privilege in electronic surveillance cases. It instead found that Congress required the courts to use a provision of the Foreign Intelligence Surveillance Act, 50 U.S.C. § 1806(f), to decide whether the alleged spying was lawful. That same law should be used in Jewel.

Justice Department lawyers fought back hard, claiming that our evidence wasn’t enough. They said that the court cannot rely on the draft NSA Inspector General’s report because the NSA has refused to formally authenticate it — despite never claiming it was fake. Because the government refused to formally acknowledge the document, Snowden submitted a declaration in our case confirming that he had seen the report when he was an NSA contractor. DOJ attorneys told the court that Snowden was “not competent” to testify.  As for the Ninth Circuit ruling, DOJ attorneys said it doesn’t apply because our plaintiffs must first prove that they were surveilled — and they cannot do that unless the government agrees.

Rather circular, no? Our clients can’t sue because a court isn’t allowed to rule on whether they have standing because that would harm national security. And they can’t test the government’s claim of national security, because they don’t have standing.

If U.S. District Court Judge Jeffrey White rules that he is indeed trapped by the government’s Catch-22 argument, then EFF will be required, once again, to take the case to the Ninth Circuit to have the decision reversed.

Despite the government’s ongoing efforts to kill it, Jewel v. NSA has come further than any case challenging NSA spying. At this point, 18 years in, two of the three programs at issue in the case have been stopped due in part to public outcry. The third was radically scaled back. At least two programs—telephone records and Internet metadata—were reportedly abandoned in part because, despite significant financial costs and ongoing harms to the rights of millions of Americans, they showed no appreciable benefit in protecting anyone.

Yet the government’s strategy of continually throwing up roadblocks has kept us from getting to the heart of the matter: the NSA has flipped the basic rules of government access to your private papers upside down. Instead of gaining access only when it has a specific basis to believe you’ve done something wrong, the NSA first collects or scans our communications en masse, then sorts out what it really wants. This is a digital version of a “general warrant”—sweeping authority to search Americans without any suspicion—which was used in colonial times and rejected by the nation’s founders. John Adams even claimed that opposition to general warrants fueled the American Revolution.

Now the government has resorted to arguing that what is common knowledge in the world, and what European courts have now ruled on multiple times, must never be spoken of in an adversarial process in an American court of law. That’s not right, and we’ll keep fighting for our clients to have their day in court.

Author: Cindy Cohn

Victory! The House of Representatives Passes Net Neutrality Protections

In a vote of 232-190, the House of Representatives passed the Save the Internet Act (H.R. 1644). This is a major step forward in the fight for net neutrality protections, and it’s because you spoke up about what you want.

The Save the Internet Act was written to restore the strong and hard-fought protections of the 2015 Open Internet Order. Americans overwhelmingly support an Internet where Internet service providers (ISPs) have to treat all the data transmitted over their networks in a nondiscriminatory way. In other words, where ISPs don’t act as gatekeepers to the Internet and where you, the user, decide how and what you want to see online. As many Americans have no choice when it comes to their ISP, it is vital that they retain control over their online experience.

Americans overwhelmingly support an Internet where Internet service providers (ISPs) have to treat all the data transmitted over their networks in a nondiscriminatory way.

Famously, violations of net neutrality have included the practices of blocking, throttling, and paid prioritization. But that is not all that ISPs can do to warp your Internet experience. The Open Internet Order of 2015 prohibited these three techniques, while also including privacy and competition protections. All of these things would be restored with the Save the Internet Act. We deserve a return to the 2015 order, not a watered-down version of net neutrality.

The Save the Internet Act could have had damaging or weakening amendments added to it on its way to today’s vote, but you spoke up and told your Representatives that you wanted real net neutrality and not net neutrality in name only. That’s why the Save the Internet Act passed unscathed.

A number of amendments did get added to the bill, but they are mostly about directing research by government agencies into the state of the Internet and FCC accountability.

One amendment does give us pause, though. The last amendment to the bill (McAdams) affirms a bit of the old Open Internet Order, saying that the net neutrality prohibition on blocking doesn’t prevent ISPs from blocking “illegal” content, a distinction that includes copyrighted material. Users do not want an ISP to substitute for a court of law in determining the legality of speech online. Users want ISPs to simply provide broadband access and serve as conduits of our speech. A broad reading of this amendment could easily have greenlit Comcast’s throttling of BitTorrent, which led a past FCC to sanction the cable company for violating net neutrality.

EFF had concerns with the original 2015 order, as it seemed to let ISPs make their own determinations of legality, rather than saying that only blocking content deemed illegal by a court is not a violation of the order. As ISPs and media companies become even more intertwined, it’s easy to imagine this loophole being exploited. However, legislative debate between Rep. Ben McAdams, the amendment’s author, and Rep. Mike Doyle, the lead author of the Save the Internet Act, made clear that this amendment did not give an ISP the right to censor content solely because the ISP thought the content was unlawful.

As the Save the Internet Act is debated in the Senate and comes up to a final vote, we’ll fight to keep net neutrality protections from having a copyright loophole. But before we can do that, we need the Senate to take up net neutrality as an issue.

Last year, a majority of the Senate voted to overturn the FCC. Like the Save the Internet Act, that Congressional Review Act vote would have restored the protections of the 2015 Open Internet Order. It’s time to ask the Senate to once again show a commitment to a free and open Internet. Contact your Senators and tell them to co-sponsor the Save the Internet Act (S. 682).

Take Action

Protect Net Neutrality

Author: Katharine Trendacosta

The Los Angeles Department of Transportation’s Ride Tracking Pilot is Out of Control

The Los Angeles Department of Transportation (LADOT) is about to make a bad privacy situation worse, and it’s urgent that Los Angeles residents contact their city council representatives today to demand they put the brakes on LADOT’s irresponsible data collection. The agency plans to scoop up trip data on every single e-bike and scooter ride taken within the city and, left unchecked, it will do so in the absence of responsible and transparent policies to mitigate the privacy risks to Los Angeles riders.

Take Action

Tell The City Council To Put The Brakes on LADOT’s Rider Surveillance Program 

Location data is among the most sensitive forms of information related to a person’s privacy. Collected over time, people’s movements from place to place reveal a good deal about them: where they work, where they play, where they worship, their political leanings, and even personal and familial relationships. While the U.S. Supreme Court and California’s State Legislature are in agreement on the sensitivity of location data, the Los Angeles Department of Transportation appears to be much less convinced.

EFF and OTI have called on LADOT to start taking the privacy of Los Angeles residents seriously and cease moving forward with its invasive data collection plans until it has real policies in place to protect the data. Make your voice heard, too.  

A Tale of Two APIs

In September, after the streets of Los Angeles were overwhelmed with dockless e-bikes and scooters, the Los Angeles City Council passed an ordinance calling for the creation of a Shared Mobility Device Pilot Program. In part, the ordinance called on LADOT to issue permits and set guidelines aimed at reducing sidewalk interference and regulating vehicle speed.

LADOT’s Mobility Data Specification (MDS), part of which went into effect shortly after the ordinance passed in September, gives the agency the ability to request massive amounts of information about Los Angeles riders and their day-to-day travels. Specifically, the MDS requires dockless mobility permit holders like LimeBike and Bird to provide LADOT access to a provider-side application programming interface (API), allowing the agency to demand granular trip data for dockless bicycle and scooter rides. This trip data includes extremely precise, time-stamped location data from the beginning to the end of each trip.
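To make the stakes concrete, here is an illustrative sketch, in Python, of the kind of granular, time-stamped trip record a provider-side trips API can expose. All field names, identifiers, and coordinates below are invented for illustration; this is not the official MDS schema.

```python
# Illustrative only — hypothetical field names and values, not the real
# MDS schema. The point is the shape of the data: a persistent vehicle ID
# plus precise coordinates bracketed by exact timestamps.
trip = {
    "provider_name": "example-provider",   # hypothetical provider
    "device_id": "veh-0042",               # persistent vehicle identifier
    "trip_id": "trip-98765",
    "start_time": 1555347000,              # Unix timestamps for the ride
    "end_time": 1555347900,
    "route": {                             # GeoJSON-style timestamped points
        "type": "FeatureCollection",
        "features": [
            {"type": "Feature",
             "properties": {"timestamp": 1555347000},
             "geometry": {"type": "Point",
                          "coordinates": [-118.2437, 34.0522]}},
            {"type": "Feature",
             "properties": {"timestamp": 1555347900},
             "geometry": {"type": "Point",
                          "coordinates": [-118.2551, 34.0430]}},
        ],
    },
}

# Even with no rider name anywhere in the record, the first and last
# points are precise origin/destination coordinates tied to exact times.
start = trip["route"]["features"][0]["geometry"]["coordinates"]
print(start)  # → [-118.2437, 34.0522]
```

Note that nothing in this record is a name, yet the origin and destination points are precise enough to be, in practice, a home address and a workplace.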

LADOT has not grappled with the serious privacy and civil liberties issues implicated by such a massive data collection campaign.

The problem? LADOT has not grappled with the serious privacy and civil liberties issues implicated by such a massive data collection campaign. Months later, despite requests from EFF, the Open Technology Institute, and the Center for Democracy and Technology, LADOT still fails to acknowledge that the raw trip data it collects through its MDS is personal data pertaining to the real movements of real individuals. More importantly, it has failed to set out basic privacy protections for the sensitive location data it collects every time Los Angeles residents take a dockless scooter or e-bike ride through their city.

Now, despite its lack of a clearly articulated plan to protect Los Angeles residents from the potential harms that could result from the exposure of this data, LADOT plans to make a bad situation worse. Beginning on April 15, LADOT will require dockless mobility operators to push trip data for each and every e-bike and scooter ride taken within the city directly to LADOT, and its for-profit partner Remix, through a new agency-side API as well.

Responsible Data Collection Requires Responsible Data Policy

In our letter to the Los Angeles City Council, EFF and OTI have called on the Council to put the brakes on these additional data sharing requirements before the April 15 deadline. LADOT should by no means be moving forward with increased data demands when it has yet to address the privacy and civil liberties concerns raised by earlier stages of the MDS.

So far, LADOT has issued only high-level “Data Protection Principles,” which amount to a list of aspirations and buzzwords you would want to see in a strong policy: “de-identification,” “data minimization,” “aggregation.” But they provide no meaningful, enforceable restrictions to protect the privacy of Los Angeles residents. These “principles” are a far cry from the transparent, actionable, and enforceable data privacy policies we would expect of any city agency demanding this level of sensitive information about Los Angeles residents.

Furthermore, LADOT’s failure to limit law enforcement access to raw trip data through anything less than a warrant signed by a judge is in seeming opposition to the Supreme Court’s holding in Carpenter v. United States, which held that “the Government must generally obtain a warrant supported by probable cause before acquiring” location records. In its ruling, the Court recognized that time-stamped location data “provides an intimate window into a person’s life, revealing not only his particular movements, but through them his familial, political, professional, religious, and sexual associations.” The Supreme Court’s analysis of the sensitivity of location data was echoed by the California State Legislature when it passed the California Consumer Privacy Act (CCPA)—explicitly listing geolocation information as personal information and affirming that any information that can be reasonably linked, directly or indirectly, with a particular consumer should be considered “personal information.”

Even with names stripped out, location information is notoriously easy to re-identify.

Part of the problem is LADOT’s failure to acknowledge the sensitive nature of trip information, claiming that the MDS requires “no personally identifiable information about users directly” (emphasis added). But even with names stripped out, location information is notoriously easy to re-identify—particularly for habitual trips. To demonstrate how this information could be re-identified, EFF staff technologists—in a cursory analysis of publicly available data from New York City’s bike-share program, Citi Bike—identified what is likely a single rider regularly leaving home between 7:30 am and 8 am each morning and returning home just after 6 pm each evening. Unlike New York’s public bike-share program, which requires riders to pick up and return bikes at docking stations dispersed throughout the city, LADOT’s program applies to dockless bikes and scooters, so the location data acquired through Los Angeles’ dockless mobility program is even more unique to each rider.

Yet, even with the data available through Citi Bike, one need only wait for our rider’s regular routine to begin one morning in order to confirm their identity. This may seem innocuous, but what if our rider was a domestic violence survivor at risk of being stalked by their abuser? Or, instead of a regular commute to and from work or school, the data showed our rider taking regular trips to attend Jummah prayer at a local mosque or meetings of a local political organization? The potential threat to their safety, as well as to their religious and political freedom, makes it easy to see how critical it is that LADOT and the City Council act to protect this sensitive personal information.
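The habitual-trip re-identification described above can be sketched in a few lines of Python. All records below are synthetic and invented for illustration: each “anonymized” trip is just a departure hour and an origin coordinate, yet bucketing trips by hour and approximate location makes one rider’s routine stand out.

```python
from collections import Counter

# Synthetic, name-free trip records: (start_hour, start_lat, start_lng).
# This is the shape of "anonymized" trip data — no identity fields at all.
trips = [
    (7.5, 40.7210, -73.9890),   # repeated ~7:30 am departures,
    (7.6, 40.7211, -73.9889),   # all from the same street corner
    (7.5, 40.7209, -73.9891),
    (18.1, 40.7585, -73.9822),  # ~6 pm trips from another location
    (18.2, 40.7584, -73.9823),
    (12.0, 40.7300, -74.0000),  # one-off midday trip (noise)
]

# Bucket each trip by (rounded hour, ~100 m grid cell). A habitual
# rider's trips pile up in a single bucket; one-off trips don't.
buckets = Counter(
    (round(hour), round(lat, 3), round(lng, 3))
    for hour, lat, lng in trips
)

# Any bucket with 3+ trips is a routine — i.e., likely one person's
# regular departure point, recoverable without any name in the data.
habitual = [bucket for bucket, count in buckets.items() if count >= 3]
print(habitual)  # → [(8, 40.721, -73.989)]
```

With real dockless data, each bucket also carries precise coordinates rather than a docking station shared by many riders, which is why the post notes that dockless trips are even more unique to each rider.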

Act Now

LADOT’s GitHub repository and June 2018 press release announcing “A New Digital Playbook for Mobility” make it clear the department has no intention of stopping at dockless e-bikes and scooters. At the same time, LADOT General Manager Seleta Reynolds, in her capacity as an official within the National Association of City Transportation Officials, also seems intent on spreading this methodology to other cities across the U.S. The people of Los Angeles and cities across the country deserve safe streets. They also deserve the freedom to move about those streets without undue risks to their privacy and physical well-being from unchecked vehicle surveillance. With the April 15 compliance deadline for the next phase of Los Angeles’ dockless mobility program quickly approaching, it’s urgent that Los Angeles residents contact their City Council representative today and demand that they put the brakes on LADOT’s irresponsible data collection.

Author: Nathan Sheard