The Worst Possible Version of the EU Copyright Directive Has Sparked a German Uprising

Last week’s publication of the final draft of the new EU Copyright Directive baffled and infuriated almost everyone, including the massive entertainment companies that lobbied for it in the first place; the artists’ groups who endorsed it only to have their interests stripped out of the final document; and the millions and millions of Europeans who had publicly called on lawmakers to fix grave deficiencies in the earlier drafts, only to find these deficiencies made even worse.

Take Action

Stop Article 13

Thankfully, Europeans aren’t taking this lying down. With the final vote expected to come during the March 25-28 session, mere weeks before European elections, European activists are piling pressure on their Members of the European Parliament (MEPs), letting them know that their vote on this dreadful mess will be on everyone’s mind during the election campaigns.

The epicenter of the uprising is Germany, which is only fitting, given that German MEP Axel Voss is almost singlehandedly responsible for poisoning the Directive with rules that will lead to mass surveillance and mass censorship, not to mention undermining much of Europe’s tech sector.

The German Consumer Association was swift to condemn the Directive, stating: “The reform of copyright law in this form does not benefit anyone, let alone consumers. MEPs are now obliged to do so. Since the outcome of the trilogue falls short of the EU Parliament’s positions at key points, they should refuse to give their consent.”

A viral video of Axel Voss being confronted by activists has been picked up by politicians campaigning against Voss’s Christian Democratic Party in the upcoming elections, spreading to Germany’s top TV personalities, like Jan Böhmermann.

Things are just getting started. On Saturday, with just two days of organizing, hundreds of Europeans marched on the streets of Cologne against Article 13. A day of action—March 23, just before the first possible voting date for MEPs—is being planned, with EU-wide events.

In the meantime, the petition to save Europe from the Directive—already the largest in EU history—keeps racking up more signatures, and is on track to be the largest petition in the history of the world.

Take Action

Stop Article 13

Author: Cory Doctorow

The Payoff From California’s “Data Dividend” Must Be Stronger Privacy Laws

California Governor Gavin Newsom, in his first State of the State Address, called for a “Data Dividend” (what some are calling a “digital dividend”) from big tech. It’s not yet clear what form this dividend will take. We agree with Governor Newsom that consumers deserve more from companies that profit from their data, and we suggest that any “dividend” should take the form of stronger data privacy laws to protect the people of California from abuse by the corporations that harvest and monetize our personal information.

In his February 12 address, Governor Newsom said:

California is proud to be home to technology companies determined to change the world. But companies that make billions of dollars collecting, curating and monetizing our personal data have a duty to protect it. Consumers have a right to know and control how their data is being used.

I applaud this legislature for passing the first-in-the-nation digital privacy law last year. But California’s consumers should also be able to share in the wealth that is created from their data. And so I’ve asked my team to develop a proposal for a new Data Dividend for Californians, because we recognize that your data has value and it belongs to you.

Strengthen the California Consumer Privacy Act

We agree with Governor Newsom that technology users and other Californians have “a right to know and control how their data is being used.” 

That’s why California began the process of protecting consumer data privacy last year. Specifically, it enacted the law that Governor Newsom described in his address: the California Consumer Privacy Act (CCPA). The CCPA provides consumers the right to know what personal information companies have collected from them, the right to opt-out of the sale of that information, and the right to delete some of that information.

EFF and other data privacy advocates will work this year to strengthen the CCPA. For example, California needs a private cause of action to enforce the CCPA, so consumers who suffer violations of their data privacy can hold accountable the corporations that violated their rights. The California Attorney General supports this expansion of CCPA enforcement power. The CCPA also should require opt-in consent before corporations share consumers’ data, and not just opt-out consent from corporations selling their data. Presumptions matter, and corporations may share personal information without selling it. Further, California needs a stronger right to know, including better “data portability,” meaning the right to obtain a machine-readable copy of one’s data.

Sadly, some big tech companies will work this year to weaken the CCPA. The privacy movement will resist their efforts.

With this legislative storm brewing, we are buoyed by Governor Newsom’s address. It signals his intent to stand up for the data privacy of Californians. We hope he will work with privacy advocates to strengthen the CCPA.

No Pay-For-Privacy

Some observers have speculated that by “Data Dividend,” Governor Newsom means payments by corporations directly to consumers in exchange for their personal information.

We hope not. EFF strongly opposes “pay-for-privacy” schemes. Corporations should not be allowed to require a consumer to pay a premium, or waive a discount, in order to stop the corporation from vacuuming up—and profiting from—the consumer’s personal information. It is not a good deal for consumers to get a handful of dollars from companies in exchange for surveillance capitalism remaining unchecked.

Privacy is a fundamental human right. It is guaranteed by the California Constitution. The California Supreme Court has ruled that this constitutional protection “creates a right of action against private as well as government entities.”

Pay-for-privacy schemes undermine this fundamental right. They discourage all people from exercising their right to privacy. They also lead to unequal classes of privacy “haves” and “have-nots,” depending upon the income of the user.

The good news is that the CCPA contains a non-discrimination rule, which forbids companies from discriminating against a consumer because the consumer exercised one of their CCPA privacy rights. For example, companies cannot deny goods, charge different prices, or provide a different level of quality. The bad news is that the CCPA’s non-discrimination clause has two unclear and potentially far-reaching exceptions. This year, privacy advocates will seek to eliminate these exceptions, and some business groups will seek to expand them.

We hope Governor Newsom will join us in the fight against pay-for-privacy, and for strong legal protection of consumer data privacy. As the Governor powerfully explained this week: “Consumers have a right to know and control how their data is being used.”

Author: Adam Schwartz

EFF to State Department: Respect Freedom of Speech of Chinese Students



February 15, 2019

EFF joined a letter to Secretary of State Mike Pompeo opposing a proposal to deploy stronger vetting procedures against Chinese students intending to study in the United States because the procedures would threaten the free speech interests of both Chinese students and their American associates.

Reuters reported that the Trump administration is considering “checks of student phone records and scouring of personal accounts on Chinese and U.S. social media platforms for anything that might raise concerns about students’ intentions in the United States, including affiliations with government organizations.”

In opposing the vetting proposal, we argued that “[p]rospective students may self-censor whom they talk to or what they say on social media out of fear that political discussion about China or the United States will harm their academic prospects—a result sharply at odds with our national commitment to academic freedom and free expression,” and that “monitoring the phone and social media activity of Chinese students also threatens the free speech rights of their American associates—whether family members, friends, or fellow students.”

The State Department’s Chinese student vetting proposal follows U.S. Customs and Border Protection’s new program to ask visa applicants from China for their social media handles, which we similarly opposed. These programs focusing on Chinese visitors are part of a concerning broader strategy by the Trump administration to engage in social media surveillance of both visitors and immigrants to the United States.

We joined the letter to Secretary Pompeo along with Foundation for Individual Rights in Education (FIRE), PEN America, National Coalition Against Censorship, and Defending Rights & Dissent.


Author: Sophia Cope

Oakland Renters Deserve Quality Service and The Power To Choose Their ISP



February 14, 2019


Oakland residents, we need your stories and experience to continue the fight to stop powerful Internet Service Providers (ISPs) from limiting your ability to choose the service that’s best for you.

Submit Your Story

If you live in Oakland and have had trouble acquiring service from the ISP of your choice, EFF wants to know.

For years, renters have been denied access to the Internet Service Provider of their choice as a result of pay-to-play schemes. These schemes, promoted by the corporations in control of the largest ISPs, allow powerful corporations to manipulate landlords into denying their tenants the ability to choose a provider who shares their values, or whose plans best meet the customer’s needs and budget.

This concern was only exacerbated when the FCC repealed the 2015 Open Internet Order. Chairman Pai and the FCC claimed that net neutrality protections were not necessary, because the free market would prevent exploitative practices by allowing customers to vote with their dollars. But with more than half the country having access to only one high-speed Internet Service Provider, this illusion of choice has never been based in reality. Even in cities like Oakland where many residents ostensibly have a choice, thousands of renters are denied the power of that option by real estate trusts and management firms that close their properties to any provider other than the one offering the most enticing landlord incentives.

In January of 2017, San Francisco adopted critical protections to stop these exploitative practices. As a result, San Francisco residents enjoy better, more affordable options than many of their friends and coworkers in neighboring communities.

EFF, local residents, advocacy groups, and businesses have begun working with Oakland lawmakers to make sure that the city’s renters can take advantage of these same protections. If you live in Oakland and have experienced difficulty acquiring Internet service from the provider that’s best for you, your City Council representatives want to know.


Author: Nathan Sheard

Designing Welcome Mats to Invite User Privacy

The way we design user interfaces can have a profound impact on the privacy of a user’s data. It should be easy for users to make choices that protect their data privacy. But all too often, big tech companies instead design their products to manipulate users into surrendering their data privacy. These methods are often called “Dark Patterns.”

When you purchase a new phone, tablet, or “smart” device, you expect to have to set it up with the needed credentials for it to be fully usable. For Android devices, you set up your Google account. For iOS devices, you set up your Apple ID. For your Kindle, you set up your Amazon account.

Privacy by default should be the goal. However, many platforms pair their on-boarding process with particularly worrisome practices that stand as obstacles to this aspiration.

What are “Dark Patterns”?

Harry Brignull, a UX researcher, coined the term “Dark Patterns.” He maintains a site dedicated to documenting the different types of Dark Patterns, where he explains: “Dark Patterns are tricks used in websites and apps that make you buy or sign up for things that you didn’t mean to.”

The Norwegian Consumer Council (the Forbrukerrådet or NCC) builds on this critical UX concept in a recent report that criticizes “features of interface design crafted to trick users into doing things that they might not want to do, but which benefit the business in question.”

On the heels of this report, the NCC filed a complaint against Google on the behalf of a consumer. This complaint argues that Google violated the European Union’s General Data Protection Regulation (GDPR) by tricking the consumer into giving Google access to their location information. Likewise, the French data protection agency (the CNIL) recently ruled that some of Google’s consent and transparency practices violate the GDPR. The CNIL fined Google 50 million Euros (equivalent to about 57 million U.S. dollars).

The NCC report emphasizes two important steps in the on-boarding process of Android-based devices: the enabling of Web & App Activity and Location History. These two services encompass a wide variety of information exchanges between different Google applications and services. Examples include collection of real-time location data on Google Maps and audio-based searches and commands via Google Assistant.

It is possible to disable these services in the “Activity Controls” section of one’s account. But Google’s on-boarding process causes users to unintentionally opt in to information disclosure, then makes it difficult to undo these so-called “choices” about privacy, which were never ethically presented in the first place. This creates more work for the consumer, who must retroactively opt out.

Of course, Google isn’t alone in using Dark Patterns to coerce users into “consenting” to different permissions. For example, in the image immediately below, Facebook Messenger’s SMS feature presents itself when you first download the application. Giving SMS permission would mean making Facebook Messenger the default texting application for your phone. Note the bright blue “OK”, as opposed to the less prominent “Not Now”.

Facebook Messenger’s SMS permission prompt, shown on first install.

Likewise, in the next image immediately below, Venmo’s onboarding encourages users to connect to Facebook and sync the contacts from their phones. Note how “Connect Facebook” is presented as the bolder and more apparent option, potentially cross-sharing robust profiles of information from your Facebook network.

These are classic Dark Patterns, deploying UX design against consumer privacy and in favor of corporate profit.

What is “Opinionated Design”?

Of course, UX design can also guide users to protect their safety. “Opinionated Design” uses the same techniques as Dark Patterns (persuasive visual indicators, bolder options, and compelling wording), but in the user’s interest. For example, the Google Chrome security team used the design principles of “Attractiveness of Choice” and “Choice Visibility” to effectively warn some users about SSL hazards, as discussed in their 2015 report. When the designer and product team value the user’s safety, they can guide the user away from particularly vulnerable situations while browsing.

The common thread between Opinionated Design and Dark Patterns is the power of the designer behind the technology to nudge the user toward actions that the business would like the user to take. In the case of Google Chrome’s SSL warnings, explanations and clear guidance back to safety help prevent abuse of a person navigating the web.

One example of Opinionated Design:

SSL warnings are presented to the user with brief explanations of why the connection is not safe. Note how “Back to safety” is boldly presented to guide the user back from a potential attack.

Privacy by Default

Part of the solution is new legislation that requires companies to obtain opt-in consent that is easy for users to understand before they harvest and monetize users’ data. To do this, UX design must pivot from using Dark Patterns to satisfy business metrics. Among other things, it should:

  • Decouple the on-boarding process for devices and applications from the consent process.
  • Visually display equally weighted options on pages that involve consent to data collection, use, and sharing.
  • Default to the “no” option during setup; consumers feel uneasy about privacy, and should not have to hunt for the protective choice.
  • Stop coercing “consent” for lucrative data bundling: it may satisfy a temporary metric, but public distrust of the platform will outweigh any gains from unethical design.
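The checklist above can be sketched in code. The following is a purely hypothetical settings model (the setting names and the `ConsentSettings` class are illustrative, not any vendor's actual API): every data-sharing toggle starts off, consent is recorded only on an explicit per-feature opt-in, and reverting is a single call rather than a buried menu.

```python
from dataclasses import dataclass, field

@dataclass
class ConsentSettings:
    # Privacy by default: every toggle is off until the user acts.
    toggles: dict[str, bool] = field(
        default_factory=lambda: {
            "location_history": False,
            "web_app_activity": False,
            "contact_sync": False,
        }
    )

    def opt_in(self, name: str) -> None:
        # Consent is explicit and per-feature, never bundled.
        if name not in self.toggles:
            raise KeyError(f"unknown setting: {name}")
        self.toggles[name] = True

    def opt_out(self, name: str) -> None:
        # Opting back out is as easy as opting in.
        self.toggles[name] = False

settings = ConsentSettings()
assert not any(settings.toggles.values())  # nothing shared by default
settings.opt_in("location_history")        # an explicit user choice
```

The design point is the default: a user who clicks through setup without reading shares nothing, which is the opposite of the on-boarding flows criticized above.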

We must continue this critical discussion around consent and privacy, and urge product designers and managers to build transparency into their applications and devices. Privacy doesn’t have to be painful and costly, if it is integrated in the beginning of UX design, rather than stapled on at the end.

Author: Alexis Hancock

Powerful Permissions, Wimpy Warnings: Installing a Root Certificate Should be Scary

More lessons from “Facebook Research”

Last week, Facebook was caught using a sketchy market research app to gobble up large amounts of sensitive data about user activity, after instructing users to alter the root certificate store on their phones. A day later, Google pulled a similar iOS “research program” app. Both of these programs are a clear breach of user trust that we have written about extensively.

This news also drew attention to an area both Android and iOS could improve on. Asking users to alter root certificate stores gave Facebook the ability to intercept network traffic from users’ phones even when that traffic is encrypted, making users’ otherwise secure Internet traffic and communications available to Facebook. How the devices alert users to this possibility—the “UX flow”—on both Android and iOS could be improved dramatically.

To be clear, Android and iOS should not ban these capabilities altogether, as Apple has already done for sideloaded applications and VPNs. The ability to alter root certificate stores is valuable to researchers and power users, and should never be locked down for device owners. A root certificate allows researchers to analyze encrypted data that a phone’s applications are sending to third parties, exposing whether they’re exfiltrating credit-card numbers or health data, or peddling other usage data to advertisers. However, Facebook’s manipulation of regular users into granting this ability for malicious reasons shows the need for a clearer UX and more obvious messaging.

Confusing prompts for adding root certificates

When regular users are manipulated into installing a root certificate on their device, it may not be clear that this allows the owner of the root certificate to read any encrypted network traffic.

On both iOS and Android, users installing a root certificate click through a process filled with vague jargon. This is the explanation users get, with inaccessible jargon bolded.

Android: “Note: The issuer of this certificate may inspect all traffic to and from the device.”

iOS: “Installing the certificate ‘…’ will add it to the list of trusted certificates on your iPhone. This certificate will not be trusted for websites until you enable it in Certificate Trust Settings.”

Android’s warning before adding a root certificate is some small red text filled with jargon.

iOS’s warning is much larger, but doesn’t explain at all what significance this action may have to a non-technical user.

Regular users probably don’t know about the X.509 Certificate ecosystem, who certificate issuers are, what it means to “trust” a certificate, and its relationship to encrypting their data. On Android, the warning is vague about who has what capabilities: an “issuer … may … inspect all traffic”. On iOS, there’s no explanation whatsoever, even in the “Certificate Trust Settings,” about why this may be a dangerous action.
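For readers who do know some code, what “trusting” a certificate means can be made concrete. The following is a minimal, purely illustrative sketch using Python’s standard `ssl` module: a TLS client’s trust is literally a list of root CAs, and adding one entry to that list is all an interceptor needs.

```python
import ssl

# A freshly created client context trusts no certificate authorities at
# all: every TLS handshake it attempted would fail verification.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
assert ctx.verify_mode == ssl.CERT_REQUIRED  # verification is mandatory
empty = ctx.cert_store_stats()["x509_ca"]

# Loading the platform's CA bundle is what makes ordinary HTTPS "just
# work": a server's chain only has to end at any one of these roots.
ctx.load_default_certs()
system = ctx.cert_store_stats()["x509_ca"]

# load_verify_locations(cadata=...) is the programmatic twin of the
# phone's "install root certificate" prompt: one extra PEM in this list,
# and whoever holds that CA's private key can transparently intercept
# every connection this context verifies.
print(f"trusted root CAs: {empty} -> {system}")
```

The asymmetry is the point: the list is invisible to most users, yet a single appended entry silently changes who can read their “encrypted” traffic.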

Security-compromising actions should have understandable messaging for non-technical users

The good news: it’s possible to get this sort of messaging right.

For instance, these dangers also apply in browsers, where the warnings to users are much clearer. Compare the above messaging flow for trusting root certificates on your phone to the equivalent warnings in browsers when you hit a website with a self-signed or untrusted certificate. Chrome warns in very large letters, “Your connection is not private,” and Firefox similarly announces, “Your connection is not secure.” Chrome’s messaging even lists the types of sensitive data that may be exfiltrated: “passwords, messages, or credit cards.” Changing your browser’s root certificate store then involves multiple steps hidden behind an “Advanced” button.

Chrome’s warning on websites with self-signed certificates. The messaging is clear and understandable, and changing your browser’s root certificate store then involves multiple steps hidden behind an “Advanced” button.

The prompt that appears when entering the developer console on the Facebook website.

Another good example comes from Facebook itself: when you open a browser developer console on Facebook’s website, a big red “Stop!” appears to prevent users not familiar with the console from doing something dangerous. Here, Facebook goes out of its way to warn users about the dangers of using a feature meant for researchers and developers. Facebook’s “market research” app, Android, and iOS did none of this.

The answer should not be to vilify root certificates and their capabilities in general. Tools like this prove themselves invaluable to security researchers and privacy experts. At the same time, they should not be presented to general users without abundantly clear messaging and design to indicate their potential dangers.

Author: Sydney Li

The Final Version of the EU’s Copyright Directive Is the Worst One Yet

Despite ringing denunciations from small EU tech businesses, giant EU entertainment companies, artists’ groups, technical experts, and human rights experts, and the largest body of concerned citizens in EU history, the EU has concluded its “trilogues” on the new Copyright Directive, striking a deal that—amazingly—is worse than any in the Directive’s sordid history.

Take Action

Stop Article 13

Goodbye, protections for artists and scientists

The Copyright Directive was always a grab bag of updates to EU copyright rules—which are long overdue for an overhaul, given that it’s been 18 years since the last set of rules was ratified. Some of its clauses gave artists and scientists much-needed protections: artists were to be protected from the worst ripoffs by entertainment companies, and scientists could use copyrighted works as raw material for various kinds of data analysis and scholarship.

Both of these clauses have now been gutted to the point of uselessness, leaving the giant entertainment companies with unchecked power to exploit creators and arbitrarily hold back scientific research.

Having dispensed with some of the most positive versions of the Directive, the trilogues have also managed to make the (unbelievably dreadful) bad components of the Directive even worse.

A dim future for every made-in-the-EU platform, service and online community

Under the final text, any online community, platform or service that has existed for three or more years, or is making €10,000,001/year or more, is responsible for ensuring that no user ever posts anything that infringes copyright, even momentarily. This is impossible, and the closest any service can come to it is spending hundreds of millions of euros to develop automated copyright filters. Those filters will subject all communications of every European to interception and arbitrary censorship if a black-box algorithm decides their text, pictures, sounds or videos are a match for a known copyrighted work. They are a gift to fraudsters and criminals, to say nothing of censors, both government and private.
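To see why such filters are both blunt and dangerous, consider a deliberately naive sketch of a fingerprint-matching upload filter. This is purely illustrative (real systems use fuzzy perceptual hashing at vast scale, not exact hashes), but it shows the structural problem: the filter sees only bytes, so it cannot distinguish infringement from quotation, parody, review, or a rights holder’s own upload.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    # Stand-in for a content fingerprint; real filters use perceptual
    # hashes that tolerate re-encoding, which adds false positives too.
    return hashlib.sha256(data).hexdigest()

# Rights holders register claimed works with the platform...
blocklist = {fingerprint(b"famous song excerpt")}

def allow_upload(data: bytes) -> bool:
    # ...and every user post is checked against the registry. There is
    # no input here for context, licensing, or fair-dealing exceptions:
    # a match is a block, full stop.
    return fingerprint(data) not in blocklist

assert allow_upload(b"original home video")
assert not allow_upload(b"famous song excerpt")  # blocked, context-blind
```

Note that even this toy version blocks a legitimate quotation of the registered work exactly as readily as wholesale piracy; scaling it up changes the matching technique, not that blindness.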

These filters are unaffordable for all but the largest tech companies, all based in the USA, and the only way Europe’s homegrown tech sector can avoid the obligation to deploy them is to stay under ten million euros per year in revenue, and also shut down after three years.

America’s Big Tech companies would certainly prefer not to have to install these filters, but the possibility of being able to grow unchecked, without having to contend with European competitors, is a pretty good second prize (which is why some of the biggest US tech companies have secretly lobbied for filters).

Amazingly, the tiny, useless exceptions in Article 13 are too generous for the entertainment industry lobby, and so politicians have given them a gift to ease the pain: under the final text, every online community, service or platform is required to make “best efforts” to license anything their users might conceivably upload, meaning that they have to buy virtually anything any copyright holder offers to sell them, at any price, on pain of being liable for infringement if a user later uploads that work.

News that you’re not allowed to discuss

Article 11, which allows news sites to decide who can link to their stories and charge for permission to do so, has also been worsened. The final text clarifies that any link that contains more than “single words or very short extracts” from a news story must be licensed, with no exceptions for noncommercial users, nonprofit projects, or even personal websites with ads or other income sources, no matter how small.

Will Members of the European Parliament dare to vote for this?

Now that the Directive has emerged from the Trilogue, it will head to the European Parliament for a vote by the whole body, either during the March 25-28 session or the April 15-18 session—with elections scheduled in May.

These elections are critical: the Members of the European Parliament are going to be fighting an election right after voting on this Directive, which is already the most unpopular legislative effort in European history, and that’s before the public gets wind of these latest changes.

Let’s get real: no EU political party will be able to campaign for votes on the strength of passing the Copyright Directive—but plenty of parties will be able to drum up support to throw out the parties that defied the will of voters and risked the destruction of the Internet as we know it to pour a few million Euros into the coffers of media companies and newspaper proprietors—after those companies told them not to.

There’s never been a moment where your voice mattered more

Watch this space. We will be working with allies and digital rights activists across the EU to make this upcoming Parliamentary vote an issue that every Member of the European Parliament is well informed on. Amid the lobbying of Big Tech and Big Media, we’ll be explaining what the Directive means for everyday Internet users. And together, we’re going to make sure that every MEP knows that the voters of Europe are watching them and taking note of how they vote.

All that it takes is for you to speak up. Over four million Internet users have signed the petition against the Directive. If you can do that, you can also pick up the phone and call your MEP. Tell them why you’re against the Directive, what it means for you, and what you expect your representatives to do in the forthcoming plenary vote. It really is the last chance to make your voice heard.

Take Action

Stop Article 13

Author: Cory Doctorow

National Emergencies: Constitutional and Statutory Restrictions on Presidential Powers

When a president threatens to exercise the power to declare a national emergency, our system of checks and balances faces a crucial test. With President Trump threatening such a declaration in order to build his proposed physical border wall, that test is an important one, and it could quickly implicate your right to privacy and to a transparent government.

Like the Constitution, statutory powers do not justify a presidential declaration of emergency powers to build a proposed border wall.

EFF has long tangled with governmental actions rooted in presidential power. From mass telephone records collection to tapping the Internet backbone, and from Internet metadata collection to biometric tracking and social media monitoring, claims of national crisis have often enabled digital policies that have undermined civil liberties. Those policies quickly spread far beyond their initial justification. We have also seen presidential authorities misused to avoid the legislative process—and even used to try to intimidate courts and prevent them from doing their job to protect our rights.

So when the President threatens to use those same emergency authorities to try paying for a border wall after Congress has refused, we watch closely. And so should you.

National Emergencies and the Constitution

The tension created by the constitutional separation of powers among the president, Congress, and the courts during times of national emergency is not new. 

President Lincoln famously suspended habeas corpus during the American Civil War. Later, President Roosevelt authorized the detention and internment of 100,000 Japanese-Americans, a profound constitutional offense that ranks among our country’s most severe violations of our founding commitments. Finally, President Truman sought to use emergency powers to nationalize a steel mill to supply the armed forces during their “police action” in Korea.

The last of these prompted a lawsuit that has served as the touchstone for consideration of these questions ever since. The case is known as the Steel Seizure Cases or Youngstown Sheet & Tube Co. v. Sawyer (1952). In Youngstown, the Court ruled against President Truman, holding that he did not have the power to seize a privately owned steel mill.

Justice Jackson’s concurring opinion in Youngstown set forth the analytical framework that has come to define this area of law. It explains that executive power stands at its lowest ebb when confronting an explicit act of Congress denying the purported authority, as President Truman did when attempting to seize steel mills. In contrast, executive power attains maximal reach when authorized (either explicitly or by implication) by Congress, such as when Congress has authorized military action. In between, the executive branch has flexible authority within a “zone of twilight” on issues that Congress has not addressed.  

Flash forward to today. Congress has not appropriated funds to build the wall requested by the President. Given that Congress is the branch of government with the exclusive power to tax and spend, the most obvious way to characterize this refusal is as a rejection of the President’s request—placing the President at his “lowest ebb” of power under the Youngstown analysis.

Alternatively, Congress’ silence could be read to indicate that it hasn’t addressed the issue. Congress regularly grants the President an annual sum of funding for “discretionary” spending purposes, from which the administration could claim funds have already been provided and that his acts fall within the “zone of twilight.” On the other hand, funds provided in discretionary budgets are designed to fill budgetary gaps of federal agencies in their regular operations. Construing congressional silence as assent would therefore be a stretch.

The choice between these two possibilities will likely be the crux of any legal challenge to the President’s attempt to direct funds absent congressional appropriation. Because Congress exclusively wields the power to appropriate funds and declined to do so here, the courts should overturn any unilateral executive branch action, as they did in Youngstown.

Statutory Powers to Declare Presidential Emergencies

In addition to his constitutional powers, two statutory powers could conceivably justify a presidential declaration of emergency.

First, Congress enacted the National Emergencies Act in 1976. The Act has been invoked dozens of times in the years since, usually to prohibit transactions with foreign powers engaged in violent conflict with U.S. military forces.

On the one hand, the National Emergencies Act does not specify any particular circumstances that must be satisfied before a president can invoke its extraordinary powers. On the other hand, the legislative history preceding its passage reveals Congress’ intent to limit executive fiat by terminating previous open-ended declarations of emergency and providing mechanisms for Congress to limit future declarations.

Until either the Senate or House passes a resolution to terminate a national emergency, however, the Act requires relatively little of a president invoking its authority. It requires the President to periodically report to Congress about any funds the executive branch spends related to a declaration of emergency.

This reporting requirement could be read to imply presidential authority to spend funds in the event of a bona fide emergency. Courts, however, should scrutinize the legitimacy of any claimed emergency, especially when Congress has affirmatively declined to appropriate the full sum sought by the President.

Second, the Robert T. Stafford Disaster Relief and Emergency Assistance Act provides a more well-tested authority for a president to spend funds without congressional appropriation. It, however, has been used only in response to natural disasters, to enable temporary responses to unforeseen events. It has never justified spending funds that Congress has refused to appropriate in response to a previous executive branch request. It also requires the Governor of an affected state to request an emergency declaration from the president, which has yet to happen along the U.S. southern border. In fact, one Governor has done the opposite.

Like the Constitution, statutory powers do not justify a presidential declaration of emergency powers to build a proposed border wall. Given Congress’ refusal to appropriate funds for the president’s proposal, the courts should guard the separation of powers, and not defer to a potential presidential declaration of emergency.

Emergency Powers Have Enabled Secrecy and Mass Surveillance

Letting a supposed national emergency serve as a pretense for extraordinary executive powers outside of congressional approval has proven to be very dangerous for digital civil liberties. As noted above, emergencies have already served as the basis for creating several mass surveillance programs that EFF has worked hard to stop. These same claims have been used to justify tracking immigrants and other marginalized people, tracking that quickly spread far beyond its original justification.

Beyond the powers themselves, the secrecy that has come along with those efforts threatens our constitutional checks and balances by truncating legislative oversight and judicial review. The decade we’ve spent trying to get a public federal court to consider the government’s unconstitutional mass surveillance programs has informed our perspective on that problem.

Put simply, there is no national emergency exception to the Constitution, or to the key statutes that constrain executive branch authority here. Similarly, national emergencies have never legitimately justified the continual monitoring of hundreds of millions of Americans without a shred of suspicion. 

As the debate over border policy continues, we urge both the political branches and the judiciary to refuse to allow the false pretenses about a national emergency to rip holes into our system of checks and balances. We know all too well that such holes can too easily be used to justify a further expansion of both surveillance and secrecy.

Author: Cindy Cohn

Entrepreneurs Tell USPTO Director Iancu: Patent Trolls Aren’t Just “Monster Stories”

Patent trolls aren’t a myth. They aren’t a bedtime story. Ask a software developer—they’re likely to know someone who has been sued or otherwise threatened by one, if they haven’t been themselves.

Unfortunately, the new director of the U.S. Patent and Trademark Office (USPTO) is in a serious state of denial about patent trolls and the harm they cause to technologists everywhere. Today a number of small business owners and start-up founders have submitted a letter [PDF] to USPTO Director Andrei Iancu telling him that patent trolls remain a real threat to U.S. businesses. Signatories range from mid-sized companies like Foursquare and Life360 to one-person software enterprises like Ken Cooper’s. The letter explains the harm, cost, and stress that patent trolls cause businesses.

Patent trolls aren’t a thing that happens once in a while or an exception to the rule. Over the past two decades, troll litigation has become the rule. There are different ways to measure exactly what a “troll” is, but by one recent measurement, a staggering 85 percent of recently filed patent lawsuits in the tech sector were filed by trolls.

That’s almost 9 out of 10 lawsuits being filed by an entity with no real product or service. Because the Patent Office issues so many low-quality software patents, the vast majority of these suits are brought by entities that played no role in the development of the real-world technology they attack. Instead, trolls use vague and overbroad patents to sue the innovators who create products and services. This is how we end up with patent trolls suing people for running an online contest or making a podcast.

Three Steps Forward, Two Steps Back

The news isn’t all bad. Reformers have made substantial progress in the fight against patent trolls. A string of positive Supreme Court decisions, beginning with the 2006 eBay v. MercExchange decision, and going on through 2014’s Alice v. CLS Bank ruling, have made it feasible to fight trolls in court. Meanwhile, the America Invents Act created a useful new process for challenging bad patents right in the patent office—the inter partes review.

Supreme Court decisions have made it harder for patent trolls to rope defendants into remote, inappropriate venues like the Eastern District of Texas; easier to award fees against patent owners who abuse the system; and perhaps most importantly, the Alice decision has made it easier to knock out bogus software patents more quickly.

Those victories haven’t solved the problem. Still, they have slowed down the onslaught of litigation by patent trolls and the lawyers who help them. Just to focus on a single, prolific bad actor, take the shell company ArrivalStar, which later morphed into Shipping & Transit LLC. Several years ago, the Shipping & Transit lawsuit machine was able to scare private companies and even public transit agencies into coughing up $80,000 or more for valueless “licenses.” Shipping & Transit ultimately skimmed hundreds of thousands of dollars from cash-strapped cities and millions more from companies large and small. Later, bolstered by the new reforms, small companies fought back—and won. Today, Shipping & Transit is bankrupt, can’t sue anyone, and admits that patents it used to demand millions in licensing fees are worth just $1.

The victories we’ve won are why the trolls and their allies are out in force this year. They’re pushing awful legislation like the STRONGER Patents Act, which would roll back just about every reform that has given victims of trolling a fighting chance. The trolls and abusive companies that have profited off the patent system have now won a considerable prize. The man who runs the office where patents are granted has said clearly that further reforms aren’t necessary. It’s disappointing, but considering their over-the-top lobbying efforts, it isn’t surprising.

Trolls Are All Too Real

Director Iancu has gone much further than saying he’s skeptical of reform. Iancu appears to question whether patent trolls even exist. In a recent speech, he called accounts of patent trolling “scary monster stories.” Iancu clearly isn’t listening to the stories of small businesses hit by patent demands week after week. But we do hear from those businesses—over and over again. We won’t stand idle while Iancu denies basic facts about what’s going on in the U.S. patent system.

It isn’t hard to find entrepreneurs who have been hurt by patent trolls. We highlight just a few of those innovators in our “Saved by Alice” series. These business owners endured years of stress, huge costs, and sometimes bankruptcy, because they were threatened by patent trolls that produce nothing. And they are the few who are brave enough to speak up—many more stay silent, lest they be targeted with yet another expensive lawsuit.

These aren’t myths. The flood of lawsuits we witness isn’t an opinion. The cases are real and the damage they do to defendant companies is undeniable. Iancu is choosing to ignore this situation, to satisfy his audience. And the audience he’s chosen says it all. When Iancu called patent trolls “monster stories,” he was speaking to a gathering of lawyers and judges in the Eastern District of Texas—the heart of the problem. The signaling couldn’t have been more clear. Iancu is working to overturn hard-won reforms and to re-open the spigot of patent trolling dollars that flows into that skewed venue.

When Iancu hails the innovation produced by the U.S. tech sector, he’s absolutely correct. Innovation in software and tech is everywhere we look. But patent trolls are there, too, and easy to find. When it comes to patents, the magical thinking isn’t coming from reformers. Rather, it’s on full display at the exclusive conferences that Iancu is speaking at, surrounded by patent owners, patent lawyers, and patent-licensing insiders. We’re in danger of heading back to a wrongheaded mentality that “more patents equals more innovation.” That’s the real myth.

Author: Joe Mullin

French Data Protection Authority Takes on Google

France’s data protection authority is first out of the gate with a big decision regarding a high-profile tech company, and every other enforcer in Europe is taking notes. On January 21, France’s CNIL fined Google 50 million euros (about $57 million U.S.) for breaches of the General Data Protection Regulation (GDPR). The decision relates to Google’s intrusive ad personalization systems, and its inadequate systems of notice and consent when users create accounts for Google services on Android devices.

Since the GDPR came into effect on May 25, 2018, many companies have simulated compliance with the law while manipulating users into granting them consent by means of deceptive interface design and behavioral nudging. If a major company is seeking to get a free pass from another national data protection authority, that decision will now be critically contrasted with the approach of the CNIL.

Hopefully, the CNIL’s recent decision is a harbinger of a robust enforcement approach which will deliver critical privacy protections to users.

The Complaints from Privacy Advocates

Under the GDPR, processing of personal data is only allowed where there is a “legal basis,” such as the consent of the user, and users are granted extensive rights over their data. The CNIL found Google in breach of the law’s transparency and information requirements, and as a result found invalid the so-called “consent” that Google sought to rely upon.

The CNIL’s investigation was prompted by two complaints from digital rights organizations, None of Your Business (NOYB) and La Quadrature du Net (LQDN). NOYB was established by data protection activist Max Schrems, and this group filed similar complaints against Android, Facebook, WhatsApp, and Instagram. NOYB objected to the privacy policy which Android users were asked to agree to, and argued that the consent was invalid and thus illegal.

LQDN’s complaint addresses the consent process around the creation of an account to access Google services. It argued that Google does not have a valid legal basis for using consumer data for the personalization of content, the behavioral analysis of users, and the targeting of ads on YouTube, Gmail, and Google Search.

Invalid Transparency and Information

The GDPR places much importance on companies informing users about how their data is used and what rights they have to intervene in the processing. Article 13 specifies information that must be disclosed to the user before any processing takes place, such as the nature and purpose of collection, and how long the data will be retained. Article 12 requires that this information be conveyed “in a concise, transparent, intelligible and easily accessible form.” The aim is to ensure that users have control over what data is taken from them, and how it is used and shared.

The CNIL found that Google violated its duties of transparency and information. Specifically, Google obfuscated “essential information” about data processing purposes, data storage periods, and categories of personal information used for ads personalization. For example, the relevant information was “excessively disseminated” over multiple documents, and required users to click through five or six pages. Moreover, information was “not always clear” due to “generic and vague” verbiage. Yet the “massive and intrusive” scope and detail of the data collected by Google from its array of services and sources placed an increased obligation on the company to make its practices clear and comprehensible to users.

Invalid Consent

In its communications with the CNIL, Google asserted that the legal basis for its personalization of ads was the consent of the user. The CNIL rejected this assertion, for two reasons. First, the CNIL found that due to the breaches of Articles 12 and 13 discussed above, the consent acquired by Google was not properly informed.

Second, the GDPR requires user consent to be specific and unambiguous, and the latter requires a positive act by the user to indicate their agreement. Yet Google had pre-ticked the boxes allowing it to use web and app history for behavioral targeting, a method specifically excluded by the Regulation. The CNIL cites Article 29 Working Party guidance which requires a user to take steps to consent. In the EU, the user must opt-in – consent cannot be implied on the basis that users theoretically have a way of opting out.

Unanswered Questions

While the CNIL found Google in breach of the GDPR, it left unaddressed key arguments of the complainants. NOYB homes in on the imbalance of power between Android and individual users. Its dominance of the market, and the absence of effective alternatives, means that users have little option but to “consent” or be excluded from the ecosystem. Recital 42 of the GDPR states that consent “should not be regarded as freely given if the data subject has no genuine or free choice or is unable to refuse or withdraw consent without detriment.” This is one more reason why companies must be required to offer access to their service even when users reject tracking and behavioral personalization.

LQDN challenges the practice of tying, or making acceptance of personalized advertising a condition of access to Google’s services. Article 7(4) states that “when assessing whether consent is freely given, utmost account shall be taken of whether… the provision of a service is conditional on consent to the processing of personal data that is not necessary for the performance of that contract.” Behavioral analysis of user data for the personalization of advertising is not necessary to deliver mail or video hosting services to users. If tying is allowed, users will be confronted by cookie walls everywhere, requiring that they agree to tracking in exchange for access to services.

The Stakes Have Been Raised in Europe

The $57 million fine highlights the increased sanctions available under the GDPR. In November, the UK’s Information Commissioner’s Office imposed the then-maximum fine of £500,000 on Facebook for breaches uncovered as part of the Cambridge Analytica investigations under the old law. $57 million is certainly manageable for a company with an annual turnover of over €96 billion, but the ramifications of this decision do not end with the payment of the fine.

Google is the subject of multiple other investigations in Europe, and this is unlikely to be the last finding that it violated the GDPR. It will have to remedy its violations, change its practices, and improve user privacy. This decision also sends a shot across the bows of other companies with worse transparency and few scruples about deceiving users into granting consent.

Google has four months to appeal this decision to the Conseil D’État, France’s Supreme Court for administrative matters.

Author: Alan Toner