Certbot Leaves Beta with the Release of 1.0

Earlier this week EFF released Certbot 1.0, the latest version of our free, open source tool that helps websites encrypt their traffic. The release of 1.0 is a significant milestone for the project and is the culmination of the work done over the past few years by EFF and hundreds of open source contributors from around the world.

Certbot was first released in 2015 to automate the process of configuring and maintaining HTTPS encryption for site administrators by obtaining and deploying certificates from Let’s Encrypt. Since its initial launch, many features have been added, including beta support for Windows, automatic nginx configuration, and support for over a dozen DNS providers for domain validation.
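
For a site administrator, that automation typically comes down to one command. Here is a minimal sketch of scripted use, wrapped in Python for illustration; the domain, email address, and choice of the nginx plugin are placeholder assumptions, though the flags themselves are documented Certbot options:

    import subprocess

    # Obtain and install a certificate via Certbot's nginx plugin.
    # example.com and the email address are placeholders for your own values;
    # --agree-tos and --non-interactive suit scripted, unattended use.
    subprocess.run(
        ["certbot", "--nginx",
         "-d", "example.com",
         "-m", "admin@example.com",
         "--agree-tos", "--non-interactive"],
        check=True,
    )

    # Let's Encrypt certificates are short-lived, so renewal matters too;
    # --dry-run exercises the renewal path without touching live certificates.
    subprocess.run(["certbot", "renew", "--dry-run"], check=True)

In practice, most Certbot packages also install a cron job or systemd timer, so renewal happens automatically without any scripting at all.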

Certbot is part of EFF’s project to encrypt the web. Using HTTPS instead of unencrypted HTTP protects people from eavesdropping, content injection, and cookie stealing, which can be used to take over your online accounts. Since the release of Let’s Encrypt and Certbot, the percentage of web traffic using HTTPS has increased from 40% to 80%. This is significant progress in building a web that is encrypted by default, but there is more work to be done.

The release of 1.0 officially marks the end of Certbot’s beta phase, during which it has helped over 2 million users maintain HTTPS access to over 20 million websites. We’re very excited to see how many more users, and more websites, Certbot will assist in 2020 and beyond.

It’s thanks to our 30,000+ members that we’re able to maintain Certbot and push for a 100% encrypted web.

Support Certbot!

Contribute to EFF’s security-enhancing tech projects

Author: Brad Warren

The FCC Is Opening up Some Very Important Spectrum for Broadband

Decisions about who gets to use the public airwaves, and how, impact our lives every day. From the creation of WiFi routers to the public auctions that gave us more than two options for our cell phone providers, the Federal Communications Commission (FCC)’s decisions reshape our technological world. And the FCC is about to make another one.

In managing the public spectrum, aka “the airwaves,” the FCC has a responsibility to ensure that commercial uses benefit the American public. Traditionally, the FCC either assigns spectrum to certain entities with specific use conditions (television, radio, and broadband, for example, are “licensed uses”) or simply designates a portion of spectrum as an open field with no specific use in mind, called “unlicensed spectrum,” which is what WiFi routers use.

The FCC is about to make two incredibly important spectrum decisions. The first we’ve written about previously; in short, the FCC intends to reallocate spectrum currently used for satellite television to broadband providers through a public auction. The second is reassigning spectrum in the 5.9 GHz frequency band from being exclusively licensed to the auto industry to being an open, unlicensed use.

We support this FCC decision because unlicensed spectrum allows innovators big and small to make use of a public asset without paying license fees or obtaining advance government permission. Users gain improved wireless services, more competition, and more services making the most of an asset that belongs to all of us.

Why Is 5.9 GHz Licensed to the Auto Industry?

In 1999, the FCC allocated a portion of the public airwaves to a new type of car safety technology using Dedicated Short Range Communications (DSRC). In theory, cars equipped with DSRC devices on the 5.9 GHz band would communicate with each other and coordinate to avoid collisions. Twenty years later, very few cars actually use DSRC. In fact, so few cars are using it that a study found that its current automotive use is worth about $6.2 million, while opening up the spectrum would be worth tens of billions of dollars. In other words, a public asset that could be used for next-generation WiFi is effectively lying fallow until the FCC changes part of the license held by the auto industry into an unlicensed use.

Even though it is barely using the spectrum, what are the chances the auto industry will voluntarily give up exclusive access to a multi-billion-dollar public asset it gets for free? That is why last-ditch arguments that the auto industry must maintain exclusive use over a huge amount of spectrum as a matter of public safety ring hollow. Nothing the FCC is doing here prevents cars from using this spectrum, and because the band is high-frequency, a great deal of data can travel over the airwaves in question.

It isn’t the FCC’s job to stand idly by while someone essentially squats on public property and lets it go to waste. Rather, the FCC’s job is to continually evaluate who is given special permission by the government and to decide whether they are producing the most benefit to the public.

Unlicensing 5.9 GHz Means Faster WiFi, Improved Broadband Competition, and Better Wireless Service

Spectrum is necessary for transmitting data, and more of it means more available bandwidth. WiFi routers today have a speed limit because you can only move as much data as you have spectrum available; the frequency range also affects how much data you can move. That is why earlier WiFi routers, which used 2.4 GHz, generally transmitted hundreds of megabits per second, while today’s routers also use 5 GHz to deliver gigabit speeds. More spectrum at 5.9 GHz, with properties similar to the bands current gigabit routers use, will let the next generation of WiFi routers transmit even greater amounts of data.
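
A rough back-of-the-envelope sketch (our illustration, not anything in the FCC record) of why channel width matters: the Shannon-Hartley theorem caps throughput at C = B * log2(1 + SNR), so at a fixed signal-to-noise ratio, doubling the channel width doubles the ceiling.

    import math

    def shannon_capacity_mbps(bandwidth_hz: float, snr_linear: float) -> float:
        """Shannon-Hartley upper bound: C = B * log2(1 + SNR), in Mbps."""
        return bandwidth_hz * math.log2(1 + snr_linear) / 1e6

    # Same ~20 dB signal-to-noise ratio (100x), three WiFi channel widths:
    for width_mhz in (20, 80, 160):
        cap = shannon_capacity_mbps(width_mhz * 1e6, 100)
        print(f"{width_mhz} MHz channel: ~{cap:.0f} Mbps theoretical ceiling")

Real-world throughput falls well short of that ceiling, but the proportionality holds: a 160 MHz channel of the kind newly unlicensed spectrum makes room for has roughly eight times the headroom of a classic 20 MHz channel.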

Adding more high-capacity spectrum to the unlicensed space also means that smaller wireless ISPs that compete with incumbents will get more capacity for free. Small wireless ISPs (WISPs) typically rely on unlicensed spectrum because they lack the multi-billion-dollar financing that AT&T and Sprint have to purchase exclusive licenses. That lack of financing also limits their ability to immediately deploy fiber to the home, and unlicensed spectrum lets them bypass those infrastructure costs until they have enough customers to fund fiber construction. In essence, improving the competitiveness of small wireless players is an essential part of eventually reaching a fiber-for-all future, because smaller competitive ISPs are more aggressive in deploying fiber to the home than the big incumbent telecommunications companies, which have generally abandoned their buildouts.

Lastly, wireless broadband service in general will improve, because unlicensed spectrum has dramatically reduced congestion on cell towers by offloading traffic to WiFi hotspots and routers. This offloading is so pervasive that, these days, 59 percent of our smartphone traffic travels over WiFi instead of 4G, and an estimated 71 percent of traffic in the early years of 5G will actually travel over WiFi. More offload capacity at higher speeds will also mean less congestion on public and small-business guest WiFi.

As it stands, the 5.9 GHz band of the public airwaves is barely serving the public at all. The FCC deciding to open it up has only benefits and is a good idea.

Author: Ernesto Falcon

EFF Releases Certbot 1.0 to Help More Websites Encrypt Their Traffic

San Francisco – The Electronic Frontier Foundation (EFF) today released Certbot 1.0: a free, open source software tool to help websites encrypt their traffic and keep their sites secure.

Certbot was first released in 2015, and since then it has helped more than two million website administrators enable HTTPS by automatically deploying Let’s Encrypt certificates. Let’s Encrypt is a free certificate authority that EFF helped launch in 2015, now run for the public’s benefit through the Internet Security Research Group (ISRG).

HTTPS is a huge upgrade in security from HTTP. For many years, website owners chose to implement HTTPS for only a small number of pages, like those that accepted passwords or credit card numbers. In recent years, however, it has become clear that all web pages need protection. Pages served over HTTP are vulnerable to eavesdropping, content injection, and cookie stealing, which can be used to take over your online accounts.

“Securing your web browsing with HTTPS is an important part of protecting your information, like your passwords, web chats, and anything else you look at or interact with online,” said EFF Senior Software Architect Brad Warren. “However, Internet users can’t do this on their own—they need site administrators to configure and maintain HTTPS. That’s where Certbot comes in. It automates this process to make it easy for everyone to run secure websites.”

Certbot is part of EFF’s larger effort to encrypt the entire Internet. Along with our browser add-on, HTTPS Everywhere, Certbot aims to build a network that is more structurally private, safe, and protected against censorship. The project is encrypting traffic to over 20 million websites, and has recently added beta support for Windows-based servers. Before the release of Let’s Encrypt and Certbot, only 40% of web traffic was encrypted. Now, that number is up to 80%.

“A secure web experience is important for everyone, but for years it was prohibitively hard to do,” said Max Hunter, EFF’s Engineering Director for Encrypting the Internet. “We are thrilled that Certbot 1.0 now makes it even easier for anyone with a website to use HTTPS.”

For more about Certbot:
https://certbot.eff.org/

Author: Rebecca Jeschke

Sen. Cantwell Leads With New Consumer Data Privacy Bill

There is a lot to like about U.S. Sen. Cantwell’s new Consumer Online Privacy Rights Act (COPRA). It is an important step towards the comprehensive consumer data privacy legislation that we need to protect us from corporations that place their profits ahead of our privacy.

The bill, introduced on November 26, is co-sponsored by Sens. Schatz, Klobuchar, and Markey. It fleshes out the framework for comprehensive federal privacy legislation announced a week earlier by Sens. Cantwell, Feinstein, Brown, and Murray, who are, respectively, the ranking members of the Senate committees on Commerce, Judiciary, Banking, and Health, Education, Labor and Pensions.

This post will address COPRA’s various provisions in four groupings: EFF’s key priorities, the bill’s consumer rights, its business duties, and its scope of coverage.

EFF’s Key Priorities

COPRA satisfies two of EFF’s three key priorities for federal consumer data privacy legislation: private enforcement by consumers themselves; and no preemption of stronger state laws. COPRA makes a partial step towards EFF’s third priority: no “pay for privacy” schemes.

Private enforcement. All too often, enforcement agencies lack the resources or political will to enforce statutes that protect the public, so members of the public must be empowered to step in. Thus, we are pleased that COPRA has a strong private right of action to enforce the law. Specifically, in section 301(c), COPRA allows any individual who is subjected to a violation of the Act to bring a civil suit. They may seek damages (actual, liquidated, and punitive), equitable and declaratory relief, and reasonable attorney’s fees.

COPRA also bars enforcement of pre-dispute arbitration agreements, in section 301(d). EFF has long opposed these unfair limits on user enforcement of their legal rights in court.

Further, COPRA in section 301(a) provides for enforcement by a new Federal Trade Commission (FTC) bureau comparable in size to existing FTC bureaus. State Attorneys General and consumer protection officers may also enforce the law, per section 301(b). It is helpful to diffuse government enforcement in this manner.

No preemption of stronger state laws. COPRA expressly, in section 302(c), does not preempt state laws unless they are in direct conflict with COPRA, and a state law is not in direct conflict if it affords greater protection. This is most welcome. Federal legislation should be a floor and not a ceiling for data privacy protection. EFF has long opposed preemption by federal laws of stronger state privacy laws.

“Pay for privacy.” COPRA only partially addresses EFF’s third priority: that consumer data privacy laws should bar businesses from retaliating against consumers who exercise their privacy rights. Otherwise, businesses will make consumers pay for their privacy, by refusing to serve privacy-minded consumers at all, by charging them higher prices, or by providing them services of a lower quality. Such “pay for privacy” schemes discourage everyone from exercising their fundamental human right to data privacy, and will result in a society of income-based “privacy haves” and “privacy have nots.”

In this regard, COPRA is incomplete. On the bright side, in section 109 it bars covered entities from conditioning the provision of service on the individual’s waiver of their privacy rights. But COPRA allows covered entities to charge privacy-minded consumers a higher price or provide them a lower quality of service. We urge amendment of COPRA to bar such “pay for privacy” schemes.

Consumer Rights Under COPRA

COPRA would provide individuals with numerous data privacy rights that they may assert against covered entities.

Right to opt-out of data transfer. An individual may require a covered entity to stop transferring their data to other entities. This protection, in section 105(b), is an important one. COPRA requires the FTC to establish processes for covered entities to use to facilitate opt-out requests. In doing so, the FTC shall “minimize the number of opt-out designations of a similar type that a consumer must take.” We hope these processes include browser headers and similar privacy settings, such as the “do not track” system, that allow tech users to signal at once to all online entities that they have opted out.
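
To make the idea concrete, here is a sketch of how a header-based signal works, using the older “do not track” header purely as an illustration of the mechanism (COPRA itself does not prescribe this process):

    import urllib.request

    # "DNT: 1" was a one-bit, browser-wide opt-out signal attached to every
    # request; a universal opt-out process could work the same way.
    req = urllib.request.Request("https://example.com",
                                 headers={"DNT": "1"})
    with urllib.request.urlopen(req) as resp:
        print(resp.status, resp.headers.get("Content-Type"))

Because the browser attaches the header to every request, a user expresses the preference once and every site receives it, which is exactly the “minimize the number of opt-out designations” property the bill asks the FTC to pursue.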

Right to opt-in to sensitive data processing. An individual shall be free from any data processing or transfer of their “sensitive” data, unless they affirmatively consent to such processing, under section 105(c). There is an exception for certain “publicly available information.”

The bill has a long list of what is considered “sensitive” data: government-issued identifiers; information about physical and mental health; credentials for financial accounts; biometrics; precise geolocation; communications content and metadata; email, phone number, or account log-in credentials; information revealing race, religion, union membership, sexual orientation, sexual behavior, or online activity over time and across websites; calendars, address books, phone and text logs, photos, or videos on a device; nude pictures; any data processed in order to identify the above data; and any other data designated by the FTC.

Of course, a great deal of information that the bill does not deem “sensitive” is in fact extraordinarily sensitive. This includes, for example, immigration status, marital status, lists of familial and social contacts, employment history, sex, and political affiliation. So COPRA’s list of sensitive data is under-inclusive. In fact, any such list will be under-inclusive, as new technologies make it ever-easier to glean highly personal facts from apparently innocuous bits of data. Thus, all covered information should be free from processing and transfer, absent opt-in consent, and a few other tightly circumscribed exceptions.

Right to access. An individual may obtain from a covered entity, in a human-readable format, the covered data about them, and the names of third parties their data was disclosed to. Affirming this right, in section 102(a), is good. But requesters should also be able to learn the names of the third parties who provided their personal data to the responding entity. To map the flow of their personal data, consumers must be able to learn both where it came from and where it went.

Right to portability. An individual may export their data from a covered entity in a “structured, interoperable, and machine-readable format.” This right to data portability, in section 105(a), is an important aspect of user autonomy and the right-to-know. It also may promote competition, by making it easier for tech users to bring their data from one business to another.

Rights to delete and to correct. An individual may require a covered entity to delete or correct covered data about them, in sections 103 and 104.

Business Duties Under COPRA

COPRA would require businesses to shoulder numerous duties, even if a consumer does not exercise any of the aforementioned rights.

Duty to minimize data processing. COPRA, in section 106, would bar a covered entity from processing or transferring data “beyond what is reasonably necessary, proportionate, and limited” to certain kinds of purposes. This is “data minimization,” that is, the principle that an entity should minimize its processing of consumer data. Minimization is an important tool in the data privacy toolbox. We are glad COPRA has a minimization rule. We are also glad COPRA would apply this rule to all the ways an entity processes data (and not just, for example, to data collection or sharing).

However, COPRA should improve its minimization yardstick. Data privacy legislation should bar companies from processing data except as reasonably necessary to give the consumer what they asked for, or for a few other narrow purposes. Along these lines, COPRA allows processing to carry out the “specific” purpose “for which the covered entity has obtained affirmative express consent,” or to “complete a transaction … specifically requested by an individual.” Less helpful is COPRA’s additional allowance of processing for the purpose “described in the privacy policy made available by the covered entity.” We suggest deletion of this allowance, because most consumers will not read the privacy policy.

Duty of loyalty. COPRA, in section 101, would bar companies from processing or transferring data in a manner that is “deceptive” or “harmful.” The latter term means likely to cause: a financial, physical, or reputational injury; an intrusion on seclusion; or “other substantial injury.” This is a good step. We hope legislators will also explore “information fiduciary” obligations where the duty of loyalty would require the business to place the consumer’s data privacy rights ahead of the business’ own profits.

Duty to assess algorithmic decision-making impact. An entity must conduct an annual impact assessment if it uses algorithmic decision-making to determine: eligibility for housing, education, employment, or credit; distribution of ads for the same; or access to public accommodations. This annual assessment—as described in section 108(b)—must address, among other things, whether the system produces discriminatory results. This is good news. EFF has long sought greater transparency about algorithmic decision-making.

Duty to build privacy protection systems. A covered entity must designate a privacy officer and a data security officer. These officers must implement a comprehensive data privacy program, annually assess data risks, and facilitate ongoing compliance, per COPRA’s section 202. Moreover, the CEO of a “large” covered entity must certify, based on review, the existence of adequate internal controls and reporting structures to ensure compliance. COPRA in section 2(15) defines a “large” entity as one that processes the data of 5 million people or the sensitive data of 100,000 people. These COPRA rules will help ensure that businesses build the privacy-protection systems needed to safeguard consumers’ personal information.

Duty to publish a privacy policy. A covered entity must publish a privacy policy that states, among other things, the categories of data it collects, the purpose of collection, the identity of entities to which it transfers data, and the duration of retention. This language, in section 102(b), will advance transparency.

Duty to secure data. A covered entity must establish and implement reasonable data security practices, as described in section 107.

Scope of Coverage

Consumer data privacy laws must be scoped to particular data, to particular covered entities, and with particular exceptions.

Covered data. COPRA, in section 2(8)(A), protects “covered data,” defined as “information that identifies, or is linked or reasonably linkable to an individual or a consumer device, including derived data.” This term excludes de-identified data, and information lawfully obtained from government records.

We are pleased that “covered data” extends to “devices,” and that “derived” data includes “data about a household” in section 2(11). Some businesses track devices and households, without ascertaining the identity of individuals.

Unfortunately, COPRA defines “covered data” to exclude “employee data,” meaning personal data collected in the course of employment and processed solely for employment in sections 2(8)(B)(ii) and 2(12). For many people, the greatest threat to data privacy comes from their employers and not from other businesses. Some businesses use cutting-edge surveillance tools to closely scrutinize employees at computer workstations (including their keystrokes) and at assembly lines (including wristbands to monitor physical movements). Congress must protect the data privacy of workers as well as consumers.

Covered entities. COPRA, as outlined in section 2(9), applies to every entity or person subject to the FTC Act. That Act, in turn, excludes various economic sectors, such as common carriers, per 15 U.S.C. 45(a)(2). Hopefully, this COPRA limitation reflects the jurisdictional frontiers of the various congressional committees—and the ultimate federal consumer data privacy bill will apply across economic sectors.

COPRA excludes “small business” from the definition of “covered entity” under sections 2(9) & (23). EFF supports such exemptions, among other reasons because small start-ups often are engines of innovation. Two of COPRA’s three size thresholds would exclude small businesses: $25 million in gross annual revenue, or 50% of revenue from transferring personal data. But COPRA’s third size threshold would capture many small businesses: annual processing of the personal data of 100,000 people, households, or devices. Many small businesses have websites that process the IP addresses of 300 visitors per day. We suggest deleting this third threshold, or raising it by an order of magnitude.

Exceptions. COPRA contains various exemptions, listed in sections 110(c) through 110(g).

Importantly, it includes a journalism exemption in section 110(e): “Nothing in this title shall apply to the publication of newsworthy information of legitimate public concern to the public by a covered entity, or to the processing or transfer of information by a covered entity for that purpose.” This exemption is properly framed by the activity of journalism, which all people and organizations have a First Amendment right to exercise, regardless of whether the speaker is a professional journalist or a news organization.

COPRA, in section 110(d)(1)(D), exempts the processing and transfer of data as reasonably necessary “to protect against malicious, deceptive, fraudulent or illegal purposes.” Unfortunately, many businesses may interpret such language to allow them to process all manner of personal data, in order to identify patterns of user behavior that the businesses deem indicative of attempted fraud. We urge limitation of this exemption.

Conclusion

We thank Sen. Cantwell for introducing COPRA. It is a strong step forward in the national conversation over how government should protect us from businesses that harvest and monetize our personal information. While we will seek strengthening amendments, COPRA is an important data privacy framework for legislators and privacy advocates.

Author: Adam Schwartz

How a Patent on Sorting Photos Got Used to Sue a Free Software Group

Taking and sharing pictures with wireless devices has become a common practice. It’s hardly a recent development: the distinction between computers and cameras has shrunk, especially since 2007 when smartphone cameras became standard. Even though devices that can take and share photos wirelessly have become ubiquitous over a period spanning more than a decade, the Patent Office granted a patent on an “image-capturing device” in 2018.

A patent on something so commonplace might seem comical, but unfortunately, U.S. Patent No. 9,936,086 is already doing damage to software innovation. It’s creating litigation costs for real developers. The owner of this patent is Rothschild Patent Imaging LLC, or RPI, a company linked to a network of notorious patent trolls connected to inventor Leigh Rothschild. We’ve written about two of them before: Rothschild Connected Devices Innovations, and Rothschild Broadcast Distribution Systems. Now, RPI has used the ’086 patent to sue the Gnome Foundation, a non-profit that makes free software.

The patent claims a generic “image-capturing mobile device” with equally generic components: a “wireless receiver,” a “wireless transmitter,” and “a processor operably connected to the wireless receiver and the wireless transmitter.” That processor is configured to: (1) receive multiple photographic images, (2) filter those images using criteria “based on a topic, theme or individual shown in the respective photographic image,” and (3) transmit the filtered photographic images to another wireless device. In other words: the patent claims a smartphone that can receive images that a user can filter by content before sending to others.
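
To underline how generic those claimed steps are, here is a toy sketch of them in ordinary code (the names, the topic criterion, and the stand-in transmit function are all our assumptions; the point is only that the three steps are routine):

    from dataclasses import dataclass
    from typing import Callable, Iterable

    @dataclass
    class Photo:
        pixels: bytes
        topic: str  # e.g., a tag chosen by the user or an app

    def receive_filter_transmit(
        incoming: Iterable[Photo],
        wanted_topic: str,
        transmit: Callable[[Photo], None],
    ) -> None:
        """The claimed steps: (1) receive, (2) filter by topic, (3) transmit."""
        for photo in incoming:                  # (1) receive images
            if photo.topic == wanted_topic:     # (2) filter on topic/theme
                transmit(photo)                 # (3) send to another device

    # Any send function can stand in for the "wireless transmitter":
    receive_filter_transmit([Photo(b"...", "beach")], "beach", print)

Filtering a collection on an attribute and handing the matches to a send routine is everyday programming; under Alice, dressing that up in “wireless” hardware language should not make it patentable.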

According to Rothschild’s complaint, all it takes to infringe its patent is to provide a product that “offers a number of ways to wirelessly share photos online such as through social media.” How in the world could a patent on something so basic and established qualify as inventive in 2018?

At least part of the answer is that the Patent Office simply failed to apply the Supreme Court’s Alice decision. Alice makes clear that using generic computers to automate established human tasks cannot qualify as an “invention” worthy of patent protection. Applying Alice, the Federal Circuit has specifically rejected a patent on the “abstract idea of classifying and storing digital images in an organized manner” in TLI Communications.

Inexplicably, there’s no sign the Patent Office gave either decision any consideration before granting this application. Alice was decided in 2014; TLI in 2016. Rothschild filed the application that became the ’086 patent in June 2017. Before being granted, the application received only one non-final rejection from an examiner at the Patent Office. That examiner did not raise any concerns about the application’s eligibility for patent protection, let alone any concerns specifically stemming from Alice or TLI.

The examiner compared the application to only one earlier reference—a published patent application from 2005. Rothschild claimed that system was irrelevant because its filter was based on the image’s quality; in Rothschild’s “invention,” the filter is based on “subject identification” criteria, such as the topic, theme, or individual in the photo.

Rothschild didn’t describe how the patent performed the filtering step, or explain why filtering on these criteria would be a technical invention. Nor did the Patent Office ask. But under Alice, it should have. After all, humans have been organizing photos based on topic, theme, and individuals depicted for as long as humans have been organizing photos.

Because the Patent Office failed to apply Alice and granted the ’086 patent, the question of its eligibility may finally get the attention it needs in court. The Gnome Foundation has filed a motion to dismiss the case, pointing out the patent’s lack of eligibility. We hope the district court will apply Alice and TLI to this patent. But a non-profit that exists to create and spread free software never should have had to spend its limited time and resources on this patent litigation in the first place.

Author: Alex Moss

Video: Ruth Taylor Describes Her Win Against an Online Voting Patent

We’ve been fighting abuses of the patent system for years. Many of the worst abuses we see are committed by software patent owners who make money suing people instead of building anything. These are patent owners we call patent trolls. They demand money from people who use technology to perform ordinary activities. And they’re able to do that because they’re able to get patents on basic ideas that aren’t inventions, like running a scavenger hunt and teaching foreign languages.

Efforts at reforming this broken system got a big boost in 2014, when the Supreme Court decided the Alice v. CLS Bank case. In a unanimous decision, the high court held that you can’t get a patent on an abstract idea just by adding generic computer language. Now, courts are supposed to dismiss lawsuits based on abstract patents as early as possible.

We need an efficient way to throw out bad software patents because patent litigation is so outrageously expensive. Small businesses simply can’t afford the millions of dollars it costs to go through a full patent trial. And thanks to Alice, they haven’t had to: since the decision came down, U.S. courts have thrown out bad patents in hundreds of cases.

Our Saved by Alice project tells the stories of these businesses and the people behind them. One is the story of Ruth Taylor, a photographer who ran a website called Bytephoto.com. Bytephoto hosted forums for a passionate community of photographers, and also ran weekly photo competitions where users voted on the photos they liked best.

Today, we’re publishing a short video in which Ruth tells her story about how a company called Garfum.com, claiming that her online photo contests infringed its patent, demanded she pay $50,000 or face a lawsuit for patent infringement.

[Video: https://www.youtube-nocookie.com/embed/SvovyPIT32M (embedded from youtube-nocookie.com)]

“I wasn’t about to hand over $50,000, because I didn’t have $50,000,” Ruth said in our interview. “And even if I did, I still wouldn’t have handed it over.”

Instead, Ruth called EFF. We were able to take her case pro bono and prepare a strong defense, arguing that the ridiculous Garfum.com patent never should have been issued in the first place. Garfum’s patent simply takes the well-known idea of a competition by popular vote and applies it to networked computers. This is exactly the type of patent Alice prohibits. After our brief was filed, and Garfum was scheduled to meet us in court, they dropped their lawsuit.

We were able to take and win Ruth’s case because of the Alice decision. Faced with a hearing at which it would have to justify its patent, Garfum backed down from its bullying behavior. But large patent holders like IBM and Qualcomm have been pushing Congress to weaken one of the few safeguards we have against bad patents, and earlier this year, U.S. Senators began considering a draft proposal that would have thrown out Alice altogether.

We asked Ruth what it would mean for small businesses if the Alice decision were overturned. “They would be sued, and probably go under,” she said. “To me, a patent was always an amazing invention, that would help people—not something that’s used to extort money from people.”

We hope you’ll take time to listen to Ruth’s story, and understand why we can’t let Congress chip away at the Alice decision. Let’s defend that original vision of the patent system, as promoting real inventions that help people—not extortionate scams like the one that threatened Ruth Taylor’s livelihood. And check out our video of Justus Decher, another Saved by Alice small business.

Author: Joe Mullin

Sanctions, Protests, and Shutdowns: Fighting to Open Iran’s Internet

A debate is raging, in Congress and the media, over whether or not we need new regulations to try to shape how Internet platforms operate. Too often, however, the discussion is based on rhetoric and anecdote, rather than empirical research. The recently introduced National Commission on Online Platforms and Homeland…

Author: Jillian C. York

DEEP DIVE: EFF to DHS: Stop Mass Collection of Social Media Information

The Department of Homeland Security (DHS) recently released a proposed rule expanding the agency’s collection of social media information on key visa forms and immigration applications. Earlier this month, EFF joined over 40 civil society organizations that signed on to comments drafted by the Brennan Center for Justice. These comments identify the free speech and privacy risks the proposed rule poses to U.S. persons both directly, if they are required to fill out these forms, and indirectly, if they are connected via social media to friends, family, or associates required to fill out these forms.

DHS’s Proposed Rule

In the proposed rule, “Generic Clearance for the Collection of Social Media Information on Immigration and Foreign Travel Forms,” DHS claims that it has “identified the collection of social media user identifications . . . as important for identity verification, immigration and national security vetting.” The proposed rule identifies 12 forms, adjudicated by the DHS agencies U.S. Customs and Border Protection (CBP) and U.S. Citizenship and Immigration Services (USCIS), that will now collect social media handles and associated social media platforms for the last five years. The applications will not collect passwords, and DHS will be able to view only information that the user publicly shares.

U.S. Customs and Border Protection

The proposed rule mandates social media collection on three CBP forms:

  • Electronic System for Travel Authorization (ESTA, known as the visa waiver program)
  • I-94W Nonimmigrant Visa Waiver Arrival/Departure Record
  • Electronic Visa Update System (EVUS, the system used by Chinese nationals with 10-year visitor visas).

EFF previously highlighted the government’s proposals to collect social media information from visa waiver and EVUS applicants. In 2016, the government finalized CBP’s proposed rule to collect social media handles on the ESTA form as an optional question. Under DHS’s current proposed rule, this question would no longer be optional. DHS claims that the question is not “mandatory” in order to obtain or retain a benefit, such as a visa waiver, but it is mandatory to submit the ESTA and EVUS forms. Applicants may choose “none” or “other” as responses.

U.S. Citizenship and Immigration Services

The proposed rule mandates collection of social media handles on nine USCIS forms, including applications for citizenship, permanent residency (green card), asylum, refugee status, and refugee and asylum family petitions. The proposed rule marks the first time that USCIS has sought to collect social media information from individuals seeking an immigration benefit. 

USCIS claims that it is not “mandatory” to provide social media information on all of these forms. But, for both CBP and USCIS, the proposed rule states that “failure to provide the requested data may either delay or make it impossible for [the agency] to determine an individual’s ability for the requested benefit.” Thus, though the agency may still process forms without a response to the social media question, applicants risk being denied if they fail to provide the information.

Civil Liberties and Privacy Concerns

As we’ve previously argued, collection of social media handles and information in public posts raises a number of First Amendment concerns.

First, the proposed rule will chill the exercise of free speech and lead to self-censorship. As we argued in the comments, social media platforms have become the de facto town square, where people around the world share news and ideas and connect with others. If individuals know that the government is monitoring their social media pages, they are likely to self-censor. Indeed, studies have shown that fears about online government surveillance lead to a chilling effect among both U.S. Muslims and broader samples of Internet users. The proposed rule may cause individuals to delete their accounts, limit their postings, and maximize privacy settings when they otherwise may have shared their social media activity more freely.

Second, the proposed rule infringes upon anonymous speech. Under the proposed rule, individuals running anonymous social media accounts could be at risk of having their true identities unmasked, despite the Supreme Court’s ruling that anonymous speech is protected by the First Amendment. Given that the proposed rule states that “[n]o assurance of confidentiality is provided,” collection of anonymous social media handles tied to their real-world identities could present a dangerous situation for individuals living under oppressive regimes who use such accounts to criticize their government or advocate for the rights of minority communities.

Third, the proposed rule threatens freedom of association. Collection of social media information implicates not just an applicant for a visa or an immigration benefit, but also any person with whom that applicant engages on social media, including U.S. citizens. This may lead applicants to disassociate from online connections for fear that others’ postings may endanger the applicant’s immigration benefit. Earlier this year, CBP cancelled a Palestinian Harvard student’s visa and deported him back to Lebanon, allegedly based on the social media postings of his online connections. Conversely, the proposed rule may lead family and friends to disassociate from applicants for fear of government social media surveillance.

In addition, the proposed rule raises privacy issues. Often, people’s social media presence can reveal much more than they intend to share. A recent study demonstrated that, using embedded geolocation data, researchers accurately predicted where Twitter users lived, worked, visited, and worshipped—information that many users hadn’t even known they had shared. The proposed rule’s collection of public social media information may allow the government to piece together and document users’ personal lives.
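
A toy sketch of the kind of inference that study describes (the coordinates and hour cutoffs here are invented for illustration; the actual research was far more sophisticated): given timestamped geotags, the most frequent daytime location is probably “work” and the most frequent overnight location is probably “home.”

    from collections import Counter

    # Hypothetical (hour_of_day, rounded_lat, rounded_lon) from geotagged posts
    posts = [
        (9, 40.75, -73.99), (11, 40.75, -73.99), (14, 40.75, -73.99),
        (7, 40.68, -73.94), (22, 40.68, -73.94), (23, 40.68, -73.94),
    ]

    def top_spot(is_relevant_hour):
        """Most frequent coordinate among posts made during matching hours."""
        spots = Counter((lat, lon) for h, lat, lon in posts
                        if is_relevant_hour(h))
        return spots.most_common(1)[0][0]

    print("likely work:", top_spot(lambda h: 9 <= h < 18))
    print("likely home:", top_spot(lambda h: h < 8 or h >= 19))

Each individual geotag looks innocuous; aggregated, they sketch a daily routine. That is exactly the risk the proposed rule multiplies across tens of millions of applicants and their online contacts.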

These civil liberties concerns are why EFF and other civil society organizations signed on to the Brennan Center’s comments urging DHS to rescind the proposed rule and abandon its initiative to collect social media information from over 33 million people.

New USCIS Policy on Fake Accounts

The release of the DHS proposed rule dovetailed with the release of a USCIS Privacy Impact Assessment (PIA) on the agency’s use of fake social media accounts to conduct social media surveillance. Under the PIA, the USCIS Fraud Detection and National Security Directorate (FDNS) can create fake social media accounts to view publicly available social media information to:

  1. Identify individuals who may pose a threat to national security or public safety and are seeking an immigration benefit;
  2. Detect and pursue cases when there is an indicator of potential fraud; or
  3. Randomly select previously adjudicated cases for review to identify and remove systemic vulnerabilities.

Under the PIA, FDNS officers may use fake accounts only with supervisor approval. Officers can access only publicly available content and cannot engage on social media (for example, through “friending”).

This USCIS PIA and the DHS proposed rule together involve two separate units within USCIS that engage in social media surveillance: one through social media collection on forms and the other through fake accounts. In the first instance, the applicant is aware that USCIS may monitor their social media activity, while in the second, the applicant may not be aware that USCIS is engaging in such monitoring. The PIA also discusses reevaluation of previously adjudicated decisions, indicating that an applicant may be under a “review” process long after their case has been adjudicated.

The PIA is concerning for several reasons. To begin, the PIA’s authorization of use of fake accounts directly contradicts previous policy. Prior USCIS and DHS guidance required any officer using social media for government purposes to identify themselves with a government identifier. Moreover, as we’ve previously highlighted, the PIA’s authorization of fake accounts violates the terms of service of many social media platforms such as Facebook.

In addition, the PIA provides only vague justifications for why USCIS officers need to create fake accounts to engage in this type of immigration vetting. The PIA claims that using fake accounts is an operational security measure that protects USCIS employees and DHS information technology systems. This explanation provides little clarity, especially since officers are not allowed to engage with other social media users, whether through a government-identified profile or a fake profile. While the PIA claims that any risk to users is mitigated because users are allowed to control what content they make public, the use of fake accounts makes it harder for individuals to use the “block” feature effectively—a key user tool for content control, akin to a privacy setting. By hiding law enforcement’s identity, a user may not block accounts they otherwise might.

Finally, the PIA raises similar concerns as the proposed rule around First Amendment issues and privacy. In particular, the third category allows for social media review of someone who has already been granted an immigration benefit. This means that someone who is already a naturalized U.S. citizen or permanent resident would have to be on alert for the possibility of having their social media content reviewed—and even having their immigration benefit revoked—years after the immigration benefit is granted. The PIA also contemplates the collection of publicly available information from an associate of a person under investigation—for example, comments on a photo. These dual risks could result in the indefinite chilling of individuals’ speech online.

The DHS Privacy Office recommends three ways to limit USCIS’s use of fake social media profiles. First, the PIA states that fake accounts should not be the default, but rather should only be used when there is an “articulated need.” Second, the Privacy Office will initiate a Privacy Compliance Review within 12 months of the PIA’s publishing. Third, the Privacy Office recommends that FDNS implement an efficacy review. We hope that, at minimum, USCIS follows these recommendations. We further ask that USCIS explain why its position has changed from previous guidance prohibiting the use of fake accounts.

Author: Saira Hussain