Exposing Your Face Isn’t a More Hygienic Way to Pay

A company called PopID has created an identity-management system based on face recognition. Their first use case is in-store, point-of-sale payments, with a face scan authorizing the payment.

They are promoting it as a tool for restaurants, claiming that it is pandemic-friendly because it is contactless.

Nonetheless, the PopID payment system is less secure than alternatives, unfriendly to privacy, and likely riskier than other payment options for anyone concerned about catching COVID-19. On top of these issues, PopID is pitching it as a screening tool for COVID-19 infection, another task it’s completely unsuited for.

Equities issues

It’s important that payment systems not disadvantage cash payments, which have the best social equity. Many people are under-banked and in hard times such as these, many people use cash as a way to help them manage their budgets and spending. Cash is also the most privacy-friendly way to pay. As convenient as other systems are, and despite cash not being contactless, we need to protect people’s ability to use cash.

PopID is a charge-up-and-spend system. To lower their costs, PopID has its users charge up an account with them using a credit card or debit card, and payments are deducted from that. Charge-and-spend systems are good for the store, and less good for the person using them; they amount to an interest-free loan that the consumer gives the merchant. This is no small thing: Starbucks, PayPal, and Walmart all have billions in interest-free loans from their customers. This further disadvantages people on tight budgets, as it requires them to give PopID money before it is spent and keep a balance in the system in anticipation of spending it.

PopID also requires their customers to have a smartphone for enrollment-by-selfie, which disadvantages those who don’t have one.

To be fair, these issues are largely fixable. PopID could allow someone to enroll without a phone at any payment station. They could allow charge-up with cash, and they could allow direct charge. But for now, the company does not offer these easy solutions.

Fitness to task

Looking beyond its potentially fixable perpetuation of systemic inequalities, it’s important that a system actually do what it’s intended to do. PopID pitches its system as pandemic-friendly, providing both contactless payments and COVID-19 screening, using the camera as a temperature sensor. Neither is a good idea.

Temperature scanning with commodity cameras won’t work

PopID promotes their system as a temperature scanning device for employees and customers alike. Temperature screening itself has limited benefit, as around half the people who have COVID-19 are asymptomatic.

Moreover, accurate temperature screening is expensive and hard. PopID is not the only organization to promote cheap face recognition with COVID-19 screening as the excuse. In reality, the cheap camera in a point-of-sale terminal is both inaccurate and intrusive, as Jay Stanley of the ACLU describes in detail.

There’s a wide range in the accuracy of temperature-scanning cameras, in normal human body temperature across a population, and even in an individual’s temperature depending on the time of day and their physical activity. Even the best cameras are finicky: they don’t work accurately if people are wearing hats, glasses, or masks, and they require the camera to view only one subject at a time.

Speeding up a sandwich shop line does help prevent COVID-19, because we know that spending too much time too close to other people is the primary mode of transmission. But temperature scanning at the point of payment doesn’t help people space themselves out or shorten their contact.

Face recognition raises COVID-19 risks

PopID pitches their system as good during the pandemic because it is contactless. Yet it is worse than payment alternatives.

PopID’s web site shows a picture of a payment terminal, with options to use contactless payment systems such as Apple Pay, Google Pay, and Samsung Pay. Presumably, any contactless credit card could be used. Additionally, a barcode system like the one Starbucks uses is contactless.

PopID’s point of sale terminal

Any of these contactless payment alternatives is much better than PopID from a public health standpoint, because none of them requires someone to remove their mask. The LA Times article comments parenthetically, “(The software struggles at recognizing faces with masks.)”

Indeed, any contactless payment system has less contact than using cash, yet even cash is low-risk. Almost all COVID-19 transmission is through breathing in virus particles in droplets or aerosols, not from fomites that we touch. Moreover, cash is easy to wash in soapy water.

This is a big deal for a supposedly pandemic-friendly system. The most recent restaurant-based superspreading event in the news is particularly relevant. A person in South Korea sat for two hours in a coffee shop under the air conditioning, and spread the disease to twenty-seven other people, who in turn spread it to twenty-nine other people, for a total of fifty-six people. And yet, none of the mask-wearing employees got the virus.

This is particularly relevant to PopID; a contactless system that makes someone take off a mask endangers the other customers. Ironically, a customer who sees a store using PopID had better be wearing a mask, because PopID will require it to come off momentarily. Or they could just shop somewhere else.


Security issues

PopID brings in new security risks that do not exist in other systems. They have the user’s payment information (for charging up the payment account), their phone number (it’s part of registration), their name, and of course the selfie that’s used for face recognition. There’s no reason to suppose they’re any worse than the cloud services that inevitably lose people’s information, but no reason to think they’re better. Thus, we should assume that eventually a hacker is going to get all that information.

However, as a payment system, PopID carries the obvious additional risk of fraud. PopID says, “Your Face now becomes your singular, ultra-secure ‘digital token’ across all PopID transactions and devices,” yet that can’t possibly be so.

Face recognition systems are well known to be inaccurate, as NIST recently showed, particularly for Black, Indigenous, Asian, and other People of Color, women, and trans or nonbinary people. False positives are common, and in a payment system a false positive means a false charge. PopID says they will confirm any match by asking the person their name. To be fair, this is not a bad secondary check, but it is hardly “ultra-secure.” Moreover, it requires every PopID customer to tell the whole store their name (or use a PopID pseudonym).
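The fraud risk also scales with the size of the face database: in one-to-many identification, each scan is compared against every enrolled face, so even a tiny per-comparison false match rate compounds. A rough sketch of that arithmetic, using a purely hypothetical match rate rather than any figure from PopID or NIST:

```python
# Back-of-the-envelope: how 1:N face identification amplifies false matches.
# The per-comparison false match rate (FMR) below is a hypothetical
# illustration, not a figure published by PopID or NIST.

def false_match_probability(per_comparison_fmr: float, enrolled_users: int) -> float:
    """Probability that one face scan falsely matches at least one
    enrolled user, assuming independent one-to-one comparisons."""
    return 1 - (1 - per_comparison_fmr) ** enrolled_users

fmr = 1e-5  # 1 in 100,000 per comparison (hypothetical)
for n in (1_000, 100_000, 1_000_000):
    p = false_match_probability(fmr, n)
    print(f"{n:>9,} enrolled users -> {p:.1%} chance of a false match per scan")
```

Under these toy assumptions, a system with a large enrolled population sees false matches on a routine basis, which is why a secondary check like asking for a name becomes unavoidable.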

Lastly, PopID doesn’t say how they’ll permit someone to dispute charges, an important factor since the credit card industry is regulated with excellent consumer protection. In the event of fraud, it’s much easier to be issued a new credit card than a new face.

The end result is that PopID’s pay-by-face is less secure than using a contactless card, and less secure than cash.


Privacy issues

PopID is an incipient privacy nightmare. The obvious privacy issues of an unregulated payment system that knows where your face has been are only the start of the problem. The LA Times writes:

But [CEO of PopID, John] Miller’s vision for a face-based network goes beyond paying for lunch or checking in to work. After users register for the service, he wants to build a world where they can “use it for everything: at work in the morning to unlock the door, at a restaurant to pay for tacos, then use it to sign in at the gym, for your ticket at the Lakers game that night, and even use it to authenticate your age to buy beers after.”

“You can imagine lots of things that you can do when you have a big database of faces that people trust,” Miller said.

Nothing more needs to be said. PopID as a payment system is a stalking horse for a face-surveillance panopticon and salable database of trusted faces.


PopID is less secure and less private than alternative forms of payment, contactless or not. It brings with it a host of social equity issues that negatively impact marginalized communities. Moreover, any store using PopID, and thus requiring people to remove their masks to pay, is exposing you to COVID-19 risk you would not otherwise face.

Most alarmingly, it is also an insecure for-profit surveillance system building a database of you, your face, your purchases, your movements, and your habits.

Author: Jon Callas

A Look-Back and Ahead on Data Protection in Latin America and Spain

We’re proud to announce a new updated version of The State of Communications Privacy Laws in eight Latin American countries and Spain. For over a year, EFF has worked with partner organizations to develop detailed questions and answers (FAQs) around communications privacy laws. Our work builds upon previous and ongoing research of such developments in Argentina, Brazil, Chile, Colombia, Mexico, Paraguay, Panama, Peru, and Spain. We aim to understand each country’s legal challenges, in order to help us spot trends, identify the best and worst standards, and provide recommendations to look ahead. This post about data protection developments in the region is one of a series of posts on the current State of Communications Privacy Laws in Latin America and Spain. 

As we look back at the past ten years in data protection, we have seen considerable legal progress in granting users’ control over their personal lives. Since 2010, sixty-two new countries have enacted data protection laws, giving a total of 142 countries with data protection laws worldwide. In Latin America, Chile was the first country to adopt such a law in 1999, followed by Argentina in 2000. Several countries have now followed suit: Uruguay (2008), Mexico (2010), Peru (2011), Colombia (2012), Brazil (2018), Barbados (2019), and Panama (2019). While there are still different privacy approaches, data protection laws are no longer a purely European phenomenon.

Yet, contemporary developments in European data protection law continue to have an enormous influence in the region—in particular, the EU’s 2018 General Data Protection Regulation (GDPR). Since 2018, several countries, including Barbados and Panama, have led the way in adopting GDPR-inspired laws in the region, promising the beginning of a new generation of data protection legislation. In fact, the privacy protections of Brazil’s new GDPR-inspired law took effect this month, on September 18, after the Senate pushed back on a delaying order from President Jair Bolsonaro.

But when it comes to data protection in the law enforcement context, few countries have followed the European Union’s latest steps. The EU Police Directive, a law on the processing of personal data by police forces, has not yet become a Latin American phenomenon. Mexico is the only country with a specific data protection regulation for the public sector. By not adopting such rules, countries in the Americas are missing a crucial opportunity to strengthen their communications privacy safeguards with rights and principles common to the global data protection toolkit.

New GDPR-Inspired Data Protection Laws
Brazil, Barbados, and Panama have been the first countries in the region to adopt GDPR-inspired data protection laws. Panama’s law, approved in 2019, will enter into force in March 2021.

Brazil’s law has faced an uphill battle. The provisions creating the oversight authority came into force in December 2018, but it took the government a year and a half to introduce a decree implementing its structure. The decree, however, will only have legal force when the President of the Board is officially appointed and approved by the Senate; no appointment has been made as of the publication of this post. The rest of the law was originally due to enter into force in February 2020. This was later changed to August 2020, and the law was then further delayed to May 2021 through an Executive Act issued by President Bolsonaro. Yet, in a surprising positive twist, Brazil’s Senate stopped President Bolsonaro’s deferral in August. That means the law is now in effect, except for the penalties section, which has been deferred again, to August 2021.

Definition of Personal Data 
Like the GDPR, Brazil and Panama’s laws include a comprehensive definition of personal data. It includes any information concerning an identified or identifiable person. The definition of personal data in Barbados’s law has certain limitations. It only protects data which relates to an individual who can be identified “from that data; or from that data together with other information which is in the possession of or is likely to come into the possession of the provider.” Anonymized data in Brazil, Panama, and Barbados falls outside the scope of the law. There are also variations in how these countries define anonymized data.

Panama defines it as data that cannot be re-identified by reasonable means, though the law doesn’t set explicit parameters to guide this assessment. Brazil’s law makes it clear that anonymized data will be considered personal data if the anonymization process is reversed using exclusively the provider’s own means, or if it can be reversed with reasonable efforts. The Brazilian law sets objective factors for determining what’s reasonable, such as the cost and time necessary to reverse the anonymization process given the technologies available, and whether the reversal uses only the provider’s own means. These parameters matter for big tech companies with extensive computational power and large collections of data, which will need to determine whether their own resources could be used to re-identify anonymized data. This provision should not be interpreted in a way that ignores scenarios where sharing or linking anonymized data with other data sets, or with publicly available information, leads to re-identification.
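The linkage scenario in that last sentence is easy to make concrete. A toy sketch with entirely fabricated data shows how an “anonymized” dataset that retains quasi-identifiers (here, ZIP code and birth date) can be joined against a public record to recover names:

```python
# Toy illustration (fabricated data): joining an "anonymized" dataset
# with a public one on shared quasi-identifiers re-identifies the people
# in it, even though no record contains a name on its own.

anonymized_health = [
    {"zip": "20001", "birth_date": "1975-03-02", "diagnosis": "flu"},
    {"zip": "20002", "birth_date": "1980-07-15", "diagnosis": "asthma"},
]

public_voter_roll = [
    {"name": "A. Example", "zip": "20001", "birth_date": "1975-03-02"},
    {"name": "B. Sample", "zip": "20002", "birth_date": "1980-07-15"},
]

# Link the two datasets on (zip, birth_date).
reidentified = [
    {**record, "name": person["name"]}
    for record in anonymized_health
    for person in public_voter_roll
    if (record["zip"], record["birth_date"]) == (person["zip"], person["birth_date"])
]

for row in reidentified:
    print(row["name"], "->", row["diagnosis"])
```

This is why a reasonableness test that looks only at the provider’s own means can understate the real risk: the auxiliary dataset that completes the re-identification may be held by someone else entirely.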

Right to Portability 
The three countries grant users the right to portability—a right to take their data from a service provider and transfer it elsewhere. Portability adds to the so-called ARCO (Access, Rectification, Cancellation, and Opposition) rights—a set of users’ rights that allow them to exercise control over their own personal data.

Enforcers of portability laws will need to make careful decisions about what happens when one person wants to port away data that relates both to them and another person, such as their social graph of contacts and contact information like phone numbers. This implicates the privacy and other rights of more than one person. Also, while portability helps users leave a platform, it doesn’t help them communicate with others who still use the previous one. Network effects can prevent upstart competitors from taking off. This is why we also need interoperability to enable users to interact with one another across the boundaries of large platforms. 

Again, different countries have different approaches. The Brazilian law tries to solve the multi-person data and interoperability issues by not limiting the “ported data” to data the user has given to a provider. It also doesn’t detail the format to be adopted; instead, the data protection authority can set standards for, among other things, interoperability, security, retention periods, and transparency. In Panama, portability is both a right and a principle. It is one of the general data protection principles that guide the interpretation and implementation of the overarching data protection law. As a right, it resembles the GDPR model: the user has the right to receive a copy of their personal data in a structured, commonly used, and machine-readable format. The right applies only when the user has provided their data directly to the service provider and has given their consent, or when the data is needed for the execution of a contract. Panama’s law expressly states that portability is “an irrevocable right” that can be requested at any moment.
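For illustration, a “structured, commonly used, and machine-readable format” in practice often means something like a JSON export. The sketch below is hypothetical; neither Panama’s law nor any provider prescribes this schema or these field names:

```python
import json

# Hypothetical sketch of a machine-readable portability export.
# The schema and field names are illustrative only.
export = {
    "schema_version": "1.0",
    "subject": {"user_id": "u-12345"},
    # Data the user provided directly to the provider: the scope that
    # Panama's and Barbados's portability rights cover.
    "data_provided_by_user": {
        "profile": {"email": "user@example.com"},
        "posts": [{"created": "2020-09-01", "text": "hello"}],
    },
    # A contact graph is multi-person data: porting it implicates other
    # people's privacy, one of the open questions regulators face.
    "contacts": [],
}

serialized = json.dumps(export, indent=2)  # what would travel between providers
print(serialized)
```

A structured format like this is what makes the right usable in practice: the receiving provider can parse the export mechanically instead of scraping it from a human-readable page.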

Portability rights in Barbados are similar to those in Panama. But, like the GDPR, there are some limitations: users can exercise the right to port their data directly from one provider to another only when technically feasible. As in Panama, users can port only the data they have provided themselves, not data about them that other users have shared.

Automated Decision-Making About Individuals
Automated decision-making systems are making continuous decisions about our lives to aid or replace human decision making. So there is an emerging GDPR-inspired right not to be subjected to solely automated decision-making processes that can produce legal or similarly significant effects on the individual. This new right would apply, for example, to automated decision-making systems that use “profiles” to predict aspects of our personality, behavior, interests, locations, movements, and habits. With this new right, the user can contest the decisions made about them, and/or obtain an explanation about the logic of the decision. Here, too, there are a few variations among countries. 

Brazilian law establishes that the user has a right to review decisions affecting them that are based solely on the automated processing of personal data. These include decisions intended to define personal, professional, consumer, or credit profiles, or other traits of someone’s personality. Unfortunately, President Bolsonaro vetoed a provision that would have required human review of these automated decisions. On the upside, the user has the right to ask the provider to disclose the criteria and procedures adopted for automated decision-making, though unfortunately there is an exception for trade and industrial secrets.

In Barbados, the user has the right to know, upon request to the provider, about the existence of decisions based on automated processing of personal data, including profiling. As in other countries, this includes access to information about the logic involved and the envisaged consequences for them. Barbados users also have the right not to be subject to automated decisions, made without human involvement, that produce legal or similarly significant effects on them, including profiling. There are exceptions for when automated decisions are: necessary for entering or performing a contract between the user and the provider; authorized by law; or based on user consent. Barbados defines consent similarly to the GDPR: there must be a freely given, specific, informed, and unambiguous indication of the user’s wishes regarding the processing of their personal data, and the user has the ability to change their mind.

Panama’s law also grants users the right not to be subject to a decision based solely on automated processing of their personal data, without human involvement, but this right only applies when the process produces negative legal effects concerning the user or detrimentally affects the user’s rights. As in Barbados, Panama allows automated decisions that are necessary for entering or performing a contract, based on the user’s consent, or permitted by law. But Panama defines “consent” in a less user-protective manner: it merely requires a person to provide a “manifestation” of their will.

Legal Basis for the Processing of Personal Data
It is important for data privacy laws to require service providers to have a valid lawful basis in order to process personal data, and to document that basis before starting the processing; otherwise, the processing is unlawful. Data protection regimes, including all of their principles and users’ rights, must apply regardless of whether consent is the legal basis relied upon.

Panama’s new law allows three legal bases other than consent: compliance with a contractual obligation, compliance with a legal obligation, or authorization by a particular law. Brazil and Barbados set out ten legal bases for personal data processing—four more than the GDPR, with consent as only one of them. Brazilian and Barbadian law seek to balance this approach by providing users with clear and concise information about what providers do with their personal data, and by granting users the right to object to the processing of their data, which allows users to stop or prevent processing.

Data Protection in the Law Enforcement Context
Latin America lags in adopting comprehensive data protection regimes that apply not just to corporations, but also to public authorities processing personal data for law enforcement purposes. The EU, on the other hand, has adopted not just the GDPR but also the EU Police Directive, a law that regulates the processing of personal data by police forces. Most Latam data protection laws exempt law enforcement and intelligence activities from the application of the law. Colombia is a partial exception: its general data protection law applies to the public sector, with carve-outs for national security, defense, anti-money-laundering regulations, and intelligence. The Constitutional Court has stated that these exceptions are not absolute exclusions from the law’s application, but exemptions from just some provisions. Complementary statutory law should regulate them, subject to the proportionality principle.

Spain has not yet implemented the EU’s Police Directive. As a result, personal data processing for law enforcement activities remains held to the standards of the country’s previous data protection law. Argentina’s and Chile’s laws do apply to law enforcement agencies, and Mexico has a specific data protection regulation for the public sector. But Peru and Panama exclude law enforcement agencies from the scope of their data protection laws. Brazil’s law carves out an exception for personal data processing done solely for public safety, national security, and criminal investigations; still, it provides that specific legislation must be approved to regulate these activities.

Recommendations and Looking Ahead
Communications privacy has much to gain from the intersection of its traditional inviolability safeguards and the data protection toolkit. That intersection helps entrench international human rights standards applicable to law enforcement access to communications data. The principles of data minimization and purpose limitation in the data protection world correlate with the necessity, adequacy, and proportionality principles under international human rights law. They are necessary to curb massive data retention or dragnet government access to data. The idea that any personal data processing requires a legitimate basis upholds the basic tenets of legality and legitimate aim when placing limitations on fundamental rights. Law enforcement access to communications data must be clearly and precisely prescribed by law; no legitimate basis other than compliance with a legal obligation is acceptable in this context.

Data protection transparency and information safeguards reinforce a user’s right to a notification when government authorities have requested their data. European courts have asserted this right stems from privacy and data protection safeguards. In the Tele2 Sverige AB and Watson cases, the EU Court of Justice (CJEU) held that “national authorities to whom access to the retained data has been granted must notify the persons affected . . . as soon as that notification is no longer liable to jeopardize the investigations being undertaken by those authorities.” Before that, in Szabó and Vissy v. Hungary, the European Court of Human Rights (ECHR) had declared that notifying users of surveillance measures is also inextricably linked to the right to an effective remedy against the abuse of monitoring powers.

Data protection transparency and information safeguards can also play a key role in fostering greater insight into companies’ and governments’ practices when it comes to requesting and handing over users’ communications data. In collaboration with EFF, many Latin American NGOs have been pushing Internet Service Providers to publish their law enforcement guidelines and aggregate information on government data requests. We’ve made progress over the years, but there’s still plenty of room for improvement. When it comes to public oversight, data protection authorities should have the legal mandate to supervise personal data processing by public entities, including law enforcement agencies. They should be impartial and independent authorities, conversant in data protection and technology, with adequate resources to exercise the functions assigned to them.

There are already many essential safeguards in the Latam region. Most countries’ constitutions explicitly recognize privacy as a fundamental right, and most countries have adopted data protection laws. Each constitution recognizes a general right to private life or intimacy, or a set of multiple, specific rights: a right to the inviolability of communications; an explicit data protection right (Chile, Mexico, Spain); or “habeas data” (Argentina, Peru, Brazil) as either a right or a legal remedy. (In general, habeas data protects the right of any person to find out what data is held about them.) Most recently, a landmark ruling of Brazil’s Supreme Court recognized data protection as a fundamental right drawn from the country’s Constitution.

Across our work in the region, our FAQs help to spot loopholes, flag concerning standards, and highlight pivotal safeguards (or their absence). It’s clear that the rise of data protection laws has helped secure user privacy across the region, but more needs to be done. Strong data protection rules that apply to law enforcement activities would enhance communications privacy protections in the region. More transparency is urgently needed, both in how the regulations will be implemented and in what additional steps private companies and the public sector are taking to proactively protect user data.

We invite everyone to read these reports and reflect on what work we should champion and defend in the days ahead, and what still needs to be done.

Author: Katitza Rodriguez

Plaintiffs Continue Effort to Overturn FOSTA, One of the Broadest Internet Censorship Laws

Special thanks to legal intern Ross Ufberg, who was lead author of this post.

A group of organizations and individuals are continuing their fight to overturn the Allow States and Victims to Fight Online Sex Trafficking Act, known as FOSTA, arguing that the law violates the Constitution in multiple respects.

In legal briefs filed in federal court recently, plaintiffs Woodhull Freedom Foundation, Human Rights Watch, the Internet Archive, Alex Andrews, and Eric Koszyk argued that the law violates the First and Fifth Amendments, and the Constitution’s prohibition against ex post facto laws. EFF, together with Daphne Keller at the Stanford Cyber Law Center, as well as lawyers from Davis Wright Tremaine and Walters Law Group, represent the plaintiffs.

How FOSTA Censored the Internet

FOSTA led to widespread Internet censorship, as websites and other online services either prohibited users from speaking or shut down entirely. FOSTA accomplished this comprehensive censorship by making three major changes in law:

First, FOSTA creates a new federal crime for any website owner to “promote” or “facilitate” prostitution, without defining what those words mean. Organizations doing educational, health, and safety-related work, such as the Woodhull Freedom Foundation, and individuals such as one of the leaders of the Sex Workers Outreach Project USA (SWOP USA), fear that prosecutors may interpret advocacy on behalf of sex workers as the “promotion” of prostitution. Prosecutors may view the creation of an app that makes fieldwork safer for sex workers the same way. Now, these organizations and individuals—the plaintiffs in the lawsuit—are reluctant to exercise their First Amendment rights for fear of being prosecuted or sued.

Second, FOSTA expands potential liability for federal sex trafficking offenses by adding vague definitions and expanding the pool of enforcers. In addition to federal prosecution, website operators and nonprofits now must fear prosecution from thousands of state and local prosecutors, as well as private parties. The cost of litigation is so high that many nonprofits will simply cease exercising their free speech, rather than risk a lawsuit where costs can run into the millions, even if they win.

Third, FOSTA limits the federal immunity provided to online intermediaries that host third-party speech under 47 U.S.C. § 230 (“Section 230”). This immunity has allowed for the proliferation of online services that host user-generated content, such as Craigslist, Reddit, YouTube, and Facebook. Section 230 helps ensure that the Internet supports diverse and divergent viewpoints, voices, and robust debate, without every website owner needing to worry about being sued for their users’ speech. The removal of Section 230 protections resulted in intermediaries shutting down entire sections or discussion boards for fear of being subject to criminal prosecution or civil suits under FOSTA.

How FOSTA Impacted the Plaintiffs

In their filings asking a federal district court in Washington, D.C. to rule that FOSTA is unconstitutional, the plaintiffs describe how FOSTA has impacted them and a broad swath of other Internet users. Some of those impacts have been small and subtle, while others have been devastating.

Eric Koszyk is a licensed massage therapist who relied heavily on Craigslist’s advertising platform to find new clients and schedule appointments. Since April 2018, it’s been hard for Koszyk to supplement his family’s income with his massage business. After Congress passed FOSTA, Craigslist shut down the Therapeutic Services section of its website, where Koszyk had been most successful at advertising his services. Craigslist further prohibited him from posting his ads anywhere else on its site, despite the fact that his massage business is entirely legal. In a post about FOSTA, Craigslist said that it shut down portions of its site because the new law created too much risk. In the two years since Craigslist removed its Therapeutic Services section, Koszyk still hasn’t found a way to reach the same customer base through other outlets. His income is less than half of what it was before FOSTA.

Alex Andrews, a national leader in fighting for sex worker rights and safety, has had her activism curtailed by FOSTA. As a board member of SWOP USA, Andrews helped lead its efforts to develop a mobile app and website that would have allowed sex workers to report violence and harassment. The app would have included a database of reported clients that workers could query before engaging with a potential client, and would notify others nearby when a sex worker reported being in trouble. When Congress passed FOSTA, Alex and SWOP USA abandoned their plans to build this app. SWOP USA, a nonprofit, simply couldn’t risk facing prosecution under the new law.

FOSTA has also impacted a website that Andrews helped to create. The website Rate That Rescue is “a sex worker-led, public, free, community effort to help everyone share information” about organizations which aim to help sex workers leave their field or otherwise assist them. The website hosts ratings and reviews. But without the protections of Section 230, in Andrews’ words, the website “would not be able to function” because of the “incredible liability for the content of users’ speech.” It’s also likely that Rate That Rescue’s creators face criminal liability under FOSTA’s new criminal provisions because the website aims to make sex workers’ lives and work safer and easier. This could be considered to violate FOSTA’s provisions that make it a crime to promote or facilitate prostitution.

Woodhull Freedom Foundation advocates for sexual freedom as a human right, which includes supporting the health, safety, and protection of sex workers. Each year, Woodhull organizes a Sexual Freedom Summit in Washington, DC, with the purpose of bringing together educators, therapists, legal and medical professionals, and advocacy leaders to strategize on ways to protect sexual freedom and health. There are workshops devoted to issues affecting sex workers, including harm reduction, disability, age, health, and personal safety. This year, COVID-19 has made an in-person meeting impossible, so Woodhull is livestreaming some of the events. Woodhull has had to censor its ads on Facebook, and modify its programming on YouTube, just to get past those companies’ heightened moderation policies in the wake of FOSTA.

The Internet Archive, a nonprofit library that seeks to preserve digital materials, faces increased risk because FOSTA has dramatically increased the possibility that a prosecutor or private citizen might sue it simply for archiving newly illegal web pages. Such a lawsuit would be a real threat for the Archive, which is the Internet’s largest digital library.

FOSTA puts Human Rights Watch in danger, as well. Because the organization advocates for the decriminalization of sex work, it could easily face prosecution for “promoting” prostitution.

Where the Legal Fight Against FOSTA Stands Now

With the case now back in district court after the D.C. Circuit Court of Appeals reversed the lower court’s decision to dismiss the suit, both sides have filed motions for summary judgment. In their filings, the plaintiffs make several arguments for why FOSTA is unconstitutional.

First, they argue that FOSTA is vague and overbroad. The Supreme Court has said that if a law “fails to give ordinary people fair notice of the conduct it prohibits,” it is unconstitutional. That is especially true when the vagueness of the law raises special First Amendment concerns.

FOSTA does just that. The law makes it illegal to “facilitate” or “promote” prostitution without defining what that means. This has led to, and will continue to lead to, the censorship of speech that is protected by the First Amendment. Organizations like Woodhull, and individuals like Andrews, are already curbing their own speech. They fear their advocacy on behalf of sex workers may constitute “promotion” or “facilitation” of prostitution.

The government argues that the likelihood of anyone misconstruing these words is remote. But some courts interpret “facilitate” to simply mean make something easier. By this logic, anything that plaintiffs like Andrews or Woodhull do to make sex work safer, or make sex workers’ lives easier, could be considered illegal under FOSTA.

Second, the plaintiffs argue that FOSTA’s Section 230 carveouts violate the First Amendment. A provision of FOSTA eliminates some Section 230 immunity for intermediaries on the Web, which means anybody who hosts a blog where third parties can comment, or any company like Craigslist or Reddit, can be held liable for what other people say.

As the plaintiffs show, all the removal of Section 230 immunity really does is squelch free speech. Without the assurance that a host won’t be sued for what a commentator or poster says, those hosts simply won’t allow others to express their opinions. As discussed above, this is precisely what happened once FOSTA passed.

Third, the plaintiffs argue that FOSTA is not narrowly tailored to the government’s interest in stopping sex trafficking. Government lawyers say that Congress passed FOSTA because it was concerned about sex trafficking. The intent was to roll back Section 230 in order to make it easier for victims of trafficking to sue certain websites, such as Backpage.com. The plaintiffs agree with Congress that there is a strong public interest in stopping sex trafficking. But FOSTA doesn’t accomplish those goals—and instead, it sweeps up a host of speech and advocacy protected by the First Amendment.

There’s no evidence the law has reduced sex trafficking. The effect of FOSTA is that traffickers who once posted to legitimate online platforms will go even deeper underground—and law enforcement will have to look harder to find them and combat their illegal activity.

Finally, the plaintiffs argue that FOSTA violates the Constitution’s prohibition on criminalizing conduct that was legal when it occurred. This is what is known as an “ex post facto” law. FOSTA creates new retroactive liability for conduct that occurred before Congress passed the law. During the debate over the bill, the U.S. Department of Justice even admitted this problem to Congress—but the DOJ later promised to “pursu[e] only newly prosecutable criminal conduct that takes place after the bill is enacted.” The government, in essence, is saying to the courts, “We promise to do what we say the law means, not what the law clearly says.” But the Department of Justice cannot control the actions of thousands of local and state prosecutors—much less private citizens who sue under FOSTA based on conduct that occurred long before it became law.

* * *

FOSTA sets out to tackle the genuine problem of sex trafficking. Unfortunately, the way the law is written achieves the opposite effect: it makes it harder for law enforcement to actually locate victims, and it punishes organizations and individuals doing important work. In the process, it does irreparable harm to the freedom of speech guaranteed by the First Amendment. FOSTA silences diverse viewpoints, makes the Internet less open, and makes critics and advocates more circumspect. The Internet should remain a place where robust debate occurs, without the fear of lawsuits or jail time.

Author: Aaron Mackey

EFF Joins Coalition Urging Senators to Reject the EARN IT Act

Whether we know it or not, all Internet users rely on multiple online services to connect, engage, and express themselves online. That means we also rely on 47 U.S.C. § 230 (“Section 230”), which provides important legal protections when platforms offer their services to the public and when they moderate…

Author: India McKinney

What the *, Nintendo? This in-game censorship is * terrible.

While many are staying at home and escaping into virtual worlds, it’s natural to discuss what’s going on in the physical world. But Nintendo is shutting down those conversations with its latest Switch system update (Sep. 14, 2020) by adding new terms like COVID, coronavirus and ACAB to its censorship list for usernames, in-game messages, and search terms for in-game custom designs (but not the designs themselves).

A screenshot in-game of a postcard sent from a friend in Animal Crossing. The message says "testing censorship of" , followed by three asterisks in place of the expected words.

While we understand the urge to prevent abuse and misinformation about COVID-19, censoring certain strings of characters is a blunderbuss approach unlikely to substantially improve the conversation. As an initial matter, it is easily circumvented: our testing, shown above, confirmed that Nintendo censors coronavirus, COVID, and ACAB, but does not restrict substitutes like c0vid or a.c.a.b., nor corona and virus when written individually.
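This matching behavior is easy to reason about with a small sketch. The term list and the exact-match logic below are our assumptions based on the testing described, not Nintendo’s actual code:

```python
# Sketch of an exact-match word blocklist (assumed behavior, not
# Nintendo's real implementation). Only whole words on the list are
# masked, which is why trivial substitutes slip through.
BLOCKLIST = {"coronavirus", "covid", "acab"}

def censor(message):
    """Replace each blocklisted word with asterisks; leave everything else."""
    out = []
    for word in message.split():
        if word.lower() in BLOCKLIST:
            out.append("*" * len(word))
        else:
            out.append(word)
    return " ".join(out)

print(censor("testing censorship of covid"))   # prints: testing censorship of *****
print(censor("testing censorship of c0vid"))   # passes unchanged: the filter only sees exact terms
```

Any variant spelling falls outside the set-membership check, which is why simple character substitutions defeat this kind of filter.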

More importantly, it’s a bad idea, because these terms can be part of important conversations about politics or public health. Video games are not just for gaming and escapism, but are part of the fabric of our lives as a platform for political speech and expression.  As the world went into pandemic lockdown, Hong Kong democracy activists took to Nintendo’s hit Animal Crossing to keep their pro-democracy protest going online (and Animal Crossing was banned in China shortly after). Just as many Black Lives Matter protests took to the streets, other protesters voiced their support in-game.  Earlier this month, the Biden campaign introduced Animal Crossing yard signs which other players can download and place in front of their in-game home. EFF is part of this too—you can show your support for EFF with in-game hoodies and hats. 

A screenshot in-game showing Chow, an Animal Crossing panda villager, asking whether the player is an “Internet freedom fighter.” The player has highlighted “Yup.”

Nevertheless, Nintendo seems uncomfortable with political speech on its platform. The Japanese Terms of Use prohibit in-game “political advocacy” (政治的な主張 or seijitekina shuchou), which led to a candidate for Japan’s Prime Minister canceling an in-game campaign event. But Nintendo has not extended this blanket ban to the Terms of Use for Nintendo of America or Nintendo of Europe.

Nintendo has the right to host the platform as it sees fit. But just because it can do this doesn’t mean it should. Nintendo should also recognize that it has provided a platform for political and social expression, and allow people to use words that are part of important conversations about our world, whether about the pandemic, protests against police violence, or democracy in Hong Kong.

Author: Kurt Opsahl

Trump’s Ban on TikTok Violates First Amendment by Eliminating Unique Platform for Political Speech, Activism of Millions of Users, EFF Tells Court

We filed a friend-of-the-court brief—primarily written by the First Amendment Clinic at the Sandra Day O’Connor College of Law—in support of a TikTok employee who is challenging President Donald Trump’s ban on TikTok and was seeking a temporary restraining order (TRO). The employee contends that Trump’s executive order infringes the Fifth Amendment rights of TikTok’s U.S.-based employees. Our brief, which is joined by two prominent TikTok users, urges the court to consider the First Amendment rights of millions of TikTok users when it evaluates the plaintiff’s claims.

Notwithstanding its simple premise, TikTok has grown to have an important influence in American political discourse and organizing. Unlike other platforms, users on TikTok do not need to “follow” other users to see what they post. TikTok thus uniquely allows its users to reach wide and diverse audiences. That’s why the two TikTok users who joined our brief use the platform. Lillith Ashworth, whose critiques of Democratic presidential candidates went viral last year, uses TikTok to talk about U.S. politics and geopolitics. The other user, Jynx, maintains an 18+ adult-only account, where they post content that centers on radical leftist liberation, feminism, and decolonial politics, as well as the labor rights of exotic dancers.

Our brief argues that in evaluating the plaintiff’s claims, the court must consider the ban’s First Amendment implications. The Supreme Court has established that rights set forth in the Bill of Rights work together; as a result the plaintiff’s Fifth Amendment claims are enhanced by the First Amendment considerations. We say in our brief:

A ban on TikTok violates fundamental First Amendment principles by eliminating a specific type of speaking, the unique expression of a TikTok user communicating with others through that platform, without sufficient considerations for the users’ speech. Even though the order facially targets the platform, its censorial effects are felt most directly by the users, and thus their First Amendment rights must be considered in analyzing its legality.

EFF, the First Amendment Clinic, and the individual amici urge the court to adopt a higher standard of scrutiny when reviewing the plaintiff’s claims against the president. Not only are the plaintiff’s Fifth Amendment liberties at stake, but millions of TikTok users have First Amendment freedoms at stake. The Fifth Amendment and the First Amendment are each critical in securing life, liberty, and due process of law. When these amendments are examined separately, they each deserve careful analysis; but when the interests protected by these amendments come together, a court should apply an even higher standard of scrutiny.

The hearing on the TRO scheduled for tomorrow was canceled after the government promised the court that it did not intend to include the payment of wages and salaries within the executive order’s definition of prohibited transactions, thus addressing the plaintiff’s most urgent claims.



Author: Nathaniel Sobel

Things to Know Before Your Neighborhood Installs an Automated License Plate Reader

Every week EFF receives emails from members of homeowners’ associations wondering if their Homeowners’ Association (HOA) or Neighborhood Association is making a smart choice by installing automated license plate readers (ALPRs). Local groups often turn to license plate readers thinking that they will protect their community from crime. But the truth is, these cameras—which record every license plate coming in and out of the neighborhood—may create more problems than they solve.

The False Promise of ALPRs

Some members of a community think that, whether they’ve experienced crime in their neighborhood or not, a neighborhood needs increased surveillance in order to be safe. This reflects a larger nationwide trend: people’s fear of crime is incredibly high and getting higher, despite the fact that crime rates in the United States are low by historical standards.

People imagine that if a crime is committed, an association member can hand over to police the license plate numbers of everyone who drove past a camera around the time the crime is believed to have been committed. But this will lead to innocent people becoming suspects simply because they happened to drive through a specific neighborhood. For some communities, this might mean hundreds of cars end up under suspicion.

Also, despite what ALPR vendors like Flock Safety and Vigilant Solutions claim, there is no real evidence that ALPRs reduce crime. ALPR vendors, like other surveillance salespeople, operate on the assumption that surveillance will reduce crime, either by making would-be criminals aware of the surveillance in hopes it will be a deterrent, or by using the technology to secure convictions of people who have allegedly committed crimes in the neighborhood. However, there is little empirical evidence that such surveillance reduces crime.

ALPRs do, however, present a host of other potential problems for people who live, work, or commute in a surveilled area. 

The Danger ALPRs Present To Your Neighborhood

ALPRs are billed as neighborhood watch tools that allow a community to record which cars enter and leave, and when. They essentially turn any neighborhood into a gated community by casting suspicion on everyone who comes and goes. And some of these ALPR systems (including Flock’s) can be programmed to allow all neighbors to have access to the records of vehicle comings and goings. But driving through a neighborhood should not lead to suspicion. There are thousands of reasons why a person might be passing through a community, but ALPRs allow anyone in the neighborhood to decide who belongs and who doesn’t. Whatever motivates that individual – racial biases, frustration with another neighbor, even disagreements among family members – could all be used in conjunction with ALPR records to implicate someone in a crime, or in any variety of other legal-but-uncomfortable situations. 

The fact that your car passes a certain stop sign at a particular time of day may not seem like invasive information. But you can actually learn a lot of personal information about a person from their daily routines—and from when they deviate from those routines. If a person’s car stops leaving in the morning, a nosy neighbor at the neighborhood association could infer that they may have lost their job. If a married couple’s cars are never at the house at the same time, neighbors could infer relationship discord. These ALPR cameras also give law enforcement the ability to learn the comings and goings of every car, effectively making it impossible for drivers to protect their privacy.

These dangers are only made worse by the broad dissemination of this sensitive information. It goes not just to neighbors, but also to Flock employees, and even your local police. It might also go to hundreds of other police departments around the country through Flock’s new and aptly-named TALON program, which links ALPRs around the country. 

ALPR Devices Lack Oversight

HOAs and Neighborhood Associations are rarely equipped or trained to make responsible decisions when it comes to invasive surveillance technology. After all, these people are not bound by the oversight that sometimes accompanies government use of technology–they’re your neighbors. While police are subject to legally-binding privacy rules (like the Fourth Amendment), HOA members are not. Neighbors could, for instance, use ALPRs to see when a neighbor comes home from work every day. They could see if a house has a regular visitor and what time that person arrives and leaves. In San Antonio, one HOA board member was asked what could be done to prevent someone with access to the technology from obsessively following the movements of specific neighbors. She had never considered that possibility: “Asked whether board members had established rules to keep track of who searches for what and how often, Cronenberger said it hadn’t dawned on her that someone might use the system to track her neighbors’ movements.”

Machine Error Endangers Black Lives

Like all machines, ALPRs make mistakes. And these mistakes can endanger people’s lives and physical safety. For example, an ALPR might erroneously conclude that a passing car’s license plate matches the plate of a car on a hotlist of stolen cars. This can lead police to stop the car and detain the motorists. As we know, these encounters can turn violent or even deadly, especially if those cars misidentified are being driven by Black motorists. 

This isn’t a hypothetical scenario. Just last month, a false alert from an ALPR led police to stop a Black family, point guns at them, and force them to lie on their bellies in a parking lot—including their children, aged six and eight. Tragically, this is not the first time that police have aimed a gun at a Black motorist because of a false ALPR hit.

Automated License Plate Reader Abuses by Police Foreshadow Abuses by Neighborhoods 

Though police have used these tools for decades, communities have only recently had the ability to install their own ALPR systems. In that time, EFF and many others have criticized both ALPR vendors and law enforcement for their egregious abuses of the data collected. 

A February 2020 California State Auditor’s report on four jurisdictions’ use of this tech raised several significant concerns:

  • The data collected is primarily not related to individuals suspected of crimes;
  • Many agencies did not implement privacy-protective oversight measures, despite laws requiring it;
  • Several agencies did not have documented usage or retention policies;
  • Many agencies lack guarantees that the stored data is appropriately secure;
  • Several agencies did not adequately confirm that entities they shared data with had a right to receive that information;
  • And many did not have appropriate safeguards for users accessing the data.

California agencies aren’t unique: a state audit in Vermont found that 11% of ALPR searches violated state restrictions on when cops can and can’t look at the data. Simply put: police abuse this technology regularly. And unfortunately, neighborhood users will likely do the same. 

The ease with which this data can be shared is only increasing. Vigilant Solutions, a popular vendor for police ALPR tech, shares this data between thousands of departments via its LEARN database. Flock, a vendor that aims to offer this technology to neighborhoods, has just announced a new nationwide partnership that allows communities to share footage and data with law enforcement anywhere in the country, vastly expanding its reach. While Flock does include several safeguards that Vigilant Solutions does not, such as encrypted video and 30-day deletion policies, many potential abuses remain.

Additionally, some ALPR systems can automatically flag cars that don’t look a certain way—from rusted vehicles to cars with dents or poor paint jobs—endangering anyone who might not feel the need (or have the income required) to keep their car in perfect shape. These “vehicle fingerprints” might flag not just a particular license plate, but “a blue Honda CRV with damage on the passenger side door and a GA license plate from Fulton County.” Rather than monitoring specific vehicles that come in and out of a neighborhood via their license plate, “vehicle fingerprint” features could create a troubling, dragnet style of monitoring. Just because a person is driving a damaged car from an accident, or a long winter has left a person’s car rusty, does not mean they are worthy of suspicion or undue police or community harassment.

Some ALPRs are even designed to search for certain bumper stickers, which could reveal information on the political or social views of the driver. While they aren’t in every ALPR system, and some are just planned, all of these features taken together increase the potential for abuse far beyond the dangers of collecting license plate numbers alone. 

What You Can Tell Your Neighbors if You’re Concerned 

Unfortunately, ALPR devices are not the first piece of technology to exploit irrational fear of crime in order to expand police surveillance and spy on neighbors and passersby. Amazon’s surveillance doorbell Ring currently has over 1,300 partnerships with individual police departments, which allow departments to directly request footage from an individual’s personal surveillance camera without presenting a warrant. ALPRs are at least as dangerous: they track our comings and goings; the data can indicate common travel patterns (or unique ones); and because license plates are required by law, there is no obvious way to protect yourself.

If your neighborhood is considering this technology, you have options. Remind your neighbors that it collects data on anyone, regardless of suspicion. They may think that only people with something to hide need to worry—but hide what? And from who? You may not want your neighbor knowing what time you leave your neighborhood in the morning and get back at night. You may also not want the police to know who visits your home and for how long. While the intention is to protect the neighborhood from crime, introducing this kind of surveillance may also end up incriminating your neighbors and friends for reasons you know nothing about. 

You can also point out that ALPRs have not been shown to reduce crime. Likewise, consider sending around the California State Auditor’s report on abuses by law enforcement. And if the technology is installed, you can (and should) limit the amount of data that’s shared with police, automatically or manually. Remind people of the type of information ALPRs collect and what your neighbors can infer about your private life. 

If you drive a car, you’re likely being tracked by ALPRs, at least sometimes. But that doesn’t mean your neighborhood should contribute to the surveillance state. Everyone ought to have a right to pass through a community without being tracked, and without accidentally revealing personal details about how they spend their day. Automatic license plate readers installed in neighborhoods are a step in the wrong direction. 

Author: Jason Kelley

Researchers Targeting AI Bias, Sex Worker Advocate, and Global Internet Freedom Community Honored at EFF’s Pioneer Award Ceremony

San Francisco – The Electronic Frontier Foundation (EFF) is honored to announce the 2020 Barlow recipients at its Pioneer Award Ceremony: artificial intelligence and racial bias experts Joy Buolamwini, Dr. Timnit Gebru, and Deborah Raji; sex worker activist and tech policy and content moderation researcher Danielle Blunt; and the global Internet freedom organization Open Technology Fund (OTF) and its community.

The virtual ceremony will be held October 15 from 5:30 pm to 7 pm PT. The keynote speaker for this year’s ceremony will be Cyrus Farivar, a longtime technology investigative reporter, author, and radio producer. The event will stream live and free on Twitch, YouTube, Facebook, and Twitter, and audience members are encouraged to give a $10 suggested donation. EFF is supported by small donors around the world and you can become an official member at https://eff.org/PAC-join.

Joy Buolamwini, Dr. Timnit Gebru, and Deborah Raji’s trailblazing academic research on race and gender bias in facial analysis technology laid the groundwork for a national movement—and a growing number of legislative victories—aimed at banning law enforcement’s use of flawed and overbroad face surveillance in American cities. The trio collaborated on the Gender Shades series of papers based on Buolamwini’s MIT thesis, revealing alarming bias in AI services from companies like Microsoft, IBM, and Amazon. Their subsequent internal and external advocacy spans Stanford, University of Toronto, Black in AI, Project Include, and the Algorithmic Justice League. Buolamwini, Gebru, and Raji are bringing light to the profound impact of face recognition technologies on communities of color, personal privacy and free expression, and the fundamental freedom to go about our lives without having our movements and associations covertly monitored and analyzed.

Danielle Blunt is one of the co-founders of Hacking//Hustling, a collective of sex workers and accomplices working at the intersection of tech and social justice to interrupt state surveillance and violence facilitated by technology. A professional NYC-based Femdom and Dominatrix, Blunt researches sex work and equitable access to technology from a public health perspective. She is one of the lead researchers of Hacking//Hustling’s “Erased: The Impact of FOSTA-SESTA and the Removal of Backpage” and “Posting to the Void: CDA 230, Censorship, and Content Moderation,” studying the impact of content moderation on the movement work of sex workers and activists. She is also leading organizing efforts around sex worker opposition to the EARN IT Act, which threatens access to encrypted communications, a tool that many in the sex industry rely on for harm reduction, and would also increase platform policing of sex workers and queer and trans youth. Blunt is on the advisory board of Berkman Klein’s Initiative for a Representative First Amendment (IfRFA) and the Surveillance Technology Oversight Project in NYC. She enjoys redistributing money from institutions, watching her community thrive, and “making men cry.”

The Open Technology Fund (OTF) has fostered a global community and provided support—both monetary and in-kind—to more than 400 projects that seek to combat censorship and repressive surveillance. The OTF community has helped more than two billion people in over 60 countries access the open Internet more safely and advocate for democracy. OTF earned trust and built community through its open source ethos, transparency, and a commitment to independence from its funder, the U.S. Agency for Global Media (USAGM), and helped fund several technical projects at EFF. However, President Trump recently installed a new CEO for USAGM, who immediately sought to replace OTF’s leadership and board and to freeze the organization’s funds—threatening to leave many well-established global freedom tools, their users, and their developers in the lurch. Since then, OTF has made some progress in regaining control, but it remains at risk and, as of this writing, USAGM is still withholding critical funding. With this award, EFF is honoring the entire OTF community for their hard work and dedication to global Internet freedom and recognizing the need to protect this community and ensure its survival despite the current political attacks.

“One of EFF’s guiding principles is that technology should enhance our rights and freedoms instead of undermining them,” said EFF Executive Director Cindy Cohn. “All our honorees this year are on the front lines of this important work—striving to ensure that no matter where you are from, what you look like, or what you do for a living, the technology you rely on makes your life better and not worse. While most technology is here to stay, a technological dystopia is not inevitable. Used thoughtfully, and supported by the right laws and policies, technology can and will make the world better. We are so proud that all of our honorees are joining us to fight for this together.”

Presented every year since 1992, EFF’s Pioneer Awards recognize the leaders who are extending freedom and innovation on the electronic frontier. Previous honorees have included Malkia Cyril, William Gibson, danah boyd, Aaron Swartz, and Chelsea Manning. Sponsors of the 2020 Pioneer Award ceremony include Dropbox; No Starch Press; Ridder, Costa, and Johnstone LLP; and Ron Reed.

Author: Rebecca Jeschke

EFF to EU Commission on Article 17: Prioritize Users’ Rights, Let Go of Filters

During the Article 17 (formerly #Article13) discussions about the availability of copyright-protected works online, we fought hand-in-hand with European civil society to avoid all communications being subjected to interception and arbitrary censorship by automated upload filters. However, by turning tech companies and online services operators into copyright police, the final version of the EU Copyright Directive failed to live up to the expectations of millions of affected users who fought for an Internet in which their speech is not automatically scanned, filtered, weighed, and measured.

Our Watch Has Not Ended

EU “Directives” are not automatically applicable. EU member states must “transpose” the directives into national law. The Copyright Directive includes some safeguards to prevent the restriction of fundamental free expression rights, ultimately requiring national governments to balance the rights of users and copyright holders alike. At the EU level, the Commission has launched a Stakeholder Dialogue to support the drafting of guidelines for the application of Article 17, which must be implemented in national laws by June 7, 2021. EFF and other digital rights organizations have a seat at the table, alongside rightsholders from the music and film industries and representatives of big tech companies like Google and Facebook.

During the stakeholder meetings, we made a strong case for preserving users’ rights to free speech, making suggestions for averting a race among service providers to over-block user content. We also asked the EU Commission to share the draft guidelines with rights organizations and the public, and allow both to comment on and suggest improvements to ensure that they comply with European Union civil and human rights requirements.

The Targeted Consultation: Don’t Experiment With User Rights

The Commission has partly complied with EFF and its partners’ request for transparency and participation. The Commission launched a targeted consultation addressed to members of the EU Stakeholder Group on Article 17. Our response centers on mitigating the dangerous consequences of the Article 17 experiment by prioritizing user rights, specifically free speech, and by limiting the use of automated filtering, which is notoriously inaccurate.

Our main recommendations are:

  • Produce a non-exhaustive list of service providers that are excluded from the obligations under the Directive. Service providers not listed might not fall under the Directive’s rules, and would have to be evaluated on a case-by-case basis;
  • Ensure that the platforms’ obligation to show best efforts to obtain rightsholders’ authorization and ensure infringing content is not available is a mere due diligence duty and must be interpreted in light of the principles of proportionality and user rights exceptions;
  • Recommend that Member States not mandate the use of technology or impose any specific technological solutions on service providers in order to demonstrate “best efforts”;
  • Establish a requirement to avoid general user (content) monitoring. Spell out that the implementation of Article 17 should never lead to the adoption of upload filters and hence general monitoring of user content;
  • State that the mere fact that content recognition technology is used by some companies does not mean that it must be used to comply with Article 17. Quite the opposite is true: automated technologies to detect and remove content based on rightsholders’ information may not be in line with the balance sought by Article 17;
  • Safeguard the diversity of platforms and not put disproportionate burden on smaller companies, which play an important role in the EU tech ecosystem;
  • Establish that content recognition technology cannot assess whether the uploaded content is infringing or covered by a legitimate use. Filters may serve as assistants, but can never replace a (legal) review by a qualified human;
  • Nor can filter technology assess whether user content is likely to infringe copyright;
  • If you believe that filters work, prove it. The Guidance should contain a recommendation to create and maintain test suites if member states decide to establish copyright filters. These suites should evaluate the filters’ ability to correctly identify both infringing materials and non-infringing uses. Filters should not be approved for use unless they can meet this challenge;
  • Complaint and redress procedures are not enough. Fundamental rights must be protected from the start and not only after content has been taken down;
  • The Guidance should address the very problematic relationship between the use of automated filter technologies and privacy rights, in particular the right not to be subject to a decision based solely on automated processing under the GDPR.
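The test-suite recommendation above can be made concrete. The sketch below is a minimal, hypothetical harness (the function names, the toy keyword filter, and the labeled corpus are all illustrative, not any real filter’s API): it runs a candidate filter over labeled samples and measures how often lawful uses are wrongly blocked (false positives) and infringing uses are missed (false negatives) — exactly the two failure modes a filter would have to keep low before approval.

```python
# Hypothetical sketch of a copyright-filter test suite. All names and
# thresholds are illustrative assumptions, not part of any real system.

def evaluate_filter(filter_fn, labeled_samples):
    """Run filter_fn over (content, is_infringing) pairs and tally errors."""
    false_positives = 0  # lawful content wrongly blocked
    false_negatives = 0  # infringing content missed
    for content, is_infringing in labeled_samples:
        blocked = filter_fn(content)
        if blocked and not is_infringing:
            false_positives += 1
        if not blocked and is_infringing:
            false_negatives += 1
    total = len(labeled_samples)
    return false_positives / total, false_negatives / total

# Toy example: a naive keyword filter tested against a tiny labeled corpus.
naive_filter = lambda text: "FULL_MOVIE" in text
samples = [
    ("FULL_MOVIE upload", True),         # infringing: caught
    ("my review of FULL_MOVIE", False),  # lawful review: wrongly blocked
    ("home video of my cat", False),     # lawful: passes
    ("re-encoded film rip", True),       # infringing: missed
]
fp_rate, fn_rate = evaluate_filter(naive_filter, samples)
print(fp_rate, fn_rate)  # 0.25 0.25 — the toy filter both over- and under-blocks
```

Even this tiny corpus shows why context matters: the same keyword appears in an infringing upload and a lawful review, and no keyword match can tell them apart — which is the point of requiring measured error rates rather than taking filter accuracy on faith.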

Author: Christoph Schmon

EFF Tells California Supreme Court Not to Require ExamSoft for Bar Exam

This week, EFF sent a letter (pdf link) to the Supreme Court of California objecting to the required use of the proctoring tool ExamSoft for the October 2020 California Bar Exam. The letter explains that test takers should not be forced to give their biometric data to ExamSoft, which can use it for marketing purposes, share it with third parties, or hand it over to law enforcement, with no way for test takers to opt out or delete this information. This remote proctoring solution forces Bar applicants to surrender the privacy and security of their personal biometric information, violating the California Consumer Privacy Act. EFF asked the California Bar to devise an alternative option for the five thousand or so test takers expected next month.

ExamSoft is a popular proctoring and assessment software product that purports to allow remote testing while determining whether a student is cheating. To do so, it uses various privacy-invasive technical monitoring techniques, such as comparing test takers’ images using facial recognition, tracking eye movement, recording patterns of keystrokes, and recording video and audio of students’ surroundings as they take the test. The type of data ExamSoft collects includes “facial recognition and biometric data of each individual test taker for an extended period of time, including a retina or iris scan, fingerprint, voiceprint, or scan of hand or face geometry.” Additionally, ExamSoft has access to the device’s webcam, including audio and video, and to its screen, for the duration of the exam. 

ExamSoft’s collection of test takers’ biometric and other personal data implicates the California Consumer Privacy Act. At a minimum, the letter states, the State Bar of California must provide a mechanism for students to opt out of the sale of their data, and to delete it, to comply with this law: 

The California Bar should clearly inform test takers of their protections under the CCPA. Before test takers are asked to use such an invasive piece of software, the California Bar should confirm that, at an absolute minimum, it has in place a mechanism to allow test takers to access their ExamSoft data, to opt out of the “sale” of their data, and to request its deletion. Students should have all of these rights without facing threat of punishment. It is bad enough that the use of ExamSoft puts the state in the regrettable position of coercing students into compromising their privacy and security in exchange for their sole chance to take the Bar Exam. It should not compound that by denying them their rights under state privacy law.

In addition to these privacy invasions, proctoring software brings with it many other potential dangers, including threats to security: vast troves of personal data have already leaked from one proctoring company, ProctorU, affecting 440,000 users. The ACLU has also expressed concerns with the software’s use of facial recognition, which will “exacerbate racial and socioeconomic inequities in the legal profession and beyond.” And lastly, this type of software has been shown to have technical issues that could cause students to experience unexpected problems while taking the Bar Exam, and it comes with requirements that could harm users who cannot meet them, such as a relatively new laptop and broadband speeds that many households do not have. Other states have canceled the use of proctoring software for their bar exams due to the inability to ensure a “secure and reliable” experience. California should take this into account when considering its use of proctoring software.

The entrance fee for becoming a lawyer in California should not include compromising personal privacy and security. The Bar Exam is already a nerve-wracking, anxiety-inducing test. We ask the Supreme Court of California to take seriously the risks presented by ExamSoft and pursue alternatives that do not put exam takers in jeopardy.

Author: Jason Kelley