EFF to Court: Social Media Users Have Privacy and Free Speech Interests in Their Public Information

Special thanks to legal intern Rachel Sommers, who was the lead author of this post.

Visa applicants to the United States are required to disclose personal information including their work, travel, and family histories. And as of May 2019, they are required to register their social media accounts with the U.S. government. According to the State Department, approximately 14.7 million people will be affected by this new policy each year.

EFF recently filed an amicus brief in Doc Society v. Pompeo, a case challenging this “Registration Requirement” under the First Amendment. The plaintiffs in the case, two U.S.-based documentary film organizations that regularly collaborate with non-U.S. filmmakers and other international partners, argue that the Registration Requirement violates the expressive and associational rights of both their non-U.S.-based and U.S.-based members and partners. After the government filed a motion to dismiss the lawsuit, we filed our brief in district court in support of the plaintiffs’ opposition to dismissal. 

In our brief, we argue that the Registration Requirement invades the privacy and chills the free speech and association of both visa applicants and those in their social networks, including U.S. persons, even though the policy targets only publicly available information. These harms are amplified by the staggering number of social media users affected and the vast amounts of personal information they publicly share—both intentionally and unintentionally—on their social media accounts.

Social media profiles paint alarmingly detailed pictures of their users’ personal lives. By monitoring applicants’ social media profiles, the government can obtain information that it otherwise would not have access to through the visa application process. For example, visa applicants are not required to disclose their political views. However, applicants might choose to post their beliefs on their social media profiles. Those seeking to conceal such information might still be exposed by comments and tags made by other users. And due to the complex interactions of social media networks, studies have shown that personal information about users such as sexual orientation can reliably be inferred even when the user doesn’t expressly share that information. Although consular officers might be instructed to ignore this information, it is not unreasonable to fear that it might influence their decisions anyway.

Just as other users’ online activity can reveal information about visa applicants, so too can visa applicants’ online activity reveal information about other users, including U.S. persons. For example, if a visa applicant tags another user in a political rant or posts photographs of themselves and the other user at a political rally, government officials might correctly infer that the other user shares the applicant’s political beliefs. In fact, one study demonstrated that it is possible to accurately predict personal information about those who do not use any form of social media based solely on personal information and contact lists shared by those who do. The government’s surveillance of visa applicants’ social media profiles thus facilitates the surveillance of millions—if not billions—more people.
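
To make the mechanics concrete, here is a toy, purely illustrative sketch of how this kind of network-based inference can work: a simple majority vote over a hypothetical contact graph. The statistical models in the studies described above are far more sophisticated; every name and attribute below is invented.

```python
# Toy illustration of network-based inference: guessing an attribute a user
# never shared from what their contacts share publicly. This is a bare-bones
# majority vote; real research models are statistical and far more powerful.
from collections import Counter

# Hypothetical contact graph and publicly shared attributes.
contacts = {"target": {"ana", "bo", "cruz", "dee"}}
shared = {"ana": "attends rallies", "bo": "attends rallies",
          "cruz": "attends rallies", "dee": "stays home"}

def infer(user):
    """Predict a user's attribute by majority vote among their contacts."""
    votes = Counter(shared[c] for c in contacts[user] if c in shared)
    guess, count = votes.most_common(1)[0]
    return guess, count / sum(votes.values())

print(infer("target"))  # ('attends rallies', 0.75): inferred, never shared
```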

Because social media users have privacy interests in their public social media profiles, government surveillance of digital content risks chilling free speech. If visa applicants know that the government can glean vast amounts of personal information about them from their profiles—or that their anonymous or pseudonymous accounts can be linked to their real-world identities—they will be inclined to engage in self-censorship. Many will likely curtail or alter their behavior online—or even disengage from social media altogether. Importantly, because of the interconnected nature of social media, these chilling effects extend to those in visa applicants’ social networks, including U.S. persons.

Studies confirm these chilling effects. Citizen Lab found that 62 percent of survey respondents would be less likely to “speak or write about certain topics online” if they knew that the government was engaged in online surveillance. A Pew Research Center survey found that 34 percent of its survey respondents who were aware of the online surveillance programs revealed by Edward Snowden had taken at least one step to shield their information from the government, including using social media less often, uninstalling certain apps, and avoiding the use of certain terms in their digital communications.

One might be tempted to argue that concerned applicants can simply set their accounts to private. Some users choose to share their personal information—including their names, locations, photographs, relationships, interests, and opinions—with the public writ large. But others do so unintentionally. Given the difficulties associated with navigating privacy settings within and across platforms and the fact that privacy settings often change without warning, there is good reason to believe that many users publicly share more personal information than they think they do. Moreover, some applicants might fear that setting their accounts to private will negatively impact their applications. Others—especially those using social media anonymously or pseudonymously—might be loath to maximize their privacy settings because they use their platforms with the specific intention of reaching large audiences.

These chilling effects are further strengthened by the broad scope of the Registration Requirement, which allows the government to continue surveilling applicants’ social media profiles once the application process is over. Personal information obtained from those profiles can also be collected and stored in government databases for decades. And that information can be shared with other domestic and foreign governmental entities, as well as current and prospective employers and other third parties. It is no wonder, then, that social media users might severely limit or change the way they use social media.

Secrecy should not be a prerequisite for privacy—and the review and collection by the government of personal information that is clearly outside the scope of the visa application process creates unwarranted chilling effects on both visa applicants and their social media associates, including U.S. persons. We hope that the D.C. district court denies the government’s motion to dismiss the case and ultimately strikes down the Registration Requirement as unconstitutional under the First Amendment.

Go to Source
Author: Sophia Cope

Inside the Invasive, Secretive “Bossware” Tracking Workers

COVID-19 has pushed millions of people to work from home, and a flock of companies offering software for tracking workers has swooped in to pitch their products to employers across the country.

The services often sound relatively innocuous. Some vendors bill their tools as “automatic time tracking” or “workplace analytics” software. Others market to companies concerned about data breaches or intellectual property theft. We’ll call these tools, collectively, “bossware.” While aimed at helping employers, bossware puts workers’ privacy and security at risk by logging every click and keystroke, covertly gathering information for lawsuits, and using other spying features that go far beyond what is necessary and proportionate to manage a workforce.

This is not OK. When a home becomes an office, it remains a home. Workers should not be subject to nonconsensual surveillance or feel pressured to be scrutinized in their own homes to keep their jobs.

What can they do?

Bossware typically lives on a computer or smartphone and has privileges to access data about everything that happens on that device. Most bossware collects, more or less, everything that the user does. We looked at marketing materials, demos, and customer reviews to get a sense of how these tools work. There are too many individual types of monitoring to list here, but we’ll try to break down the ways these products can surveil into general categories.

The broadest and most common type of surveillance is “activity monitoring.” This typically includes a log of which applications and websites workers use. It may include who they email or message—including subject lines and other metadata—and any posts they make on social media. Most bossware also records levels of input from the keyboard and mouse—for example, many tools give a minute-by-minute breakdown of how much a user types and clicks, using that as a proxy for productivity. Productivity monitoring software will attempt to assemble all of this data into simple charts or graphs that give managers a high-level view of what workers are doing.
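
To illustrate how little machinery this takes, here is a minimal sketch of that input-level metric, assuming the third-party pynput library (pip install pynput). The minute-by-minute counting scheme is our own illustration, not any vendor's actual implementation.

```python
# Minimal sketch of the "input level" productivity proxy: count keystrokes
# and mouse clicks, then emit a minute-by-minute log. Illustrative only.
import time
from collections import Counter
from pynput import keyboard, mouse  # third-party: pip install pynput

counts = Counter()

def on_press(key):
    counts["keys"] += 1

def on_click(x, y, button, pressed):
    if pressed:  # count button-down events only
        counts["clicks"] += 1

kb = keyboard.Listener(on_press=on_press)
ms = mouse.Listener(on_click=on_click)
kb.start()
ms.start()

while True:  # one log line per minute, as the marketing material describes
    time.sleep(60)
    print(f"{time.strftime('%H:%M')} keys={counts['keys']} "
          f"clicks={counts['clicks']}")
    counts.clear()
```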

Every product we looked at has the ability to take frequent screenshots of each worker’s device, and some provide direct, live video feeds of their screens. This raw image data is often arrayed in a timeline, so bosses can go back through a worker’s day and see what they were doing at any given point. Several products also act as keyloggers, recording every keystroke a worker makes, including unsent emails and private passwords. A couple even let administrators jump in and take over remote control of a user’s desktop. These products usually don’t distinguish between work-related activity and personal account credentials, bank data, or medical information.
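
A screenshot timeline is similarly easy to build. The sketch below, which assumes the Pillow library (pip install Pillow), captures the full screen on a fixed interval and saves timestamped images that can later be replayed as a worker's day; the interval and file naming are invented for the example.

```python
# Sketch of a screenshot timeline: periodic full-screen captures saved with
# timestamps. Interval and naming are illustrative, not any vendor's design.
import time
from pathlib import Path
from PIL import ImageGrab  # third-party: pip install Pillow

out = Path("timeline")
out.mkdir(exist_ok=True)

while True:
    ImageGrab.grab().save(out / f"{time.strftime('%Y%m%d-%H%M%S')}.png")
    time.sleep(300)  # every five minutes; real products are configurable
```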

Some bossware goes even further, reaching into the physical world around a worker’s device. Companies that offer software for mobile devices nearly always include location tracking using GPS data. At least two services—StaffCop Enterprise and CleverControl—let employers secretly activate webcams and microphones on worker devices.

There are, broadly, two ways bossware can be deployed: as an app that’s visible to (and maybe even controllable by) the worker, or as a secret background process that workers can’t see. Most companies we looked at give employers the option to install their software either way. 

Visible monitoring

Sometimes, workers can see the software that is surveilling them. They may have the option to turn the surveillance on or off, often framed as “clocking in” and “clocking out.” Of course, the fact that a worker has turned off monitoring will be visible to their employer. For example, with Time Doctor, workers may be given the option to delete particular screenshots from their work session. However, deleting a screenshot will also delete the associated work time, so workers only get credit for the time during which they are monitored. 

Workers may be given access to some, or all, of the information that’s collected about them. Crossover, the company behind WorkSmart, compares its product to a fitness tracker for computer work. Its interface allows workers to see the system’s conclusions about their own activity presented in an array of graphs and charts.

Different bossware companies offer different levels of transparency to workers. Some give workers access to all, or most, of the information that their managers have. Others, like Teramind, indicate that they are turned on and collecting data, but don’t reveal everything they’re collecting. In either case, it can often be unclear to workers exactly what data is being collected unless they make specific requests to their employer or carefully scrutinize the software itself.

Invisible monitoring

The majority of companies that build visible monitoring software also make products that try to hide themselves from the people they’re monitoring. Teramind, Time Doctor, StaffCop, and others make bossware that’s designed to be as difficult to detect and remove as possible. At a technical level, these products are indistinguishable from stalkerware. In fact, some companies require employers to specifically configure antivirus software before installing their products, so that the worker’s antivirus won’t detect and block the monitoring software’s activity.

A screenshot from Time Doctor’s sign-up flow, which allows employers to choose between visible and invisible monitoring.

This kind of software is marketed for a specific purpose: monitoring workers. However, most of these products are really just general purpose monitoring tools. StaffCop offers a version of their product specifically designed for monitoring children’s use of the Internet at home, and ActivTrak states that their software can also be used by parents or school officials to monitor kids’ activity. Customer reviews for some of the software indicate that many customers do indeed use these tools outside of the office.

Most companies that offer invisible monitoring recommend that it only be used for devices that the employer owns. However, many also offer features like remote and “silent” installation that can load monitoring software on worker computers, without their knowledge, while their devices are outside the office. This works because many employers have administrative privileges on computers they distribute. But for some workers, the company laptop they use is their only computer, so company monitoring is ever-present. There is great potential for misuse of this software by employers, school officials, and intimate partners. And the victims may never know that they are subject to such monitoring.

The table below shows the monitoring and control features available from a small sample of bossware vendors. This isn’t a comprehensive list, and may not be representative of the industry as a whole; we looked at companies that were referred to in industry guides and search results that had informative publicly-facing marketing materials. 

Table: Common surveillance features of bossware products

Features of several worker-monitoring products, based on the companies’ marketing material. 9 of the 10 companies we looked at offered “silent” or “invisible” monitoring software, which can collect data without worker knowledge.

How common is bossware?

The worker surveillance business is not new, and it was already quite large before the outbreak of a global pandemic. While it’s difficult to assess how common bossware is, it’s undoubtedly become much more common as workers are forced to work from home due to COVID-19. Awareness Technologies, which owns InterGuard, claimed to have grown its customer base by over 300% in just the first few weeks after the outbreak. Many of the vendors we looked at exploit COVID-19 in their marketing pitches to companies.

Some of the biggest companies in the world use bossware. Hubstaff customers include Instacart, Groupon, and Ring. Time Doctor claims 83,000 users; its customers include Allstate, Ericsson, Verizon, and Re/Max. ActivTrak is used by more than 6,500 organizations, including Arizona State University, Emory University, and the cities of Denver and Malibu. Companies like StaffCop and Teramind do not disclose information about their customers, but claim to serve clients in industries like health care, banking, fashion, manufacturing, and call centers. Customer reviews of monitoring software give more examples of how these tools are used. 

We don’t know how many of these organizations choose to use invisible monitoring, since the employers themselves don’t tend to advertise it. In addition, there isn’t a reliable way for workers themselves to know, since so much invisible software is explicitly designed to evade detection. Some workers have contracts that authorize certain kinds of monitoring or prevent others. But for many workers, it may be impossible to tell whether they’re being watched. Workers who are concerned about the possibility of monitoring may be safest to assume that any employer-provided device is tracking them.

What is the data used for?

Bossware vendors market their products for a wide variety of uses. Some of the most common are time tracking, productivity tracking, compliance with data protection laws, and IP theft prevention. Some use cases may be valid: for example, companies that deal with sensitive data often have legal obligations to make sure data isn’t leaked or stolen from company computers. For off-site workers, this may necessitate a certain level of on-device monitoring. But an employer should not undertake any monitoring for such security purposes unless they can show it is necessary, proportionate, and specific to the problems it’s trying to solve.

Unfortunately, many use cases involve employers wielding excessive power over workers. Perhaps the largest class of products we looked at is designed for “productivity monitoring” or enhanced time tracking—that is, recording everything that workers do to make sure they’re working hard enough. Some companies frame their tools as potential boons for both managers and workers. Collecting information about every second of a worker’s day isn’t just good for bosses, they claim—it supposedly helps the worker, too. Other vendors, like Work Examiner and StaffCop, market themselves directly to managers who don’t trust their staff. These companies often recommend tying layoffs or bonuses to performance metrics derived from their products.

Marketing material from Work Examiner’s home page, https://www.workexaminer.com/

Some firms also market their products as punitive tools, or as ways to gather evidence for potential worker lawsuits. InterGuard advertises that its software “can be silently and remotely installed, so you can conduct covert investigations [of your workers] and bullet-proof evidence gathering without alarming the suspected wrongdoer.” This evidence, it continues, can be used to fight “wrongful termination suits.” In other words, InterGuard can provide employers with an astronomical amount of private, secretly-gathered information to try to quash workers’ legal recourse against unfair treatment.

None of these use cases, even the less-disturbing ones discussed above, warrant the amount of information that bossware usually collects. And nothing justifies hiding the fact that the surveillance is happening at all.

Most products take periodic screenshots, and few of them allow workers to choose which ones to share. This means that sensitive medical, banking, or other personal information is captured alongside screenshots of work emails and social media. Products that include keyloggers are even more invasive, and often end up capturing passwords to workers’ personal accounts.

Work Examiner’s description of its Keylogging feature, specifically highlighting its ability to capture private passwords.

Unfortunately, excessive information collection often isn’t an accident, it’s a feature. Work Examiner specifically advertises its product’s ability to capture private passwords. Another company, Teramind, reports on every piece of information typed into an email client—even if that information is subsequently deleted. Several products also parse out strings of text from private messages on social media so that employers can know the most intimate details of workers’ personal conversations. 

Let’s be clear: this software is specifically designed to help employers read workers’ private messages without their knowledge or consent. By any measure, this is unnecessary and unethical.

What can you do?

Under current U.S. law, employers have too much leeway to install surveillance software on devices they own. In addition, little prevents them from coercing workers to install software on their own devices (as long as the surveillance can be disabled outside of work hours). Different states have different rules about what employers can and can’t do. But workers often have limited legal recourse against intrusive monitoring software. 

That can and must change. As state and national legislatures continue to adopt consumer data privacy laws, they must also establish protections for workers with respect to their employers. To start:

  • Surveillance of workers—even on employer-owned devices—should be necessary and proportionate. 
  • Tools should minimize the information they collect, and avoid vacuuming up personal data like private messages and passwords. 
  • Workers should have the right to know what exactly their managers are collecting. 
  • And workers need a private right of action, so they can sue employers that violate these statutory privacy protections.

In the meantime, workers who know they are subject to surveillance—and feel comfortable doing so—should engage in conversations with their employers. Companies that have adopted bossware must consider what their goals are, and should try to accomplish them in less-intrusive ways. Bossware often incentivizes the wrong kinds of productivity—for example, forcing people to jiggle their mouse and type every few minutes instead of reading or pausing to think. Constant monitoring can stifle creativity, diminish trust, and contribute to burnout. If employers are concerned about data security, they should consider tools that are specifically tailored to real threats, and which minimize the personal data caught up in the process.

Many workers won’t feel comfortable speaking up, or may suspect that their employers are monitoring them in secret. If they are unaware of the scope of monitoring, they should consider that work devices may collect everything—from web history to private messages to passwords. If possible, they should avoid using work devices for anything personal. And if workers are asked to install monitoring software on their personal devices, they may be able to ask their employers for a separate, work-specific device instead, so that private information can more easily be kept siloed away.

Finally, workers may not feel comfortable speaking up about being surveilled out of concern for staying employed at a time of record unemployment. A choice between invasive, excessive monitoring and joblessness is not really a choice at all.

COVID-19 has put new stresses on us all, and it is likely to fundamentally change the ways we work as well. However, we must not let it usher in a new era of even-more-pervasive monitoring. We live more of our lives through our devices than ever before. That makes it more important than ever that we have a right to keep our digital lives private—from governments, tech companies, and our employers.

Go to Source
Author: Bennett Cyphers

Tell Your Senator: Vote No on the EARN IT Act

This month, Americans are out in the streets, demanding police accountability. But rather than consider reform proposals, a key Senate committee is focused on giving unprecedented powers to law enforcement—including the ability to break into our private messages by creating encryption backdoors.

TAKE ACTION

STOP THE EARN IT BILL BEFORE IT BREAKS ENCRYPTION

This Thursday, the Senate Judiciary Committee is scheduled to debate and vote on the so-called EARN IT Act, S. 3398, a bill that would allow the government to scan every message sent online. The EARN IT Act creates a 19-person commission that would be dominated by law enforcement agencies, with Attorney General William Barr at the top. This unelected commission would be authorized to make new rules on “best practices” that Internet websites would have to follow. Any Internet platform that doesn’t comply with this law enforcement wish list would lose the legal protections of Section 230.

The new rules that Attorney General Barr creates won’t just apply to social media or giant Internet companies. Section 230 is what protects owners of small online forums, websites, and blogs with comment sections from being punished for the speech of others. Without Section 230 protections, platform owners and online moderators will have every incentive to over-censor speech, since they could potentially be sued out of existence based on someone else’s statements.

Proponents of EARN IT say that the bill isn’t about encryption or privacy. They’re cynically using crimes against children as an excuse to change online privacy standards. But it’s perfectly clear what the sponsors’ priorities are. Sen. Lindsey Graham (R-SC), one of EARN IT’s cosponsors, has introduced another bill that’s a direct attack on encrypted messaging. And Barr has said over and over again that encrypted services should be forced to offer police special access.

The EARN IT Act could end user privacy as we know it. Tech companies that provide private, encrypted messaging could have to rewrite their software to allow police special access to their users’ messages.

This bill is a power grab by police. We need your help to stop it today. Contact your Senator and tell them to oppose the EARN IT Act.

TAKE ACTION

STOP THE EARN IT BILL BEFORE IT BREAKS ENCRYPTION

Go to Source
Author: Joe Mullin

EFF Successfully Defends Users’ Right to Challenge Patents and Still Recover Legal Fees

When individuals and companies are wrongly accused of patent infringement, they should be encouraged to stand up and defend themselves. When they win, the public does too. While the patent owner loses revenue, the rest of society gets greater access to knowledge, product choice, and space for innovation. This is especially true when defendants win by proving the patent asserted against them is invalid. In such cases, the patent gets cancelled, and the risk of wrongful threats against others vanishes.

The need to encourage parties to pursue meritorious defenses is partly why patent law gives judges the power to force losers to pay a winner’s legal fees in “exceptional” patent cases. This fee-shifting is especially important because so many invalid patents are in the hands of patent trolls: entities that exploit the exorbitant costs of litigating in federal court to scare defendants into paid settlements. When patent trolls abuse the litigation system, judges must be able to make them pay a price.

However, proving invalidity in district court takes a lot of time and money. That’s why Congress created a faster, cheaper alternative when it passed the America Invents Act in 2011.

That alternative is inter partes review (IPR), which allows parties to get a decision on a patent’s validity in less expensive, streamlined proceedings at the Patent Office. One benefit of this system is the huge savings to parties and courts of avoiding needless patent litigation. Another is that going to the Patent Office should, in theory, yield more accurate decisions. When Congress created the IPR system, the whole point was to encourage parties to use it to make patent litigation cheaper and faster while improving the quality of issued patents by allowing the Patent Office to weed out those it shouldn’t have granted.

Fee-shifting and IPR are both meant to deter meritless patent lawsuits. That’s why we weighed in last year in a case called Dragon Intellectual Property v. Dish Network. In April, a panel of the Federal Circuit agreed with our position. It’s a win for properly applied fee-shifting in the patent system, and for every company that wants to fight back after being hit with a meritless patent threat.

In the Dragon v. Dish case, the district court tried to stop defendant Dish Network from getting its fees because of Dish’s success using the Patent Office’s IPR system. That’s right—Dish was penalized for winning.

In this case, the district court saw that a party was successful at proving invalidity in an IPR—but then actually held that against the winning party. Dish Network was one of several defendants Dragon sued for infringement. After the suit was filed, Dish initiated IPR proceedings. But before those proceedings finished, the district court construed the patent’s claims in a way that required a finding of non-infringement. While that decision was on appeal, the Patent Office finished its review and found Dragon’s patent invalid.

Yet when Dish tried to recover the cost of litigating Dragon’s invalid patent, the district court refused on the ground that Dish wasn’t the prevailing party. Oddly, Dish’s success proving invalidity at the Patent Office became grounds for stripping it of the prevailing-party status it had separately earned by winning on non-infringement.

That ruling made no sense; if anything, Dish’s success in proving invalidity should reinforce, rather than undo, its status as the prevailing party. It would have also created a big new downside for defendants considering IPR proceedings: if they won, they could have lost prevailing party status in district court, and thus the possibility of recovering the cost of paying their attorneys. So EFF weighed in, filing an amicus brief in support of Dish’s prevailing party status with the Federal Circuit in February of 2019.

More than a year later, on April 21, 2020, the Federal Circuit finally ruled, agreeing with EFF that the district court’s finding of non-infringement made Dish the prevailing party. Dish’s parallel success in proving invalidity at the Patent Office did not change that. The Federal Circuit’s decision makes clear that proving invalidity at the Patent Office doesn’t make an earlier non-infringement win—and thus the possibility of recovering attorneys’ fees—disappear. That principle is important: If patent owners could save themselves from fee awards by having their patents invalidated by the Patent Office, they would have a perverse incentive to assert the worst patents in litigation.

But a month after the Federal Circuit issued its decision, Dragon filed a petition asking the full court to convene and re-hear the case. On June 24, the Federal Circuit finally denied that petition, making its decision final.

Even though the Federal Circuit was skeptical that fees would ultimately be recoverable in this case, its decision will help protect the IPR system, as well as proper fee-shifting. Those who need an efficient way to challenge a wrongly-granted patent will have one, and it won’t make the cost of district court litigation even greater. 

Go to Source
Author: Alex Moss

Your Phone is Vulnerable Because of 2G, But it Doesn’t Have to Be

Security researchers have been talking about the vulnerabilities in 2G for years. 2G technology, which at one point underpinned the entire cellular communications network, is widely known to be vulnerable to eavesdropping and spoofing. But even though its insecurities are well known and the technology is increasingly archaic, many people still rely on it as their main mobile technology, especially in rural areas. Even as carriers start rolling out the fifth generation of mobile communications, known as 5G, 2G technology is still supported by modern smartphones.

The manufacturers of smartphone operating systems (e.g., Apple, Google, and Samsung) are in the perfect position to solve this problem by allowing users to switch off 2G.

What is 2G and why is it vulnerable?

2G is the second generation of mobile communications, introduced in 1991. It is an old technology that did not anticipate many of the risks its users face today. Over the years, many vulnerabilities have been discovered in 2G and its companion signaling protocol, SS7.

2G’s problems stem from two facts. First, it uses weak encryption between the tower and the device that can be cracked in real time by an attacker to intercept calls or text messages. In fact, the attacker can do this passively, without ever transmitting a single packet. Second, 2G provides no authentication of the tower to the phone, which means that anyone can seamlessly impersonate a real 2G tower and your phone will never be the wiser.

Cell-site simulators sometimes work this way. They can exploit security flaws in 2G in order to intercept your communications. Even though many of the security flaws in 2G have been fixed in 4G, more advanced cell-site simulators can take advantage of remaining flaws to downgrade your connection to 2G, making your phone susceptible to the above attacks. This makes every user vulnerable—from journalists and activists to medical professionals, government officials, and law enforcement.
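
A toy model shows why the lack of tower authentication matters. In the sketch below, the handset’s only criterion is signal strength, so a high-powered fake tower advertising 2G always wins. Everything here is invented for illustration; a real cellular stack is vastly more complex.

```python
# Toy model of 2G tower selection. Because 2G has no tower-to-phone
# authentication, the phone cannot check legitimacy -- it simply camps on
# the strongest signal, which a cell-site simulator can always provide.
from dataclasses import dataclass

@dataclass
class Tower:
    operator: str
    signal: int        # arbitrary signal-strength units
    generation: int    # 2, 3, 4, ...
    legitimate: bool   # invisible to the phone!

def pick_tower(towers):
    # The 'legitimate' flag plays no role: the phone has no way to verify it.
    return max(towers, key=lambda t: t.signal)

towers = [
    Tower("Carrier A", signal=60, generation=4, legitimate=True),
    # A simulator overpowers the real tower and offers only 2G, whose weak
    # encryption it can then crack to intercept calls and texts.
    Tower("Carrier A", signal=95, generation=2, legitimate=False),
]

t = pick_tower(towers)
print(f"Camped on {t.generation}G tower, legitimate={t.legitimate}")
```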

How do we fix it?

3G, 4G, and 5G deployments fix the worst vulnerabilities in 2G that allow cell-site simulators to eavesdrop on SMS text messages and phone calls (though there are still some vulnerabilities left to fix). Unfortunately, many people worldwide still depend on 2G networks. Therefore, brand-new, top-of-the-line phones on the market today—such as the Samsung Galaxy, Google Pixel, and iPhone 11—still support 2G technology. And the vast majority of these smartphones don’t give users any way to switch off 2G support. That means these modern 3G and 4G phones are still vulnerable to being downgraded to 2G.

The simplest solution for users is to use encrypted messaging such as Signal whenever possible. But a better solution would be the ability to switch 2G off entirely, so the connection can’t be downgraded. Unfortunately, this is not an option on iPhones or most Android phones.

Apple, Google, and Samsung should allow users to switch 2G off so they can better protect themselves. Ideally, smartphone OS makers would block 2G by default and allow users to turn it back on if they need it for connectivity in a remote area. Either way, with this simple action, Apple, Google, and Samsung could protect millions of their users from the worst harms of cell-site simulators.

Go to Source
Author: Cooper Quintin

5 Serious Flaws in the New Brazilian “Fake News” Bill that Will Undermine Human Rights

The Brazilian Senate is scheduled to vote this week on the most recent version of “PLS 2630/2020,” the so-called “Fake News” bill. This new version, supposedly aimed at safety and curbing “malicious coordinated actions” by users of social networks and private messaging apps, would allow the government to identify and track countless innocent users who haven’t committed any wrongdoing in order to catch a few malicious actors.

The bill creates a clumsy regulatory regime to intervene in the technology and policy decisions of both public and private messaging services in Brazil, requiring them to institute new takedown procedures, enforce various kinds of identification of all their users, and greatly increase the amount of information that they gather and store from and about their users. They also have to ensure that all of that information can be directly accessed by staff in Brazil, so it is directly and immediately available to the government—bypassing the strong safeguards for users’ rights of existing international mechanisms such as Mutual Legal Assistance Treaties.

This sprawling bill is moving quickly, and it comes at a very bad time. Right now, secure communication technologies are more important than ever to cope with the COVID-19 pandemic, to collaborate and work securely, and to protest or organize online. It’s also really important for people to be able to have private conversations, including private political conversations. There are many things wrong with this bill, far more than we could fit into one article. For now, we’ll do a deep dive into five serious flaws in the existing bill that would undermine privacy, expression and security.

Flaw 1: Forcing Social Media and Messaging Companies to Collect Legal Identification of All Users

The new draft of Article 7 is both clumsy and contradictory. First, the bill (Article 7, paragraph 3) requires “large” social networks and private messaging apps (those offering service in Brazil to more than two million users) to identify every account’s user by requesting their national identity cards. It’s a retroactive and general requirement, meaning that identification must be requested for each and every existing user. Article 7’s main provision is not limited to identifying a user in response to a court order; it also applies when there is a complaint about an account’s activity, or when the company finds itself unsure of a user’s identity. While users are explicitly permitted to use pseudonyms, they may not keep their legal identities confidential from the service provider. Compelling companies to identify an online user should only be done in response to a request by a competent authority, not a priori. In India, a similar proposal is expected to be released by the country’s IT Ministry, although reports indicate that ID verification would be optional.

In 2003, Brazil made SIM card registration mandatory for prepaid cell phones, requiring prepaid subscribers to present proof of identity, such as their official national identity card, driver’s license, or taxpayer number. Article 39 of the new draft expands that law by creating new mandatory identification requirements for obtaining telephone SIM cards, and Article 8 explicitly requires private messaging applications that identify their users via an associated telephone number to delete accounts whenever the underlying telephone number is deregistered. Telephone operators are required to help with this process by providing a list of numbers that are no longer used by the original subscriber. SIM card registration undermines people’s ability to communicate, organize, and associate with others anonymously. David Kaye, the United Nations Special Rapporteur on Freedom of Opinion and Expression, has asked states to refrain from making the identification of users a condition for access to digital communications and online services, and from requiring SIM card registration for mobile users.

Even if the draft text eliminates Article 7, the bill remains dangerous to free expression because authorities will still be able to identify users of private messaging services by linking a cell phone number to an account. Brazilian authorities will simply have to unmask an internet user’s identity by following domestic procedures for accessing such data from the telecom provider.

Internet users will be obliged to hand over identifying information to big tech companies if Article 7 is approved as currently written, with or without paragraph 3. The compulsory identification provision is a blatant infringement on the due process rights of individuals. Countries like China and South Korea have mandated that users register their real names and identification numbers with online service providers. South Korea used to require websites with more than 100,000 visitors per day to authenticate users’ identities by having them enter their resident ID numbers when using portals or other sites. But South Korea’s Constitutional Court struck down the law as unconstitutional, stating that “the [mandatory identification] system does not seem to have been beneficial to the public. Despite the enforcement of the system, the number of illegal or malicious postings online has not decreased.”

Flaw 2: Forcing Social Networking and Messaging Companies to Retain Immense Logs of User Communications  

A Brazilian political cartoon of a man being arrested by a police officer in his home.

Man: What happened? Police officer: You shared that message that went viral accusing someone of a corruption scheme. They’re saying that it’s a lie and is calúnia. Descriptive text: It’s easy to imagine how the new traceability rule could be abused and make us all afraid to share content online. We can’t let that happen.

Article 10 compels social networks and private messaging applications to retain the chain of all communications that have been “massively forwarded,” for the purpose of potential criminal investigation or prosecution. The new draft requires three months of data storage of the complete chain of communication for such messages, including the date and time of forwarding and the total number of users who receive the message. These obligations are conditioned on virality thresholds: they apply when an instance of a message has been forwarded to groups or lists by more than 5 users within 15 days, and the message’s content has reached 1,000 or more users. The service provider is also apparently expected to temporarily retain this data for all forwarded messages during the 15-day period in order to determine whether or not the virality threshold for “massively forwarded” will be met, as the sketch below illustrates. This provision blatantly infringes on due process rights by compelling providers to retain everyone’s communications before anyone has committed any legally defined offense.
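
To see how mechanical, and how sweeping, this test is, here is a sketch of the virality check the bill would effectively force providers to run on every forwarded message. The thresholds come from the draft text; the data model is our invention for illustration.

```python
# Sketch of the bill's "massively forwarded" test. Thresholds are from the
# draft text; the ForwardEvent data model is invented for illustration.
from dataclasses import dataclass
from datetime import datetime, timedelta

WINDOW = timedelta(days=15)
MIN_FORWARDERS = 5     # "more than 5 users" forwarding to groups or lists
MIN_RECIPIENTS = 1000  # content reaching "1,000 or more users"

@dataclass
class ForwardEvent:
    forwarder: str      # who forwarded the message
    recipients: int     # how many users that forward reached
    when: datetime

def massively_forwarded(events, now):
    recent = [e for e in events if now - e.when <= WINDOW]
    forwarders = {e.forwarder for e in recent}
    reach = sum(e.recipients for e in recent)
    return len(forwarders) > MIN_FORWARDERS and reach >= MIN_RECIPIENTS

# Note: to evaluate this at all, a provider must already be logging the
# forwarding chain of *every* message for 15 days -- threshold met or not.
```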

There have also been significant changes to how this text interacts with encryption and with communications providers’ efforts to know less about what their users are doing. These mandatory retention requirements may create an incentive to weaken end-to-end encryption, because end-to-end encrypted services may not be able to comply with provisions requiring them to recognize when a particular message has been independently forwarded a certain number of times without undermining the security of their encryption.

Although the current draft (unlike previous versions) does not create new crimes, it requires providers to trace messages before any crime has been committed, so that the information can later be used in a criminal investigation or prosecution of the specific crimes defined in Articles 138 to 140 or Article 147 of Brazil’s Penal Code, such as defamation, threats, and calúnia. This means, for example, that if you share a message that denounces corruption by a local authority and it gets forwarded more than 1,000 times, authorities may criminally accuse you of calúnia against your local authority.

Companies must limit the retention of personal data to what is reasonably necessary and proportionate to certain legitimate business purposes. This is “data minimization”: the principle that any company should minimize its processing of consumer data. Minimization is an important tool in the data protection toolbox. This bill goes against that principle, favoring dangerous big-data collection practices.

Flaw 3: Banning Messaging Companies from Allowing Broadcast Groups, Even if Users Sign Up

Articles 9 and 11 require private messaging tools to impose a maximum membership limit on broadcast and discussion groups (something that WhatsApp does today, but that not every communications tool necessarily does or will do), and require that the ability to reach mass audiences via private messaging platforms be strictly limited and controlled, even when those audiences opt in. The vision of the bill seems to be that mass discussion and mass broadcast are inherently dangerous and must only happen in public, and that no one should create forums or media for these interactions to happen in a truly private way, even with the clear and explicit consent of the participants or recipients.

If an organization like an NGO, a labor union, or a political party wanted to run a discussion forum among its membership, or to send its newsletter through a tool similar to WhatsApp to all the members who’ve chosen to receive it, Articles 9 and 11 would require that content to be visible to, and subject to the control of, a platform operator—at least once some (unspecified) audience size limit was reached.

Flaw 4: Forcing Social Media and Messaging Companies to Make Private User Logs Available Remotely

Article 37 compels large social networks and private messaging apps to appoint legal representatives in Brazil. It also forces those companies to provide remote access to their user databases and logs to their staff in Brazil so the local employees can be directly forced to turn them over. 

This undermines user security and privacy. It increases the number of employees (and devices) that can access sensitive data and reduces the company’s ability to control vulnerabilities and unauthorized access, not least because this is global in scale and, should it be adopted in Brazil, could be replicated by other countries. Each new person and each new device adds a new security risk. 

Flaw 5: No Limitations on Applying this Law to Users Outside of Brazil 

Paragraphs 1 and 2 of Article 1 provide some jurisdictional exclusions, but all of these are applied at the company level—that is, a foreign company could be exempt if it is small (fewer than 2,000,000 users) or does not offer services in Brazil. None of these limitations, however, relates to users’ nationality or location. Thus, the bill, by its terms, requires a company to create certain policies and procedures about content takedowns, mandatory identification of users, and other topics, none of which are in any way limited to people based in Brazil. Even if the intent is only to force the collection of ID documents from users who are based in Brazil, the bill neglects to say so.

Addressing “Fake News” Without Undermining Human Rights

There are many innovative responses being developed to help cut down on abuses of messaging and social media apps, through both policy responses and technical solutions. WhatsApp, for example, already limits the number of recipients a single message can be forwarded to at a time and shows users when messages have been forwarded; viral messages are labeled with double arrows to indicate they did not originate from a close contact. However, shutting down bad actors cannot come at the expense of silencing millions of other users, invading their privacy, or undermining their security. To ensure that human rights are preserved, the Brazilian legislature must reject the current version of this bill. Moving forward, human rights such as privacy, expression, and security must be baked into the law from the beginning.

Go to Source
Author: Katitza Rodriguez

Dutch Law Proposes a Wholesale Jettisoning of Human Rights Considerations in Copyright Enforcement

With the passage of last year’s Copyright Directive, the EU demanded that member states pass laws that reduce copyright infringement by internet users, while also requiring that they safeguard the fundamental rights of users (such as the right to free expression) and the limitations to copyright. These safeguards must include protections for the new EU-wide exemption for commentary and criticism. Meanwhile, member states are also required to uphold the GDPR, which safeguards users against mass, indiscriminate surveillance, even as the Directive demands that platforms somehow monitor everything every user posts to decide whether it infringes copyright.

Serving these goals means that when EU member states turn the Directive into their national laws (the “transposition” process), their governments will have to decide which parts of the Directive to give more weight, and courts will have to figure out whether the resulting laws pass constitutional muster while still satisfying member states’ obligation to follow EU rules.

The initial forays into transposition were catastrophic. First came France’s disastrous proposal, which “balanced” copyright enforcement with Europeans’ fundamental rights to fairness, free expression, and privacy by simply ignoring those public rights.

Now, the Dutch Parliament has landed in the same untenable legislative cul-de-sac as their French counterparts, proposing a Made-in-Holland version of the Copyright Directive that omits:

  • Legally sufficient protections for users unjustly censored due to false accusations of copyright infringement;
  • Legally sufficient protection for users whose work makes use of the mandatory, statutory exemptions for parody and criticism;
  • A ban on “general monitoring”—that is, continuous, mass surveillance;
  • Legally sufficient protection for “legitimate uses” of copyright works.

These are not optional elements of the Copyright Directive. These protections were enshrined in the Directive as part of the bargain meant to balance the fundamental rights of Europeans against the commercial interests of entertainment corporations. The Dutch Parliament’s willingness to treat these human rights-preserving measures as legislative inconveniences, paying them mere lip service, is a grim harbinger of other EU nations’ pending lawmaking, and an indictment of the Dutch Parliament’s commitment to human rights.

EFF was pleased to lead a coalition of libraries, human rights NGOs, and users’ rights organizations in an open letter to the EU Commission, asking it to monitor national implementations to ensure they respect human rights.

In April, we followed this letter with a note to the EC’s Copyright Stakeholder Dialogue Team, setting out the impossibility of squaring the Copyright Directive with the GDPR’s rules protecting Europeans from “general monitoring,” and calling on them to direct member states to create test suites that can evaluate whether companies’ responses to their laws live up to their human rights obligations.

Today, we renew these and other demands, and we ask that Dutch Parliamentarians do their job in transposing the Copyright Directive, with the understanding that the provisions that protect Europeans’ rights are not mere ornaments, and any law that fails to uphold those provisions is on a collision course with years of painful, costly litigation.

Go to Source
Author: Cory Doctorow

Egypt’s Crackdown on Free Expression Will Cost Lives

For years, EFF has been monitoring a dangerous situation in Egypt: journalists, bloggers, and activists have been harassed, detained, arrested, and jailed, sometimes without trial, in increasing numbers by the Sisi regime. Since the COVID-19 pandemic began, these incidents have skyrocketed, affecting free expression both online and offline. 

As we’ve said before, this crisis means it is more important than ever for individuals to be able to speak out and share information with one another online. Free expression and access to information are particularly critical under authoritarian rulers and governments that dismiss or distort scientific data. But at a time when true information about the pandemic may save lives, the Egyptian government has instead expelled journalists from the country for their reporting on the pandemic, and arrested others on spurious charges for seeking information about prison conditions. Shortly after the coronavirus crisis began, a reporter for The Guardian was deported, while a reporter for The New York Times was issued a warning. Just last week the editor of Al Manassa, Nora Younis, was arrested on cybercrime charges (and later released). And the Committee to Protect Journalists reported today that at least four journalists arrested during the pandemic remain imprisoned.

Social media is also being monitored more closely than ever, with disastrous results: the Supreme Council for Media Regulation has banned the publishing of any data that contradicts the Ministry of Health’s official data. It has sent warning letters to news websites and social network accounts it claims are sharing false news, and individuals have been arrested for posting about the virus. The far-reaching ban, justified on national security grounds, also limits the use of pseudonyms by journalists and criminalizes discussion of other “sensitive” topics, such as Libya, and is (rightfully) being seen as censorship across the country. At a moment when obtaining true information is extremely important, the fact that Egypt’s government is escalating its attack on free expression is especially dangerous.

The government’s attacks on expression aren’t only damaging free speech online: rather than limiting the number of individuals in prison who are potentially exposed to the virus, Egyptian police have made matters worse by harassing, beating, and even arresting protestors who are demanding the release of prisoners held in dangerously overcrowded cells, or who simply ask for information on their arrested loved ones. Just last week, the family of Alaa Abd El Fattah, a leading Egyptian coder, blogger, and activist whom we’ve profiled in our Offline campaign, was attacked by police while protesting in front of Tora Prison. The next day, Alaa’s sister, Sanaa Seif, was forced into an unmarked car in front of the Prosecutor-General’s office as she arrived to submit a complaint regarding the assault and Alaa’s detention. She is now being held in pre-trial detention on charges of “broadcast[ing] fake news and rumors about the country’s deteriorating health conditions and the spread of the coronavirus in prisons” on Facebook, among others—according to police, for a fifteen-day period, though there is no way to know for sure that it will end then.

All of these actions put the health and safety of the Egyptian population at risk. We join the international coalition of human rights and civil liberties organizations demanding both Alaa and Sanaa be released, and asking Egypt’s government to immediately halt its assault on free speech and free expression. We must lift up the voices of those who are being silenced to ensure the safety of everyone throughout the country. 

Banner image CC-BY, by Molly Crabapple.

Go to Source
Author: Jason Kelley

Your Objections to the Google-Fitbit Merger

EFF Legal Intern Rachel Sommers contributed to this post.

When Google announced its intention to buy Fitbit, we had deep concerns. Google, a notoriously data-hungry company with a track record of reneging on its privacy policies, was about to buy one of the most successful wearables companies in the world—after Google had repeatedly tried, and failed, to launch a competing product of its own.

Fitbit users give their devices extraordinary access to their sensitive personal details, from their menstrual cycles to their alcohol consumption. In many cases, these “customers” didn’t come to Fitbit willingly, but instead were coerced into giving the company their data in order to get the full benefit of their employer-provided health insurance.

Companies can grow by making things that people love, or they can grow by buying things that people love. One produces innovation, the other produces monopolies.

Last month, EFF put out a call for Fitbit owners’ own thoughts about the merger, so that we could tell your story to the public and to the regulators who will have the final say over the merger. You obliged with a collection of thoughtful, insightful, and illuminating remarks that you generously permitted us to share. Here’s a sampling from the collection:

From K.H.: “It makes me very uncomfortable to think of Google being able to track and store even more of my information. Especially the more sensitive, personal info that is collected on my Fitbit.”

From L.B.: “Despite the fact that I continue to use a Gmail account (sigh), I never intended for Google to own my fitness data and have been seeking an alternative fitness tracker ever since the merger was announced.”

From B.C.: “I just read your article about this and wanted to say that while I’ve owned and worn a Fitbit since the Charge (before the HR), I have been looking for an alternative since I read that Google was looking to acquire Fitbit. I really don’t want “targeted advertisements” based on my health data or my information being sold to the highest bidder.”

From T.F.: “I stopped confirming my period dates, drinks and weight loss on my fitbit since i read about the [Google] merger. Somehow, i would prefer not to become a statistic on [Google].” 

From D.M.: “My family has used Fitbit products for years now and the idea of Google merging with them, in my opinion, is good and bad. Like everything in the tech industry, there are companies that hog all of the spotlight like Google. Google owns so many smaller companies and ideas that almost every productivity and shopping app on any mobile platform is in some way linked or owned by them. Fitbit has been doing just fine making their own trackers and products without any help from the tech giants, and that doesn’t need to stop now. I’m not against Google, but they have had a few security issues and their own phone line, the pixel, hasn’t been doing that well anyway. I think Fitbit should stay a stand alone company and keep making great products.”

From A.S.: “A few years back, I bought a Fitbit explicitly because they were doing well but didn’t seem to be on the verge of being acquired. I genuinely prefer using Android over iOS, and no longer want to take on the work of maintaining devices on third party OSes, so I wanted to be able to monitor steps without thinking it was all going to a central location.

Upon hearing about the merger, I found myself relieved I didn’t use the Fitbit for long (I found I got plenty of steps already and it was just a source of anxiety) so that the data can’t be merged with my massive Google owned footprint.”

From L.O.: “A few years ago, I bought a Fitbit to track my progress against weight-loss goals that I had established. Moreover, I have a long-term cardiac condition that requires monitoring by a third-party (via an ICD). So I wanted to have access to medical data that I could collect for myself. I had the choice to buy either an Apple Watch, Samsung Gear, Google Fit gear, or a Fitbit. I chose to purchase a Fitbit for one simple reason: I wanted to have a fitness device that did not belong to an OEM and/or data scavenger. So I bought a very expensive Fitbit Charge 2. I was delighted by the purchase. I had a top-of-the-line fitness device. And I had confidence that my intimate and personal data would be secure; I knew that my personal and confidential data would not be used to either target me or to include me in a targeted group.

Now that Google has purchased Fitbit, I have few options left that will allow me to confidentially collect and store my personal (and private) fitness information. I don’t trust Google with my data. They have repeatedly lied about data collection. So I have no confidence in their assertions that they will once again “protect” my data. I trust that their history of extravagant claims followed by adulterous actions will be repeated.

My fears concerning Google are well-founded. And as a result, I finally had to switch my email to an encrypted email from a neutral nation (i.e., Switzerland). And now, I have to spend even more money to protect myself from past purchases that are being hijacked by a nefarious content broker. Why should I have to spend even more money in order to ensure my privacy? My privacy is guaranteed by the United States Constitution, isn’t it? And it is an inalienable right, isn’t it? Since when can someone steal my right to privacy and transform it into their right to generate even more money? As a citizen, I demand that my right to privacy be recognized and defended by local, state, and federal governments. And in the meantime, I’m hoping that someone will create a truly private service for collecting and storing my personal medical information.”

From E.R.: “Around this time last year, I went to the Nest website. I am slowly making my condo a smart home with Alexa and I like making sure everything can connect to each other. I hopped on and was instantly asked to log in via Google. I was instantly filled with regret. I had my thermostat for just over a year and I knew that I hadn’t done my research and the Google giant had one more point of data collection on me – plus it was connected to my light bulbs and Echo. Great. 

Soon, I learn the Versa 2 is coming out – best part? It has ALEXA! I sign up right away—this is still safe. Sure. Amazon isn’t that great at data secrets, but a heck of a lot better than Google connected apps. Then, I got the news of the merger. I told my boyfriend this would be the last FitBit I owned—but have been torn as it has been a motivating tool for me and a way to be in competition with my family now that we live in different states. But it would be yet another data point for Google, leaving me wondering when it will possibly end. 

This may be odd coming from a Gmail account—but frankly, Gmail is the preferred UI for me. I tried to avoid Google search, but it proved futile when I just wasn’t getting the same results. Google slowly has more and more of my life—from YouTube videos, to email, to home heating, and now fitness… when is enough enough?”

From J.R.: “My choice to buy a Fitbit device instead of using a GoogleFit related device/app is largely about avoiding giving google more data. 

My choice to try Waze during its infancy was as much about its promise for the future as it was that it was not a Google Product and therefore google wouldn’t have all of my family’s sensitive driving data.

Google paid a cheap 1 Billion to purchase all my data from Waze and then proceeded to do nothing to improve the app. The app actually performs worse now on the same phone, sometimes taking 30 minutes to acquire GPS satellites that Google Maps (which i can’t uninstall) sees immediately.

Google now has all my historic driving data for years…. besides the fact that there is no real competitor to Waze and it does not seem like any company will ever try to compete with Google again on Maps and traffic data… why not continue using it? from my history, they can probably predict my future better than me.

The same with Fitbit… Now google will know every place I Run, Jog and walk…. not just where I park but exactly where i go…. is it not enough for them to know i went to the hospital but now they will know which floor (elevation), which wing (precise location data)…. they will get into mapping hospitals and other areas…. they will know exactly where we are and what we are doing….  

They will also sell our health data to various types of insurance companies, etc.

I believe Google should be broken up and not allowed to share data between the separate companies. I don’t believe google should be able to buy out companies that harvest data as part of their mission. If google buys fitbit, i will certainly close the account, delete what I can from it and sell the fitbit (if it has value left)….”

While the overwhelming majority of comments sought to halt the merger, a few people wrote to us in support of it. Here’s one of those comments.

From T.W.: “I’m really looking forward to the merger. I see the integration of Fitbit and Google Fit as a great bonus and hope to get far more insights than I get now. Hopefully the integration will progress really soon!”

If you’re a Fitbit owner and you’re alarmed by the thought of your data being handed to Google, we’d love to hear from you. Write to us at mergerstories@eff.org, and please let us know:

  • If we can publish your story (and, if so, whether you’d prefer to be anonymous);
  • If we can share your story with government agencies;
  • If we can share your email address with regulators looking for testimony.

Author: Cory Doctorow

California Agency Blocks Release of Police Use of Force and Surveillance Training, Claiming Copyright

Under a California law that went into effect on January 1, 2020, all law enforcement training materials must be “conspicuously” published on the California Commission on Peace Officer Standards and Training (POST) website. 

However, if you visit POST’s Open Data hub and try to download the officer training materials relating to face recognition technology or automated license plate readers (ALPRs), or the California Peace Officers Association’s course on use of force, you will receive only a Word document with a single sentence: 

 "The course presenter has claimed copyright for the expanded course outline."

This is unlawful, and unacceptable, EFF told POST in a letter submitted today. Under the new California law, SB 978, POST must post law enforcement training materials online if the materials would be available to the public under the California Public Records Act. Copyrighted material is available to the public under the California Public Records Act—in fact, EFF obtained a full, unredacted copy of POST’s ALPR training through a records request just last year. 

The company that creates POST’s courses on ALPR and face recognition is the same company that sells the technology: Vigilant Solutions (now a subsidiary of Motorola Solutions). This company has a long history of including non-publication clauses in its contracts with law enforcement as a means to control its intellectual property. But, as we explain in our letter, SB 978 is clear: copyright law is not a valid excuse for POST to evade its transparency obligations.

Just as bad: even when copyright isn’t an issue, POST has released only course outlines, not the training materials themselves. For use of force in particular, no actual training materials are available at all. With police use of force currently a hotly debated issue throughout the state and nation, it is all the more concerning that POST is unlawfully withholding this material.

SB 978 was sponsored by California State Senator Steven Bradford and supported by EFF and a number of civil rights groups in order to create a new level of transparency and public accountability. 

When EFF obtained the ALPR training last year, we found that Vigilant Solutions’ training was gravely outdated, included incorrect information, and raised questions about whether the presentation served the company’s commercial interests more than the public’s. EFF called on POST to suspend the course. However, since POST has not published the current training materials, the public does not yet know whether these problems have been adequately addressed. 

Our elected officials can pass laws regulating the police, and watchdog bodies can review law enforcement policies, but if training materials are kept secret, manufacturers of surveillance technology and private organizations have a back door through which to influence police practices without oversight or accountability.

If California POST is going to set and uphold police standards, then it cannot ignore the law. POST must make its training materials available online immediately.

Read EFF’s letter to POST on SB 978 violations.

Author: Dave Maass