How to Protect Privacy When Aggregating Location Data to Fight COVID-19

As governments, the private sector, NGOs, and others mobilize to fight the COVID-19 pandemic, we’ve seen calls to use location information—typically drawn from GPS and cell tower data—to inform public health efforts. Among the proposed uses of location data, one of the most widely discussed is analyzing aggregated data about which locations people are visiting, whether they are traveling less, and other collective measurements of individuals’ movement. This analysis might be used to inform judgments about the effectiveness of shelter-in-place orders and other social distancing measures. Projects making use of aggregated location data have graded residents of each state on their social distancing and visualized the travel patterns of people returning from spring break. Most recently, Google announced that it would publish ongoing “COVID-19 Community Mobility Reports,” which draw on the company’s store of location data to report on changes at a community level in people’s travel to various locations such as grocery stores, parks, and mass transit stations.

Compared to using individualized location data for contact tracing—as many governments around the world are already doing—deriving public health insights from aggregated location data poses far fewer privacy and other civil liberties risks such as restrictions on freedom of expression and association. However, even “aggregated” location data comes with potential pitfalls. This post discusses those pitfalls and describes some high-level best practices for those who seek to use aggregated location data in the fight against COVID-19.

What Does “Aggregated” Mean?

At the most basic level, there’s a difference between “aggregated” location data and “anonymized” or “deidentified” location data. Practically speaking, there is no way to deidentify individual location data. Information about where a person is and has been is itself usually enough to reidentify them. Someone who travels frequently between a given office building and a single-family home is probably unique in those habits, and therefore identifiable when that pattern is combined with other readily available data sources. One widely cited study from 2013 even found that researchers could uniquely characterize 50% of people using only two randomly chosen time and location data points.

Aggregation, on the other hand, can potentially be useful while still preserving individual privacy. Aggregating location data involves producing counts of behaviors instead of detailed timelines of individual location history. For instance, an aggregation might tell you how many people’s phones reported their location as being in a certain city within the last month. Or it might tell you, for a given area in a city, how many people traveled to that area during each hour of the last month. Whether a given scheme for aggregating location data actually improves privacy depends deeply on the details: On what timescale is the data aggregated? How large an area does each count cover? When is a count considered too low and dropped from the data set?
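
To make those parameters concrete, here is a minimal sketch, in Python, of this kind of aggregation. The record format, the hour-long buckets, and the suppression threshold are illustrative assumptions for this post, not any particular company’s pipeline:

from collections import Counter

# Hypothetical raw records: (user_id, unix_timestamp, area_id), where
# area_id names a coarse spatial cell such as a neighborhood or grid square.
pings = [
    ("u1", 1585720800, "area_42"),
    ("u2", 1585720900, "area_42"),
    ("u3", 1585724400, "area_17"),
]

def aggregate_hourly(pings, min_count=10):
    """Count distinct people per (area, hour) bucket; drop counts below min_count."""
    seen = set()
    counts = Counter()
    for user_id, ts, area_id in pings:
        hour = ts - (ts % 3600)            # truncate the timestamp to the hour
        bucket = (area_id, hour)
        if (user_id, bucket) not in seen:  # count each person at most once per bucket
            seen.add((user_id, bucket))
            counts[bucket] += 1
    # Suppress buckets whose counts are too small to publish safely.
    return {bucket: n for bucket, n in counts.items() if n >= min_count}

print(aggregate_hourly(pings, min_count=2))

Choosing a longer time bucket, a larger area, or a higher minimum count trades away detail for stronger privacy, which is exactly the kind of tradeoff at issue here.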

For example, Facebook uses differential privacy techniques, such as injecting statistical noise into the dataset, as part of the methodology of its “Data for Good” project. This project aggregates Facebook users’ location data and shares it with various NGOs, academics, and governments engaged in responding to natural disasters and fighting the spread of disease, including COVID-19.
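
To illustrate the noise-injection idea in the abstract (this is a sketch of the general differential privacy technique, not a description of Facebook’s actual methodology), a release step might add Laplace noise, scaled by a privacy parameter epsilon, to each aggregated count before it is shared:

import random

def laplace_noise(scale):
    """Sample Laplace(0, scale) noise as the difference of two exponentials."""
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def release_counts(counts, epsilon=1.0, sensitivity=1):
    """Add Laplace(sensitivity / epsilon) noise to each aggregated count.

    sensitivity=1 assumes each person contributes to at most one count;
    if one person can appear in many buckets, the noise must be scaled up
    to give the same privacy guarantee.
    """
    scale = sensitivity / epsilon
    return {bucket: max(0, round(n + laplace_noise(scale)))
            for bucket, n in counts.items()}

print(release_counts({("area_42", "2020-04-01 10:00"): 120,
                      ("area_17", "2020-04-01 10:00"): 45}))

Smaller values of epsilon mean more noise and stronger privacy; larger values mean more accurate counts but weaker protection.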

There is no single magic formula for aggregating individual location data such that it provides insights that might be useful for some decisions and yet still cannot be reidentified. Instead, it’s a question of tradeoffs. As a matter of public policy, it is critical that user privacy not be sacrificed when creating aggregated location datasets to inform decisions about COVID-19 or anything else.

How Do We Evaluate the Use of Aggregated Location Data to Fight COVID-19?

Because aggregation reduces the risk of revealing intimate information about individuals’ lives, we are less concerned about this use of location data to fight COVID-19 than we are about individualized tracking. Of course, the aggregation parameters generally need to be chosen by domain experts, and, as in the Facebook and Google examples above, those experts will often be working within private companies with proprietary access to the data. Even if they make all the right choices, the public needs to be able to review those choices, because the companies are sharing the public’s data. The experts doing the aggregation also face pressure to weaken the privacy protections whenever a particular decision-maker claims the data set must be more granular to be meaningful to them. Ideally, companies would also consult outside experts before moving forward with plans to aggregate and share location data. Getting public input on whether a given data-sharing scheme sufficiently preserves privacy can help counteract the bias that such pressure creates.

As a result, companies like Google that produce reports based on aggregated location data from users should release their full methodology as well as information about who these reports are shared with and for what purpose. To the extent they only share certain data with selected “partners,” these groups should agree not to use the data for other purposes or attempt to re-identify individuals whose data is included in the aggregation. And, as Google has already done, companies should pledge to end the use of this data when the need to fight COVID-19 subsides.

For any data sharing plan, consent is critical: Did each person consent to the method of data collection, and did they consent to the use? Consent must be specific, informed, opt-in, and voluntary. Ordinarily, users should have the choice of whether to opt in to every new use of their data, but we recognize that obtaining consent to aggregate previously acquired location data to fight COVID-19 may be difficult to do quickly enough to address the public health need. That’s why it’s especially important that users be able to review and delete their data at any time. The same should be true for anyone who truly consents to the collection of this information. Many entities that hold location information, like data brokers that collect location data through ads and hidden tracking in apps, can’t meet these consent standards. Yet many of the uses of aggregated location data that we’ve seen in response to COVID-19 draw from these tainted sources. At the very least, data brokers should not profit from public health insights derived from their stores of location data, including through free advertising. Nor should they be allowed to “COVID wash” their business practices: the existence of these data stores is unethical, and should be addressed with new consumer data privacy laws.

Finally, we should remember that location data collected from smartphones has limitations and biases. Smartphone ownership remains a proxy for relative wealth, even in regions like the United States where 80% of adults have a smartphone. People without smartphones tend to already be marginalized, so making public policy based on aggregate location data can wind up disregarding the needs of those who simply don’t show up in the data, and who may need services the most. Even among the people with smartphones, the seeming authoritativeness and comprehensiveness of large scale data can cause leaders to reach erroneous conclusions that overlook the needs of people with fewer resources. For example, data showing that people in one region are traveling more than people in another region might not mean, as first appears, that these people are failing to take social distancing seriously. It might mean, instead, that they live in an underserved area and must thus travel longer distances for essential services like groceries and pharmacies.

In general, our advice to organizations considering sharing aggregated location data: Get consent from the users who supply the data. Be cautious about the details. Aggregate at the highest level of generality that will still be useful. Share your plans with the public before you release the data. And avoid sharing “deidentified” or “anonymized” location data that is not aggregated—it doesn’t work.

Author: Jacob Hoffman-Andrews


How EFF Evaluates Government Demands for New Surveillance Powers

The COVID-19 public health crisis has no precedent in living memory. But government demands for new high-tech surveillance powers are all too familiar. This includes well-meaning proposals to use various forms of data about disease transmission among people. Even in the midst of a crisis, the public must carefully evaluate such government demands, because surveillance invades privacy, deters free speech, and unfairly burdens vulnerable groups. It also metastasizes behind closed doors. And new surveillance powers tend to stick around. For example, nearly two decades after the 9/11 attacks, the NSA is still conducting dragnet Internet surveillance.

Thus, when governments demand new surveillance powers—especially now, in the midst of a crisis like the ongoing COVID-19 outbreak—EFF has three questions:

  • First, has the government shown its surveillance would be effective at solving the problem?
  • Second, if the government shows efficacy, we ask: Would the surveillance do too much harm to our freedoms?
  • Third, if the government shows efficacy, and the harm to our freedoms is not excessive, we ask: Are there sufficient guardrails around the surveillance?

Would It Work?

The threshold question is whether the government has shown that its surveillance plan would be effective at solving the problem at hand. This must include published details about what the government plans, why this would help, and what rules would apply. Absent efficacy, there is no reason to advance to the next questions. Surveillance technology is always a threat to our freedoms, so it is only justified where (among other things) it would actually do its job.

Sometimes, we simply can’t tell whether the plan would hit its target. For example, governments around the world are conducting location surveillance with phone records, or making plans to do so, in order to contain COVID-19. As we recently wrote, governments so far haven’t shown this surveillance works.

Would It Do Too Much Harm?

Even if the government shows that a surveillance power would be effective, EFF still opposes its use if it would too greatly burden our freedoms. High-tech surveillance can turn our lives into open books. It can chill and deter our participation in protests, advocacy groups, and online forums. Its burdens fall all too often on people of color, immigrants, and other vulnerable groups. Breaches of government data systems can expose intimate details about our lives to scrutiny by adversaries including identity thieves, foreign governments, and stalkers. In short, even if surveillance would be effective at solving a problem, it must also be necessary and proportionate to that problem, and not have an outsized impact on vulnerable groups.

Thus, for example, EFF opposes NSA dragnet Internet surveillance, even if it can theoretically provide leads to uncovering terrorists, such as the proverbial needle in the haystack. We believe this sort of mass, suspicionless surveillance is simply incompatible with universal human rights.  Similarly, we oppose face surveillance, even if this technology sometimes contributes to solving crime. The price to our freedoms is simply too great.

On the other hand, the CDC’s proposed program for contact tracing of international flights might be necessary and proportionate. It would require airlines to maintain the names and contact information of passengers and crews arriving from abroad. If a person on a flight turned out to be infected, the program would then require the airline to send the CDC the names and contact information of the other people on the flight. This program would apply to a discrete set of information about a discrete set of people. It would only occasionally lead to disclosure of this information to the government. And it is tailored to a heightened transmission risk: people returning from a foreign country, who are densely packed for many hours in a sealed chamber. However, as we recently wrote, we don’t know whether this program has sufficient safeguards.

Are the Safeguards Sufficient?

Even if the government shows a form of high-tech surveillance is effective, and even if such surveillance would not intolerably burden our freedoms, EFF still seeks guardrails to limit whether and how the government may conduct this surveillance. These include, in the context of surveillance for public health purposes:

1.  Consent. For reasons of both personal autonomy and effective public health response, people should have the power to decide whether or not to participate in surveillance systems, such as an app built for virus-related location tracking. Such consent must be informed, voluntary, specific, and opt-in.

2. Minimization. Surveillance programs must collect, retain, use, and disclose the least possible amount of personal information needed to solve the problem at hand. For example, information collected for one purpose must not be used for another purpose, and must be deleted as soon as it is no longer useful to the original purpose. In the public health context, it may often be possible to engineer systems that do not share personal information with the government. When the government has access to public health information, it must not use it for other purposes, such as enforcement of criminal or immigration laws.

3. Information security. Surveillance programs must process personal information in a secure manner, and thereby minimize risk of abuse or breach. Robust security programs must include encryption, third-party audits, and penetration tests. And there must be transparency about security practices.

4. Privacy by design. Governments that undertake surveillance programs, and any corporate vendors that help build them, must employ privacy officers, who are knowledgeable about technology and privacy, and who ensure privacy safeguards are designed into the program.

5. Community control. Before a government agency uses a new form of surveillance, or uses a form of surveillance it has already acquired in a new way, it must first obtain permission from its legislative authority, including approval of the agency’s proposed privacy policy. The legislative authority must consider community input based on the agency’s privacy impact report and proposed privacy policy.

6. Transparency. The government must publish its policies and training materials, and regularly publish statistics and other information about its use of each surveillance program in the greatest detail possible. Also, it must regularly conduct and publish the results of audits by independent experts about the effectiveness and any misuse of each program. Further, it must fully respond to public records requests about its programs, taking into account the privacy interests of people whose personal information has been collected.

7. Anti-bias. Surveillance must not intentionally or disparately burden people on the basis of categories such as race, ethnicity, religion, nationality, immigration status, LGBTQ status, or disability.

8. Expression. Surveillance must not target, or document information about, people’s political or religious speech, association, or practices.

9. Enforcement. Members of the community must have the power to go to court to enforce these safeguards, and evidence collected in violation of these safeguards must be excluded from court proceedings.

10. Expiration. If the government acquires a new surveillance power to address a crisis, that power must expire when the crisis ends. Likewise, personal data that is collected during the crisis, and used to help mitigate the crisis, must be deleted or minimized when the crisis is over. And crises cannot be defined to last in perpetuity.

Outside the context of public health, surveillance systems need additional safeguards. For example, before using a surveillance tool to enforce criminal laws, the government must first obtain a warrant from a judge, based on probable cause that evidence of a crime or contraband would be found, and particularly describing who and what may be surveilled. Targets of such surveillance must be promptly notified, whether or not they are ever prosecuted. Additional limits are needed for more intrusive forms of surveillance: use must be limited to investigation of serious violent crimes, and only after exhaustion of less intrusive investigative methods.

Conclusion

Once the genie is out of the bottle, it is hard to put back. That’s why we ask these questions about government demands for new high-tech surveillance powers, especially in the midst of a crisis. Has the government shown it would be effective? Would it do too much harm to our freedoms? Are there sufficient guardrails?

Author: Adam Schwartz

Harden Your Zoom Settings to Protect Your Privacy and Avoid Trolls

Whether you are on Zoom because your employer or school requires it or you just downloaded it to stay in touch with friends and family, people have rushed to the video chat platform in the wake of COVID-19 stay-at-home orders—and journalists, researchers, and regulators have noticed its many security and privacy problems. Zoom has responded with a surprisingly good plan for next steps, but talk is cheap. Zoom will have to follow through on its security and privacy promises if it wants to regain users’ trust.

In the meantime, take these steps to harden your Zoom privacy settings and protect your meetings from “Zoombombing” trolls. The settings below are all separate, which means you don’t need to change them all, and you don’t need to change them in any particular order. Consider which settings make sense for you and the groups you communicate with, and do your best to make sure meeting organizers and participants are on the same page about settings and shared expectations.

Privacy Settings

Make Sure Chat Auto-Saving Is Off

In your Zoom account settings under In Meeting (Basic), make sure Auto saving chats is toggled off to the left.

(Screenshot: the “Auto saving chats” setting toggled off to the left.)

Make Sure “Attention Tracking” Is Off

In your Zoom account settings under In Meeting (Advanced), make sure Attention tracking is toggled off to the left.

(Screenshot: the “Attention tracking” setting toggled off to the left.)

Use a Virtual Background

The space you’re in during a call can expose a lot of information about where you live, your habits, and your hobbies. If you’re uncomfortable having your living space in the background of your calls, set a virtual background. From the zoom.us menu in the top right corner of your screen while using Zoom, navigate to Preferences and then Virtual backgrounds.

Best Practices for Avoiding Trolls

With Zoom now more widely used than ever, the mechanics of its public meeting IDs have allowed bad actors to invade people’s meetings with harassment, slurs, and disturbing images. When you host a meeting, consider taking the steps below to protect yourself and your participants from this “Zoombombing.”

Bad actors can find your meeting in one of two ways: they can cycle through random meeting IDs until they find an active one, or they can take advantage of meeting links and invites that have been posted in public places, like Facebook groups, Twitter, or personal websites. So, protecting yourself boils down to controlling who can enter your meeting, and keeping your meeting IDs private.

Keep the Meeting ID Private

Whenever possible, do not post the link to your meeting or the meeting ID publicly. Send it directly to trusted people and groups instead. 

Set a Meeting Password, and Carefully Inspect the Meeting Link

In your Zoom account settings under Schedule Meeting, toggle Require a password when scheduling new meetings on to the right. You’ll find additional password options in this area of the settings as well.

(Screenshot: several password settings toggled on to the right.)

You can also set a password when scheduling a meeting from the Zoom desktop app by checking the “Require meeting password” checkbox.

BEWARE, however, that Zoom passwords can behave in unexpected ways. If you use the “Copy Invitation” functionality to copy the meeting link and send it to your participants, that link might include your meeting password. Look out for an unusually long URL with a question mark in it, which indicates it includes your meeting password.

If you plan to send the meeting link directly to trusted participants, having the password included in the link will be no problem—but if you want to post the meeting link in a Facebook group, on Twitter, or in another public space, then it means the password itself will also be public. If you need to publicize your event online, consider posting only the meeting ID, and then separately sending the password to vetted participants shortly before the meeting begins.

Lock Down Screen Sharing

In your Zoom account settings under In Meeting (Basic), set Screen sharing to Host Only. That means that when you are hosting a meeting, only you will be able to share your screen; other participants will not.

(Screenshot: the “Screen sharing” setting set to Host Only.)

Depending on the calls you plan to host, you can also turn screen sharing off entirely by toggling it off to the left.

Use Waiting Rooms to Approve Participants

In your Zoom account settings under In Meeting (Advanced), enable Waiting room by toggling it on to the right. A waiting room allows hosts to screen new participants before letting them join, which can help prevent disruptions or unexpected participants.

(Screenshot: the “Waiting room” setting toggled on to the right.)

Lock the Meeting

When you are actively in a meeting and all your expected participants have arrived, you can “lock” the meeting to prevent anyone else from joining. Click Participants at the bottom of the Zoom window, and select Lock Meeting.

Author: Gennie Gebhart

Automated Moderation Must be Temporary, Transparent and Easily Appealable

For most of us, social media has never been more crucial than it is right now: it’s keeping us informed and connected during an unprecedented moment in time. People have been using major platforms for all kinds of things, from following and posting news, to organizing aid—such as coordinating the donations of masks across international boundaries—to sharing tips on working from home to, of course, pure entertainment.

At the same time, the content moderation challenges faced by social media platforms have not disappeared—and in some cases have been exacerbated by the pandemic. In the past weeks, YouTube, Twitter, and Facebook have all made public statements about their moderation strategies at this time. While they differ in details, they all have one key element in common: the increased reliance on automated tools.

Setting aside the justifications for this decision—especially the likely concern that allowing content moderators to do that work from home may pose particular challenges to user privacy and moderator mental health—it will inevitably present problems for online expression. Automated technology doesn’t work well at scale; it can’t read nuance in speech the way humans can, and for some languages it barely works at all. Over the years, we’ve seen the use of automation result in numerous wrongful takedowns. In short: automation is not a sufficient replacement for having a human in the loop.

And that’s a problem, perhaps now more than ever when so many of us have few alternative outlets to speak, educate and learn. Conferences are moving online, schools are relying on online platforms, and individuals are tuning in to videos to learn everything from yoga to gardening. Likewise, platforms continue to provide space for vital information, be it messages from governments to people, or documentation of human rights violations.

It’s important to give credit where credit is due. In their announcements, YouTube and Twitter both acknowledged the shortcomings of artificial intelligence, and are taking that into account as they moderate speech. YouTube will not be issuing strikes on video content except in cases where they have “high confidence” that it violates their rules, and Twitter will only be issuing temporary suspensions—not permanent bans—at this time. For its part, Facebook acknowledged that it will be relying on full-time employees to moderate certain types of content, such as terrorism.

These temporary measures will help mitigate the inevitable over-censorship that follows from the use of automated tools.  But history suggests that protocols adopted in times of crisis often persist when the crisis is over. Social media platforms should publicly commit, now, that they will restore and expand human review as soon as the crisis has abated. Until then, the meaningful transparency, notice, and robust appeals processes called for in the Santa Clara Principles will be more important than ever.

Notice and Appeals: We know the content moderation system is flawed, and that it’s going to get worse before it gets better. So now more than ever, users need a way to get the mistakes fixed, quickly and fairly. That starts with clear and detailed notice of why content is taken down, combined with a simple, streamlined means of challenging and reversing improper takedown decisions.

Transparency: The most robust appeals process will do users little good if they don’t know why their content is taken down. Moreover, without good data, users and researchers cannot review whether the takedowns were fair, unbiased, proportional, and respectful of users’ rights, even subject to the exigencies of the crisis. That data should include how many posts were removed and how many accounts were permanently or temporarily suspended, for what reason, and at whose behest.

The Santa Clara Principles provide a set of baseline standards to which all companies should adhere. But as companies turn to automation, they may not be enough. That’s why, over the coming months, we will be engaging with civil society and the public in a series of consultations to expand and adapt these principles. Watch this space for more on that process.

Finally, platforms and policymakers operating in the EU should remember that using automation for content moderation may undermine user privacy. Often, automated decision-making will be based on the processing of users’ personal data. As noted, however, automated content removal systems do not understand context, are notoriously inaccurate and prone to overblocking. The GDPR provides users with a right not to be subject to significant decisions that are based solely on automated processing of data (Article 22). While this right is not absolute, it requires safeguarding user expectations and freedoms. 

Author: Jillian C. York

The EARN IT Act Violates the Constitution

Since senators introduced the EARN IT Act (S. 3398) in early March, EFF has called attention to the many ways in which the bill would be a disaster for Internet users’ free speech and security.

We’ve explained how the EARN IT Act could be used to drastically undermine encryption. Although the bill doesn’t use the word “encryption” in its text, it gives government officials like Attorney General William Barr the power to compel online service providers to break encryption or be exposed to potentially crushing legal liability.

The bill also violates the Constitution’s protections for free speech and privacy. As Congress considers the EARN IT Act—which would require online platforms to comply with to-be-determined “best practices” in order to preserve certain protections from criminal and civil liability for user-generated content under Section 230 (47 U.S.C. § 230)—it’s important to highlight the bill’s First and Fourth Amendment problems.

First Amendment

As we explained in a letter to Congress, the EARN IT Act violates the First Amendment in several ways.

1. The bill’s broad categories of “best practices” for online service providers amount to an impermissible regulation of editorial activity protected by the First Amendment.

The bill’s stated purpose is “to prevent, reduce, and respond to the online sexual exploitation of children.” However, it doesn’t directly target child sexual abuse material (CSAM, also referred to as child pornography) or child sex trafficking ads. (CSAM is universally condemned, and there is a broad framework of existing laws that seek to eradicate it, as we explain in the Fourth Amendment section below).

Instead, the bill would allow the government to go much further and regulate how online service providers operate their platforms and manage user-generated content—the very definition of editorial activity in the Internet age. Just as Congress cannot pass a law demanding news media cover specific stories or present the news a certain way, it similarly cannot direct how and whether online platforms host user-generated content.

2. The EARN IT Act’s selective removal of Section 230 immunity creates an unconstitutional condition.

Congress created Section 230 and, therefore, has wide authority to modify or repeal the law without violating the First Amendment (though as a policy matter, we don’t support that). However, the Supreme Court has said that the government may not condition the granting of a governmental privilege on individuals or entities doing things that amount to a violation of their First Amendment rights.

Thus, Congress may not selectively grant Section 230 immunity only to online platforms that comply with “best practices” that interfere with their First Amendment right to make editorial choices regarding their hosting of user-generated content.

3. The EARN IT Act fails strict scrutiny.

The bill seeks to hold online service providers responsible for a particular type of content and the choices they make regarding user-generated content, and so it must satisfy the strictest form of judicial scrutiny.

Although the content the EARN IT Act seeks to regulate is abhorrent and the government’s interest in stopping the creation and distribution of that content is compelling, the First Amendment still requires that the law be narrowly tailored to address those weighty concerns. Yet, given the bill’s broad scope, it will inevitably force online platforms to censor the constitutionally protected speech of their users.

Fourth Amendment

The EARN IT Act violates the Fourth Amendment by turning online platforms into government actors that search users’ accounts without a warrant based on probable cause.

The bill states, “Nothing in this Act or the amendments made by this Act shall be construed to require a provider of an interactive computer service to search, screen, or scan for instances of online child sexual exploitation.” Nevertheless, given the bill’s stated goal to, among other things, “prevent” online child sexual exploitation, it’s likely that the “best practices” will effectively coerce online platforms into proactively scanning users’ accounts for content such as CSAM or child sex trafficking ads.

Contrast this with what happens today: if an online service provider obtains actual knowledge of an apparent or imminent violation of anti-child pornography laws, it’s required to make a report to the National Center for Missing and Exploited Children’s (NCMEC) CyberTipline. NCMEC then forwards actionable reports to the appropriate law enforcement agencies.

Under this current statutory scheme, an influential decision by the U.S. Court of Appeals for the Tenth Circuit, written by then-Judge Neil Gorsuch, held that NCMEC is not simply an agent of the government; it is a government entity established by an act of Congress with unique powers and duties that are granted only to the government.

On the other hand, courts have largely rejected arguments that online service providers are agents of the government in this context. That’s because, as the government argues, companies voluntarily scan their own networks for private purposes, namely to ensure that their services stay safe for all users. Thus, courts typically rule that these scans are “private searches” that are not subject to the Fourth Amendment’s warrant requirement. Under this doctrine, NCMEC and law enforcement agencies also do not need a warrant to view users’ account content already searched by the companies.

However, the EARN IT Act’s “best practices” may effectively coerce online platforms into proactively scanning users’ accounts in order to keep the companies’ legal immunity under Section 230. Not only would this result in invasive scans that risk violating all users’ privacy and security, companies would arguably become government agents subject to the Fourth Amendment. In analogous cases, courts have found private parties to be government agents when the “government knew of and acquiesced in the intrusive conduct” and “the party performing the search intended to assist law enforcement efforts or to further his own ends.”

Thus, to the extent that online service providers scan users’ accounts to comply with the EARN IT Act, and do so without a probable cause warrant, defendants would have a much stronger argument that these scans violate the Fourth Amendment. Given Congress’ goal of protecting children from online sexual exploitation, it should not risk the suppression of evidence by effectively coercing companies to scan their networks.

Next Steps

Presently, the EARN IT Act has been introduced in the Senate and assigned to the Senate Judiciary Committee, which held a hearing on March 11. The next step is for the committee to consider amendments during a markup proceeding (though given the current state of affairs it’s unclear when that will be). We urge you to contact your members of Congress and ask them to reject the bill.

Take Action

PROTECT OUR SPEECH AND SECURITY ONLINE

Author: Sophia Cope

Victory! Federal Circuit Enables Public to Hear Arguments In Important Patent Case

Just like us, federal judges are continuing to grapple with the challenges of COVID-19 and its impact on their ability to do their jobs. Less than two weeks ago, the U.S. Court of Appeals for the Federal Circuit in Washington, D.C. announced that April’s oral arguments in our case would take place telephonically or not at all. Since that time, the court has cancelled arguments for a substantial number of cases on its calendar, but EFF’s argument on behalf of the public’s right to access court documents in patent cases is among those the Court has scheduled for telephonic argument.


Before the court ruled out in-person argument, EFF had filed a motion asking the court to make video of the oral argument public so that people unable to travel to the Washington, D.C., courtroom could see the argument too. The motion for video access was, of course, denied when in-person arguments were cancelled. But today, the Federal Circuit embraced EFF’s push for live access to oral argument, announcing that it will provide “media and public access to the live audio of each panel scheduled for argument during the April 2020 session.”

This is the first time the Federal Circuit has provided the public and press with access to oral argument audio in real time. It will ensure that during the outbreak, the public and press do not altogether lose the ability to access court proceedings as they happen. We commend the Court for taking this crucial step to enhance public access. And we are deeply grateful to the court staff working to make sure that arguments can proceed and be heard by members of the press and public alike.

Whatever challenges lie ahead, courts must ensure that their proceedings remain as accessible to the public as possible. We hope this precedent cements the public’s right to remotely access oral arguments in real time, and paves the way for greater public access in the future.

Author: Alex Moss

Speaking Freely: Sandra Ordoñez

Sandra (Sandy) Ordoñez is dedicated to protecting women being harassed online. Sandra is an experienced community engagement specialist, a proud NYC Latina resident of Sunset Park Brooklyn, and a recipient of Fundación Carolina’s Hispanic Leadership Award. She is also a long-time diversity and inclusion advocate, with extensive experience incubating and creating FLOSS and Internet Freedom community tools.

These commitments and principles drive Sandra’s work as the co-founder and Director of the Internet Freedom Festival (IFF) at Article19. Even before launching the Internet Freedom Festival, Sandra was helping to grow and diversify the global Internet Freedom community. As their inaugural Director of Community and Outreach, Sandra led the creation of Open Technology Fund’s (OTF) Community Lab. Before her time at OTF, Sandra was Head of Communications and Outreach at OpenITP where she supported the community behind FLOSS anti-surveillance and anti-censorship tools. She also served as the first Communications Director for the Wikimedia Foundation.

As a researcher, Sandra has conducted over 400 expert interviews on the future of journalism, and conducted some of the first research on how Search Engine Optimization (SEO) reinforces stereotypes. She also provides consultation on privacy-respecting community marketing, community building, organizational communication, event management, program design, and digital strategy, all while serving on the boards of the Open Technology Fund, Trollbusters, and Equality Labs.

In recent months Facebook, and others, have proposed the creation of oversight boards to set content moderation policies internationally. In the US, the fight to protect free expression has taken on a new urgency with Senators Graham and Blumenthal introducing the EARN IT Act, a bill that, if enacted, would erode critical free speech protections and create a government commission with the power to codify “best practices,” with criminal and civil liability for platforms that fail to meet them. With these committees in mind, I was eager to speak with Sandy about how these proposals would impact communities that are often the most directly affected, and the last consulted.

Nathan “nash” Sheard: What does free speech mean to you?

Oh, that’s a good one. Free speech, to me, means the ability to share your thoughts, your analysis of things, your experience as a human being. Your experience can be anything from what you lived, to the challenges that you’re facing, to your goals and hopes for the future.

The reason I’m wording it that way is because it really bothers me how free speech is being used recently. Hate speech, for me, is not free speech. I think that’s why I’m phrasing it that way because I really think that the idea of free speech is to not censor people. To be able to express ideas and experiences and things like that. But it does not constitute an opportunity to basically hate against others or bring people down.

My partner is a philosophy professor, so I think of it in relation to critical thinking. I think of creating spaces that allow people to be honest and truthful, but towards improving society, not making it worse.

nash: What are your thoughts on responsible ways to navigate concerns around censorship and speech that is hateful or incites harm?

If I had that answer, I think I would be a billionaire right now. I think there’s a very clear distinction, when folks are trying to debate, or share information, or when they’re attacking another group of people. From my perspective, when speech incites violence or hatred against another group of people, that’s not free speech.

Nowadays, because of the context, because of the situation that we’re living in, ensuring that speech doesn’t lead to violence is really important. I think that a lot of times, cultivating healthy communities, whether it’s local advocacy, parents, professional or society in general, it requires not just having these debates about what exactly is free speech, but really about investing more resources and energy in creating tools that allow us to create safe spaces for folks. Once you have a safe space where people feel protected, and there’s rules that each community is able to create for themselves, you know what’s acceptable and not acceptable.

You can have free speech without hurting others’ rights, so I don’t think it’s a question of semantics. I think it’s a question of shifting our culture to create safer spaces.

nash: What do you think about the control that corporations have over deciding what those parameters look like right now?

I think they’re doing a horrible job. In fact, it gets me really angry because a lot of the corporations that are dominating these conversations have more resources than others. So, I think for them, really, they need to have a wakeup call. Communities have to start shifting resources into creating safe, healthy spaces. These corporations have to do that as well. It’s kind of like diversity and inclusion, right? Corporations may have diversity and inclusion initiatives but that doesn’t mean they really cause change or really care. Same for other areas. It feels as though the safety of people and community health is always the last thing they consider.

So, I think that if they’re going to be leaders. If they’re creating these tools, or spaces, that they want so many people to use, they have a social responsibility to make sure that what they’re offering is not going to harm society. That it’s going to protect society. So, I think it’s really about them readjusting where and how they’re spending resources. Obviously, it’s a complex question like it’s a complex problem, but it’s not impossible. In fact, it’s very, very, possible. But, it requires intent and resources.

nash: Are there steps that we as folks engaged with technology the way we are—and in the technology space with the intent to empower communities and users—should be taking to help reclaim that power for the users and for communities, rather than leaving those decisions to be made within the silos of corporate power?

I mean, more and more people, rightfully so, are pushing for more community-owned infrastructures. Ideas on what that will look like are really diverse, and it’s really going to depend on the community. Some folks will advocate for mesh networks, others will advocate for alternatives to Facebook and Twitter.

I really think it’s up to the community to decide what that looks like. We need to start brainstorming along with these communities, and finding ways to shift how we’ve done tech. In the past, a lot of folks kind of had an idea, and then they started working on the idea, and then whoever used that tool, or not, that was the end of it. I think now we really have to—especially if we care about movements and the users right to privacy and security—we really need to start working more hand in hand with not just users, but specific communities, to basically help them and empower them with options of what they can use. And, also, empower them to make decisions for themselves. That’s a really hard shift. I feel like in some ways we’re going towards that direction, but it’s going to take a kind of reprogramming of how we do things. There are so many things baked into that—power structures and how we relate to the world, and how others relate to us. I do think that investing more in brainstorming, in conjunction with communities, of what the possibilities are, is a really good first step.

nash: Some in the platform moderation community are looking to committees that would decide what is acceptable. We should obviously be exploring many different kinds of ideas. Still, I get concerned with the idea that a committee of folks who might exist and move around the world in one context will be making decisions that will affect folks in completely different contexts that they might not be able to relate with. Do you have thoughts on that strategy?

It’s a really broad question, and it’s hard because I think there are different situations that require different measures. But what I would say is ‘localize, localize, localize’. If you have moderators that are looking over content, you have to make sure you have a variety of moderators from different cultural backgrounds, different languages, different socioeconomic backgrounds, so they really understand what’s happening.

This problem requires a lot more investment, and understanding that the world is very diverse. You can’t just solve it with one thing, so that means investing in the actual communities that you’re serving. Using India as an example—India is a country that has multiple languages, multiple groups. That means that your moderation group, or your employees that you hire to do moderation, are probably going to have to be plentiful and just as diverse. The bigger tech companies may realize how much money actually is required to solve the problem, and may not feel ready to do that. But the problem is that while they’re questioning what to do, or coming up with what they think is a simple solution to a very complex problem, it’s going to impact more and more humans.

nash: I get concerned that we won’t be able to move in that direction and not create a scenario where only Facebook or Twitter have the funds to execute these schemes effectively. That we’ll set up guidelines that say you must do x or y, and then in doing so we inadvertently lock in Facebook and Twitter as the only platforms that can ever exist, because they’re the only ones with the funds to comply effectively.

Like I said, it’s a very complex problem. You know, when the Internet first started, it was almost like tons and tons of small communities everywhere that weren’t necessarily connected to other folks around the world. Then we moved to this phase where we are now. On these large global platforms like Facebook and Twitter and Instagram where we’re connected globally. I can find anybody in the world through Facebook. I think that we’re going to start seeing a shift to more local groups again and mesh networks. Or, whatever that may be for your community.

I think a lot of these companies are going to see a decrease in users. Lots of people in my life that don’t work in tech are ready. They’re just overwhelmed with the use of these platforms because it really has impacted their real human interactions. They’re looking for ways to really connect to the communities that they’re part of. That doesn’t mean that you won’t be able to connect to folks you know across the globe, but communities can mean many different things. Communities can mean the people that live in your neighborhood, or it could be colleagues that are interested in the same topic that you’re interested in.

The issue is that it’s a more complex problem than just Facebook or Twitter, and honestly it really just requires rethinking how we are bringing people together. How are we creating safe spaces? How are we defining community? How do you bring them together? How do you make sure they’re okay? It’s a rethinking. The Internet’s not that old. Right? And so it’s not surprising that in like 2020, I think we’re in 2020, that we’re starting to reconfigure how we actually want that to impact our society.

nash: I really appreciate your thoughtfulness here.  Do you have any final words you would like to offer?

This is really an important time for everybody to get involved in their communities. I just see how tired people are. And we really need to build more capacity. So, whatever people can do. If they’re interested in supporting an open Internet where people are secure and protected they really really really need to start supporting the folks that are doing work, because people are really really tired and we need to build capacity, not just in existing communities but we have to build capacity where capacity doesn’t exist. Going back to what you were saying before about platform accountability. Creating a group is not going to solve it. We need to invest in people and invest in people that can help us shift culture. That’s it.

nash: thank you, so much.

Author: Nathan Sheard

Vallejo Must Suspend Cell-Site Simulator Purchase


As Bay Area residents sheltered at home due to the COVID-19 pandemic, the Vallejo City Council assembled via teleconference last week to vote on the purchase of one of the most controversial pieces of surveillance equipment—a cell-site simulator. What’s worse is that the city council approved the purchase in violation of state law regulating the acquisition of such technology. 

Any decision to acquire this technology must happen in broad daylight, not at a time when civic engagement faces the greatest barriers in modern history due to a global pandemic. EFF has submitted a letter to the Vallejo mayor and city council asking the city to suspend the purchase and hold a fresh hearing once the COVID-19 emergency has passed and state and local officials lift the shelter-at-home restrictions. 

A cell-site simulator (also referred to as an IMSI catcher or “Stingray”) pretends to act as a cell tower in order to surveil and locate cellular devices that connect to it. After borrowing such a device from another agency, the Vallejo Police Department argued it needed its own, and proposed spending $766,000 on cell-site simulator devices from KeyW Corporation, along with a vehicle in which police would install it. 

As EFF told the council, the privacy and civil liberties concerns around cell-site simulators “have triggered Congressional investigations, high-profile legal challenges, a Federal Communications Commission complaint, and an immense amount of critical media coverage.” To combat secrecy around cell-site simulators, the California legislature passed a law in 2015 that prohibits local government agencies from acquiring cell-site simulators without the local governing body approving a privacy and usage policy that “is consistent with respect for an individual’s privacy and civil liberties.” This policy needs to be available to the public, published online, and voted on during an open hearing.

As Oakland Privacy—a local ally in the Electronic Frontier Alliance—pointed out in its own letter, no such policy was presented or approved at the hearing. EFF further notes that the city council did, however, approve a non-disclosure agreement with the cell-site simulator seller, KeyW Corporation, that could hinder the public’s right to access information.

The Vallejo City Council must follow the law and put the cell-site simulator on the shelf. 

Read EFF’s letter to the Vallejo City Council here. 

Author: Dave Maass

EFF to Supreme Court: Losing Your Phone Shouldn’t Mean You Lose Your Fourth Amendment Rights

You probably know the feeling: you reach for your phone only to realize it’s not where you thought it was. Total panic quickly sets in. If you’re like me (us), you don’t stop in the moment to think about why losing a phone is so scary. But the answer is clear: In addition to being an expensive gadget, all your private stuff is on there.  

Now imagine that the police find your phone. Should they be able to look through all that private stuff without a warrant? What if they believe you intentionally “abandoned” it? Last week, EFF filed an amicus brief in Small v. United States asking the Supreme Court to take on these questions.

In Small, police pursued a robbery suspect in a high-speed car chase near Baltimore, ending with a dramatic crash through the gates of the NSA’s campus in Fort Meade, Maryland. The suspect left his car, and officers searched the area. They quickly found some apparently discarded clothing, but many hours later they also found a cell phone on the ground, over a hundred feet from the clothing and the car. Despite the intervening time and the distance from the other items, the police believed that the phone also belonged to their suspect. So they looked through it and called one of the stored contacts, who eventually led them to the defendant, Mr. Small.

The Fourth Circuit Court of Appeals upheld this warrantless search of Small’s phone under the Fourth Amendment’s “abandonment doctrine.” This rule says that police don’t need a warrant to search and seize property that is abandoned, as determined by an objective assessment of facts known to the police at the time. Mr. Small filed a petition for certiorari, asking the Supreme Court to review the Fourth Circuit’s decision.

EFF’s brief in support of Small’s petition argues police shouldn’t be able to search a phone they find separated from its owner without a warrant. That’s because phones have an immense storage capacity, allowing people to carry around a comprehensive record of their lives stored on their phones. And if you’ve ever experienced that panicky feeling when you can’t find your phone, you know that, despite their intimate contents, phones are all too easy to lose. Even where someone truly chooses to abandon a phone, such as when they turn in an old phone to upgrade to a new one, they probably don’t intend to abandon any and all data that phone can store or access from the Internet—think of cloud storage, social media accounts, and the many other files accessible from your phone, but not actually located there. As a result, we argue phones are unlike any other object that individuals might carry with them and subsequently lose or even voluntarily abandon. Even when it’s arguable that the owner “abandoned” their cell phone, rather than simply misplacing it, police should be required to get a warrant to search it.

If this reasoning all sounds familiar, it’s because the Supreme Court relied on it in a landmark case involving the warrantless search of phones all the way back in 2014, in Riley v. California. Riley involved the warrantless searches of phones found on suspects during lawful arrests. Even though police can search items in a suspect’s pockets during an arrest to avoid destruction of evidence and identify any danger to the officers, the Court recognized in its opinion that phones are different: “Modern cell phones are not just another technological convenience. With all they contain and all they may reveal, they hold for many Americans ‘the privacies of life.’”  In a unanimous decision by Chief Justice Roberts, the Court wrote, “Our answer to the question of what police must do before searching a cell phone seized incident to an arrest is accordingly simple — get a warrant.”

Even though the warrant rule in Riley seemed clear and broadly applicable, the lower court in Small ruled it was limited to searches of phones found on suspects during an arrest. That’s not only a misreading of everything the Supreme Court said in Riley about why phones are different than other personal property, it’s also a bad rule that creates terrible incentives for law enforcement. It encourages warrantless searches of unattended phones, which are especially likely to lead to trawling through irrelevant and sensitive personal information.

Losing a phone is scary enough; we shouldn’t have to worry that it also means the government has free rein to look through it. We hope the Supreme Court agrees, and grants review in Small. A decision on the petition is expected by June.

Author: Andrew Crocker