Under the bipartisan Protecting Data at the Border Act, border officers would be required to get a warrant before searching a traveler’s electronic device. Last month, the bill was reintroduced in the U.S. Senate by Sen. Ron Wyden (D-Ore.) and Sen. Rand Paul (R-Ky.). It is co-sponsored by Sen. Ed Markey (D-Mass.) and Sen. Jeff Merkley (D-Ore.), and the House companion bill is co-sponsored by Rep. Ted Lieu (D-Calif.).
The rights guaranteed by the U.S. Constitution don’t fade away at the border. And yet the Department of Homeland Security (DHS) asserts the power to freely search the electronic devices of travelers before allowing them entrance into, or exit from, the United States. This practice will end if Congress passes the Protecting Data at the Border Act.
Think about all of the things your cell phone or laptop could tell a stranger about you. Modern electronic devices can reveal your romantic and familial connections, daily routines, and financial standing. Ordinarily, law enforcement cannot obtain this sensitive information without a warrant signed by a judge based on probable cause. But DHS claims it needs no suspicion at all to search and seize this information at the border.
The bill does much more to protect digital liberty at the border. It would protect free speech by preventing federal agents from requiring a person to reveal their social media handles, usernames, or passwords. No one crossing the U.S. border should fear that a tweet critical of ICE or CBP will complicate their travel plans.
The bill also blocks agents from denying entry to or exit from the United States to any U.S. person who refuses to disclose digital account information, the contents of social media accounts, or access to electronic equipment. Further, the bill would prevent border agencies from holding any U.S. person for more than four hours in pursuit of consensual access to online accounts or the information on electronic equipment. It would also prevent the retention of travelers’ private information absent probable cause—a protection that is increasingly important after CBP admitted this week that photographs of almost 100,000 travelers’ faces and license plates were stolen from a federal subcontractor. Can we really trust this agency to securely retain our text messages and phone camera rolls?
The bill has teeth. It forbids materials gathered in violation of the Act from being used as evidence in court, including in immigration hearings.
More than ever before, our devices hold all sorts of personal and sensitive information about us, and this bill would be an important step forward in recognizing and protecting that information. Congress should pass the Protecting Data at the Border Act.
To learn more, check out EFF’s pages on how you can protect your privacy when you travel, on our lawsuit challenging warrantless border searches of travelers’ devices, and on our support for the original version of this bill.
Author: Matthew Guariglia
San Francisco—The Electronic Frontier Foundation asked a federal appeals court today to make public a ruling that reportedly forbade the Justice Department from forcing Facebook to break the encryption of a communications service for users.
Media widely reported last fall that a federal court in Fresno, California denied the government’s effort to compromise the security and privacy promised to users of Facebook’s Messenger application. But the court’s order and details about the legal dispute have been kept secret, preventing people from learning about how DOJ sought to break encryption, and why a federal judge rejected those efforts.
EFF, the ACLU, and Stanford cybersecurity scholar Riana Pfefferkorn told the appeals court in a filing today that the public has First Amendment and common law rights to access judicial opinions and court records about the laws that govern us. Unsealing documents in the Facebook Messenger case is especially important because the public deserves to know when law enforcement tries to compel a company that hosts massive amounts of private communications to circumvent its own security features and hand over users’ private data, EFF said in a filing to the U.S. Court of Appeals for the Ninth Circuit. ACLU and Pfefferkorn, Associate Director of Surveillance and Cybersecurity at Stanford University’s Center for Internet and Society, joined EFF’s request to unseal. A federal judge in Fresno denied a motion to unseal the documents, leading to this appeal.
Media reports last year revealed DOJ’s attempt to get Facebook to turn over customer data and unencrypted Messenger voice calls based on a wiretap order in an investigation of suspected MS-13 gang activity. Facebook refused the government’s request, leading DOJ to try to hold the company in contempt. Because the judge’s ruling denying the government’s request is entirely under seal, the public has no way of knowing how the government tried to justify its request or why the judge turned it down—both of which could impact users’ ability to protect their communications from prying eyes.
“The ruling likely interprets the scope of the Wiretap Act, which impacts the privacy and security of Americans’ communications, and it involves an application used by hundreds of millions of people around the world,” said EFF Senior Staff Attorney Andrew Crocker. “Unsealing the court records could help us understand how this case fits into the government’s larger campaign to make sure it can access any encrypted communication.”
In 2016 the FBI attempted to force Apple to disable security features of its mobile operating system to allow access to a locked iPhone belonging to one of the shooters alleged to have killed 14 people in San Bernardino, California. Apple fought the order, and EFF supported the company’s efforts. Eventually the FBI announced that it had received a third-party tip with a method to unlock the phone without Apple’s assistance. We believed that the FBI’s intention with the litigation was to obtain legal precedent that it could compel Apple to sabotage its own security mechanisms.
“The government should not be able to rely on a secret body of law for accessing encrypted communications and surveilling Americans,” said EFF Staff Attorney Aaron Mackey. “We are asking the court to rule that every American has a right to know about rules governing who can access their private conversations.”
Author: Karen Gullo
It should be clear now that messing around with Section 101 of the Patent Act is a bad idea. A Senate subcommittee has just finished hearing testimony about a bill that would wreak havoc on the patent system. Dozens of witnesses have testified, including EFF Staff Attorney Alex Moss. Alex’s testimony [PDF] emphasized EFF’s success in protecting individuals and small businesses from threats of meritless patent litigation, thanks to Section 101.
Section 101 is one of the most powerful tools patent law provides for defending against patents that never should have been issued in the first place. We’ve written many times about small businesses that were saved because the patents being used to sue them were thrown out under Section 101, especially following the Supreme Court’s Alice v. CLS Bank decision. Now, the Senate IP subcommittee is considering a proposal that would eviscerate Section 101, opening the door to more stupid patents, more aggressive patent licensing demands, and more litigation threats from patent trolls.
Three days of testimony have made it clear that we’re far from alone in seeing the problems in this bill. Patents that would fail today’s Section 101 aren’t necessary to promote innovation. We’ve written about how the proposal, by Senators Thom Tillis and Chris Coons, would create a field day for patent trolls wielding abstract software patents. Here, we’ll take a look at a few of the other potential effects of the proposal, none of them good.
The ACLU, together with 169 other civil rights, medical, and scientific groups, has sent a letter to the Senate Judiciary Committee explaining that the draft bill would open the door to patents on human genes.
The bill sponsors have said they don’t intend to allow for patents on the human genome. But as currently written, the draft bill would do just that. The bill explicitly overrules recent Supreme Court rulings that prevent patents on things that occur in nature, like cells in the human body. Those protections were made explicit in the 2013 Myriad decision, which held that Section 101 bars patents on genes as they occur in the human body. A Utah company called Myriad Genetics had monopolized tests on the BRCA1 and BRCA2 genes, which can be used to determine a person’s likelihood of developing breast or ovarian cancer. Myriad said that because its scientists had identified and isolated the genes from the rest of the human genome, it had invented something that warranted a patent. The Supreme Court disagreed, holding that DNA is a product of nature and “is not patent eligible merely because it has been isolated.”
Once Myriad couldn’t enforce its patents, competitors offering diagnostic screening for breast and ovarian cancer could, and did, enter the market immediately, charging just a fraction of what Myriad’s test cost. Myriad’s patent did not claim to invent any of the technology actually used to perform the DNA analysis or isolation, which was available before and apart from Myriad’s gene patents.
It’s just one example of how Section 101 protects innovation and enhances access to medicine, by prohibiting monopolies on things no person could have invented.
Starting around the late 1990s, the Federal Circuit opened the door to broad patenting of software.
“The problem of patent trolls grew to epic proportions,” Stanford Law Professor Mark Lemley told the Senate subcommittee last week. “One of the things that brought it under control was the Alice case and Section 101.”
A representative of the National Retail Federation (NRF) explained how, before Alice, small Main Street businesses were subject to constant litigation brought by “non-practicing entities,” also known as patent trolls. Patent trolls are not a thing of the past—even after Alice, the majority of patent lawsuits continue to be filed by non-practicing entities.
“Our members are a target-rich environment for those with loose patent claims,” NRF’s Stephanie Martz told the subcommittee.
She went on to give examples of patents that were rightfully invalidated under Section 101, like a patent for posting nutrition information and picture menus online, which was used to sue Whataburger, Dairy Queen, and other chain restaurants—more than 60 cases in all. A patent for an online shopping cart was used to sue candy shops and 1-800-Flowers. And a patent for online maps showing properties in a particular area was used to sue Realtors and homeowners [PDF], leading to decades of litigation.
The Alice decision didn’t end such cases, but it did make it much easier to fight back. As Martz explained, since Alice, the cost of litigation has gone down between 40 and 45 percent.
The sponsors of the draft legislation have made it clear they intend to overturn Alice. That will take us back to a time not so long ago, when small businesses had to pay unjustified licensing fees to patent trolls, or face the possibility of multimillion-dollar legal bills to fight off wrongly issued patents.
The High Tech Inventors Alliance (HTIA), a group of large technology companies, also spoke against the current draft proposal.
The proposal “would allow patenting of business methods, fundamental scientific principles, and mathematical equations, as long as they were performed on a computer,” said David Jones, representing HTIA. “A more stringent test is needed, and perhaps even required by the Constitution.”
Jones also cited recent research showing that the availability of business method patents actually lowered R&D among firms that sought those patents. After Alice limited their availability, the same companies that had been seeking those patents stopped doing so, and increased their research and development budgets.
The current legal test for patents is not arbitrary or harmful to innovation, Jones argued. On the contrary, the Alice-Mayo framework “has improved patent clarity and decreased spurious litigation.”
EFF’s Alex Moss also disagreed that the current case law was “a mess” or “confusing.” Rather than throw out decades of case law, she urged Congress to look to history to consider changes that could actually point the patent system towards promoting progress.
“In the 19th century, when patent owners wanted to get a term extension, they would come to Congress and bring their accounting papers, and say—look how much we invested,” Moss explained. “I’d like to see that practical element, to make sure our patent system is promoting innovation—which is its job under the Constitution—and not just a proliferation of patents.”
At the conclusion of testimony, Sen. Tillis said that he and Sen. Coons will take the testimony into account as they work toward a bill that could be introduced as early as next month. We hope the Senators will begin to consider proposals that could improve the patent system, rather than open the door to the worst kinds of patents. In the meantime, please tell your members of Congress that the proposed bill is not the right solution.
TELL CONGRESS WE DON’T NEED MORE BAD PATENTS
Author: Joe Mullin
San Francisco and Tunis, Tunisia—While social media platforms are increasingly giving users the opportunity to appeal decisions to censor their posts, very few platforms comprehensively commit to notifying users that their content has been removed in the first place, raising questions about their accountability and transparency, the Electronic Frontier Foundation (EFF) said today in a new report.
How users are supposed to challenge content removals that they’ve never been told about is among the key issues illuminated by EFF in the second installment of its Who Has Your Back: Censorship Edition report. The paper comes amid a wave of new government regulations and actions around the world meant to rid platforms of extremist content. But in response to calls to remove objectionable content, social media companies and platforms have all too often censored valuable speech.
EFF examined the content moderation policies of 16 platforms and app stores, including Facebook, Twitter, the Apple App Store, and Instagram. Only four companies—Facebook, Reddit, Apple, and GitHub—commit to notifying users when any content is censored and specifying the legal request or community guideline violation that led to the removal. While Twitter notifies users when tweets are removed, it carves out an exception for tweets related to “terrorism,” a class of content that is difficult to accurately identify and can include counter-speech or documentation of war crimes. Notably, Facebook and GitHub were found to have more comprehensive notice policies than their peers.
“Providing an appeals process is great for users, but its utility is undermined by the fact that users can’t count on companies to tell them when or why their content is taken down,” said Gennie Gebhart, EFF associate director of research, who co-authored the report. “Notifying people when their content has been removed or censored is a challenge when your users number in the millions or billions, but social media platforms should be making investments to provide meaningful notice.”
In the report, EFF awarded stars in six categories, including transparency reporting of government takedown requests, providing meaningful notice to users when content or accounts are removed, allowing users to appeal removal decisions, and public support of the Santa Clara Principles, a set of guidelines for speech moderation based on a human rights framework. The report was released today at the RightsCon summit on human rights in the digital age, held in Tunis, Tunisia.
Reddit leads the pack with six stars, followed by Apple’s App Store and GitHub with five stars, and Medium, Google Play, and YouTube with four stars. Facebook, Reddit, Pinterest, and Snap each improved their scores over the past year since our inaugural censorship edition of Who Has Your Back in 2018. Nine companies meet our criteria for transparency reporting of takedown requests from governments, and 11 have appeals policies, but only one—Reddit—discloses the number of appeals it receives. Reddit also takes the extra step of disclosing the percentage of appeals resolved in favor of or against the appeal.
Importantly, 12 companies are publicly supporting the Santa Clara Principles, which outline a set of minimum content moderation policy standards in three areas: transparency, notice, and appeals.
“Our goal in publishing Who Has Your Back is to inform users about how transparent social media companies are about content removal and encourage improved content moderation practices across the industry,” said EFF Director of International Free Expression Jillian York. “People around the world rely heavily on social media platforms to communicate and share ideas, including activists, dissidents, journalists, and struggling communities. So it’s important for tech companies to disclose the extent to which governments censor speech, and which governments are doing it.”
For the report:
For more on platform censorship:
Author: Karen Gullo
For decades, journalists, activists and lawyers who work on human rights issues around the world have been harassed, and even detained, by repressive and authoritarian regimes seeking to halt any assistance they provide to human rights defenders. Digital communication technology and privacy-protective tools like end-to-end encryption have made this work safer, in part by making it harder for governments to target those doing the work. But that has led to technologists building those tools being increasingly targeted for the same harassment and arrest, most commonly under overbroad cybercrime laws that cast suspicion on even the most innocent online activities.
Right now, that combination of misplaced suspicion, and arbitrary detention under cyber-security regulations, is being played out in Ecuador. Ola Bini, a Swedish security researcher, is being detained in that country under unsubstantiated accusations, based on an overbroad reading of the country’s cybercrime law. This week, we submitted comments to the Office of the U.N. High Commissioner for Human Rights (OHCHR) and the Inter-American Commission on Human Rights (IACHR) for their upcoming 2019 joint report on the situation of human rights defenders in the Americas. Our comments focus on how Ola Bini’s detainment is a flagship case of the targeting of technologists, and dangers of cyber-crime laws.
While the pattern of demonizing benign uses of technology is global, EFF has noted its rise in the Americas in particular. Our 2018 report, “Protecting Security Researchers’ Rights in the Americas,” was created in part to push back against ill-defined, broadly interpreted cybercrime laws. It also promotes standards that lawmakers, judges, and most particularly the Inter-American Commission on Human Rights might use to protect the fundamental rights of security researchers, and ensure the safe and secure development of the Internet and digital technology in the Americas and across the world.
We noted that these laws fail in several ways. First, they don’t meet the requirements established by the Inter-American human rights standards, which bar restricting rights through vague criminal laws: vague and ambiguous criminal provisions are an impermissible basis for restricting the rights of a person.
These criminal provisions also fail to require malicious intent (mens rea) or actual damage, turning general behaviors into strict liability crimes. That means they can chill the free expression of security researchers, since they can be interpreted broadly by prosecutors seeking to target individuals.
For instance, Ola Bini is currently being charged under Article 232 of the Ecuadorian Criminal Code:
Any person who destroys, damages, erases, deteriorates, alters, suspends, blocks, causes malfunctions, unwanted behavior or deletes computer data, e-mails, information processing systems, telematics or telecommunications from all or parts of its governing logical components shall be liable to a term of imprisonment of three to five years, or:
Designs, develops, programs, acquires, sends, introduces, executes, sells or distributes in any way, devices or malicious computer programs or programs destined to cause the effects indicated in the first paragraph of this article, or:
Destroys or alters, without the authorization of its owner, the technological infrastructure necessary for the transmission, reception or processing of information in general.
If the offense is committed on computer goods intended for the provision of a public service or linked to public safety, the penalty shall be five to seven years’ deprivation of liberty.
Bini’s case highlights a consistent problem with cybercrime laws: the statute can be interpreted in such a way that any software that could be misused creates criminal liability for its creator; indeed, potentially more liability than falls on those who actually conduct malicious acts. This allows misguided prosecutions against human rights defenders to proceed on the basis that the code created by technologists might possibly be used for malicious purposes.
Additionally, we point the OHCHR-IACHR to the chain of events associated with Ola Bini’s arrest. Bini is a free software developer, who works to improve the security and privacy of the Internet for all its users. He has contributed to several key open source projects used to maintain the infrastructure of public Internet services, including JRuby, several Ruby libraries, as well as multiple implementations of the secure and open communication protocol OTR. Ola’s team at ThoughtWorks contributed to Certbot, the EFF-managed tool that has provided strong encryption for millions of websites around the world.
His arrest and detention were full of irregularities: his warrant was for a “Russian hacker” (Bini is neither Russian nor a hacker); he was not read his rights, allowed to contact his lawyer, or offered a translator. The arrest was preceded by a press conference, and framed as part of a process of defending Ecuador from retaliation by associates of Wikileaks. During the press conference, Ecuador’s Interior Minister announced that the government was about to apprehend individuals who were supposedly involved in trying to establish a “piracy center” in Ecuador, including two Russian hackers, a Wikileaks collaborator, and a person close to Julian Assange. She stated: “We are not going to allow Ecuador to become a hacking center, and we cannot allow illegal activities to take place in the country, either to harm Ecuadorian citizens or those from other countries or any government.”
Neither she nor any investigative authority has provided any evidence to back these claims.
As we wrote in our comments, prosecutions of technologists working in this space should be treated in the same way as the prosecution of journalists, lawyers, and other human rights defenders — with extreme caution, and with regard to the risk of politicization and misuse of such prosecutions. Unfortunately, Bini’s arrest is typical of the treatment of security researchers conducting human rights work.
We hope that the OHCHR and IACHR carefully consider our comments, and recognize how broad cybercrime laws, and their misuse by political actors, can directly challenge human rights defenders. Ola Bini’s case—and the other examples we’ve given—present clear evidence for why we must treat cybercrime law as connected to human rights considerations.
Author: Danny O'Brien
For years, the Eastern District of Texas (EDTX) has been a magnet for lawsuits filed by patent trolls—companies who make money with patent threats, rather than selling products or services. Technology companies large and small were sued in EDTX every week. We’ve written about how that district’s unfair and irregular procedures made it a haven for patent trolls.
In 2017, the Supreme Court put limits on this venue abuse with its TC Heartland decision. The court ruled that companies can only be sued in a particular venue if they are incorporated there, or have a “regular and established” place of business.
That was great for tech companies that had no connection to EDTX, but it left brick-and-mortar retailers exposed. In February, Apple, a company that has been sued hundreds of times in EDTX, closed its only two stores in the district, located in Richardson and Plano. With no stores in EDTX, Apple will be able to ask for a transfer in any future patent cases.
In the last few days those stores were open, Apple was sued for patent infringement four times, as patent trolls took what is likely their last chance to sue Apple in EDTX.
This month, as part of our Stupid Patent of the Month series, we’re taking a closer look at one of these last-minute lawsuits against Apple. On April 12, the last day the store was open, Apple was sued by LBS Innovations, LLC, a patent-licensing company owned by two New York patent lawyers, Daniel Mitry and Timothy Salmon. Since it was formed in 2011, LBS has sued more than 60 companies, all in the Eastern District of Texas. Those defendants include some companies that make their own technology, like Yahoo, Waze, and Microsoft, but they’re mostly retailers that use software made by others. LBS has sued tire stores, pizza shops, pet-food stores, and many others, all for using internet-based maps and “store location” features. LBS has sued retailers that use software made by Microsoft, others that use Mapquest, some that use Google, as well as those that use the open-source project OpenStreetMap.
LBS’ lawsuits accuse retailers of infringing one or more claims of U.S. Patent No. 6,091,956, titled “Situation Information System.” The most relevant claim, which is specifically cited in many lawsuits, is claim 11, which describes a method of showing “transmittable mappable hypertext items” to a user. The claim language describes “buildings, roads, vehicles, and signs” as possible examples of those items. It also describes providing “timely situation information” on the hypertext map.
There’s a big problem with the ’956 patent and its owner’s broad claim to have invented Internet mapping. The patent application was filed on June 12, 1997—but electronic maps, and specifically Internet-based maps, were well known by then. Not only that, but those maps were already adding what one would think of as “timely situation information,” such as weather and traffic updates.
Mapquest, the first commercial internet mapping service, is one example. Mapquest launched in 1996—before this patent’s 1997 priority date—and by July of that year, it was offering not just driving directions but personalized maps of cities that included favorite destinations.
And Mapquest wasn’t the first. Xerox PARC’s free interactive map was online as far back as 1993. By January 1997, it was getting more than 80,000 mapping requests per day. In March 1997, Michigan State University was getting 159,000 daily requests [PDF] for its regularly updated weather map. Some cities, such as Houston, had online traffic maps available in that time period, which also got timely updates.
In 1997, any Internet user, let alone anyone actually developing online maps, would have been aware of these very public examples.
As technology advanced, and Internet use became widespread, the information available on the electronic maps we all use became richer and more frequently updated. This was no surprise. What’s described in the ‘956 patent added nothing to this clear and well-known path.
How does the LBS Innovations patent hold up in court? Although these examples of earlier Internet maps can be found online fairly easily, that doesn’t mean it’s easy to get rid of a patent like the ’956 patent in court. The process of invalidating patents using prior art—patent law’s term for relevant knowledge about earlier inventions—is difficult and expensive. It requires hiring high-priced experts, filing long reports, and months or years of litigation. And it often involves the substantial risk of a jury trial, since it’s difficult to get an early ruling on prior art defenses.
Because of that drawn-out process, LBS has been able to extract settlements from dozens of defendants. It’s also reached settlements with companies like Microsoft and Google, which intervened after users of their respective mapping software were sued. In one case where LBS got near trial, after having settled with several other defendants, it simply dropped its lawsuit against the final company that was willing to fight, avoiding an invalidity judgment against its patent.
LBS never should have been issued this patent in the first place. But patent examiners are given less than 20 hours, on average, to examine an application. Faced with far-reaching claims by an ambitious applicant, but little time to scrutinize them, examiners don’t have many options—especially since applicants can return again and again. That means the only way examiners can get applications off their desks for good is by approving them. Given that incentive, it’s no surprise judges and juries often find issued patents invalid.
For software, it can be extremely difficult to find prior art that can invalidate the patent. Software was generally not patentable until the late 1990s, when a Federal Circuit decision called State Street Bank opened the door. That means patents aren’t good prior art for the vast majority of 20th century advances in computer science. Also, software is often protected by copyright or trade secret, and therefore not published or otherwise made public.
Often, published information may not precisely match the limitations of each patent claim. Did the earlier maps search “unique mappable information code sequences,” where each code sequence represented the mapped items, “copied from the memory of said computer”? They may well have done so—but published papers on internet mapping wouldn’t bother specifying inane steps that just recite basic computer technology.
The success of a litigation campaign like the one pushed by LBS Innovations shows why we can’t rely on the parts of the Patent Act that cover prior art to weed out bad patents. Section 101 allows courts to find patents ineligible on their face and early in a case. That saves defendants the staggering costs of litigation or an unnecessary settlement. Since the Alice v. CLS Bank decision, Section 101 has been used to dispose of hundreds of abstract software patents before trial.
Right now, key U.S. Senators are crafting a bill that would weaken Section 101. That will greatly increase the leverage of patent trolls like LBS Innovations, and their claims to own widespread Internet technology.
Proponents of the Tillis-Coons patent bill argue that there’s little need to worry about bad patents slipping through Section 101, because other sections of the patent law—the sections which allow for patents to be invalidated because of earlier inventions—will ensure that wrongly granted patents don’t win in court. But patent trolls simply aren’t afraid of those sections of law, because their effects are so limited. For many defendants, the costs of attempting to prove a patent invalid under these sections makes them unusable. Faced with legal bills of hundreds of thousands of dollars, if not millions, many defendants will have little choice but to settle.
We all lose when small businesses and independent programmers lose their most powerful means of fighting against bad patents. That’s why we’re asking EFF supporters to contact their representatives in Congress, and ask them to reject the Tillis-Coons patent proposal.
Author: Joe Mullin
EFF is proud to announce the newest member of our already star-studded advisory board: Michael R. Nelson. Michael has worked on Internet-related global public policy issues for more than 30 years, including working on technology policy in the U.S. Senate and the Clinton White House.
Michael’s broad expertise in many different aspects of technology will be invaluable to the work we do at EFF. His experience includes launching the Washington, D.C. policy office for Cloudflare, and working as a Principal Technology Policy Strategist in Microsoft’s Technology Policy Group, a Senior Technology and Telecommunications Analyst with Bloomberg Government, and the Director of Internet Technology and Strategy at IBM. In addition, Michael has been affiliated with the CCT Program at Georgetown University for more than ten years, teaching courses and doing research on the future of the Internet, cyber-policy, technology policy, innovation policy, and e-government.
In the 1990s, Michael was Director for Technology Policy at the Federal Communications Commission and Special Assistant for Information Technology at the White House Office of Science and Technology Policy. There, he worked with Vice President Al Gore and the President’s Science Advisor on issues relating to telecommunications policy, information technology, encryption, electronic commerce, and information policy. He also served as a professional staff member for the Senate’s Subcommittee on Science, Technology, and Space, chaired by then-Senator Gore, and was the lead Senate staffer for the High-Performance Computing Act. He has a B.S. from Caltech and a Ph.D. from MIT. Welcome Michael!
Author: Rebecca Jeschke
EFF has joined a coalition of civil rights and civil liberties organizations to support a California bill that would prohibit law enforcement from applying face recognition and other biometric surveillance technologies to footage collected by body-worn cameras.
About five years ago, body cameras began to flood into police and sheriff departments across the country. In California alone, the Bureau of Justice Assistance provided more than $7.4 million in grants for these cameras to 31 agencies. The technology was pitched to the public as a means to ensure police accountability and document police misconduct. However, if enough cops have cameras, a police force can become a roving surveillance network, and the thousands of hours of footage they log can be algorithmically analyzed, converted into metadata, and stored in searchable databases.
Today, we stand at a crossroads as face recognition technology can now be interfaced with body-worn cameras in real time. Recognizing the impending threat to our fundamental rights, California Assemblymember Phil Ting introduced A.B. 1215 to prohibit the use of face recognition, or other forms of biometric technology, such as gait recognition or tattoo recognition, on a camera worn or carried by a police officer.
“The use of facial recognition and other biometric surveillance is the functional equivalent of requiring every person to show a personal photo identification card at all times in violation of recognized constitutional rights,” the lawmaker writes in the introduction to the bill. “This technology also allows people to be tracked without consent. It would also generate massive databases about law-abiding Californians, and may chill the exercise of free speech in public places.”
Ting’s bill has the wind in its sails. The Assembly passed the bill with a 45-17 vote on May 9, and only a few days later the San Francisco Board of Supervisors made history by banning government use of face recognition. Meanwhile, law enforcement face recognition has come under heavy criticism at the federal level by the House Oversight Committee and the Government Accountability Office.
The bill is now before the California Senate, where it will be heard by the Public Safety Committee on Tuesday, June 11.
EFF has joined forces with a coalition of civil liberties organizations in supporting this critical legislation, including the ACLU, Advancing Justice – Asian Law Caucus, CAIR California, Data for Black Lives, and a number of our Electronic Frontier Alliance allies.
Face recognition technology has disproportionately high error rates for women and people of color. Making matters worse, law enforcement agencies conducting face surveillance often rely on images pulled from mugshot databases, which include a disproportionate number of people of color due to racial discrimination in our criminal justice system. So face surveillance will exacerbate historical biases born of, and contributing to, unfair policing practices in Black and Latinx neighborhoods.
Polling commissioned by the ACLU of Northern California in March of this year shows the people of California, across party lines, support these important limitations. The ACLU’s polling found that 62% of respondents agreed that body cameras should be used solely to record how police treat people, and as a tool for public oversight and accountability, rather than to give law enforcement a means to identify and track people. In the same poll, 82% of respondents said they disagree with the government being able to monitor and track a person using their biometric information.
Last month, Reuters reported that Microsoft rejected an unidentified California law enforcement agency’s request to apply face recognition to body cameras due to human rights concerns.
“Anytime they pulled anyone over, they wanted to run a face scan,” Microsoft President Brad Smith said. “We said this technology is not your answer.”
We agree that ubiquitous face surveillance is a mistake, but we shouldn’t have to rely on the ethical standards of tech giants to address this problem. Lawmakers in Sacramento must use this opportunity to prevent the threat of mass biometric surveillance from becoming the new normal. We urge the California Senate to pass A.B. 1215.
Author: Nathan Sheard
If you rely on shared bikes or scooters, your location privacy is at risk. Cities across the United States are currently pushing companies that operate shared mobility services like Jump, Lime, and Bird to share individual trip data for any and all trips taken within their boundaries, including where and when trips start and stop and granular details about the specific routes taken. This data is extremely sensitive, as it can be used to reidentify riders—particularly for habitual trips—and to track movements and patterns over time. While it is beneficial for cities to have access to aggregate data about shared mobility devices to ensure that they are deployed safely, efficiently, and equitably, cities should not be allowed to force operators to turn over sensitive, personally identifiable information about riders.
As these programs become more common, the California Legislature is considering a bill, A.B. 1112, that would ensure that local authorities receive only aggregated or non-identifiable trip data from shared mobility providers. EFF supports A.B. 1112, authored by Assemblymember Laura Friedman, which strikes the appropriate balance between protecting individual privacy and ensuring that local authorities have enough information to regulate our public streets so that they work for all Californians. The bill makes sure that local authorities will have the ability to impose deployment requirements in low-income areas to ensure equitable access, fleet caps to decrease congestion, and limits on device speed to ensure safety. And importantly, the bill clarifies that CalECPA—California’s landmark electronic privacy law—applies to data generated by shared mobility devices, just as it applies to data from any other electronic device.
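To make the distinction concrete, here is a minimal sketch of what aggregated, non-identifiable reporting can look like. The zone names, trip records, and the threshold value are invented for illustration; this is one simple approach (count suppression), not a description of any specific city's or operator's system.

```python
from collections import defaultdict

# Hypothetical raw trips: (start_zone, hour_of_day). Zone IDs are
# invented; a real pipeline would map GPS points to census tracts
# or grid cells before the precise coordinates are discarded.
raw_trips = [
    ("zone-12", 8), ("zone-12", 8), ("zone-12", 8),
    ("zone-12", 8), ("zone-12", 8), ("zone-07", 23),
]

# Aggregate to counts per (zone, hour), then suppress any bucket
# with fewer than K trips, so a lone rider's movement never
# appears in the data a city receives.
K = 5
counts = defaultdict(int)
for zone, hour in raw_trips:
    counts[(zone, hour)] += 1

report = {bucket: n for bucket, n in counts.items() if n >= K}
print(report)  # {('zone-12', 8): 5} -- the single late-night trip is dropped
```

Counts like these still let planners see where and when devices are used, which is the kind of information cities legitimately need, without handing over any individual's route.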
Five California cities, however, are opposing this privacy-protective legislation. At least four of these cities—Los Angeles, Santa Monica, San Francisco, and Oakland—have pilot programs underway that require shared mobility companies to turn over sensitive individual trip data as a condition of receiving a permit. Currently, any company that does not comply cannot operate in the city. The cities want continued access to individual trip data and argue that removing “customer identifiers” like names from this data should be enough to protect rider privacy.
The problem? Even with names stripped out, location information is notoriously easy to reidentify, particularly for habitual trips. This is especially true when location information is aggregated over time. And the data shows that riders are, in fact, using dockless mobility vehicles for their regular commutes. For example, as documented in Lime’s Year End Report for 2018, 40 percent of Lime riders reported commuting to or from work or school during their most recent trip. And remember, in the case of dockless scooters and bikes, these devices may be parked directly outside a rider’s home or work. If a rider used the same shared scooter or bike service every day to commute between their work and home, it’s not hard to imagine how easy it might be to reidentify them—even if their name was not explicitly connected to their trip data. Time-stamped geolocation data could also reveal trips to medical specialists, specific places of worship, and particular neighborhoods or bars. Patterns in the data could reveal social relationships, and potentially even extramarital affairs, as well as personal habits, such as when people typically leave the house in the morning, go to the gym or run errands, how often they go out on evenings and weekends, and where they like to go.
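The mechanics of reidentifying a habitual trip are simple enough to sketch in a few lines. The coordinates, tokens, and rounding precision below are all invented for illustration; the point is only that a name-stripped trip log still carries a fingerprint.

```python
from collections import Counter

# Hypothetical "anonymized" trip log: names are stripped, but each
# record still carries precise start/end coordinates and an hour.
# All values below are invented for illustration.
trips = [
    # (start_lat, start_lon, end_lat, end_lon, hour)
    (34.0522, -118.2437, 34.0407, -118.2468, 8),   # repeated weekday commute
    (34.0522, -118.2437, 34.0407, -118.2468, 8),
    (34.0522, -118.2437, 34.0407, -118.2468, 8),
    (34.0610, -118.3000, 34.0522, -118.2437, 18),  # unrelated one-off trip
]

# Fingerprint each trip by origin, destination (rounded to roughly
# a city block), and hour of day. A habitual commute produces the
# same fingerprint day after day.
def fingerprint(trip):
    slat, slon, elat, elon, hour = trip
    return (round(slat, 3), round(slon, 3),
            round(elat, 3), round(elon, 3), hour)

counts = Counter(fingerprint(t) for t in trips)
habitual = {fp: n for fp, n in counts.items() if n >= 3}
print(len(habitual))  # 1 -- one commute pattern stands out
```

Anyone who learns where a person lives and works—a coworker, a stalker, a government agency—can match that knowledge against a recurring fingerprint like this one, no name required.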
The cities claim that they will institute “technical safeguards” and “business processes” to prohibit reidentification of individual consumers, but so long as the cities have the individual trip data, reidentification will be possible—by city transportation agencies, law enforcement, ICE, or any other third parties that receive data from cities.
The cities’ promises to keep the data confidential and make sure the records are exempt from disclosure under public records laws also fall flat. One big issue is that the cities have not outlined and limited the specific purposes for which they plan to use the geolocation data they are demanding. They also have not delineated how they will minimize their collection of personal information (including trip data) to data necessary to achieve those objectives. This violates both the letter and the spirit of the California Constitution’s right to privacy, which explicitly lists privacy as an inalienable right of all people and, in the words of the California Supreme Court, “prevents government and business interests from collecting and stockpiling unnecessary information about us” or “misusing information gathered for one purpose in order to serve other purposes[.]”
The biggest mistake local jurisdictions could make would be to collect data first and think about what to do with it later—after consumers’ privacy has been put at risk. That’s unfortunately what cities are doing now, and A.B. 1112 will put a stop to it.
The time is ripe for thoughtful state regulation reining in local demands for individual trip data. As we’ve told the California legislature, bike- and scooter-sharing services are proliferating in cities across the United States, and local authorities should have the right to regulate their use. But those efforts should not come at the cost of riders’ privacy.
We urge the California legislature to pass A.B. 1112 and protect the privacy of all Californians who rely on shared mobility devices for their transportation needs. And we urge cities in California and across the United States to start respecting the privacy of riders. Cities should start working with regulators and the public to strike the right balance between their need to obtain data for city planning purposes and the need to protect individual privacy—and they should stop working to undermine rider privacy.
Author: Jamie Williams