EFF Calls on California to End Vendor-Driven ALPR Training


A single surveillance vendor has secured a monopoly on training California law enforcement in the use of automated license plate readers (ALPRs)—a mass surveillance technology used to track the movements of drivers. After examining the course materials, EFF is now calling on the state body that oversees police standards to revoke the training certification.

In a letter to the California Commission on Peace Officer Standards and Training (POST) sent today, EFF raises a variety of concerns related to factual accuracy of its ALPR training on legal matters. Additionally, we are concerned about the apparent conflict of interest and threat to civil liberties that occurs when a sales-driven company also provides instruction on “best practices” to police.

ALPRs are camera systems that capture images of license plates and use character-recognition software to document the travel patterns of vehicles. The cameras are often attached to fixed locations, such as streetlights and overpasses, or to police cars, which collect data while patrolling neighborhoods. This data is uploaded to a central database that investigators can use to analyze a driver’s travel patterns, identify visitors to particular destinations, predict individuals’ locations, and track targeted vehicles in real time. ALPR is a mass surveillance technology in the sense that these systems collect information on every driver—regardless of whether the vehicle has any nexus to a criminal investigation.
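To make the surveillance concrete, the pipeline described above can be sketched in a few lines. This is a purely hypothetical illustration, not any vendor’s actual schema: the table and field names are invented, and a real deployment would hold billions of reads rather than three.

```python
import sqlite3

# Hypothetical plate-read database: every pass by a camera becomes a
# row, with no requirement that the vehicle be under investigation.
db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE reads (
        plate     TEXT,  -- output of the character-recognition step
        seen_at   TEXT,  -- ISO-8601 timestamp of the read
        latitude  REAL,  -- camera location (fixed or patrol car)
        longitude REAL
    )
""")
db.executemany(
    "INSERT INTO reads VALUES (?, ?, ?, ?)",
    [
        ("7ABC123", "2019-08-01T08:02:00", 34.052, -118.243),
        ("7ABC123", "2019-08-01T17:45:00", 34.101, -118.326),
        ("8XYZ999", "2019-08-01T09:10:00", 34.052, -118.243),
    ],
)

# Reconstructing one driver's travel pattern is a single query.
history = db.execute(
    "SELECT seen_at, latitude, longitude FROM reads"
    " WHERE plate = ? ORDER BY seen_at",
    ("7ABC123",),
).fetchall()
```

The other uses listed above fall out of the same table: “who visited this location” is the same query filtered on coordinates instead of the plate. That is why collecting every read, rather than only reads tied to an investigation, is what makes ALPR a mass surveillance tool.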

In California, Vigilant Solutions offers ALPR training through a program it calls the “Vigilant Solutions Law Enforcement Academy,” which advertises training courses that come with free trial accounts for the company’s ALPR and face recognition platforms. Vigilant has attracted controversy due to its data-sharing contracts with ICE and its business model, which includes selling data collected with its own ALPR cameras to the private sector in addition to law enforcement. The company also has a history of requiring government agencies to sign agreements prohibiting them from talking publicly without the company’s sign-off, in an effort to control media messaging.

Vigilant claims to be the sole entity capable of providing POST-certified training on ALPR to law enforcement agencies. Through the California Public Records Act, EFF obtained copies of the training, as well as the submission materials seeking certification. These records raised several concerns.

Most notably, the training presentation instructs police that there are no laws in California regulating the use of ALPR. While that may have been true in 2014, it has not been the case for nearly four years. In 2015, California passed a law, S.B. 34, regulating the use of ALPR systems and the data they collect. Its requirements include developing policies that protect civil liberties and privacy, as well as a long list of obligations related to cybersecurity and transparency. The training also does not touch on the California Values Act, a law passed in 2017 to protect California resources and data from being used in immigration enforcement. Additionally, the training module includes outdated information on case law, such as the claim that EFF and the ACLU lost a lawsuit over public access to ALPR data; the California Supreme Court ultimately reversed the lower-court rulings described in the presentation.

In emails to EFF, Vigilant has indicated that it may have updated the presentation. But if so, that version was not resubmitted for certification, as required by POST regulations, according to records obtained by EFF. POST should investigate whether Vigilant is providing its own interpretation of recent developments in law, and if so, whether that instruction serves the public interest. When a surveillance vendor offers cloud storage and sharing services, it has a profit incentive when police collect more data and share it widely. 

Troublingly, Vigilant Solutions uses the ALPR training as a platform to sell its products. The training materials are filled with promotion, such as a pitch for its ALPR databases consisting of law enforcement and commercial data, and its mobile software that comes with face recognition capabilities. By having a monopoly on ALPR trainings, Vigilant is able to promote its products and its version of the law surrounding ALPR at the expense of protecting civil liberties and privacy. 

Over the last few years, EFF has filed public records requests with hundreds of agencies throughout California and found widespread failure to comply with state law for regulating ALPR technology. These failures necessitate an examination of whether agencies are being properly trained on the use of ALPR. So far, EFF’s research has led the legislature to order the California State Auditor to initiate a statewide investigation into the use of ALPR, including deep audits of entities using Vigilant’s products. In this case, EFF urges POST to initiate the decertification proceeding for the Vigilant course and encourages law enforcement agencies to seek alternatives to Vigilant’s training.

Author: Dave Maass


A Cycle of Renewal, Broken: How Big Tech and Big Media Abuse Copyright Law to Slay Competition

As long as we’ve had electronic mass media, audiences and creators have benefited from periods of technological upheaval that force old gatekeepers to compete with brash newcomers with new ideas about what constitutes acceptable culture and art. Those newcomers eventually became gatekeepers themselves, who then faced their own crop of revolutionaries. But today, the cycle is broken: as media, telecoms, and tech have all grown concentrated, the markets have become winner-take-all clashes among titans who seek to dominate our culture, our discourse, and our communications.

How did the cycle end? Can we bring it back? To understand the answers to these questions, we need to consider how the cycle worked — back when it was still working.

How Things Used to Work

In 1950, a television salesman named Robert Tarlton put together a consortium of TV merchants in the town of Lansford, Pennsylvania to erect an antenna tall enough to pull down signals from Philadelphia, about 90 miles to the southeast. The antenna connected to a web of cables that the consortium strung up and down the streets of Lansford, bringing big-city TV to their customers — and making TV ownership for Lansfordites far more attractive. Though hobbyists had been jury-rigging their own “community antenna television” networks since 1948, no one had ever tried to go into business with such an operation. The first commercial cable TV company was born.

The rise of cable over the following years kicked off decades of political controversy over whether the cable operators should be allowed to stay in business, seeing as they were retransmitting broadcast signals without payment or permission and collecting money for the service. Broadcasters took a dim view of people using their signals without permission, which is a little rich, given that the broadcasting industry itself owed its existence to the ability to play sound recordings over the air without permission or payment.

The FCC brokered a series of compromises in the years that followed, coming up with complex rules governing which signals a cable operator could retransmit, which ones they must retransmit, and how much all this would cost. The end result was a second way to get TV, one that made peace with—and grew alongside—broadcasters, eventually coming to dominate how we get TV in our homes.

By 1976, cable and broadcasters had joined forces to fight a new technology: home video recorders, starting with Sony’s Betamax recorders. In the eyes of the cable operators, broadcasters, and movie studios, these were as illegitimate as the playing of records over the air had been, or as retransmitting those broadcasts over cable had been. Lawsuits over the VCR continued for the next eight years. In 1984, the Supreme Court finally weighed in, legalizing the VCR and finding that new technologies were not illegal under copyright law if they were “capable of substantial noninfringing uses.”

It’s hard to imagine today how controversial the VCR was in its day. MPAA president Jack Valenti made history by attending a congressional hearing where he thundered, “I say to you that the VCR is to the American film producer and the American public as the Boston Strangler is to the woman home alone.”

Despite that unequivocal condemnation, home recording is so normal today that your cable operator likely offers to bundle a digital recorder with your subscription. Just as the record companies made peace with broadcasters, and broadcasters made peace with cable, cable has made its peace with home recording.

It’s easy to imagine that this is the general cycle of technology: a new technology comes along and rudely shoulders its way into the marketplace, pouring the old wine of the old guard into its shiny new bottles. The old guard insist that these brash newcomers are mere criminals, and demand justice.

The public flocks to the new technology, and, before you know it, the old guard and the newcomers are toasting one another at banquets and getting ready to sue the next vulgarian who has the temerity to enter their market and pour their old wine into even newer bottles.

That’s how it used to work, but the cycle has been interrupted.

The Cycle is Broken

In 1998, Congress passed the Digital Millennium Copyright Act, whose Section 1201 bans bypassing a “technical measure” that “controls access” to copyrighted works. The statute does not make an exemption for people who need to bypass a copyright lock to do something legal, so traditional acts of “adversarial interoperability” (making a new thing that plugs into an old thing without asking for permission) can be headed off before they even get started. Once a company adds a digital lock to its products, it can scare away other companies that want to give it the treatment that broadcasters gave to records, that cable gave to broadcasters, and that the VCR gave to cable. These challengers will have to overcome their fear that “trafficking” in a “circumvention device” could trigger DMCA 1201’s civil damages or even criminal penalties—$500,000 and 5 years in prison…for a first offense.

When companies like Sony made the first analog TV recorders, they focused on what their customer wanted, not what the winners of last year’s technological battle thought was proper. That’s how we got VCRs that could record off the air or cable (so you could record any show, even major Hollywood movies getting their first broadcast airing) and that allowed recordings made on one VCR to be played on another recorder (so you could bring that movie over to a friend’s house to watch with a bowl of popcorn).

Today’s digital video products are different. Cable TV, satellite TV, DVDs/HD DVDs/Blu-Ray, and streaming services all use digital locks that scramble their videos. This allows them to threaten any would-be adversarial interoperators with legal reprisals under DMCA 1201, should they have the temerity to make a user-focused recorder for their products. That stifles a lot of common-sense ideas: for example, a recorder that works on all the programs your cable delivers (even pay-per-views and blockbusters); a recorder that lets you store the Christmas videos that Netflix and Amazon Prime take out of rotation at Christmastime (so that you have to pay an upcharge to watch them when they’re most relevant); or a recorder that lets you record a video and take it over to a friend’s house or transfer it to an archival drive, so you can be sure you can watch it ten years (or even ten minutes) from now.

Since the first record players, every generation of entertainment technology has been overtaken by a new generation—a generation that allowed new artists to find new audiences, a new generation that overturned the biases and preconceptions of the executives that controlled the industry and allowed for new modes of expression and new ideas.

Today, as markets concentrate—cable, telecoms, movie studios, and tech platforms—the competition is shifting from the short-lived drive to produce the best TV possible to a long-term strategy of figuring out how to use a few successful shows to sell bundles of mediocre ones.

In a world where the cycle that led to the rise of cable and streaming was still in effect, you could record your favorite shows before they were locked behind a rival’s paywalls. You could search all the streaming services’ catalogs from a single interface and figure out how to make your dollar go farther by automatically assembling a mix of one-off payments and subscriptions. You could stream the videos your home devices received to your phone while you were on the road…and more.

And just as last year’s pirates — the broadcasters, the cable operators, the VCR makers — became this year’s admirals, the companies that got their start by making new services that centered your satisfaction instead of the goodwill of the entrenched industries would someday grow to be tomorrow’s Goliaths, facing a new army of Davids.

Fatalistic explanations for the unchecked rise of today’s monopolized markets—things like network effects and first-mover advantage—are not the whole story. They are not unstoppable forces of nature. The cycle of concentration and renewal in media-tech shows us that, whatever role the forces of first-mover advantage and network effects are playing in market concentration, they are abetted by some badly written and oft-abused legal rules.

DMCA 1201 let companies declare certain kinds of competition illegal: adversarial interoperability, one of the most historically tried-and-true methods for challenging dominant companies, can be made into a crime simply by designing products so that connecting to them requires you to bypass a copyright lock. Since DMCA 1201 bans this “circumvention,” it also bans any competition that requires circumvention.

That’s why we’re challenging DMCA 1201 in court: we don’t think that companies should be able to make up their own laws, because inevitably, these turn into “Felony Contempt of Business Model.”

DMCA 1201 is just one of the laws and policies that have created the thicket that would-be adversarial interoperators run up against when they seek to upend the established hierarchy: software patents, overreaching license agreements, and theories of tortious interference with contractual relations are all so broadly worded and interpreted that they can be used to intimidate would-be competitors no matter how exciting their products are and no matter how big the market for them would be.

Author: Cory Doctorow

EFF Joins Latin American Organizations in Opposing the Prosecution of Ola Bini

This Monday marks four months since the start of the prosecution of Ola Bini, the open source developer currently under investigation by Ecuadorian authorities. Prosecutors have yet to reveal any actual evidence supporting the accusations made against Bini. Following last week’s 12th Regional Internet Governance Forum for Latin America and the Caribbean (LACIGF), civil society organizations from the region released a statement highlighting the due process irregularities and political pressures that have marked the case so far. EFF joins them. After traveling to Quito to speak with journalists, politicians, lawyers, and academics, as well as with Bini himself and his defense team, we reached similar conclusions: Bini’s prosecution is a political case, not a criminal one. We likewise oppose the misuse of his prosecution in the service of political interests, which compromises his right to a fair trial.

Since EFF’s founding in 1990, we have worked to ensure that security researchers and experts like Bini can do their work without being misinterpreted or persecuted by those in power, work that improves everyone’s security online. Bini’s work is not only legal: it helps improve everyone’s privacy and security online, as we explained in our 2018 report Coders’ Rights in Latin America, which connects such work to the fundamental rights of its practitioners and beneficiaries in the region.

For more information, see the statement below:

Against the Political Persecution of Ola Bini

Ola Bini is a renowned free software activist and digital security expert. Since April 11, 2019, he has been subject to judicial proceedings in Ecuador, accused of having breached computer systems. That process, however, has been widely questioned because of the many irregularities committed and the countless political pressures placed upon it.

The first point has been confirmed by the habeas corpus granted this past June by a tribunal of the Provincial Court of Pichincha, and by the timely statements of the Special Rapporteurs for Freedom of Expression of the Organization of American States (OAS) and the United Nations (UN).[1] [2]

For its part, the international mission that the Electronic Frontier Foundation (EFF) recently sent to Ecuador concluded, after discussing the situation with politicians, academics, and journalists of various political leanings, that the motivation behind Ola Bini’s case is political, not criminal.[3] Indeed, it is still unknown which computer systems he was originally accused of breaching.

Alongside this, a series of recent events has raised new alarms. First, a new person was tied to the case solely for maintaining a professional relationship with Bini, even though the legal elements required for that step were not presented at the corresponding hearing. In addition, the prosecutor in charge of the case decided to open two new lines of investigation against Ola Bini, for “tax fraud” and “influence peddling.” The prosecution thus now proposes to investigate the activist for up to two more years.

This latest decision suggests that there is no evidence to support the accusations originally made against Bini, and that the attention of Ecuador’s justice system and government is focused not on a crime but on a person. It confirms the fear, expressed by international organizations working on human rights online from the moment of Ola Bini’s detention, of a spiral of political persecution against an internationally renowned activist whose work protecting privacy is recognized worldwide.

In light of the above, and of the conversations held during the 12th Internet Governance Forum for Latin America and the Caribbean (LACIGF), the undersigned reject the persecutory campaign mounted against Bini, demand that all branches of the State respect due process, and urge political actors to stop interfering in the justice system.

Asociación para el Progreso de las Comunicaciones

Derechos Digitales

Electronic Frontier Foundation

Internet Bolivia



[1] https://cnnespanol.cnn.com/2019/06/20/tribunal-de-ecuador-acepta-recurso-de-habeas-corpus-para-ola-bini/

[2] https://www.eluniverso.com/noticias/2019/04/15/nota/7287350/relatorias-onu-oea-cuestionan-detencion-ola-bini

[3] https://www.eff.org/es/deeplinks/2019/08/ecuador-political-actors-must-step-away-ola-binis-case

Author: Veridiana Alimonti

Interoperability and Privacy: Squaring the Circle


Last summer, we published a comprehensive look at the ways that Facebook could and should open up its data so that users could control their experience on the service and competing services could more easily thrive.

In the time since, Facebook has continued to be rocked by scandals: privacy breaches, livestreamed terrorist attacks, harassment, and more. At the same time, competition regulators, scholars and technologists have stepped up calls for Facebook to create and/or adopt interoperability standards to open up its messenger products (and others) to competitors.

To make matters more complex, there is an increasing appetite, in both the USA and Europe, to hold Facebook and other online services directly accountable for the actions of their users: both in terms of what those users make available (copyright infringement, political extremism, incitement to violence, etc.) and in how they treat each other (harassment, stalking, etc.).

Fool me twice…

Facebook execs have complained that these goals are in conflict: they say that for the company to detect and block undesirable user behaviors, as well as interdict future Cambridge Analytica-style data-hijacking, it needs to be able to observe and analyze everything every user does, both to train automated filters and to allow it to block abusers. But allowing third parties to both inject data into the network and pull data out of it–that is, allowing interoperability–would weaken the company’s ability to monitor and control its users’ bad behavior.

There is a good deal of truth to this, but buried in that truth is a critical (and highly debatable) assumption: “If you believe that Facebook has the will and ability to stop 2.3 billion people from abusing its systems and each other, then weakening Facebook’s control over these 2.3 billion people might limit the company’s ability to make that happen.”

But if there’s one thing we’ve learned from more than a decade of Facebook scandals, it’s that there’s little reason to believe that Facebook possesses the requisite will and capabilities. Indeed, it may be that there is no automated system or system of human judgments that could serve as a moderator and arbiter of the daily lives of billions of people. Given Facebook’s ambition to put more and more of our daily lives behind its walled garden, it’s hard to see why we would ever trust Facebook to be the one to fix all that’s wrong with Facebook.

After all, Facebook’s moderation efforts to date have been a mess of backfiring, overblocking, and self-censorship, a “solution” that no one is happy with.

Which is why interoperability is an important piece of the puzzle when it comes to addressing the very real harms of market concentration in the tech sector, including Facebook’s dominance over social media. Facebook users are eager for alternatives to the service, but are held back by the fact that the people they want to talk with are all locked within the company’s walled garden. Interoperability presents a means for people to remain partially on Facebook, but while using third-party tools that are designed to respond to their idiosyncratic needs. While it seems likely that no one is able to build a single system that protects 2.3 billion users, it’s certainly possible to build a service whose social norms and technological rules are suited to smaller groups. Facebook can’t figure out how to serve every individual and community’s needs–but those individuals and communities might be able to do so for themselves, especially if they get to choose which toolsmith’s tools they use to mediate their Facebook experience.

Standards-washing: the lesson of Bush v Gore

But not all interoperability is created equal. Companies have historically shown themselves to be more than capable of subverting mandates to adhere to standards and allow for interconnection.

A good historic example of this is the drive to standardize voting machines in the wake of the Supreme Court’s decision in Bush v Gore. Ambiguous results from voting machines resulted in an election whose outcome had to be determined by the Supreme Court, which led to Congress passing the Help America Vote Act, which mandated standards for voting machines.

The process did include a top-tier standards development organization to oversee its work: the Institute of Electrical and Electronics Engineers (IEEE), which set about creating a standard for voting machines. But rather than creating a “performance standard” describing how a voting machine should process ballots, the industry sneakily tried to get the IEEE to create a “design standard” that largely described the machines vendors had already sold to local election officials: in other words, rather than using standards to describe how a good voting machine should work, the industry pushed a standard that described how their existing, flawed machines did work, with some small changes in configuration. Had they succeeded, they could have simply slapped a “complies with IEEE standard” label on everything they were already selling and declared the problem fixed…without making the serious changes needed to fix their systems, including requiring a voter-verified paper ballot.

Big Tech is even more concentrated than the voting machine industry is, and it’s far more concentrated than the voting machine industry was in 2003 (most industries are more concentrated today than they were in 2003). Legislatures, courts or regulators that seek to define “interoperability” should be aware of the real risk of the definition being hijacked by the dominant players (who are already very skilled at subverting standardization processes). Any interoperability standard developed without recognizing Facebook’s current power and interest is at risk of standardizing the parts of Facebook’s business that it does not view as competitive risks, while leaving the company’s core business (and its bad business practices) untouched.

Even if we do manage to impose interoperability on Facebook in ways that allow for meaningful competition, in the absence of robust anti-monopoly rules, the ecosystem that grows up around that new standard is likely to view everything that’s not a standard interoperable component as a competitive advantage, something that no competitor should be allowed to make incursions upon, on pain of a lawsuit for violating terms of service or infringing a patent or reverse-engineering a copyright lock or even more nebulous claims like “tortious interference with contract.”

Everything not forbidden is mandatory

In other words, the risk of trusting competition to an interoperability mandate is that it will create a new ecosystem where everything that’s not forbidden is mandatory, freezing in place the current situation, in which Facebook and the other giants dominate and new entrants are faced with onerous compliance burdens that make it more difficult to start a new service, and limit those new services to interoperating in ways that are carefully designed to prevent any kind of competitive challenge.

Standards should be the floor on interoperability, but adversarial interoperability should be the ceiling. Adversarial interoperability takes place when a new company designs a product or service that works with another company’s existing products or services, without seeking permission to do so.

Facebook is a notorious opponent of adversarial interoperability. In 2008, Facebook successfully wielded a radical legal theory to shut down Power Ventures, a competitor that allowed Facebook users who opted in to use multiple social networks from a single interface. Facebook argued that by allowing users to log in and display Facebook through a different interface, even after receiving a cease-and-desist letter telling it to stop, Power Ventures had broken a Reagan-era anti-hacking law called the Computer Fraud and Abuse Act (CFAA). In other words, the illegal conduct at the center of the case amounted to little more than upsetting Facebook.

Adversarial interoperability flips the script

Clearing this legal thicket would go a long way toward allowing online communities to self-govern by federating their discussions with Facebook without relying on Facebook’s privacy tools and practices. Software vendors could create tools that allowed community members to communicate in private, using encrypted messages that are unintelligible to Facebook’s data-mining tools, but whose potential members could still discover and join the group using Facebook.
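In miniature, such a tool could work as follows. This sketch is purely illustrative: the “platform” is stubbed out as a plain list, and a toy one-time-pad stands in for the audited encryption library a real tool would use. The point is the architecture, in which messages are encrypted client-side and the platform relays only ciphertext it cannot mine.

```python
import secrets

# Toy stand-in for real encryption (a one-time pad). A real tool would
# use an audited cryptography library; here the goal is only to show
# that the platform handles blobs it cannot read.
group_key = secrets.token_bytes(64)  # shared out of band among members

def encrypt(plaintext: str) -> bytes:
    data = plaintext.encode()
    assert len(data) <= len(group_key)  # toy limitation of the pad
    return bytes(b ^ k for b, k in zip(data, group_key))

def decrypt(blob: bytes) -> str:
    return bytes(b ^ k for b, k in zip(blob, group_key)).decode()

# Stand-in for the social network: it stores and relays opaque blobs.
platform_feed = []

platform_feed.append(encrypt("Meeting moved to 7pm"))

# The platform's data-mining tools see only ciphertext...
assert platform_feed[0] != b"Meeting moved to 7pm"
# ...but members holding the key recover the message intact.
recovered = decrypt(platform_feed[0])  # "Meeting moved to 7pm"
```

A real federation tool would add key distribution, group membership, and authenticated encryption, but the division of labor stays the same: the platform provides discovery and delivery, while confidentiality lives entirely with the members.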

This could allow new entrants to flip the script on Facebook’s “network effects” advantage: today, Facebook is viewed as holding all the cards because it has corralled everyone who might join a new service within its walled garden. But legal reforms to safeguard the right to adversarial interoperability would turn this on its head: Facebook would be the place that had conveniently organized all the people whom you might tempt to leave Facebook, and even supply you with the tools you need to target those people.

Revenge of Carterfone

There is good historic precedent for using a mix of interoperability mandates and a legal right to interoperate beyond those mandates to reduce monopoly power. The FCC has imposed a series of interoperability obligations on incumbent phone companies: for example, the rules that allow phone subscribers to choose their own long-distance carriers.

At the same time, federal agencies and courts have also stripped away many of the legal tools that phone companies once used to punish third parties who plugged gear into their networks. The incumbent telecom companies historically argued that they couldn’t maintain a reliable phone network if they didn’t get to specify which devices were connected to it, a position that also allowed the companies to extract rental payments for home phones for decades, selling you the same phone dozens or even hundreds of times over.

When agencies and courts cleared the legal thicket around adversarial interoperability in the phone network, it did not mean that the phone companies had to help new entrants connect stuff to their wires: manufacturers of modems, answering machines, and switchboards sometimes had to contend with technical changes in the Bell system that broke their products. Sometimes, this was an accident of some unrelated technical administration of the system; sometimes it seemed like a deliberate bid to harm a competitor. Often, it was ambiguous.

Monopolists don’t have a monopoly on talent

But it turns out that you don’t need the phone company’s cooperation to design a device that works with its system. Careful reverse-engineering and diligent product updates meant that even devices that the phone companies hated–devices that eroded their most profitable markets–had long and profitable runs in the market, with devoted customers.

Those customers are key to the success of adversarial interoperators. Remember that the audience for a legitimate adversarial interoperability product are the customers of the existing service that it connects to. Anything that the Bell system did to block third-party phone devices ultimately punished the customers who bought those devices, creating ill will.

And when a critical mass of an incumbent giant’s customer base depends on–and enjoys–a competitor’s product, even the most jealous and uncooperative giants are often convinced to change tactics and support the businesses they’ve been trying to destroy. In a competitive market (which adversarial interoperability can help to bring into existence), even very large companies can’t afford to enrage their customers.

Is Facebook better than everyone else?

Facebook is one of the largest companies in the world. Many of the world’s most talented engineers and security experts already work there, and many others aspire to. Given that, is it realistic to think that a would-be adversarial interoperator could design a service that plugs into Facebook without Facebook’s permission?

Ultimately, this is not a question with an empirical answer. It’s true that few have tried to pull this off since Power Ventures was destroyed by Facebook litigation, but it’s not clear whether the competitive vacuum is the result of potential competitors who are too timid to lock engineering horns with Facebook’s brain-trust, or potential competitors and investors whose legal departments won’t let them even try.

But it is instructive to look at the history of the Bell system after Carterfone and Hush-a-Phone: though Bell was the single biggest employer of telephone technicians in the world and represented the best, safest, highest-paid opportunities for would-be telecoms innovators, its rivals proceeded to make device after device after device that extended the capabilities of the phone network, without permission, overcoming the impediments that the network’s operator put in their way.

Closer to home, remember that when Facebook wanted to get Power Ventures out of its network, its primary tool of choice wasn’t technical measures–Facebook didn’t (or couldn’t) use API changes or firewall rules alone to keep Power Ventures off the service–it was mainly lawsuits. Perhaps that’s because Facebook wanted to set an example for later challengers by winning a definitive legal battle, but it’s very telling that the company that operated the network didn’t (or couldn’t!) just kick its rival out, and instead went through a lengthy, expensive and risky legal battle when simple IP blocking didn’t work.

Facebook has a lot of talented engineers, but it doesn’t have all of them.

Being a defender is hard

Facebook’s problem with would-be future challengers is a familiar one: in security, it is easier to attack than to defend. To keep a potential competitor off its network, Facebook has to make no mistakes. A third party that wants to interoperate with Facebook without permission has only to find and exploit a single mistake.
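The attacker’s advantage compounds quickly, as a back-of-the-envelope sketch shows (the per-endpoint flaw rate and endpoint count below are illustrative assumptions, not real figures for any service):

```python
# Defender must secure every endpoint; an attacker needs just one flaw.
# Both numbers below are assumptions chosen purely for illustration.
p_flaw = 0.01      # assumed chance any single endpoint harbors an exploitable bug
endpoints = 500    # assumed number of distinct endpoints the service exposes

# Probability that at least one endpoint, somewhere, is flawed:
p_at_least_one = 1 - (1 - p_flaw) ** endpoints
print(round(p_at_least_one, 3))  # ~0.993: a single mistake is almost certain
```

Even with a 99% per-endpoint success rate, the defender is overwhelmingly likely to have left an opening somewhere; the interoperator only needs to find it.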

And Facebook labors under other constraints: like the Bell system fending off Hush-a-Phone, the things that Facebook does to make life hard for competitors who are helping its users get more out of its service are also making life harder for all its users. For example, any tripwire that blocks logins by suspected bots will also block users whose behaviors appear bot-like: the more strict the bot-detector is, the more actual humans it will catch.

Here again, Facebook’s dizzying user-base works against it: with billions of users, a one-in-a-million event is going to happen thousands of times every day, so Facebook has to accommodate a wide variety of use-cases, and some of those behaviors will be sufficiently weird to allow a rival’s bot to slip through.
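As a rough illustration of that base-rate point (the user count and event probability are assumed figures, not Facebook’s actual numbers):

```python
# "One-in-a-million" behavior at platform scale. Figures are assumptions.
users = 2_000_000_000     # assume ~2 billion people use the service daily
p_rare = 1 / 1_000_000    # a behavior a given user exhibits one day in a million

expected_per_day = users * p_rare
print(int(expected_per_day))  # 2000 perfectly human "anomalies" every single day
```

A bot-detector tuned to flag those anomalies will sweep up thousands of real people daily, which is exactly the accommodation problem described above.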

Back to privacy

Facebook users (and even non-Facebook users) who want more privacy have a variety of options, none of them very good. Users can tweak Facebook’s famously hard-to-understand privacy dashboard to lock down their accounts and bet that Facebook will honor their settings (this has not always been a good bet).

Everyone can use tracker-blockers, ad-blockers and script-blockers to prevent Facebook from tracking them when they’re not on Facebook, by watching how they interact with pages that have Facebook “Like” buttons and other beacons that let Facebook monitor activity elsewhere on the Internet. We’re rightfully proud of our own tracker blocker, Privacy Badger, but it doesn’t stop Facebook from tracking you if you have a Facebook account and you’re using Facebook’s service.

Facebook users can also watch what they say on Facebook, hoping that they won’t slip up and put something compromising on the service that will come back to haunt them (though this isn’t always easy to predict).

But even if people do all this, they’re still exposing themselves to Facebook’s scrutiny when they use Facebook, which monitors how they use the service, every click and mouse-movement. What’s more, anyone using a Facebook mobile app might be exposing themselves to incredibly intrusive data-gathering, including some surprisingly creepy and underhanded tactics.

If users could use a third-party service to exchange private messages with friends, or to participate in a group they’re a member of, they could avoid much (but not all) of this surveillance.

Such a tool would allow someone to use Facebook while minimizing how they are used by Facebook. For people who want to leave Facebook but whose friends, colleagues or fellow travelers are not ready to join them, a service like this could let Facebook refuseniks get out of the Facebook pool while still leaving a toe in its waters. What’s more, it lets their friends follow them, by creating alternatives to Facebook where the people they want to talk to are still reachable. One user at a time, Facebook’s rivals could siphon off whole communities. As Facebook’s market power dwindled, so would the pressure that Web publishers feel to embed Facebook trackers on their sites, so that non-Facebook users would not be as likely to be tracked as they use the Web.

Third-party tools could automate the process of encrypting conversations, allowing users to communicate in private without having to trust Facebook’s promises about its security.
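A minimal sketch of what such a tool might do, assuming a key shared out of band with friends; the toy one-time-pad cipher here is purely illustrative (a real tool would use a vetted end-to-end encryption library, not hand-rolled crypto):

```python
import secrets

def encrypt(plaintext: bytes, key: bytes) -> bytes:
    # XOR each byte with a same-length random key (toy one-time pad).
    assert len(key) == len(plaintext), "pad must be as long as the message"
    return bytes(p ^ k for p, k in zip(plaintext, key))

def decrypt(ciphertext: bytes, key: bytes) -> bytes:
    return encrypt(ciphertext, key)  # XOR is its own inverse

message = b"meet at the usual place"
key = secrets.token_bytes(len(message))  # shared privately, off-platform

stored_on_platform = encrypt(message, key)          # all the service ever sees
assert decrypt(stored_on_platform, key) == message  # friends recover the text
```

The platform stores and relays only ciphertext, so users need not rely on its security promises; only key-holders can read the conversation.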

Finally, such a system would put real competitive pressure on Facebook. Today, Facebook’s scandals do not trigger mass departures from the service, and when users do leave, they tend to end up on Instagram, which is also owned by Facebook.

But if there was a constellation of third-party services that were constantly carving escape hatches in Facebook’s walled garden, Facebook would have to contend with the very real possibility that a scandal could result in the permanent departure of its users. Just the possibility would change the way that Facebook made decisions: product designers and other internal personnel who argued for treating users with respect on ethical grounds would be able to add an instrumental benefit to being “good guys”: failing to do so could trigger yet another exodus from the platform.

Lower and upper bounds

It’s clear that online services need rules about privacy and interoperability setting out how they should treat their users, including those users who want to use a competing service.

The danger is that these rules will become the ceiling on competition and privacy, rather than the floor. For users who have privacy needs–and other needs–beyond those the big platforms are willing to fulfill, it’s important that we keep the door open to competitors (for-profit, nonprofit, hobbyist and individuals) who are willing to fill those needs.

None of this means that we should have an online free-for-all. A rival of Facebook that bypassed its safeguards to raid user data should still get in trouble (just as Facebook should get in trouble for privacy violations, inadequate security, or other bad activity). Shouldering your way into Facebook in order to break the law is, and should remain, illegal, and the power of the courts and even law enforcement should remain a check on those activities. But helping Facebook’s own users, or the users of any big service, to configure their experience to make their lives better should be legal and encouraged even (and especially) if it provides a path for users to either diversify their social media experience or move away entirely from the big, concentrated services. Either way, we’d be on our way to a more pluralistic, decentralized, diverse Internet.

Author: Cory Doctorow

Opening the Door for Censorship: New Trademark Enforcement Mechanisms Added for Top-Level Domains

With so much dissatisfaction over how companies like Facebook and YouTube moderate user speech, you might think that the groups that run the Internet’s infrastructure would want to stay far away from the speech-policing business. Sadly, two groups that control an important piece of the Internet’s infrastructure have decided to jump right in. 

The organization that governs the .org top-level domain, known as Public Interest Registry (PIR), and the Internet Corporation for Assigned Names and Numbers (ICANN) are expanding their role as speech regulators through a new agreement, negotiated behind closed doors. And they’re doing it despite the nearly unanimous opposition of nonprofit and civil society groups—the people who use .org domains. EFF is asking ICANN’s board to reconsider.

ICANN makes policies for resolving disputes over domain names, which are enforced through a web of contracts. Best-known is the Uniform Domain Name Dispute Resolution Policy (UDRP), which allows trademark holders to challenge bad-faith use of their trademarks in a domain name (specifically, cybersquatting or trademark infringement). UDRP offers a cheaper, faster alternative to domain name disputes than court. When ICANN began to add many new top-level domains beyond the traditional ones (.com, .net, .org, and a few others), major commercial brands and their trademark attorneys predicted a plague of bad-faith registrations and threatened to hold up creation of these new top-level domains, including much-needed domains in non-Latin scripts such as Chinese, Arabic, and Cyrillic.

In response, the community allowed trademark interests to create more enforcement mechanisms, but solely for these new top-level domains. One of these was Uniform Rapid Suspension (URS), a faster, cheaper version of UDRP. URS is a summary procedure designed for slam-dunk cases of cybersquatting or trademark infringement. It features shorter deadlines for responding to challenges, and its decisionmakers are paid much less than the panelists who decide UDRP cases.

In a move that has drawn lots of criticism, ICANN announced that it is requiring the use of URS in the .org domain, along with other rules that were developed specifically for the newer domains.

URS is a bad fit for .org, the third most-used domain and home to millions of nonprofit organizations (including, of course, eff.org). The .org domain has been around since 1985, long before ICANN was created. And with over ten million names already registered, there’s no reason to expect a “land rush” of people snatching up the names of popular brands and holding them for ransom.

When nonprofit organizations use brand names and other commercial trademarks, it’s often to call out corporations for their misdeeds—a classic First Amendment-protected activity. That means challenges to domain names in .org need more careful, thorough consideration than URS can provide. Adding URS to the .org domain puts nonprofit organizations who strive to hold powerful corporations and governments accountable at risk of losing their domain names, effectively removing those organizations from the Internet until they can register a new name and teach the public how to find it. Losing a domain name means losing search engine placement, breaking every inbound link to the website, and knocking email and other vital services offline.

Beyond URS, the new .org agreement gives Public Interest Registry carte blanche to “implement additional protections of the legal rights of third parties” whenever it chooses to. These aren’t necessarily limited to cases where a court has found a violation of law and orders a domain name suspended. And it could reach beyond disputes over domain names to include challenges to the content of a website, effectively making PIR a censorship bureau.

This form of content regulation has already happened in some TLDs. Donuts and Radix, which operate hundreds of top-level domains, already suspend websites’ domain names based on accusations of copyright infringement from the Motion Picture Association of America, without a court order. Some registries also take down the domain names of pharmacy-related websites based on requests from private groups affiliated with U.S. pharmaceutical companies, again without a court order or due process.

PIR, the operator of .org, has previously proposed to build its own copyright enforcement system. PIR quickly walked back that proposal after EFF spotlighted it. But PIR’s new agreement with ICANN provides a legal foundation for bringing back that proposal, or other forms of content regulation. And the existence of these contract terms could make it harder for PIR and registrars to say “No” the next time an industry group like MPAA, or a law enforcement agency from anywhere in the world, comes demanding that they act as judge, jury, and executioner of “bad” websites.

Bypassing Users’ Input

The process that led to these changes was problematic, too. The multistakeholder process, which is supposed to account for the views and needs of all groups affected by a policy change, was simply bypassed. ICANN did announce the new .org contract and provided for a period of public comment. But this seems to have been a hollow gesture.

The Non-Commercial Stakeholder Group, a group that represents many hundreds of the organizations that have .org domain names, filed a comment laying out why that domain shouldn’t have the URS system and other “rights protection mechanisms” beyond the UDRP. EFF and the Domain Name Rights Coalition also filed a comment, which was joined by top academics and activists on domain name policy.

An extraordinary and unprecedented 3,250 others filed comments opposing the new .org contract, mainly on the grounds that it removed price caps from .org registrations, potentially allowing Public Interest Registry to increase the fees it charges millions of nonprofit organizations. In contrast, only six commenters, including groups representing trademark holder interests and incumbent registries, filed supportive comments. But ICANN made no meaningful changes in response to these comments from the actual users of .org domain names. The contract ICANN concluded on July 30th was the same as the one it proposed at the start of the public comment period. ICANN staff seem to think they can make any policies they choose by contract.

What Comes Next?

EFF has asked the ICANN board to reconsider their new contract, to submit the issue to the ICANN community for a decision, and to remove URS from the .org domain. Public Interest Registry has not yet created any new enforcement mechanisms, nor returned to the copyright enforcement proposal it made and shelved in 2016—but if the new contract stands, it will give them legal cover for doing so. It’s important that Internet users, especially nonprofits, make clear to ICANN, PIR, and PIR’s parent organization, the Internet Society, that nonprofits don’t need new, accelerated trademark enforcement or new forms of content regulation. After all, there’s no reason to think that these organizations will regulate the speech of Internet users any better than Facebook, YouTube, Twitter, and other prominent social networks have done. It would be best if they stay out of that role entirely.

Author: Mitch Stoltz

EFF Delegation Returns from Ecuador, says Ola Bini’s Case is Political, Not Criminal

San Francisco – A team from the Electronic Frontier Foundation (EFF) has returned from a fact-finding mission in Quito for the case of Ola Bini—a globally renowned Swedish programmer who is facing tenuous computer-crime charges in Ecuador.

Bini was detained in April, as he left his home in Quito to take a vacation to Japan. His detention was full of irregularities: for example, his warrant was for a “Russian hacker,” and Bini is Swedish and not a hacker. Just hours before Bini’s arrest, Ecuador’s Minister of the Interior, María Paula Romo, held a press conference to announce that the government had located a “member of Wikileaks” in the country, and claimed there was evidence that person was “collaborating to destabilize the government.” Bini was not read his rights, allowed to contact his lawyer, or offered a translator.

Bini was released from custody in June, following a successful habeas corpus petition by his lawyers. But he is still accused of “assault on the integrity of computer systems”—even though prosecutors have yet to make public any details of his alleged criminal behavior.

“If someone breaks into a house, and authorities arrest a suspect, the prosecution should at the very least be able to tell you which house was broken into,” said EFF Director of Strategy Danny O’Brien, who was part of EFF’s delegation to Quito. “The same principle applies in the digital world.”

In Ecuador, EFF’s team spoke to journalists, politicians, lawyers, and academics, as well as to Bini and his defense team. These experts have concluded that Bini’s continuing prosecution is a political case, not a criminal one.

“We believe that Ecuadorian authorities have grown concerned about the wider political consequences of either abandoning Bini’s case or continuing to prosecute, creating an impasse,” said O’Brien. “But Ola Bini’s innocence or guilt should be determined by a fair trial that follows due process. It should in no way be impacted by potential political ramifications.”

Bini has worked on several key open source projects, including JRuby, and several Ruby libraries, as well as implementations of the secure and open communication protocol OTR. He has also contributed to Certbot, the EFF-managed tool that has provided strong encryption for millions of websites around the world. Bini recently co-founded Centro de Autonomía Digital, a non-profit organization devoted to creating user-friendly security tools.

For more on Ola Bini and EFF’s delegation to Ecuador:

Author: Rebecca Jeschke

DEEP DIVE: CBP’s Social Media Surveillance Poses Risks to Free Speech and Privacy Rights

The U.S. Department of Homeland Security (DHS) and one of its component agencies, U.S. Customs and Border Protection (CBP), released a Privacy Impact Assessment [.pdf] on CBP’s practice of monitoring social media to enhance the agency’s “situational awareness.” As we’ve argued in relation to other government social media surveillance programs, this practice endangers the free speech and privacy rights of Americans.

“Situational Awareness”

The Privacy Impact Assessment (PIA) states that CBP searches public social media posts to bolster the agency’s “situational awareness”—which includes identifying “natural disasters, threats of violence, and other harmful events and activities” that may threaten the safety of CBP personnel or facilities, including ports of entry.

The PIA aims to inform the public of privacy and related free speech risks associated with CBP’s collection of personally identifiable information (PII) when monitoring social media. CBP claims it only collects PII associated with social media—including a person’s name, social media username, address or approximate location, and publicly available phone number, email address, or other contact information—when “there is an imminent threat of loss of life, serious bodily harm, or credible threats to facilities or systems.”

Why Now?

It is unclear why DHS and CBP released this PIA now, especially since both agencies have been engaging in social media surveillance, including for situational awareness, for several years.

The PIA cites authorizing policies DHS Directive No. 110-01 (June 8, 2012) [.pdf] and DHS Instruction 110-01-001 (June 8, 2012) [.pdf] as governing the use of social media by DHS and its component agencies (including CBP) for various “operational uses,” including situational awareness. The PIA also cites CBP Directive 5410-003, “Operational Use of Social Media” (Jan. 2, 2015), which does not appear to be public. EFF asked for the release of this document in a coalition letter sent to the DHS acting secretary in May.

Federal law requires government agencies to publish certain documents to facilitate public transparency and accountability related to the government’s collection and use of personal information. The E-Government Act of 2002 requires a PIA “before initiating a new collection of information that will be collected, maintained, or disseminated using information technology” and when the information is “in an identifiable form.” Additionally, the Privacy Act of 1974 requires federal agencies to publish Systems of Records Notices (SORNs) in the Federal Register when they seek to create new “systems of records” to collect and store personal information, allowing for the public to comment.

This appears to be the first PIA that CBP has written related to social media monitoring. The PIA claims that the related SORN on social media monitoring for situational awareness is DHS/CBP-024 Intelligence Records System (CIRS) System of Records, 82 Fed. Reg. 44198 (Sept. 21, 2017). Given that DHS issued directives in 2012 and CBP issued a directive in 2015 around social media monitoring, this PIA comes seven years late. Moreover, there is no explanation as to why the SORN was published two years after CBP’s 2015 directive, nor why the present PIA was published two years after the SORN.

In March, CBP came under scrutiny for engaging in surveillance of activists, journalists, attorneys, and others at the U.S.-Mexico border, with evidence suggesting that their social media profiles had been reviewed by the government. DHS and CBP released this PIA only three weeks after that scandal broke.

Chilling Effect on Free Speech

CBP’s social media surveillance poses a risk to the free expression rights of social media users. The PIA claims that CBP is only monitoring public social media posts, and thus “[i]ndividuals retain the right and ability to refrain from making information public or, in most cases, to remove previously posted information from their respective social media accounts.”

While social media users retain control of their privacy settings, CBP’s policy chills free speech by causing people to self-censor—including curbing their public expression on the Internet for fear that CBP could collect their PII for discussing a topic of interest to CBP. Additionally, people running anonymous social media accounts might fear that collected PII could lead to their true identities being unmasked, even though the Supreme Court has long held that anonymous speech is protected by the First Amendment.

This chilling effect is exacerbated by the fact that CBP does not notify users when their PII is collected. CBP also may share information with other law enforcement agencies, which could result in immigration consequences or being added to a government watchlist. Finally, CBP’s definition of situational awareness is broad, and includes “information gathered from a variety of sources that, when communicated to emergency managers and decision makers, can form the basis for incident management decision making.”

We have seen this chilling effect play out in real life. Only three weeks before DHS and CBP released this PIA, NBC7 San Diego broke the story that CBP, along with other DHS agencies, created a secret database of 59 activists, journalists, and attorneys whom the government flagged for additional screening at the U.S. border because they were allegedly associated with the migrant caravan. Dossiers on certain individuals included pictures from social media and notations of designations such as “administrator” of a Facebook group providing support to the caravan, indicating that the government had surveilled their social media profiles.

As one lawyer stated, “It has a real chilling effect on people who might go down [to the border].” A journalist who was on the list of 59 individuals said the “increased scrutiny by border officials could have a chilling effect on freelance journalists covering the border.”

EFF joined a coalition letter to the DHS acting secretary about CBP’s secret dossiers. Several senators wrote a follow-up letter [.pdf]. In mid-May, CBP finally admitted to targeting journalists and others at the border, but justified its actions by claiming, without evidence, that journalists had “some level of participation in the violent incursion events.”

CBP’s Practices Don’t Mitigate Risks to Free Speech

The PIA claims that any negative impacts on free speech of social media surveillance are mitigated by both CBP policy and the Privacy Act’s prohibition on maintaining records of First Amendment activity. Yet, these supposed safeguards ultimately provide little protection.

First Amendment

The PIA emphasizes that CBP personnel are trained to “use a balancing test” to determine whether social media information presents a “credible threat”—as opposed to First Amendment-protected speech—and thus may be collected. According to the PIA, the balancing test involves gauging “the weight of a First Amendment claim, the severity of the threat, and the credibility of the threat.” However, this balancing test has no basis in constitutional law.

The Supreme Court has a long line of decisions that have established when speech rises to the level of a true threat or incitement to violence and is thus unprotected by the First Amendment.

In Watts v. United States (1969), the Supreme Court held that under the First Amendment only “true threats” may be punishable. The Court stated that alleged threats must be viewed in context, and noted that in the “political arena” in particular, language “is often vituperative, abusive, and inexact.” Thus, the Court further held that “political hyperbole” is not a true threat. In Elonis v. United States (2015), the Supreme Court held that an individual may not be criminally prosecuted for making a true threat based only on an objective test of negligence, i.e., whether a reasonable person would have understood the communication as a threat. Rather, the defendant’s subjective state of mind must be considered, including whether he intended to make a threat or knew that his statement would be viewed as a threat. (The Court left open whether a recklessness standard would also be sufficient for the speech to fall out of First Amendment protections.)

Additionally, in Brandenburg v. Ohio (1969), the Supreme Court held that “the constitutional guarantees of free speech and free press do not permit a State to forbid or proscribe advocacy of the use of force or of law violation except where such advocacy is directed to inciting or producing imminent lawless action and is likely to incite or produce such action.” There, the Court struck down an Ohio law that penalized individuals who advocated for violence to accomplish political reform, holding that the abstract advocacy of violence “is not the same as preparing a group for violent action and steeling it to such action.” In Hess v. Indiana (1973), the Court further clarified that speech that is mere “advocacy of illegal action at some indefinite future time,” is “not directed to any person or group of persons,” and is unsupported by evidence or rational inference that the speaker’s words were “intended to produce, and likely to produce, imminent disorder,” remains protected by the First Amendment. Similarly, the Court in NAACP v. Claiborne Hardware Co. (1982), held that “[a]n advocate must be free to stimulate his audience with spontaneous and emotional appeals for unity and action in a common cause. When such appeals do not incite lawless action, they must be regarded as protected speech.”

While the PIA states that CBP considers threatening posts to be those that “infer an intent, or incite others, to do physical harm or cause damage, injury, or destruction,” the PIA does not fully embrace the nuances of the Supreme Court’s jurisprudence—and CBP’s balancing test fails to comport with constitutional law. A seemingly threatening social media post may, in fact, be protected by the First Amendment if it is political hyperbole or other contextual facts suggest that the speaker did not intend to make a threat or did not believe that readers would view the post as a threat. Furthermore, a social media post that advocates for violence against CBP facilities or personnel may nevertheless be protected by the First Amendment if it is not directed at any particular person or group, and evidence does not reasonably indicate that the speaker intended to incite imminent violence or illegal action, or that imminent violence or illegal action is likely to result from the speech.

Thus, CBP may be collecting social media information and related PII even when the speech is protected by the First Amendment—contrary to its own policy—and further contributing to the chilling effect of CBP’s social media surveillance program.

Privacy Act

The PIA also mentions the Privacy Act, a federal law that establishes rules about what type of information the government can collect and keep about U.S. persons. In particular, the PIA points to 5 U.S.C. § 552a(e)(7), the prohibition against federal agencies maintaining records “describing how any individual exercises rights guaranteed by the First Amendment.”

Unfortunately, this prohibition is followed by an exception that effectively swallows the rule—that information about First Amendment activity may be collected if it is “pertinent to and within the scope of an authorized law enforcement activity.”

In Raimondo v. FBI, a Privacy Act case currently before the Ninth Circuit, the FBI kept surveillance files for “threat assessments” on two individuals who ran an antiwar website. EFF argued in an amicus brief against an expansive interpretation of the Privacy Act’s law enforcement activity exception in light of modern technology—specifically, given the ease with which law enforcement can collect, store, and share information about First Amendment activity on the internet, such information should not be stored “in government files in perpetuity when the record is not relevant to an active investigation.” We reminded the Ninth Circuit that in MacPherson v. I.R.S. (1986), the court recognized that “even ‘incidental’ surveillance and recording of innocent people exercising their First Amendment rights may have a ‘chilling effect’ on those rights that (e)(7) [of the Privacy Act] was intended to prohibit.”

Raimondo demonstrates the seemingly limitless nature of the law enforcement activity exception, including allowing for the indefinite retention of records of online activism and journalism, activity that is clearly protected by the First Amendment.

Similarly, under this PIA, because CBP follows a “credible threat” assessment not rooted in the First Amendment and the Privacy Act’s law enforcement activity exception can be interpreted broadly, CBP could very well collect and retain information that is protected by the First Amendment.

Unidentified Government Social Media Profiles Pose Risk to User Privacy

The PIA inspires little confidence not only in DHS and CBP’s interpretation of the law related to protected speech, but also in CBP personnel’s ability to follow the agencies’ own policies related to respecting social media users’ privacy.

The PIA states that CBP personnel “may conceal their identity when viewing social media for operational security purposes,” effectively allowing CBP agents to create fake accounts. However, this provision conflicts with DHS’s 2012 directive, which requires employees to “[u]se online screen names or identities that indicate an official DHS affiliation and use DHS email addresses to open accounts used when engaging in social media in the performance of their duties.”

Moreover, if, as the PIA states, CBP personnel do not engage with other social media users and may only monitor “publicly available, open source social media,” it raises the question: why would a CBP agent need to create a fake account? Public posts or information are equally available to all social media users on a platform. Why would CBP personnel need to conceal their identity before viewing a publicly available post if they are not attempting to engage with a user?

This concern is backed by past practices where DHS agencies used fake profiles and interacted with users during the course of monitoring their social media activity. Earlier this year, journalists revealed that U.S. Immigration and Customs Enforcement (ICE) officers created fake Facebook and LinkedIn profiles to lend legitimacy to a sham university intended to identify individuals allegedly engaged in immigration fraud. There, ICE officers friended other users and exchanged emails with students, thereby potentially bypassing social media privacy settings and gaining access to information intended to remain private.

Such practices not only violate DHS’s existing policies, but also allow law enforcement to obtain access to content that would otherwise require a probable cause warrant. Fake profiles also violate the policies of several social media platforms: Facebook has publicly stated that law enforcement impersonator profiles violate the company’s terms of service.

Fighting Back

The CBP PIA is just one sliver of a broad federal government campaign to engage in social media surveillance. DHS, through its National Operations Center, has been monitoring social media for “situational awareness” since at least 2010. DHS also has been monitoring social media for intelligence gathering purposes. More recently, DHS and the State Department have greatly expanded social media surveillance to vet visitors and immigrants to the U.S., which EFF and other civil society groups have consistently opposed.

Several congressional committees have the responsibility and the opportunity to review CBP’s budget and provide oversight of the agency’s operations, including its social media surveillance. At a minimum, EFF urges these committees to ensure that CBP follows DHS’s own policies and reports, both to Congress and the public, how often officers engage in social media monitoring, so that the prevalence and scale of the program can be understood. Fundamentally, Congress should be asking why social media surveillance programs are necessary for public safety. Congress also has the responsibility to ensure that CBP and DHS abide by settled case law respecting the free speech and privacy rights of Americans and foreign travelers.

We’re also pushing social media companies to do more when they identify law enforcement impersonator profiles at the local, state, and federal level. Earlier this year, Facebook’s legal staff demanded that the Memphis Police Department “cease all activities on Facebook that involve the use of fake accounts or impersonation of others.” Additionally, Facebook updated its “Information for Law Enforcement Authorities” page to highlight how its misrepresentation policy also applies to police. While EFF applauds these steps, we are skeptical that warnings or policy changes alone will deter the activity. Facebook says it will delete accounts brought to its attention, but too often these accounts only become publicly known—through a lawsuit or a media report—long after the damage has been done. Instead, EFF is calling on Facebook to take specific steps to provide transparency into these law enforcement impersonator accounts by notifying users who have interacted with these accounts, following the Santa Clara Principles when removing the law enforcement accounts, and adding notifications to agencies’ Facebook pages to inform the public when the agencies’ policies permit impersonator accounts in violation of Facebook’s policy.

Please contact your members of Congress and urge them to hold CBP accountable. Members of Congress depend on hearing from their constituents to know where to focus, and public pressure can ensure that social media surveillance won’t get overlooked.

Author: Saira Hussain

‘IBM PC Compatible’: How Adversarial Interoperability Saved PCs From Monopolization

Adversarial interoperability is what happens when someone makes a new product or service that works with a dominant product or service, against the wishes of the dominant business.

Though there are examples of adversarial interoperability going back to early phonographs and even before, the computer industry has long relied on adversarial interoperability to keep markets competitive and innovative. This was especially true for personal computers.

From 1969 to 1982, IBM was locked in battle with the US Department of Justice over whether it had a monopoly over mainframe computers; but even before the DOJ dropped the suit in 1982, the computing market had moved on, with mainframes dwindling in importance and personal computers rising to take their place.

The PC revolution owes much to Intel’s 8080 chip, a cheap processor that originally found a market in embedded controllers but eventually became the basis for early personal computers, often built by hobbyists. As Intel progressed to 16-bit chips like the 8086 and 8088, multiple manufacturers entered the market, creating a whole ecosystem of Intel-based personal computers.

In theory, all of these computers could run MS-DOS, the Microsoft operating system adapted from 86-DOS (which Microsoft acquired from Seattle Computer Products). In practice, getting MS-DOS to run on a given computer required quite a bit of tweaking, thanks to differences in controllers and other components.

When a computer company created a new system and wanted to make sure it could run MS-DOS, Microsoft would refer the manufacturer to Phoenix Software (now Phoenix Technologies), its preferred integration partner. There, a young software-hardware wizard named Tom Jennings (creator of the pioneering networked BBS software FidoNet) would work with Microsoft’s MS-DOS source code to create a custom build of MS-DOS that would run on the new system.

While this worked, it meant that major software packages like VisiCalc and Lotus 1-2-3 had to release a different “PC-compatible” version for each manufacturer’s system. All of this was cumbersome, error-prone, and expensive, and it meant, for example, that retailers had to stock multiple, slightly different versions of each major software program (this was in the days when software was sold from physical retail locations, on floppy disks packaged in plastic bags or shrink-wrapped boxes).

All that changed in 1981, when IBM entered the PC market with its first personal computer, which quickly became the de facto standard for PC hardware. There are many reasons that IBM came to dominate the fragmented PC market: they had the name recognition (“No one ever got fired for buying IBM,” as the saying went) and the manufacturing experience to produce reliable products.

Equally important was IBM’s departure from its usual business practice of pursuing advantage by manufacturing entire systems, down to the subcomponents. Instead, IBM decided to go with an “open” design that incorporated the same commodity parts that the existing PC vendors were using, including MS-DOS and Intel’s 8086 chip. To accompany this open hardware, IBM published exhaustive technical documentation that covered every pin on every chip, every way that programmers could interact with IBM’s firmware (analogous to today’s “APIs”), as well as all the non-standard specifications for its proprietary ROM chip, which included things like the addresses where IBM had stored the fonts it bundled with the system.

Once IBM’s PC became the standard, rival hardware manufacturers realized that they had to create systems that were compatible with IBM’s systems. The software vendors were tired of supporting a lot of idiosyncratic hardware configurations, and IT managers didn’t want to have to juggle multiple versions of the software they relied on. Unless non-IBM PCs could run software optimized for IBM’s systems, the market for those systems would dwindle and wither.

Phoenix had an answer. They asked Jennings to create a detailed specification that included the full suite of functions on IBM’s ROMs, including the non-standard features that IBM had documented but didn’t guarantee in future versions of the ROM. Then Phoenix hired a “clean-room team” of programmers who had never written Intel code and had never interacted with an IBM PC (they were programmers who specialized in developing software for the Texas Instruments 9900 chip). These programmers turned Jennings’s spec into the software for a new, IBM-PC-compatible ROM that Phoenix created and began to sell to IBM’s rivals.
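The clean-room process described above amounts to a firewall between two roles: one team, which has studied the original, distills its observable behavior into a specification, and a second team, which has never seen the original, implements against that specification alone, with compatibility judged purely by behavior. A minimal sketch in Python (the checksum function and spec vectors here are invented for illustration, not anything from Phoenix’s actual BIOS specification):

```python
# Clean-room reimplementation sketch. Everything here is hypothetical;
# it only illustrates the division of labor, not Phoenix's real spec.

# --- Produced by the "dirty" team, who examined the original system. ---
# The spec records only observable behavior: inputs and expected outputs.
SPEC = {
    "description": "8-bit additive checksum over a byte string",
    "vectors": [
        (b"", 0),
        (b"\x01\x02\x03", 6),
        (b"\xff\x01", 0),  # sum wraps modulo 256
    ],
}

# --- Written by the "clean" team, who saw only SPEC, never the original. ---
def checksum(data: bytes) -> int:
    """Reimplementation derived solely from the behavioral spec."""
    return sum(data) % 256

# Compatibility is demonstrated by passing every vector in the spec.
for data, expected in SPEC["vectors"]:
    assert checksum(data) == expected
```

Because the implementers never had access to the original code, the result can match its behavior without copying its expression, which is what made the approach legally defensible.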

These rivals could now configure systems with the same commodity components that IBM used, and, thanks to Phoenix’s ROMs, could also support the same version of MS-DOS and the same application programs that ran on the IBM PC.

So it was that IBM, a company that had demonstrated its expertise in cornering and dominating computing markets, was not able to monopolize the PC. Instead, dozens of manufacturers competed with it, extending the basic IBM architecture in novel and innovative ways, competing to find ways to drive down prices, and, eventually, giving us the modern computing landscape.

Phoenix’s adversarial interoperability meant that IBM couldn’t exclude competitors from the market, even though it had more capital, name recognition and distribution than any rival. Instead, IBM was constantly challenged and disciplined by rivals who nipped at its heels, or even pulled ahead of it.

Today, computing is dominated by a handful of players, and in many classes of devices, only one vendor is able to make compatible systems. If you want to run iPhone apps, you need to buy a device from Apple, a company that is larger and more powerful than IBM was at its peak.

Why have we not seen an adversarial interoperability incursion into these dominant players’ markets? Why are there no iPhone-compatible devices that replicate Apple’s APIs and run their code?

In the years since the PC wars, adversarial interoperability has been continuously eroded.

  • In 1986, Congress passed the Computer Fraud and Abuse Act, a sweeping “anti-hacking” law that Facebook and other companies have abused to obtain massive damages based on nothing more than terms-of-service violations.
  • In 1998, Congress adopted the Digital Millennium Copyright Act, whose Section 1201 threatens those who bypass “access controls” for copyrighted works (including software) with both criminal and civil sanctions; this has become a go-to legal regime for threatening anyone who expands the functionality of locked devices, from cable boxes to mobile phones.
  • Software patents were almost unheard of in the 1980s; in recent years, the US Patent and Trademark Office’s laissez-faire attitude to granting software patents has created a patent thicket around the most trivial of technological innovations.

Add to these other doctrines like “tortious interference with contract” (which lets incumbents threaten competitors whose customers use new products to get out of onerous restrictions and terms of service), and it’s hard to see how a company like Phoenix could make a compatible ROM today.

Such an effort would have to contend with clickthrough agreements; encrypted software that couldn’t be decompiled without risking DMCA 1201 liability; bushels of low-quality (but expensive-to-litigate) software patents; and other threats that would scare off investors and partners.

And things are getting worse, not better: Oracle has convinced an appeals court to ban API reimplementations, which would have stopped Phoenix’s ROM project dead in its tracks.

Concentration in the tech sector is the result of many factors, including out-of-control mergers, but as we contemplate ways to decentralize our tech world, let’s not forget adversarial interoperability. Historically, it has been one of the most reliable tools for fighting monopoly, and there’s no reason it couldn’t play that role again, if only we’d enact the legal reforms needed to clear the way for tomorrow’s Phoenix Computers and Tom Jenningses.

Images below: IBM PC Technical Reference, courtesy of Tom Jennings, licensed CC0.

Author: Cory Doctorow