EFF Tells the FTC Why We Need Better Competition and Consumer Protection Policies for Tech Companies

The Federal Trade Commission (FTC) is wondering whether it might be time to change how the U.S. approaches competition and consumer protection. EFF has been thinking the same thing, and we have come to the conclusion that yes, it is. On August 20, we filed six comments with the FTC on a variety of related topics, sharing the history, current problems, and thoughtful recommendations that EFF has developed in our 28 years working in this space.

Back in June 2018, the FTC announced it was going to hold hearings on “competition and consumer protection in the 21st century” and invited comment on 11 topics. As part of our continuing work looking at these areas as they intersect with the future of technology, EFF submitted comments on six of the topics listed by the FTC:

- competition and consumer protection issues in communication, information, and media technology networks;
- the identification and measurement of market power and entry barriers, and the evaluation of collusive, exclusionary, or predatory conduct or conduct that violates the consumer protection statutes enforced by the FTC, in markets featuring “platform” businesses;
- the intersection between privacy, big data, and competition;
- evaluating the competitive effects of corporate acquisitions and mergers;
- the role of intellectual property and competition policy in promoting innovation; and
- the consumer welfare implications associated with the use of algorithmic decision tools, artificial intelligence, and predictive analytics.

Our goal in submitting these comments was to provide information and recommendations to the FTC about these complicated areas of Internet and technology policy. The danger is always that reactionary policies created in response to a high-profile incident may result in rules that restrict the rights of users and are so onerous that only established, big companies can afford to comply.

On the other hand, the Internet status quo has moved increasingly away from the ideal of decentralization and towards a few large companies acting as gatekeepers, so some thoughtful regulation or scrutiny is warranted.

Take, for example, our comments on the topic of competition and consumer protection issues in communication, information, and media technology networks. When everyone is talking on Twitter or Facebook, those platforms become important venues for speech. And so the rules used to prevent someone from using those platforms need to be carefully considered and made transparent.

Another obvious example of how consolidation hurts users is found in consumers’ lack of choices for broadband Internet service. A majority of Americans find themselves with little or no choice in high-speed ISPs, giving those providers little to no incentive to improve or expand their service. And these companies have a history of net neutrality violations, an area that is now up to the FTC to police since the FCC’s “Restoring Internet Freedom Order” went into effect.

We point out a similar tension in our comment on the intersection between privacy, big data, and competition. The notion that something needs to be done about what tech companies do with the data they have on their users has gained momentum since the revelations of what Cambridge Analytica did with Facebook data. But many proposed rules to address this issue risk creating burdens that only companies of Facebook’s size and reach can meet, further cementing their dominance.

Looking at ways to promote meaningful opt-in consent, “right to know” rules that let users see their data and know how it is being used, a right to take your data with you somewhere else (“data portability”) and use it there (“data interoperability”), and new ways to hold companies to account when they fail to secure customer privacy would create a healthier ecosystem.

The problem of access to data is also included in our comment on market power and entry barriers and the evaluation of collusive, exclusionary, or predatory conduct by platforms. Companies further control their data by using computer crime laws to ensure that only they have access to it.

We also encourage policymakers to account for privacy and data concerns when looking at acquisitions and mergers. Google and Facebook both purchased companies that track what their users are doing online, augmenting the large amounts of data they already have access to. In the case of Facebook, that means it can follow you when you click a link that you might think takes you away from Facebook, and it can tie that information to your profile. In the case of Google, the company initially said its data would be kept separate from the data gathered by DoubleClick, the service it acquired. But the company eventually ended that siloing.

Intellectual property and competition is another topic that requires competition policymakers to look beyond the usual concerns. Intellectual property is by its very nature exclusionary: copyright and patent holders own exclusive rights in things. When a patent covers a standard, that is, a technology or process required to build a compatible product, the patent holder can charge huge license fees, knowing everyone has to pay. Small businesses and new businesses are especially vulnerable. Among other recommendations, we told the FTC that we need to make sure these standards-essential patents are licensed on fair, reasonable, and non-discriminatory terms (also called “FRAND” or “RAND” obligations). We see intellectual property harm competition again when products are licensed to us rather than sold, meaning we can’t reverse engineer them and build our own versions, test their security, or make our own repairs. We’re further restricted by Section 1201 of the Digital Millennium Copyright Act, which bars circumvention of access controls and technical protection measures. That law creates legal risk for people who tinker with or repair their own devices, promoting obsolescence and raising the cost of “authorized” repair services.

Another new concern comes from the use of algorithmic decision tools, artificial intelligence, and predictive analytics. These tools are rapidly growing in importance and ubiquity. They can also, however, insinuate imaginary correlations, suggest misleading conclusions, and technologically launder longstanding discrimination and bias. This can affect consumer welfare, making it an issue of concern to the FTC.

As with data privacy, recent events have illuminated concerns with these methods being used by companies like Facebook. Newsfeed algorithms have been shown to perpetuate misinformation and to be vulnerable to manipulation. And algorithms used for content moderation have falsely flagged and taken down posts by public figures, including a European head of state whose criticism of U.S. foreign policy, illustrated with an iconic war photograph published long ago, was silenced.

Information about these tools has been jealously guarded by their owners. AI and its many potential applications could invite innovation and foster new enterprises. On the other hand, the same sampling bias and secrecy that prevent AI tools from being replicated and tested scientifically can skew their operation in practice and entrench firms wielding market power. Requirements for transparency into the code and data sets used in significant AI systems would serve the public interest, helping to prevent discrimination from growing worse and more deeply entrenched if the sector remains entirely unregulated.

These comments are distillations of work EFF has spent decades on and represent areas we know will become major issues in the future. We shared this expertise with the FTC in the hopes of making sure the policymakers there understand the way civil rights, consumer protections, and competition play out on the Internet and in emerging technology. 

Author: Katharine Trendacosta