FCC’s Privacy Proposal Doesn’t Address ‘Big Data’ Problem

Individuals are not aware of all the types of data being collected about them, how they are categorized and who is using that information to make what decisions.

Jul 25, 2016 at 12:19 pm

Jeffrey Layne Blevins, Ph.D., is an associate professor and head of the Journalism Department at the University of Cincinnati where he teaches media law and ethics. He previously served as a federal grant reviewer for the Broadband Technologies Opportunity Program administered by the U.S. Department of Commerce and National Telecommunications and Information Administration.


The comment period for the Federal Communications Commission’s proposed privacy rules ended this month, and while the plans are a step in the right direction, they are too limited in scope to address the most serious concerns about privacy in the digital age.

Under the FCC’s proposal, customers could opt out of having their data used for basic marketing of communications services, and providers would need opt-in consent before collecting consumer data for other purposes. However, the FCC’s authority extends only to broadband Internet Service Providers, not to the websites and other applications that make up the totality of our online experience.

In this broader arena, data miners collect and categorize information from our email, search engines, web browsing and social media apps. They know a lot about who our friends and associates are, where we go and what we do. All of this digital data collection is ostensibly designed to let marketers make finely calculated inferences about what we will purchase and then deliver targeted ads.

However, as reported in a National Science Foundation White Paper on privacy and big data in 2015, corporations do much more, as they are able to obtain unprecedented insights into people’s behavior, activities and habits. Experian Marketing Services boasts that it classifies all U.S. households and neighborhoods into just 19 overarching groups that range from “Power Elite” and “Thriving Boomers” to “Singles and Starters” and “Economic Challenges,” as well as other segments that include monikers such as “Kids and Cabernet,” “Full Pockets, Empty Nests,” “Small Town, Shallow Pockets” and “Urban Survivors.”

As noted in a New York Times report last year on the digital collection of personal data, although millions of people embrace apps and services that collect personal data, many are leery of the kinds of inferences made from the information gathered about them. And what happens when the data crunching produces inaccurate, politically or economically harmful depictions of a targeted individual?

The problem is that individuals are not aware of all the types of data being collected about them, how they are categorized and who is using that information to make what decisions.

Some of the largest data brokers in the U.S. are Acxiom, Epsilon, TransUnion, Datalogix, Dataium and Spokeo, and according to an IBM study, these brokers and others collect more than 2.5 quintillion bytes of data per day. By assembling different data points, these brokers make inferences about such things as religion, political affiliation, medical history, income and sexual orientation.
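To make the mechanism concrete, here is a minimal sketch of how a few seemingly innocuous data points can be fused into sensitive labels. The rules and the sample profile are entirely hypothetical; actual broker models are proprietary.

```python
# Hypothetical sketch: fusing separate data points into inferred traits.
# The rules and the profile below are invented for illustration only.

def infer_attributes(profile):
    inferences = {}
    # Subscriptions plus spending level -> inferred income band
    if "golf_magazine" in profile["subscriptions"] and profile["avg_purchase"] > 200:
        inferences["income_band"] = "high"
    # Pharmacy purchases -> inferred medical condition
    if "glucose_monitor" in profile["purchases"]:
        inferences["medical"] = "possible diabetes"
    # Donation records -> inferred political affiliation
    if profile.get("donations") == "party_a_pac":
        inferences["politics"] = "party A"
    return inferences

profile = {
    "subscriptions": ["golf_magazine"],
    "avg_purchase": 250,
    "purchases": ["glucose_monitor"],
    "donations": "party_a_pac",
}
print(infer_attributes(profile))
# -> {'income_band': 'high', 'medical': 'possible diabetes', 'politics': 'party A'}
```

None of these individual records is sensitive on its own; the sensitivity emerges from the combination, which is exactly why regulating individual collectors piecemeal misses the problem.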

Statlistics advertises a list of gay and lesbian adults; Response Solutions markets a list of people suffering from bipolar disorder; Paramount Media sells lists of people with alcohol, gambling and sexual addictions and people who are looking to get out of debt; Exact Data vends lists of people who have sexually transmitted diseases, as well as people who have purchased adult material.

More than just predicting consumer preferences and behavior, big data churning informs decisions about healthcare, insurance and crime prevention. All of this may sound like Philip K. Dick’s science fiction short story “The Minority Report,” in which mutants with supposed precognitive powers are used to predict criminal activity so that police can act on it before it occurs.

Similarly, the algorithmic formulas applied to the data gathered from our digital media activities sort us into socioeconomic categories and determine our level of value.

Business models that sort consumers into customer-service tiers are likely to have a disparate impact on people of color and on individuals who have suffered financial hardship. And with all of this processing, reprocessing and standardizing of data stripped of its context, some form of “missorting” is bound to occur.

For instance, Cable One uses predictive analytics (based on low FICO credit scores) to flag “hollow value” customers. In Cable One’s view, these customers are more likely to dispute their bills, less likely to pay regularly and less likely to purchase higher-end services.

Customers sorted into this category (fairly or not) therefore receive less time and attention from customer service representatives, regardless of whether the individual customer has actually paid his or her bills on time; all that matters is that the customer has been sorted into the “hollow value” file.
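A rough sketch of how such a tiering rule missorts a reliable payer follows; the 620 cutoff, the field names and the customer records are all hypothetical, since Cable One has not published its model.

```python
# Hypothetical sketch of score-based customer tiering. The cutoff and
# records are invented; the point is that the rule never consults
# actual payment history.

def service_tier(customer):
    # The sort rests entirely on the credit-score proxy.
    if customer["fico"] < 620:
        return "hollow value"  # routed to minimal customer service
    return "standard"

customers = [
    {"name": "A", "fico": 580, "paid_on_time": True},   # reliable payer
    {"name": "B", "fico": 710, "paid_on_time": False},  # delinquent payer
]

for c in customers:
    print(c["name"], service_tier(c), "on-time:", c["paid_on_time"])
# Customer A lands in "hollow value" despite a perfect payment record,
# while customer B does not: the proxy, not the behavior, decides.
```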

A more comprehensive policy approach to digital media privacy is needed to ensure fairness and protect the rights of everyone.


CONTACT JEFFREY LAYNE BLEVINS: [email protected]