Cybermarketing, or digital marketing, is emerging as one of the most prolific and effective advertising and marketing strategies in recent history. The evolution and use of digital devices and platforms have transformed individuals into complex, multidimensional digital surfaces that generate vast quantities of data about their everyday lives. Digital devices such as smartphones, digital tablets, personal computers, digital televisions, smart appliances, and personal fitness monitors significantly enhance people’s lives; however, these and other digital platforms also radiate immense quantities of invaluable information that provide marketers and advertisers with an intimate portrait of consumers’ lives. The value of digitally sourced consumer data to businesses can be measured in the hundreds of billions of U.S. dollars annually. The conundrum is this: What are the corresponding benefits to the consumer, and what are the non-trivial risks to individual privacy? Do the personal and economic benefits outweigh those risks?

Adding to the concern over corporate incursions into personal privacy through the collection of large data sets is the considerable risk posed by the fusion of multiple data sources into a more complete picture of the consumer’s characteristics, behaviors, and attitudes. Under labels such as a “360-degree view of the consumer,” marketers and advertisers are aggressively aggregating and fusing these multidimensional data sources into more or less complete pictures of individual consumers’ lives, with the objective of precision targeting at the household or individual level. The efficiencies that result from precision marketing also provide some tangible benefits to consumers, in the form of reduced exposure to irrelevant advertising and lower product costs due to more efficient marketing.

The risks to the privacy of the individual consumer are, however, considerable. Because personally identifiable information is often attached to fused data, the risks are non-trivial. Purchase data concerning specific prescription drugs, for example, can suggest that an individual has a particular ailment; in one already widely publicized case, a major retailer precision targeted an adolescent with baby-product promotions for a pregnancy that she had not yet disclosed to her parents.

Another controversial area in cybermarketing is the collection of app-use and geolocation data, particularly prevalent with mobile devices such as smartphones and digital tablets. These technologies power new, real-time marketing strategies such as geofencing. Geofencing, as utilized by the marketing industry, uses geolocation data, typically from mobile devices, to detect when an individual has entered or exited a predefined geographical area, which usually coincides with the physical location of a brick-and-mortar store. Often these geolocation data are combined with other consumer information held about the individual to offer some sort of promotion for a nearby store or business.
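At its core, the entry/exit detection described above reduces to a distance test against a predefined boundary. The following is a minimal illustrative sketch, not drawn from any particular marketing platform; the function names and the circular fence around a store are assumptions for illustration only:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two (lat, lon) points."""
    r = 6_371_000  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def inside_geofence(device, fence_center, radius_m):
    """True if the device's reported location falls within a circular fence."""
    return haversine_m(*device, *fence_center) <= radius_m

# A hypothetical store's fence with a 200 m radius.
store = (40.7128, -74.0060)
assert inside_geofence((40.7129, -74.0061), store, 200)       # near the door
assert not inside_geofence((40.7500, -74.0060), store, 200)   # kilometers away
```

In practice, a marketing system would run a check like this on a stream of location reports and trigger a promotion on the transition from outside to inside the fence, which is where the combination with other held consumer data occurs.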

The situation is further complicated by the deployment of sophisticated statistical models and machine learning strategies to mine the massive data sets that are generated by emerging digital technologies. Data mining the content and social network structure of Facebook users or parsing the content of Gmail email messages is currently powering very precise targeting for digital advertising campaigns within those services. The acceptance of the terms of use of these platforms provides these companies with the legal right to perform this data mining.

In addition, predictive models built upon these statistical and machine learning strategies may predict key characteristics or behaviors for personally identifiable individuals. These predictions always contain some level of error, yet many in the cybermarketing world treat them as having the same verisimilitude as non-modeled data. The consequences of these errors for individual consumers are often not grave: they might miss a particular offer or not hear about a specific sale. Modeled errors that persist in these databases, however, may carry much more serious costs when the data are applied to purposes other than marketing. For example, federal law enforcement could apply these statistical models to produce a pool of individuals suspected of committing a crime, or the intelligence community might apply them to identify or locate potential terrorists. The serious consequences of being falsely swept into a database of individuals who may be a threat to the community or national security have already been demonstrated by efforts such as the U.S. no-fly list. Statistical models always incorporate error, and the use of consumer attitude and behavior data in statistical models for purposes other than marketing holds significant potential for misuse.
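The danger of treating modeled predictions as error-free is especially acute when the targeted behavior is rare, as in threat screening. Bayes’ rule makes the point concrete; the numbers below are purely illustrative assumptions, not figures from the source:

```python
def false_positive_share(prevalence, sensitivity, false_positive_rate):
    """Among people a screening model flags, the fraction who are actually
    innocent (an application of Bayes' rule)."""
    flagged_true = prevalence * sensitivity            # true threats flagged
    flagged_false = (1 - prevalence) * false_positive_rate  # innocents flagged
    return flagged_false / (flagged_true + flagged_false)

# Assumed numbers: 1 in 100,000 people is a genuine threat, the model
# catches 99% of them, and it misclassifies 1% of everyone else.
share = false_positive_share(1e-5, 0.99, 0.01)
print(f"{share:.4f}")  # prints 0.9990
```

Even with a seemingly accurate model, roughly 99.9 percent of the people flagged under these assumptions are false positives, which is the mechanism behind the no-fly-list harms noted above.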

The emergence of cybermarketing has brought a renewed salience to the concept of personal privacy. While personal privacy is perhaps more salient, this state of affairs has not seemingly altered what Patricia Norberg and her colleagues (2007) call the paradox of privacy. The paradox of privacy refers to the phenomenon whereby individuals understate the amount of information they are willing to provide and then proceed to disclose significantly more information than they previously said they would. Norberg and her colleagues suggest that this occurs because risk is the primary social factor affecting attitudes and statements about information disclosure, while trust is the primary factor governing the amount of information actually disclosed.

Finally, it is useful in the examination of cybermarketing and privacy to highlight the fact that there are different, competing theoretical perspectives on privacy. These perspectives include, but are not limited to, privacy as the ability to control information, privacy as a legal right, and privacy as a commodity. Privacy as a commodity is perhaps the most relevant perspective here: it refers to the view that individuals realize privacy has value and are willing to trade personal information to businesses in return for something of value. Data from a large U.S. national probability sample (Simmons, 2012) suggest that over 50% of U.S. adults state they are proactive about protecting their privacy but are in fact willing to trade personal information to a business in exchange for something of value. It is clear, both theoretically and empirically, that the relationship between personal privacy and cybermarketing is a complex one that continues to evolve.

Max Kilger

See also Advertising and Marketing Research; Corporate Surveillance; Data Mining and Profiling in Social Network Analysis; Information Security; Privacy, Internet; Privacy, Types of

Further Readings

Acquisti, A., et al. “The Economics of Privacy.” Journal of Economic Literature, v.54/2 (2016).

Eastin, M., et al. “Living in a Big Data World: Predicting Mobile Commerce Activity Through Privacy Concerns.” Computers in Human Behavior, v.58 (2016).

Lyon, D. “Surveillance, Snowden, and Big Data: Capacities, Consequences, Critique.” Big Data & Society, v.1/2 (2014).

Norberg, Patricia A., et al. “The Privacy Paradox: Personal Information Disclosure Intentions Versus Behaviors.” Journal of Consumer Affairs, v.41/1 (2007).

Simmons National Consumer Study data set. Experian Simmons, 2012. (Accessed August 2017).