Online privacy

The advent of information and communication technologies such as computers and smartphones has dramatically changed how we think about privacy today. Social media platforms allow us to share intimate details about our lives with large audiences. At the same time, online companies like Google and Meta are eager to track user behavior for commercial purposes. Studying these opportunities for and challenges to privacy is crucial to understanding the adaptations, needs, and limits of individual behavior. In my research, I focus on three overarching themes.

Self-disclosure

Self-disclosure is the act of revealing personal information to others. Both online and offline, we have many opportunities for self-disclosure, such as personal conversations, messenger chats, or posts and stories. However, the decision to disclose personal details is sometimes less voluntary: when we register for a service or platform, we must disclose certain details about ourselves to be able to use it. I have particularly studied this latter form of self-disclosure and the reasons why users decide to provide personal information in exchange for using online services. Moreover, because privacy threats online are often invisible and elusive, I have examined approaches that help users make informed self-disclosure decisions. These approaches include warning messages, shortened privacy policies, and privacy scores.

Active privacy protection

Whenever we use information and communication technologies, online companies like Google and Meta collect information about us. This form of user surveillance has become an extremely profitable business model, also known as surveillance capitalism. Companies use the vast amounts of collected data primarily for advertising that is tailored to users' preferences and interests. This personalization and targeting of content can result in economic exploitation, behavioral manipulation, and algorithmic discrimination. Although completely avoiding digital surveillance is not possible, users have several options to reduce their digital traces. These include using privacy-friendly websites and services (e.g., DuckDuckGo), deleting digital traces (e.g., cookies), and using additional software (e.g., VPNs and anti-tracking tools). Studying why users are (not) willing to protect their personal information from being collected by large online companies is crucial to understanding the motivational factors and obstacles users face. This knowledge can inform the design of digital literacy education, in-situ interventions, or software assistance.

Chilling effects

Digital surveillance by online companies has unintended negative side effects, known as chilling effects. Users who are aware of surveillance and anticipate negative consequences from being surveilled are more likely to self-inhibit their digital behavior. This self-inhibition includes not expressing one's opinion, avoiding searches for specific terms, or even withdrawing from certain services altogether. Such adaptation of user behavior under permanent digital surveillance is a drastic form of privacy protection that comes at the cost of fundamental rights such as freedom of speech and information. Accordingly, chilling effects have negative consequences for individuals and societies alike. It is vital to study under which circumstances self-inhibition occurs and which segments of the population are most susceptible to chilling effects in order to ensure equal access to and use of information and communication technologies for all.

Digital inequalities

The observation that not all segments of the population have the same access to information and communication technologies has been termed the digital divide. Currently, three levels of this divide are distinguished: (1) access to information and communication technologies (ICTs); (2) ICT knowledge, skills, and usage; and (3) the experienced outcomes of ICT use. The second- and third-level divides are also known as digital inequalities. Many empirical studies have shown that individuals with a specific sociodemographic and socioeconomic status (e.g., older age, lower education, and lower income) are disadvantaged on all three levels. It is important not to attribute these inequalities to individual factors but to understand them as systematic and structural injustices. In other words, digital inequalities are social inequalities transferred to the digital world. In my research, I combine digital inequality and privacy research.

Privacy attitudes, knowledge, and behavior

My initial interest in this topic stemmed from the question of whether people's privacy knowledge and behavior differ between sociodemographic groups. Since even strict data protection laws like the European General Data Protection Regulation rely on individual self-management of data, it is vital to ask whether some segments of the population are less able to manage and protect their privacy. If so, some parts of the population leave much more detailed digital footprints than others, which can have detrimental consequences like algorithmic discrimination or manipulation. With my research, I aim to uncover inequalities in privacy knowledge, skills, motivations, and protection behavior. This knowledge can be used for educational purposes, technological interventions to support user decision-making, or improving legal regulations.

Digital experiences

When some segments of society have lower digital literacy, they may be less capable of managing and protecting their personal data, which could translate into different digital experiences. This third level of the digital divide focuses on the outcomes people experience in their lives as a result of using information and communication technologies. So far, disadvantage has mostly been defined as fewer positive outcomes, such as social capital or information access. While research on experienced outcomes is still scarce, the connection between privacy and the third-level digital divide also opens the opportunity to study the negative outcomes of using information and communication technologies, such as privacy violations. What constitutes a privacy violation is a highly subjective question and can range from accidentally accepting all cookies to a major data breach. This extension of the third level is important because disadvantaged user groups may have fewer positive experiences and may simultaneously experience online privacy violations more often.

Science and media use

Information and communication technologies have become increasingly important for information access. Legacy media like TV, newspapers, and radio have gradually been replaced by the internet. Social media in particular play an increasingly important role in learning about news, politics, and science, especially among younger generations. Being accurately informed about scientific consensuses matters both individually (e.g., getting vaccinated) and societally (e.g., protesting for climate action). Therefore, studying how certain dispositions affect the way people consume and are exposed to information about science is crucial. Likewise, I am interested in studying how contact with science information in new media environments affects knowledge about science and informed decision-making.

General science skepticism

My interest in this topic emerged during the Covid-19 pandemic. During this time, protests against Covid-19 policies (e.g., lockdowns or vaccinations) formed in Germany and many other parts of the world. Protesters denied the scientific evidence showing that SARS-CoV-2 is a harmful virus, that lockdowns and other policies were necessary to save lives, and that vaccines can effectively protect individuals from contracting and spreading the virus. While reviewing the literature, I discovered common patterns underlying the denial of scientific evidence not only regarding Covid-19 but also in very different areas like global warming, genetically modified foods, or the theory of evolution. This observation led to the general hypothesis that denying scientific evidence from various domains is rooted in a general negative disposition towards science and scientists. We termed this disposition general science skepticism and developed the General Science Skepticism Scale (GS3) to measure it. Results confirmed that persons who are generally skeptical towards science are more likely to deny established scientific evidence from various domains. My ongoing interest in this topic lies in investigating the reciprocal relationship between general science skepticism and digital media use. Exposure to media content denying and casting doubt on scientific evidence likely contributes to science skepticism, and this disposition in turn likely influences media consumption. Moreover, I consider the question of how to overcome general science skepticism and build trust in science to be extremely important.

Incidental exposure to science content

In Germany, social media ranks second behind traditional media (e.g., television, radio, and newspapers) in terms of contact with information about science. Around the globe, social media are even the most important medium for finding science information. On social media, exposure to content is often not intentional but incidental, meaning that users come across content while pursuing a different goal in using social media in the first place. Moreover, due to algorithmic recommendations, we most likely see content that aligns with our interests and preferences. This raises the question of whether people who are less interested in science come into contact with science content on social media at all, and how incidental exposure to science content may affect knowledge about science and subsequent behavior.