PM & AI Chronicles

From Product Thinking to Prompt Engineering – One Tool at a Time

Inside Privacy Expectations 👁️⚖️: Where Trust and Data Use Meet 🔐📝

In the earlier article on Integrity and Availability, we examined what happens when data can no longer be trusted or when systems fail to remain accessible. Those concepts directly support privacy, because users expect their information to remain accurate, reliable, and available only when appropriate. 👉 Integrity Errors & Availability Failures

Today, being online is as normal as carrying a wallet or keys. We message friends, shop for groceries, pay bills, and access bank accounts—all through the internet. Because these actions feel routine, we rarely stop to think about what happens to our personal information in the background.

Think of data privacy like lending your house key to someone. You expect them to use it only for the purpose you agreed to, not make copies or hand it to others. In the digital world, data privacy works the same way—it’s about who has access to your information, why they have it, and how responsibly they handle it.

Most people assume their personal data is automatically private, but that isn’t always the case. Privacy depends on policies, technology, and trust—and sometimes that trust is misplaced. Below, we’ll break down the different aspects of data privacy and what users should realistically expect when sharing information online.

In most countries, a user’s personal information is protected by law. Companies and government organizations that collect, store, or use personal data have a legal responsibility to keep it safe. That said, privacy laws vary by country—each country defines personal information differently and sets its own rules for protecting it.

Of course, laws don’t stop criminals. Someone trying to steal your credit card number is unlikely to care about regulations. The same is true for attackers who break into large companies to steal usernames or account details. Still, these laws are important because they strongly encourage organizations to take data protection seriously and implement safeguards.

The type of personal information that must be protected is commonly referred to as PII (Personally Identifiable Information).
PII is any information that can be used to identify a specific individual. Examples include:

  • Full name
  • Home or mailing address
  • Email address
  • Phone number
  • Government-issued ID numbers (such as a Social Security number)

In the United States, one of the earliest laws protecting PII is the Privacy Act of 1974, which governs how federal agencies collect and handle personal information. It generally prohibits the disclosure of personal information without the individual's written consent, with limited exceptions, such as when the data is required for law enforcement purposes.

There are also different categories of sensitive personal data. For example, medical records are known as PHI (Protected Health Information) and are regulated in the U.S. under HIPAA (the Health Insurance Portability and Accountability Act) of 1996. Financial and credit card details fall under PCI (Payment Card Industry) data, protected by the PCI DSS standard, which covers credit and debit card numbers, bank account numbers, expiration dates, security codes, and cardholder names.

In the EU (European Union), data privacy is governed by GDPR (General Data Protection Regulation), which came into effect in 2018. As of this writing, GDPR is considered one of the strongest laws for personal privacy and data protection in the world. It protects individuals in the EU regardless of where the company collecting the data is located. For example, a U.S.-based company must still follow GDPR rules if it handles data belonging to people in the EU.

GDPR places strong emphasis on user consent and on limiting data use. Individuals must clearly and knowingly agree to their data being collected, such as by voluntarily providing an email address. That data can be used only for the specific purpose stated at the time of collection and for the allowed duration. Organizations that fail to comply with GDPR can face very large fines.

It helps to think about PII from two perspectives.

  • As an individual, you should expect your personal data to remain private, but in practice that expectation is not absolute, so you still need to be cautious about what you share.
  • From the company’s side, there is a responsibility to protect user data as thoroughly as possible, using measures such as secure data centers and encryption when data is stored or transmitted.

People join social networking sites to share information with others. At first glance, that might seem to conflict with the idea of privacy—and in some cases, it does. Still, there is an expectation of some level of privacy when using these platforms.

Social media companies are generally expected not to share your private information with third parties without your consent. However, privacy only goes so far. If you choose to share personal details publicly, that information is no longer private—and the responsibility largely falls on you.

For example, there are entire social media accounts dedicated to mocking posts where people accidentally share photos of their credit or debit cards. While it may be intended as a joke or mistake, posting sensitive financial information online is extremely risky and can lead to fraud or identity theft.

Most social media platforms offer privacy and security settings that allow users to control who can see their content. For instance, you can:

  • Limit posts to friends or followers only
  • Make your account private so only approved people can view your content
  • Control who can send you messages or tag you in posts

How you configure these settings should match your goals. If you aim to be a social media influencer, represent a business, or build a public brand, keeping your account public makes sense. On the other hand, if you only want friends and family to see your posts, making your account private is usually the safer choice and helps protect your privacy.

Unlike social media, which is designed for one-to-many sharing, email and instant messaging are meant to feel more personal and private. While it’s possible to send messages to large groups, most emails and chats are intended for a much smaller audience. Because of this, users naturally expect a higher level of privacy.

A helpful way to think about this is the difference between a postcard and a sealed letter. Social media is like sending a postcard—anyone who sees it can read the message. Email and instant messaging are more like sealed letters: they’re addressed to specific recipients and aren’t meant to be read by others.

Commercial email and messaging services such as Gmail, Outlook, and similar platforms have a fundamental responsibility to protect your messages from unauthorized access. In general, these providers use security measures to prevent outsiders from reading your emails or chats.

However, email and instant messaging privacy come with important caveats.

  • First, if you use company-provided email or messaging tools, your employer usually has the right to monitor, read, and archive those messages. In many organizations, messages sent using company systems are considered company property. While employers cannot legally access your personal email accounts, anything stored or sent using a company-owned device or account may be subject to review.

For this reason, many companies follow the “headline test”: never write anything in email or instant messaging that you wouldn’t be comfortable seeing on the front page of a national newspaper.

  • Second, even private email and instant messages are not always untouchable. In certain situations—such as investigations involving criminal activity—messages can be legally requested or subpoenaed.

Note: In the United States, the SCA (Stored Communications Act) generally limits the disclosure of emails, instant messages, and text messages in civil subpoenas, helping protect user privacy under normal circumstances.

File-sharing websites have a responsibility to protect both the data stored on their platforms and the personal information of the users who rely on them. Because many businesses use file-sharing services to store documents, reports, and customer data, an unintended data breach can have serious—even catastrophic—consequences.

A helpful way to think about file-sharing security is to compare it to a locker and an open shelf. Storing files in a secure, encrypted service is like placing valuables in a locker—you decide who gets the key. Leaving files unprotected or widely shared is more like putting them on an open shelf, where anyone passing by might access them.

The good news is that most reputable file-sharing services use encryption to protect data both during transmission and while it is stored. For this reason, it’s always recommended to use services that clearly state they encrypt data in transit and at rest.
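"Encryption in transit" is usually TLS doing its job on the client side. As a sketch (using Python's standard `ssl` module, not any particular file-sharing service's API), this is roughly what a well-behaved client enforces before sending any data:

```python
import ssl

# A client-side TLS context: the basis of "encryption in transit".
ctx = ssl.create_default_context()

# Secure defaults: the server's certificate must validate, and its
# hostname must match, before any application data is exchanged.
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # certificate validation is on
print(ctx.check_hostname)                    # hostname checking is on

# Refuse legacy protocol versions that are no longer considered safe.
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
```

Encryption at rest is the server-side counterpart: the provider encrypts stored files so that a stolen disk or leaked backup does not expose readable data.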

Below are some practical tips to help protect your privacy and security when using file-sharing sites:

  • Password-protect sensitive files: File-sharing platforms require users to log in, but you should also control who can access each file. Adding a password to sensitive files creates an extra layer of protection.
  • Use multi-factor authentication (MFA): Multi-factor authentication requires more than just a username and password—such as a one-time access code sent to your phone—making it much harder for attackers to gain access.
  • Avoid public Wi-Fi networks: Public or unencrypted Wi-Fi networks can expose your data to interception. When possible, wait until you are on a private network, or use a VPN (Virtual Private Network) to access file-sharing services securely.
  • Set expiration dates on shared links: When you share a file, the platform typically generates a link. Setting an expiration date ensures the file isn’t accessible forever, reducing the risk of unintended access later.
  • Keep file-sharing software up to date: Outdated software may contain known security vulnerabilities. Always update file-sharing applications and clients to the latest version.
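To see how an expiring shared link can work under the hood, here is a hedged sketch of one common scheme: the service signs the file ID together with an expiry timestamp using a server-side secret, so the link cannot be forged or extended. The key, domain, and function names are hypothetical, not any real platform's API:

```python
import hashlib
import hmac
import time

SECRET = b"server-side-secret"  # hypothetical key held only by the service

def make_share_link(file_id: str, ttl_seconds: int) -> str:
    """Build a link whose signature covers the file ID and an expiry time."""
    expires = int(time.time()) + ttl_seconds
    payload = f"{file_id}:{expires}".encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return f"https://files.example.com/{file_id}?expires={expires}&sig={sig}"

def verify_share_link(file_id: str, expires: int, sig: str) -> bool:
    """Reject the link if the expiry has passed or the signature is wrong."""
    if time.time() > expires:
        return False  # link has expired
    payload = f"{file_id}:{expires}".encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)  # constant-time comparison
```

Because the expiry is inside the signed payload, tampering with the `expires` value in the URL invalidates the signature, which is exactly why expiring links reduce the risk of old links circulating forever.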

Cookies are small files that websites download onto your device to remember information and track activity. While the term may sound harmless, cookies play a big role in how your online behavior is recorded and used. Not all cookies are the same, and understanding the difference helps clarify why consent matters.

There are two main types of cookies used on websites: first-party cookies and third-party cookies.

  • First-party cookies are created and used by the website you are visiting. These cookies are generally helpful. For example, they may remember your username, keep you logged in, or save your preferences—such as language or display settings—so the site feels familiar when you return.
  • Third-party cookies, on the other hand, are created by organizations other than the website you’re visiting, most commonly advertisers. These cookies track your activity across multiple websites and build a profile of your interests. That’s how you might search for shoes on one site and then see shoe ads follow you around the internet.
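On the server side, a well-configured first-party cookie carries attributes that limit how it can be used. A minimal sketch with Python's standard `http.cookies` module (the cookie name and value are arbitrary examples):

```python
from http.cookies import SimpleCookie

# A first-party session cookie with restrictive attributes.
cookie = SimpleCookie()
cookie["session_id"] = "abc123"
cookie["session_id"]["secure"] = True     # only sent over HTTPS
cookie["session_id"]["httponly"] = True   # hidden from page JavaScript
cookie["session_id"]["samesite"] = "Lax"  # withheld on most cross-site requests
cookie["session_id"]["max-age"] = 3600    # expires after one hour

# The Set-Cookie header value the server would send:
print(cookie["session_id"].OutputString())
```

Attributes like `SameSite` are part of why third-party tracking via cookies has become harder: browsers increasingly refuse to send cookies across site boundaries unless explicitly allowed.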

In the past, websites could quietly place cookies on a user’s device without asking. Over time, this practice raised serious privacy concerns. Regulations such as GDPR (General Data Protection Regulation) in Europe and CCPA (California Consumer Privacy Act) in the United States brought third-party cookies under much stricter scrutiny.

Browser vendors also began responding. Apple Safari and Mozilla Firefox block third-party cookies by default, and Microsoft Edge allows users to block them through its privacy settings. Google announced plans to phase out third-party cookies in Chrome, a significant move given Chrome’s large share of the global browser market, though those plans have since been revised and the timeline has shifted.

The result of all this change is what users now experience as cookie consent. When you visit a website today, third-party cookies are no longer silently added to your device. Instead, the site must ask for your permission before placing them. Some essential first-party cookies—such as those required for login or basic functionality—may still be allowed automatically, but non-essential cookies require your approval.

A helpful way to think about cookies is like visitors entering your home. First-party cookies are like family members who help keep things organized and running smoothly. Third-party cookies are more like strangers taking notes about what’s inside your house. It makes sense that you’d want to decide who gets access—and under what conditions.

Privacy in the digital world is not about expecting complete secrecy—it’s about understanding where control exists, where it doesn’t, and how responsibility is shared. Laws and regulations set boundaries for how organizations should protect personal information, but technology, user behavior, and real-world threats all influence how private our data truly is.

From social media and messaging to file sharing and cookies, each platform comes with different privacy expectations. While companies are responsible for protecting data and honoring consent, users also play an important role by making informed choices about what they share, where they share it, and how they configure their privacy settings.

By understanding these expectations, you’re better equipped to protect your personal information and make smarter decisions online. In a connected world, awareness is one of the strongest tools you have to maintain your privacy.

In the next article, we’ll explore common software-based security threats targeting systems and data, and how malicious programs can exploit vulnerabilities without physical access. 👉 Software Security Threats_1

This article is part of the Security Concepts & Threats series, which explores the fundamentals of protecting data, people, and devices in a connected world. For the full overview of how modern risks, defenses, and access controls fit together, refer to the main article in this series. 👉 Security Concepts&Threats