In both theory and practice, Security and Privacy intersect in near-perfect symbiosis. It is widely accepted that when you focus on one, you typically achieve the other as well. Understanding their respective definitions and applications is key to protecting yourself in the modern, connected world. Security refers to the measures one takes to ensure that sensitive data is protected against unauthorized access or use, while Privacy is defined by what is or is not appropriate use of that data. As fast friends, Security and Privacy go hand in hand and typically enjoy a congenial, mutually beneficial relationship, except, that is, when personal data becomes profitable, as in the case of Facebook. It's no new concept that the monetization of personal data can blur ethical lines around privacy practices. Watching the Facebook drama play out on a global stage, however, even loyal users have to ask themselves: How far is too far? When will users finally draw the line?
When questioned, most of our parents and peers like to justify their presence on Facebook as their outlet for "seeing pictures of people's kids" or "keeping up with old high school friends," but is it worth the cost? Is the ability to avoid actually picking up the phone and calling someone so valuable that we are willing to give away our personal privacy? For many of us in the Security and Privacy industries, Facebook is Frankenstein's monster, originally born of eager millennial innovation to connect friends and enable them to share their lives. As in Shelley's Frankenstein, however, good intentions were ultimately corrupted. Few (aside from our grandparents, who don't notice anyway) would disagree that the Facebook experience has been ruined by greed, cyberstalking, and Farmville.
In its defense, Facebook has taken steps to implement controls around data security, and currently both the application and the website are relatively secure to use. Returning to the intersection of Security and Privacy, though, security alone is not enough. If you follow global media headlines, you are already well aware that most people do not consider the platform a bastion of privacy. In the past year, prominent media voices, the U.S. Congress, the EU Parliament, and users worldwide have scrutinized Facebook's questionable relationship with data privacy and consent. So are we really surprised that Facebook has recently dug itself into yet another controversy?
The company's business model relies heavily on auctioning off private user data to the highest-bidding advertiser, and user health data appears to be the latest ethically questionable tool in Facebook's money-making toolbox. Last month, the Wall Street Journal (WSJ) reported that 11 of 70 tested iOS applications integrate with Facebook Analytics and brazenly share private health data with Facebook's servers, even if the user does not have a Facebook account. Why does Facebook want, collect, and store non-users' health data? Through these third-party applications, Facebook receives highly sensitive, federally protected health information without users' explicit consent. As the WSJ discovered, the data was shared with Facebook because the applications used Facebook's software development kit (SDK). By definition, an SDK provides developers with guides, tools, libraries, code samples, and processes that enable them to build applications for a specific platform. Facebook's SDK includes an analytics service that allows developers to track user trends and customize application features. The WSJ asserts that developers who send sensitive information to Facebook use "custom app events" to transmit HIPAA-protected user health data, such as daily heart rates, ovulation times, and pregnancy timelines. Let's let that sink in: ovulation times.
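The "custom app events" mechanism is easier to picture with a sketch. The snippet below is a hypothetical illustration in Python, not Facebook's actual API: every name in it is invented. It shows the general shape of the problem the WSJ describes — a custom event lets a developer attach arbitrary key-value parameters, so any sensitive value a health app includes travels to the analytics backend right along with the event.

```python
# Hypothetical sketch of a "custom app event" as analytics SDKs commonly
# model them. These names are illustrative only, not Facebook's real API.

def build_custom_event(name, parameters):
    """Bundle an event name with arbitrary developer-chosen parameters.

    Nothing in this mechanism distinguishes a harmless UI metric from a
    sensitive health reading; whatever the developer attaches gets sent.
    """
    return {"event_name": name, "parameters": dict(parameters)}


# A health app logging a reading this way ships the raw values outward:
event = build_custom_event(
    "heart_rate_logged",
    {"bpm": 72, "pregnancy_week": 8},  # sensitive values ride along
)
print(event)
```

The point of the sketch is that the leak is structural, not a bug: the SDK faithfully transmits whatever parameters it is handed.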
For its part, Facebook has officially stated that it is not collecting sensitive health and financial information to use more broadly and that it "require[s] app developers to be clear with their users about information they are sharing," according to a Facebook spokesperson quoted by the WSJ. Yet this is similar verbiage to what was laughably claimed before the EU Parliament last year, and there is still no way to verify that Facebook isn't still tracking your every move. So how can you protect yourself, your employees, and your business in an era of questionable privacy practices? While I can definitively say that trusting Facebook is not the answer, there are ways to mitigate and limit data tracking, though eliminating it altogether may require becoming a Luddite hermit (not an altogether bad option).
- Check, re-check, and stay abreast of relevant and newly introduced Facebook privacy settings.
- Research an application before downloading it to a personal or work-issued device.
- As an organization, train employees on the importance of security and how to incorporate security principles into their private lives. (See our last blog post by Corey Bolger.)
- Be wary of when and how you share private health data, and be aware of what you consent to share.
- As an organization, do not use Facebook's SDK or Pixel web tracker in custom applications that may interact with private health or financial data.
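For teams that must keep some third-party analytics despite the advice above, one defensive pattern is to scrub event parameters before they ever reach an SDK call. A minimal sketch, with an illustrative (and deliberately non-exhaustive) deny-list of sensitive keys — the names here are assumptions for the example, not part of any real SDK:

```python
# Hypothetical defensive filter: drop known-sensitive keys from analytics
# event parameters before passing them to any third-party SDK.
# The deny-list below is illustrative only; a real one needs review.
SENSITIVE_KEYS = {
    "heart_rate", "bpm", "ovulation_date",
    "pregnancy_week", "blood_pressure", "diagnosis",
}


def scrub_event_params(params):
    """Return a copy of params with known-sensitive keys removed."""
    return {k: v for k, v in params.items() if k.lower() not in SENSITIVE_KEYS}


# Only the non-sensitive parameter survives the scrub:
print(scrub_event_params({"screen": "home", "heart_rate": 72}))
```

A deny-list like this is brittle (new sensitive fields slip through silently), so an allow-list of explicitly approved keys is usually the safer design where sensitive data is in play.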
Read more about how to protect yourself here.