I hadn’t set out to write a follow-up to my last blog post on how technology is changing our outlook and behavior when it comes to privacy. I was going to write about something along the lines of how we can now print meals a la The Jetsons or Star Trek, or perhaps something on wearable technologies… but then I read an interesting article about social media and data analysis that related directly back to my last posting.
The article appeared in the May 1st U.S. edition of the Guardian. It stated that Facebook had developed an algorithm with “the capacity to identify when teenagers feel ‘insecure’, ‘worthless’ and ‘need a confidence boost’, according to leaked documents based on research quietly conducted by the social network.” The article went on to state that Facebook was touting this ability to detect vulnerable teens (and, I would assume, any Facebook user) as a selling point to advertisers. Facebook “can monitor posts and photos in real time to determine when young people feel ‘stressed’, ‘defeated’, ‘overwhelmed’, ‘anxious’, ‘nervous’, ‘stupid’, ‘silly’, ‘useless’ and a ‘failure’.”
The article was picked up by media around the globe, and advocacy and consumer organizations quickly called on Facebook to end the practice. Facebook, for its part, did not deny that it has the capacity to analyze that data and draw conclusions. It did deny, however, that it uses that data to sell advertising targeting those teens. An ex-Facebook employee then countered that it does indeed. I’m not going to get in the middle of this argument, but having developed a startup that used automated sentiment analysis, I can tell you that Facebook’s capacity to determine mood is almost certainly quite sophisticated. After all, that’s how they make their money.
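For readers curious what “sentiment analysis” means in practice, here is a deliberately simple sketch of the lexicon-based approach: score a post by counting words from negative and positive word lists. This is a toy illustration only — the word lists and scoring rule are my own assumptions, and it bears no resemblance to the sophistication of whatever Facebook actually runs — but it shows the basic idea of inferring mood from text.

```python
# Toy lexicon-based sentiment scoring. The word lists below are
# illustrative assumptions, seeded with the mood words quoted in
# the Guardian article; a real system would use far richer models.

NEGATIVE = {"stressed", "defeated", "overwhelmed", "anxious", "nervous",
            "stupid", "silly", "useless", "failure", "insecure", "worthless"}
POSITIVE = {"happy", "excited", "proud", "confident", "great"}

def mood_score(post: str) -> int:
    """Return positive-word count minus negative-word count.
    A strongly negative score would flag a post as 'vulnerable'."""
    words = [w.strip(".,!?'\"").lower() for w in post.split()]
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

print(mood_score("I feel so useless and anxious today"))  # -2
print(mood_score("Feeling great and confident!"))         # 2
```

Even this crude version hints at why the practice is unsettling: a few lines of code can sort people’s posts into moods at scale.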
Stop and think for a moment… let it settle in and you realize that this is a new paradigm, a sort of Minority Report lite. Facebook has long been the butt of jokes about posters idealizing their otherwise banal lives. But this is different. This is a corporation with the ability to tell if a person is depressed, angry, suicidal, etc.
The propensity of younger users to take a much more relaxed view of privacy, coupled with Facebook’s data-gathering capabilities (and others’ — think Google search histories), opens up the question: what do you do with the data? We know who owns it, but what can and/or will they do with it? Is Facebook responsible for alerting authorities or parents about a user’s mental state? Is it responsible for the ads delivered to that user if those ads spur deeper depression or an act of self-harm or violence? How safe is that data? Could it be compromised? The state of cybersecurity (or insecurity) has shown, if anything, that no data is truly safe. I refer back to my blog post on the future need for personal cyber insurance (2/21/17). Will the post of a 14-year-old come back to haunt them when they are applying to college or for employment?

I’d like to think that Facebook and the other major social media networks are responsible corporate citizens, and they may well be. Yet we know that teens and pre-teens have discovered and use other social networks that most adults have never heard of: Seeking Arrangements, KIK Messenger, Whisper, and the infamous Blue Whale, the game that purportedly asks users to commit suicide after completing a number of tasks. Although it’s not proven that any actual suicides have taken place due to Blue Whale, the fact that it’s out there says enough. The other sites I mentioned encourage anonymous conversations among teens and preteens. The question is: who is on the other side? Finally, there are a number of apps that allow users to hide these suspect apps from parents.

We know there are a lot of bad actors in the world, and many of them are using the anonymity of the Internet to play some very vicious games targeting the vulnerable. It’s the wild west out there in cyberland.