Online safety has resurfaced in the media and in public discussion recently. The gist is that although online communications are bound by existing laws, and always have been, social media seems to have completely sidestepped any social responsibility not to contribute to breaches of those laws, and people in general seem to think that the laws (or even just the conventions of ordinary decency) don’t apply.
People need to be held accountable.
If we pursue that idea it takes us to the problem of identity. People can’t be held accountable if there is no way of knowing who they are. The industry’s response to this is interesting because it has to pursue almost contradictory aims. How much any individual marketing company tracks its users varies, partly because marketing lets them distort the apparent value of what they do, and partly because they tend to differentiate themselves from their competitors for reasons of market ideology. So one company will make a big deal of anonymity simply because its business model doesn’t require user tracking (at least not in a way that is apparent), and/or because it sees market value in proclaiming that (probably to highlight an apparent weakness in a competitor).
Generally speaking, however, social media (and marketing more generally) fundamentally opposes anonymity. The very purpose of social media is to track people. Any business providing a social media platform is really in the business of tracking people. Methods vary, from explicit login frameworks to deducing who you are from behavioural data, but the aim is the same.
This puts social media providers in a peculiar position when it comes to online safety issues like racist tweets. They don’t want to do the work necessary to enforce any laws beyond those required to maintain their business model, for the simple reason that it would cost money. They also don’t want to ban people from their platform, because the platform is more effective with a bigger share of users, and their business model works just fine even if those users are a bunch of racists. They would, however, save a huge amount of resources if the problem of user identity were solved by someone else. That can’t be a competitor, of course, so which other entity could it be? There is only one alternative: the public sector.
For industry, dealing with the public sector is dangerous because it has the problematic feature of being subject to the public interest. From industry’s perspective it would be great if the government solved the problem of identity: it would transfer a huge cost into the public sector and make their marketing systems far more airtight.
But… and this is a huge but…
People having an online identity that maps to a real identity, and private powers being able to track those identities, are not the same thing.
The social media industry would love for there to be online identities, but only if it can track them. That, however, contradicts the public interest in privacy, which is why the discussion is only allowed to happen as a balance between privacy and safety. So the private sector will want to flirt with the idea of the government doing its work for it, while being very careful not to pollute the discussion with facts. We are told that if we want safety we must give social media providers the technology to track us all. That is simply a lie. I’m going to explain how we could have both privacy and accountability using basic computer science:
Everyone has an identity. Everyone can also create multiple personas that are connected with this identity. When we set up an account with, say, SocialNetwork.com, we create a persona and use it to log in. We type the name of the persona (e.g. ‘Adrian’) and its password. This is encrypted at our end and sent to SocialNetwork.com. SocialNetwork.com cannot decrypt it; at this stage they have no idea who you are. They send it to a public agency (with whom we created the identity, and who holds the private key), who verifies that it is correct, tells SocialNetwork.com to accept the login, and tells them what your persona’s name is. SocialNetwork.com now knows that there is someone who goes by the name ‘Adrian’ logged in and that they really are who they say they are… although they don’t know who they say they are.
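The login flow above can be sketched in a few lines of Python. This is a toy model, not a real protocol: all the names are made up, the ‘persona:password’ blob format is an assumption, and a SHA-256 counter-mode construction stands in for public-key encryption (in a real system the client would encrypt with the agency’s public key, so that only the agency, holding the private key, could read the blob).

```python
import hashlib
import secrets

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Toy stream cipher: SHA-256 in counter mode. Illustration only."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def seal(key: bytes, plaintext: bytes) -> bytes:
    """Encrypt credentials so that only the agency can read them."""
    nonce = secrets.token_bytes(16)
    ct = bytes(a ^ b for a, b in zip(plaintext, _keystream(key, nonce, len(plaintext))))
    return nonce + ct

class Agency:
    """The public agency: the only party able to unseal login blobs."""
    def __init__(self):
        self.key = secrets.token_bytes(32)
        self.personas = {}             # persona name -> (real identity, password)
        self.banned_identities = set()

    def register(self, identity, persona, password):
        self.personas[persona] = (identity, password)

    def verify_login(self, blob):
        """Called by the service. Returns (ok, persona) -- never the identity."""
        nonce, ct = blob[:16], blob[16:]
        plaintext = bytes(a ^ b for a, b in zip(ct, _keystream(self.key, nonce, len(ct))))
        persona, _, password = plaintext.decode().partition(":")
        record = self.personas.get(persona)
        if record and record[1] == password and record[0] not in self.banned_identities:
            return True, persona
        return False, None

class SocialNetwork:
    """The service sees only opaque blobs and persona names."""
    def __init__(self, agency):
        self.agency = agency
        self.logged_in = set()

    def login(self, blob):
        ok, persona = self.agency.verify_login(blob)
        if ok:
            self.logged_in.add(persona)
        return ok
```

The crucial property sits in `verify_login`: it hands the service the persona name and nothing else, so SocialNetwork.com learns only that ‘Adrian’ is a real, non-banned person.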
Then, let’s say you log out, create a new persona (e.g. ‘RacistGuy5000’) and log in. The same process happens. SocialNetwork.com knows you as ‘RacistGuy5000’. Although they know, again, that you are who you say you are (your identity), they have no idea that Adrian and RacistGuy5000 are the same person.
SocialNetwork.com can block personas. So if RacistGuy5000 does something against their terms, they can ban that persona. Of course you could just log in as Adrian. But let’s say that every post you make is signed with an encrypted signature that contains your identity. SocialNetwork.com can’t access it (because they don’t have the private key to decrypt it), so to them the signatures are just random strings. If RacistGuy5000 does something illegal, SocialNetwork.com would gather all the offending posts (or whatever media items) and send them to the agency, who would be able to decrypt the identities of those who produced them and ban those identities. Now neither RacistGuy5000 nor Adrian would be able to log in. The agency could choose whether to ban the identity from that one service or from all services that use this unified login.
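A minimal sketch of such signatures, under the same toy assumptions as before: the agency’s key is simulated symmetrically, and the signature is simply a hash of the post plus the poster’s real identity, sealed so that only the agency can open it. A real scheme would also need the client to prove that the sealed identity is genuinely theirs, for instance via a credential issued at registration; the sketch skips that.

```python
import hashlib
import secrets

AGENCY_KEY = secrets.token_bytes(32)  # in reality: the agency's private key

def _xor_stream(nonce: bytes, data: bytes) -> bytes:
    """Toy SHA-256 counter-mode cipher standing in for public-key encryption."""
    ks = b""
    counter = 0
    while len(ks) < len(data):
        ks += hashlib.sha256(AGENCY_KEY + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, ks))

def sign_post(identity: str, post: str) -> bytes:
    """Client-side: bind the poster's real identity to this post's content
    hash. To the service the result is just a random-looking string."""
    payload = hashlib.sha256(post.encode()).digest() + identity.encode()
    nonce = secrets.token_bytes(16)
    return nonce + _xor_stream(nonce, payload)

def agency_identify(signature: bytes, post: str):
    """Agency-side: recover the identity behind a reported post, but only
    if the signature really matches the post's content."""
    nonce, ct = signature[:16], signature[16:]
    payload = _xor_stream(nonce, ct)
    digest, identity = payload[:32], payload[32:]
    if digest == hashlib.sha256(post.encode()).digest():
        return identity.decode()
    return None  # signature does not match this post: reject the report
```

On receiving a valid report, the agency adds the recovered identity to its banned set, and from then on every persona tied to that identity fails the login check.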
We could also stipulate that service providers have to make the signatures associated with posts publicly visible. This way a third party could report a breach; the social media provider’s responsibility would simply be to forward offending posts to the agency.
The important thing here is that SocialNetwork.com, or any service, would have no way of knowing which personas belong to the same person unless that person told them, so they wouldn’t be able to track movements across sites without people opting in.
In summary, what I am describing is simply a public agency providing a framework currently provided by the private sector: we nationalise authentication.
This doesn’t solve all problems. Reports would have to be verifiable (so that you can’t copy someone’s signature and create fake reports). This isn’t too hard to solve, because the signature could contain a hash of the content of the post; faking a post that matches a signature would then be computationally infeasible. It would, however, require a standard scheme by which to hash posts. Posting videos and sounds becomes an issue too: again, we’d need a standard scheme for hashing video and audio. We’d have to decide which services are required to use this login system (say, any service that allows communication among more than 100 users). And should there be an ‘authentication tax’ paid by services that use the system, or should it be a public cost? The point is that the initial problem, having identity with privacy, is solved.
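The ‘standard scheme by which to hash posts’ could be as simple as agreeing a canonical encoding before hashing. A sketch, where the specific choices (NFC normalization, UTF-8, SHA-256) are assumptions; any rule works as long as every client and the agency apply the same one:

```python
import hashlib
import unicodedata

def hash_post(text: str) -> str:
    """Canonical text hash: Unicode-normalize (NFC), encode as UTF-8, then
    SHA-256. Without an agreed normalization step, two visually identical
    posts could produce different hashes."""
    canonical = unicodedata.normalize("NFC", text).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()

def hash_media(blob: bytes) -> str:
    """Media hash over the raw container bytes. A real standard would also
    have to pin down the container format, since re-encoding the same video
    changes the bytes and hence the hash."""
    return hashlib.sha256(blob).hexdigest()
```

For example, `hash_post("café")` gives the same digest whether the é arrives precomposed or as a plain e plus a combining accent; without the normalization step those would be two different hashes of the ‘same’ post.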