
Internet

Around the ’50s a political ideology formed. I’m sure its ideas date back further, but it was at this time that it crystallised into a coherent group. More than a political ideology, it was, and is, a worldview. This worldview has no explicit pop-culture presence. No-one preaches it overtly, so most people are unaware of it directly. However, it was, and still is, insidiously influential on the thoughts of some important people.

One of the most basic assertions/assumptions of this worldview is what is technically called strong reductionism. This is the idea that any system is just the sum of the elements in the system. Put plainly: to understand a society you only have to understand individuals. So, for example, if crime increases by 10% then the explanation is that individuals like crime 10% more than before. It is this idea that led Thatcher to believe and proclaim that “there is no such thing as society.” What she meant is that society is a kind of academic myth. A mirage created by a bad understanding of the world.

I don’t know how, but this worldview has filtered out into popular culture. The world is confusing right now, and confusion both inspires and demands analysis. Reading the never-ending stream of analyses of how we got here, on racism, on sexism, on equality, on social justice, I have noticed that the worldview I mention is encoded right into all of them, and it makes these analyses irrelevant because they’re blind to the biggest factors.

These analyses seem mostly to try to reduce the behaviours of societies down to how individuals think and act. Attributing things to individuals is difficult, as there are billions of them, so this approach necessarily requires demographic segmentation, i.e. stereotypical thinking (which, ironically, is often what the analysis is out to criticise). You have to find the demographic group whose behaviour, or thoughts, or opinions can reasonably be said to contain the behaviour you are trying to explain. I say reasonably because these analyses are usually only opinions; what the writer thinks is a reasonable explanation is accepted, usually without any actual evidence, instead relying on the reader liking the sound of the conclusion. How many “why Trump got elected” videos and articles have you seen? Few provide any causal evidence; most only provide demographic data masquerading as evidence. These are plentiful and have, undoubtedly, already coloured people’s ideas of how the world works.

This approach to understanding the world is limited to specific types of conclusions. If the phenomena we are most concerned about are recent, this type of analysis can only conclude that the difference in our society is obviously the fault of whichever group represents the biggest recent demographic change: millennials. Things like institutional racism or sexism are incomprehensible because, to this worldview, those phrases don’t mean anything. How can an institution be racist if none of the individuals in the institution are guilty of overt racism? How can we even approach fixing a sexist education system if none of the parts of that system are being sexist? How can inhuman labour practices be an issue if everyone working in the factories chose to work there?

Strong reductionism is bullshit. It was shown, with actual maths, to be bullshit over 100 years ago. The hard sciences, you know, the people who put robots on comets millions of miles away, predict weather with miraculous precision, run optical cables across the ocean floor, create self-driving vehicles, use general relativity to account for transmission distortion in communication between machines in geo-stationary orbits, put the magic machine you are looking at in front of you, those people, dropped strong reductionism at that time and never looked back.

If you want to understand radical changes in the behaviour of our society in the last decade, there is an elephant in the room: social media. Social media itself mediates the new social interaction. The important word there is interaction. Interaction is not a feature of individuals, so strong reductionist worldviews are blind to it. To them, interaction is effectively inert: it just transmits benignly, having no overall effect on behaviour. It can express behavioural traits, but that’s it.

When the world, apparently in unison, listens to Gangnam Style then a month later ritualistically pours buckets of water over their heads, what does that tell you? That everyone woke up one morning and decided they like songs about Korean horse farming, then changed their minds and really wanted to pour whatever over their heads, and social media was just there to record it? Or is it a more feasible explanation that those things went viral largely because of the nature of social media itself? So many variables in that process are obviously part of how social media and the internet themselves work and cannot be reduced to individuals at all. If social media didn’t exist, but every music shop in the world sold copies of Gangnam Style one day, would people have bought it? There is clearly another factor at play here that isn’t just people’s traits.

There is a notion that fake news, transmitted by social media, was a large factor in recent political events. You might think that this is an example of a break from the worldview above because it lays blame at Zuckerberg’s feet. Maybe it is, but this analysis, again, seems to be about the content travelling around social media rather than the system itself: the system is only at fault in that it contains this type of content. It is recognised that social media creates filter bubbles in which our view of the world is coloured to match our outlook, biasing our opinions. Again, this looks at the situation in terms of individuals. Social media biases the individual, or more accurately the individual biases themselves using social media, which manifests itself as a societal bias. Social media is just providing a way for individuals to do what they as individuals want to do, but if that were true there would be no bias… ta da!

This is, I’m sure, a factor but it’s an incomplete story. Social media filters content based on two broad factors: the user’s interaction with it and marketing revenue. So what we like steers social media’s shaping of the filter bubble, but what we like is a function of our social interaction, which is itself mediated by social media and distorted by our filter bubble. It might sound like I have added nothing to this analysis. The first says “A affects B”. Mine says “A affects B, which affects A”. I’ve just pointed out a feedback loop made from the same elements, but that feedback loop is an important extra element. Put a feedback loop in a speaker-microphone system and you get a loud, shrill whine, right? That screechy noise isn’t a product of the singer, or the mic, or the speaker, or the cables; it’s a product of those things combined. Its exact pitch is a product of the properties of all of those things and its volume is a product of their interaction. You can sing a different song but you’ll still get the same pitch. The only way to get rid of it is to get rid of the feedback loop.
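To make the analogy concrete, here’s a throwaway toy model (mine, not anything from audio engineering practice): a loop in which the speaker’s output is fed back to the microphone with some gain and delay, clipped at the speaker’s physical limit. Feed it two completely different “songs” and you get the same screech.

```python
import math

def run_loop(song, gain=1.2, delay=10, steps=2000):
    """Speaker-microphone loop: each output sample is the song sample plus an
    amplified, phase-inverted copy of the output `delay` samples earlier,
    clipped at the speaker's physical limit (here, 1.0)."""
    out = [0.0] * delay
    for n in range(steps):
        sample = song(n) - gain * out[n]          # song + what feeds back
        out.append(max(-1.0, min(1.0, sample)))   # the speaker can only go so loud
    return out[delay:]

def dominant_period(tail):
    """Rough pitch estimate: twice the average gap between zero crossings."""
    crossings = sum(1 for a, b in zip(tail, tail[1:]) if a * b < 0)
    return 2 * len(tail) / crossings if crossings else float("inf")

# Two very different "songs": a barely audible hum and a louder, faster tune.
quiet_hum = lambda n: 0.01 * math.sin(0.3 * n)
loud_tune = lambda n: 0.30 * math.sin(1.7 * n)

for name, song in [("quiet hum", quiet_hum), ("loud tune", loud_tune)]:
    tail = run_loop(song)[-500:]
    print(f"{name}: screech period ~ {dominant_period(tail):.1f} samples, "
          f"peak volume ~ {max(abs(x) for x in tail):.2f}")
# Both songs end in the same screech: period ~ 2 * delay, volume pinned at the
# clip level. The pitch and the volume belong to the loop, not to the song.
```

The pitch falls out of the delay, the volume out of the clipping; swap the song and nothing about the screech changes. That is what the interaction adds.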

We all see definite polarisation on most important issues. The standard analysis is, again, that two groups form, and the difference in opinion between the two sums to the outlook of the society. And again we are ignoring interaction. What people miss is that both sides have a vested interest in portraying the other side as being as crazy as possible. So most of the examples of either side are actually picked out by the other side. Those articles about air conditioning being sexist, a woman with 40 kids on benefits, people complaining about a movie poster, outrage about this and that, political correctness gone mad, are in every case minor incidents involving a handful of people, selected by the other side and made viral. Then the analysis that follows is based on data handed over by this process. Apparently the world’s leading experts on gender equality are all well-off white men who think that feminists are all man-hating nut-cases; a conclusion based on a biased view of the world provided largely by a social media system designed to respond to those opinions by shaping its filters to make the world look more like that view!

In technical science, systems with feedback loops have a mathematical property called non-linearity. They’re called complex systems because they have complex and often weird behaviours. The properties of these systems are well understood by people whose opinions no one cares about, and are unknown unknowns to a huge number of people whose opinions shape our society. You probably aren’t aware of this, but the idea of strong reductionism is axiomatic to all neo-classical economics, which includes all the ideas about how economies function in official practice right now. It’s embedded in the university curricula studied by many of our government ministers, although to be fair they probably didn’t pay much attention.
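If you want to see non-linear feedback producing weird behaviour with almost nothing in the system, the textbook toy example is the logistic map. A few lines of Python (my illustration, not part of any economic model) are enough:

```python
def logistic(r, x0, steps=60):
    """Iterate the logistic map x -> r * x * (1 - x), a one-line non-linear
    feedback rule and the standard toy example of a complex system."""
    x = x0
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

# Gentle feedback (r = 2.8): different starting points settle to the same value.
print(logistic(2.8, 0.20), logistic(2.8, 0.21))         # both ~ 0.6429

# Strong feedback (r = 3.9): two starts differing by one part in a million end
# up nowhere near each other. The behaviour belongs to the system as a whole;
# you cannot read it off from the rule, let alone from the "individuals".
print(logistic(3.9, 0.20), logistic(3.9, 0.2000001))
```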


Online safety has resurfaced in the media and in public discussion recently. The gist is that although online communications are bound by existing laws and always have been, social media seems to have completely sidestepped any social responsibility not to contribute to breaches of those laws, and people in general seem to think that the laws (or even just conventions of normal decency) don’t apply.

People need to be held accountable.

If we pursue that idea it takes us to the problem of identity. People can’t be held accountable if there is no way of knowing who they are. The industry’s response to this is interesting because it has to pursue almost contradictory aims. The extent to which individual marketing companies track their users varies, partly because they can use marketing to distort the apparent value of what they do, and partly because they tend to do what their competitors don’t for reasons of market ideology. So one company will make a big deal of anonymity simply because they have a business model that doesn’t require user tracking (at least not in a way that is apparent) and/or because they see market value in proclaiming that (probably to highlight an apparent weakness in a competitor).

Generally speaking, however, social media (and more generally, marketing) fundamentally opposes anonymity. The very purpose of social media is to track people. Any business involved in providing a social media platform is really in the business of tracking people. Methods vary from explicit login frameworks to trying to deduce who you are from behavioural data, but the aim is the same.

This puts social media providers in a peculiar position when it comes to online safety issues like racist tweets. They don’t want to have to do the work necessary to enforce any laws other than those required to maintain their business model, for the very simple reason that it would cost. They also don’t want to ban people from their platform, because their platform is more effective with a bigger share of users, and their business model works just fine even if their users are a bunch of racists. They would, however, save a huge amount of resources if the problem of user identity were solved by someone else. That can’t be a competitor, of course, so which other entity could it be? The only alternative: the public sector.

For industries, dealing with the public sector is dangerous because it has the problematic feature of being subject to interference by the public interest. From their perspective it would be great if the government solved the problem of identity. It would transfer a huge cost into the public sector and make their marketing systems far more airtight.

But… and this is a huge but…

People having an online identity that maps to a real identity, and private powers being able to track those identities, are not the same thing.

The social media industry would love for there to be online identities, but only if it can track them. This, however, contradicts a public interest in privacy, which is why the discussion is only allowed to happen as a balance between privacy and safety. So the private sector will want to flirt with the idea of the government doing their work for them, but will have to be very careful not to pollute the discussion with facts. The claim that if we want safety we have to give social media providers the technology to track us all is simply a lie. I’m going to explain how we could have privacy and accountability using basic computer science:

Everyone has an identity. Everyone can also create multiple personas that are connected to this identity. When we set up an account with, say, SocialNetwork.com, we create a persona and use it to log in. We type the name (e.g. ‘Adrian’) of the persona and its password. This is encrypted at our end and sent to SocialNetwork.com. SocialNetwork.com cannot decrypt it. At this stage they have no idea who you are. They send it to a public agency (with whom we created the identity and who holds the private key), which verifies that it is correct, tells SocialNetwork.com to accept the login and tells them what your name is. SocialNetwork.com now knows that there is someone who goes by the name ‘Adrian’ logged in and that they really are who they say they are… although they don’t know who they say they are.
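Here’s a rough sketch of that login flow in Python, assuming the third-party `cryptography` package for the public-key part. The class names, message format and example credentials are all mine, for illustration only; the point is simply who can read what.

```python
import json
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# RSA-OAEP: anyone can encrypt to the agency, only the agency can decrypt.
OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

class Agency:
    """The public body: holds the private key and the identity records."""
    def __init__(self):
        self._key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
        self.public_key = self._key.public_key()
        self._identities = {}   # identity -> {persona: password} (plaintext only for brevity)

    def register(self, identity, persona, password):
        self._identities.setdefault(identity, {})[persona] = password

    def verify_login(self, blob):
        """Called by a service: open the blob, check the credentials, and
        reveal only the persona name, never the identity behind it."""
        claim = json.loads(self._key.decrypt(blob, OAEP))
        stored = self._identities.get(claim["identity"], {})
        if stored.get(claim["persona"]) == claim["password"]:
            return {"accepted": True, "persona": claim["persona"]}
        return {"accepted": False}

class SocialNetwork:
    """The service only ever sees an opaque blob and the agency's answer."""
    def __init__(self, agency):
        self.agency = agency
        self.logged_in = set()

    def login(self, blob):
        answer = self.agency.verify_login(blob)
        if answer["accepted"]:
            self.logged_in.add(answer["persona"])
        return answer

def client_login(service, identity, persona, password):
    # Encrypted on the user's machine with the agency's *public* key,
    # so the service it passes through cannot read it.
    blob = service.agency.public_key.encrypt(
        json.dumps({"identity": identity, "persona": persona,
                    "password": password}).encode(), OAEP)
    return service.login(blob)

agency = Agency()
agency.register("real-person-007", "Adrian", "correct horse battery staple")
site = SocialNetwork(agency)
print(client_login(site, "real-person-007", "Adrian", "correct horse battery staple"))
# -> {'accepted': True, 'persona': 'Adrian'}; the site never learns 'real-person-007'.
```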

Then, let’s say you log out, create a new persona (e.g. ‘RacistGuy5000’) and log in. The same process happens. SocialNetwork.com knows you as ‘RacistGuy5000’. Although they know you, again, are who you say you are (your identity), they have no idea that Adrian and RacistGuy5000 are the same person.

SocialNetwork.com can block personas. So if RacistGuy5000 does something against their terms then they can ban that persona. Of course you could just log in as Adrian. But let’s say that every post you make is signed with an encrypted signature that contains your identity within it. SocialNetwork.com can’t access that (because they don’t have the private key to decrypt it), so to them the signatures are just random strings. If RacistGuy5000 does something illegal, SocialNetwork.com would gather all the offending posts (or whatever media items) and send them to the agency, which would be able to decrypt the identity of whoever produced them and ban that identity. Now neither RacistGuy5000 nor Adrian would be able to log in. The agency could choose whether to ban the identity from that one service or from all services that use this unified login.
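And a matching sketch for the signatures, under the same assumptions (the `cryptography` package, and names and formats of my own invention). The service only ever handles opaque blobs; only the agency can turn them back into identities.

```python
import json
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

agency_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
agency_public = agency_key.public_key()   # known to every client
banned_identities = set()                 # held by the agency

def sign_post(identity):
    """Client-side: the poster's identity, encrypted with the agency's public
    key. To SocialNetwork.com it is just an opaque string attached to the post."""
    return agency_public.encrypt(json.dumps({"identity": identity}).encode(), OAEP)

def ban_from_report(signatures):
    """Agency-side: open the signatures on the offending posts and ban the
    identities behind them, whichever personas they were posted under."""
    for sig in signatures:
        payload = json.loads(agency_key.decrypt(sig, OAEP))
        banned_identities.add(payload["identity"])

# 'Adrian' and 'RacistGuy5000' are personas of one identity, so a report
# against either persona's posts bans the person behind both.
offending_signatures = [sign_post("real-person-007")]
ban_from_report(offending_signatures)
print("real-person-007" in banned_identities)   # True: that identity can no longer log in
```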

We could also stipulate that the service providers have to make the signatures associated with posts publicly visible. This way a third party could report a breach. The responsibility of the social media provider would simply be to report offending posts.

The important thing here is that SocialNetwork.com, or any service, would have no way of knowing which personas are the same person without that person telling them that they are, so they wouldn’t be able to track movements across sites without people opting in [1].

In summation, what I am describing is simply having a public agency provide a framework currently provided by the private sector: we nationalise authentication.

[1] This doesn’t solve all problems. Reports would have to be verifiable (so that you can’t copy someone’s signature and create fake reports). This isn’t too hard to solve, because the signature could contain a hash of the content of the post; faking a post that matches a signature would be impossible. This would, however, require a standard scheme by which to hash posts. Posting videos and sounds becomes an issue too; again, we’d need a standard scheme for hashing videos and audio. We’d have to decide which services are required to use this login system, say any service that allows communication with more than 100 users. And should there be an ‘authentication tax’ paid by services that use this system, or should it be a public cost? The point is that the initial problem, having identity with privacy, is solved.
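To make the hash idea in this footnote concrete, here’s a minimal sketch using Python’s standard hashlib; the field names are my own illustration. A copied signature can’t be attached to a different post, because the hashes won’t match.

```python
import hashlib

def content_hash(text):
    """A standard fingerprint of a post's content (SHA-256 of its bytes)."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

# What the agency would see after decrypting a signature on a reported post:
decrypted_signature = {"identity": "real-person-007",
                       "content_hash": content_hash("the offending post")}

def report_is_genuine(reported_text, decrypted_signature):
    # A copied signature attached to some other post won't match its hash.
    return content_hash(reported_text) == decrypted_signature["content_hash"]

print(report_is_genuine("the offending post", decrypted_signature))   # True
print(report_is_genuine("a forged accusation", decrypted_signature))  # False
```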