
Some boffins have claimed that, in a social network, “ten percent” has special powers. Their software model simulated different networks of connected people, looking for insights into how ideas spread through society, and they observed that once a meme infects 10% of the population it has the potential to explode into the mainstream.
This is not the first time I’ve read about micro-model simulations like these. I’ve gleaned that the models need to be:
- True representations of cause and effect relationships, such as a decision-making process.
- Empirically validated against real life, e.g. by demonstrating a statistical correlation.
The boffins readily concede they have not yet found a way to measure the correlation, so what of the model? The article explains:
The basis of the model was that people don’t like to hold an unpopular opinion and usually want to reach a consensus with their peers. When a person talks to someone with the same opinion as themselves, the conversation reinforces their belief. But, if they talk with someone who holds a different opinion, they consider the argument and then pop off to talk to someone else about it. If the next person they talk to also holds the different opinion, then the listener will adopt that belief.
So the people in the model are sheeple who can be completely influenced by a few magically conjured contrarians that the boffins insert by programmer’s diktat. Reason, content and the individual’s debating competence don’t come into it. Also, if you look closely, everybody has the same number of connections, so society ends up shaped like a perfectly formed ball. That’s not the society I know.
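To make the mechanics concrete, here is a minimal sketch in Python of the kind of agent-based rule being described. It is my own reconstruction from the description above, not the researchers’ model: the population size, the number of conversations and the assumption that anyone can talk to anyone at random are all invented for illustration.

```python
import random

# A toy version of the interaction rule quoted above (my own sketch, not the
# researchers' code). Population size, the committed share and the assumption
# of fully random mixing are choices I have made up.
N = 1000                    # number of people in the model
COMMITTED_SHARE = 0.10      # the "ten percent" of unshakeable advocates

committed = set(random.sample(range(N), int(N * COMMITTED_SHARE)))
opinion = {i: 'B' if i in committed else 'A' for i in range(N)}  # 'A' = mainstream, 'B' = new idea
considering = {i: False for i in range(N)}  # has this person just heard a dissenting view?

def converse(listener, speaker):
    """One conversation, following the rule described in the article."""
    if listener in committed:
        return                                # the planted advocates never budge
    if opinion[speaker] == opinion[listener]:
        considering[listener] = False         # agreement reinforces the current belief
    elif considering[listener]:
        opinion[listener] = opinion[speaker]  # second dissenting voice in a row: adopt it
        considering[listener] = False
    else:
        considering[listener] = True          # first dissenting voice: mull it over

for _ in range(200 * N):                      # lots of random pairwise conversations
    a, b = random.sample(range(N), 2)
    converse(a, b)

share_b = sum(opinion[i] == 'B' for i in range(N)) / N
print(f"Share holding the new idea after the run: {share_b:.0%}")
```

Whether opinion B sweeps the population in any given run depends on those invented numbers; the article’s claim is that the sweep only becomes likely once the committed share passes roughly ten percent.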
So it’s not a completely accurate model. But what if it were close? Ten percent is ten times easier to achieve than 100% dominance, and if, say, only 50% of the population are as suggestible as the model assumes, might 20% be a realistic threshold?
Dumbing down
I’m very interested in what this means strategically. According to the model it is the level of uptake within the peer group that drives further adoption; if that is true, it points us away from targeting influencers and towards going for raw numbers and easy wins. A dumbed-down communications strategy might not persuade key influencers in the media, but it might help to build a critical mass.
Divide and conquer
Another possible response would be to deliberately target people who know somebody already persuaded, or to target tight groups: say, a geographic region, an ethnic minority or an interest group. If members of one community spend most of their time talking to other members of the same community, then a pattern of repeated exposure could be achieved by persuading fewer people overall. In other words, by segmenting the population we can more easily achieve critical mass within one segment.
It might be interesting to experiment with this approach by exploring the range of targeting methods available on Facebook and other online markets, and observing what works.
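Before spending anything on targeting tools, the earlier toy model can be extended to hint at whether segmentation helps. In this sketch (again my own construction, with made-up numbers) the same number of committed advocates are either spread thinly across the whole population or concentrated in one community that mostly talks to itself.

```python
import random

# Sketch of the segmentation idea (all numbers invented): the same number of
# committed advocates, either spread across the whole population or
# concentrated in one tight-knit community.
N = 1000                # total population
IN_GROUP_PROB = 0.9     # how often a conversation stays inside a person's own community
COMMITTED_TOTAL = 60    # 6% of everyone, but 12% of a single 500-person community

def run(concentrated):
    community = {i: i % 2 for i in range(N)}  # two equal communities
    members = {c: [i for i in range(N) if community[i] == c] for c in (0, 1)}
    pool = members[0] if concentrated else list(range(N))
    committed = set(random.sample(pool, COMMITTED_TOTAL))
    opinion = {i: 'B' if i in committed else 'A' for i in range(N)}
    considering = {i: False for i in range(N)}

    for _ in range(300 * N):
        listener = random.randrange(N)
        if random.random() < IN_GROUP_PROB:
            speaker = random.choice(members[community[listener]])  # in-group chat
        else:
            speaker = random.randrange(N)                          # occasional outsider
        if speaker == listener or listener in committed:
            continue
        if opinion[speaker] == opinion[listener]:
            considering[listener] = False
        elif considering[listener]:
            opinion[listener] = opinion[speaker]
            considering[listener] = False
        else:
            considering[listener] = True

    # report the share holding the new idea in each community
    return {c: sum(opinion[i] == 'B' for i in members[c]) / len(members[c]) for c in (0, 1)}

print("Advocates spread evenly:        ", run(concentrated=False))
print("Advocates in one community only:", run(concentrated=True))
```

Whether the concentrated run actually flips its community depends entirely on those invented parameters; the point is only that the local share of advocates can pass a threshold that the overall share does not.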
Downsides
This blog post has been maturing in the Libertarian Home cellar for some time. One of the reasons I’ve taken my time over it is a sense of unease about attempting either of these strategies.
The assumption that a large proportion of the population change their minds based on peer pressure alone makes me uncomfortable. It seems like an injustice to characterise the British this way (and it is no fairer on the Americans whose peers conducted the study), and frankly I find the idea quite shocking. Unfortunately, if we are going to try to achieve a change in policy then we should seriously consider whether it is true. Perhaps the best evidence for it is that there is so much wrong with the world as it is today, and I’m already convinced that the reason is bad ideas about how to run it. The sheeple theory has the power to explain how those poor ideas were adopted in the first place.
There are surely risks in attempting to exploit the credulous nature of casually interested people. Would a casually interested and easily persuaded person act as a conduit for misrepresentations and misunderstandings, and undermine the brand?
When targeting separate groups we would naturally tailor our message to each group. The risk here is accidentally contradictory messages, or worse, messages that can be characterised as contradictory by opponents. The LibDems have this image, correctly or otherwise, and it does them no good. Such attacks are difficult to close down, because an honest explanation might contain a superficial contradiction, such as making drug takers accountable for health costs while also legalising drugs.
A possible route through this minefield is an honest and intellectual campaign targeted according to a divide-and-conquer strategy. It might begin by identifying a group with compatible underlying assumptions and targeting it directly, then move on to groups which overlap the first, and so on.
I’m interested to hear, in the comments, what other tactical changes this work inspires.
