An advantage of a few days’ holiday is that it allows time to let the mind reflect and run free from the routine daily tasks it has to perform in workaday life.
Given that the last few days’ holiday (‘Holy-day’) fell at Easter, I took the chance to think about religion, humans and politics. I should make it clear that I follow no religion and have no supernatural commitments but am not at all hostile to the religious sentiment. I find it intriguing – and very human.
One of the most intriguing things about religion is that it has been around for a very long time.
It has also been present in just about every (maybe, in fact, every) society and culture for at least the last ten thousand years.
That, in itself, makes it very interesting.
What makes it more interesting still are the many attempts to answer the general question ‘why have religions been so omnipresent in the human world?’.
Further, when it comes to Easter itself, the specific variation on that interesting question is ‘why is the myth of the empty tomb so appealing?’ (I accept that it is, and has been.)
But the question I’m interested in here is yet another: What might the myth of the empty tomb (and resurrection) tell us about contemporary society and politics?
There are, now, almost as many theories about the reason religion is so ubiquitous as there are different religions (well, to be honest, that’s probably an exaggeration).
One theory that is gaining quite a bit of popularity at present is the ‘parasite’ theory.
By ‘parasite’ theory I don’t mean the rather smug theories that see religiosity as a kind of mental parasite (a meme-like ‘infection’ of the mind that has perhaps evolved to make itself appealing to, especially, children’s minds). I don’t even mean the possibility that susceptibility to religiosity is caused by some parasitic pathogen. (Both of these views are neatly summarised in this essay.)
Instead, I mean the theory developed by Corey Fincher and Randy Thornhill that parasite-stress makes groups of people socialise more exclusively with their own in-group rather than with members of out-groups. That is, where there are a lot of potentially harmful parasites (including the bacteria that cause infectious diseases) the theory predicts higher levels of ‘family ties’ and religiosity because both promote ‘in-group assortative sociality’ (‘preference for socialising with those close to us’).
[As an aside, Thornhill – along with co-author Craig Palmer – is perhaps more (in)famous for the book ‘A Natural History of Rape‘ in which the claim was made that rape had evolved as a male ‘mating’ strategy under certain conditions. An interesting analysis of why Palmer and Thornhill might be wrong about this is presented in this ‘Guide for the Perplexed‘ by Eric Smith, Monique Borgerhoff Mulder and Kim Hill (not ‘our’ Kim Hill – I presume) – at least it’s a guide when it comes to controversies in the evolutionary social sciences.]
As I said, all very interesting. But there’s a problem.
In the vast expanse of what we can call our hunter-gatherer past a couple of things are clear: Humans lived in relatively small bands, encountered other bands relatively infrequently (maybe a few times a year) but, surprisingly for the parasite theory, when they did meet other bands there was quite a degree of swap-over in membership. Individuals moved from one band to another for a variety of reasons including having friends and relatives in neighbouring bands and, of course, in search of partners.
That intermixing between bands is not the kind of behaviour that you’d expect from groups of humans preferring ‘in-group assortative sociality’. And nomadic habits within a defined territory mean that you come across other bands with overlapping territories as a matter of course. So ‘in-group assortative sociality’ would presumably only apply to humans who lived in more or less permanent settlements. That settled life doesn’t apply until the onset of an agrarian lifestyle.
Yet it seems that our religiosity as a species was well and truly established before the Agricultural Revolution around 9–10,000 BC that transformed humans into farmer-settlers and – as the archaeological and paleoanthropological record makes clear – reasonably stunted, disease-prone, bone-weary but more numerous beings.
The parasite theory makes more sense for our agrarian forebears than it does for our hunter-gatherer ancestors. The former had reason to keep themselves suspicious of outsiders. Being settled they had less opportunity to mix with other settlements and such settlements were soon of a size that could support mini-epidemics which were well worth avoiding. They also involved close proximity to domesticated animals which were a constant source of new infectious diseases.
In this context, the recent (since 1995) findings at the site called Göbekli Tepe in south-east Turkey are particularly interesting. Large stone pillars have been found in circular formations in the deepest layers which date to almost 10,000 BC. Huge cooperative efforts by hundreds of people would have been needed to construct these at a time when the humans in the area were still hunter-gatherers and foragers:
Göbekli Tepe is regarded as an archaeological discovery of the greatest importance since it could profoundly change the understanding of a crucial stage in the development of human society. Ian Hodder of Stanford University said, “Göbekli Tepe changes everything”. It shows that the erection of monumental complexes was within the capacities of hunter-gatherers and not only of sedentary farming communities as had been previously assumed. As excavator Klaus Schmidt put it, “First came the temple, then the city.”
The site, however, is close to where one of the first varieties of domesticated wheat originated (einkorn wheat found in the Karaçadag Hills some 30 kilometres away). There were no signs of habitation in the deepest layer of the site which pre-dates the Neolithic (the start of agriculture):
While the site formally belongs to the earliest Neolithic (PPNA), up to now no traces of domesticated plants or animals have been found. The inhabitants are assumed to have been hunters and gatherers who nevertheless lived in villages for at least part of the year. So far, very little evidence for residential use has been found. Through the radiocarbon method, the end of Layer III can be fixed at about 9000 BCE (see above) but it is believed that the elevated location may have functioned as a spiritual center by 11,000 BCE or even earlier.
The surviving structures, then, not only predate pottery, metallurgy, and the invention of writing or the wheel, they were built before the so-called Neolithic Revolution, i.e., the beginning of agriculture and animal husbandry around 9000 BCE. But the construction of Göbekli Tepe implies organization of an advanced order not hitherto associated with Paleolithic, PPNA, or PPNB societies. Archaeologists estimate that up to 500 persons were required to extract the heavy pillars from local quarries and move them 100–500 meters (330–1,640 ft) to the site. The pillars weigh 10–20 metric tons (10–20 long tons; 11–22 short tons), with one still in the quarry weighing 50 tons. It has been suggested that an elite class of religious leaders supervised the work and later controlled whatever ceremonies took place. If so, this would be the oldest known evidence for a priestly caste—much earlier than such social distinctions developed elsewhere in the Near East.
The site, that is, seems to be religious in orientation but drew a large number of bands together rather than keeping them separate. This may have been made possible by the presence of einkorn wheat – and perhaps even its preliminary cultivation – to support the work. At the time the environment was also rich in wildlife:
The reliefs depict mammals such as lions, bulls, boars, foxes, gazelles and donkeys; snakes and other reptiles, arthropods such as insects and arachnids; and birds, particularly vultures. At the time the edifice was constructed, the surrounding country was likely to have been forested and capable of sustaining this variety of wildlife, before millennia of settlement and cultivation led to the near–Dust Bowl conditions prevalent today.
Schmidt [the lead archaeologist] believed that what he called this “cathedral on a hill” was a pilgrimage destination attracting worshippers up to 100 miles (160 km) distant. Butchered bones found in large numbers from local game such as deer, gazelle, pigs, and geese have been identified as refuse from food hunted and cooked or otherwise prepared for the congregants.
A plentiful environment may well have meant that a large number of hunter-gatherer bands began to live in close proximity and were less mobile than typical nomadic hunter-gatherers. The site was clearly a gathering point and presumably not just for the work needed to erect the monumental structures. Given that there was no existing large social system it seems likely that the effort must have been done voluntarily for some common purpose – probably, as Schmidt thought, for some form of worship.
Most explanations for what was going on at the site are inevitably speculative. But one intriguing take on its meaning is presented in this National Geographic programme.
To summarise the argument in the video: Rather than agricultural settlement and the surplus it provided pre-dating organised religion (and hence the construction of temples and cities), Göbekli Tepe suggests that settlement occurred prior to agriculture and along with organised religion. This large, reasonably cohesive and relatively immobile group could then presumably take advantage of the potential for agriculture.
That original settlement was based on a naturally occurring surplus for hunter-gatherers (plentiful wild fauna, wild ‘fields’ of wheat for gathering, etc.) which acted to bring large numbers of them together. Importantly, because larger numbers of people were now in close contact the typical processes that united small bands no longer worked to generate trust and cooperation.
Dunbar’s Number was well and truly exceeded:
Dunbar’s number is a suggested cognitive limit to the number of people with whom one can maintain stable social relationships. These are relationships in which an individual knows who each person is and how each person relates to every other person. This number was first proposed in the 1990s by British anthropologist Robin Dunbar, who found a correlation between primate brain size and average social group size. By using the average human brain size and extrapolating from the results of primates, he proposed that humans can only comfortably maintain 150 stable relationships. Proponents assert that numbers larger than this generally require more restrictive rules, laws, and enforced norms to maintain a stable, cohesive group.
The construction of the pillared circles can be understood as providing both a cooperative work project that used members of many different bands and a demonstration of shared hunter-gatherer values and norms. That is, it symbolised – in monumental stone and carvings – a hunter-gatherer religion that allowed a large group to co-exist cooperatively.
If correct, religion becomes a means of bringing smaller groups together rather than, primarily, a means of keeping them apart (to avoid parasite transmission). It helped, that is, organise large units of society and so to begin the process we have come to call ‘civilisation’.
Further, the – increasingly speculative – argument in the video is that the monumental pillars (which get smaller in subsequent layers at the site) are basically depictions of humans – without facial features (deities) – that tower above the carvings of (mostly) predatory animals seen lower on their base. This, it is claimed, suggests a shift in collective mind-set to one in which humans come to see themselves as above nature and able to dominate it – through organised mass collective effort. In turn, the obvious achievements (monumental constructions, etc.) that a coordinated large group of humans can produce would have confirmed their pre-eminence over the rest of the world.
Quite a boost for the collective human ego.
Domestication of animals and farming would follow quite naturally from these beliefs combined, as they would have been, with the naturally prolific environment and the larger social group. The people who lived there, in short, could cooperate to protect wild fields of wheat and, perhaps, corral wild herds of sheep, boar and goats to protect them from predators. It’s then one short step to cultivate the wheat systematically and herd, husband and shepherd the animals.
Finally, the beliefs that went along with this cooperative arrangement are argued to have continued even after the original site was deliberately filled in. In particular, evidence of the unearthing of buried bodies and the removal of heads, it is claimed, suggests that the site was concerned with symbolically preserving and continuing the existence of people of the past (a ‘cult of the skull’).
That is, it is claimed that the notion of resurrection was used to unite a large grouping of humans through time as well as spatially in the ‘here and now’. Important ancestors were ‘re-born’ through ritual so that they remained with those in the present, establishing and continuing a cultural tradition carved in stone.
Resurrection myths then supposedly proliferated throughout the broad area but all led back to this momentous change: Humans aggregated into large social units, developed agriculture and the division of labour that has supported, and been reproduced and elaborated within, every civilisation since.
So, is this the origin of the empty tomb? Resurrection as integral to the continuation of a culture based on the pre-eminence of humans over nature and one that can unite large groups of humans over time?
Well, if I can add my own twist to this tale, I’d say ‘not quite’.
There’s one more step, I think, and that step has particular resonance for our modern society. Resurrection myths, after all, have two parts. Obviously there is the ‘re-birth’ (resurrection). But there is also, of necessity, a death.
What dies? One answer might be ‘the body’. But there’s another candidate.
The shift from small bands to larger aggregations of humans not only presents the problem of how to trust strangers in cooperative action but also leads to a very particular social invention: The self.
More accurately, it leads to the enlargement of the self in the sense that it comes to loom larger in its role in human action. Hunter-gatherers no doubt had selves but there would have been less need for them to flex their sociocultural muscles in a relatively flat social structure which allowed considerable autonomy.
When life is directly attuned to current environmental (and social) conditions there is less need to do those things that selves have come to excel at – individual planning; imagining future scenarios; considering one’s own preferences and how to achieve them, all the while being aware that other selves are busy trying to achieve theirs; and, perhaps most importantly, monitoring relative social status.
And that last point is important: In egalitarian societies (or families or any other groups) there is little need for the development of a strongly etched, individuated ‘self’. Egalitarianism, almost by definition, has little time for interpersonal competition and, absent competition, individuals have less need to expend energy on advancing their social status.
The strong overlap in individual interests in hunter-gatherer bands sprang (and still springs) from an interdependence that goes to the heart of individual and group survival. That interdependence probably also provided a strong perception of ‘common fate’ amongst individual members of the band. Spending time contemplating one’s self-interest rather than that of the group may not have been an efficient use of cognitive resources.
But in larger aggregations of people who depend upon surpluses (rather than real-time ‘harvesting’ of nature’s surplus) the orientation of individuals would shift markedly. A surplus (as opposed to a ‘harvest’ from nature) must be secured, maintained under favourable conditions and reproduced (to continue the way of life dependent upon it).
The kind of social coordination needed to achieve this goes well beyond cooperation between interdependent but autonomous individuals who know each other. It becomes – to use a modern phrasing – a process of bureaucratic administration, planning and organisation.
Increasing dependence on the surplus also means maintenance of social access to it – that is, its distribution must be organised and administered (e.g., through explicit rules or laws about who deserves what). That means, for the individual, that there is a need to maintain a social reputation and conform increasingly to the demands of the group (its rules and regulations).
Importantly, one’s reputation starts to be judged not through intimate, life-long acquaintanceship but through one’s degree of adherence to increasingly complicated social rules and performance at particular tasks that are part of a social structure that is more and more differentiated and stratified.
As mentioned above, collectively achieved surpluses may also be partly dependent upon seeing oneself as separated from nature, even in competition with it. The world therefore needs re-designing to achieve continuing surpluses – a notion that could only have reinforced the sense of agency and efficacy for individuals as well as the group as a whole.
But, importantly, what might at first seem paradoxical – given reliance on the cohesion of the large group – is that separation from nature also includes separation from other members of the social group (i.e., increasing independence from others rather than interdependence with them).
Partly that’s because life for an individual in a large, bureaucratically organised aggregation of humans is less dependent on the survival of any particular individual members of that grouping than is the case in a small hunter-gatherer band. The welfare of another member of the group is no longer reliably coincident with one’s own welfare – in fact it might well be in direct conflict, especially as the social world becomes more diversified (more skills, even ‘occupations’ required) and hierarchical.
All of these factors encourage – even require – the development of an increasingly enlarged, individualised and discrete notion of a self. But a ‘self’ of what kind?
The kind meant here is what has been called the ‘symbolic self’. To quote Sedikides, Skowronski and Dunbar (2006, p.3):
We are specifically concerned with the evolution of the symbolic self. This term refers to both the ability to consider the self as an object of one’s own reflection and the ability to store the products of such reflections (which may be abstract and/or language-based) in memory.
Basically, it’s a self that sees itself as a self. Importantly, it sees itself as a ‘separate self’ – both from other such separate ‘selves’ and from the world in general.
Even more importantly, the larger aggregations of people in which it ‘finds itself’ also imply separate and more or less independent ‘trajectories’ for each self within the social group – it’s no longer a case of ‘common fate’. In this environment, the self suddenly has quite a bit of work to do (on its own behalf).
Opinion is a bit divided over just when a symbolic self emerged in evolution. Sedikides et al. tend towards an early entrance of at least a rudimentary symbolic self (about 1 million years ago with Homo ergaster/Homo erectus) while Mark Leary and Nicola Buttermore (2003) suggest it was much later (about 60,000 years ago) that a fully symbolic self made its appearance.
Either way, while the symbolic self emerged ‘early’ relative to the agricultural revolution, like most inventions it had to wait for its time to really come. With prolonged periods of surplus and settlement – possibly at Göbekli Tepe – the self found itself, at last, in a social environment that let it off the leash.
What ‘leash’ is that?
In a world of small bands in which everyone knew everyone else as well as all the relationships that existed between members of the band it was hard for a self to ‘blossom’ (or perhaps the term is ‘bloom’ as in ‘toxic algal bloom’). As Peter Gray points out:
The writings of anthropologists make it clear that hunter-gatherers were not passively egalitarian; they were actively so. Indeed, in the words of anthropologist Richard Lee, they were fiercely egalitarian. They would not tolerate anyone’s boasting, or putting on airs, or trying to lord it over others. Their first line of defense was ridicule. If anyone–especially if some young man–attempted to act better than others or failed to show proper humility in daily life, the rest of the group, especially the elders, would make fun of that person until proper humility was shown.
On the basis of such observations, Christopher Boehm proposed the theory that hunter-gatherers maintained equality through a practice that he labeled reverse dominance. In a standard dominance hierarchy–as can be seen in all of our ape relatives (yes, even in bonobos)–a few individuals dominate the many. In a system of reverse dominance, however, the many act in unison to deflate the ego of anyone who tries, even in an incipient way, to dominate them.
According to Boehm, hunter-gatherers are continuously vigilant to transgressions against the egalitarian ethos. Someone who boasts, or fails to share, or in any way seems to think that he (or she, but usually it’s a he) is better than others is put in his place through teasing, which stops once the person stops the offensive behavior. If teasing doesn’t work, the next step is shunning. The band acts as if the offending person doesn’t exist. That almost always works. Imagine what it is like to be completely ignored by the very people on whom your life depends. No human being can live for long alone.
(The original ‘tall poppy syndrome’ may have arisen for extremely good reasons!)
Well, that might be the case in small hunter-gatherer bands. But it’s no longer the case in large aggregations of people dependent on a stored food surplus who are often strangers to each other. Put bluntly, the new ‘mode of production’ no longer supports ‘reverse dominance’ – potentially, it starts to support good old-fashioned dominance. Status in a hierarchy – rather than egalitarianism – now becomes the clear goal for each individual (which reinforces the need for individuals to develop the ‘symbolic self’ with its focus on itself set to the maximum).
And in just those sorts of conditions the origins of the very modern ‘separate self’ we all know so well come into focus.
In the intervening millennia larger societies inevitably developed numerous means to check the destabilising effect of a ‘war of all against all’ – some methods brutal, others more subtle. No society has been willing to leave the ‘self’ completely unleashed – apart, perhaps, from the current, and now global, experiment with a form of ‘self-centred’ individualism as ‘full-blown’ as humans have ever experienced.
Pushed centre-stage during the Enlightenment – with its focus on individual rights, property rights, and democracy – the ‘self’ has never looked back. As an antidote to feudal and aristocratic privilege, the focus on the ‘natural rights’ of the self has been on a winning streak for the past few hundred years. Held aloft by its fellow travellers of humanistic science, a Protestant Reformation (with its emphasis on a personal relationship with God) and the artistic discovery of personal perspective (and, at least since Beethoven, the artist as expressive individual), the ‘symbolic self’ is now the central construct of our civilisation.
Which would all be very fine if that were the end of the story.
But it isn’t.
[To be continued]