Because of the extra workload my boss has dropped on me along with the length of this essay, I have decided to break this third post on language rights into two posts. I expected to put this up yesterday, but better late than never. The next post will go up sometime early next week.
This post outlines an alternative to normative liberal political theory, something I have promised to do since this surprisingly controversial post on the Middle East. I've actually started writing it a half dozen times in the last couple of months, and then scrapped it. I guess I need a certain amount of pressure to actually get some kinds of work done, but language rights are a good example application, so I'm taking this opportunity to do it.
This post only outlines the theory and offers an example of its application that has nothing to do with language policy. I use it to analyse arguments justifying affirmative action in the United States as a form of slavery reparations. In the next post, I will apply it specifically to language issues.
I'm not a philosopher or a political scientist by training. I grew up immersed in child development and education theory because of my parents and later studied a lot of linguistics and translation theory, ending up - somehow - with a degree in physics and becoming a computer programmer by trade. In a lot of the social sciences, and particularly in philosophy, I'm more or less self-educated, and I realise the limitations that entails. Sometimes it means rediscovering something that's been hashed over decades ago without knowing it. But sometimes, it has advantages in terms of thinking outside of the box. Bits and pieces of my outlook have been advanced by other people, but I don't know anyone else saying quite the same thing. On the other hand, this may all be old news and I just don't know about it.
I want to start by giving you a brief and quite skewed summary of a theory in psychology and child development. It was originally promoted by Lev Vygotsky, a Russian who worked in the era of the revolution. He was a contemporary of Jean Piaget, and for a long time people tended to group them together as if they were largely saying the same things. They weren't, but Vygotskyan thinking is certainly informed by the experiences of Piaget's disciples.
Vygotsky advanced, among other things, a notion he called mediation. He believed that people always interact with the world through culturally constructed artefacts. Mostly, he just called them tools. The most trivial examples of this sort of thing are, in fact, physical tools. Hammers, for instance, are culturally constructed. They require a metalworking industry, mass produced nails and access to lumber, each of which involves a complicated cultural framework of divided labour, market relations, transportation networks and the like. When we want to build something, we don't just construct it with our bare hands, we use carpentry tools, and how we build is determined in large part by the tools and materials we have.
Part of what was unique about Vygotsky was that, first, he claimed that symbolic tools were just as important, and just as much culturally constructed artefacts, as physical tools. Second, he claimed that tools not only affect how we interact with the world, they also affect how we think about it. Mathematical algorithms, categories, philosophies and beliefs constitute, in Vygotsky's thinking, tools in the same sense as a hammer or a car. They have histories, they are instrumental in mediating how we interact with the world, they are supported by a cultural and institutional framework and, just like a hammer, we come into possession of them as is, without direct knowledge of the history and cultural supports all tools have. Furthermore, not only are our tools culturally adapted to us and our needs, we adapt ourselves to them as we use them. Vygotsky considered this the core principle of psychology. The school of thought that grew out of his work is now known as cultural-historical activity theory, often abbreviated as "CHAT."
This school of thought has been very influential in education theory and is beginning to have influence in cognitive science, thanks in large part to work in the Department of Communication at UC San Diego and in the Education department at the University of Helsinki. It has found an especially productive home in the computer industry in recent years, where CHAT has become an important school of thought in human-computer interaction (HCI) design. Bonnie Nardi, a former interface wonk at Apple, is one of its better known proponents.
But I want to highlight some of the more philosophical consequences of this kind of thinking. To do that, I'm going to use a semi-famous quote from Gregory Bateson, an anthropologist and I think Margaret Mead's husband:
Suppose I am a blind man, and I use a stick. I go tap, tap, tap. Where do "I" start? Is my mental system bounded at the handle of the stick? Is it bounded by my skin? Does it start halfway up the stick? Does it start at the tip of the stick?

Bateson goes on to argue that cognition - at least, if cognition is understood as information processing - is something that takes place both inside and outside of our bodies. Tools, like the blind man's stick, are as much a part of the human information processing system as a chunk of our brains. But if the stick is part of the blind man's mental processes, isn't the curb he taps it against just as much a part? How about the man who made the curb, or the city planner who put it there in the first place?
Cognition isn't the only place where this sort of "out of body" thinking applies. No one can run a mile in a minute, but I can sure drive a mile in a minute if I have a car. If my legs are a part of my locomotive apparatus, surely so is any vehicle I'm using? But then, so is the road, the road signs, the gas stations and the crews who repave the road every few years. Digestion is the process of turning food into stored energy for my body, so where does my digestive system start? I have to cut food up on the plate before I can eat it, so my hands and my fork and knife must also be part of my digestive system. But then, the cook also has to be a part of my digestive system, since he too plays a role in converting food into metabolic energy.
This more philosophical way of looking at Vygotskyan mediation says that our selves expand out from our bodies to the whole world, permeating the machines, the social structures and even the people all around us. We are not merely our bodies, or our memories, or even some sort of software running in our nervous system. My mind is not just my brain, it's also my Ultra 10 workstation, my books, my job and the whole cultural and intellectual milieu I inhabit, as well as the larger global sphere that it exists in. Each of us encompasses the entire universe.
This "oneness with everything" begins to sound like New Age mysticism, but I want to convince you that there is nothing mystical about it. People chop the world up into categories and things and what I am describing may not be the way that you're used to chopping the world up, but all I'm doing is chopping it up differently. I have no intention of evoking gods, essences or mystical forces. I am not proposing any sort of strange theory of physics, although I might be guilty of proposing a theory of metaphysics. I am saying that the stars affect our destinies, but only in the same sense as I might claim that if the sun went out you'd freeze to death.
Vygotsky was a materialist - both historical and dialectical - and had little use for mysticism. Mediation was, for him, the tool that let him get at the human mind.
As an aside, I've always thought there would be an interesting paper in comparing Vygotskyan mediation to Dennettite memes. At times it seems like they're almost talking about the same thing and at others they seem like they're on different planets. Vygotsky felt that consciousness is constructed by the use of cultural artefacts, while Dennett feels that it is constructed by the action of memes. The big difference is that Dennett says that consciousness is a socially constructed illusion, while Vygotsky claims that it is a socially constructed fact. Therein lies a world of difference.
Now, there are several other alternative ways of looking at cognition and identity, but one frequently discussed alongside CHAT is Actor-Network Theory (ANT), a school of thought most associated with Bruno Latour. Latour is fairly famous, especially in European intellectual culture, primarily because of his work in science studies. He's been pretty heavily attacked for his work, and I think he's done a reasonably good job of defending himself, except that unfortunately some of his books - Pandora's Hope in particular - are too dense and stylistically difficult to explain his ideas very well.
Latour devised ANT in large part to explain the semantic capacity of non-humans, particularly the objects of scientific study and the experiments used to study them. He deploys Greimas' notion of the actant to describe a wide variety of phenomena as actors, and then suggests that a more appropriate way to look at scientific work is to see scientists as people who interpret the acts of their objects of study, doing so within a cultural framework just like all other kinds of acts of interpretation. This view is opposed to a vision of scientific work as the generation of theories and testing of hypotheses. A full discussion is too far from my main topic and a bit too complicated to get into here.
What Latour claims, at some length, is that cognition should be understood not as the actions of individual agents like people, but as a process which takes place over heterogeneous networks of actors held together by different kinds of relations. Thought, for him, is always and everywhere a collective process. It is reasonable and useful to say that a network capable of cognition and action constitutes an agent in the same sense that a person does.
There is quite a lot I find appealing in Latour's approach, although I think Latour would be uninterested, and possibly horrified, to see ANT used to develop a normative political theory. This notion of collective cognition and action allows me to identify a wide variety of things in the world - companies, countries, institutions of various kinds - as things with the capacity for thought and action in their own right.
People already do this all the time. Every time someone says that "Microsoft is out to destroy Linux" or that "America made a mistake invading Iraq", they are ascribing the capacity for thought and action to an entity which is not a human being. Most people, when pressed on this issue, will say that it's a sort of verbal shorthand for saying that "some of the people who run Microsoft want to use the human and material resources at their disposal as Microsoft's bosses to destroy Linux" and that "George W. Bush and other executive office decision makers made a mistake in ordering individual American soldiers to invade Iraq." I don't think it makes any sense to see the one as a shorthand for the other.
Most of us have had the experience of dealing with some kind of customer support person, or some kind of government agent, only to be told that whatever perfectly reasonable thing we want them to do is "against policy." It is possible to have an outlook which claims that when this happens we are dealing with an individual person who has just told us "no." But, that strikes me as counter-intuitive. We don't usually blame the actual person we're dealing with for failing to do what we requested, no matter how desperate we are to have them do it. Think about it: who precisely is to blame when your local library is closed due to budget cuts? The librarian has the keys; she (it's usually a she) could keep it open if she wanted to. Of course, there would be consequences for her, but do we genuinely attribute the underlying fault to the individuals making those choices? No, we blame the government and usually we blame them in a quite amorphous and indistinct manner, since bureaucrats and elected officials also work in a context of limited choice.
I am not claiming that anything except individual people are making decisions and taking actions, just that we can attribute outcomes to the whole that we cannot necessarily attribute to individuals alone. All I am saying is that "[m]en make their own history, but they do not make it as they please; they do not make it under self-selected circumstances..." The circumstances in which people act are not static and sometimes we do act as a part of something else. To attribute only to individuals the causes of their actions is utterly contrary to the way we usually conceptualise the world and the way we behave towards each other.
Instead, I propose to step back and say that it is perfectly reasonable to identify an action with a country, a firm or another kind of institution. This is very much in line with modern thinking about institutions. The division of labour, for instance, is a perfect example of this sort of collectivist thinking. A firm is not an individual. Its cognition is not the cognition of a single person because it is impossible that a single person could do all the planning, much less execute all the actions, of a large firm. Yet, we are presented with and usually interact with a firm as a whole which singly produces whatever goods and services it sells and is singly compensated. In the same way, a government is never merely one man, even in the most absolute dictatorship. There is always an apparatus of state which cannot be micromanaged from the top. Yet, the whole point in having a state is that it should act with a single mind in those matters that fall within its sphere of activity.
I harp on this point because it is the hardest one to get people to accept. The world does not have to be analysed as if only individuals counted and there are many times when such an analysis is counter-intuitive and misleading. I should also make clear what I am not saying. People are never merely elements of a collective. Nor am I claiming that no one is ever to blame if they are "following orders." Humans have individual powers of agency and I do not seek to deny this.
Furthermore, it is extremely important to recognise that not just any group of people can be called a collective by my definition. That was what got me into so much trouble the last time I brought this topic up. A collective exists where a heterogeneous group of humans and non-human actors exist in a network of relations that creates a capacity for cognition and action as a single thing. This pretty much always means an institution of some kind. America is a collective. The Americans are just a bunch of people. America is not just the 280 million odd US citizens and residents; it is also a mass of land, an industrial plant, an armoury of weapons, a body of law and traditions that have developed over history and a set of social relations, some of which extend outside of US soil and encompass people who have never set foot in the USA. It is reasonable to say that America has invaded Iraq; it is not reasonable to say that the Americans have done so.
The last time I brought this up it was to make a point about Israel's relations with the Palestinians - that it is a category error to claim that the Palestinians are to blame for something or that the Palestinians have to do something in order for there to be peace, while to make the same assertions about Israel is not a category error. The reason is that Israel is a collective in the sense that I have described, and the Palestinians are not. Naturally, it is not a category error to say that the Palestinian Authority or Hamas is to blame for something or must do something. They are collectives and, should there someday be a Palestinian state, it too will qualify as a collective. But, "the Palestinians" will never qualify as a collective, nor will "the Israelis", "the Jews", "the Americans", "the Muslims" nor anything else that is just a bunch of people.
Now, I want you to consider my definition of a collective in light of CHAT and mediation. The term collective in the sense I am using it here describes individual people as well as institutions. Cognition is something that happens both inside and outside the brain, through networks of cultural artefacts which may include other people. This approach to cognition, action and identity has the advantage of scaling well. It treats people as just one class of collective.
Here is the first principle I want to put forward: responsibility and intent can only be attributed to collectives. Remember, individuals are collectives by my definition and can be held responsible for their acts, but it is also possible to attribute to collectives responsibility for acts without necessarily attributing them to any specific individual people. This notion is very productive in the discussion of historical injustices, as I will show later.
There is one other element that I need to bring into this discussion. It's a concept that comes down to us from Hegel via several other thinkers (attn: Brad): self-development. I want to advance self-development as the core idea of a sort of humanism. I assert that people have the right to develop themselves as they wish and that enhancing people's ability to do so should be identified as the good thing on which utilitarian discussions of policy should focus. That means that people should be able to become what they want to be; that their thoughts, desires and choices should be able to evolve in as unrestricted a manner as possible. This idea subsumes the notion of "opportunities" in liberal discourse but it is larger than that. It, too, has a sort of new-agey feel to it that I want to dispel.
Norman Geras is one of the few writers I know of pursuing this line of thought. Those interested in self-development as a normative principle tend to eschew any discussion of justice as if self-development renders it superfluous. Geras argues here that it does not, and I am inclined to agree with him. What I am hoping to do is build up a right to free self-development as a normative theory to compete with liberalism.
Naturally, self-development is not an absolute standard which exists independently of time, place and social context; nor can all developmental efforts be treated equally. If someone wants to develop into a serial murderer, they can't assert the freedom to go around killing people in the name of self-development. Furthermore, what policies specifically enhance or block self-development are always conditioned by the historical circumstances people find themselves in. To someone who is starving, food insecurity is an enormous barrier to self-development even when they have nominal political liberties like freedom of speech. It is possible, under this scheme, to come to the conclusion that a dictatorial regime which grants none of those political rights but which is able to keep people fed may actually be the juster regime. Of course, this is not to say that a regime that offers food security and political rights isn't juster still.
This is a sort of relativism, but it is quite different from the kind of vulgar relativism that serves as a strawman in a lot of arguments. What enhances the freedom of self-development in one time and place may harm it in another. Even the most widely adopted and agreed upon liberal principles are not necessarily universally applicable. I claim that the freedom of self-development is a universally applicable principle, but that what that means is highly relative.
Asserting a freedom of self-development enables us to get rid of the taxonomy of freedoms that have proliferated under liberalism. Both negative rights (freedoms from something) and positive rights (freedoms that enable people to do things) can be evaluated within the same framework: do they enhance or hinder self-development? A standard of self-development enables us to more rationally judge the classical liberal freedoms, since in practice each is to a significant degree restricted.
We can claim, for example, that freedom of speech is a necessary condition to self-development because it enhances the cognitive abilities of individuals. The principle of mediation means that when we communicate with others, we are in effect taking advantage of cognitive abilities outside of our own brains and enhancing our cognitive powers because of it. But, to do this, we need to be free to communicate with other people. Freedom of speech is, in my analysis, a freedom to think outside of your own head. At the same time, we can identify communicative acts which hinder self-development. The classical instance, of course, is yelling "Fire!" in a crowded theatre, but more realistic examples are acts of fraud and conspiracy.
Also, the freedom of self-development allows us to treat empowerment in the same framework as liberty. Access to an automobile and good roads, or to an efficient public transportation network, empowers people to develop more freely by giving them access to more of the world. Income security and social esteem enhance free self-development. And, access to good education and the freedom to learn what you want are key liberties in a self-development-based notion of rights. This last point in particular makes my philosophy appealing to someone coming from a background in child development and education.
So, let me summarise. I have advanced three principles: first, that responsibility and intent can only be attributed to collectives, of which individual people are one class; second, that people have a right to free self-development, and that enhancing that freedom is the good on which discussions of policy should focus; and third, that individuals have intrinsic value, while collectives have only instrumental value.
However, not everyone acts as if they agree. Consider carefully what I am saying. I am saying that there is never any need to "give one's life for one's country." It is reasonable to be willing to risk your life to defend your state for its instrumental value in enhancing the freedom of self-development. But, to die for King and Country, for honour, for glory, for the mother or fatherland, for your race, ethnic group, religion, whatever - I am saying that all of that is plain stupid. People possess intrinsic value and institutions only have instrumental value.
This principle is intended in part to undermine the charge of collectivism. This is a collectivist theory in that it recognises the real existence of collectives and assigns value to them. But, I am specifically saying that the worth of the individual is not their worth to the collective; instead, the worth of the collective is only its value to the individual.
I actually have a critique of capitalism based on this principle, but that is for another post.
There is a specific example from outside of language policy that this line of thought works well with: affirmative action as a form of slavery reparations. Most of the people opposed to affirmative action will point out that there is no living slave owner in America and many Americans don't even have ancestors who lived in America when there was slavery. However, even though individual slave owners are all dead, we can still attribute liabilities for slavery to various collectives: the US government, the various state governments, political parties, church organisations, even to America as a collective entity. These collectives are still alive today.
Furthermore, I don't have a problem with the logical consequence: Making a collective responsible, and compelling it to make amends, means that individuals who participate in the collective must bear the costs. I considered writing much of this post a few weeks ago, when Brad DeLong had a post on more or less the same subject and justified affirmative action on almost identical grounds to the ones I am using. America is a collective, but it is also a culturally constructed tool - one that is both symbolic and more substantial - through which Americans as individuals interact with the world. To accept the benefits of this tool - to make it a part of yourself - means accepting the costs associated with it. That means paying taxes, but it also means accepting the liability for its past injustices. Cultural artefacts have histories, they do not come into the world as they are, and the artefact and its history are not readily separable things. No individual is liable for slavery because of their ancestors, even those whose ancestors did own slaves. Everyone is liable for America's past because of their acceptance of America's present instrumental value, even those with no history in America until recently.
The problem I have with the idea of slavery reparations - even in as distended a form as affirmative action - is determining just who is owed. I can't identify black people, or people descended from slaves, as a collective by my definition. Were there any individuals who had been slaves still alive today, they would be personally eligible for compensation. But there are no such individuals.
Instead, let me offer an alternative to hereditarian theories about who should be the beneficiary of a collective liability for past policies. People alive today who suffer diminished freedom of self-development due to historical slavery are the ones who ought to be the beneficiaries of whatever America owes.
This has a number of advantages. For example, one of the notions that I've seen in circulation is that contemporary racial inequalities in America have as much to do with a shift in the way labour is employed as it has to do with continuing racism. The idea is that at some point in the fairly recent past - the 1970's in most analyses - the American economy shifted from one that offered a lot of opportunities to unskilled labourers to one that was heavily tilted against them, and that the mechanisms by which children from families of labourers gained skills in the past have disappeared. Black people, having entered this period with poor skills due to past racism, have since tended to stay unskilled and poor even as racism diminishes.
As an educated middle class white guy, this theory strikes a chord with me. I don't claim that there is no racism in America, but members of the class with the most power in America are the people least likely to think that skin colour is a good factor in making decisions about people. I think quite a few Americans are bothered by persistent racial inequality in America even though neither they nor the people they associate with are bothered by having black neighbours, co-workers or friends; and, I think people are hard pressed to understand how this can be. This theory explains how even if no one in America was racist, there could still be racial inequalities.
The logic I'm advancing still justifies assistance specifically for black Americans, as compensation for the present consequences of past injustices. It enables us to compensate black people who may not even be descended from former slaves - immigrants and their offspring - who have diminished present day opportunities for self-development, while at the same time identifying black people who appear to enjoy as much freedom of self-development as everyone else - say, Condoleezza Rice - as people who should not benefit from compensation but who bear the same liability for past injustices as other Americans.
This, I think, takes away the most pernicious problems people see in compensatory policies that make racial distinctions. My logic does not lead to the conclusion that "white people" owe "black people." It justifies targeting compensation in the same way that the injustice we are compensating for is targeted. It also makes liability conditional on benefiting from a collective rather than hereditary criteria or racial classification. It suggests that affirmative action should not merely target people by their race, but also by their social status.
So far, I have not discussed language in this framework. My analysis of affirmative action is emblematic of how I intend to bolster language rights claims on the basis of historical injustice and specify who should benefit from them and what kinds of policies may legitimately serve those ends. But that will have to wait for my next post.

Posted 2003/08/23 16:49 (Sat)