This is Tom's undergraduate Philosophy & Politics dissertation.
This essay begins with the following proposition: given that we spend a large proportion of our time working, a just society will provide or encourage meaningful work. I further assume that, rather than mounting a full frontal assault on the root of the problem, which I identify as capitalism and instrumental wage labour, we should instead seek out and broaden spaces where life can unfold freely (Gorz, 1994). Hackers, a group or label used in a sense unfamiliar to analytical philosophers, have created such spaces, and fit Melucci's description of individuals who "invest... in the creation of autonomous centres of action". Hackers have, to an extent, "oppose[d] the intrusion of the state and market" (quoted in Della-Porta & Diani, 2003) into their lifeworld since they first emerged as a social group in the late 1950s (Levy, 2001). I shall therefore set out to show how the Hacker Ethic, by which all hackers work, provides a promising model both for further research into meaningful work and for public policy in the same area.
I shall proceed by first developing an understanding of the Hacker Ethic, which will highlight a central concern of my essay: that the orientations I characterise as self-indulgent and social place conflicting obligations upon individuals. I will then analyse Marx's concept of alienation to deepen my understanding of meaningful work, and to show how the Hacker Ethic addresses Marx's concerns. Finally, I will show how, by employing my conception of alienation, the Hacker Ethic can to an extent overcome the conflicting obligations.
The word 'hacker' originated in the computer labs of the Massachusetts Institute of Technology (MIT) in the late 1950s amongst a group of programmers who believed that "all information should be free" and that "access to computers... should be unlimited and total" (Levy, 2001, p.40). Hackers now define themselves as "an expert or enthusiast of any kind. One might be an astronomy hacker, for example" (Raymond, 2003). One could work in a 'hackerish' way in any field of endeavour where universal access to, and sharing of, the tools of your trade would be positive and viable. Decades later the media began to apply the term to criminals who use computers, whom hackers in turn call 'crackers' (Raymond, 2003). 2
Pekka Himanen wrote the first major study of the hackers' attitude from a philosophical perspective, establishing a 'Hacker Ethic' with seven key characteristics: passion, freedom, their work ethic, their money ethic, their network ethic, caring and creativity (Himanen, 2001, p.141). Broadly speaking, the Hacker Ethic suggests (a) the importance of a particular kind of work, namely the kind that hackers can be passionate about, that isn't motivated by money, and that is playful; (b) a particular approach to working, which allows an individual rhythm of life and yet also places the community and cooperation at the centre; and (c) a particular approach to building productive communities, involving equal and unfettered access to information and tools facilitated by open sharing. Utopian though it sounds, it is important to recognise that the hackers who subscribe to this ethic have built much of the infrastructure of today's information society 3.
Parts (a) and (b) encapsulate a work ethic that is orientated towards work as being intrinsically worthwhile and motivating, rather than instrumental. Hacker work is, in the words of the hacker Linus Torvalds, "interesting, exciting, and joyous", "intrinsically interesting and challenging" (Himanen, 2001, pp.xiii-xvii) and "goes beyond the realm of surviving or of economic life" (Capurro, 2003). That these features are intrinsic to the work, rather than being a subjective attitude on the part of the individual, is demonstrated by a comment from an employee of Microsoft. The company competes with the work of hackers, often attacking them, and so charged an employee with the task of investigating the competitiveness of the hackers' work. Without any bias in favour of hackers, he wrote that when hacking on their software, "the feeling was exhilarating and addictive" (OSI, 1998).
It is important to note that when hackers talk about intrinsic motivation they almost always use adjectives like "fun", "passionate", "joyous" and "entertaining". In contemporary society we maintain a distinction between work and leisure, and are acutely aware of when work erodes the time we usually dedicate to leisure. To hackers, this distinction simply does not apply. Hacking on some challenging code is every bit as entertaining as playing a game of football or reading a book, albeit in a different way. Play is not necessarily "something wasteful [or] frivolous"; it can be "the experience of being an active, creative and fully autonomous person". To hack "is to dedicate yourself to realizing your full human potential; to take an essentially active, rather than passive stance towards your environment; and to be constantly guided in this by your sense of fulfilment (sic), meaning and satisfaction" (Kane, 2000).
This guiding sense is apparent in hackers' approach to work management, and specifically in how they decide what to work on. The dominant factor, according to most theorists, is the desire to "scratch an itch", i.e. to satisfy a need (Raymond, 1999) (Lakhani & Wolf, 2005). This need may be a functional one, where the hacker needs a particular piece of software, or a personal one, where the hacker wants to try his hand at a particular technique. Most hacker work is entered into voluntarily because it is "intellectually stimulating", because it "improves skills" and because of the code's "work functionality" (Lakhani & Wolf, 2005). If this is the case, then the adjectives given by Torvalds and the Microsoft employee should not be thought of as the sole motivations for work, nor solely as pleasant byproducts, but rather as factors that affect how hackers prefer to scratch their itches.
This orientation around the activity and its inherent worth gives rise to a meritocratic form of organisation. According to Levy, "Hackers should be judged by their hacking, not bogus criteria such as degrees, age, race, or position." If a hacker wants to pursue a line of work, they simply start hacking and gain approval from other hackers when their work shows merit. Access to computing equipment and advice from fellow hackers isn't restricted by bureaucracy or unjust social arrangements (Levy, 2001, p.43). Of course meritocracies place demanding barriers to entry insofar as they require a certain level of skill and aptitude. They also fail to give weight to other aspects of identity, such as race and gender, which may be "deeply felt" by some individuals (Adam, 2004). This problem can be overcome simply by acknowledging the positive aspects of equal opportunities, and of social duties that help the less able; these suggestions are coherent with the hackers' work ethic.
Weizenbaum, an early critic of hackers, suggested that hackers work "without definite purpose". He claimed that hackers, unable to set long-term goals or to analyse information in a teleological context, are aimless and disembodied. Compulsively scratching itches, Weizenbaum's hacker is like a hyperactive child who may engage passionately in frenzied activity without ever achieving anything. This is a culture that he describes as one of "instrumental rationality", the result of the belief that if some task is technically feasible then it is worth performing (quoted in Hannemyr, 1999). For a work ethic to be truly fulfilling, truly meaningful, he suggests that it must account for some kind of worthwhile aim towards which the hackers' activity is directed. However, as Hannemyr (1999) has pointed out, hackers create products not only for the pleasure of the work but also for the utility and beauty of the products themselves. Hackers value "flexibility, tailorability, modularity and openendedness to facilitate on-going experimentation". The activity of creation may not be as aimless as Weizenbaum suggests; however, one can still object that these aims are limited. Creating for the sake of abstract features in the product could still be characterised as a form of instrumental rationality, lacking wider personal or social goals such as the creation of tools that enhance personal life quality or that meet a pressing social need.
Weizenbaum could reply that such a creative act would simply fetishise the role of information and the activities that create it, without good reason to value them as abstract entities. He could further point out that such an attitude would have no objection to creating harmful products, such as software that facilitates anonymous online transfer of child pornography, if the creation of the product was particularly enjoyable and if the code was deemed to be beautiful. This lack of focus on outcome leads to an ambiguity in the work ethic: is it the case that hackers value characteristics inherent to the work and the code they create, or do they also account for the use value of the products? This ambiguity makes it difficult to say, given that they have a limited amount of time during which to hack, how they should spend it.
The hackers' work ethic, insofar as it concentrates on how one should work and why it should be motivating, invites the charge of self-indulgence. It argues for an autonomy in work that facilitates personal fulfilment without clarifying this ambiguity of values, and without accounting for social obligations that might reasonably abridge this autonomy. The Hacker Ethic does, however, encapsulate social obligations in part (c) mentioned above. These obligations can be most clearly studied in the free software movement, an applied example of the hackers' social ethic.
The free software movement arose out of the hacker subculture in MIT in the 1980s. It was started by Richard M. Stallman, who wanted to produce an entire operating system that would be developed and distributed according to the principles of the Hacker Ethic. This act would be "a way of bringing back the cooperative spirit" found in the hackers' social ethic (FSF, 2003). This spirit was being taken away by an increasingly proprietary application of copyright to software, whereby the copyright owners abridge hackers' access to information.
The hackers' social ethic is based upon three principles: First, the belief that sharing information, be it about the weather or a novel, is good; second, that hackers have an ethical duty to share the information with which they work 4; and third, that hackers should facilitate access to computers wherever possible (Raymond, 2003). These principles are closely related to the hacker work ethic, since they facilitate it by removing artificial restrictions on the hacker's freedom to use the information with which they work. It is important to note that there is no obligation to create socially valuable products, only to remove restrictions, a liberal feature that I will attend to later.
Stallman applied these axioms by subverting copyright, a limited monopoly granted by the state to a creative person in return for their increased productivity. He wrote and applied a license to his copyrighted work that gave the community free access to the information. He coined this act "copyleft", and described the licensed work as "free software", where "free" refers to freedom rather than price. The license guarantees the following four freedoms: the freedom to run the program for any purpose; the freedom to study how the program works and to adapt it to one's needs; the freedom to redistribute copies; and the freedom to improve the program and to release those improvements to the public.
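To make the mechanism concrete, the sketch below shows the kind of copyleft notice a hacker might place at the top of a source file. The wording paraphrases the standard GNU GPL version 2 notice; the author's name, the file and the trivial program around it are my own hypothetical illustrations, not Stallman's text.

```python
# greet.py -- a trivial program carrying a copyleft notice.
#
# Copyright (C) 2005  Jane Hacker <jane@example.org>   (hypothetical author)
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.

def greet(name: str) -> str:
    """Return a greeting. The function is incidental: the point is the
    notice above, which grants recipients the four freedoms while
    requiring that redistributed versions remain equally free."""
    return f"Hello, {name}!"

if __name__ == "__main__":
    print(greet("world"))
```

The notice is the whole of the legal mechanism: copyright gives the author exclusive rights, and the license then grants those rights to everyone on condition that they are passed on intact.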
Stallman conceived of these freedoms as an ethical duty on the part of the hacker to society. Not sharing information in this way is "the wrong treatment of other people", "anti-social" and it "cuts the bonds of society" (Stallman, 2004a) (Stallman, 2004b). These bonds are hinted at when he writes that not sharing with others is "divisive" because it reduces the emphasis on "helping one's neighbors" and on working "for the public good". To do this is an obligation, but not one so strong that we must always work for the public good (Stallman, 1992) (Stallman, 2005). Even the suggestion that we ought to work for the public good on occasion seems to contradict Raymond's reluctance to mention working on socially valuable products.
In a seminal position paper, Stallman describes the harms that non-free software causes, which parallel the freedoms his licenses guarantee. In the first place, "fewer people use the program" (Stallman, 1992). People might be unable to use a program because of 'natural restrictions', such as blindness, or because of 'artificial restrictions', such as copyright. In correspondence, Stallman confirmed to me that only harms caused by artificial restrictions need concern a hacker, suggesting that hackers have no obligation to ensure that, for example, a blind person can use their program as well as somebody with sight (Stallman, 2005). The second harm he identifies is that "none of the users can adapt or fix the program", caused by an application of copyright that obstructs access to the program's source code. This also causes the final harm, which is that "other developers cannot learn from the program, or base new work on it" (Stallman, 1992). Again, a person with no programming skills or with blindness would suffer these harms regardless of artificial restrictions.
The free software philosophy operates on the harm principle, suggesting that placing any artificial restriction on the sharing of information causes a social harm that is never justified and that must therefore be avoided. From this Stallman develops a golden rule, that one must always share software freely under the terms described above. Though he suggests that it is a Kantian moral position, one could also advance a rule utilitarian or rule consequentialist account, that one ought to share software freely because the good consequences, and the avoidance of the aforementioned harms, always outweigh the bad. He writes that "if anything deserves a reward, it is social contribution. Creativity can be a social contribution, but only in so far as society is free to use the results. If programmers deserve to be rewarded for creating innovative programs, by the same token they deserve to be punished if they restrict the use of these programs." 5. This analysis is admittedly trivial, but necessary since the ambiguity of his moral philosophy, and his probable focus on either a Kantian, utilitarian or consequentialist account, causes problems in my account of alienation later in this essay.
The free software social ethic also has an important positive component that goes beyond Raymond's weak or liberal emphasis on rights and access. Though these are an important part of the free software ethic - Stallman maintains that the freedoms his licenses guarantee are a human right (Berry, 2004, p.70) 6 - it also emphasises values such as cooperation and communication in productive communities. "The conception of the social good is strongly communitarian and privileges both a vision of a social order that assigns rights and obligations, and one that is fair and equitable" (Berry, 2004, p.73). The rights and obligations that this position implies are taken as a Kantian categorical imperative and should be scrupulously followed by all hackers.
Stallman's position poses two problems. In the first place, one might reasonably ask why it is that we have any obligation to share information but not to produce it. The ethic is neutral towards an idle hacker who does no work but hostile to a busy hacker who refuses to share his work. It may be simply that social sanctions against idleness already exist, and so the Hacker Ethic is only concerned with an additional sanction not present, that against exclusive ownership of information. Raymond, for example, is an ardent supporter of the free market, and so presumably he believes that we needn't worry about idleness because the need for money will compel a hacker to work. Stallman's more left wing political stance, on the other hand, explains his reference to working for the public good. It is safe to say, then, that the Hacker Ethic does place value on individuals performing socially useful work, but that there is no consensus on where the responsibility for this lies, be it in the market or in social obligations.
The second problem is to do with Stallman's aversion to natural restrictions. By 'natural', Stallman doesn't just refer to biological restrictions but also to other restrictions that we would normally think of as outside the direct control of the hacker. So both blindness and poverty in the user are natural restrictions that the hacker cannot directly overcome, or at least that is the prevailing opinion in the society in which Stallman lives. But the aversion remains strange. Imagine if I were blind or if I had no money. I would be unable to benefit from free software in any way, be it to use it, to adapt it or to learn from it, if the program doesn't work with accessibility software or if I am unable to purchase a computer. Would I not be less free, according to Stallman's criteria, than a person who faces no natural restrictions but is nonetheless unable to study how the program works because of artificial restrictions? Surely the hacker breaks the bonds of society more strongly if he refuses to make his software usable for the majority of his fellow human beings out of a desire for other work that would be characterised as self-indulgent? Furthermore, buying a computer for the poorer person would seem to heed Raymond's call for universal access to computers, and Stallman's call to work for the public good.
An amended rule based upon his four freedoms might state: where the good consequences of a hacker overcoming restrictions outweigh the bad, the hacker has a duty to overcome these restrictions, be they natural or artificial. In the examples given above, adapting my software for blind people or buying a computer for a poorer person would have obvious good consequences, whilst abridging my autonomy and setting me back financially. Given that Stallman suggests we ought to accept a lower wage writing free software rather than attempting to "get rich" through writing non-free software, the hacker will have to heed his social obligations in most cases.
In response to this claim, Stallman simply wrote to me that "to demand an impractical level of clarity in practical applications of ethics simply brings it to a standstill, since it sets the bar impossibly high" (Stallman, 2005). In other words, his philosophy is based upon a utilitarian principle that, if taken to its logical extreme, becomes impractical or even undesirable but which, when applied in moderation, becomes desirable. Aside from the fact that this violates his desire for a categorical imperative, because it is impossible to apply the ethic in full to all members of society, it also suggests that he is wrong either in thinking the bar is too high, or somewhere in the construction of the social obligations that set the bar so high.
In his defence, it would be absurd to suggest that a hacker must go out of his way to educate the whole of society to an advanced level of physics so they could use his physics program, even if society wanted to use it. This would involve the hacker volunteering a phenomenal amount of time and resources to educating society, with limited discernible public good. It is not so absurd, however, to suggest that a hacker should spend a small proportion of his time adapting a program essential to a group of people so that they can use it, even if that work isn't intrinsically interesting for the hacker.
The free software philosophy, as an example of the hackers' social ethic, seems to be a strongly socialistic counterweight to the self-indulgent work ethic. Stallman writes that "a user of software is no less important than an author... their interests and needs have equal weight, when we decide which course of action is best" (Stallman, 1992). That is to say that the author of some software has obligations to himself and to society. The self-indulgent obligations are met by working in a joyous and passionate way on software that is intellectually challenging, that develops skills and that has significant use value; the social obligations are more complicated. Stallman posits a weaker social obligation that can be met by distributing any information produced under a free, copyleft license. I have advanced a stronger social obligation, more consistent with the calls for universal access, that can only be met by producing socially useful information, distributing it under a free, copyleft license and purchasing equipment for those whose poverty denies them access.
A hacker will automatically meet Stallman's social obligations without prejudicing his self-indulgent obligations simply by virtue of working according to the Hacker Ethic. By releasing his work under a free license, the hacker won't prejudice his ability to work freely, passionately, joyously and so on. In fact, as part of a community that also meets this obligation, the free distribution of information will facilitate his self-indulgent work practices. A hacker may, however, have to temper his self-indulgent obligations to meet the stronger social obligations. For example, making a piece of software usable for blind people may not challenge the hacker, it may be uninteresting work, but it should nonetheless be undertaken for the sake of universal access to that software.
In practice, of course, the point of moderation between the obligations bestowed by the work ethic and the social ethic is decided not by an ethical principle but by the judgement of each individual hacker. But the question remains as to whether or not the Hacker Ethic has anything to say on this matter. For Torvalds and Himanen, once a hacker is self-sufficient the Hacker Ethic can account for characteristically self-indulgent work practices. Stallman, by contrast, is interested in the social relations and obligations that hold once self-sufficiency is given. Unlike in other cases, where the two demands are clearly antagonistic, the mechanisms that hackers employ to meet their social obligations facilitate their self-indulgent work practices, and vice versa. Given the close connection between these two aspects of the ethic, it seems both possible and attractive that some common framework could account for both aspects and help resolve the conflicts.
I propose that the Marxian theories of alienation are a good candidate for such a framework. In dealing with relations between the worker and his labour, the worker and his product, and the worker and other people, it presents a unified theory with a common language. This allows me to overcome the discursive chasm between Himanen's and Stallman's accounts of the Hacker Ethic whilst retaining both the spirit and language of each. In focussing on relations of production and creation it addresses the central concerns of the Hacker Ethic: how we work and what we do with the products of our labour.
Marx's theory of alienation identified four kinds or aspects of alienation, which I shall analyse in turn: alienation from labour, from products, from society and from our species essence. For Marx, alienation is not a matter of psychology; we cannot make our activities meaningful by changing our attitudes towards the activity nor our perspective on the context of the activity. Rather, alienation arises from the material conditions of our labour, and the relations those conditions set up between ourselves and our product, between ourselves and the activity, between ourselves and others, and finally between ourselves and our species being. To overcome alienation, then, requires that we change the way we work such that these relations become more healthy.
A product is the embodiment of a worker's labour, its "objectification", its "realisation" (Marx, 1992, p.324). There is a relationship between the worker and his product that is socially constructed, which will have both concrete and abstract components. The concrete is between the worker and the product itself with its own specific and unique qualities. The abstract is based on a perception of the product's generality, ignoring many of the specific qualities but appreciating its uses and its status as the realisation of the worker's creative powers. "The full and productive relatedness to an object comprises this polarity" (Fromm, 1963, pp.113-114).
Under capitalism, the product is a commodity that is traded on a market, rather than remaining a product for the worker's own consumption. A commodity is produced according to the needs of the buyer rather than the self-sustaining needs of the worker, and so the worker's labour becomes subordinate to the division of labour within society (Marx, 1960, p.71). "The worker's needs, no matter how desperate, do not give him a license to lay hands on what these same hands have produced" (Ollman, 1980, p.143). Thus by losing any concrete relationship with the product, the worker suffers a loss of reality.
When the product becomes a commodity, the abstract relationship is transformed, and the generality of the product becomes one of market value rather than one related to the concrete product. The worker loses his abstract relationship with the product and gains a relationship with congealed market value. That value is determined as much by the market as by the efforts of the worker and the abstract and concrete qualities of the product. When the commodity is sold the worker is left with money, and so the worker loses the abstract relationship with the product, representing a further loss of reality. "In exchange for his creative power the worker receives a wage or a salary, namely a sum of money, and in exchange for this money he can purchase products of labour, but he cannot purchase creative power. In exchange for his creative power, the worker gets things" (Rubin, 1975, p.xxv).
This is more than a matter of the worker losing control over the product; rather than being experienced as a result of his creative power, the product is experienced by means of the other commodities bought with the product's market value. By contrast, a relationship whereby the worker experiences his labour as the results of his creative power, where he can use the product and value it not as a means to an end but for its own sake, is one in which the reality of the product is preserved. And a product that increases the worker's creative power would be a gain in reality.
Marx says that the relationship under capitalism, with its unhealthy concrete and abstract components that no longer relate to the product at all, causes alienation (Marx, 1992, p.324). He goes on to say that the product takes on an "external existence" that confronts the worker as "hostile". One can understand this in one of two ways. According to the first, not only is there no guarantee that an increase in productivity will improve the worker's living conditions, but it is likely to increase the power of the hostile system that keeps him in those conditions by giving the capitalist more than the worker receives. The devaluation of the worker increases in proportion to his productivity (Cox, 1998). According to the second interpretation, the unhealthy relationships directly diminish the reality that the worker created and so are hostile, as opposed to relationships that increase that reality or those that are neutral in this respect.
Elster raises a problem relating to the objectivity of this kind of alienation: is the worker in fact external to and in a hostile standing with the product, or does he merely feel external? If it is the former then an individual can overcome alienation only by changing the mode(s) of production that give rise to the alienated relationship, whereas if it were the latter the worker could overcome alienation by changing his state of mind, either by replacing his negative feelings with positive ones or by accustoming himself to them such that they no longer make him feel miserable (Elster, 1985, pp.74-76). I would suggest that Marx would be opposed to the psychological explanation: the worker cannot give meaning to his product when he has no real connection to it, concrete or abstract. It is this disconnection that gives rise to the feelings Elster describes.
Another problem is that, according to this account, the only way to overcome the two losses of reality would be to keep control of the product. But this would make productive social relations impossible; exchange of products in any form would represent a loss of control and therefore of reality. If there is nothing special about a worker being in control of his own products, and instead we worry about workers being in control of a sum total of reality that provides a sum total of creative power, then we can at least conceive of social relations based on equally valuable products, such that nobody loses any reality or creative power. Labour-based theories of value are one likely candidate.
I would suggest that in exchanging one product for another that increased one's creative powers, without the abstractification of money, one is at least mitigating the alienation caused by moneyed exchanges. This weaker claim means that already we have to forfeit any hopes of an unalienated society, but it allows us to rescue some semblance of pragmatism whilst providing a basis for lessening alienation. This can then be applied to the Hacker Ethic.
Because hackers deal with information they can overcome alienation in both the concrete and abstract components of the relationship. In the first place, because their products are non-rivalrous they can continue to use their product whilst sharing it freely with others. By employing free software licenses, as per the Hacker Ethic, they retain a concrete relationship with the product that can increase the hacker's creative powers. Hackers revere work that enables them to achieve new things (Levy, 2001, pp.46-48).
Secondly, by creating the product for its own sake - i.e. for its valuable generalities such as its usefulness and intellectually challenging design - rather than as a means to an end, the hacker can continue to appreciate the product's abstract qualities, such as artful or concise expressions of complex ideas (Levy, 2001, pp.43-45). Though a hacker may sell his product, the commodification isn't absolute because the free software licenses guarantee that the hacker can retain his full and productive relatedness to the product. Commodification in this context isn't directly connected to the worker's relationship with the product; it isn't hostile as in the capitalist context. It would only be relevant if, as Torvalds, Himanen and Stallman suggest, the hacker was unable to be self-sufficient and so had to treat his products primarily as commodities. In this situation the market value of his work would determine the concrete and abstract relationships.
Marx claims that, if the worker is alienated from the product, which is itself the objectification of labour, then it follows that labour is an activity of alienation (Marx, 1992, p.326). This is dubious, since the alienation only occurs when the product is finished and becomes a loss of reality; there is nothing in the activity of production that suggests alienation takes place, according to my explanation of Marx's account of alienation so far. But Marx develops a similar terminology to explain how the relationship between the worker and his labour is also alien.
Marx ascribes various attributes to alienated labour: He says it is "external", which he further defines as making the worker "feel miserable and not happy", that which "does not develop free physical and mental energy", that puts the worker in a state where he cannot "feel himself" whilst working; alienated labour is also forced labour; it is a means to an end, not satisfying in itself; and finally it is, as with the product, "directed against himself" (Marx, 1992, pp.326-327). Unalienated labour involves the "free actualisation and externalisation of powers and ... abilities" (Elster, 1986, p.101). This notion can be broken down into two component parts: the freeness of the activity of labour, and the capacity of actualisation and externalisation of one's powers through labour.
Central to this list of adjectives is the idea that the labour ought to be entered into freely as a conscious activity. Men should gain freedom through labour that is free "from autonomous social forces and laws" (Gray, 1986, p.178), and that allows the individual to choose both what work he does, and when and how he does it. Marx gives the example of an individual in a capitalist society who is forced to specialise in hunting, fishing or literary criticism (Marx, 1968, p.45). Unregulated market forces control not only the value of any commodity, as previously mentioned, but also therefore what labour workers can feasibly enter into if they are to sustain themselves; these forces are ultimately autonomous, in that the worker has no control over them whatsoever.
Using Lukes' analysis of power (Lukes, 1974), I understand this free activity that Marx describes as requiring three kinds of autonomy: first, the worker must be able to directly influence decisions about the work he does (this may be more or less compatible with "reasonable" social forces such as utilitarian or egalitarian considerations); second, the worker must be able to affect the perspectives and agendas that determine the work he does, such that social laws cannot unfairly predetermine the scope of his activity; third, the worker must be able to enter into discourse that determines how society understands work and in particular the kinds of work he wants to engage in. On the second point I say "unfairly" because it would be absurd to posit a scenario in which an unskilled labourer could work as a neurosurgeon without proper training. Similarly, as noted in my section on free software, we may want to account for reasonable social forces such as the relative social utility of various kinds of work.
In terms of self-realisation, Elster defines activities that lend themselves to it by reference to "some further goal or purpose" that "can be performed more or less well", and that offer "a challenge that can be met". One can contrast self-realisation to passive consumption, suggesting that any productive activity - i.e. one in which the individual has an effect on the world external to him that is creative rather than destructive or insignificant - lends itself to self-realisation, more or less (Elster, 1986).
One could further object that there may be individuals who wish to pursue productive activities that have no social utility whatsoever, or even negative utility. For example, a scientist might want to build a nuclear weapon that could misfire and destroy an entire city. It wouldn't seem acceptable in this situation to allow the scientist to fulfil his natural capabilities. I will look at this tension in more detail later in the essay.
The crucial point for Marx is that, insofar as we accept that each individual will have natural capabilities that can be developed and give the individual new productive capacities, society shouldn't limit the scope of the individual's development and productive activities. Alienation occurs where the reverse is true, where workers are restricted either in their ability to develop their natural abilities, or to pursue a variety of productive activities.
One consequence of modern capitalism makes this kind of alienation particularly stark, namely that "the lives of millions of people are reduced to the narrow limits of their undemanding work. Fantasy, rather than creative effort, then becomes the vehicle through which they escape it, and fantasy itself, packaged as accessible pleasures to be bought in the market place, is relentlessly commoditized" (Williamson, 1997). The essence of the capitalist division of labour is deskilling, an economic and political process whereby the workers' tasks are reduced to "mechanical routines that can be quickly learned". The worker has no control over his tools and "becomes a mere appendage to an already existing material condition of production", resulting in a kind of "operational autonomy" akin to the autonomous social forces and laws posited by Gray. The worker, meanwhile, suffers a "knowledge deficit" and a "solidarity deficit, defined with respect to the levels of understanding and community required for self-rule" (Feenberg, 1991, pp.27-28).
In the context of this essay - that of computer hackers - one might balk at the notion that highly qualified software engineers are engaged in "mechanical routines" that provide little or no scope for the development of skills and a growth in knowledge. Many in the industry, however, think that this is the case. Scott Adams' popular comic Dilbert satirises a software engineer's life inside a cubicle, churning out code according to the wishes of clueless managers, working according to Taylorist management principles (Adams, 1996). Alan Kay, a legendary computer scientist, thinks that computer science degrees in the US are becoming little more than "vocational training". Deskilling needn't be as extreme as forcing a worker to manage an operationally autonomous machine, and software engineers may be afforded more scope for developing their skills than many other workers, but they are nonetheless susceptible to deskilling and a loss of operational autonomy. The absence of deskilling and a total operational autonomy in a workplace wouldn't invalidate the place of these factors in a theory of alienation; rather, it would suggest a division of labour that has overcome alienation to some extent.
Thus the worker is alienated both in the division of labour and in the division of himself. The former forces him to engage in productive activities that preclude him from using his productive capabilities, leading to miserable and unfulfilled feelings, whilst the latter precludes him from being able to develop his nascent productive and social capabilities through his work. The alien character of labour under capitalism is demonstrated, Marx says, by the fact that if we weren't forced we would avoid work "like the plague" (Marx, 1992, p.326).
As with the worker's relation with his product, it is difficult to see how the relation with production could be entirely unalienated, as conceived by Marx. In the first place, Marx's conception of a perfect, unalienated individual involved a bewildering diversity of productive activities, all combining to satisfy every creative and productive potential. He wrote that:
Elster rightly criticises this vision as "fanciful", accusing Marx of "wishful thinking". It would impose huge burdens on the worker to know about all of his creative and productive capabilities, to have the resources required to fulfil them and to be able to pursue each of them whilst guaranteeing self-sufficiency. Even the most self-indulgent worker bent on total self-realisation would have a difficult time achieving it. However, even if one cannot posit a system that completely overcomes alienation, one can suggest systems that provide more possibilities for "autonomy, creativity and community" that can mitigate alienation (Elster, 1985, pp.89-92).
Unalienated labour, then, requires operational autonomy and a variety of productive and creative activities that actualise the worker's capabilities. Through it the worker must meet some challenge and be able to judge how well the challenge was met. This is exactly the kind of productive activity that hackers engage in. By working to scratch an itch, with a desire to improve skills, meet an intellectual challenge and all the while increase one's productive powers, the hacker overcomes the external nature of alienated labour. Nobody would voluntarily enter into labour that made him feel miserable, or that didn't develop free mental energy. Hackers, entering voluntarily into their work with passion, cannot be characterised as working against themselves, against their productive nature. Moreover, by rejecting the distinction between work and leisure and by emphasising instead the active realisation of potential, "the experience of being an active, creative and fully autonomous person" (Kane, 2000), the Hacker Ethic orientates all life activities around this unalienating maxim. The Hacker Ethic's emphasis on play and fun creates the basis for opposition to top-down management of hackers' work, and for the positive alternative of self-organisation and an overall operational autonomy (Adam, 2004).
The Hacker Ethic also goes beyond Elster's understanding of self-realisation to include an account of passion. A hacker might, for example, spend her working day studiously hacking some problematic code, realising her capabilities in abstract mathematics, debugging and other technical pursuits. Alternatively, she might stay up late into the night for days on end in frenzied hacking sessions, passionately trying to solve the problem with code as elegant and concise as possible. According to Elster's criterion, if the outcome were the same - i.e. she realised her capabilities equally in each case - then we should consider each case equal. The Hacker Ethic, however, argues that the latter case is a better example of meaningful work because it truly engaged the individual; it enriched her life, gave it focus, to return to Levy's account of the early MIT hackers (Levy, 2001, p.45).
One may object that this maxim is too demanding, and makes justifying necessary drudgery extremely difficult. Again, this objection has two components: an objection related to personal tasks such as cleaning and cooking, which can be dismissed by building in the need to first be self-sustainable and then to adhere to the maxim; and secondly an objection related to socially useful or necessary tasks that conflict with this maxim, which I will return to later in this essay.
A further objection is that, though there is emphasis on a diversity of tasks in the Hacker Ethic, there is little emphasis on developing all of your capabilities, and in particular those not related to software. Though many hackers do engage in other creative activities it cannot be said that they all develop their full mental and physical energy. This is a result of a hesitance on the part of most hackers to advocate a perfectionist account of personal development such as Marx's; they favour a more liberal or welfarist approach that emphasises the freedom to engage in fulfilling activities as well as the virtue of engaging in these activities in general. But Marx's account of developing all of our capabilities seems too demanding, both because we are limited by time and by our capacity to know of all of our capabilities. I may need five lifetimes to properly develop all of mine, including some that I wouldn't know about until I pursued certain activities; I may, for example, have the potential to be a talented police officer, but I won't know until I try it, and I cannot spend my life trying every life activity in the hope that I might find and develop my capabilities. The Hacker Ethic's emphasis on pursuing tasks that we know will develop our capabilities, rather than pursuing tasks for other, external reasons such as the potential for financial gain, is more reasonable.
Because information sharing is such a powerful force in hacker communities, hackers actively help and encourage each other to develop skills. This is one instance in which the ethic takes a more proactive and perfectionist approach, both creating and promoting an environment in which hackers develop their skills fully. This is not identical with promoting the realisation of all capabilities, as Marx suggested. Instead, the Hacker Ethic emphasises the value of developing and realising capabilities in all productive activities. The onus, as Stallman suggested in the context of social obligations, is not to work but, if one is to work, to do so in a particular way. This can be seen in the importance given to "the freedom to study how the program works, and adapt it to your needs" (FSF, 2004) and in the emphasis on documenting everything for the benefit of nascent hackers.
Crucially, following my analysis of operational autonomy using Lukes' analysis of power, we can see how hackers are able to exercise operational autonomy. There isn't a single organisational structure adopted by all hacker communities - some adopt democratic structures, others voluntarily defer the final decision making to a benevolent dictator (with the proviso that they can always "fork" the project by taking the code and developing it in a new community with a different organisational structure), whilst many allow structures to develop organically (Brand & Chance, 2005). In each of these arrangements, however, hackers can choose what code to work on; they can influence the agendas that direct their work, both by entering into the open discussions about the direction of the project(s) they work on and by simply opting out of any projects whose agendas conflict with the hacker's own priorities; and finally, though it is neither forced nor prevalent in every productive forum, hackers can and often do enter into discussions about work itself and what it means to engage in meaningful work.
Marx asserts that "the relationship of man to himself becomes objective and real for him only through his relationship to other men" (Marx, 1992, p.331). The alienation of the worker from his product only becomes real when his product is bought by a consumer, an act that constructs the hostile standing of product to worker and also of worker to consumer. If, as is the case under capitalism according to Marx, the relationship between worker and consumer is one of domination, of the non-producer over production and its product, then we can see how the relationship becomes one of man to an alien being. This of course rests on the assumption that a healthy relationship - the opposite of an alienated relationship - is one where the parties are relatively equal and where nobody suffers a significant loss of reality, which I think is fair.
The consumer is also alienated when he receives a product that "does not belong to him", since he put none of his labour into it, making the commodity alien to him (Marx, 1992, p.331). Marx refers here to his contention that value and ownership can only be bestowed by labour, and that this value and ownership is only conferred on the creator, so that the consumer cannot own nor value the product. Again for the sake of preserving exchange-based relationships, we can instead say that a product may lose value when it is exchanged as a commodity. This kind of alienation is obviously not as bad as losing your product, since the consumer may gain creative and productive power through purchasing the product. But where the product is treated entirely as a consumable (i.e. it isn't used productively), and where it doesn't contribute to the sustenance of the worker (e.g. food), then the product is neither a gain nor a loss in reality; the consumer simply gains a thing that has limited use value. Therefore at both ends of this relationship, worker and consumer, people can be alienated.
There are also other relationships in which a worker stands, those in relation to his fellow workers, which are marked by competition rather than cooperation and that put the worker in a hostile and disconnected standing. Workers must compete for jobs in the first place, and then within the workplace they must compete for better positions with better wages, and even to keep their job. They only cooperate insofar as it benefits the company, i.e. for the end of capital. In general, workers have no power to change the nature of these relations; though in many contemporary workplaces workers are afforded notionally managerial positions, they must manage according to the needs and ends of the company, i.e. capital accumulation. Though this may seem an overly stark and pessimistic view of the workplace, it is the logical application of the principles of capitalism, and so any instances of cooperation - any real relationships between workers - are incidental and a sign of the worker rebelling against capitalism's constraints. Where cooperative work environments exist, unless the arrangement has capital advantages over more competitive environments, they will generally be under pressure to change to a more competitive basis. There is limited space for the cooperative spirit that Stallman has tried to restore (FSF, 2003).
Marx's heavy emphasis on the importance of social alienation - suggesting that alienation only becomes real through the worker's relationship to other men - seems to contradict, or at least call into question the seriousness and reality of, his claims about the other forms of alienation. It implies that alienation in the activity of production, which is a matter of the activity becoming external and unfulfilling, can only occur in a social context. This would mean that an isolated worker who produced according to his needs would always perform fulfilling work regardless of how it affected his creative powers. Alternatively, Marx may mean that no productive activity can be real in such an isolated context, so that whatever the worker did would never be real and fulfilling until placed in a social context. Both of these interpretations are flawed, if we are to take seriously his previous claims about the importance of the relationship between a worker and his product, and between a worker and the activity of his work. I would suggest that, to be consistent, placing work and products in a social context makes them more real because in the new inter-personal relationships they provide the worker with more use value, more social value, in other words more reality.
In the case of hackers, as I have already mentioned, the hacker won't lose the product to the users, and the users won't passively consume the product. This is generally true of all computer software, whether or not it is produced by hackers and released under a free license. But the license guarantees that the hacker and the user receive exactly the same rights with respect to the product, and that both are endowed with the product's full creative and productive potential. Relations between hackers as workers are based upon cooperation and the free sharing of both the workload and products; they are characterised by a positive cycle, whereby the more hackers produce and relate to one another through sharing, the more productive and communicative powers they have. Furthermore, as I mentioned in the section on alienation from the activity of work, the Hacker Ethic emphasises operational autonomy, meaning that hackers have the power to change or opt out of any relationships that they don't like. Giving control to all parties, avoiding relations of domination and fostering "a community of goodwill, cooperation, and collaboration" are explicitly stated as goals of the free software movement, for example (Kuhn & Stallman, 2001).
As well as creating healthy social relations through its mode of production, the Hacker Ethic also accounts for why its social relations are more healthy, suggesting that to do otherwise is to cut "the bonds of society" (Stallman, 2004b). By emphasising the need to work "for the public good" and to "help one's neighbours" (Stallman, 1992; Stallman, 2005) - including people we are connected to through productive and interpersonal relations - the Ethic puts social sanctions on alienated relations.
The Hacker Ethic, therefore, overcomes the alienation between producer and consumer by invalidating the distinction between the two, and by basing the relationship upon the future potential endowed by the product. The user may simply consume the software, or at least only use it and never modify or even share it, but that is his choice. There is nothing in the relation with the product, nor with the person who shared it, that forces his hand in this respect.
This raises a problem that is related to the question of social obligations I raised at the end of the section on free software. It may be the case that hackers aren't aware that a relationship is unhealthy, either because they are too altruistic or simply not sufficiently self-aware, and so may fail to change or opt out of the alienating relationship. Worse still, a hacker may feel compelled to remain in a relationship that abridges his operational autonomy because of other considerations. These could include egalitarian obligations such as to create usable software for everybody (where the hacker has to give up a proportion of his time to work on less fulfilling code), or to provide computer equipment to poor people (where the hacker has to be motivated by what will earn the most money, rather than by factors intrinsic to the product and the activity of work).
If the interests of the workers and the users "have equal weight" (Stallman, 1992) then these relationships are equal ones, not characterised by domination. If one takes Stallman's weaker social obligations and discounts the stronger obligations I posited, then this study of alienation describes the normative basis for my claim that social and personal obligations aren't in conflict in the Hacker Ethic, but rather intimately linked. Hackers maintain healthy social relationships - Stallman's "bonds of society" (Stallman, 2004b) - by working in communities and releasing their work under a free software license. Their doing so in no way prejudices their ability to pursue what I have characterised as healthy, unalienated work and to remain in healthy relations with their products. The Hacker Ethic, if followed properly, allows a worker to achieve his "species-being", Marx's vision of the essence and highest expression of mankind. A hacker is able, through his work, to strive towards his essence, his individual life (Marx, 1992, p.328).
If, however, one assumes the stronger obligations, and one takes seriously the problems that these cause for the account of productive relations and alienation, then there is no obvious way out. A strong egalitarian assumption presents the possibility of social relationships fuelling alienation in one aspect (the hacker's relation to his activity of work) whilst soothing alienation in another (the hacker's relationship to the needy users). Analysing the Hacker Ethic in terms of alienation at least provides a common framework within which these conflicts can be understood. They are not simply a matter of individual autonomy being abridged by social obligation, nor simply vice versa; rather, they demonstrate the necessity of balance between the two demands. Improper prioritisation of one would simply shift the locus of alienation.
There also remains the problem of kinds of work that have a negative social utility. I gave the example earlier in the essay of a scientist who wants to develop a nuclear weapon that could misfire and destroy an entire city. Should a community of hackers allow one of their colleagues to pursue such work? This asks not only whether or not a hacker ought to prioritise certain kinds of socially useful work, but also whether or not it is acceptable for hackers to abridge an individual's operational autonomy in blocking them from pursuing that work. A less extreme example may be a hacker who has shown no capabilities in the field of medical science, but who nonetheless would like to spend a few years studying the field, which will mean a break from other socially useful work. If there is no guarantee that the hacker will be any more socially useful at the end of her study, should hackers accept and even support this sabbatical?
One possible solution lies in an aspect of Marx's conception of alienation that I haven't touched upon yet. I have omitted it so far because it seems to contradict his emphasis on operational autonomy; this problem, and a proper explanation of the idea, need addressing before I show how it can help resolve the central conflict of personal and social obligations. Marx describes how labour ought to be "consciously regulated by [the workers] in accordance with a settled plan" (Marx, 1996, p.40). Common sense suggests that we cannot immediately grasp what our capabilities are, and what work we will find most fulfilling; even as children in relatively cooperative environments we take time to learn what we enjoy, be it passive consumption or a childlike productive activity. The ancient Chinese philosophy of Taoism understands our individual essence - our Tao - as something inexpressible, a self-defining basis for our character. When we act in accordance with our essence we find that our body and mind work "self-so", or without forcing. These ideas, despite their poetic and vague expression, can lend insight into Marx's idea of our essence. It is not something that we can rationally come to understand or discover, but rather something that we happen upon. We cannot express why we find certain activities more fulfilling, we simply do, and we understand this when we partake in them.
The most pressing objection to Marx's settled plan, then, is that no system of regulation or organisation can hope to determine each individual's appropriate activities. Instead, it should set up conditions whereby individuals can discover and then partake in their essential activities. This might satisfy Elster, who says that Liberalism "forgets that the choice is to a large extent preempted by the social environment in which people grow up and live". Yet it must avoid the heavy-handed paternalism advocated by some Marxists lest it violate the workers' operational autonomy. "The solution", Elster continues, "must be a form of self-paternalism", whereby people can, individually or collectively, shape their choices relating to their labour (Elster, 1986, p.98). We can interpret Marx's "settled plan", then, not as a kind of micro-management by some wiser beings for the benefit of the workers, but as a kind of macro-management that creates the conditions for unalienated labour.
Returning to the central conflict, then, if hacker communities can collectively shape their choices relating to their labour, then they can meet some or all of their social obligations consensually in a way that isn't alienating. A community may, for example, undertake a study to see how usable their software is. They would then identify certain shortcomings and draw up a list of tasks to resolve the usability problems. This would be exactly the kind of situation that I have, so far, identified as causing a conflict between the individual hacker's self-indulgent needs and the social obligations that the tasks represent. However, if the community were to draw up this list and then, by common consent, distribute the tasks evenly - a "settled plan" of sorts - then the decision would not create problems in the same way as under capitalism. If the worker were to work on a usability problem because he were paid then he would have no control over the matter, the product he creates would be the realisation of somebody else's needs and the whole venture would be performed towards the goal of capital accumulation. In the hacker community, by contrast, the worker still exercises control, if not absolute control, over every aspect of the process; the worker can realise his capabilities to some extent, if not completely or most appropriately; and the worker is oriented towards a socially meaningful goal. This is exactly how hackers already operate: tracking, assigning and resolving problems with public mechanisms, and employing social incentives and sanctions to encourage collective action.
In this essay I have first sketched out an understanding of the Hacker Ethic with two components: a work ethic and a social ethic. In clarifying the somewhat fuzzy articulations of each ethic that currently exist, I have raised one central concern: how one can balance social obligations with those that I characterise as self-indulgent. Hackers themselves already practise a balance of their own devising, but they lack a coherent normative basis for deciding how to achieve the most ethical balance.
I have then explored Marx's theory of alienation, developing an understanding of meaningful work as constituting relations between worker and product, worker and activity of labour and worker and other people. This allows me to describe the conditions and relations of meaningful, unalienated work by reference to each relation. I have shown that each relation has equal importance such that, contrary to more individualistic accounts of personal fulfilment, a worker can only be fulfilled when she stands in healthy relations to other people and, contrary to some strongly socialistic accounts of meaningful modes of production, a worker can also only be fulfilled when she stands in healthy relations to her work and her products. Thus modes of production can be judged according to how well they resolve conflicts between these three relations, by how well they balance competing demands on a worker's time and energy.
There is a slight flaw with my argument, but one that I think should be solvable. The hacker work ethic as described by Himanen operates on a kind of virtue ethic, as does my account of alienation; both concern themselves with promoting virtuous qualities in hackers. But Stallman's account of social obligations, as I mentioned on page 10, is morally ambiguous and is most likely based upon a Kantian, utilitarian or consequentialist position, none of which are immediately compatible with the virtue ethics of Himanen and my account of alienation. Thus an area for further work involves developing a coherent moral basis for all three accounts, which I suspect would involve developing a virtue ethic for social obligations in relation to intellectual products and production. Questions include: Is Stallman's position better captured by Kantian ethics, rule utilitarianism or rule consequentialism? Can Stallman's position be restated in terms of virtue ethics? Can we advance a virtue ethics of intellectual products and production, of work, or even more generally of meaningful life activities?
If we limit the scope of social obligations to those articulated by Stallman, which deal only with the relations defined by the way you distribute your work after it has been produced, then the Hacker Ethic achieves an ideal balance. Social obligations in no way abridge the hacker's scope to meet his self-indulgent obligations, and in fact the two kinds of obligations in the hacker mode of production complement one another. However, if we take more seriously stronger social obligations that deal with what you produce in the first place - i.e. could your products be more socially useful? - then the Hacker Ethic only represents an improvement upon the capitalist mode of production. This improvement is significant and shouldn't be denigrated, not least because it provides further opportunities for research into the justice of work distribution. These include questions of where we draw the line between reasonable and unreasonable social obligations, and how workers should make decisions about how to meet them.