Sunday, October 30, 2005

Naturalizing information

In this account of consciousness, "information" is a key concept -- it's the idea of information that allows the distinctive gap or "loose connection" between the two subcomponents of consciousness that in turn underlies its distinctive flexibility as a control system. But it's also a concept burdened with problems, including one major one for any use in a causal or mechanistic explanation -- it appears to require consciousness itself as generator and/or receiver of information. In an interesting post on Conscious Entities a while back (Sep 11/05), it was pointed out that "Information is a slippery but seductive term", at least partly because, though it can be treated in a very rigorous and well-defined manner, in its more familiar use it carries with it the notion of "meaning", with hard-to-avoid overtones of a conscious agency for whom the information would be meaningful. Can information, in other words, be meaningful at all within a purely causal mechanism?

In trying to answer that, it might be useful to consider a couple of instances in which something that at least looks like "information" plays a key role in an operation that is without doubt mechanical. One such case is just that ubiquitous modern machine, the computer. Leaving aside, for now, the input and output of such devices (in which human agents are usually involved, though they needn't be), their data store and operational instructions (software) -- i.e., the means by which their behavior is controlled -- certainly have an information-like quality to them. Another example would be that chemical machine, the living cell, in which the sequence of nucleotides on the long DNA coding molecule constitutes something that looks very much like information, used to construct the protein molecules that do the work of the cell. In both these cases, it's true that if you look closely enough, you can see the strictly causal (and/or stochastic, in the case of the cell) processes that are actually operating the mechanism, but such a close-up view also seems to lose an important or distinctive aspect of their functioning.

To see this, consider two kinds of machine: a basic mechanism involving straightforward causal processes and pathways; versus a mechanism based upon an information-like, intermediate structure that consists of patterns of small differences. The "small differences" may be differences in voltage, in magnetization, in nucleotide code, or some other content, but what's really determining the operation of the machine as such is the difference, not the content of the difference. In this sense, we could say that the "semantics" or meaning of any such pattern of difference is simply the difference it makes in the operation of the machine. (Information has been defined -- see Floridi, quoting Bateson, in the Stanford Encyclopedia of Philosophy -- as "difference which makes a difference".) The advantage of such information-based machines is just their adaptability -- their "programmability", in a real sense, whether by an agent or by "natural selection".
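The "difference, not content" point can be sketched in a few lines of code -- a toy illustration, not anything from the post itself, with the token names and actions invented here. The machine below dispatches purely on which token it sees; "reprogramming" it means changing the mapping, not the machinery:

```python
# A toy "information-based machine": its behavior depends only on the
# pattern of differences among tokens, not on what the tokens "are".
# (Token names and actions are invented for illustration.)

def make_machine(program):
    """Build a machine from a mapping of distinct tokens to actions."""
    def machine(token):
        # The content of `token` is irrelevant beyond being distinct:
        # only the difference it makes in the mapping matters.
        return program.get(token, "do-nothing")
    return machine

# The same mechanism under two different "programs" -- programmability
# without any change to the underlying machinery.
m1 = make_machine({"A": "advance", "B": "retreat"})
m2 = make_machine({"A": "retreat", "B": "advance"})

print(m1("A"))  # advance
print(m2("A"))  # retreat
```

The "semantics" of token "A" here is exactly the difference it makes to the machine's operation -- nothing more.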

Generally, then, I think it's possible to understand "information" in a kind of naturalized or "de-agentized" sense that still retains some notion of "meaning". In this blog, I've been using this same naturalized concept of information to suggest a mechanism for consciousness, in which the intermediate, information-like structure is just the phenomenal world as created and presented by consciousness. Qualia, in this understanding, are simply the tokens in that structure, or the bearers of the difference that "makes a difference".


Thursday, October 27, 2005

Qualia: the "hard problem" as runaway recursion

Picking up around where I left off in the previous post, I want to look a little more closely at what happens when we try to apply the explanatory processes discussed there to the phenomena from which all explanations originate. As that post indicated, we start with the world that's presented to us -- or, really, with that world as already pre-organized into chunks and hierarchies of abstraction by the particular cultural imprint that we "learn". Upon that basis, we've collectively constructed very elaborate structures of explanation which embrace ever wider areas of experience, and in the process become ever more abstract (where "abstract" means less experiential content and more pure structure or form). One source of such abstraction has proven especially effective because it can be applied in a well-defined and repeatable manner: the number system, which, combined with standardized units and sensitive measuring devices, allows us to construct "physical descriptions" that are almost entirely quantitative. And this, together with a realist/representationalist epistemology that views such descriptions/explanations as coming ever closer to the real nature of things, can lead to the rather odd view that quantities, rather than being just usable abstractions based upon an old and simple technique of counting, must be the basis for all physical descriptions at their root (with the clear implication that quantities are what lie at the core of "reality", whatever that's taken to mean).

Having painted ourselves into this corner, so to speak, it's not surprising that we have trouble when we try to turn this highly structured apparatus of abstraction back on the phenomena that it itself is made out of: namely, phenomenal experience, "feelings", or so-called "qualia". The problem is that it seems as though any physical description simply has no space in it for "feeling" as such, even though all such descriptions are founded upon just such feeling. And it's not just the fault of an over-emphasis on quantification -- with representational epistemology, we assume (in one sense quite appropriately) that what's real is what can at least be observed. But this assumption, when made in the context of the sort of self-investigation noted above, leads to a kind of runaway recursion (i.e., one with no stop-condition), as we try to get a perceptual hold on these "feelings". And this in turn generates a kind of head-scratching perplexity -- and an almost comic image of investigators peering into the brain, hoping to catch sight of "seeing", or touch "touching", or hear "hearing".

And so qualia, or simple, basic "feelings", have become, in David Chalmers' now famous formulation, the "hard problem of consciousness". Unable to get a quantitative or even perceptual handle on such "feels", and yet faced with the embarrassing fact that they do seem to be there, scientists, being practical people, have tended just to ignore them, or at least to focus on what they had a hope of measuring. Philosophers, being less practical, have adopted a number of diverse tactics -- the time-honored one, since Descartes, of course, being the splitting of reality into dual realms, but others include spinning qualia off as some sort of puzzling side-effect called "epiphenomena", hoping that "feeling" might turn out to be one more bizarre "quantum" effect, or even locating "feeling" throughout the universe in some sort of pan-psychic "neutral monism".

It's interesting to compare this with the intuitions of people in general (i.e., non-philosophers). On the one hand, I've found that it's difficult to get people to appreciate that there's a problem with qualia at all -- they tend to view phenomenal experience simply as immediate, obvious and not inherently problematic; on the other hand, they have no problem denying such experience to a brick or even to a computer. And while this sort of intuition is philosophically naive, I think this might be one of those times when "common" sense has remained a better guide than more sophisticated analyses. In any case, one of the beneficial effects of the sort of epistemological inversion recommended previously is that it might allow us to, in a sense, demystify or normalize qualia -- instead of experience being viewed as some sort of strange and inexplicable irruption in the physical world, it's restored to its role as the substance of the world, and the fundamental stuff out of which all explanations or descriptions are built. And in that way, when we want to construct explanations for mental phenomena, we might actually be able to view qualia as functional -- as, e.g., necessary carriers of information that enable the critical loose connection between the two components of consciousness.


Sunday, October 23, 2005

The world and its explanations: an epistemological side-trip

This blog touches on such a melange of disciplines -- from "cognitive studies" (itself an amalgam of cognitive science, psychology, neurological science, cognitive philosophy, philosophy of mind, etc.), through linguistics, to cultural anthropology -- that you might almost say it was undisciplined. In any case, the deep and ancient waters of epistemology seem a bit of a stretch even for such a melange. But, as I indicated in the post on "general consciousness" below, the idea that consciousness actually creates the world that we experience (going Wordsworth one better) has philosophical implications that shouldn't be avoided even if we wanted to. I said I'd return to this, so here I am:

If I thought it were really feasible, I'd say that the created world of consciousness is the only meaningful sense of the notion of "world", and await the accusations of idealism with resigned equanimity. I'm not, in fact, advocating idealism, and don't have any doubt that there exists an environment that's independent of consciousness and to which the created world is one response. But I want to say that I think there are some compelling reasons to leave that immersive environment just as such, and save the word "world" for the sort of knitted-together totality that consciousness presents us with.

For a long time, of course, it's been commonly thought that consciousness provides us with a representation of the "external" world, but a flawed one. That is, what we get is simply an "appearance", which is at odds with an underlying "reality", and it's been the task of philosophers, originally, and then of scientists, to penetrate the veil(s) of appearance and arrive at the truth, or the really real. This appeals to an intuitive understanding of how appearances can mislead, and even though the philosophers tended often to obscure things more than clarify them, the scientists have had a series of unquestionable successes, at least of a certain kind (as I'll get to). But at the same time, this scientific truth has been getting further and further removed from human experience, removing layer after layer of appearance, until there seems to be little "appearance" left at all, and all that we -- "we" being the few adepts with the requisite abilities and training -- have left to cope with it are the diamond-hard structural abstractions of mathematics. And, as one scientific revolution succeeds another, there seems to be no end to this process, with new questions forming faster than old ones are answered, and the real truth receding at least as fast as we approach it. In any case, beyond all human reach, there hovers the tantalizing idea of the "thing-in-itself", the noumenon, the final truth. We seem to be left peeling an onion that has no core.

So, in contrast to that situation, let me suggest another. Do you want to know the truth? Think you can handle the truth? Then all you have to do is remove layers upon layers of cultural sediment, let go of all theories, ideas, concepts, and even thoughts and words, and leave yourself open, as far as you can, to the moment-by-moment experience that presents itself to you. Become like an infant, in other words -- not a Wordsworthian infant, "trailing clouds of glory", no, but like a pre- or non-linguistic consciousness. And that's as close as you can ever come to the really real, to the underlying, bedrock truth of things. Because that just is the world. (In this context, the notion of the "thing-in-itself", the noumenon, is a mere conceptual mirage, a will o' the wisp.)

Now, the problem with the truth, in this sense of the word, is just that it doesn't do us much good. Well, that's not quite right, since the conscious world is above all a practical construction, but it's not nearly as useful as even some fairly simple explanations of experience. Notice, however, that "explanation" now becomes not a "penetration" of appearance, but rather a kind of overlay, a way of structuring experience through appropriate and opportune abstractions (see below, on the symbol) that provides us with practical and effective ways of planning our behavior in the world. And the test of an explanation is not how closely it matches some supposed, but unreachable, "external" world, but rather how well it functions in serving our immediate and long-term individual and social needs. This is a pragmatism, certainly, but one built upon a kind of epistemological inversion -- in this sense, what science is really doing is not peeling away layers of appearance, but adding layer upon layer of explanation, each layer extending the reach of experience that it covers. It's not surprising that, in this process, it necessarily becomes ever more abstract, nor that the process should be potentially without end. And all the while we can happily make do with various intermediate levels of explanation that are closer to our actual experience, without feeling that we're somehow being fooled by appearances. We might, in fact, want to change the meaning of "truth" to refer not to correspondence to the world (since that we have immediately), but rather to the breadth of experience covered by a particular explanation -- in that sense, truth would be a relative term, and one explanation could be more or less true than another.

Well, I said near the start that "if it were feasible" I'd assert that the world that consciousness creates for us is the only meaningful sense of the word -- but I doubt that it's really feasible in general. None of us, including myself, can really get away from the common use of "world" to mean the external environment. Still, I think that this sort of re-orientation can have significant and, I hope, helpful consequences in trying to address some questions and issues of a less common nature.


Saturday, October 22, 2005

A Recap

This might be a good place to stop and summarize what's been proposed. The key posts, in order, have been:

From this, the following points are most important:

  • First of all, there is a crucial distinction between consciousness as such -- aka "awareness" -- and language-based consciousness (which enables self-consciousness or self-awareness, reflection, etc.). That is, consciousness in general does not imply self-awareness, introspection, etc. -- it simply implies (or is identical with) awareness, or experience, or so-called "raw feels".
  • Consciousness just as such, or just as awareness, is a general behavioral control system that has evolved in mobile organisms. The crucial feature of this control system is that it introduces a gap or "slippage" between environmental stimuli and behavioral response, and this gap allows for more complex and adaptive responses by providing an opening for other inputs such as memory, expectation, and system state.
  • The gap of consciousness is a function of its two subcomponents -- a world-creating mechanism and a behavior-determining mechanism -- and, crucially, of the loose connection between those subcomponents. "Loose connection" just means that the world that consciousness creates is made up purely of information-bearing tokens -- i.e., irreducible, qualitatively distinct signals, or "qualia". Since it's only the information contained in their qualitative distinction that provides the input to the decision-making algorithm of consciousness, qualia are not just functional, they're required.
  • The advent of language introduces a new order of complexity into this control system, by providing a way to separate experiential memory into "chunks", structure those chunks through abstractable features, and provide a handle for such structured fragments of experience by associating them with specific experiential signs, such as word-sounds. Among the most powerful of such sign/symbol combinations is that for the "self" or "I", which is something that only comes into being through language, and which provides the basis for a recursive self-awareness or introspection.
  • Unlike general consciousness, however, this special or language-based consciousness is an inherently social phenomenon, and is learned through a developmental process that imprints an entire "memetic" structure on each individual's mental apparatus, a structure that can be called a cultural imprint. This structure undergoes constant change not just in every "communicative encounter" between individuals within a cultural group, but also as individuals use the structure to think and formulate plans on their own. And this change provides the basis for Darwinian-like selection pressures on the cultural imprint, and hence for cultural adaptation and evolution.

To summarize the recap, the key ideas here are:

  • General consciousness or simple awareness is not the same as linguistic consciousness, and any discussion of consciousness should keep the distinction in mind.
  • The function of general consciousness is as a highly flexible behavioral control system.
  • The qualia of consciousness are signals or bearers of information essential to its operation as a mechanism.
  • The advent of language produces a new or special kind of consciousness characterized by a socially learned and maintained cultural imprint or "memotype", which in turn provides the basis for social/cultural change or evolution.


Monday, October 17, 2005

General consciousness again

Here I'd like to step back from cultural or language-based consciousness, and look again at just the phenomenon of general consciousness, or what's often called simply "awareness" or "sentience". General consciousness, in other words, is consciousness without language. That such awareness is possible without words, in the first place, should be evident from our common experience of pre-linguistic infants, where I don't think anyone would doubt that they're aware of colors, tastes, touches, sounds, etc. -- that is, that they experience so-called "raw feels". But this common observation is then easily applied to the animals, like dogs or cats, that often share our lives as well. What about budgies? Goldfish? Again, it seems apparent just from the fact that they react to stimuli that they must also experience those stimuli in some manner. But as we move toward progressively simpler organisms, this becomes less apparent -- it's not so hard to imagine systems with relatively few behavioral options reacting to their environment in some hard-wired manner, as in reflex arcs, without necessarily any sort of internal experience. And when we go below the level of nervous systems altogether, it becomes hard to believe that "responses" to stimuli are anything other than merely chemical or physical.

Of course, ultimately everything is merely chemical or physical. But what these admittedly simple observations imply is that at some point in the sequence from, say, amoeba to dog, consciousness in the sense of experiential awareness appears -- at some point it becomes meaningful to assume that it is indeed "like" something at least to be that organism. If we accept that (if only for the sake of the argument) and if we reject dualism in all its forms, then this is saying that the chemical/physical behavior of neurons has produced a peculiar kind of mechanism, a mechanism that, however astonishing it may seem, is responsible for the appearance of the phenomenon of awareness. The hypothesis of this blog, as I've said, is that such a phenomenon is functional -- that is, that awareness, as such, has an adaptational purpose, in that it introduces a gap or distance between stimulus and response, which makes the stimulus available but not determinate. And this in turn allows for an exceptionally flexible form of behavioral control, needed for systems that have a wide range of behavioral options and operate within complex environments.

Now, there is an enormous body of literature -- putting it mildly -- that deals with the issue and issues of consciousness (see this collection for at least a start). But in what follows, I'd like to just set it to one side, temporarily (and perhaps foolhardily). Because I want, first, to be able to quickly sketch out some of the implications and elaborations of the hypothesis above -- in particular, that the mechanism of consciousness must have two main components -- two sides of the gap, so to speak -- one of which "presents" the environmental stimuli in some structured manner, while the other "apprehends" such presentations in some "loosely coupled" manner (where "loosely coupled" is intended to mean that the presentation is but one input among others -- others such as memory, expectation, internal state, etc. -- to the apprehender, whose job is to determine behavior or response).

Consciousness as World:
This is to make the claim that consciousness really does create some version of an "inner world" -- "inner" in the sense that it's created by, and only exists within, the mental apparatus of the organism or system; and "world" in the sense that it is a whole that unifies the various sensory sources of environmental stimuli in a presentation that is centered on the body and environmental location of the organism. I deliberately used the word "presentation" here, and avoided "representation", because I wanted to emphasize the fabricated nature of this manifold, as well as its practical or functional aspect. (Later I want to come back to some of the philosophical issues and implications surrounding this usage.)

Such a world (I hope later, as I say, to make it more apparent why I think the adjective "inner" is actually redundant here) is made up out of a number of distinct "channels" of sensory input, corresponding to the different kinds of sense organ. And each of those channels provides for an indefinite number of qualitatively distinct, but otherwise irreducible, "tokens" of information (usually termed "qualia"), identifiable by channel (e.g., as a color or a sound), and corresponding to at least some of the difference in impinging stimuli. Note that different channels are not only identifiably distinct from one another, they display different characteristics as wholes -- sounds, for example, can be organized into a linear spectrum (among other characteristics) from high to low, whereas color seems to display a circular one, even though both channels render environmental stimuli that display a linear variation in wavelength; smell seems to display simply a large collection of distinct qualia without any question of a spectrum; touch seems to involve a number of distinct "spectra", such as smooth to rough, soft to hard, warm to cool, etc. (perhaps implying that there are really a number of distinct sensory channels commonly associated as "touch"?). In any case, it is the job of this component of consciousness to take the input signals from various sensory sources and render them simply as distinct, irreducible tokens on the various channels of the manifold that is the world.
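As a rough sketch of this world-creating component -- where the channel names, bin widths, and token format are all invented for illustration, not part of the proposal itself -- one could imagine each sensory channel as a function that reduces raw stimulus values to qualitatively distinct but otherwise opaque tokens:

```python
# Sketch of the world-creating component: each channel renders raw
# stimulus magnitudes as irreducible tokens, identifiable by channel.
# (Channel names, bin widths, and the token format are assumptions
# made for illustration only.)

def make_channel(name, bin_width):
    """A channel maps a continuous stimulus to a discrete token.
    The token carries no content beyond its channel and distinctness."""
    def render(stimulus_value):
        token_id = int(stimulus_value // bin_width)
        return (name, token_id)  # e.g. ("color", 11) -- a bare quale-token
    return render

color = make_channel("color", bin_width=50.0)   # wavelength in nm
sound = make_channel("sound", bin_width=100.0)  # frequency in Hz

def present_world(stimuli):
    """Unify the channels into one presentation (the 'manifold')."""
    return {
        "color": color(stimuli["wavelength"]),
        "sound": sound(stimuli["frequency"]),
    }

world = present_world({"wavelength": 570.0, "frequency": 440.0})
print(world)  # {'color': ('color', 11), 'sound': ('sound', 4)}
```

The point of the sketch is that downstream machinery never sees the wavelength or frequency -- only which token, on which channel, corresponding "to at least some of the difference in impinging stimuli".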

Consciousness as Actor:
If the fabricated World of consciousness is a presentation, there needs to be something that it's presented to or for. And this is the other major component of consciousness -- the other side of the gap that is its defining feature -- which we might as well call the Actor. Doing so, of course, immediately invites comparison with the oft-ridiculed "homunculus" theory of consciousness, which posits that there's a little person inside our heads who's monitoring sensory input on screens and dials -- which, apart from its inherent silliness, is clearly avoiding or begging the question of the nature of consciousness by merely locating it one step removed. Now, I don't think anyone has ever actually proposed such a theory, but it gets used often enough as a straw man in the course of proposing other, contrasting, theories. And that's the use that I'll make of it here too -- as a way of distinguishing this notion of an "Actor" component of consciousness from efforts at merely putting off the problem. The main idea is that the Actor component is itself just a mechanism, or a mechanical component of a larger mechanism, not a ghost in the machine, and not an agent at all. That is, "Actor" is just a label for a mechanism that outputs behavior based upon inputs from the "World" component of consciousness, but also, importantly, from other sources, such as a memory store, an anticipation generator, and various internal "drives". The algorithms, so to speak, by which this output is generated might vary considerably depending on the general complexity of both the conscious system and its environment, but a common theme might well be the ability to formulate a "goal" or intention based upon the general state of the system, and then the ability to prioritize or focus upon certain portions of the various input sources on the basis of that intention -- which would give rise to the phenomenon of conscious "attention". 
(Note that even though words like "goal" or "intention", or even "prioritize", commonly imply will or agency, here they're just used metaphorically, as is often done in describing the structure and operations of, say, software systems.)
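A minimal sketch of such an Actor might look like the following, where everything -- the drive names, the attention table, and the decision rules -- is an invented illustration of the structure, not a claim about the actual mechanism. It combines the World presentation with memory and internal state, forms a "goal", and weights its inputs accordingly:

```python
# Toy Actor: outputs behavior from several loosely coupled inputs.
# The World presentation is one input among others, never determinate
# on its own. (Drives, goals, and the attention rule are illustrative
# assumptions.)

# Which channels each goal attends to (an invented table).
RELEVANCE = {"hunger": {"smell", "sight"}, "safety": {"sound", "sight"}}

def act(world, memory, drives):
    # 1. Form a "goal" from internal state (highest-priority drive).
    goal = max(drives, key=drives.get)

    # 2. "Attention": prioritize the portions of the input relevant
    #    to the goal, rather than weighing every stimulus equally.
    relevant = {k: v for k, v in world.items() if k in RELEVANCE[goal]}

    # 3. Choose behavior from goal + attended input + memory.
    if goal == "hunger" and "food-token" in relevant.values():
        return "approach"
    if memory.get(tuple(relevant.values())) == "danger":
        return "flee"
    return "explore"

world = {"sight": "food-token", "sound": "noise-token"}
print(act(world, memory={}, drives={"hunger": 0.9, "safety": 0.2}))  # approach
```

Note that nothing here is an agent: the "goal" is just a `max` over drive values, and "attention" is just a filter -- metaphorical uses of agency words, exactly as described in the parenthetical note above.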

To wind up this already too-long posting, let me address a couple of the more obvious questions that might arise:

Why does this "Actor" component need that fabricated "World"? Why doesn't the Actor simply receive, and respond to, sensory input directly?

Because the functional advantage of consciousness as a control system is precisely that it's not tied directly into its environment -- that is, the point of the created "world" of consciousness, in a very real sense, is that it acts as a buffer between Actor and environment. It does this by rendering the barrage of environmental stimuli as standardized, "tokenized" bits of information -- "qualia" -- on a pre-structured, unified, and persistent manifold -- which is all that's meant by "World".

But if both components of consciousness are "mechanical" or deterministic, then how is this proposal fundamentally any different from any other general assertion of causal, neurological, or algorithmic bases for consciousness?

Well, since this proposal is certainly a mechanistic one, it isn't fundamentally different from any other of the sort. But, first, this proposes an actual structure for such a mechanism -- two parts, loosely coupled (like a limited slip differential). Second, such a structure provides a reason for the logical necessity of something like qualia -- irreducible and qualitatively distinct tokens -- as being the only way that the two parts can be connected in a loose or non-determinate manner. Third, the two part structure offers an explanation for the functionality, or evolutionary effectiveness, of consciousness, since the loose connection provides a control system of unusual flexibility and adaptability. And fourth (though this is a little more vague), such a structure provides some basis for, and explanation of, the often-noted "freedom" of consciousness and of will, in its escape from causal determination by the world or by any single source of input (even though, like all mechanisms, it is determined by the sum of its inputs).

Despite the hand-waving about "metaphors" and so on, isn't this "Actor" still just a way of ducking the "hard problem of consciousness", since you never really say how such a mechanism would actually work?

No, I don't say how such a mechanism would actually work. Skepticism here is entirely reasonable, and at this point I'm really just putting forth a hypothesis or suggestion. But I'd make two general points in response: first, I think the suggestion is not implausible, and specific enough to be interesting. Second, since the mechanism being proposed is a general one, it should be possible to instantiate such a structure in an artificial system such as a robot -- in other words, the real test of this proposal would be the production of even a simple version of a synthetic consciousness.


Thursday, October 13, 2005

The nature of the symbol

The symbol is where the rubber, as they say, meets the road in cultural evolution. For all the analogies with biological evolution, memes are not genes, and the operations of the sign/symbol pair are quite different from the molecular machinery involved in genetics. In particular, as we've already seen, the various cultural imprints making up a cultural social system have nothing like the degree of similarity of the molecules that make up the genome of a species, nor the exact replicating mechanism of DNA. But though the similarity of individual cultural symbols is rough, it's also robust, since it's constantly updated in every communicative interaction. And at the same time, the symbols are used as elements of thought or as means of planning effective behavior in the world, and their own effectiveness is increased or reduced on the basis of their results. So I think it's evident that there are indeed causal, and hence machine-like, processes at work here, albeit, at this point, of a largely unknown neurological kind.

Without venturing into neurology, let me suggest a few more characteristics of the symbol as such, that might help provide a more concrete idea of how cultural change proceeds:

  • Symbols derive their individual relevance and behavioral effectiveness from the fact that they are built out of our conscious experience -- they are experiential containers, in a sense, that group or classify experiences on the basis of the abstraction of features. E.g., the symbol corresponding to the word "tree" is an abstraction from our experiential encounters with a number of different concrete instances.
  • Because of this, symbols can be related to one another hierarchically, in levels of abstraction, like containers that contain, and are contained in, containers -- e.g., "tree" relates to "poplar" in one direction and to "plant" (as opposed to "animal") in the other.
  • If we think of the "shape" of a given symbol as determined by its abstracted features, then this shape is a malleable one, as those features are constantly being adjusted under the pressure of new experience, including especially communicative experience. That shape also affects how it fits with other, related symbols.
  • At any point, new symbols can be made up by agglomerating new abstractions from experience -- we might decide we need a new category of "plants suitable for urban landscaping", for example, and give such a thing a name (sign).
  • Also at any point, new symbols can be formed by introducing a distinction that breaks an existing one into new, more usable parts. (In fact, these may well be the twin fundamental operations that underlie all thought: comparing/distinguishing, associating/discriminating, synthesizing/analyzing.)
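These container-like operations can be sketched as a toy data structure -- all feature names here are invented illustrations -- with symbols as feature-based classifiers that nest hierarchically by feature subset:

```python
# Toy symbols as experiential containers: a symbol classifies an
# instance by its abstracted features, and symbols nest hierarchically
# when one symbol's features are a subset of another's.
# (The feature sets are invented for illustration.)

class Symbol:
    def __init__(self, name, features):
        self.name = name
        self.features = set(features)

    def matches(self, instance_features):
        """An instance falls in this container if it shows all features."""
        return self.features <= instance_features

    def contains(self, other):
        """'plant' contains 'tree' because its features are more abstract."""
        return self.features <= other.features

plant = Symbol("plant", {"living", "rooted"})
tree  = Symbol("tree",  {"living", "rooted", "woody", "tall"})
shrub = Symbol("shrub", {"living", "rooted", "woody", "low"})

print(plant.contains(tree))                                 # True
print(tree.matches({"living", "rooted", "woody", "low"}))   # False
```

Adding a feature (distinguishing) splits a container into more usable parts, as "woody"/"tall" vs. "woody"/"low" splits woody plants into trees and shrubs; dropping features (associating) merges containers upward toward "plant".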

Consider, for example, a micro-incident: X sees an odd-looking plant (to her), and says to Y: "Look at that odd tree," and Y replies, "Actually, that's a shrub, not a tree." First, X's tree-symbol is altered to some extent by the need to include this new experience -- and then it's quickly altered again by Y's reply, which serves at least momentarily to bring the two into greater (but certainly never exact) "semantic alignment". X may try to argue the point, may be puzzled about it, or may simply accept the correction, but in any case, both will have had their symbols for "tree" and "shrub" -- as well, perhaps, as their notions of each other -- affected, however slightly, by both the perception and the communication. And the effect will not, typically (though it can), be a matter of their conscious wills or agencies -- it will be a matter of the mechanism of cultural adaptation.
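The micro-incident can be run as a toy simulation, where the weighted-feature representation and the 0.5 update rate are invented stand-ins for whatever the neurological process actually is:

```python
# Toy "semantic alignment": after a communicative encounter, each
# party's symbol is nudged toward the other's -- closer, never exact.
# (The feature-weight representation and learning rate are
# illustrative assumptions, not a claim about the real mechanism.)

def align(mine, yours, rate=0.5):
    """Nudge my feature weights partway toward yours."""
    keys = set(mine) | set(yours)
    return {k: mine.get(k, 0.0) + rate * (yours.get(k, 0.0) - mine.get(k, 0.0))
            for k in keys}

def distance(a, b):
    """Total disagreement between two symbols' feature weights."""
    keys = set(a) | set(b)
    return sum(abs(a.get(k, 0.0) - b.get(k, 0.0)) for k in keys)

x_tree = {"woody": 1.0, "tall": 0.9, "branching": 0.7}
y_tree = {"woody": 1.0, "tall": 0.6, "single-trunk": 0.8}

before = distance(x_tree, y_tree)
x_tree = align(x_tree, y_tree)   # X absorbs Y's correction
after = distance(x_tree, y_tree)

print(after < before)  # True: greater, but never exact, alignment
```

The update happens as a side-effect of the exchange, not by anyone's decision -- which is the sense in which the alignment is a matter of mechanism rather than of conscious will.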

    Cultural materialism?

    In the previous post I mentioned the possibility that a very significant change in cultural evolution may have been brought about by a purely cultural development -- i.e., not by a new technology, or the development of a new technique or mode of production, but rather through the emergence of a new cultural form, or through, in a sense, a purely cultural mutation. The change in this case was the rise of urban civilizations, usually associated with the advent of farming. But what if farming, at least at low levels of intensity, had been around for some time, many thousands of years perhaps, without giving rise to cities? What if that development, instead, hinged on the idea that the charisma of a god-like ruler was attached to the office rather than to the office-holder, and was supported or maintained by an increasingly powerful class of priests/scribes/administrators, with their associated ritual and mystique? What if it was only through such a purely cultural development, in other words, that significant numbers in the more densely populated regions could be organized and mobilized to supply the labor that not only built the cities, carried on the increasingly complex commerce, and developed more systematic farming practices, but also conquered the surrounding peoples who lacked such an awe-inspiring "meme" (though, of course, once conquered, didn't lack for long)?

    Well, that particular idea, however interesting, might well be wrong. The point I want to make, however, is that such an explanation, were you to make it, would appear to expel you from the camp of the cultural materialists and force you in with the cultural idealists. And then I want to make the point that that appearance might be wrong -- first, because the explanation doesn't involve the idealist implication that human agency or will is what directs cultural evolution; and second, because culture itself -- even considered as an imprint on the mental apparatus or neural structure of language-using primates -- is a material thing, with at least as much material effect on the world as a practical technique like irrigated farming. What's important here isn't so much the labels or the disciplinary camps, but the understanding that culture isn't a mere abstraction (nor a matter of conscious choice) but has a real, concrete, and material presence in the world.

    Wednesday, October 12, 2005

    An interlude: on evolution, teleology, and "complexity ratchets"

    A major reason for talking about culture in this way -- that is, in terms of the functionality of culture, cultural imprints or memotypes, the changeable nature of the symbol/meme, etc. -- is to be able to talk about, and understand, the phenomenon of cultural evolution. But evolution in itself raises an interesting and controversial side-issue that is especially pertinent to cultural evolution: the question of whether or not evolutionary processes display a "directedness" -- i.e., the puzzle of "progress", or the notion of teleological change without a director. The conventional answer, I think, is that such "progress" is merely an illusion, clung to for self-flattering reasons (Stephen Jay Gould being a typical proponent of such an answer). The counter-assertion is that "complexity" is the long-term direction of change -- a claim it would be helpful to sharpen with some further specifications along the following lines:

    • define "complexity" as something like the total number of parts of a system and the total number of connections between such parts;
    • stipulate that the purported increase in complexity over time applies only to entire ecosystems (i.e., is an assertion concerning the increase in average complexity of systems within an ecosystem);
    • further stipulate that the change is a stochastic process, meaning that ecosystem complexity fluctuates in the short term and increases only on average, and unevenly, over the long term.
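    As a toy illustration of the working definition above -- complexity as the number of parts of a system plus the number of connections between them -- here's a minimal sketch in Python. (A sketch only: the example "systems" and their parts are invented for illustration, not drawn from any data.)

```python
# Toy illustration of the working definition:
# complexity(system) = number of parts + number of connections among them.
# The example "systems" below are invented for illustration only.

def complexity(parts, connections):
    return len(parts) + len(connections)

# A crude prokaryote-like packet: few parts, few connections
simple = complexity(
    parts=["membrane", "cytoplasm", "coding molecule"],
    connections=[("membrane", "cytoplasm"),
                 ("cytoplasm", "coding molecule")])

# A crude eukaryote-like system: more parts, and more links among them
elaborate = complexity(
    parts=["membrane", "cytoplasm", "coding molecule",
           "nucleus", "organelle"],
    connections=[("membrane", "cytoplasm"), ("cytoplasm", "nucleus"),
                 ("nucleus", "coding molecule"), ("cytoplasm", "organelle"),
                 ("organelle", "coding molecule")])

print(simple, elaborate)  # the second system scores higher: 5 vs. 10

# Per the second stipulation, the claim concerns the *average* complexity
# of systems across an ecosystem, not any single system:
ecosystem = [simple, simple, simple, elaborate]
average = sum(ecosystem) / len(ecosystem)
```

    Crude as it is, this at least makes the measure operational: any change in an ecosystem's composition moves the average up or down, which is what the stochastic, long-term claim is about.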

    On the basis of that, then, a useful approach to making the case for directed change is the temporal cross-section, whereby we consider an entire ecology at long intervals of time and try to assess the complexity level of organisms or systems within that ecology. And it seems reasonably clear, at least in a quick examination of the history of both biological and cultural systems, that such cross-sections do in fact display increasing complexity. (We could also note that the universe itself appears to display a certain directedness to its evolution, not simply in an entropic, “running down” sense, though that certainly is happening, but also in a steady increase in heavier, more complex elements over time.)

    Without going any further into the dispute at this point, then, I'd like to just make the hypothesis that the phenomenon of directed evolution is real, and ask what could account for it. Gould is at least right to assert that Darwinian natural selection as such does not explain it. It could only do so if complexity in itself somehow conferred greater adaptational advantage upon a system. But while that might be so in particular instances, there are also lots of both reasons and evidence to show that, other things being equal, simplicity wins out over complexity in a straightforward competition. If the phenomenon of a long-term, or “teleological” trend toward complexity is nonetheless real, then other explanations are needed.

    One such might be something like a "complexity ratchet". Random fluctuations within an ecology (biological or cultural) will produce systems of varying levels of complexity from moment to moment. But suppose that occasionally one of the more complex systems takes on a form that’s more stable than others – this would raise the average complexity of the ecosystem as a whole, then, at least as long as the system persisted. And if, very occasionally, that system was in a form that was self-reproducible, then the increased stability would generate an enduring increase in ecological complexity. Which, interestingly, seems very much like what those particular chemical systems known as “life” represent.

    Of course, if such a ratchet were to violate the Second Law of Thermodynamics, then it won't work – it would be as illusory as perpetual motion. The usual "out" here is to say that the Second Law really only pertains to closed systems, not necessarily to particular systems or subsystems within those environments. But the real question is simply whether it's possible at all, in a stochastic environment, and within the constraints of the Second Law, for a more "complex" system (as defined above) to be more stable, even to a slight degree, than a less complex one -- if the answer is "yes" (which again I'll assume, for the sake of the hypothesis), then the complexity ratchet becomes a possibility.
    (An aside: is "disorder", in the entropic sense, necessarily the opposite of "complexity", in the above sense? Could a closed environment simultaneously become more disordered and more complex? Like a kind of crystallization process? If so, maybe it's not that God created the universe, but that the universe is in the process of creating God, hmm? Or, maybe I should just cut down on the caffeine.)
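    The ratchet mechanism described above can be put as a small stochastic simulation. (A sketch only: the distributions and the `stable_prob` parameter are arbitrary assumptions, chosen just to show the ratchet "clicking".)

```python
import random

random.seed(42)  # reproducible run

def complexity_ratchet(steps=10_000, stable_prob=0.001):
    """Sketch of a 'complexity ratchet' in a stochastic ecology.

    Each step, a transient system appears with a complexity fluctuating
    randomly above the current floor. Almost all dissolve at once; very
    rarely one happens to be stable and self-reproducing, and the floor --
    the complexity level available to build on -- ratchets up to its level.
    """
    floor = 1.0
    history = []
    for _ in range(steps):
        candidate = floor + random.expovariate(1.0)  # a random fluctuation
        if random.random() < stable_prob:            # rare: it persists
            floor = candidate                        # the ratchet clicks
        history.append(floor)
    return history

h = complexity_ratchet()
print(f"floor: {h[0]:.2f} -> {h[-1]:.2f}")  # the floor only ever rises
```

    Note that the floor rises by construction here; the open question in the text -- whether greater stability at slightly higher complexity is physically possible at all -- is simply assumed in the `stable_prob` parameter.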

    So assuming that they're possible at all, then we'd expect complexity ratchets to come in all sizes at all times – they would be "fractal", in a sense. But at long intervals – depending on their relative improbability – there would appear major ratchets that provide a floor for a fairly sudden expansion in possible system complexity after that point. A major example of that, of course, as we've seen, would just be the molecular systems we call "life" itself. But now look at the history of life on earth: billions of years of simple unicelled packets, and then, fairly suddenly, maybe half a billion years ago, the appearance of a significant increase in complexity – nucleated cells. And after that, the advent of multi-celled forms. It was as though, in stumbling across this invention of an inner, protected wall for the growth of the coding molecule, a new complexity ratchet was attained, which allowed for the proliferation of the enormously more complex machinery underlying developmental biology. This seems to say that eukaryotes were an even larger – even more improbable – step forward in complexity than the relatively simple chemical packets of prokaryotes.

    If we describe “life” as the phenomenon of molecularly coded and transmitted complexity, then "culture" could be described as the phenomenon of aurally coded and transmitted social structures -- "language", in other words, could be viewed as a complexity ratchet for animal behavior. Let’s say that such cultural social formations first make their appearance some half million years ago or earlier – then, again, for a relatively long stretch, very little changes. Until, only some 7 to 10 thousand years ago, farming first appeared, and so did writing. Whichever constituted the ratchet -- and another possibility might have been the purely cultural invention of new, surplus-generating religious forms -- it does seem clear that a major advance in social complexity took place in isolated pockets around that time, giving rise to the first urban cultures. And then these persist, in varying forms, spreading slowly and intermittently, but at roughly similar levels of complexity for thousands of years themselves, until about 500 years ago – when we see the first hints of an industrial culture. Was the invention of “science” the ratchet in this case? The emergence of the "bourgeois" class of merchant producers? The appearance of the modern notion of the "individual"? Whatever it was, here again we see a form of social organization at least an order of magnitude more complex than its predecessors, simply in terms, again, of the number of parts and the number of connections between them (communication, transport, velocity).

    And what, we might ask, about now? Would the advent of the digital computer, digital information, global networking, etc. be another major complexity ratchet? Perhaps we'll need to include, as parts of the systems under consideration, entities other than those carbon-based ones we've usually focused on....

    Tuesday, October 11, 2005

    The "I" and the cultural imprint

    This is a note picked from a bracketed aside, below, to the effect that we do not make our own cultural imprint "just as we please". Why not, you might wonder? We can't make our culture just as we please because "culture" (as understood here) is an abstraction from all of the cultural imprints of the individuals that comprise the cultural formation -- but our own cultural imprint is just that, our own, and you might think that its makeup is, within practical limits, up to us to decide. What would limit our ability to do so?

    Two general considerations:

    • First, there is the idea that the symbols of the cultural imprint are affected by each "communicative encounter" through a so-called determinate or causal process, not a conscious one. Just as we can't choose whether or not to recognize a sign (e.g., understand a word seen or heard), so we can't help but be affected by the context and manner in which a sign is used -- where "affected" pertains to the bundle or chunk of experience that constitutes the symbol invoked by the sign.

      But -- such symbols are also affected by new experience that's relevant to their content (e.g., each encounter with a new instance of, say, the concept "tree" will have a small -- depending on one's developmental stage of course -- effect on the concept itself). And thought is also experience of a certain kind, so in that sense we can recover at least some control over our own cultural imprints -- that is, over our own concepts (among other things), as we expect. But there's another, perhaps more interesting, form of limitation:

    • This has to do with the fact that the "self" or the "I" is itself just another concept or symbol within the cultural imprint, and has no privileged access to any other symbolic construct. The advent of language, by providing a way of breaking conscious experience into chunks, (aggregating those chunks through abstraction, etc.), and by providing perceptual "handles" for those chunks in the form of signs, has made possible not just self-awareness, but a "self" or an "I" in the first place. The "I", notice, is inherently (grammatically) the active subject and hence difficult to make into an object of thought at all -- if we can even think of "I", it's as some kind of point of view, irreducible, and standing outside of all other objects of thought. But I (!) think this is a fundamental error, and a source of error -- the word "I" is a sign like any other, and the symbol it evokes is also a construct or assemblage of experience like any other.

      Now, it's true -- and important -- that language provides a kind of recursive process by which we can interrogate or investigate our own symbolic constructs, including our notion of our own "self", and even our own "I". But there is no standpoint outside of our self from which to do that, and so any such investigation is necessarily affected from the start by the existing structure of the "I". And the very act of investigation, as another experience, also affects the "I" of the investigator and whatever symbolic construct is being investigated. In this sense, it's hard to resist bringing in another metaphor from, or analogy to, physics, and calling this limitation a kind of Psychic Indeterminacy Principle (so I didn't resist). We're complex, mutable, contingent beings not just in our physical wholes, but in what we'd like to think of as our psychic core or essence as well.

    Monday, October 10, 2005

    The Sign/Symbol pair

    At this point, I don't want to make too much of this pairing, particularly because it's ground well-covered, in a wide variety of different contexts, meanings, histories, controversies, etc. The terms I'm using are an admitted mish-mash drawn from those different contexts, and pulled in here only for want of something better. Saussure, for example, uses the more technical-sounding, and certainly more coherent, "sign", "signifier", and "signified" to mean roughly what I refer to as "meme", "sign" and "symbol", but I didn't want to become entangled in all the structuralist, semiotic nets that such terms now drag with them. With "meme", on the other hand, while I recognize that the term has become an over-used pop-anthropology cliche, I did want to make use of its overt genetic and Darwinian analogies, and so I've included it as the general term for the sign/symbol (signifier/signified) pair. The term "symbol" is perhaps the sloppiest usage of the lot here, since it's often understood as something that already refers to, or stands for, something else. But I want to use it somewhat differently -- I mean by it a unit or chunk of conscious experience that is pre-defined and pre-structured ("pre-" meaning that it exists as a structure within an individual's mental apparatus and is available for use in communicative behavior, without having to be pulled together). And then "sign" is maybe the simplest and clearest of the three -- it simply means a perception (or a memory of a perception, as in internal speech or self-reflection) that evokes, or calls up, that chunk of experience that is the symbol.

    Out of all that, there are just two points that I want to make for now, since I think they've sometimes gone unnoticed or underemphasized, but are crucial for the operation of a cultural system:

    • First, we cannot choose whether or not to "recognize" a sign. We can and do choose, obviously, where to focus our attention, but once a sign is experienced, then its associated symbolic "meaning" is activated in a strictly causal or determinate fashion. (There are interesting fringe cases where we're confused about which sign was experienced, or even whether it was a particular sign, or where we can reduce a sign to just its contingent sense experience through repetition -- saying a word over and over again, for example -- but these serve only to outline the determinate nature of the sign-symbol interaction.)

    • Second, the symbol, since it's constructed out of the stuff of each individual's actual experience, is unique for each individual in a cultural social formation. When two (or more) people use a particular sign in a communicative encounter, then the associated symbols are always to some extent (however slight) brought into what we might call semantic alignment, but they can never be identical because they'll always be affected to some extent (however slight) by the differences in each person's experience to that point. And this too is a strictly causal or determinate process.
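    That second point can be put as a toy model. (A sketch only: representing a symbol as a vector of feature weights is a crude invented stand-in for a bundle of experience, and the `rate` and `noise` parameters are arbitrary assumptions.)

```python
import random

random.seed(0)  # reproducible run

# Each speaker's symbol for the sign "tree" is sketched as a vector of
# feature weights -- a crude stand-in for a bundle of prior experience.
x_tree = [0.9, 0.2, 0.5]   # X's symbol
y_tree = [0.6, 0.4, 0.7]   # Y's symbol

def encounter(a, b, rate=0.2, noise=0.01):
    """One communicative encounter: each symbol is pulled slightly toward
    the other ("semantic alignment"), while each is also perturbed by the
    speaker's own ongoing experience -- so the two never become identical."""
    for i in range(len(a)):
        midpoint = (a[i] + b[i]) / 2
        a[i] += rate * (midpoint - a[i]) + random.gauss(0, noise)
        b[i] += rate * (midpoint - b[i]) + random.gauss(0, noise)

def distance(a, b):
    return sum(abs(u - v) for u, v in zip(a, b))

before = distance(x_tree, y_tree)
for _ in range(20):          # twenty encounters about trees
    encounter(x_tree, y_tree)
after = distance(x_tree, y_tree)
print(before, after)  # the symbols converge, but never exactly
```

    Both updates happen inside `encounter` without either party choosing them -- which is the "strictly causal or determinate" character of the process insisted on above.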

    That culture is functional

    Looking upon culture as a structured imprint in the minds of every individual in a cultural social formation, rather than as simply a collection of practices and artifacts, makes it less likely that we'd view it as any sort of social "epiphenomenon", more likely that we'd see it as functional. But functional in what sense? In the sense, I think, that it directly supports the maintenance and adaptation of the cultural group* (*leaving to another post the task of saying what exactly constitutes the "cultural group"). That is, the ideas, concepts, values, etc. (among a great many other things) that make up a cultural imprint have a direct effect on the behavior of the individuals making up the group -- this effect generating what might be called a cultural "phenotype". And just as the genetic phenotype exposes the underlying genotype to Darwinian selection pressures, so this cultural phenotype (the actual, summed behavior of the individuals in the group) exposes the underlying "memotype" to the same kind of selection pressures. Thus, cultural social formations adapt to changing circumstances and evolve just as do biological species -- but, because the memotype is much more changeable or dynamic than the genotype, cultural evolution typically proceeds at a much faster pace. It's worth noting that, at times at least, this can put culture in conflict with biology, with the latter losing its claim upon the "natural". (At the same time, it's also worth noting that, for a variety of reasons, and as Marx said of history, we do not make our culture "just as we please".)

    Sunday, October 09, 2005

    Culture as imprint

    Maybe even more than "consciousness", "culture" is a word that everyone uses but few are comfortable defining. Perhaps it's commonly thought of as art, design, custom, folklore, ritual, etc., but even in anthropological circles, in which culture is the core of the discipline, the word often has a peculiar and somewhat frustrating vagueness. It might be defined, for example, as the shared or common values, traditions, norms, etc., "of a people", but how does that relate to the actual individuals that make up the "people", for example? Is there a difference between these "shared" values and individual values? How "shared" do such values need to be? How do we determine the boundaries of the "people"? And where do these shared values actually reside, after all?

    Another hypothesis, then: linguistic consciousness provides the basis for a structure of sign/symbol pairs (a "sign" being a perception that evokes or activates a "symbol", which is a ready-made and meaningful element of consciousness), a structure which is imprinted on the mental apparatus of each individual making up a particular cultural group. If we use the admittedly slippery term "meme" to refer to such sign/symbol pairs, then we can refer to the entire structure of such pairs as the individual's "memotype", in an obvious analogy to the individual genotype. And just as each individual's genetic blueprint is unique, yet sufficiently like other humans' to constitute a species, so each individual memotype is unique, but similar enough to allow communication and shared understandings of certain values, etc. -- similar enough, in other words, to constitute a culture.

    A note on terminology

    A big part of the problem in this whole area has to do with the ambiguity and outright confusion of the terms that deal with it. E.g., "consciousness" sometimes is just supposed to mean linguistic consciousness or self-awareness, sometimes to mean just "awareness" or "sentience" as such, and sometimes, indiscriminately, to mean both or either. That's not so bad if the distinction among these usages is clear (more or less) from the start, but is a great source of muddle and pointless disputation if it's not. Similarly, terms like "perception" and (especially) "experience" have been used in a wide variety of contexts and carry with them such considerable semantic baggage that their use in more specific or defined senses carries the constant threat of misunderstanding. Still, it's difficult to do without them without seeming merely artificial. And this kind of dilemma will recur in subsequent posts as I try to deal with familiar topics but from a less familiar, and perhaps more comprehensive, standpoint. So I won't try to invent a new jargon to cope with this perspective, but I will try to alert any readers from time to time, as here, that some common words are being used in some uncommon ways.

    As an example, the terms "perception" and "experience" are often used somewhat interchangeably, though the latter usually includes awareness of internal states as well. But in the preceding post, and subsequently, I'd like to reserve the term "perception" to refer to the rendering of environmental stimuli on the internal "world" of consciousness, and "experience" to refer to the apprehension of that world (which includes other internal states, such as memories, imaginations, dispositions, etc.) by the "actor" component of consciousness. (But I can't promise I won't lapse into more casual usages myself.)

    Consciousness: the Gap

    An hypothesis: consciousness evolved in mobile life forms as a particularly efficient and effective kind of behavioral control system -- and its effectiveness derives from its defining feature: the introduction of a space or gap between perception and behavior, or between stimulus and response, or, we might say, between world and act. Compare it with a reflex arc, which "hard wires", in a sense, behavior to perception -- by contrast, in consciousness a perception is first rendered to an internal, created "world", and then becomes available as an "experience" (as opposed to "perception") to an internal "actor", which determines behavior on the basis of that input along with many other features of the current state of the organism. Thus, the crucial and defining structure of a consciousness-type of control system is this two-part design -- on the one hand, a world-creating subsystem, and on the other, a behavior-controlling one -- the two components of which are only loosely coupled to one another, through the phenomenon of experience.

    The Gist

    Here are some key assertions (and negations) that form the basis for this blog experiment:

    That consciousness is not an inherently mysterious phenomenon, not an aspect of some parallel or ideal or non-physical realm, not some inexplicable by-product of neuronal activity, not likely a bizarre effect of quantum mechanics, and not solely confined to homo sapiens. Instead, it is as physical a phenomenon as the replication of a DNA molecule, say, or the working of a diesel engine, and is a feature of virtually all mobile organisms more complex than planaria.

    That linguistic consciousness is something significantly more complex than simple or general consciousness, and has likely evolved, so far, in only one species -- human beings. This special form of consciousness might also be called conceptual or, in a broad sense, symbolic consciousness, since concepts and symbols are simply aspects of language. One of the more important or central concepts or symbols that appear for such consciousness is that of the self, or the "I", this being an aspect of the capacity for self-consciousness, or internal reflection, that the advent of language allows.

    That culture is not a decorative by-product of human activity, but rather is a structure within the consciousness of language-using individuals, and is the means by which their activity is organized into social formations of varying complexity. Culture, in other words, is functional, and as such is subject to the same sorts of selection pressures that govern biological or genetic evolution. Culture is manifested through a large number of created signs and artifacts, but it exists as a constantly adapting pattern within the consciousness of each individual within a cultural social formation. (Not all of this pattern -- perhaps not even most of it -- is available to the self-consciousness of such individuals however.)

    Some of these propositions -- I hope -- are at least arguable, and making those arguments will be one purpose of the entries to come. Another purpose, however, will be to develop and extend the propositions in various directions, including pointing to others working along similar lines wherever I can.

    Saturday, October 08, 2005


    Just a quick explanation of the blog description, at the upper right. Consciousness, in its general sense, is understood here as a control system for complex mobile organisms (referring, to this point, to naturally evolved animals, but the idea is that this limitation is not essential). Consciousness in the "special" sense* refers to the profound change that occurs when language is added to such conscious systems -- change that allows the emergence of a "self", self-consciousness, and all of the phenomena gathered under the name of "culture".

    (* The use of "special" and "general" here, of course, is an allusion to the Theories of Relativity. Physics-envy, perhaps, but also a way to emphasize the importance of distinguishing the two kinds or levels of consciousness, the conflation of which is otherwise an important source of confusion.)

    UPDATE (Oct 11/05):
    These views and notes have been evolving (so to speak) for some time, going back to a "Culture Project" in the mid-70's. They've taken the form of notes and outlines, some privately circulated, some even online from the mid-90's on. Here I'm hoping that eventually they'll spark a discussion, of which these views will just be a part. But initially at least, I'll simply be getting out the backlog in blog format.