05 May 2015

Banging Your Head Against Invariant Self

(Warning: Started out as discussion of intelligence, turned into a screed about physics and impostor syndrome, and is generally more grumpy than it needs to be. Take with a grain of salt.)

The typical model of intelligence looks something like this:

[[Resources]] -> [[Intelligence]] -> [[Result]]

There's a quantity we call "intelligence" that takes external resources and turns them into results we care about. My opinion is that this is a reasonable but impoverished model. It's the simplest way of decomposing the world into internal and external capabilities. Each gets reduced to a single summary statistic- for resources, you typically use money as the measure; for intelligence, IQ. Output is then some monotonically increasing function of these two variables. This is reasonable in the case that they are the only variables you know about.

IQ is the simplest possible quantity you can use for predicting output, but it is not sufficiently detailed to say anything about causality. Like many other predictive variables, it's only useful insofar as you don't have more specific information about the domain. If, for instance, you want to hire someone who can fix your company's problems but you have no idea what those problems are, selecting based on IQ makes sense. But if you have domain-specific knowledge about your problems, you can pick a person whose specific strategies will help. (See this discussion of why anecdotal reasoning can be better than statistical reasoning.) The problem with "intelligence" is not that it's meaningless, but rather that it's a single summary statistic of an infinitely huge space.

We should view intelligence as a collection of strategies that either have or do not have adaptive fit to particular situations. This doesn't necessarily jibe with standard measurements of intelligence. For instance, intelligence is in large part measured by capacity for articulating language well and for being able to think on an abstract level. But there are many tasks for which thinking gets in the way. Modelling social interactions explicitly can take the value out of them. Being trained in jumping to meta-level discussion is especially pernicious. If you can't stay on an object level, you can't do anything. Abstractions are only useful insofar as they cash out in concrete terms- otherwise you become addicted to thoughts about thoughts because you've spent your whole life being rewarded (by yourself and others) for having cool insights. But this addiction to "insight porn" can be maladaptive in cases where reality is inherently messy. (That is to say, almost all the time.) Focusing on language skills as a signal of intelligence creates a sinkhole of thought where the energy of potential creation goes to die. Instead of learning something and then going on to apply that to a problem, you theorize and write essays about it because that's what all of your favorite thinkers do.
 
As someone who was a physics major at MIT, I'm often told that physics is supposed to teach you a way of thinking about the world that will enable you to solve any problem. But does it? Most problems that one works on as a physicist (at least in general undergraduate classes) are ones that are analytically tractable- that have some universal solution. As we know, though, competition is inherently intractable. Complicated coevolving nonlinear systems are much more ubiquitous than ones with simple, tractable causal structure. Studying physics encourages a mindset of simplified models, but attempts at simplification of complex systems either fail, or succeed at the cost of other values.


The physics mindset of extracting the most important abstraction out of a complex system is only useful at the beginning of a cycle of competition. Once things get moving, the specific details in the problem take over.

You see this all the time when people in the "hard" sciences talk about the "soft" sciences. Physicists complain that biologists or psychologists lack simple, central organizing models that explain the core phenomena in a way that you can derive the dynamics from first principles. But they forget that physics is restricted to the simplest possible physical systems. Say what you will about the difficulty of learning quantum mechanics, but modelling a single electron in the ground state of a hydrogen atom is way simpler than modelling even a single cell in the human body.

Perhaps even more importantly, it encourages a mindset of thinking as opposed to doing. Physicists tend to think more like philosophers than engineers, because most of what they do is mental refactoring, as opposed to a constant loop of action and feedback with their environment. Programmers constantly get feedback from a compiler, but the feedback loop in theoretical physics tends to be quite long (in cosmology there are years between updated measurements of the CMB to test theories of the early universe).

Physicists can also fall into the trap of taking ideas too seriously. My favorite characterization of this problem is treating reasoning as a memetic immune disorder. Most of the time, people default to socially learned behaviors that work in everyone's favor even if they seem illogical when you analyze them. But because you're encouraged to trust the results of your analysis, you're more likely to do things that are dumb in reality.


It doesn't help that most reasoning in physics is based on eternal laws and invariant properties. I think that this tendency toward inviolable, universal world models leaves physicists particularly prone to impostor syndrome.

----------------------

People resist the idea that skills are useful only relative to particular environments because there's a slippery slope towards saying that everyone is a special snowflake that has their own unique abilities. "Everyone is a winner" rightfully leaves a bad taste in your mouth. This perspective is damaging because it places value on the presupposed uniqueness of an individual. There are about 7 billion people in the world. There are not 7 billion unique niches for economically or socially useful activities. As such, there will always be other people competing for more or less the same role, and all but one of them will not be the best. And you can't ignore the people who are better than you either. Because of the way social networks form, your friends are on average better than you in every way.

The idea that your value derives from some unique property of your self is what leads to impostor syndrome (at least in my experience). If I'm supposedly so unique, why can I observe evidence of my non-uniqueness all around me? For every component of my self that I value, I can identify both people that are better than me at it and times when I have sabotaged the development of that component. This uniqueness assumption is reinforced in the rhetoric used in combating impostor syndrome. Everyone supposes that you actually have a unique self hiding under there and that all this negativity is just some temporary mental block that's getting in the way of you seeing that*.

But this "block" is actually a result of taking this model seriously and noticing how it doesn't seem to correspond to reality. Every time you fail to measure up to your supposed value, every time you run into someone who is better at your special skill than you are, a crack appears in this facade of uniqueness. The reality is that you don't have an inherent value. Your value is a function of "you" and "the environment," neither of which are fixed.

(Forgive me: this is an easy way to sound deep without providing a more useful framework for thinking about problems of value.)

People who think they have inherent value are just ones that have good fit with the environment that they work in. The assumption of inherent value is psychologically protective and very nice. You don't need to waste time wondering whether or not you should be doing what you're doing. When things are going as expected, you can just treat value as an invariant that holds for the system. And then things change. And then you try to hold on to this invariant, but it never existed in the first place. It was just a summary statistic.

You're not an impostor, but you're not real either. The very notion of being an impostor implies an inherent reality from which you deviate. That's the insight that you really need to get over impostor syndrome: seeing through the idea that there is a single true self. Any model of self that makes an attempt at incorporating all of your actual thoughts and actions will inevitably run into counterexamples. Then you're forced either to suppress that data or to use it to revise your self-model into something horrible and maladaptive.

People graduating from college talk about being "a real person" or entering "the real world" as if either of these things existed. The behavior patterns that you have to take on to get by are different for a working person in your mid-twenties than for a student in your early twenties, but that doesn't make the world of work more real. It has a different set of constraints on adaptive behavior, and thus gives people a different sense of constructed reality. But it is not itself "more real."

The world has regularities, and so do you, but it is impossible to tell in advance whether or not a particular trait of yours is a truly "inherent" property, or whether it's just a motif that happens to be recurring a lot in this movement of your life. 
---------------

As T.S. Eliot argues in his essay Tradition and the Individual Talent, we place too much value on the uniqueness of artists:
One of the facts that might come to light in this process is our tendency to insist, when we praise a poet, upon those aspects of his work in which he least resembles anyone else. In these aspects or parts of his work we pretend to find what is individual, what is the peculiar essence of the man. We dwell with satisfaction upon the poet’s difference from his predecessors, especially his immediate predecessors; we endeavour to find something that can be isolated in order to be enjoyed. Whereas if we approach a poet without this prejudice we shall often find that not only the best, but the most individual parts of his work may be those in which the dead poets, his ancestors, assert their immortality most vigorously.
Instead of focusing on being invariant, true selves, we should focus on channeling the truth. I want to write more on this, but it'll have to wait for later since I have already spent too much time on this post. Instead I'll just leave with this quote from Elizabeth Gilbert's great TED talk on genius in art:
I think that allowing somebody, one mere person to believe that he or she is like, the vessel, you know, like the font and the essence and the source of all divine, creative, unknowable, eternal mystery is just a smidge too much responsibility to put on one fragile, human psyche. It's like asking somebody to swallow the sun.



-------------------------------------------------------------------
* For more on the social dynamics that result from pinning value to uniqueness, see Venkatesh Rao's laws of status illegibility, as explored through The Office.

11 April 2015

The Eternal Werewolf Hunt

Cerberoboros by Emily Fundis*

My favorite game ever is One Night Ultimate Werewolf. In it, everyone draws a card that specifies whether they are a werewolf or a villager. Then everyone closes their eyes. The two werewolves open their eyes so they can identify each other. After that, there are some villagers who can complicate the game in other ways- for instance, the troublemaker, who can swap two other players' roles without them noticing. Then everyone opens their eyes. Everyone yells at each other for a bit about who is or is not the werewolf, and then they vote on who to kill. If a werewolf is killed, the villagers win. Otherwise, the werewolves win.

The only concrete components defined by the game are your goals and a few bits of sparse initial information, private to each player. Since everything else is open, everything becomes part of the game. There are no turns (unlike in poker), so you are always playing. Every decision to speak or not speak, every flick of your eye, every bit of wording is information that can be used against you. Unlike Werewolf, the game from which it is derived, there is no source of objective, public information whatsoever. In that version, the werewolves kill someone each round and everyone knows who got killed. In this version, nobody gets killed and there is only one round. There is no recourse to objective knowledge as defined by the game (again, unlike in poker, where some cards are shared on the table). All facts established in the course of a round are social facts- even your own role. Whether you win is based on the card that you end up with, not the card that you were originally dealt, so you can never be quite sure what strategy you should be playing.

But by far the most interesting mechanic is the tanner card. The tanner isn't on the side of the villagers or the werewolves. Tanning is an awful, horrible, smelly job- so awful that the tanner just wants to die. His win condition is to be killed. So if you are the tanner, you want the villagers to believe you're a werewolf, and you start acting as if you were a werewolf. But in that case, seeming like a werewolf is a sign that you are the tanner and not a werewolf, so werewolves in danger of being outed can act "like a werewolf" so people will think they're the tanner. Any strategy that anyone has used in a previous round is a possible strategy for someone to use to pretend they are someone they are not. After playing more than a few rounds with clever individuals, you become completely unmoored, never knowing what level you're on. Are they being sincere? Are they pretending to be sincere? Are they acting like they're pretending to be sincere?
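
To make the win conditions concrete, here is a minimal sketch of how a round resolves. It is only a toy: the exact role list, the night order, and the random "vote" are my own simplifications (the vote stands in for all of the actual social deduction), and the tanner rule follows the simplified description above rather than the official rulebook.

```python
import random

# Toy resolution of a single round of One Night Ultimate Werewolf, following
# the simplified rules described above. The key point: the winner is decided
# by the card each player ENDS with, not the card they were dealt.

def play_round(num_players=6, seed=None):
    rng = random.Random(seed)
    roles = ["werewolf", "werewolf", "troublemaker", "tanner"]
    roles += ["villager"] * (num_players - len(roles))
    rng.shuffle(roles)
    dealt = list(roles)  # the card each player actually saw

    # Night: the troublemaker swaps two OTHER players' cards, unseen.
    tm = roles.index("troublemaker")
    a, b = rng.sample([i for i in range(num_players) if i != tm], 2)
    roles[a], roles[b] = roles[b], roles[a]

    # Day: argue, then vote. Here the vote is random; in a real game this is
    # where all the yelling happens.
    killed = rng.randrange(num_players)
    if roles[killed] == "tanner":
        winner = "tanner"
    elif roles[killed] == "werewolf":
        winner = "villagers"
    else:
        winner = "werewolves"
    return dealt, roles, killed, winner

dealt, final, killed, winner = play_round(seed=1)
print("dealt:", dealt)
print("final:", final)
print(f"player {killed} killed -> {winner} win")
```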

The tanner is an element that prevents gameplay from ever becoming static. There is no solution to playing One Night Ultimate Werewolf in the way that there is a solution to tic tac toe. There is no definable strategy that will lead you to win. That by itself isn't saying much- any well-designed game should be open in this respect. Every game with a sufficiently large strategy space and sufficiently sophisticated players has a meta-game. In Magic: The Gathering, for instance, one of the most important components of tournament play is constructing your deck based on what decks you think other people will bring. But One Night Ultimate Werewolf distills this into pure form. The game is the meta-game is the meta-meta-game and so on and so on. Every level is collapsed into one.

While it is possible to get better at One Night Ultimate Werewolf, pure skill does not help you the same way it does in games like chess. You need to gain information about your opponents by playing with them- you need to develop an organic fit within a particular group. But you can't do that without simultaneously giving information about yourself. To make yourself better at beating your opponents, you necessarily make your opponents better at beating you.

-----------------------------------

Now consider the acquired immune system. Vertebrate animals have two basic systems for disease prevention: the innate and the acquired immune system. The innate immune system consists of simple barriers to entry (i.e. your skin) and a collection of molecules that can identify and respond to molecules known to be associated with pathogens. The acquired immune system is the fancier, more evolutionarily novel system- the one capable of identifying new pathogens, building novel weapons against them, and then storing the weapons plans for future use in case of another attack. Scientists have typically taken the view that the acquired immune system is the current optimal solution to the problem of defending against invaders, citing how necessary it is for survival in vertebrates. But there's a problem with this perspective. If the acquired immune system were really a dominantly better strategy, then we would expect invertebrates to be dying of disease all the time, while vertebrates had practically no disease. This isn't the case- death rates from infectious disease are pretty comparable between vertebrates and invertebrates. Stephen Hedrick explains this in his great paper "The Acquired Immune System: A Vantage From Beneath":
"Another way of looking at this is that acquired immunity was not a final solution to the problem of parasitism. There is no final solution. As novel as the acquired immune system was, for rapidly multiplying agents, it was just another hurdle. It may have driven parasites to invent new strategies for fitness, but it did not convey invincibility or anything like it. To say the combination of innate and acquired immunity is the optimal defense is a misunderstanding of the evolutionary landscape. I don't believe there is an optimal defense. I don't believe there is a conceivable immune system that could not be obviated once the barriers to infection have been breached. For all animals and their parasites, generation upon generation, it has been evolutionary thrust and parry, until today as it was a million years ago and as it will be a million years hence, each and every species is literally plagued by parasitic microbial agents and viruses."
The term "landscape" as a metaphor for the fitness space of possible strategies is misleading- a series of gravitational pulls on objects chaotically orbiting each other is a better view of the forces involved. Everything is constantly falling down fitness gradients, but it never reaches a stable ground state because there is no ground. In the case of physical systems, gravitational forces are exerted by other massive objects that are themselves in motion. In the case of biological systems, selective forces are exerted by other organisms that are themselves evolving.

It's funny to note that the acquired immune system is the component that opens the way for autoimmune diseases. Unlike the innate immune system, which reacts only to specific markers present in pathogens, the acquired immune system is capable of reacting to any marker. It's capable of going on witch hunts against innocent cells- or werewolf hunts, as it were. There's even a virus that exploits this system like the tanner card in One Night Ultimate Werewolf- HIV. It infects helper T-cells in the acquired immune system by binding to CD4 receptors: the very receptors that those helpers use to identify pathogens. In a sense, HIV wins by being caught.

------------------------------------

The central motif here is captured by the idea of anti-induction. Inductive systems are ones where something working now means that it will continue to work in the future- they are systems admitting the possibility of static solutions. Anti-inductive systems are the opposite- if a strategy works now, you can predict that soon it will no longer work. The stock market is the canonical example of an anti-inductive system: if real estate prices have been rising, then it's probable that they will fall. If one company has a really successful business strategy, it usually means that it's bad for you to copy that strategy, as they've already filled that market niche. One Night Ultimate Werewolf and the immune system are also great examples. We see this sort of behavior in any system where there are interacting agents with comparable skill levels competing for a limited resource.

The caveat of comparable skill levels is important. If one agent in a competition is dominantly better than the others, then it will simply crush them all and the game is over. In the chaotic orbits analogy, this would correspond to one extremely massive object with other objects in uniform orbit around it. The configuration is stable enough that talking about an energy "landscape" makes sense. But if you have a bunch of objects of approximately the same mass, their collective trajectories are extremely unpredictable.

But why should we expect to see any competitions where there are well-matched competitors in the real world? In game design, it requires a tremendous amount of skill and fine-tuning to make sure that gameplay is balanced. So why would we expect balance to exist in nature, in the absence of an agent maintaining it? The reason is that the meta-system in which games evolve- that is, the universe- is itself anti-inductive. The world does not simply stop moving when a particular competition is solved. Any time a game is won, the aftermath of that game becomes fuel for the next game. The system learns, the world learns, and it moves on to the next level of play. If one company dominates a market, the competition simply moves on to a different market. If one organism manages to drive a parasite extinct, a new one pops up to replace it. If one struggle over social norms has been resolved, the conversation shifts to the next most divisive topic.

The chaotic orbits analogy helps us here as well if we consider electromagnetic forces in addition to gravitational ones. Free protons and electrons group together to become hydrogen. Then that hydrogen collects in stars and the collective gravitational energy combines the hydrogen into heavier elements. Then chunks of those heavier elements coalesce into planets. Then some of those elements form replicators, which form cells, which combine to form multicellular organisms, which team up to form social groups, and so on up the ever-expanding chain of competition. In each case we have more complicated patterns built on top of underlying regularities. This process is so ubiquitous that one wonders whether our "fundamental" physical laws are themselves merely regularities in deeper competitive structures.



--------
* As far as I can tell the artist did not actually give this piece a name, but I found it elsewhere under the name "Cerberoboros"- a portmanteau of "cerberus" and "ouroboros"- and decided that it was too good of a title not to use. Another example of lexarchy in action.

28 March 2015

Links for March!

Lexarchy in action: how we divide up colors into different categories using words. The idea that linguistic development of color words proceeds in an almost universal order blows my mind in so many ways.

Also, lest anyone assume that I am against neon word art, let it be known that I absolutely love this piece at the Scottish National Museum of Modern Art:



The 10 worst things about listicles.

An NYT blogger reproduces a study called "The Experimental Generation of Interpersonal Closeness." A reminder that love shouldn't be viewed as a static solution, but as the dynamic result of asking simple questions. The particular list they used is here. I wonder if it could be compressed further, and which questions have the greatest repeat-value for sustaining relationships.

The Brindley lecture: where a urologist demonstrated his novel ED treatment by showing his erect penis to the audience.

Gnome Chomsky lawn art

Addiction is complicated. Prevailing narratives about addiction usually rest either on mechanistic explanations that treat the addict as having a disease, or on moral failings on the part of the addict. Maia Szalavitz argues that we should view addiction as a learning or developmental disorder, and Johann Hari argues that it has more to do with failures in the environment as opposed to failures of the individual. The important takeaway here is to recognize that you can shoot yourself in the foot by assuming that the "objective, scientific" framing of an issue- that is, metaphorically construing addiction as a disease- is necessarily the correct one.

A glossary of hand gestures for critical discourse. I don't actually run in those circles, but I still find myself doing these gestures. Maybe they represent some sort of universal expression attractor for people with way too much to think about.

There's a type of slug that eats algae and then uses them to photosynthesize.

In high school, some friends and I decided to turn a neighborhood into a giant game of Pac-Man using sidewalk chalk. It took us from about sundown to sunrise to complete, and only stayed for one day because it rained the next day. But by some miracle, Google Maps caught it by satellite, and we now go down in history as the revolutionary artists that we truly are.

Apparently most image editing software blurs colors incorrectly because it doesn't average brightness values properly. (See this post for more about the difference between actual and perceived intensities.)
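
(The underlying issue is that pixel values are usually stored gamma-encoded, so averaging the stored numbers is not the same as averaging the light they represent. A minimal sketch, using the rough gamma-2.2 approximation of sRGB rather than the exact piecewise transfer function:)

```python
# Averaging two stored pixel values vs. averaging the light they represent.
# Uses the common gamma-2.2 approximation of sRGB, not the exact curve.

GAMMA = 2.2

def to_linear(v):    # stored value in [0, 1] -> physical light intensity
    return v ** GAMMA

def to_encoded(v):   # physical light intensity -> stored value
    return v ** (1.0 / GAMMA)

def naive_blend(a, b):
    return (a + b) / 2                                     # what most software does

def linear_blend(a, b):
    return to_encoded((to_linear(a) + to_linear(b)) / 2)   # average the actual light

print(naive_blend(0.0, 1.0))   # 0.5   -> a midpoint that renders too dark
print(linear_blend(0.0, 1.0))  # ~0.73 -> the gray that actually sits halfway in light
```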

Albert Hofmann, the inventor of LSD, died in 2008 at age 102.

More Samuel Beckett memes: MB(ecket)TA:


27 March 2015

Smash the Lexarchy!

Rene Magritte- The Palace of Curtains, III

Titles made of words make sense for art made of words. They are little bits of writing that are used either to describe the essence of a piece or to give an initial platform from which the reader is meant to approach the piece.  But why are they also associated with every other form of art? Music, film, theater, and visual art all buy into this lexical convention.

Titles function as handles or file names. They are compact, portable signs for the signified piece of art. People want to discuss and write about art. Critics and agents need to point other people toward or away from the art. All these signals are expressed in the medium of words. As such, any piece of art will somehow be shoe-horned into a system of words. The artist needs to make a preemptive strike and make sure that the words used are the right ones- ones that either convey what's intended or conspicuously avoid conveying anything.

First Movement

Classical composers typically resist imposing an interpretive or affective framework on the listener by giving formulaic, utilitarian titles like "Piano Sonata No. 14 in C-Sharp Minor." Sometimes, though, these official titles get replaced by more memorable nicknames: in this case, "Moonlight Sonata." On the other hand, post-rock bands are notorious for over-relying on titles to support their pieces. The Red Sparowes album Every Red Heart Shines Toward the Red Sun has a collection of comically long song titles which together tell the story of the Great Leap Forward in Maoist China:
  1. "The Great Leap Forward Poured Down Upon Us One Day Like a Mighty Storm, Suddenly and Furiously Blinding Our Senses." 
  2. "We Stood Transfixed in Blank Devotion as Our Leader Spoke to Us, Looking Down on Our Mute Faces with a Great, Raging, and Unseeing Eye." 
  3. "Like the Howling Glory of the Darkest Winds, This Voice Was Thunderous and the Words Holy, Tangling Their Way Around Our Hearts and Clutching Our Innocent Awe."
  4. "A Message of Avarice Rained Down and Carried Us Away into False Dreams of Endless Riches."
  5. "'Annihilate the Sparrow, That Stealer of Seed, and Our Harvests Will Abound; We Will Watch Our Wealth Flood In.'"
  6. "And by Our Own Hand Did Every Last Bird Lie Silent in Their Puddles, the Air Barren of Song as the Clouds Drifted Away. For Killing Their Greatest Enemy, the Locusts Noisily Thanked Us and Turned Their Jaws Toward Our Crops, Swallowing Our Greed Whole."
  7. "Millions Starved and We Became Skinnier and Skinnier, While Our Leaders Became Fatter and Fatter."
  8. "Finally, as That Blazing Sun Shone Down Upon Us, Did We Know That True Enemy Was the Voice of Blind Idolatry; and Only Then Did We Begin to Think for Ourselves."
Is this a case of poetry pairing with music, the excessive capitalization reflecting the names of Ridiculous Communist Reform Programs like The Great Leap Forward? Or is it just insecure musical artists with a case of word envy?*

What would a truly musical title sound like? In the case of popular music, we already use essential musical bits to communicate. You may not know what song I'm talking about if I say "Careless Whisper," but you'll almost certainly know the song if I hum that unforgettable saxophone riff. Everyone knows the main line of the Ode to Joy, or the beginning of the Carmina Burana, or the Ride of the Valkyries, even if they don't know the names. People automatically absorb and reuse these motifs as if they were titles. Ideally, the most musical form of title would actually be audio, so if you were, say, buying a song off iTunes, you would scroll through a bunch of tiny sound files with little clips. A compromise would be to write titles using musical notation. Everyone would have to learn how to read it fluently, but that's already true for titles made of words.

Composers with complicated pieces might rightfully complain that it's impossible to reduce all of the themes and variations of a piece down to a single line of melody, or to a single bunch of chords. Then again, writers have been doing it since the beginning of writing, and they manage to get by unscathed- or at least with only a few more scars than they already had from the rest of the writing process.

Second Act

In theater (as in musical performance), the relationship of titles and words to the art itself can get complicated. Performances are titled based on the script that they are derived from, but this is misleading. The script is only a starting place- every production is different. But that too is trite- every production draws on and references the history of performances of the same play that have come before it, either in the minds of the artists producing or in the minds of audience members who have seen the play or read the script before. This is one of the things that keeps Shakespeare's plays interesting despite how often they are produced. Each performance acts in dialogue with its previous incarnations.

A couple of years ago I acted in a production called My Uncle. It was based on Chekhov's Uncle Vanya, but it was set in an insane asylum, the conceit being that we were patients performing Uncle Vanya as part of a psychodrama therapy session. Almost all of the words we spoke were from the original play (or rather, a blend of several English translations of the original play), but we used Jacques Tati's film Mon Oncle as inspiration for the set design and the staging of the scenes. We ended up using that film as the source for the title. It worked despite the fact that the initial connection between the two pieces was almost entirely linguistic. Was the performance really distinct enough that it merited a different title? I don't know. But choosing one way or the other can become a powerful statement of genealogical commitment. It's telling that on my acting resume, where I attempt to sell myself based on the work I've done in the past, I list the production as Uncle Vanya. The production wanted to distance itself from the script's legacy; I wanted to bring myself closer to it.

Triptych

I remember distinctly the first time I went to a modern art museum. It was shortly after the opening of the new building for the Institute of Contemporary Art in Boston. My dad brought me and we wound through room after room of visually fascinating things that I hadn't seen anything like before: installation sculptures of magnificent burnt driftwood hanging from the ceiling and boxes of mirrors inside mirrors reflecting light onto each other and into an infinite blackness. I was thrilled to see what was in the next room and the next room and the next room. I turned the corner to look at what I thought was the next exhibit but was stopped in my tracks when I turned into a room that was just a window overlooking the Boston harbor. I was astonished. The world looked completely new. I had been given license, somewhat by surprise, to reinterpret the entire visual experience of the external world as though it were in a museum. Line, color, form and symbol emerged out of an ordinary view of seagulls perching on decaying piers and orange floats- ordinary, that is, until I obtained this new sight. In a sense, I became this person:

Photo by Erik Johansson

Visual art can give us the capacity to perceive the world in an entirely visual framework. But even the world of visual art is notoriously overtaken by words. Every contemporary exhibit needs placards detailing and describing each piece. There is now always a several-paragraph description printed on the wall at the beginning of the exhibit, describing what you need to know to get the right experience (or, at least, what the curator believes is the right experience). People spend almost as much time reading information as looking at the actual artwork.

There's a good reason for this. Art derives its strength as part of a movement, as a response to everything in its artistic heritage. It is often meant by the creator to be perceived in a particular framework. The average viewer doesn't know where this stuff came from. But is text the right way to convey the message? Rene Magritte famously opened the world of art to the use of words in dialogue with images (see the painting at the beginning of this post), but maybe it's time to take back visual art in an aggressive way before it succumbs entirely to pieces like this one by Maurizio Nannucci:

You get three chances to guess the title.
Imagine a corresponding exhibit that has a similar structure, but is entirely visual. Instead of a placard, you have one "subpainting" that is maybe just a couple of lines or one block of color: it contains the single most essential element of the piece. Then there is a container with tiny samples of the materials used- a scrap of canvas, a paintbrush, a tiny palette with drips of oil paint in all the different colors used. Then a portrait of the artist in lieu of a name. Think of it as a word-deprivation chamber, a kind of place that is harder and harder to find in an increasingly labeled world.

Words, Words, Words

Neuroscientist Bevil Conway describes learning to read as culturally-enforced synesthesia: you learn to automatically hear certain sounds on seeing particular shapes. But there's another level of synesthesia that happens the more you learn language- you start to hear or see words automatically when you look at or think about things. Keith Johnstone gives an interesting exercise to combat this in his book Impro: look at objects around you and start saying the wrong name for everything. If you see a table, say "book," etc. If you are anything like me, this will be hard, but it will pay off by forcing a shift in your perception of your environment.

Sarah Perry calls this variety of synesthesia "word pollution"- the reduction of aesthetic beauty when words get in the way of direct experience. Or perhaps not direct experience, since the mind cannot create unmediated representations of reality, just different experience. The issue is not so much that dividing the world up lexically is wrong, but that it becomes the dominant system. We are pushed towards a scholastic-industrial form of consciousness. If I were feeling devilishly sociological, I might call this dominant system the lexarchy. Smash the lexarchy!** The common lament that modern internet users focus too much on images and video should be flipped on its head- the reason that text has until this point held primacy as the means for communicating ideas has been technological restriction. Our technology wasn't up to the task of communicating complex ideas en masse using other forms. Educated people then came to the incorrect conclusion that plain text is the only way to convey any sufficiently sophisticated idea. They view the restoration of a more natural equilibrium between text and other forms as an affront to the edifice of which they have become a part.

Language is not used because it has a greater capacity for expressiveness; it's used because it is more portable. Words are nice and easy to package. They're easy to distribute and easy to parse with computers. Their meaning tends to be robust against changes in representation- if you inverted italics and non-italics in this essay you would be left with basically the same meaning, in a way that would not be true if you inverted colors in an image. This ease of use is partly due to language itself and partly due to the textual institutions around which our society is based. Reading and writing are considered the most basic aims of public education, and any explanatory material is expected to be made of words. Words are now the gold standard for unambiguous communication- as in the movement to make sexual consent explicitly verbal.***

In Seeing Like a State, James Scott talks about how centralized governments forced a common language on groups with incompatible dialects by making everyone use the same language in any interaction with the state.
In the first efforts made to insist on the use of French, it is clear that the state's objective was the legibility of local practice. Officials insisted that every legal document- whether a will, document of sale, loan instrument, contract, annuity, or property deed- be drawn up in French... One can hardly imagine a more effective formula for immediately devaluing local knowledge and privileging all those who had mastered the official linguistic code... The implicit logic of the move was to define a hierarchy of cultures, relegating local languages and their regional cultures to, at best, a quaint provincialism
This process extends past unprivileged verbal languages to non-verbal languages. In the same way you can't write legal documents in African American Vernacular English, you also can't "write" them as paintings or musical pieces.

I can't finish this essay without acknowledging the obvious self-reflection. This essay is made of words. In launching an attack against words with words themselves, I have only bought into and strengthened linguistic hegemony. If I did my job well and you remember the ideas in this essay, they will now be represented in your mind using words, and you will convey them to others by pointing to these words. Another notch is ratcheted forward on the great machine of language, constraining our available thoughts to its structures.



----------------
Notes:
* "Word envy" is an analogy to "physics envy"- the tendency of scientific fields to oversimplify because they want theories as elegant and universal as those in physics. I think words takes a cultural place at the top of the artistic/expressive hierarchy in the same way physics takes a place at the top of the scientific hierarchy- regardless of whether that dominance is justified for any given problem.
** I dislike the "-archy" formulation (patriarchy, kyriarchy, etc.) for many reasons. But a complete unpacking of the distinction between local and global models will have to wait for another day (mostly because I haven't worked it out for myself yet).
*** To be perfectly clear, I agree with this movement. 

02 February 2015

Memes and Models

Inspired by Sarah Perry's great post "Why Cultural Evolution is True (And What It Is)." She does a better job with the substantive work of explaining cultural evolution- this is more of a meta-point about how we should treat models and metaphors.

It is unfashionable to use the concept of a "meme," in large part because it has been co-opted by The Internet [1]. This is a shame since it has the potential to be so useful as a base model from which to explain culture. Many people level the accusation that the idea of a meme is just a metaphor, but I find that to be slightly misguided. Rewording George Box: all scientific concepts are metaphors; some are more useful than others [2]. It is better to explicitly analyze the ways in which a metaphor holds or does not hold in a specific situation.

In talking about cultural evolution, it's important to remember the distinction between evolution and natural selection, since the two ideas are so frequently conflated. (So much so that I feel like I may be cheating by trying to separate them.) Evolution is simply change over time. Natural selection, on the other hand, is a specific mechanistic model. In his essay "The False Allure of Group Selection," Steven Pinker defines it as having four key components:
  • The units of selection must be discrete
  • The units of selection must influence reproductive fitness
  • Mutation must be random relative to the fitness environment
  • Success must be defined by the number of copies present in the environment
But we can treat each of these components as a knob or a switch that we can twiddle to see what happens. Sexual selection can be viewed as a way of changing the "random with respect to fitness environment" requirement. Organisms don't choose mates at random- they choose mates that signal increased reproductive fitness. This can be viewed as a sort of agent-guided evolution. (There's actually a hypothesis that sex stuck around because it allowed us to out-evolve parasites). But I would hardly consider the existence of sex a point against using the model of natural selection in biology. Similarly, the fact that we can make predictions about, say, what songs will become popular doesn't automatically disqualify songs from being treated in a selective framework.

Some models of memetic transfer can explain cultural phenomena without even appealing to differences in fitness levels! In one paper, researchers showed that the rate at which popular songs shuffle on and off "Top N" lists (that is, lists of most popular songs, baby names, dog breeds, etc.) fits well with a neutral random-copying model. In this model, people randomly either copy some other person's meme or generate a new meme. A similar concept in biology is that of genetic drift, in which genes spread across populations completely at random. Genetic drift is explicitly not the same as natural selection- it specifically violates the requirement that the units of selection influence reproductive fitness- but it's an indispensable tool in explaining aspects of biological evolution.
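
For concreteness, here is a minimal sketch of that neutral random-copying model. The population size, innovation rate, and list length are arbitrary numbers chosen just to show the churn; they are not values from the paper.

```python
import random
from collections import Counter

# Neutral random-copying model: each step, every individual either copies a
# random individual's variant or invents a brand-new one. No variant is fitter
# than any other, yet the "top N" list still turns over steadily.

def simulate(pop_size=200, innovation_rate=0.05, steps=50, top_n=10, seed=0):
    rng = random.Random(seed)
    population = list(range(pop_size))   # start with all-unique variants
    next_variant = pop_size
    for t in range(steps):
        new_population = []
        for _ in range(pop_size):
            if rng.random() < innovation_rate:
                new_population.append(next_variant)            # invent something new
                next_variant += 1
            else:
                new_population.append(rng.choice(population))  # copy someone at random
        population = new_population
        if t % 10 == 0:
            top = [v for v, _ in Counter(population).most_common(top_n)]
            print(f"step {t:3d}: top {top_n} = {top}")

simulate()
```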

We can in this situation treat the memetic view as a cultural null hypothesis. The null hypothesis answers the question: what do we expect to happen if nothing special is going on? In this case, it's: what do we expect to happen to societal trends in the absence of any specific trend or desire?

Another reason that people find the memetic view annoying or worthless is that many of its proponents (at least among laypeople) jump from "something analogous to natural selection is at play in cultural evolution" to "the only relevant factor in cultural evolution is a simply definable form of reproductive fitness." They then conclude that the only reason people hold ideas is because those ideas happened to be particularly good at replicating. Holding this position is like learning about biological evolution and concluding that the only organisms that should exist are viruses that reproduce as quickly as possible. In culture, just as in biology, there are a huge number of constraining forces that determine things like fitness or otherwise make change non-random.

So yes, we should be careful about taking the meme/gene metaphor too literally. But on the other hand, the fact that ideas and culture are not exactly like biology should not prevent us from assembling a toolkit of biologically-inspired models. We shouldn't come to a full stop when we run into differences, but rather ask how we can develop new, ideally mathematical, frameworks that account for these differences.

Why do I say mathematical? Mathematical models allow us to more explicitly lay out the metaphors that we're using for prediction. One might say that math is just the practice of constructing precise, publicly accessible metaphors. Mathematical models, even (in fact, especially) stupidly oversimplified ones, are useful in that they can identify the minimal set of assumptions needed in order to explain some phenomenon [3]. For instance, it is reasonable to conclude that communities become segregated by race when people are racist, and that intentionally constructed oppressive organizations are what keep communities segregated. But this great simulation by Vi Hart and Nicky Case shows that we don't need to postulate either explicit racism or structural oppression in order for segregation to exist. A slight preference for clustering with people similar to you can explain the global behavior. (Obvious disclaimer: I am not trying to make the argument that forces of oppression don't exist in society. They do.) Even if there are other much more complicated forces at play (and there are), the model tells us unambiguously that just telling people to not be racist will not desegregate communities. It can't necessarily tell us what will work, but it can say some things about what won't.
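
Here is a minimal sketch of the kind of model behind that simulation, a Schelling-style grid. The grid size, the roughly one-third-empty layout, and the similarity threshold are my own arbitrary choices, not Hart and Case's parameters.

```python
import random

# Schelling-style segregation sketch: two kinds of agents plus empty cells on a
# grid. An agent is "unhappy" if fewer than THRESHOLD of its occupied neighbors
# are its own kind, and unhappy agents move to a random empty cell. Even this
# mild preference typically produces strongly clustered neighborhoods.

SIZE, THRESHOLD, STEPS = 20, 0.34, 40

def make_grid(rng):
    cells = (["A", "B", None] * (SIZE * SIZE))[: SIZE * SIZE]  # ~1/3 empty
    rng.shuffle(cells)
    return [cells[i * SIZE:(i + 1) * SIZE] for i in range(SIZE)]

def same_kind_fraction(grid, x, y):
    nbrs = [grid[(x + dx) % SIZE][(y + dy) % SIZE]
            for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]
    occupied = [n for n in nbrs if n is not None]
    if not occupied:
        return 1.0                        # nobody around: counted as content
    return sum(n == grid[x][y] for n in occupied) / len(occupied)

def step(grid, rng):
    empties = [(x, y) for x in range(SIZE) for y in range(SIZE) if grid[x][y] is None]
    for x in range(SIZE):
        for y in range(SIZE):
            if grid[x][y] is not None and same_kind_fraction(grid, x, y) < THRESHOLD:
                ex, ey = empties.pop(rng.randrange(len(empties)))
                grid[ex][ey], grid[x][y] = grid[x][y], None    # unhappy: move away
                empties.append((x, y))

def average_similarity(grid):
    vals = [same_kind_fraction(grid, x, y)
            for x in range(SIZE) for y in range(SIZE) if grid[x][y] is not None]
    return sum(vals) / len(vals)

rng = random.Random(0)
grid = make_grid(rng)
print("before:", round(average_similarity(grid), 2))   # ~0.5: well mixed
for _ in range(STEPS):
    step(grid, rng)
print("after: ", round(average_similarity(grid), 2))   # typically much higher: clustered
```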

There are many ways in which evolution can be guided by both agentic and non-agentic forces. We don't need to draw a line in the sand between cultural change driven by purposeful individual action and that caused by uncontrolled collective behavior. [4]


-------
[1] It also doesn't help that the term originated in a Richard Dawkins book. Any self-respecting scientifically-minded countersignaller wants to distance themselves as much as possible from Richard Dawkins, who is associated mostly with asshole atheists.
[2] See Lakoff and Johnson's Philosophy in the Flesh, or Luke Muehlhauser's summary.
[3] See this great article in The Atlantic, which discusses the segregation model as well as a number of other interesting simulations.
[4] For many more thoughts on purposeful vs. non-purposeful forces in society, see Scott Alexander's magnum post-us Meditations on Moloch. (Warning: discussion of strong AI.)

08 January 2015

(Mostly old) links for January!

My new goal in life is to win the World Pun Championships.

Slate published a good review of the study replication wars - nothing super new, but it gives a very accurate and balanced view of the situation.

It's possible to get by without large chunks of your brain. One 44-year-old man discovered that 50-75% of his brain was displaced by a sac of cerebrospinal fluid. A 24-year-old woman discovered she was born without a cerebellum. A study of six people born without a corpus callosum, the part of the brain most responsible for coordinating information between the left and right hemispheres, showed essentially no behavioral abnormalities or life problems.

The Quantum Pontiff recommends sticking a section of unit tests in academic papers, like people do when making code in the real world.

The Moire Eel. Warning: may cause uncontrollable insight into the nature of pattern. (Move your cursor around to change things.)

Apparently von Neumann advocated for preemptive nuclear strike.
Von Neumann believed that inasmuch as the US-Soviet standoff was a Prisoner’s Dilemma, and inasmuch as both actors were rational and self-regarding, the only defensible policy was immediate attack. Since there was some chance of destroying an adversary’s offensive capability and/or will to retaliate by attacking, the best course of action was to launch now. Many others argued in a similar fashion. As the Joint Chiefs of Staff maintained in 1947, “Offense, recognized in the past as the best means of defense, in atomic warfare will be the only general means of defense."
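
For readers who haven't seen the Prisoner's Dilemma laid out, a toy payoff matrix (the numbers are invented purely for illustration) shows the dominance logic the quote appeals to:

```latex
% Illustrative payoffs only: (row player's payoff, column player's payoff).
\begin{array}{c|cc}
 & \text{Wait} & \text{Attack} \\ \hline
\text{Wait}   & (3,\,3) & (0,\,4) \\
\text{Attack} & (4,\,0) & (1,\,1)
\end{array}
% Attack strictly dominates Wait for each player (4 > 3 and 1 > 0), so two
% "rational, self-regarding" players both attack and end at (1, 1), even
% though mutual waiting at (3, 3) would have been better for both.
```
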
People commonly assume that when solving a problem, taking time away from it (or "incubating") is a good tactic for idea generation. Is this true? A meta-analysis finds, in true science fashion, that the answer is "probably sometimes in certain situations."
The authors identified a positive incubation effect, with divergent thinking tasks benefiting more than linguistic and visual insight tasks from incubation. Longer preparation periods gave a greater incubation effect, whereas filling an incubation period with high cognitive demand tasks gave a smaller incubation effect. Surprisingly, low cognitive demand tasks yielded a stronger incubation effect than did rest during an incubation period when solving linguistic insight problems.
MIT researchers figured out how to reconstruct speech by taking video of a potato chip bag. They look at the way the bag reacts to the sound, and infer speech from that. What the balls.

Pregnancy is a hormonal war between fetus and mother, and menstruation may have evolved due to the tremendous energy resources needed for the fetal brain to develop.



24 December 2014

Ask Simple Questions


As an obvious extension to my post on being obvious, I'd like to add an extra piece of advice- ask simple questions. I think that a super productive way to view fields of art or science is to try to reduce them to the simplest, smallest set of questions one needs to ask in order to think like a practitioner of that field.

In acting, the central triad of questions, as pretty much any good acting teacher will tell you, is:
  • What do I want?
  • What's in the way of what I want?
  • What's my action? (i.e. How do I get what I want?)
The David Mamet school of acting (described in A Practical Handbook for the Actor) essentially insists that there is nothing else to acting besides answering these questions and knowing your lines. I disagree with this as a rule to be followed to the grave, but the reality is that almost every problem that you run into in the course of rehearsing a play can be resolved by answering these questions as precisely as possible.

For physics, it's hard to pin down since different subfields call for different approaches. But the most important questions to ask are:
  • What are the symmetries?
  • What are the invariant quantities?
(Of course, if you are versed in the teachings of Zen Master Noether, you know these are but symmetric approaches to the invariant way.)
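
For the uninitiated, here is a one-line sketch of what Zen Master Noether actually teaches, in its simplest classical-mechanics form:

```latex
% Noether's theorem (simplest classical form): if the Lagrangian L(q, \dot{q})
% is unchanged by a continuous transformation q \to q + \epsilon K(q), then
Q \;=\; \frac{\partial L}{\partial \dot{q}}\, K(q)
% is conserved along the equations of motion. Invariance under spatial
% translation gives conservation of momentum and invariance under rotation
% gives angular momentum; invariance under time translation gives energy,
% via the slightly more general version of the theorem.
```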

In science in general, we have the triad:

  • If things were this way, what would the world be like?
  • If the world is like this, what way would things be?
  • What is the world like?
For Keith Johnstone, the central question in improvisation is:
  • What's the status game?
That is, what is the social rank of each of the characters in the improvisation, and how do their ranks change (or stay the same) over time?

In lighting design, I would say that the two main questions are:
  • Where is the light coming from?
  • What is the actual color of the light?
You don't necessarily need to answer them verbally, but if you ask them of yourself any time you look at a scene, you will start to notice shadows and reflections and subtleties that you never noticed before. I highly recommend doing this whenever you think of it- like right now! (Granted, maybe you shouldn't listen to me since I don't really know anything about lighting design.)

If you don't know what simple questions are best to ask, then just ask any question! So long as it's simple, and can be applied to the situation at hand, it will do the job. This is the philosophy embraced by Brian Eno's card deck of "Oblique Strategies." It consists of a bunch of questions and/or imperative statements and/or otherwise structured small collections of words designed to get you to look at whatever you're doing in a new light. You can find an online deck here.

I'm interested to hear other people's thoughts- what are the most important dumb questions in your field?