Surgical Souvenirs

by Jonah Lehrer

What makes for an effective surgery? At first glance, the answer seems simple: surgery is about brute anatomy. It’s a matter of setting shattered bones, excising malignant tumors and clearing the pipes of the heart. In other words, a good operation fixes the body, that fleshy machine.

But is it really so simple? A new study demonstrates that even straightforward surgeries, such as repairing a “slipped” spinal disk, contain a large mental component that has nothing to do with the back. Here’s Vaughan Bell, over at Mind Hacks:

The researchers, a surgical team from St George's Hospital in London, were aware that anxiety and depression had a major influence on recovery after surgery for a type of spinal disc tear, commonly but inaccurately known as a 'slipped disc'.

They decided to try a simple measure to help patients feel less anxious and bolster their belief that a good job had been done: the surgeons presented randomly selected patients with the removed fragments from their back.

This simple technique had a remarkable effect. Patients given a 'souvenir' of their operation reported greater improvement in sciatic nerve pain, lower back pain, less pins and needles sensations, less leg weakness and a reduced use of pain killers.

The larger point is that our expectations have a profound impact on what we actually experience. If we believe that the surgery went well – and can visualize the fragments of the removed disk – then we’ll have an easier recovery. For too long, modern medicine has subscribed to a Cartesian dualism, in which doctors treat the body and ignore the mind. (Perhaps this is because, for its first 2,000 years, Western medicine consisted solely of the placebo effect. If that mercury tonic worked, it’s because you believed it worked.) Research like this demonstrates that we ignore the beliefs of the patient at our peril.

In Defense of Robert Wright against Jerry Coyne

by Jim Manzi

Jerry Coyne has a fairly scathing review of Robert Wright’s book The Evolution of God in The New Republic. Here’s how the review begins:

Over its history, science has delivered two crippling blows to humanity’s self-image. The first was Galileo’s announcement, in 1632, that our Earth was just another planet and not, as Scripture implied, the center of the universe. The second—and more severe—landed in 1859, when Charles Darwin published On the Origin of Species, demolishing, in 545 pages of closely reasoned prose, the comforting notion that we are unique among all species—the supreme object of God’s creation, and the only creature whose earthly travails could be cashed in for a comfortable afterlife.

But there are some for whom the true evolutionary tale of human life is not sufficiently inspiring or flattering. After all, the tale seems to hold no moral other than this: like all species, we are the result of a purely natural and material process. While many religious people have been persuaded by Darwin’s overwhelming evidence, there still remains a need to find greater meaning behind it all—to see our world as part of an unfolding and divinely scripted plan. …

And so the faithful—the ones who care about science at all—have tweaked the theory of evolution to bring it into line with their needs, to make it more congenial. Although life may indeed have evolved, they say, the process was really masterminded by God, whose ultimate goal was to evolve a species, our species, that is able to apprehend and therefore to admire its creator.

Coyne is an eminent evolutionary biologist, but here he makes an enormous claim about the philosophical implications of science: that evolution through natural selection demonstrates that there is no divine plan for the universe. I think this claim is, in fact, a gigantic leap of faith unsupported by any scientific findings. Let me try to explain why.

I’ll need to begin by considering evolution at a reasonably concrete level. It’s very helpful to look at a representation of the core algorithmic processes that constitute evolution through natural selection while abstracting away, for the moment, many of its biochemical complexities. Genetic Algorithms (GAs) are computer-software implementations of the same kind of information-processing algorithm that plays out in the biological process of evolution through natural selection. Today, GAs are a widely deployed software-engineering tool, used to solve such prosaic problems as optimally scheduling trucks on a delivery route or identifying the best combination of process-control settings to get maximum output from a factory.

Consider the example of a chemical plant with a control panel that has 100 on/off switches used to regulate the manufacturing process. You are given the task of finding the combination of switch settings that will generate the highest total output for the plant. How would you solve the problem? One obvious approach would be to run the plant briefly with each possible combination of switch settings and select the best one. Unfortunately, even in this very simplified example there are 2^100 possible combinations. This is a surprisingly gigantic number — much larger, for instance, than the number of grains of sand on Earth. We could spend a million lifetimes trying various combinations of switches and never get to most of the possible combinations.
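To anchor that arithmetic, here is a quick back-of-the-envelope check (a minimal sketch in Python; the grains-of-sand figure is a commonly cited rough estimate, not a measurement):

```python
# Scale check for the 2^100 claim.
combinations = 2 ** 100
print(f"{combinations:.3e}")  # about 1.268e+30

# A commonly cited rough estimate puts the number of grains of sand
# on Earth on the order of 10^18 to 10^19 (an assumption, not a measurement).
grains_of_sand = 10 ** 19
print(combinations // grains_of_sand)  # combinations outnumber grains ~10^11-fold
```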

But there’s a trick that can help us. Once we start to try combinations, we might begin to notice patterns like “when switches 17 and 84 are set to ‘on,’ production tends to increase when I put switch 53 to the ‘off’ position.” Such insights could help us to narrow down our search, and get to the answer faster. This might not seem to be much help in the face of such an enormous number of possibilities, but the power of these rules is also surprising.

To illustrate this, think of a simple game: I pick a random whole number between one and a billion, and you try to guess it. If the only thing I tell you when you make each guess is whether you are right or wrong, you would have very little chance of guessing my number even if I let you guess non-stop for a year. If, however, I tell you whether each guess is high or low, there is a procedure that will get the exact answer within about 30 guesses. You should always guess 500 million first. For all subsequent guesses, you should always pick the mid-point of the remaining possibilities. If, for example, the response to your opening guess of 500 million is that you are too high, your next guess should be the mid-point of the remaining possibilities, or 250 million. If the response to this second guess is “too low,” then your next guess should be the mid-point of 250 million and 500 million, or 375 million, and so on. You can find my number within about a minute.
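Here is that halving procedure as a short Python sketch (the function name is hypothetical; the bounds follow the example above):

```python
def guess_number(secret, low=1, high=1_000_000_000):
    """Find `secret` by always guessing the midpoint of the remaining range."""
    guesses = 0
    while True:
        guess = (low + high) // 2
        guesses += 1
        if guess == secret:
            return guesses
        elif guess > secret:  # "too high": discard the upper half
            high = guess - 1
        else:                 # "too low": discard the lower half
            low = guess + 1

print(guess_number(314_159_265))  # about 30 guesses for any number in range
```

Each response eliminates half of the remaining possibilities, so the number of guesses grows only logarithmically with the size of the range.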

A Genetic Algorithm works on roughly the same principle. To return to our problem of the 2^100 possible combinations of switch settings, we can use a GA as an automated procedure to sort through the vast “search space” of possibilities — and thus home in quickly on the best one. This procedure has the same three elements as our procedure for guessing the number: a starting guess, a feedback measurement that gives some indication of how good any guess is, and an iterative method that exploits this feedback to improve subsequent guesses.

In order to establish the initial guess for the GA, imagine writing a vertical column of 100 zeroes and ones on a piece of paper. If we agree to let one=“turn the switch on” and zero=“turn the switch off,” this could be used as a set of instructions for operating the chemical plant. The first of the hundred digits would tell us whether switch 1 should be on or off, the second would tell us what to do with switch 2, and so on all the way down to the 100th switch.
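As a concrete illustration, here is one way to encode that instruction list (a sketch; the names are hypothetical, and a list of integers is just one reasonable representation):

```python
import random

N_SWITCHES = 100

# A genome: a list of 100 zeros and ones, one digit per switch.
genome = [random.randint(0, 1) for _ in range(N_SWITCHES)]

def to_instructions(genome):
    """Translate a genome into switch settings (1 = on, 0 = off)."""
    return {f"switch_{i + 1}": ("on" if bit else "off")
            for i, bit in enumerate(genome)}
```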

This is a pretty obvious analogy to what happens with biological organisms and their genetic codes — and therefore, in a GA, we refer to this list as a “genome.” The mapping of genome to physical manifestation is termed the genotype-phenotype map.

Our goal, then, is to find the genome that will lead the plant to run at maximum output. The algorithm creates an initial bunch of guesses — genomes — by randomly generating, say, 1,000 strings of 100 zeros and ones. We then do 1,000 sequential production runs at the factory, by setting the switches in the plant to the combination of settings indicated by each genome and measuring the output of the plant for each; this measured output is termed the “fitness value.” (Typically, in fact, we construct a software-based simulation of the factory that allows us to run such tests more rapidly.) Next, the program selects the 500 of the 1,000 organisms that have the lowest fitness values and eliminates them. This is the feedback measurement in our algorithm — and it is directly analogous to the competition for survival of biological entities.

Next comes the algorithmic process for generating new guesses, which has two major components: crossover and mutation. These components are directly modeled on the biological process of reproduction. First, the 500 surviving organisms are randomly paired off into 250 pairs of mates. The GA then proceeds through these pairs of organisms one at a time. For each pair it flips a coin. If the coin comes up heads, then organism A “reproduces” with organism B by simply creating one additional copy of each; this is called direct replication. If it comes up tails, then organism A reproduces with organism B via “crossover”: The program selects a random “crossover point,” say at the 34th of the 100 positions, and then creates one offspring that has the string of zeroes and ones from organism A up to the crossover point and those from organism B after the crossover point, and an additional offspring that has the string of zeroes and ones from organism B up to the crossover point and those from organism A after the crossover point. The 500 resulting offspring are added to the population of 500 surviving parents to create a new population of 1,000 organisms. Finally, a soupçon of mutation is added by randomly flipping roughly every 10,000th digit from zero to one or vice versa.

The new generation is now complete. Fitness is evaluated for each; the bottom 500 are eliminated, and the surviving 500 reproduce through the same process of direct replication, crossover, and mutation to create the subsequent generation. This cycle is repeated over and over again through many generations. The average fitness value of the population moves upward through these iterations, and the algorithm, in fits and starts, closes in on the best solution.
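Putting the cycle together, here is a minimal sketch of the whole GA as described above. The `plant_output` fitness function is a toy stand-in for the factory (or its simulation); the population size, selection rule, coin-flip reproduction, and mutation rate follow the figures in the text:

```python
import random

GENOME_LEN, POP_SIZE, MUTATION_RATE = 100, 1000, 1 / 10_000

def plant_output(genome):
    """Toy stand-in fitness function: in reality we would measure (or
    simulate) factory output for this combination of switch settings."""
    return sum(genome)  # on this toy landscape, more 'on' switches means more output

def next_generation(population):
    # Selection: eliminate the 500 organisms with the lowest fitness.
    survivors = sorted(population, key=plant_output, reverse=True)[:POP_SIZE // 2]

    # Reproduction: random pairs; a coin flip per pair chooses
    # direct replication or single-point crossover.
    random.shuffle(survivors)
    offspring = []
    for a, b in zip(survivors[0::2], survivors[1::2]):
        if random.random() < 0.5:              # heads: direct replication
            offspring += [a[:], b[:]]
        else:                                  # tails: crossover
            cut = random.randrange(1, GENOME_LEN)
            offspring += [a[:cut] + b[cut:], b[:cut] + a[cut:]]

    population = survivors + offspring         # back to 1,000 organisms

    # Mutation: flip roughly every 10,000th digit.
    for genome in population:
        for i in range(GENOME_LEN):
            if random.random() < MUTATION_RATE:
                genome[i] ^= 1
    return population

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]
for _ in range(50):
    population = next_generation(population)
print(max(plant_output(g) for g in population))  # best fitness climbs toward 100
```

On this toy landscape the best genome is obviously all ones; the point of the sketch is only the mechanics of selection, crossover, and mutation, which work the same way no matter how opaque the real fitness function is.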

This seems like a laborious process — but it works: it helps us get the factory to very high output much faster than we could otherwise. Computer scientists were inspired to do it this way because they observed the same three fundamental algorithmic operators — selection, crossover, and mutation — accomplish a similar task in the natural world. Notice that the method searches a space of possible solutions far more rapidly than random search, but it neither requires nor generates beliefs about the causal relationship between patterns within the genome and fitness beyond the raw observation of the survival or death of individual organisms. This is what makes the approach applicable to such a vast range of phenomena. That such a comparatively simple concept can explain so much about the way nature works is what makes genetic evolution a scientific paradigm of stupendous beauty and power. As Leonardo put it, simplicity is the highest form of sophistication.

We can make two simple observations about the properties of this GA. First, our factory evolution process did not begin ex nihilo. It required pre-existing building blocks in the form of the initial population and the rules of the algorithm. Second, one of the 2^100 possible combinations of switch settings will produce the highest output. With enough time, the algorithm will always converge on this one answer. The algorithm is therefore the opposite of goalless: it is, rather, a device designed to tend toward a specific needle in a haystack — the single best potential result.

These two observations are highly relevant to our consideration of the philosophical problems of creation (first cause) and purpose (final cause).

It is obvious from the factory analogy that evolution does not eliminate the problem of ultimate origins. Physical genomes are composed of parts, which in turn are assembled from other subsidiary components according to physical laws. We could, in theory, push this construction process back through components and sub-components all the way to the smallest sub-atomic particles currently known, but we would still have to address the problem of original creation. Even if we argue that prior physical processes created matter, just as the GA spontaneously generates its initial population, we are still left with the more profound question of the origin of the rules of the physical process itself.

This, of course, is a very old question that far pre-dates modern science. A scientific theory is a falsifiable rule that relates cause to effect. If you push the chain of causality back far enough, you either find yourself more or less where Aristotle stood more than 2,000 years ago, holding that any chain of cause-and-effect must ultimately begin with an Uncaused Cause, or you simply accept the problem of infinite regress. No matter how far science advances, an explanation of ultimate origins seems always to remain a non-scientific question.

Now consider the relationship of the second observation to the problem of final cause. The factory GA, as we saw, had a goal. Evolution in nature is more complicated, but the complications don’t mean that the process is goalless, only that determining its goal would be so incomprehensibly hard that in practice the question falls into the realm of philosophy rather than science. Science cannot tell us whether evolution through natural selection has some final cause; and if we believe, for some non-scientific reason, that evolution has a goal, then science cannot, as of now, tell us what that goal might be.

One important complication is that evolution in nature proceeds against a more complex fitness function than “see how much output this factory creates.” The natural fitness landscape is defined by survival and reproduction, and it is constantly changing as the environment changes — for example, as new species arise or the climate becomes colder. It is prohibitively difficult to calculate the result of this process, but it is, in principle, calculable; the fitness landscape, after all, is only the product of the interaction of other physical processes.

A second major complication is that genetic strings in nature have complex structures and can evolve to some arbitrary length, unlike our factory example, where the genome had a single string with a fixed length of 100 positions. But, even in nature, the genome must always have finite dimension, as regulated by physical laws; and therefore the total number of potential combinations of genetic components remains finite. It is often said, correctly, that the number of possible genetic combinations is “all but infinite”; but of course this is just a very loaded way of saying “finite.”

The combination of a constantly changing fitness landscape and an extraordinarily large number of possible genomes means that scientists appropriately proceed as if evolution were goalless, but from a philosophical perspective a goal may remain present in principle.

But what about the random elements of evolution – how can randomness possibly comport with a goal? First, note that in the factory example, randomness did not affect the goal, merely the path taken to the goal. Further, it’s especially important that we be clear about our terms here, since a lot of philosophical baggage can get swept into the term “random”. It is often used loosely in discussions of evolution to imply that occurrences are senseless, at bottom beyond understanding. But in fact, even the “random” elements of evolution that influence the path it takes toward its goal – for example, mutation and crossover – are really pseudo-random. If a specific mutation is caused by radiation hitting a nucleotide, both the radiation and its effect on the nucleotide are governed by normal physical laws. The uncertainty in our descriptions of evolution, which as a practical matter we refer to as randomness, is reducible to the impracticality of building a model that comprehensively considers such things as the idiosyncratic path of every photon in the universe, compounded by the quantum-mechanical uncertainty present in the fundamental physical laws that govern the motion of such particles. Said more precisely: the evolutionary process does not add any incremental randomness to outcomes beyond what is already present in other physical laws, simply such great complexity that scientists are well-advised to treat the process as if it were goalless. We currently lack the capability to compute either the goal or the path of evolution, but that is a comment about our limitations as observers, not about the process itself.

The theory of evolution, then, has not eliminated the problems of ultimate origins and ultimate purpose with respect to the development of organisms; it has ignored them. These problems are defined as non-scientific questions, not because we don’t care about the answers, but because attempting to solve them would impede practical progress. Accepting evolution, therefore, requires neither the denial of a Creator nor the loss of the idea of ultimate purpose; it resolves neither issue for us one way or the other. The field of philosophical speculation that does not contradict any valid scientific findings is far more open to Wright than Coyne is willing to accept.

Stomped On What Exactly?

by Chris Bodenner

Joe Klein has two main reasons for sticking it out in Afghanistan:

One is that the U.S. has a real national security interest in Afghanistan. We don't want to see it revert to its former status – run by Taliban extremists who provide a safe haven for Al Qaeda […] The other answer is that we have a moral obligation to the Afghan people, just as we had to the Iraqis when we stomped in there and destroyed the most basic institutions of civil society.

Crowley takes exception:

Really? America was attacked by people who were given safe harbor by Afghanistan's government. Our retaliation was well within the bounds of international conduct, as opposed to George W. Bush's extremely debatable war of choice in Iraq. It also did not involve the vast infrastructural damage that we inflicted on Iraq. (Primitive and endlessly war-torn Afghanistan just doesn't have that much infrastructure to destroy.) But even if we concede Klein his point, it feels dangerously open-ended. […] And how many [Afghans] are willing to be killed as collateral combat damage while we strive to meet that goal?

Means Without Ends

by Patrick Appel

Conor Friedersdorf, citing Manzi's writings on torture, wants to know why those who support the practice only look at half the picture:

In praising the treatment of KSM, and arguing that it ought to be standard detainee policy, McCarthy and like-minded pundits never consider the significant strategic drawbacks to the tactic of torture—among them eliciting false intelligence that squanders man hours; the fact that a torture policy causes some upstanding intelligence professionals to resign, and others to remove themselves from interrogations, hurting our capacity to gather good intelligence; that torture pushes more Muslims into the radical camp, increases anti-American sentiments, aids terrorist recruiting efforts, and undermines support for the war on terror even among significant numbers of Americans; that it causes allied countries to cooperate less with our counterterrorism efforts; that it reduces the morale of soldiers and intelligence professionals; and that “enhanced interrogation techniques” have demonstrably bled into military prisons, undermining our mission in a critical theater and leading to the rightful imprisonment of American soldiers, who were denounced even by the Bush administration.

What kind of national-security analyst ignores all that to argue that because KSM was waterboarded, sleep deprived, and later gave some useful information, the strategic case for “enhanced interrogation” is definitely vindicated?

This Is What Partisanship Looks Like

by Patrick Appel

Allahpundit places party over principle and praises Steele's Medicare ad:

Think of this as an attempt to set up a little “death panel” for the Democrats’ reelection chances, with grandma and grandpa seated on the other side of the table…this ad will sting a few years from now when we have to have a chat with granny about Medicare’s insolvency. In the meantime, carpe diem.

His commenters are not happy.

Things That Make You Go Hmmm, Ctd.

by Andrew

Well now: this is interesting, isn't it:

Sarah told me she had a great idea: we would keep it a secret—nobody would know that Bristol was pregnant. She told me that once Bristol had the baby she and Todd would adopt him. That way, she said, Bristol and I didn’t have to worry about anything. Sarah kept mentioning this plan. She was nagging—she wouldn’t give up. She would say, “So, are you gonna let me adopt him?” We both kept telling her we were definitely not going to let her adopt the baby. I think Sarah wanted to make Bristol look good, and she didn’t want people to know that her 17-year-old daughter was going to have a kid.

So, according to Levi, Governor Palin was very, very interested in avoiding embarrassment for her daughter – and a political problem – by passing off someone else's child as her own and adopting him. This kid's name was Tripp. But this exercise is called "proof of principle." If anyone believed that Palin wasn't nutty enough to try to pass off her own daughter's baby as her own, they need to reassess.

F

by Chris Bodenner

Michelle Cottle sounds off on the mini-scandal in the Virginia governor's race:

Outlining beliefs [in his senior thesis] that would have made Regent founder Pat Robertson swoon with ecstasy, [Robert] McDonnell makes clear his disgust with (among many, many other things) gays, working women, and "fornicators." (McDonnell disapproved of the Supreme Court's 1972 decision legalizing birth control for unmarried couples.) Democrats are, predictably, rushing to hang this bit of Neanderthalism around McDonnell's neck as he tries to woo moderates and female voters. […]

I find myself torn in this fight. In general, I find the obsession with politicians' student writings excessive. Most of these papers spring from the brains of people in their early- to mid-20s who have spent the past several years in the self-indulgent cocoon of academia. […] Many even (gasp!) change their views as they trudge through the big, wide, complicated world.

I had those same sympathies when I first glanced at the story's headline. Then, I learned: a) the thesis was written in 1989, and b) McDonnell was 34. If he had been a 22-year-old, sheltered undergrad in 1989, I might still have those sympathies.

How Green Is This Computer?

by Patrick Appel

The Economist reports:

According to a report published by the Climate Group, a think-tank based in London, computers, printers, mobile phones and the widgets that accompany them accounted for the emission of 830m tonnes of carbon dioxide around the world in 2007. That is about 2% of the estimated total of emissions from human activity. And that is the same as the aviation industry’s contribution. According to the report, about a quarter of the emissions in question are generated by the manufacture of computers and so forth. The rest come from their use.