31 October 2005

Myers Explains it All

Girding his loins, hitching up his pants, taking the bit between his teeth, and otherwise preparing himself for an unsavory task, the famous PZ Myers (AKA Pharyngula) spent one of his introductory biology classes (that’s Biology 1101 at U. Minnesota, Morris) on creationism. He has gallantly linked to the PowerPoint presentation that he used. As you might expect from his long history battling this particular demon, it is a fine summary of the more egregious recent claims and their rebuttals.

I am happy to say that I helped (if indirectly) in putting it together. Some months ago, I created a slide show about “irreducible complexity” as part of a presentation to a Northern Virginia group interested in maintaining scientific integrity in local schools. I adapted some of Myers’ commentary, and sent a copy to him to check over. To my immense pleasure, I see that he has incorporated a modified version of my slides into his lecture.

Much of what is being sought in the ‘teach the controversy’ movement would be better characterized as ‘lying to the students’. As the fight against claptrap moves to more school systems, it is important to have resources like this lecture generally available. Kudos to PZ Myers for generously contributing his labor to this cause.

28 October 2005

Sweet Statistic

It is probably true that astronomers are different from other people in a variety of ways. We are generally more comfortable with large numbers and physical phenomena of vast scale. My wife, on the other hand, can’t think about the expansion of the universe without getting nauseated. (I recall a character in a Peter de Vries novel who, upon encountering the expansion of the universe in class, took to his bed for a week with the vapors.) We are also, I think, adapted to seeing the beauty of the cosmos in the statistical analysis of heaps of numbers.

But seldom have I seen a statistic as sweet as the one in today’s Washington Post. (I haven’t bothered to link to it, because it will become inaccessible within a week anyway.)

The Office of Personnel Management has loaded the database of federal employment facts into a search system that enables the average citizen to run a variety of analyses of what her tax dollars are buying in the way of human resources. An enterprising reporter for the Post has culled the latest (FY05) information on average base salaries for various federal jobs. (The average for all, by the way, is $63,715.)

The average salary for government astronomers is $115,634.
The average salary for government lawyers is … $115,111.

The average government astronomer is better paid than the average government lawyer. True, it’s not much of a difference. And the lawyers outnumber the astronomers by about 50 to 1.
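For the record, the arithmetic (a trivial check in Python; the 50-to-1 headcount ratio is the Post’s figure, not derived here):

```python
# Average FY05 base salaries from the Post's OPM query
astronomer = 115_634
lawyer = 115_111

difference = astronomer - lawyer
print(difference)           # 523
print(difference / lawyer)  # roughly 0.0045, i.e. under half a percent
```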

But still, it’s sweet.

19 October 2005

Teaching Alchemical Thinking

Another thought that diverted me from a linear reading of Bell’s biography of Lavoisier:

Sir Francis Bacon laid out the case for experimental validation of theories in the Novum Organum in 1620. Its importance was grasped almost immediately. For example, Sir Thomas Browne exploited it in his debunking of numerous popular beliefs, published as the Pseudodoxia Epidemica starting in 1646. (The link is to the final 1672 edition.) And yet, as protoscientists like Boyle, Glauber, and Newton extended and absorbed the results of laboratory work, they still persisted in casting things in the framework of traditional alchemy. Why did so much alchemical thinking persist into the age of experiment?

Bell notes that, while alchemy provided a scheme for ordering and understanding the very diverse facts of chemical experimentation, it also mandated a philosophical stance in which the real world of the lab is but a pale image of underlying truth. Experiments were illustrative, rather than confrontational. Bell quotes a French historian, Robert Halleux:
“In alchemy, the laboratory has no crucial role. The function of the practice is first and foremost to illustrate the truth of the theory. The success of a procedure demonstrates to the operator that he has understood the ancients well. The quality of the practice is the direct consequence of the level of understanding of the theory. For if the experiment fails, the failure does not weaken the theory.”
(“Pratique industrielle et chimie philosophique de l’Antiquité au XVIIe siècle,” L’Actualité chimique, January-February 1987, p. 19.)

As I read that, I had a vision of almost every K-12 science lab I’d ever been in, as either student or teacher.

It’s not like I don’t understand why. There’s not enough time in the school day to be able to spend much time on a single lab experiment. There’s not enough time in the school year to spend exploring a topic by real experimentation. There’s not enough money to provide the kind of equipment you’d need. There’s no place to do it that doesn’t have to get cleared up within the hour for the next class. There’s just no way. Lab science in school is like music appreciation – watching the performance, but never learning how to handle the instruments.

But is it any wonder that, when we’re done, the kids come out with an understanding of the scientific process that’s, well, 16th century?

18 October 2005

Leveling the Playing Field by Making Your Own

I’ve just finished reading Madison Bell’s biography of Lavoisier, a pleasant (if concise) review of his major scientific work and the political forces that brought him to the guillotine. One of the episodes reminded me of an argument that I’ve been making about Intelligent Design and scientific publication, so I thought I’d repeat myself.

In 1787, Lavoisier (working with Berthollet, Fourcroy, and Guyton de Morveau) published Méthode de nomenclature chimique, in which he recommended what has come to be modern chemical nomenclature. The idea was to provide “rather a method of naming than a nomenclature”, in which the name would express chemical relationships. Thus, the names of elements reflected their characteristics (oxygen = acid-making; hydrogen = water-making; nitrogen = niter-making). The names of compounds used suffixes to describe a relevant quality of their composition; for example, calcium nitrate has a higher oxygen content than calcium nitrite.
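The suffix rule can be put in toy form (my own sketch, using modern formulas rather than anything from the Méthode itself):

```python
# Toy sketch of suffix-based nomenclature: "-ate" marks the higher-oxygen
# anion, "-ite" the lower. Formulas and counts are modern, not Lavoisier's.
anions = {
    "nitrate": ("NO3", 3),   # suffix -ate: more oxygen
    "nitrite": ("NO2", 2),   # suffix -ite: less oxygen
    "sulfate": ("SO4", 4),
    "sulfite": ("SO3", 3),
}

def oxygen_count(name):
    """Return the number of oxygen atoms in the named anion."""
    return anions[name][1]

assert oxygen_count("nitrate") > oxygen_count("nitrite")
assert oxygen_count("sulfate") > oxygen_count("sulfite")
```

The point of the system being exactly this: the name alone carries the chemical relationship.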

Bell argues that Lavoisier’s rationale for proposing this change was only partly due to his personal preference for imposing order on the confusing system of names then in existence. In addition, by getting chemists to adopt the name ‘oxygen’, he was intentionally trying to get them to buy into his theory of combustion, which (although eventually triumphant) was still a controversial alternative to the phlogiston theory. Traditional chemists were not unaware of this extra-scientific aspect, and were not eager to adopt its anti-phlogiston bias. Thus, the system was not initially well received, even by scientists otherwise disposed towards Lavoisier’s ideas (such as Franklin).

The point I want to bring out, though, is Lavoisier’s response to the considerable opposition expressed in the primary research journal, the Journal de physique. Recognizing that they could not easily get papers using the new terminology into that journal, Lavoisier and friends founded a competing one, the Annales de chimie. That provided them with a forum whose editorial policy did not automatically discriminate against their approach. Of course, the modern policy of peer review of journal articles was not yet in place, but the papers of the Lavoisier group were still reviewed by members of the Academy of Science (at least, until it was disbanded by the Assembly).

Annales de chimie is still published. In 1816, it became Annales de chimie et de physique and, in 1914, split into the two journals Annales de chimie and Annales de physique.

I hadn’t encountered this piece of history before, and was struck by the similarity to what Huxley and Darwin did in the 1860s. The story is told in Janet Browne’s biography of Darwin. In order to have a reliable channel for publishing papers on Darwin’s theory of evolution, Huxley attempted twice to start a new journal. The first only lasted for a year before going under, but the second, Nature, was more successful, becoming one of the two most important general science journals published today.

One will occasionally hear advocates of Intelligent Design complain that the reason they do not have published papers in refereed journals is that they are automatically discriminated against by the editors and referees of mainstream publications. Michael Behe made this comment in his testimony at the Dover PA trial about teaching ID in the public schools (Kitzmiller et al. v. Dover Area School District). In fact, they have published a few articles in mainstream journals, but the bulk of ID papers are found elsewhere. (The Discovery Institute has a lengthy excuse for this, but as other bloggers have already discussed it, I won’t bother.)

But with all the financial resources that the ID movement has at hand, one wonders why they don’t just start their own journal and publish there. Well, they do. The International Society for Complexity, Information, and Design publishes a quarterly journal, Progress in Complexity, Information, and Design. ISCID describes it as a
“cross-disciplinary, online journal that investigates complex systems apart from external programmatic constraints like materialism, naturalism, or reductionism. PCID focuses especially on the theoretical development, empirical application, and philosophical implications of information- and design-theoretic concepts for complex systems. PCID welcomes survey articles, research articles, technical communications, tutorials, commentaries, book and software reviews, educational overviews, and controversial theories. The aim of PCID is to advance the science of complexity by assessing the degree to which teleology is relevant (or irrelevant) to the origin, development, and operation of complex systems.”

Papers submitted to PCID are reviewed by fellows of the ISCID, who comprise most of the well-known names in ID.

So, it seems to me that what we all (pro- and anti-ID both) need to do is let the normal process of science take its course. Proponents of ID can be assured of a friendly venue in PCID. Others can be assured of finding papers on ID there. If the ideas and results that are published there are seen as important or useful to other work, then they will get cited by scientists using them. Over time, we will either see citations to PCID grow, as the field becomes increasingly important, or not, as it becomes clear that it is a dead end.

And, should the former prove to be the case, then it will eventually be appropriate to include ID in the curriculum. But only then.

01 October 2005

Sharing Minds

The latest issue of Nature had a story about new technology for assisting stroke victims and paraplegics. Because the act of thinking about moving your body activates the same neurons that are involved in the act of moving itself, it is possible to measure the relevant neuronal activity, decode the intent, and control devices accordingly (G. Pfurtscheller and C. Neuper, "Motor imagery and direct brain-computer communication," Proceedings of the IEEE, vol. 89, pp. 1123-1134, 2001.). The technique has been used to enable paralyzed patients to control robots and to ‘key’ messages into computers. Until now, though, it involved the fairly invasive process of implanting sensors directly into the brain.

At a conference earlier this week on virtual reality and ‘telepresence’, held at University College London, a group from Graz, Austria, reported on experiments in which several people, each wearing what looks to be a modified bathing cap covered with sensors, were able to direct a simulated walk through a virtual environment (Leeb R., et al., “Walking from thoughts: not the muscles are crucial, but the brain waves!,” Presence 2005; the link is to the entire 15 MB meeting proceedings, so link cautiously). The sensor cap does not measure the neuronal pattern directly, but rather the EEG, from which the computer derives a control signal. It is not an easy device to work, requiring a training process that can be difficult. One of the authors said that it took him about five hours to learn the fairly simple control of moving versus standing still.

This is, of course, a good thing with tremendous potential for enhancing the lives of the physically disabled. And yet, it has me wondering.

Imagine how this technology might develop. At present, it only measures integrated signals. Given our understanding of how to map out neuronal activity with fMRI and PET and other tomographic methods, one can anticipate using devices similar to the Graz brain interface cap to infer the full map of brain activity associated with motor actions. Of course, it would require different kinds of sensors, which would themselves require significant miniaturization of existing devices. But none of this development is forbidden by the laws of physics.

Now, suppose we have two people wearing these “full capability brain interface” caps. Connect both to the computer. Call one the agent and the other the replicant (for reasons that will become obvious). We don’t require that the agent imagine an activity; she can actually perform it. In this way, we get the neuronal elements of both the intentional and performance aspects of the activity, including all the musculoskeletal feedbacks. And we don’t require that the computer actually map out the agent’s neuronal patterns. We just have the computer make some comparison between the measurements on the agent and those on the replicant. (It may be that this is most accurately done by mapping both brains and comparing the maps, but it isn’t clear that that is essential.) This gives us a difference signal.

That signal could be quite complex, carrying information about a number of characteristics distinguishing the two neuronal maps. Such complex difference signals could be constructed to be fairly vivid, e.g. by coding them as music, using multiple pitches, timbres, etc. The result, then, is something that could be used as a feedback signal, which we give to the replicant.

Would it be possible to use that feedback process to train the replicant to duplicate the neuronal map of the agent, by carrying out the same activity? (There are some difficult details here on how you change the signal to encourage changes toward, or discourage changes away from, the agent’s map, but those seem to be technical, rather than conceptual, problems.) Would it be possible to do this not just for motor activity but for perceptual activity? For feelings? When the feedback signal has been zeroed, are the replicant’s thoughts the same as the agent’s?
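Reduced to a cartoon, the loop I have in mind looks like this (a sketch only: the vector ‘maps’, the dimensions, and especially the direct update rule standing in for the replicant’s learning are all invented for illustration):

```python
import random

# Toy sketch of the difference-signal loop. Brain "maps" are just vectors
# of activity levels here; nothing about real neuronal data is implied.
DIM = 16
agent = [random.random() for _ in range(DIM)]        # agent performs the act
replicant = [random.random() for _ in range(DIM)]    # replicant attempts it

def difference_signal(a, b):
    """The feedback the replicant receives: one number per channel,
    which in the scheme above might be sonified as pitches or timbres."""
    return [x - y for x, y in zip(a, b)]

def feedback_step(rep, signal, rate=0.2):
    """Stand-in for learning: the replicant drifts so as to shrink the
    signal. In reality this would be slow trial-and-error guided by the
    sonified feedback, not a direct update."""
    return [r + rate * s for r, s in zip(rep, signal)]

for _ in range(100):
    signal = difference_signal(agent, replicant)
    replicant = feedback_step(replicant, signal)

# When the feedback has been (nearly) zeroed, the two maps coincide.
assert max(abs(s) for s in difference_signal(agent, replicant)) < 1e-6
```

The real question, of course, is whether a human brain, given only the encoded signal, could accomplish what `feedback_step` here does by fiat.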

Philosophers and cognitive scientists assure us that the description of a mental state is not the same thing as having the mental state itself. But what if the description of the mental state is a perfect replication of the mental state?

As I said at the outset, there are obvious technical difficulties in implementing these “mind-sharing” caps. But it ought to be feasible to try out elements of this process with existing measurement devices. You could pipe the signal from an agent in one MRI machine out to a lab in another hospital where the replicant is in a second MRI machine. The rest is just software. It would be an interesting test, I think.