Massimo Piattelli-Palmarini is Professor of Linguistics and Cognitive Science at the University of Arizona.
His work applies math to biology in an absolutely fascinating way.
I was honored and thrilled to interview him. Below is my interview with him, which I edited for flow and to which I added hyperlinks.
During editing, Dr. Piattelli-Palmarini and I received the sad news that Richard Lewontin had passed away. Here’s an excellent remembrance of Lewontin, whose work had a major impact on biology and will continue to do so.
1) What are the most exciting projects that you’re currently working on?
My work with the distinguished Italian physicist Giuseppe (“Peppino” to his friends) Vitiello explores parallels between Minimalism and quantum field theory. See our 2015 paper “Linguistics and Some Aspects of Its Underlying Dynamics”, our 2017 paper “Quantum field theory and the linguistic Minimalist Program”, and our 2017 paper “Third Factors in Language Design”.
At a minimum, we see analogies. But maybe we have something deeper: third-factor principles—that is, instantiations of basic laws of physics in language.
I’m also working on the syntax and semantics of referring to oneself (attitudes de se), which was pioneered some years ago by the late James Higginbotham (a dear friend) and by Gennaro Chierchia (another dear friend). In this area, there are interesting differences and commonalities across different languages. I was supposed to take a sabbatical in the fall of 2020 to go to Harvard and work with Chierchia on this, but this plan has been deferred to 2022 because of Covid.
2) What are the most exciting projects that you know of that others are working on?
Let me mention some projects at different levels and in (for the time being) different fields.
Noam Chomsky’s latest developments towards refining the Strong Minimalist Thesis.
Sandiway Fong’s parallel developments towards a truly satisfactory computational linguistics.
And in the neurosciences, the discovery of microtubules’ immensely complex activity inside each neuron (due to Stuart Hameroff, Anirban Bandyopadhyay, and Roger Penrose): oscillations from a few hertz all the way up to terahertz, with triplets inside triplets inside triplets (a fractal structure). No one yet knows how to integrate these signals with the activity of millions of neurons, but in the near future, I think, this connection will be made, and it will be a revolution in the neurosciences.
3) What’s exciting about applying math to biology?
Well, math and physics and chemistry together, as started by Alan Turing with his pioneering 1952 work “The Chemical Basis of Morphogenesis”.
Biology needs mathematics at many levels: from the Lotka–Volterra equations of predator–prey dynamics, to the innumerable applications of Fibonacci-structures in botany and in embryology, and all the way to the brain and microtubules. The integrations between math and biology are too numerous and too complex even to list.
With David Medeiros and Juan Uriagereka, I’ve traced the presence of Fibonacci-structures in syntax and phonology. See my 2004 paper with Uriagereka “The Immune Syntax”, my 2005 paper with Uriagereka “The Evolution of the Narrow Faculty of Language”, and my 2008 paper with Uriagereka “Still a bridge too far?”.
This Fibonacci-integration is very interesting. See my 2018 paper with Medeiros “The Golden Phrase”.
4) What’s the earliest instance (in the history of science) where math was applied to biology in an interesting way?
The equations of Alfred Lotka and Vito Volterra that describe predator–prey interactions and that identify limit-cycles, regions of cyclic stability, and regions of destructive instability.
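The predator–prey dynamics can be sketched numerically. Here is a minimal Python simulation using simple Euler steps; the parameter values and starting populations are purely illustrative, not taken from any specific study:

```python
# Lotka–Volterra predator–prey model, integrated with Euler steps.
#   dx/dt = alpha*x - beta*x*y   (prey)
#   dy/dt = delta*x*y - gamma*y  (predators)
# Parameter values below are invented for illustration.

def simulate(x0, y0, alpha=1.0, beta=0.1, delta=0.075, gamma=1.5,
             dt=0.001, steps=20000):
    """Return the (prey, predator) trajectory from (x0, y0)."""
    x, y = x0, y0
    trajectory = [(x, y)]
    for _ in range(steps):
        dx = (alpha * x - beta * x * y) * dt
        dy = (delta * x * y - gamma * y) * dt
        x, y = x + dx, y + dy
        trajectory.append((x, y))
    return trajectory

traj = simulate(x0=10.0, y0=5.0)
# Qualitatively: prey peak first, predators peak shortly after,
# then both decline and the cycle repeats around the equilibrium
# point (gamma/delta, alpha/beta).
```

With these values the populations cycle around the equilibrium, the numerical analogue of the limit-cycles mentioned above; pushing the parameters into other regions produces the unstable regimes instead.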
Then came the pioneering work of D’Arcy Thompson, with his monumental volume On Growth and Form that reconstructed the topology of morphogenesis in a huge variety of species.
Aristid Lindenmayer established his “grammars” of growth and differentiation in the world of botany, which have been more recently revived and applied to neurons’ dendritic growth and—in linguistics—to classes of grammars in the literal sense of “grammar”.
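Lindenmayer’s grammars are easy to sketch in code. Below is his original algae model, the simplest L-system, in Python: two rewrite rules applied in parallel to every symbol. Notably, the lengths of the successive strings follow the Fibonacci sequence, connecting back to the Fibonacci-structures discussed above:

```python
# Lindenmayer's algae L-system: axiom "A", rules A -> AB and B -> A,
# applied simultaneously to every symbol at each generation.

RULES = {"A": "AB", "B": "A"}

def rewrite(s, rules=RULES):
    """Apply all rewrite rules in parallel to every symbol of s."""
    return "".join(rules.get(ch, ch) for ch in s)

def generations(axiom="A", n=8):
    """Return the axiom plus n successive rewritten generations."""
    strings = [axiom]
    for _ in range(n):
        strings.append(rewrite(strings[-1]))
    return strings

gens = generations()
# A, AB, ABA, ABAAB, ABAABABA, ... — the string lengths are
# 1, 2, 3, 5, 8, 13, ...: the Fibonacci sequence.
```

The same parallel-rewriting scheme, with richer alphabets and geometric interpretations, is what has been applied to branching growth in plants and dendrites.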
5) What are the biggest problems/challenges/mysteries that scientists currently face in applying math to biology?
There’s an entire new domain called “systems biology” that tries to cover—under a unified approach—molecules, cells, organs, organisms, and ecological niches. A lot still defies understanding, and computer-models are heavily involved. The field is explicitly declared to be still in its infancy.
Applications of math are difficult in the neurosciences, and the so-called neural nets often work by brute force without any deep mathematical understanding. The proof that understanding is lacking is that those scientists often candidly confess that they are surprised by what they find.
6) Do you have any favorite examples of nature’s mathematical properties? Starfish and flowers are commonly used to illustrate nature’s mathematical properties.
At the University of Maryland, Christopher Cherniak and collaborators have done something truly exceptional.
They’ve established simple equations of maximal connectivity, have used powerful computers to compare real brains (cat-brains, for instance) with millions (really, millions) of possible variants, and have shown that the real brains are the best of all solutions.
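The logic of that comparison can be illustrated with a toy version: place components on a line so that total wiring length is minimal, checking every permutation exhaustively. The component graph below is invented for illustration; it is not Cherniak’s actual data or method, only the shape of the optimization:

```python
# Toy "save wire" optimization: exhaustively search all layouts of a
# small component graph and find the one with minimal total wiring.
# The components and edges are invented for illustration.

from itertools import permutations

COMPONENTS = ["A", "B", "C", "D"]
EDGES = [("A", "B"), ("B", "C"), ("C", "D"), ("A", "C")]

def wiring_cost(order):
    """Total wire length when components sit at positions 0, 1, 2, ..."""
    pos = {c: i for i, c in enumerate(order)}
    return sum(abs(pos[u] - pos[v]) for u, v in EDGES)

best = min(permutations(COMPONENTS), key=wiring_cost)
# Exhaustive search guarantees the layout found is a global optimum,
# which is what makes the comparison with real brains meaningful.
```

Real brains, of course, involve vastly larger graphs, hence the powerful computers and the millions of variants.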
Moreover, they’ve mathematically justified something counterintuitive: why brains are in the head and not in the center of the body (near the stomach, for example). In his talks, Chris revives D’Arcy Thompson to show, with a simple twig and its ramifications, that Nature really does always find the optimal solution.
Noam Chomsky was greatly and favorably impressed by these demos at a conference that both Cherniak and I attended.
7) Can it be said that there are very few possible lifeforms? If so, what constrains the range of lifeforms that can exist? One might imagine that there’s an extraordinary number of lifeforms that can exist.
Well, see my reply to the previous question.
One of the most important discoveries in genetics—and in biology in general—was that whole batteries of genes are extraordinarily conserved, from the fruit-fly to humans. See the 1995 Nobel lectures of Christiane Nüsslein-Volhard (lecture here), Edward Lewis (lecture here), and Eric Wieschaus (lecture here). In those lectures, they explicitly declare that this high conservation of genes was not predicted and that it came as a great surprise.
Combine this with the fundamental constraints imposed by physics and chemistry—and with general optimization-principles—and you can see why there’s only a restricted possibility of lifeforms, contrary to superficial intuition.
Boston University’s Michael Sherman has proposed the daring model of the “Universal Genome” (UG). His 2007 paper says: “This model has two major predictions, first that a significant fraction of genetic information in lower taxons must be functionally useless but becomes useful in higher taxons, and second that one should be able to turn on in lower taxons some of the complex latent developmental programs, e.g., a program of eye development or antibody synthesis in sea urchin. An example of natural turning on of a complex latent program in a lower taxon is discussed.”
Curiously, Sherman and Chomsky—only a few miles apart as the crow flies—were using the acronym UG with quite different meanings, completely unbeknownst to each other at the time.
Cedric Boeckx and I established a connection between them. They instantly liked each other, and I’ve heard Noam approvingly cite the universal-genome model.
As a daring idealization, far from a crazy one, there is in essence, at a deep level, only one animal (Sherman’s UG) and only one language (Chomsky’s UG).
8) Could evolution have played out in different ways to yield a different biosphere than we currently have? How many different “pathways” were possible?
The idealized experiment would be to go back to the Precambrian and let life evolve from that point a second time.
How different would the biosphere be? Superficially, possibly quite different. But at a deeper level, the differences would be only marginal, for the reasons given in my previous answers.
9) Noam Chomsky commented that all lifeforms seem to be based on very-narrow principles that are more-or-less generative and that have some similarities to what we see in language. What are these principles? Where do they come from? What is their nature? In what way are they very-narrow, in what way are they more-or-less generative, and in what way are they similar to what we see in language?
This would require a whole treatise.
In part, I answered this very succinctly in what I just said. The core principles are strict locality, maximum efficacy of computation, and optimality of interactions. Chomsky discusses these core principles in his 2021 piece “Reflections”.
In very recent writings and lectures, Noam has stressed that strict constraints (such as the Strong Minimalist Thesis) also have an enabling power. Several syntactic derivations become possible precisely because there are strict constraints.
And in biology, inactive genes (à la Sherman) allow genes to be activated—and allow new organs to be developed—in more recent species (PAX6 is inactive in the sea-urchin, but then fully active in higher species where it allows the eye to develop).
10) Why would there be similarities between the principles that constrain which lifeforms are possible and the principles that we see in language? That seems like a striking coincidence.
Maybe it’s a coincidence.
But more likely it’s a consequence of the fact that our brain, obviously, is part of nature, and language as a biological entity is also part of nature. Fundamental laws of nature and basic principles are instantiated in lifeforms and in language.
11) What do you think about the idea that the more you learn about a topic, the more you see fundamental structures/uniformities behind the chaos? Chomsky comments that this transition (from seeing chaos to seeing fundamental structures/uniformities) happened with language and is also happening with human cognition generally.
He’s right.
When you analyze phenomena and structures at deeper levels, the chaos turns into order, always leaving major problems to be solved, but these are deeper problems.
It’s happened in all the sciences. But it’s only happened in parts of human cognition—the notable examples are vision, acoustic perception, brain-activations, and the deep study of language-acquisition. But we know nothing about creativity, consciousness, and sane people’s aberrant behaviors.
12) Is there a way in which the soft sciences need to catch up with this picture (of seeing fundamental structures/uniformities behind the chaos) that has become the consensus in biology? Chomsky commented that this is the general consensus in biology, but not in the soft sciences.
It’s difficult in the soft sciences, partly because there might be nothing deep to discover (no laws) in sociology, anthropology, and history. There are too many moveable factors and too many contingencies.
Moreover, in biology there are DNA sequences, plenty of microscopic elements that you can visualize and analyze, and a huge number of laboratory-experiments that you can do to isolate and detect the causal factors.
But experiments are hard-to-impossible to do in the soft sciences. Two exceptions are (1) experimental economics (see the 2002 Nobel Prize in Economics) and (2) the field of judgment/decision-making (see the 2017 Nobel Prize in Economics). Far from perfect—far from physics, chemistry, and biology—but basically OK.
13) Why are the soft sciences resistant to this picture that is now the consensus in biology?
See my answer to the previous question.
No one thinks that one’s intuitions and common-sense inferences must apply to the hard sciences (biology included).
But the case is different in the soft sciences (linguistics included), where it’s believed that ordinary prejudices and common-sense beliefs should guide academic inquiry. These prejudices are hard to subvert, and create strong resistance to Generative Grammar.
As Chomsky has frequently said, scientific standards are believed not to apply once we deal with the mind. It’s believed that all one can have is languages’ surface-diversity, grouped into families.
14) Why exactly was Richard Feynman so fascinated with the “principle of least action”? In a 2012 talk, you quoted Feynman’s comment on this topic: “When I was in high school, my physics teacher—whose name was Mr. Bader—called me down one day after physics class and said, ‘You look bored; I want to tell you something interesting.’ Then he told me something which I found absolutely fascinating, and have, since then, always found fascinating. Every time the subject comes up, I work on it. In fact, when I began to prepare this lecture I found myself making more analyses on the thing. Instead of worrying about the lecture, I got involved in a new problem. The subject is this—the principle of least action.”
Indeed, that’s a fundamental law of nature: the principle of least action. Largely thanks to Feynman’s contribution, it’s been transferred from classical physics to quantum physics.
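Stated compactly, the principle says that the actual trajectory is the one that makes the action stationary, from which the equations of motion follow:

```latex
% Action functional and the stationarity condition
S[q] = \int_{t_1}^{t_2} L(q, \dot{q}, t)\,dt,
\qquad \delta S = 0
\;\Longrightarrow\;
\frac{d}{dt}\,\frac{\partial L}{\partial \dot{q}} - \frac{\partial L}{\partial q} = 0
```

Feynman’s path-integral formulation is what carries this same variational principle from classical physics into quantum physics.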
Chomsky is applying it with success to language. It’s the core of the Strong Minimalist Thesis: minimal computation and minimal search. As always in the sciences, success yields new interesting problems and yields deeper problems that hadn’t been seen previously.
15) How exactly can one show that a scientific law is evident and not just accurate? In the 2012 talk, you quoted Feynman’s comment about science (emphasis his): “Now in the further development of science, we want more than just a formula. First we have an observation, then we have numbers that we measure, then we have a law which summarizes all the numbers. But the real glory of science is that we can find a way of thinking such that the law is evident.”
Right: once the law is discovered, it appears evident after the fact.
Isn’t it now “evident” that a syntactic derivation must start with the minimal operation of Merge, create a binary unordered set, and then continue this way with Internal Merge to the sentence’s completion? Chomsky’s “virtual conceptual necessity” just means that you can only introduce elements that are evident.
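Merge as described here, forming an unordered binary set from two syntactic objects, can be sketched in a few lines of Python. This is a toy illustration of set formation, not a parser or Chomsky’s formalism; the lexical items are plain strings chosen for the example:

```python
# A minimal sketch of Merge as set formation: Merge(X, Y) = {X, Y}.
# frozenset gives us an unordered, hashable object, so the output of
# one Merge can be the input to the next.

def merge(x, y):
    """Combine two syntactic objects into an unordered binary set."""
    return frozenset([x, y])

# Build {read, {the, book}} step by step:
dp = merge("the", "book")   # {the, book}
vp = merge("read", dp)      # {read, {the, book}}

# Unordered: Merge(X, Y) and Merge(Y, X) yield the same object.
assert merge("the", "book") == merge("book", "the")
```

The point of the sketch is only that nothing beyond this minimal operation, applied recursively, is needed to build hierarchical structure.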
16) How much progress have you and your colleagues made (since 2012) on the problems (of biolinguistics) raised in the 2012 talk?
Very rich progress in the development of Minimalism, in neuronal correlations, and in a new approach to language-evolution.
17) What are the biggest problems/challenges/mysteries in biolinguistics now?
With more refined methods of neuroscience (unthinkable today, just as fMRI was unthinkable 30 years ago), we’d like to understand the subtle brain-correlates of many linguistic computations.
Progress is also needed to understand language-evolution in a post-Darwinian framework that steers things away from an alleged evolution of communication, since communication is ancillary to the inner linguistic computations.
18) What are the “laws of minimization” that you discussed in the 2012 talk?
See my previous answers.
Strict locality, economy of computation, elimination of unnecessary components, and the relegation of formerly central elements, such as Agreement and Case, to the interface with externalization.
19) In how many different domains—and at how many different levels—do these laws seem to apply?
As far as we can see right now, they apply to phonetics, phonology, morphology, syntax, and semantics.
We don’t know much about pragmatics. But interestingly, universal principles also appear to be present there.
20) What exactly explains why these laws seem to show up in so many different domains and at so many different levels?
We’re dealing with a natural science, with laws of nature that are instantiated at different levels.
21) What do you think about Ian Stewart’s work? His 1998 book Life’s Other Secret and his 2011 book The Mathematics of Life both discuss how math applies to biology.
He’s a good popularizer, and good popularization is essential.
Many young people (high-school students) are initiated into the study of science after reading a good piece of popularization.
In my case, George Gamow’s One Two Three… Infinity showed me the beauty of mathematics, Laura Fermi’s Atoms in the Family the beauty of physics, and James Watson’s Molecular Biology of the Gene the beauty of modern molecular biology.
22) What are the main ideas in the 2010 book What Darwin Got Wrong that you co-wrote with Jerry Fodor?
That natural selection is marginal in biological evolution and surely (pace Darwin) not the factor in the origin of species, and that many of the reasons why behaviorism was wrong apply directly to Darwinism.
23) What’s been the outcome of the debate that the 2010 book sparked?
The debate is dead.
We’ve been negatively reviewed (except by Richard Lewontin in the New York Review of Books), declared incompetent to discuss evolution, and accused of being crypto-creationists. No one wants to confront me on this today, and Jerry Fodor is sadly no longer with us.
For a TV-broadcast in Italy about the Italian translation of our book, the organizer asked a number of professors of evolutionary biology to intervene and discuss the topic with me. No one accepted—except Enzo Pennetta, an (unfortunately) marginal critic of Darwinism.
I’ve been invited to give talks at very few universities in the US, at one in Geneva, and at one in Rome.
The issue is considered to be dead.
24) Apart from the articles linked throughout this interview, what can people read to get up to speed on your work?
See my 2019 paper “Reflections on Piaget”.
And see my 2020 paper “Minds with meanings (pace Fodor and Pylyshyn)”.