Humanity's single greatest achievement is, perhaps, to understand something about the way that the world really works. I take understanding to be a matter of knowing correct scientific explanations: explanations of why particles have mass, why perpetual motion machines are impossible, how complex life could evolve from undistinguished chemical soup. The study of explanatory knowledge - of what it is to provide a scientific explanation for some phenomenon - is the study of the deepest cognitive feat our species can manage.
Twentieth century philosophy of science started out skeptical about the very possibility of explanatory knowledge. The purpose of science,
writers such as the French physicist Pierre Duhem declared, is to organize facts, not to explain them. Since that time, explanation has been rehabilitated. Science explains, according to the new consensus, and it does so - according to a consensus only slightly less solid - by telling causal stories. To explain a phenomenon is to describe how it is caused; it is to give an etiology, a view that reunites the two meanings of Aristotle's word "aition": cause and explanation.
The causal view of explanation works most transparently for relatively simple explanations: the window broke because it was hit by the brick. Many deeper explanations, however, among them some of the most admirable achievements of science, do not yield so easily to a causal analysis. The purpose of my book Depth is to develop an account of explanation in science
that both does justice to the insight that causation and explanation go hand in hand, and also illuminates the cases where
the intimate connection between the two is difficult to discern.
I offer, then, a causal account of scientific explanation
that tries to make sense of those explanations in science that skirt, ignore, distort, or dismiss factors that appear to be causally relevant to the phenomenon to be explained.
To make sense of this apparent disdain for the causal, I propose that explanations set out not merely to tell a causal story, but to tell a causal story that includes only those causal factors that decisively make a difference to the phenomenon to be explained. A good causal explanation, to put it another way, abstracts away from causal details that play a role in determining how the phenomenon occurs but that make no difference to whether the phenomenon occurs. Explanations balance two goals, then: to give enough causal information to imply that the phenomenon occurs, but to be as abstract as possible while respecting this constraint. They tell the least detailed causal story that is nevertheless a story that necessitates, or at least highly probabilifies, the thing to be explained. I call this the kairetic account of explanation, building on the Greek word "kairos", a decisive point.
The kairetic account's two explanatory desiderata - causal detail and causal generality - correspond to two senses of explanatory depth. On the one hand, we say that an explanation is deep when it goes far down toward the physical level, the level of detail at which ultimate causal underpinnings are found. On the other hand, we also say that an explanation is deep when it has a certain striking generality - when it attributes the phenomenon to be explained not to some very particular set of initial conditions, but to some high-level, abstract, often virtually mathematical state of affairs.
Let me give some examples of the way in which the kairetic account makes sense of scientific explanations, often deep-seeming in the second sense, that show a lack of concern for causal detail.
An economist or an evolutionary biologist might explain the observed state of a system by showing that the state is a global, stable equilibrium, that is, by showing that whatever conditions the system starts out in, it will eventually find its way to the observed state. Some philosophers have argued that this cannot be understood as a causal explanation, because a causal explanation will tell the exact causal story about the system's journey from its initial state to its end state, whereas the equilibrium explanation says nothing about either the initial state or the path, showing rather that every path must lead to the observed end state.
The kairetic approach to explanation challenges the assumption that a causal explanation must give the details of the path. Since the path taken makes no difference to the end state, I argue, a causal explanation ought not to specify the path. Rather, it should give a more abstract characterization of the system's dynamics, just enough causal information, and no more, to show that a system in any initial state will be caused, eventually, to enter the final state and to remain there.
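To illustrate equilibrium explanation concretely (a toy sketch of my own, not an example from the book): a system whose dynamics guarantee that every initial state ends in the same final state, so that no particular path needs to be cited.

```python
# Toy dynamical system with a single, globally stable equilibrium:
# x_{t+1} = x_t / 2 + 1 has the unique fixed point x = 2, and every
# initial state converges to it. The path differs with the starting
# point, but the end state never does - which is why, on the kairetic
# view, the path is no part of the explanation.

def step(x):
    return x / 2 + 1  # contraction toward the fixed point at x = 2

def final_state(x0, steps=200):
    x = x0
    for _ in range(steps):
        x = step(x)
    return x

for x0 in (-50.0, 0.0, 3.7, 1000.0):
    print(round(final_state(x0), 6))  # prints 2.0 in every case
```

The abstract characterization - "the dynamics contract every state toward x = 2" - is all the causal information the explanation needs.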
Explanatory models frequently "idealize" the causal story that they tell, which is to say, they simplify the story by leaving out or distorting causally relevant details. The standard explanation of Boyle's law of gases (stating the inverse proportionality of pressure and volume), for example, leaves out long-range intermolecular forces and assumes that gas molecules are infinitely small and so never collide. Models in evolutionary biology often assume that populations are infinitely large. Models in economics assume that economic actors are perfectly rational. None of these assumptions is true; they constitute deliberate falsifications of the causal story. Explanation, it seems, cannot simply be the elucidation of the causal story, or else what are these imaginative fictions doing in science's explanatory models?
The kairetic account's explanation of idealization has two parts. First, it is noted that the distorted elements are not difference-makers: they may influence the phenomenon to be explained, but their effect is not so great that they make a difference to whether it occurs or not. Collisions in a gas, for example, make no difference to whether or not it conforms to Boyle's law; likewise, if more controversially, human irrationality makes no difference to the occurrence of certain economic phenomena, because humans behave approximately rationally in certain very circumscribed circumstances. An idealizing model, then, does not distort explanatorily relevant factors. Why idealize at all, though? In part, perhaps, to keep things simple, but Depth proposes an additional reason: idealization can help to communicate the fact that (contrary to expectations) certain factors are not relevant. By idealizing away collisions in a gas, for example - by denying, in the face of incontrovertible fact, that molecules repel one another - a model emphasizes that collisions are unimportant in the explanation of Boyle's law.
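Boyle's law itself, mentioned parenthetically above, can be put in one line: at fixed temperature, pressure times volume is constant. A minimal numeric sketch (mine, with an arbitrary constant standing in for the amount of gas and the temperature):

```python
# Boyle's law: P * V = k at fixed temperature, so pressure is
# inversely proportional to volume. The constant k here is
# hypothetical, chosen only for illustration.

def pressure(volume, k=100.0):
    return k / volume

p_wide = pressure(volume=2.0)    # 50.0
p_narrow = pressure(volume=1.0)  # 100.0 - halving the volume doubles the pressure
print(p_wide, p_narrow)
```

The idealized model stops at this relation; collisions and long-range forces, being non-difference-makers for the law, never enter.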
Deterministic Production; Statistical Explanation
Sometimes in science, a phenomenon is explained using probabilities, by showing that it is very likely to occur, despite the fact that there is nothing probabilistic about the process that produced it. In evolutionary biology, for example, the selection of a certain trait may be explained by showing that organisms with the trait are more likely to survive to reproductive age than organisms without the trait; arguably, however, their survival or failure to survive is not determined by probabilities but by low-level deterministic processes. They fail to survive, for example, because they are in the wrong place at the wrong time, rather than because they lose some quantum-mechanical game of chance.
The kairetic account makes sense of this explanatory practice by showing that a statistical representation of the relevant causal facts - that is, the causal facts that bestow a biological advantage on the organisms possessing the trait - achieves the optimal balance of the two desiderata of detail and generality: omitting certain tranches of causal detail makes little difference to an evolutionary model's ability to predict the trait's selection, so the kairetic account recommends deleting this detail. But deleting the detail transforms a causal story that is deterministic into one that has the hallmarks of a statistical explanation.
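A toy sketch of my own may make the point vivid: a rule that is deterministic all the way down, yet whose coarse-grained outcome frequencies are all that survive once the fine detail is deleted.

```python
# Deterministic micro-dynamics, statistical-looking macro-pattern
# (an illustration of mine, not a model from the book). Whether an
# "organism" survives is fixed deterministically by fine detail of its
# initial condition; organisms with the advantageous trait have a
# larger "safe" region. Deleting the fine detail leaves only survival
# frequencies, which read like probabilities.

def survives(seed, safe_fraction):
    x = (seed * 0.754321) % 1.0  # deterministic initial condition
    for _ in range(10):
        x = 4.0 * x * (1.0 - x)  # logistic map: deterministic chaos
    return x < safe_fraction     # survival fixed by fine detail

def survival_rate(safe_fraction, n=100_000):
    return sum(survives(i + 0.5, safe_fraction) for i in range(n)) / n

with_trait = survival_rate(0.6)     # larger safe region
without_trait = survival_rate(0.4)  # smaller safe region
print(with_trait > without_trait)   # prints True: a selective advantage,
                                    # though chance appears nowhere in the dynamics
```

The individual trajectories are causal detail that makes no difference to the pattern of selection; only the frequencies remain in the explanatory model.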
Some aspects of scientific explanation accounted for in Depth do not depend so much on the explanatory importance of abstraction as on other features of the kairetic account, which can only be mentioned briefly here:
1. The postulation of a new explanatory relation to stand alongside causation, called entanglement.
2. An account of the role, in scientific explanation, of black-boxing - the practice of treating parts of a system under investigation as inscrutable "black boxes" that deliver certain outputs given certain inputs, but concerning which nothing more can be said.
3. An account, based on the kairetic theory of explanation, of various features of our causal attributions, that is, of our claims about what causes what. These features include the causal role of absences and other negative facts; the apparent importance of facts about what is "normal", statistically, morally, or otherwise, in determining what causal claims are appropriate; and the apparent failure of transitivity in such claims - the fact that in some cases, it seems that a cause of a cause of an event is not a cause of that event.