The Impact of Mathematics on Organismal Biology

Organismal (sometimes called organismic) biology deals with all aspects of the biology of individual animals and plants, including physiology, morphology, development, and behavior. As such, it interfaces with cellular and molecular biology at one extreme and with ecology at the other. At the former interface, one attempts to develop integrative theories of organismal function; at the latter, one tries to place individual behavior and function within an environmental context. Mathematical theorists have made signal contributions to organismal biology. Examples range from technological advances to theories of biological structure and function, and rely on a wide range of mathematical techniques. We begin this section with a review of some of the outstanding examples.

**3.1 Accomplishments of the Past**

Image reconstruction is of importance across a range of levels of organization in biology. At the molecular scale, work by the applied mathematicians Karle and Hauptman in constructing algorithms to reveal structure from x-ray data was rewarded with a Nobel Prize in 1985. As discussed elsewhere, on the organismic level a Nobel Prize was awarded to Cormack and Hounsfield for algorithms that permit structure to be determined from tomography. PET and NMR are other areas where mathematical analysis is essential. Past achievements are impressive, but they must be supplemented by significant further advances before the difficult but vital problem of image reconstruction can be considered even minimally solved.

One of the most exciting areas of applications of mathematics has been to cardiac function. A major cause of death from malfunction of the heart is the phenomenon called ventricular fibrillation, wherein properly coordinated heart action is replaced by purposeless local oscillations of the ventricles. Mathematical modeling has revealed why this phenomenon occurs. Major experimental efforts have been suggested by the modeling. The leading figure in this line of theoretical research, Arthur Winfree, received the 1989 Einthoven Prize for his contributions to the subject. (This prize is awarded every five years to a cardiologist, usually a surgeon.)

In related work, powerful numerical algorithms and state-of-the-art computing have been applied by Peskin and others to study blood flow in the heart. Even with the use of two-dimensional models, progress has been sufficient to enable significant input into the design of heart valves, with resulting patents and licensing agreements (see McQueen and Peskin 1983, 1986). Three-dimensional models also are under development (Peskin and McQueen 1989, Part I; McQueen and Peskin 1989, Part II).

Another major contribution of mathematics to physiology is the theory of cross-bridge dynamics in striated muscle. Introduced by A.F. Huxley (1957) and further developed by T.L. Hill, Podolsky, Lacker, and others, this theory not only has provided a satisfying explanation of the mechanical behavior of muscle but also has supplied organizing principles for biochemical research on the fundamental energetic and control mechanisms of muscle contraction.

Mathematical methods for the quantitative description of morphogenesis of organs composed of nonmigrating cells (including plants, animal bone and skin, and shells) were suggested by Richards and Kavanagh (1943) and by Erickson and Sax (1956). These methods, which involve evaluation of velocity gradients from empirical data, have provided the phenomenological basis for understanding the physiology of growth (for reviews see Erickson 1976, Silk 1984, 1989).

As will be argued in detail below, theory is essential in understanding hierarchical systems phenomena in biology. A famous contribution in this area is the theoretical model made by Hodgkin and Huxley (1952) of the electrical signals in the squid axon. This Nobel prize-winning work incorporated the findings of a series of brilliant experiments concerning the ion permeability of the axonal membrane into a set of mathematical equations that predicted the shape and speed of the "action potential" wave that moves down the axon. Patch clamp recordings now permit investigators to relate the Hodgkin-Huxley membrane models to the opening and closing of the molecular channels that span the membrane and are responsible for its ionic conductance. Hodgkin and Huxley's inferences from macroscopic current measurements have been confirmed in basic form, but greatly expanded with respect to their descriptions of channel configurations and transition mechanisms. In recent years the work of Hodgkin and Huxley has found unexpected application in non-neural systems in which electrophysiology plays a surprising regulatory role. One example is the control of insulin secretion by the electrically active beta cells of the pancreas.
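The Hodgkin-Huxley equations are compact enough to sketch directly. The rate functions and conductances below are the standard squid-axon fits (voltages in mV, with rest near -65 mV); the forward-Euler scheme, time step, and 10 uA/cm^2 stimulus are illustrative choices of convenience, not part of the original work.

```python
import math

# Standard Hodgkin-Huxley squid-axon parameters (mV, ms, mS/cm^2, uF/cm^2).
C_M = 1.0
G_NA, G_K, G_L = 120.0, 36.0, 0.3
E_NA, E_K, E_L = 50.0, -77.0, -54.387

def alpha_m(v): return 0.1 * (v + 40.0) / (1.0 - math.exp(-(v + 40.0) / 10.0))
def beta_m(v):  return 4.0 * math.exp(-(v + 65.0) / 18.0)
def alpha_h(v): return 0.07 * math.exp(-(v + 65.0) / 20.0)
def beta_h(v):  return 1.0 / (1.0 + math.exp(-(v + 35.0) / 10.0))
def alpha_n(v): return 0.01 * (v + 55.0) / (1.0 - math.exp(-(v + 55.0) / 10.0))
def beta_n(v):  return 0.125 * math.exp(-(v + 65.0) / 80.0)

dt, t_end, i_stim = 0.01, 50.0, 10.0   # ms, ms, uA/cm^2 (illustrative stimulus)
v = -65.0
m = alpha_m(v) / (alpha_m(v) + beta_m(v))   # gating variables start at rest
h = alpha_h(v) / (alpha_h(v) + beta_h(v))
n = alpha_n(v) / (alpha_n(v) + beta_n(v))

v_trace = []
for step in range(int(t_end / dt)):
    i_ion = (G_NA * m**3 * h * (v - E_NA)
             + G_K * n**4 * (v - E_K)
             + G_L * (v - E_L))
    v += dt * (i_stim - i_ion) / C_M        # forward Euler on the membrane equation
    m += dt * (alpha_m(v) * (1 - m) - beta_m(v) * m)
    h += dt * (alpha_h(v) * (1 - h) - beta_h(v) * h)
    n += dt * (alpha_n(v) * (1 - n) - beta_n(v) * n)
    v_trace.append(v)

peak_mv = max(v_trace)   # an action potential overshoots 0 mV
```

With this sustained stimulus the model fires repetitively; the voltage trace reproduces the stereotyped overshooting spike that the original equations predicted.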

In developmental biology, it was hypothesized decades ago that gradients of
key chemicals were responsible for triggering macroscopic events. In recent
years, especially since the landmark paper of Turing (1952), the gradient idea
has been greatly elaborated by theorists. In parallel, experimentalists
devoted considerable efforts to find the "morphogen" chemicals whose gradients
were postulated to have such importance, efforts that recently have been
successful in *Drosophila*, hydra, and limb morphogenesis.
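The core of Turing's idea can be made concrete with a small linear-stability computation: kinetics that are stable without diffusion become unstable to a band of spatial wavenumbers when the inhibitor diffuses much faster than the activator. The Jacobian entries and diffusivities below are illustrative values chosen only to satisfy the Turing conditions, not numbers fitted to any real morphogen system.

```python
import math

# Linearized two-species reaction-diffusion system: u_t = J u + D u_xx.
# Illustrative activator-inhibitor Jacobian (stable kinetics: tr < 0, det > 0).
A11, A12 = 1.0, -1.0
A21, A22 = 2.0, -1.5
D_U, D_V = 1.0, 20.0   # inhibitor diffuses much faster than activator

def growth_rate(k):
    """Largest real part of the eigenvalues of J - k^2 D (dispersion relation)."""
    s = k * k
    tr = (A11 - D_U * s) + (A22 - D_V * s)
    det = (A11 - D_U * s) * (A22 - D_V * s) - A12 * A21
    disc = tr * tr - 4.0 * det
    if disc >= 0.0:
        return (tr + math.sqrt(disc)) / 2.0
    return tr / 2.0                      # complex pair: real part is tr/2

rate_at_zero = growth_rate(0.0)          # uniform perturbations decay
rates = [(growth_rate(i * 0.01), i * 0.01) for i in range(1, 301)]
max_rate, k_star = max(rates)            # fastest-growing wavenumber
```

A positive maximum of the dispersion relation at a nonzero wavenumber, together with decay at wavenumber zero, is the signature of a diffusion-driven instability: the system selects a finite pattern wavelength.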

A wide variety of exciting venues exist for the application of
mathematical and computational approaches to organismal biology. Among these,
two stand out as having exceptional promise and importance: the study of
**complex hierarchical biological systems** and of **dynamic aspects of
structure-function relations**.

**3.2.1 Complex Hierarchical Biological Systems**

The analysis of complex hierarchical systems is one of the most important open areas in modern biology. This holds true at all levels of organization, and is a theme to which we return in the discussion of ecological and evolutionary processes. The essence of the matter is this: on several levels, the components of biological systems are being revealed by modern experimental biology. The techniques of molecular biology are most important here; other experimental advances are also of major utility. The central theoretical question is how the molecular details are integrated into a functional unity, a question central to at least three major fields: neurobiology, developmental biology, and immunology. We now consider each of these areas in greater depth.

**Neuroscience**. Mathematical modeling has made an enormous impact on
neuroscience. The Hodgkin-Huxley format for describing membrane ionic currents
has been extended and applied to a variety of neuronal excitable membranes.
The significance of dendrites for the input-output properties of neurons was
not understood before the development of Rall's cable theory (Rall 1962, 1964).
Hartline and Ratliff (1972) were pioneers in developing quantitative and
predictive network models. In addition, FitzHugh's work (1960, 1969)
demonstrated the value of simplified nonlinear models and of qualitative
mathematical analysis. The success of these theoretical contributions, and the
high degree of quantification in neurobiology, ensures continued opportunities
for mathematical work.
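FitzHugh's simplified model can be reproduced in a few lines. The cubic-plus-linear-recovery equations and parameter values below are the standard FitzHugh-Nagumo textbook choices; the constant stimulus is selected to place the model in its oscillatory regime.

```python
# FitzHugh-Nagumo: a two-variable caricature of the Hodgkin-Huxley equations.
a, b, eps = 0.7, 0.8, 0.08     # classic parameter choices
i_ext = 0.5                    # constant stimulus in the oscillatory regime
dt, t_end = 0.01, 300.0

v, w = -1.0, 1.0               # arbitrary initial condition
v_trace = []
for step in range(int(t_end / dt)):
    dv = v - v**3 / 3.0 - w + i_ext   # fast "voltage" variable (cubic nullcline)
    dw = eps * (v + a - b * w)        # slow recovery variable
    v += dt * dv
    w += dt * dw
    v_trace.append(v)

late = v_trace[len(v_trace) // 2:]    # discard the initial transient
swing = max(late) - min(late)         # large swing = sustained relaxation oscillation
```

The two-variable phase plane makes the qualitative analysis that FitzHugh championed possible by hand: excitability, threshold, and the limit cycle all become geometric statements about the nullclines.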

Recent technical advances in experimentation, e.g., patch clamp recording, voltage- and ion-specific dyes, and confocal microscopy, are providing data to facilitate further theoretical development for addressing fundamental issues that range from the sub-cellular to cell-ensemble to whole-system levels. For thorough understanding, we must synthesize information and mechanisms across these different levels. This is perhaps the fundamental challenge facing mathematical and theoretical biology, from molecule to ecosystem. How do we relate phenomena at different levels of organization? How are small-scale processes to be integrated, and related to higher level phenomena? For example, in modeling neuronal networks, what are the crucial properties of individual cells that must be retained, in order to address a particular set of questions? Most network formulations use highly idealized "neural units," which ignore much of what is known about cellular biophysics. We need to develop systematic procedures to derive, in a biophysically meaningful way, descriptions for ensemble behavior.

Correspondingly, we seek to identify low-level mechanisms from data at higher levels. The Hodgkin-Huxley theory hypothesized that macroscopic currents might be generated by molecular "pores"; only much later were these individual channels discovered. Another common modeling need is a set of methods for dealing reasonably with the wide range of time and space scales involved in different intracellular domains and processes, and in short- and long-distance interactions between cells and among different cell assemblies.

At the lowest level, improved biophysical understanding is needed of the mechanisms for ion transport through membrane channels. How does the voltage dependence of opening and closing rates arise? What accounts for the ion selectivity by which, for example, channels discriminate among ions of the same charge and similar properties? Theories at this level are beginning to involve stochastic descriptions of fluxes (Fokker-Planck equations) and simulation methods for molecular structure and dynamics. Kinetic modeling of single-channel data is hotly debated with regard to whether a finite or an infinite number of open/closed/inactivated states is appropriate.
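The simplest stochastic description of a channel, a sketch rather than a realistic scheme, is a two-state closed/open Markov process; real channels generally require more states, which is precisely the debate noted above. All rates and counts below are arbitrary illustrative values.

```python
import random

# Two-state channel: C --alpha--> O, O --beta--> C (rates per ms, illustrative).
ALPHA, BETA = 2.0, 3.0
N_CHANNELS, DT, N_STEPS = 200, 0.01, 10000   # 0.01 ms step, 100 ms of simulation

rng = random.Random(0)                       # fixed seed for reproducibility
is_open = [False] * N_CHANNELS
open_count_sum = 0
for step in range(N_STEPS):
    for i in range(N_CHANNELS):
        if is_open[i]:
            if rng.random() < BETA * DT:     # O -> C with probability beta*dt
                is_open[i] = False
        elif rng.random() < ALPHA * DT:      # C -> O with probability alpha*dt
            is_open[i] = True
    if step >= N_STEPS // 4:                 # discard the equilibration transient
        open_count_sum += sum(is_open)

n_samples = N_CHANNELS * (N_STEPS - N_STEPS // 4)
empirical_open_fraction = open_count_sum / n_samples
theoretical_open_fraction = ALPHA / (ALPHA + BETA)   # detailed-balance prediction
```

Even this toy scheme shows how single-channel stochasticity averages into the deterministic open fraction that macroscopic current models assume.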

The discovery of new channel types continues at a rapid pace (Llinas 1988). Of basic interest is how the mix of different channel types, and their nonuniform distributions over the cell surface (soma, dendrites and axon), determine the integrative properties of neurons. Some cells fire only when stimulated, others are autonomous rhythmic pacemakers, and some fire in repetitive bursting modes. Theoretical modeling plays an important role here since channel densities cannot yet be measured directly, especially in dendritic branches. Computational models that incorporate detailed dendritic architecture, in some cases known from morphological staining, are suggesting that individual regions of dendrites can perform local processing (Fleshman et al. 1988; Holmes and Levy 1990). Differential dendritic processing has been implicated in motion detection in the visual system (Koch et al. 1986).

One of the most active pursuits in neuroscience research is to discover the mechanisms for plasticity and learning at the cellular/molecular level. The above techniques, together with state-of-the-art biochemical methodologies, are beginning to yield the information for feasible detailed biophysical modeling. Dendritic spines, NMDA receptor-channels, spatio-temporal dynamics of calcium and other intracellular second messengers are focal points for these explorations. Such studies are bringing together theoreticians, neuroscientists, and biochemists.

Although theorizing about mechanisms for synaptic plasticity is proceeding, disagreement remains about the basic mechanism of chemical synaptic transmission. Two competing hypotheses (one involving calcium alone, and the other including voltage effects as well) are being explored with fervor, and mathematical modeling is a key ingredient in arguments for each case. Many additional experiments have been suggested from these debates (see Zucker and Haydon 1988 and Parnas et al. 1991).

Models of neural interactions lead to many interesting mathematical questions for which appropriate tools must be developed. Typically, networks are modeled by (possibly stochastic) systems of differential equations. In some simplified limits, these become nonlinear integro-differential equations. The question now becomes one of proving or otherwise demonstrating that the simplified models have the desired behavior. Furthermore, one must characterize this behavior as parameters in the model vary (i.e., understand the bifurcations in the dynamics). Another important point that mathematicians must address is the extraction of the underlying geometric and analytic ideas from detailed biophysical models and simulations.
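As a minimal instance of the bifurcation questions raised above, even a single self-exciting firing-rate unit changes its number of steady states as the recurrent coupling strength varies. The sigmoid gain, threshold, and coupling values in this sketch are toy choices, not biophysical parameters.

```python
import math

def f(x):
    """Sigmoid firing-rate function."""
    return 1.0 / (1.0 + math.exp(-x))

def count_fixed_points(w, theta=0.45, n_grid=2000):
    """Count solutions of r = f(w*(r - theta)), i.e. steady states of the
    rate equation dr/dt = -r + f(w*(r - theta)), by sign changes on a grid."""
    g = [f(w * (k / n_grid - theta)) - k / n_grid for k in range(n_grid + 1)]
    return sum(1 for a, b in zip(g, g[1:]) if a * b < 0)

n_weak = count_fixed_points(3.0)     # weak recurrence: one steady state
n_strong = count_fixed_points(10.0)  # strong recurrence: three (bistability)
```

The transition from one to three fixed points as the coupling passes a critical value is a saddle-node bifurcation, the simplest example of the parameter-dependence that must be characterized in full network models.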

The next level of neuronal complexity beyond the single cell is the small network, with on the order of tens to hundreds of neurons. Such networks have been most extensively studied in invertebrates and the sensory or motor systems of vertebrates, in which the function of small groups of neurons can be related to specific behaviors of the animal (Selverston and Moulins 1985, Lockery et al. 1989, Kandel 1984). These so-called simple systems also are attractive because one can expect to characterize their cellular and intercellular properties more completely than in vertebrates. Much research on their structural features has been based on the explicit assumption that once network structure was understood, functional understanding would follow. Recently, however, many workers have come to realize that, even with a great deal of structural information, the understanding of functional mechanisms will require the development of sound, structurally based theoretical models.

A principal challenge for modelling at this level is the development of more biologically realistic computational models and mathematical analyses that can provide insight into how these networks function. While these networks involve relatively small numbers of neurons, their complexity will require increasingly powerful mathematical tools. At the same time, modelling at this level is likely to be especially valuable for neurobiology. In few other neural systems is the link between neural structure and behavior more direct. Thus, it is already possible to see in the structure of the nervous system its functional correlates. Also, few other systems currently provide the anatomical and physiological parameters essential for realistic modelling. As models for understanding the general dynamical properties of such neural networks or for understanding the way in which feedback modifies neuronal behavior, small neural systems represent a gold mine for computational and mathematical neurobiology.

Coherent brain areas dedicated to particular functions, for example primary sensory cortical areas, provide complex challenges for computational and mathematical models (Sereno et al. 1988). Such areas typically contain multiple types of cells, receive inputs from multiple distinct sources, and often are heavily interconnected, with links that form inter-area recurrent or reentrant loops. Large bodies of anatomical and physiological data are available, but the integrative capabilities are poorly understood, and modelling techniques will almost surely be needed to unravel them.

Developmental neurobiology is a source of biologically important and mathematically interesting questions. Modelling at the large network level has played an important role in this field, with many collaborations between mathematicians and experimental biologists. Among the important questions arising in this field are how the topography of connections from one part of the brain to another is established and how these maps might form spontaneously. Many examples of such maps exist in the central nervous system; the best characterized are in the vertebrate visual system. The earliest theoretical models and experiments concerned the wiring from the retina to the optic tectum. Many models have been proposed and analyzed (Whitelaw and Cowan 1981, von der Malsburg 1973; see Linsker 1990 for a review), but as new experimental results have become available, many of the models have had to be altered or eliminated. Recent investigations have led to the formulation of minimal hypotheses for the explanation of the large body of experimental manipulations (Fraser 1985). These mechanisms are ripe for mathematical formulation and analysis.

Several new technologies, such as voltage-sensitive dyes and deoxyglucose injection, have led to the discovery of beautiful regular maps in the visual cortex of mammals. The patterns include stripes of ocularity and twists and singularities of orientation preference. Models have been proposed for these patterns (Miller et al. 1989, Durbin and Mitchison 1990) involving mechanisms ranging from band-pass-filtered noise, to competitive interactions, to Hebbian rules with lateral inhibition. What remains is to identify the common idea underlying these models and to determine how these mechanisms might be realized in the nervous system.

As we begin to understand the mechanisms of synaptic plasticity, it is natural to ask about the consequences of this for the behavior of large networks involving plastic elements. Only in this way will we understand the relation between synaptic plasticity and learning at the organismic level. This has been a major focus in the study of computational properties of large scale neural networks across a number of disciplines including physics, biology, psychology and mathematics (Hopfield 1984, Rumelhart et al. 1986). Mathematical analysis promises to provide an important bridge between computational and behavioral studies and the empirical results of neurobiology (Poggio and Girosi 1990). An excellent survey is Koch and Segev (1989).

Models at the level of the complete organism provide an opportunity to make real progress on the long-sought unification of the behavioral sciences with neurobiology. Models intended to explain behavioral observations (e.g., from psychology and ethology) can be cast in terms of underlying neural mechanisms, rather than at the phenomenological or control-theory level as in the past. Such models can bring about a new understanding of such phenomena as visual illusions (e.g., Treisman et al. 1990), the relation between long- and short-term memory, and category formation. They will provide significant constraints on psychological explanations that have not in the past been easy to correlate with the nervous system. To carry out this analysis, one must eventually couple models of the nervous system with those of the environment in which the whole system exists (Kersten 1990).

**Immunology**. The immune system contains 10^{12} cells
expressing at least 10^{7} distinct specificities. These cells move within the
body and communicate both by cell-cell contact and via tens, maybe hundreds, of
regulatory molecules. The system is capable of pattern recognition, learning
and memory expression, and thus has many features in common with the nervous
system.

Theoretical ideas have played a major role in the development of the field. Controversies such as instructive vs. selective theories of antibody formation, germ-line vs. somatic mutation models for the generation of antibody diversity, and regulatory circuits vs. idiotypic networks have dominated the intellectual development of the field and determined the direction of much experimental effort. Mathematical theories have not been nearly as important, but this appears to be changing as the field addresses more quantitative issues: the role of somatic mutation in the generation of antibody diversity; the role of receptor clustering in cell stimulation and desensitization; and the effects of cytokine concentrations, receptor affinities, and receptor numbers on cell stimulation, proliferation, differentiation, and the engagement of effector functions.

Modeling the immune system requires the same type of hierarchical approach as does neurobiological modeling. At the lowest level, one must develop quantitative models of the action of single lymphocytes as they interact with antigens and cytokines. A large amount of effort involving the study of infinite systems of ordinary differential equations and branching processes has gone into the mathematical modeling of receptor cross-linking by multivalent ligands (cf. Perelson 1984, Macken and Perelson 1985). Cell response in terms of proliferation or differentiation has been examined from an optimal control perspective (Perelson et al. 1976, 1978). The effects of the T cell growth factor IL-2 have also been incorporated into cellular models (Kevrekidis et al. 1988). At the next higher levels, small idiotypic networks containing two complementary cell populations have been modeled, as well as networks containing hundreds to thousands of B cell clones (Segel and Perelson 1989, Perelson 1989, Weisbuch et al. 1990). In the immune system, not only is the number of components large, but, in contrast to the nervous system, the components turn over rapidly. The average life span of a B cell is of order four days; that of serum antibody, one to two weeks. Thus, on a rather rapid time scale, many immune system components may be replaced, although the system as a whole remains intact.
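A recurring ingredient in such cellular and network models is a dose-response function in which intermediate "field" (ligand) concentrations stimulate a cell while very low or very high concentrations do not. The log-bell-shaped form below follows the style of the Segel-Perelson models; the two threshold values are illustrative.

```python
import math

THETA1, THETA2 = 1.0, 100.0   # illustrative low and high activation thresholds

def activation(h):
    """Log-bell-shaped B cell activation as a function of the field h:
    a rising saturation term times a falling suppression term."""
    return (h / (THETA1 + h)) * (THETA2 / (THETA2 + h))

# Scan field values over several orders of magnitude.
fields = [10 ** (i / 100.0) for i in range(-300, 501)]   # 1e-3 .. 1e5
best_h = max(fields, key=activation)
# Calculus gives the maximum exactly at the geometric mean sqrt(THETA1*THETA2) = 10.
```

This nonmonotone response is what makes idiotypic network dynamics rich: a clone can be stimulated or suppressed by the very same partner, depending only on concentration.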

New ideas and mathematical representations are required to handle systems with
large numbers of constantly changing components. Some promising approaches
involve the formulation of models in terms of a potentially infinite
dimensional "shape space," wherein emphasis is placed on determining
interactions among molecules based on their shapes. In computer models binary
strings have been used to represent molecular shape, with the obvious advantage
of fast algorithms to determine complementarity and the ability to represent 4
x 10^{9} different molecular shapes with 32 bits (Farmer et al. 1986).
To handle the perpetual novelty that the elimination of old components and the
generation of new components introduces into the immune system, models can be
formulated using "metadynamical" rules, wherein an algorithm is used to update
the dynamical equations of the model depending upon the components present in
the system at the time of update (Bagley et al. 1989). One needs to understand
in a mathematical sense the dynamics of a system in which the variables of the
model are in constant flux. What does it mean to have an attractor if the
variables describing the attractor are eliminated from the system before a
trajectory approaches the attractor? Formulation of models appropriate to
unravel the observed complexity in the immune system is the first major step.
Next, a massive effort is required to unravel the behavioral modes of these
complex models and compare them with experiment. Here theoretical immunology
merges into the mainstream of theoretical biology.
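The binary-string idea is easy to sketch. In the spirit of Farmer et al. (1986), a molecular shape here is a 32-bit integer and affinity is simply the number of complementary (differing) bits, computed with a single XOR; the matching rule in the original work is more elaborate.

```python
import random

BITS = 32
MASK = (1 << BITS) - 1          # 2^32 - 1: about 4 x 10^9 distinct shapes

def affinity(a, b):
    """Number of complementary bit positions between shapes a and b."""
    return bin(a ^ b).count("1")

rng = random.Random(0)
antibody = rng.getrandbits(BITS)
perfect_antigen = antibody ^ MASK            # exact complement of the antibody
random_antigen = rng.getrandbits(BITS)

perfect_score = affinity(antibody, perfect_antigen)   # all 32 bits complementary
typical_score = affinity(antibody, random_antigen)    # about 16 on average
```

The computational advantage is clear: a single machine-level XOR and a population count replace any geometric evaluation of molecular fit, so very large model repertoires can be screened quickly.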

There are other areas in which we see future growth of theoretical ideas in immunology. For example, vaccine design depends on the ability to predict T cell epitopes. DeLisi and Berzofsky (1985) suggested that T cell epitopes tend to be amphipathic structures. Alternative algorithms have been suggested (e.g., Rothbard and Taylor 1988), and databases have been used to identify sequence patterns characteristic of T cell epitopes (Claverie et al. 1988). This area is clearly one in which we will see future growth and which will rely heavily on theoretical and computational analyses.

Understanding the dynamics of HIV infection (AIDS) and its effects on the
immune system is another important area for future research. Quantitative
questions include: How can the CD4^{+} T cell population be depleted
if only one in a hundred cells is infected? Why is there such a long
incubation period from time of infection to the clinical symptoms of AIDS? Why
is this incubation period different in children than in adults? In a
seropositive patient, what does the level of serum antibody predict about the
course of the disease? Can one define quantitative measures of an individual's
chance of infecting a sex partner based on antibody or antigen levels measured
in the blood? Models also will help in determining the pathogenesis of the
disease and in isolating primary effects of HIV from the secondary effects of
immune dysfunction. Mathematics also can play a role in the development of
optimal treatment schedules and in the design of clinical trials of multiple
drug therapies for AIDS. Development of epidemiological models is currently an
active area of mathematical endeavor and one that will continue at a high level
as we attempt to track the course of this epidemic and develop vaccine
strategies aimed at its eventual eradication.
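Within-host questions of this kind are commonly attacked with small compartment models of target cells, infected cells, and free virus. The three-equation sketch below has the standard structure of such models, but every parameter value is illustrative rather than fitted to clinical data.

```python
# Target cells T, productively infected cells I, free virus V (rates per day).
S_RATE, D_T = 10.0, 0.01      # supply and death of target cells (illustrative)
BETA = 1e-4                   # infection rate constant (illustrative)
DELTA = 1.0                   # death rate of infected cells
P_VIR, C_VIR = 100.0, 3.0     # virion production and clearance rates

dt, t_end = 0.01, 500.0
T, I, V = 1000.0, 0.0, 1e-3   # uninfected steady state plus a tiny inoculum
for step in range(int(t_end / dt)):
    dT = S_RATE - D_T * T - BETA * T * V
    dI = BETA * T * V - DELTA * I
    dV = P_VIR * I - C_VIR * V
    T += dt * dT
    I += dt * dI
    V += dt * dV

# With these values the infection takes hold: T falls well below its
# uninfected level of 1000 and the virus settles toward a positive set point.
```

Even this caricature already frames one of the quantitative puzzles above: the target-cell population can be substantially depleted at steady state even though only a small fraction of cells is infected at any instant.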

**Genomic regulatory networks**. A fundamental activity over the next two
decades will involve analysis of the integrated structure and behavior of the
complex genetic regulatory systems underlying development in higher organisms,
a massive task since the human genome encodes perhaps 100,000 genes. Its
accomplishment will require uniting work in molecular and developmental
genetics with new mathematical and computational tools.

In more detail, recent progress in molecular genetics in eukaryotes now is revealing the detailed composition of structural genes as well as cis-acting regulatory loci, such as promoters, homeoboxes, and tissue- and stage-specific enhancer sequences, together with trans-acting components. These genetic elements, together with their RNA and protein products, comprise the genomic regulatory network that coordinates patterns of gene expression in cell types, cell differentiation, and ontogeny from the zygote. Understanding the structure, logic, integrated dynamical behavior, and evolution of such networks is central to molecular, developmental, and evolutionary biology.

The Human Genome Initiative will provide massive sequence data from which we can eventually identify the diverse locations in the genome of each regulatory sequence, as well as the locations of many or most structural genes. These data are fundamental to understanding the "wiring diagram" of the genomic regulatory networks in eukaryotes. Analysis will require development of appropriate computer data bases and development of new theory and algorithms in the mathematical theory of directed graphs. Understanding the evolution of such genomic networks under the influence of point and chromosomal mutations that literally scramble the genomic wiring diagram will require new uses of random directed graph theory, stochastic processes, and population genetic models.
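A first step in analyzing such a wiring diagram is purely graph-theoretic. The sketch below generates a random directed regulatory graph and extracts its strongly connected components, the subcircuits containing feedback loops; the number and sizes of such components are the kind of statistics random directed graph theory addresses. The gene and edge counts are arbitrary.

```python
import random
from collections import defaultdict

N_GENES, N_EDGES = 20, 30
rng = random.Random(7)

# A random "wiring diagram": directed regulatory links between genes.
edges = set()
while len(edges) < N_EDGES:
    a, b = rng.randrange(N_GENES), rng.randrange(N_GENES)
    if a != b:
        edges.add((a, b))

fwd, rev = defaultdict(list), defaultdict(list)
for a, b in edges:
    fwd[a].append(b)
    rev[b].append(a)

def strongly_connected_components():
    """Kosaraju's algorithm: DFS finish order on fwd, then DFS on rev."""
    visited, order = set(), []
    def dfs1(u):
        visited.add(u)
        for v in fwd[u]:
            if v not in visited:
                dfs1(v)
        order.append(u)
    for u in range(N_GENES):
        if u not in visited:
            dfs1(u)
    comp = {}
    def dfs2(u, label):
        comp[u] = label
        for v in rev[u]:
            if v not in comp:
                dfs2(v, label)
    label = 0
    for u in reversed(order):
        if u not in comp:
            dfs2(u, label)
            label += 1
    return comp

comp = strongly_connected_components()
n_components = len(set(comp.values()))
largest = max(sum(1 for c in comp.values() if c == lab)
              for lab in set(comp.values()))
```

Collapsing each strongly connected component to a single node yields an acyclic "condensation" of the network, separating the feedback-rich cores from the purely hierarchical cascades.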

In addition to understanding the structure and evolution of genomic regulatory networks, we must understand the coordinated behavior of such systems that integrate the behavior of 100,000 molecular variables. It is here, in the effort to relate the information that we can obtain about small parts of the genomic system to the overall behavior of the integrated system, that a new marriage of mathematics and biology must be found. We have no hope of understanding the integrated behavior of such complex systems, linking the "microlevel" of structure and logic with the macrolevel of behavior, without mathematical theories. While no approach is yet clearly adequate, new avenues are available.

A first approach is via ensembles. Statistical mechanics is the paradigmatic example of a theory that links microscopic and macroscopic levels. There it is possible to explain macroscopic behaviors without knowing all the details of the microscopic dynamics. Similarly, it may be possible to build up statistical understanding of the integrated behavior of extremely complex genomic regulatory systems without knowing all the details of microscopic structure.

Molecular genetic techniques reveal small-scale features of genomic systems, such as the sequences that regulate a gene, and biases in the "rules" governing the activity of genes as a function of their molecular inputs. Using these local features, one can construct mathematically the ensemble of all genomic systems consistent with those local constraints. This ensemble constitutes the proper null hypothesis about the structure and logic of genomic systems. The typical, or generic, behaviors of ensemble members are then predictions about the large-scale features of random members of the ensemble. This is a new kind of statistical mechanics, averaging over ensembles of systems (Kauffman 1969, 1974, in press, Derrida 1981). If the distributions of properties parallel those seen in genomic regulatory systems, then those properties may be explained as consequences of membership in the ensemble. Indeed, past work based on this approach (Kauffman 1969, 1974, in press) has shown that many features of model genomic systems parallel, and hence may explain, a number of features of cell differentiation, such as the number of cell types in an organism, the similarity of gene expression patterns in different cell types, and other statistical features. Improved ensemble models, coupled with population genetic models, offer hope of understanding how evolution can mold the structure, logic, and behavior of integrated genomic systems.
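The ensemble approach can be illustrated with a small random Boolean network of the kind Kauffman studied: N genes, each regulated by K = 2 others through a randomly chosen Boolean rule, updated synchronously. The sketch below draws one random member of the ensemble and measures the length of the state cycle (attractor) it falls into, one of the statistical observables averaged over the ensemble.

```python
import random

N_GENES, K = 12, 2
rng = random.Random(42)

# Each gene gets K random inputs and a random Boolean function (truth table).
inputs = [rng.sample(range(N_GENES), K) for _ in range(N_GENES)]
tables = [[rng.randint(0, 1) for _ in range(2 ** K)] for _ in range(N_GENES)]

def step(state):
    """Synchronous update: every gene reads its K inputs through its table."""
    return tuple(
        tables[g][2 * state[inputs[g][0]] + state[inputs[g][1]]]
        for g in range(N_GENES)
    )

state = tuple(rng.randint(0, 1) for _ in range(N_GENES))
seen = {}                                # state -> time first visited
t = 0
while state not in seen:                 # must terminate within 2^N steps
    seen[state] = t
    state = step(state)
    t += 1
cycle_length = t - seen[state]           # length of the attractor state cycle
transient_length = seen[state]
```

In Kauffman's interpretation, each distinct state cycle is a candidate "cell type"; the statistics of cycle number and length over many random networks are the ensemble predictions compared against real organisms.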

A second approach may be the development of new mathematical and experimental tools to "parse" the genomic system into structurally or functionally isolated subcircuits. Thus, clusters of genes may be regulated in overlapping hierarchical batteries; some genes may fall to fixed steady states of activity common to many or all cell types, while other subsets of genes oscillate or exhibit complex patterns of temporal activity unique to different subsets of cell types. Analysis of such temporal patterns by time series techniques, based on temporal series of two-dimensional protein gel data, where each gel shows the synthesis patterns of up to 2000 genes at a time, may help resolve the genome into behavioral "chunks." If so, this will help block out the overall behavioral organization of the genomic system. Thereafter, analysis of detailed mid-sized subcircuits, with perhaps several to 100 or so genes, will require use of promoter constructs allowing activation or inhibition of arbitrary genes in arbitrary cell types at arbitrary moments, with analysis of the cascading consequences. A union with dynamical systems theory then can be carried out for modestly small systems, in which the "inverse problem" of guessing plausible circuitry to yield observed synthesis patterns is practical.

**Developmental biology**. As already described, mathematics can play a
crucial role in connecting different levels of organization. What biologists
seek are molecular level explanations of supramolecular phenomena. For
example, embryogenesis involves the coordinated movement and differentiation of
cell populations. Biologists would like to understand this in terms of
chemistry and genetics. To understand organismal biology is to understand how
high-level coherent organization results from mechanisms operating at the
molecular level. The essence of the problem is to build from one level to
another. How can we bridge this gap?

The mathematical, analytical, and numerical problems posed by the nonlinear systems of partial differential equations that arise in modeling developmental processes are extremely challenging and interesting. Reaction-diffusion equations, for example, as discussed earlier, have already stimulated the creation of new mathematics to study the wide spectrum of solution behaviors these equations exhibit. Numerical simulation of solutions in three dimensions remains very difficult, and the techniques need a great deal of further refinement to be practically useful. Mechanochemical models for generating pattern formation deal with more directly biological quantities (see Murray 1989 for a general survey of these and other pattern formation models); but they are more complex than, for example, the Navier-Stokes equations, which govern fluid flows, and possess a correspondingly richer solution behavior.

Bifurcation theory, linear analysis, and singular perturbation methods already have revealed new phenomena. Numerical simulation, particularly with the mechanochemical models, is challenging even in two dimensions. Real biological applications require solutions in three-dimensional domains whose sizes increase in time. New analytical and numerical simulation techniques as well as novel visualization methods will have to be devised before we can explore the sophisticated solution behaviors of such models. Unfortunately, the methods developed for Navier-Stokes equations frequently are not adequate to cope with the new models that arise in biology.

Recently, several advances in experimental biology (e.g., recombinant DNA technology, computer-enhanced imaging) have created new databases so extensive
and complex that mathematical and computational approaches are essential to
make sense of them. For example, a network of perhaps 60 cross-regulating
genes has been shown to regulate early development in *Drosophila*;
similarly, cell motility, which underlies morphogenesis, is driven by the
cellular cytoskeleton, whose mechanochemical regulation is controlled by a
network of more than 40 regulatory molecules. These systems should catalyze
new collaborations between biologists and mathematicians to deduce the
macroscopic consequences of newly revealed molecular mechanisms. Below we
illustrate the general case with a few specific examples.

In the past five years, advances in recombinant DNA technology have produced an unprecedented molecular-level database documenting a complex network of genes that code for proteins that control the expression of other genes. Mathematical analysis can deduce the macroscopic pattern-formation consequences of this molecular-level information. Indeed, it may be the only way to synthesize the global picture from the molecular-level parts, given the apparent complexity of genetic networks, in which each gene's expression is modulated by many other genes.

Computer graphics can be used to visualize data and the dynamical behavior of mathematical models. Many instruments in the biologist's arsenal (e.g., the confocal scanning laser microscope, gene sequencers) gather data into a computer-based graphical database. Modern computer graphics technology makes it possible to display, pictorially and in real time, the dynamic behavior of a mathematical model in the same form in which experimental data are stored. This technology should become the standard way to compare the behavior of a quantitative model with the data it purports to explain. Moreover, this same technology yields the fastest and most compelling medium of communication between mathematical modelers and biologists.

Using immunofluorescent probes to cloned gene products and scanning confocal laser microscopy on whole-mount *Drosophila* embryos, one may now obtain three-dimensional stereo reconstructions of the temporal evolution and spatial expression pattern of each of the genes that organize future morphological segmentation of the larva. Similarly, it is possible to observe intracellular and intercellular events such as cytoskeletal reorganization, calcium transients, and distribution patterns of cell adhesion molecules and putative morphogens in real time. Thus, a model of early pattern formation and/or morphogenesis (Edgar et al. 1989) in the *Drosophila* embryo, if it is correct, should produce the same output that confocal microscopy gathered as input. The intellectual challenge is to understand how the gene network, operating identically in every cell, results in globally coherent spatial pattern as a consequence of temporal biochemical dynamics.

Theoretical models have stimulated a great deal of experimental work in developmental biology. Here we briefly describe three major classes of models that illustrate the way in which mathematics provides a framework for connecting information at the micro level to macro level observations.

Spatial patterns can be created by the classical local activation, lateral
inhibition mechanism (Keller and Segel 1970, Oster and Murray 1989). A
purely chemical mechanism for pattern formation (but not morphogenesis) was
proposed by Turing (1952). In this model, activator and inhibitor morphogens
diffuse at different rates and react with one another. Mathematical analysis
shows how spatially heterogeneous patterns of morphogen concentration can
arise. For pattern to emerge, the activator must act over a shorter range
than the inhibitor; that is, the activator must diffuse relatively slowly.
If cells can sense the morphogen level and respond, then we
have a molecular mechanism for Wolpert's (1969) notion of "positional
information," one of the most influential concepts in modern developmental
biology. Although chemical gradients have been suspected of underlying
biological pattern formation for over 100 years, only recently has their
existence been unequivocally demonstrated (e.g., the bicoid protein in *Drosophila*, and
retinoic acid in vertebrate limb development). However, morphogenesis may not
be a purely chemical phenomenon in which cells merely respond to pre-existing
chemical patterns.
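
The mechanism can be made concrete with a minimal numerical sketch (the parameter values are illustrative only, not those of any particular published model): a one-dimensional two-morphogen system with Schnakenberg-type kinetics, in which a slowly diffusing activator and a rapidly diffusing inhibitor amplify small random perturbations of the uniform steady state into a stable spatial pattern.

```python
import numpy as np

# Illustrative sketch: 1D two-morphogen system with Schnakenberg kinetics,
#   u_t = u_xx + a - u + u^2 v,   v_t = d * v_xx + b - u^2 v,
# on a periodic domain.  All parameter values are illustrative.
a, b, d = 0.1, 0.9, 40.0      # d: ratio of inhibitor to activator diffusivity
L, N = 20.0, 80               # domain length and number of grid points
dx, dt = L / N, 4e-4          # dt is well under the stability limit dx^2/(2d)

rng = np.random.default_rng(0)
u = (a + b) + 0.01 * rng.standard_normal(N)           # steady state u* = a + b
v = b / (a + b) ** 2 + 0.01 * rng.standard_normal(N)  # v* = b/(a+b)^2, plus noise

def lap(w):
    """Second difference with periodic boundary conditions."""
    return (np.roll(w, 1) - 2.0 * w + np.roll(w, -1)) / dx ** 2

for _ in range(75_000):       # forward Euler to t = 30
    du = lap(u) + a - u + u * u * v
    dv = d * lap(v) + b - u * u * v
    u, v = u + dt * du, v + dt * dv

# With d >> 1 the noise grows into a finite-amplitude spatial pattern.
print(round(u.max() - u.min(), 2))
```

With d = 1 (equal diffusion rates) the same perturbation simply decays back to the uniform state, which is the content of the short-range activation, long-range inhibition requirement.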

One possibility is that such patterns are generated via chemotaxis, the directed movement of cells in response to a chemical gradient. The classical example is the slime mold *Dictyostelium*, where cells produce the chemoattractant (cAMP) as well as a chemokinetic morphogen (ammonia). Starting from the view that morphogenesis is, at least proximally, a mechanical event, several modelers have shown that the same spatial patterns that arise in Turing models can be produced by biomechanical models whose variables are cellular stresses and strains. These mechanochemical models have stimulated experimental programs to address their validity (Wolpert and Hornbruch, in press).
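
In minimal form, a chemotactic pattern-forming mechanism of the Keller-Segel type couples a cell density n to a chemoattractant concentration c:

```latex
\frac{\partial n}{\partial t} = \nabla\cdot\bigl(D_n \nabla n - \chi\, n\, \nabla c\bigr),
\qquad
\frac{\partial c}{\partial t} = D_c \nabla^{2} c + \alpha n - \beta c,
```

where χ is the chemotactic sensitivity; aggregation occurs when the attractive flux of cells up the gradient of c overcomes the dispersive flux driven by D_n.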

**3.2.2 Dynamic Aspects of Structure-Function Relationships**

The relation between structure and function is a central theme of classical biology. Some mathematical models have already illuminated problems in this area. For instance, McMahon and Kronauer (1976) modeled the tree branch as a beam of greatest lateral extent. Another example involves the biomechanics of feeding in aquatic organisms. Solving the Navier-Stokes equations for flow through small, bristled appendages, Cheer and Koehl (1987) have shown how the geometry permits the appendage to function as either a paddle or a rake.

In temporally varying systems, the description of structure-function relations remains especially elusive, and it is here that mathematical modeling is particularly essential. In physiology, for example, only by solving the appropriate equations of fluid mechanics and elasticity can one understand the relationships between the structure of the heart and its function of providing appropriate blood flow, and changes in blood flow, in response to changing environmental conditions. Similar remarks apply to other organs, for example, the kidney. Here fluid mechanical considerations play a role, but the details of chemical reactions are perhaps even more crucial to describe accurately. The interplay between chemistry and solid and fluid mechanics is similarly important in the description of plant growth.

Organ physiology is a natural target for mathematical and computer modeling. Such models can serve a three-fold purpose: to understand the normal structure-function relationship of the organ, to study the mechanisms and impact of disease processes, and to aid in the design of artificial devices that can be used to repair, assist, or replace the organ. For plants one can add the possibility of aiding breeders by identifying structures that optimize performance.

In the case of the heart, a computational method has been introduced (Peskin and McQueen, 1989) to solve the coupled equations of motion of the muscular heart walls, the elastic heart valve leaflets, and the viscous incompressible blood that flows in the cardiac chambers. Variants of this method have been applied to other problems in bio-fluid dynamics, including platelet aggregation during blood clotting, aquatic animal locomotion, and wave propagation along the basilar membrane of the inner ear. In the heart itself, the method has been used to study the optimal timing of events of the cardiac cycle, to simulate a disease state involving prolapse of the mitral valve, and to conduct parametric studies aimed at the optimal design of prosthetic cardiac valves. At the level of mechanics, another set of challenges is to develop theories for explaining the heart's structural components: the orientation and layering of muscle fibers in the ventricles, the position and makeup of the heart valves.
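
In outline, the computational method treats the heart as an elastic boundary immersed in fluid: the incompressible Navier-Stokes equations carry a force density concentrated on the boundary, and the boundary in turn moves at the local fluid velocity, the two being linked by delta-function interaction integrals:

```latex
\rho\Bigl(\frac{\partial\mathbf{u}}{\partial t} + \mathbf{u}\cdot\nabla\mathbf{u}\Bigr)
  = -\nabla p + \mu\,\nabla^{2}\mathbf{u} + \mathbf{f},
\qquad \nabla\cdot\mathbf{u} = 0,
\\[6pt]
\mathbf{f}(\mathbf{x},t) = \int \mathbf{F}(s,t)\,\delta\bigl(\mathbf{x}-\mathbf{X}(s,t)\bigr)\,ds,
\qquad
\frac{\partial\mathbf{X}}{\partial t}(s,t) = \int \mathbf{u}(\mathbf{x},t)\,\delta\bigl(\mathbf{x}-\mathbf{X}(s,t)\bigr)\,d\mathbf{x},
```

where X(s, t) is the configuration of the boundary and F its elastic force density; in the numerical method the delta function is replaced by a smoothed approximation on the fluid grid.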

Cardiac contraction is mediated by propagation of electrical activity over the three-dimensional multi-cellular musculature. Disturbances in this electrical system result in arrhythmias; the most severe of these is ventricular fibrillation, the principal cause of death after a heart attack. This is an active area of modeling research, with many open avenues to explore: the ionic channels underlying the cardiac signal (Noble 1962); the effects of spatial inhomogeneities, say from damaged tissue; the consequences of discreteness (finite cell size and gap junction coupling); and the fundamental nature of synchronization and sustained propagation patterns in three dimensions (Winfree 1990).
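
The basic phenomenon of propagation in excitable tissue can be sketched with a generic FitzHugh-Nagumo cable (an illustrative caricature with made-up parameters, not a physiological cardiac model such as Noble's): a local stimulus at one end launches a wave of excitation that travels down the cable.

```python
import numpy as np

# Generic excitable cable (FitzHugh-Nagumo kinetics; illustrative parameters):
#   v_t = D v_xx + v (v - a) (1 - v) - w,    w_t = eps * (v - g * w)
D, a, eps, g = 1.0, 0.1, 0.005, 0.5
Lx, N = 100.0, 200
dx, dt = Lx / N, 0.05            # D*dt/dx^2 = 0.2, within the explicit limit

v, w = np.zeros(N), np.zeros(N)
v[:10] = 1.0                     # stimulate the left end of the cable

t_half = None                    # time at which excitation reaches the midpoint
for step in range(1, 8001):
    lap = np.empty_like(v)       # Neumann (no-flux) boundary conditions
    lap[1:-1] = v[:-2] - 2 * v[1:-1] + v[2:]
    lap[0], lap[-1] = 2 * (v[1] - v[0]), 2 * (v[-2] - v[-1])
    v_new = v + dt * (D * lap / dx ** 2 + v * (v - a) * (1 - v) - w)
    w += dt * eps * (v - g * w)
    v = v_new
    if t_half is None and v[N // 2] > 0.5:
        t_half = step * dt       # the wave front has arrived mid-cable

print(t_half)
```

The recorded arrival time divided by the distance traveled gives the conduction speed, the quantity whose degradation by damaged tissue is at issue in the arrhythmia models discussed above.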

Other organ systems under intense investigation, which cannot be understood without the help of mathematics, include the kidney and pancreas. The kidney's countercurrent mechanism achieves a substantial separation of water and solutes, which determines, under the influence of antidiuretic hormone, whether a dilute or concentrated urine will be excreted. A key difficulty in this field is that the basic laws governing the transport of ions and molecules (e.g., Na+, Cl-, urea, and water) across the walls of renal tubules are quite different in different parts of the nephron (the fundamental unit of renal function), and are in many cases unknown. Differential equation models are leading to considerable insights in this area by illustrating the physiological consequences of different assumptions and thereby suggesting experiments critical for distinguishing among the possibilities (Stephenson 1972, Layton 1989, Weinstein and Windhager 1985). The many nephrons in a kidney are spatially distributed in a particular way; modeling will be invaluable in helping us to understand the reasons for this arrangement.
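
How countercurrent flow multiplies a modest transverse "single effect" into a large axial osmotic gradient can be conveyed by the classical textbook caricature (all numbers here, the 200 mOsm single effect and the six segments, are illustrative; this is not a model of a real nephron):

```python
# Toy countercurrent multiplier (illustrative caricature of the classical
# stepwise scheme; not a model of any real nephron).
n = 6                  # segments along the loop, cortex (0) to tip (n-1)
inflow = 300.0         # osmolality (mOsm) of fluid entering the descending limb
single_effect = 200.0  # transverse gradient the "pump" can maintain at each level

desc = [inflow] * n    # descending limb
asc = [inflow] * n     # ascending limb, same orientation

for _ in range(300):
    # 1. Transverse step: at each level solute is pumped from ascending to
    #    descending limb until their difference equals the single effect.
    for i in range(n):
        m = 0.5 * (desc[i] + asc[i])
        desc[i] = m + 0.5 * single_effect
        asc[i] = m - 0.5 * single_effect
    # 2. Axial step: fluid advances one segment around the hairpin.
    asc = asc[1:] + [desc[-1]]     # descending tip turns the bend
    desc = [inflow] + desc[:-1]    # fresh isotonic fluid enters at the top

# Countercurrent flow multiplies the 200 mOsm transverse effect into a much
# larger cortex-to-tip axial gradient.
print(round(max(desc)))
```

At steady state the axial gradient in this caricature far exceeds the single effect, which is the essential point; the realistic differential equation models cited above replace these crude rules with the actual, segment-specific transport laws.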

The pancreas also plays a key role in homeostasis, the control of the body's internal environment in which cells must operate. Although the classical view of homeostasis is based on steady-state notions, the release of insulin for metabolic regulation actually occurs in a rhythmic, pulsatile manner (with a period of 10 minutes or so), which appears to involve a hierarchy of oscillatory time scales. Release by cells in the islet (the functional unit of the pancreas) is correlated with their electrical activity, which exhibits a 5-10 second oscillation in response to glucose. Modeling, analogous to that for ionic currents in neurons, is helping to identify how the cellular oscillations arise, how cells are synchronized, and what the possible glucose-sensing mechanisms are (Keizer 1988, Sherman et al. 1988, Rinzel 1990). Further challenging questions have to do with the coupling between electrical activity and release, and with interactions among the million or so islets in the whole pancreas.
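
The flavor of such models can be conveyed by a generic three-variable burster, here the Hindmarsh-Rose equations with standard textbook parameters, used purely as an illustration (it is not the Chay-Keizer beta-cell model): a fast spike-generating subsystem is switched on and off by a slowly drifting third variable, giving alternating active and silent phases reminiscent of islet electrical activity.

```python
# Generic three-variable burster (Hindmarsh-Rose equations, standard textbook
# parameters), used only to illustrate bursting: a fast spiking subsystem
# (x, y) is modulated by a slow adaptation variable z.
I, r, s, x_rest = 2.0, 0.006, 4.0, -1.6

x, y, z = -1.6, 0.0, 0.0
dt, steps = 0.005, 400_000           # forward Euler to t = 2000
xs = []
for _ in range(steps):
    dx = y - x ** 3 + 3.0 * x ** 2 - z + I
    dy = 1.0 - 5.0 * x ** 2 - y
    dz = r * (s * (x - x_rest) - z)
    x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
    xs.append(x)

# During a burst x spikes repeatedly past +1; between bursts it rests
# near the quiescent level around -1.6.
print(round(max(xs), 2), round(min(xs), 2))
```

Because r is small, z evolves on a much slower time scale than x and y, and it is this separation of time scales, also central to the beta-cell models cited above, that organizes the spikes into bursts.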

In organ morphogenesis, important challenges for future work include finite element analyses of mechanical stress fields in the cellular continuum of growing tissue; optimization models to understand the functional significance of morphologies; and hydrodynamical models for nutrient transport in plants and animals (including marine invertebrates). Another interesting class of problems involves demographic models to predict cell cycle duration, age distribution, and family trees of cells in developing tissue (Bertaud and Gandar 1986). Kinematic analyses could be used to help unravel the physiological significance of gene products recently found to be correlated with the events of the cell cycle (reviewed by Murray and Kirschner 1989).

One of the strengths of mathematics is, of course, its ability to contend with temporally varying phenomena, and in particular to use models to deduce mechanism from kinetic data. It is a theme of modern biology, reiterated several times in this report, that what was previously regarded as static is now understood to be dynamic. We have just cited the dynamic nature of pancreatic homeostasis. A similar example is the hormonal regulation of ovulation, which the laboratory of Knobil has shown to involve pulsatile secretion of the relevant hormones with a period of about one hour. This too is an especially fertile field for mathematical investigation. The book edited by Goldbeter (1989) is a source of up-to-date references for theoretical work on this and many other dynamical problems in physiology.