SYSTEMS BIOLOGY

What Is Systems Biology?

Systems biology has many distinct forms and it is rather difficult to give the domain a singular overarching definition. Nature () defines systems biology as “the study of biological systems whose behavior cannot be reduced to the linear sum of their parts’ functions.” Hiroaki Kitano, a key proponent of systems biology, writes that, though there are diverse approaches within systems biology, systems approaches tend to involve “integration of experimental and computational research” (Kitano ) applied, for the most part, to large biological data sets so as to explore the interactions between the biological components of the system in question. Quantitative modeling methods can then be applied to help form concrete empirical hypotheses, which can be tested by standard biological empirical means (Nature ). As Trey Ideker writes: “Systems biology studies biological systems by systematically perturbing them (biologically, genetically, or chemically); monitoring the gene, protein, and informational pathway responses; integrating these data; and ultimately, formulating mathematical models that describe the structure of the system and its response to individual perturbations” (Ideker, Galitsky, and Hood , 343).

The causal workings of biological systems, composed as they are of often tremendous numbers of variables, are not adequately described in linear A → B → C terms, but need to be understood as operating through many feedback loops running concurrently. Such causal relations are best understood as linkages among the dynamics of these variables across numerous scales. Understanding biological causality in this way demands sophisticated analytic tools. Describing this need, Ottoline Leyser asserts:

There's a feeling that development in general, and certainly plant development, has got to the point at which it needs rigorous computational models to allow us to understand the regulatory networks that underlie development. Once you've got a sufficient understanding of the components in a system, and know that there is a lot of feedback regulation, it becomes incredibly difficult to make sensible predictive experimental plans without a computational model…[something that] stimulates integration of computational approaches and wet experiments. … There are lots of very good examples of computational modeling providing insights that would've been hard to get from the sort of classical, “back‐of‐an‐envelope” approach that people used before. (Amsen 2011, 4816)
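The point can be made concrete with a small simulation. The sketch below is minimal and entirely hypothetical—it corresponds to no published model—and merely illustrates the kind of feedback circuit at issue: an activator A induces a repressor R, while R suppresses A, integrated with simple Euler steps.

```python
# A toy two-component negative-feedback loop: A induces R; R represses A.
# All parameter values are hypothetical, chosen only for illustration.

def simulate(k_synth, steps=20000, dt=0.01):
    A, R = 1.0, 0.0
    for _ in range(steps):
        dA = k_synth / (1.0 + R ** 2) - 0.5 * A  # R represses A's synthesis
        dR = 0.8 * A - 0.3 * R                   # A drives R; R decays
        A += dA * dt
        R += dR * dt
    return A, R

# "Perturb" the system by doubling A's synthesis rate.
print("baseline:  A=%.3f, R=%.3f" % simulate(k_synth=1.0))
print("perturbed: A=%.3f, R=%.3f" % simulate(k_synth=2.0))
```

Doubling the input raises the steady-state output by far less than a factor of two, because the feedback loop absorbs much of the perturbation. It is precisely this non-additive behavior that defeats back-of-the-envelope reasoning and motivates computational models.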

Holistic, Mechanistic, Agnostic, or None of the Above?

It has been purported that one of the distinctive features of systems biology is its explicitly holistic character, that it even represents a paradigm shift in thinking in this regard (Laszlo ). However, one must immediately note that there are different strands of systems biology that are more or less amenable to holistic assumptions about biological causality, and in different senses of the word holistic. In any case, as commentators have observed (e.g., Gatherer 2010, 10; Fang and Casadevall 2011, 1401), one of the difficulties with calling systems biology holistic, or saying that it is nonreductive, even antireductive, is that no single definition of holism or reductionism (nor of systems biology, for that matter) exists. And, as has been commonly observed, holism and methodological reductionism, that is, the reductionism of the laboratory required for standardization and measurement, have long coexisted by necessity (Gatherer 2010).

If one is interested in casting systems biology in terms of the holistic/mechanistic dichotomy, all of these terms need to be nuanced with finer discriminations. One needs to inquire as to what kinds of systems biology there are. And, for that matter, what kinds of holism (if any) we are dealing with in each case. Moreover, we must examine the ways in which the frameworks used across systems biology force us to rethink the simplistic reductive folk views of biological causality that remain all too prevalent in the public space. All too often, these forms of discourse assume the existence of very clear‐cut and direct lines between clearly identifiable biological components and expressed traits. Whatever kinds of holism systems biology might embody, its emphasis on the interactions between innumerable parts of biological systems problematizes such clear‐cut, monocausal, monodirectional, and direct‐causative assumptions about how biological systems relate to expressed human traits and behaviors.

One sense in which systems biology might be seen as supportive of those favoring a more broadly holistic view of biological causation comes in its power to take account of, model or correlate, and then empirically test potential emergent phenomena within biological systems. By emergence, I refer to the self‐organizing properties of biological systems, the emergence of larger scale phenomena whose synergistic operations cannot be adequately described solely in terms of the parts of that system. As Leyser puts it: “[I am] less interested in descriptive studies of the functions of individual genes, and more interested in analytical studies of regulatory processes that explain the emergence of higher order properties in a developmental system. That's the key issue: to understand such genotype‐phenotype connectivity” (Amsen 2011, 4817 [emphases added]).

Holism, then, taken in its broadest possible form, as that which sees biological wholes as bigger than the sum of their parts, arguably finds its clearest concrete analogue in systems biology with respect to emergence. This is because biological systems, so understood, cannot be described simply as aggregates of their basic components, but must rather be seen as dynamically interacting with the larger wholes that emerge out of those components. Even then, some forms of systems biology go on simply to treat emergent properties as just another component needing to be modeled as part of the system in question.

It is notable that focusing on systems and networks has given rise to new methods and experimental approaches. The so‐called top‐down omics methodologies, which generate huge lists of data attempting to enumerate the components of biological systems (what might be described as a cataloging approach), constitute the dominant approach within systems biology. This might be thought of as a form of biological cartography, a process of mapping out, in incredible detail, the components and relations of, say, the genome, or epigenome (more on this later). Another distinctive set of approaches within systems biology is represented in the so‐called bottom‐up approaches, which attempt to derive computational models of biological systems in order to make predictions and hypotheses that can be empirically tested. This combination of data collection (via omics analysis) and the understanding of dynamics and mechanisms (via modeling) produces accounts of systemic behavior in response to perturbation, all of which can then be experimentally tested (Mesarovic, Sreenath, and Keene , 19).
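As a toy contrast between the two families of approaches, consider the following sketch. Everything in it is invented for illustration: the gene names, the expression levels, and the assumed degradation mechanism.

```python
# Top-down "cataloging": an omics-style parts list (made-up data).
catalog = {"geneA": 120.0, "geneB": 45.0, "geneC": 310.0}  # expression levels
print("components cataloged:", sorted(catalog))

# Bottom-up "modeling": posit a mechanism (geneC's product degrades
# geneB's transcript), so geneB's steady state is
# synthesis / (basal_decay + k * [geneC]).
synthesis, basal_decay, k = 9.0, 0.05, 0.0005
B_wildtype = synthesis / (basal_decay + k * catalog["geneC"])
B_knockout = synthesis / basal_decay  # predicted level if geneC is removed
print("geneB predicted: wild type %.1f, geneC knockout %.1f"
      % (B_wildtype, B_knockout))
```

The catalog merely lists parts; the model converts an assumed mechanism into a prediction (here, that knocking out geneC should roughly quadruple geneB) that can then be tested at the bench.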

Certainly, many of the limitations of methodological reductionism can be overcome by using systems biological approaches. Because systems biology often deals with computational data analytics, researchers are able to assay huge quantities of biological elements, and so make much more finely grained hypotheses regarding the effects of any given factor, or factors, in its respective system over time. This is often done through what is called black boxing, that is, correlating the patterns between changes in inputs and outputs of complex biological systems with respect to a given perturbation—in short, making some manner of change to a system and just seeing what happens. This allows one to get a sense of the correlations between specific causes and overall systemic effects, though without necessarily producing any understanding of the convoluted mechanisms governing how the one led to the other (hence the name black boxing—one only looks at the inputs and outputs, and brackets out questions regarding why it is exactly that such causes lead to such effects). In any case, such methods are made much easier with newer data‐analytic powers. Computational power used to generate simulations of huge quantities of interacting factors has opened up the empirical biological space for gathering, integrating, and exploring quantities of biological data in ways previously impossible.
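A black-box analysis can be sketched in a few lines. The “system” below is a deliberately hidden toy function standing in for a real biological preparation; its internals, and the noise level, are invented.

```python
import random

def _hidden_system(dose):
    # Pretend this body is invisible to us: a tangle of intermediate steps.
    intermediate = dose ** 1.5 / (1.0 + 0.1 * dose)
    return 3.0 * intermediate + random.gauss(0, 0.5)  # biological noise

random.seed(1)
doses = [random.uniform(0, 10) for _ in range(200)]   # perturbations
responses = [_hidden_system(d) for d in doses]        # observed outputs

# Pearson correlation between perturbation and response, computed by hand.
n = len(doses)
mx, my = sum(doses) / n, sum(responses) / n
cov = sum((x - mx) * (y - my) for x, y in zip(doses, responses)) / n
sx = (sum((x - mx) ** 2 for x in doses) / n) ** 0.5
sy = (sum((y - my) ** 2 for y in responses) / n) ** 0.5
print("input-output correlation: %.2f" % (cov / (sx * sy)))
```

A strong correlation establishes that the dose drives the response, but says nothing about why; the mechanism stays inside the box, just as described above.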

Given the diversity of approaches (from cataloging and black boxing, to the attempt to model systems in order to gain a more profound understanding of the key dynamics of biological systems across various scales), it is quite hard to map systems biology onto the notions of holism or mechanism in any comprehensive or neat way. Given the variety of techniques, uses, and possibilities with this broad term systems biology (not to mention the diversity of positions within the umbrella terms holism and mechanism), how much sense does it really make to say that systems biology is holistic, or that it is mechanistic? Perhaps it might be better to say that its different strands are able to support elements of both kinds of outlooks. If that is the case, then the lenses of holism, mechanism, or reductionism might not be the best ways of approaching systems biology, nor of clarifying what is distinctive about it to begin with. Such lenses might represent, as Tim Gwinn observes, a case of “arguing between two putatively contrary ill‐defined concepts (holism and reductionism), … in relation to how those concepts apply to a third ill‐defined concept (systems biology)” (Gwinn 2010). Certainly, forms of systems biology that rely on cataloging look very much like a mechanistic analysis of parts, though other kinds of systems biology that are more sensitive to multiscale dynamics are more or less open to holistic outlooks. In any case, nothing here is really clear cut as far as the terms holism, reductionism, and mechanism go.

Indeed, it might be argued that systems biology, depending on the particular approaches used, and (perhaps most importantly) the mindset of the researchers themselves, proffers opportunities for holists and mechanists of all kinds to explore their convictions. And, if systems biology turns out to be de facto holistic or mechanistic in its actual concrete application, this might be more of a reflection of current attitudes prevalent in biological research circles (and society generally), and less about the nature of systems biology itself. Moreover, it must be noted that the broadly ideological and metaphysical machinery embedded in both holistic and mechanistic worldviews (e.g., Gaia theory) is just as empirically impenetrable as it ever was, and can be neither supported nor confuted by systems biological approaches.

Integration and Convergence

The modular nature of computational forms of biology invites and welcomes the integration of data sets and methodologies from numerous other domains. The common tongue of computational analysis, and the software engineering undergirding it, serves as a Rosetta stone between these disciplines, and has already yielded an immense quantity of multiscientific conversation. Systems biology is very much a part of a larger project of what has been called advanced integrative scientific convergence (AISC; Giordano, Kulkarni, and Farwell 2014, 74). Here, systems biology might be understood, in part, as a component within a larger system of interacting scientific modules including engineering, network thinking, synthetic biology, computer science, information theory, systems analysis, DNA computing, and cryptography, as well as numerous other domains, too many to be listed, any of which might be usefully combined with other domains in order to generate real‐world, practical advances. It is important that one understand systems biology (and predictive neuroscience) as being components within a larger trend, a larger convergence project, in order to understand the argument that will be presented here later on. Systems biology is not a thing in itself, as it were, to be understood in isolation, or as if from nowhere, but rather a component in a much larger data‐driven movement, and it draws no small significance therefrom.

Yet computation does not just drive external convergence between quantitative disciplines. Biology is a very broad discipline composed of numerous subdisciplines, and the computational aspect of systems biology has shown a remarkable power for uniting subdomains, and for motivating the creation of huge data sets from across many of biology's internally divergent subdisciplines. Increasing integration is always welcomed (Pecina‐Slaus and Pecina , 2). This gives rise to metadisciplines devoted entirely to the prospect of further integrating and making sense of all these data sets (e.g., bioinformatics, the application of computational software to biological data so as to draw out whatever practical fruit might be derived from various recombinations of the data). Numerous strategies for such integration exist. Systems biology can thus be understood both as expressing a self‐integration within biology (call it endoconvergence), and as being part of the external de‐siloing of the scientific disciplines for translational purposes.

Computer Modeling and Mathematical Biology

All this collection, aggregation, and integration produce a tremendous quantity of data. To analyze it, systems biology makes use of, and has further facilitated, mathematical and computational biology. The National Institutes of Health (NIH) defines computational biology as the use of “mathematical and computational approaches to address theoretical and experimental questions in biology … the development and application of data‐analytical and theoretical methods, mathematical modeling and computational simulation techniques to the study of biological, behavioral, and social systems” (Huerta et al. 2000). We will comment later on the much‐too‐casual way that biological, behavioral, and social domains are simply thrown together here.

The omics approach mentioned above consists, similarly, in the production of ever‐increasing data sets cataloging the parts of systems. Herein, any biological system involving complex and interacting elements can be cataloged and listed in order to provide inordinately detailed descriptions of its components. A familiar example is the study of the genome. Listing the elements of the genome gives us genomics. And wherever there is an ome, an omics can likewise be created. Study of the epigenome leads to a catalog qua epigenomics, and there are simply too many omics analysis procedures to feasibly describe here (to name but a few: connectomics, embryomics, pharmacogenomics, phenomics, proteomics, transcriptomics—the list goes on, and is ever expanding).

In this so‐called molecular to modular movement, that is, the movement from molecular biology toward more modular thinking, the metaphor of modularity can serve as an interpretive key for understanding the overarching process of convergence. Modularity is, after all, a central feature of software development in computer science. To illustrate: the process of creating complex software, and the organization of software‐creation teams in general, are modular from the outset, wherein each module is coded separately by different teams of programmers for later integration. These modules are coordinated and assembled using a standardized common language, allowing the various components to be integrated with ease at a later time. The embracing vision of AISC might very well be described in such terms. This subsuming of huge swathes of biological data into modular format, the creation of a common structure into which empirical data sets can then be variously recombined, is the substructure of computational biology too. And, at the macrolevel, one of the chief challenges of big data aggregation is the creation of general or universal platforms through which as many different kinds of data as possible can be feasibly integrated and made conversant.
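The metaphor can be rendered in a few lines of code. The module names, payloads, and shared interface below are all invented; the point is only that once modules honor one standardized shape, an integrator can combine them without knowing their internals.

```python
# Two independently written "modules" that speak one standardized interface.
def genomics_module():
    return {"source": "genomics", "features": {"variant_count": 4102}}

def proteomics_module():
    return {"source": "proteomics", "features": {"proteins_detected": 2875}}

def integrate(modules):
    """Merge any modules that honor the shared {'source', 'features'} shape."""
    merged = {}
    for load in modules:
        record = load()
        for name, value in record["features"].items():
            merged[record["source"] + ":" + name] = value
    return merged

print(integrate([genomics_module, proteomics_module]))
```

New data sources plug in simply by exposing the same shape, which is the sense in which standardization does the integrative work.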

What one has here, then, is the attempt to generate a universal, or semiuniversal, coding fabric, a universal scientific foundation and language upon which the quantitative outputs of numerous sciences can be efficiently plugged into each other, quite literally in some cases, so these fruits can be used to build upon each other. Just as synthetic biology breaks down DNA into bioblocks that can be recombined with the greatest of ease, we have a movement by the sciences toward the creation of a common set of quantitative building blocks that can then be used to produce numerous cross‐disciplinary technological advances, insights, and outputs. Systems biology, as one component in this larger series of building blocks for use in the overall scientific‐integrative project, is thus part of what might be called a computational federalization of the sciences. Whether one regards systems biology merely as a fad, or as a radical paradigm shift in thinking, one needs to step back in either case and observe the larger schema of which it is a part.

Prediction and Intervention

Computational biomodeling aims to create simulations of biological systems in order to predict how they will react to different changes in their environments. Such predictions facilitate interventions. Moreover, the interest is translational—that is, concerned with translating the discoveries made in scientific domains into outcomes, potential treatments, interventions, clinical trials, and real‐world practical fruit of all kinds (Huser and Cimino , 400). As Kitano asserts, “computational systems biology addresses questions fundamental to our understanding of life, yet progress here will lead to practical innovations in medicine, drug discovery and engineering … through pragmatic modeling and theoretical exploration” (Kitano ).

The potential for practical fruit, given the broad range of disciplines to be integrated in this way, is great. One sees attempts to use systems biology to advance the work of the environmental sciences (e.g., food and crops), health and nutrition, and the medical sciences generally. Agriculturally, systems approaches have provided huge amounts of data about plants (Kumar et al. , 581), and promise to help us understand traits linked to agricultural productivity, biotic and abiotic stress resistance, photosynthesis efficiency, and nutrient mobilization. As Anil Kumar et al. note, such developments “have made it possible to design smart crops with superior agronomic traits through genetic manipulation of key candidate genes” (Kumar et al. , 581).

Medical science projects—from genomics for understanding the architecture of everything from inflammatory bowel disease to Alzheimer's (Platt, Thiel, and Kurths ), to understanding infectious diseases, cell dynamics, cancers, the brain, so‐called P4 medicine (predictive, preventive, personalized, and participatory), and proteomics—have been thus advanced. Everything from modeling the spread of infectious diseases, epidemics, and pandemics (Fumanelli et al. 2016), including modeling how best to close schools in order to mitigate the spread of influenza, to mapping how diseases arise and develop in the body, and modeling cardiovascular pathology (Grebogi and Booth 2016), is being undertaken. The potential arises for predicting, preventing, and diminishing a huge range of human illnesses (Institute for Systems Biology ). For example, promises are made of “dissecting cancer through mathematics” (Byrne 2010, 221) in the search for “circulating biomarkers for detection and treatment personalization” (Friboes et al. 2015, 163).
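The epidemic-modeling work cited above can be gestured at with the classic SIR (susceptible-infected-recovered) model. The sketch below is deliberately crude and its parameter values are invented; studies such as Fumanelli et al.'s fit detailed contact structure to data, but the qualitative logic is the same: reducing the contact rate flattens the epidemic peak.

```python
# Minimal SIR model, integrated with Euler steps. Parameters are invented.
def sir_peak(beta, gamma=0.1, days=200, dt=0.1):
    S, I, R = 0.999, 0.001, 0.0          # fractions of the population
    peak = 0.0
    for _ in range(int(days / dt)):
        dS = -beta * S * I               # new infections leave S ...
        dI = beta * S * I - gamma * I    # ... enter I, and then recover
        dR = gamma * I
        S, I, R = S + dS * dt, I + dI * dt, R + dR * dt
        peak = max(peak, I)
    return peak

# Model a school-closure-style intervention as a cut in the contact rate.
print("peak infected, schools open:   %.1f%%" % (100 * sir_peak(beta=0.30)))
print("peak infected, schools closed: %.1f%%" % (100 * sir_peak(beta=0.18)))
```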

PREDICTIVE NEUROSCIENCE

If you are a neuroscientist, a central premise is that it is not possible to understand behavior, including human behavior, and even abnormal human behavior, without biology. But at the same time, another central premise must be that you're not going to understand behavior if you think that biology will explain everything. (Sapolsky , 10)

Sibling to Systems Biology

The use of analytics to form predictions of human behavior is big business. Between 2006 and 2010, IBM spent $11 billion researching crime prediction analytics, and its products are used by police forces in the United Kingdom and the United States to predict crime and future recidivism (Thompson ). Insofar as any given analytics draw primarily upon neuro‐related information derived from neuroscience and related fields, we have what might be called predictive neuroscience. This is related to, but certainly not the same as, systems biology. Yet, in all four of the features outlined above (holistic‐mechanistic ambiguity, integration, computation, translation), predictive neuroscience overlaps considerably with systems biology. Again, this is because both domains are subject to the supervening convergence project and its data‐driven reliance on computation and modeling. Like systems biology, predictive neuroscience is fervently multiscientific and integrative. And, much like systems biology, there is a heavy emphasis on analytics and the use of various statistical methodologies. The center of such research is translational; computational neuroscience is seen as a “crucially important discipline for furthering our understanding of brain function and translating this knowledge into technological applications” (BU [Boston University] Neuroscience 2016). The interest in advancing medical science, viz. prediction and prevention, is a central motivator here.

Brain research has moved beyond psychology, psychiatry, and neurology and merged with other disciplines—statistical analysis, information technology, engineering, and many more. Advances in neuroimaging, alongside cloud computing and big data analytics, have created major changes in brain research (Le ). As with systems biology, the colossal quantities of data produced require computational analysis. James Giordano, Anvita Kulkarni, and James Farwell write:

The volume and complexity of differing modes, types, and levels of data to be processed and analyzed within the AISC paradigm necessitates computation technology to optimize the validity, reliability and utility of these approaches. These methods can facilitate the comparative, analytic, predictive, and normative value of multidisciplinary data sets that can be employed in forms of neuropsychiatric intelligence acquisition, gathering, and analysis (i.e., what has been termed “NEURINT”). (Giordano et al. 2014, 74–76)

The results of the given quantitative outputs are then represented schematically, or in terms of nodograms, series of connected nodes “with each node assuming a relative value of probabilistic weighting based upon relative types, extents, and validity of data obtained,” all connected by various lines which “represent dedicated applications of integrative collaboration” (Giordano et al. 2014, 75).
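A nodogram of this kind can be caricatured as a small weighted graph. The nodes, weights, and the rule for combining linked evidence below are wholly hypothetical; Giordano and colleagues do not specify this arithmetic, and the sketch is meant only to show the shape of the data structure.

```python
# Nodes carry relative probabilistic weightings; edges mark which data
# streams are taken to inform one another. All values are invented.
nodes = {
    "neuroimaging": 0.70,
    "genomics":     0.55,
    "demographics": 0.40,
}
edges = [("neuroimaging", "genomics"), ("genomics", "demographics")]

# One possible (hypothetical) combination rule: average linked weightings.
for a, b in edges:
    print("%s <-> %s : combined weighting %.2f"
          % (a, b, (nodes[a] + nodes[b]) / 2))
```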

The intrinsically probabilistic dimension of such predictive analyses is an important implication of the nonlinear relationships that exist between the various factors in the systems being investigated. Unlike cruder reductive approaches, which have too often characterized biological causality in terms of neat, direct, and, above all, deterministic causal relationships between biological structures and their expressed characteristics, predictive neuroscience is necessarily probabilistic because it understands that the elements in a given system are all simultaneously affecting one another.

Nonlinearity forces such researchers to admit a degree of inherent uncertainty about the operations of the systems they are investigating. Again, this is tremendously significant given that such systems approaches are lauded as being capable of predicting, and thus preemptively treating, individuals thought to be likely to develop social problems in the future. The wish to prevent social ills necessarily relies on unavoidably probabilistic data in making predictions about harms, and judgments about persons’ future behavior, before any such potential harms have been carried out.

In predictive neuroscience, the nodograms and schemas applied articulate the relationships between many concerns, including behavioral and social dynamics, brain function, imaging, chemical biomarkers, anatomy, genetics, and genomics. What is in play here is very much the schematic kind of approach one finds in the omics version of systems biology, as well as the basic underlying assumption that complex behaviors cannot be adequately described in linear A → B → C causal relationships, or in terms of the hard reductive search for a dedicated “brain spot” for each behavior. If there are any predictions to be made at all, the assumption is that more and more information, from as many different scales and sources as possible, needs to be assayed, integrated, and assessed. And so a drive to amass as much data as possible, from as many sources as possible, arises.

Imaging, Genomics, and Cloud Computing

The heart of predictive neuroscience is the manner in which it integrates data about the interactions among the following (a toy sketch of such integration follows the list):

1. the function and structure of the brain; mapped against
2. genomic information about predispositions; and in relation to
3. behavioral context, aggregated in cloud‐based analytics by drawing on various streams of demographic information (Giordano et al. 2014, 73); compiled into catalogs represented as
4. big data, analyzed through increasingly advanced data mining methodologies.
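A toy version of this pipeline, with every feature, weight, and record invented (nothing below reflects a real instrument, data set, or deployed model), might look as follows:

```python
import math
import random

random.seed(7)
# Per-person records mixing the streams enumerated above.
people = [
    {"amygdala_vol": random.gauss(1.0, 0.1),  # (1) imaging-derived
     "risk_alleles": random.randint(0, 4),    # (2) genomic
     "adversity":    random.uniform(0, 1)}    # (3) demographic/contextual
    for _ in range(5)
]

# Hypothetical weights; a real system would fit these to (4) big data.
WEIGHTS = {"amygdala_vol": -1.2, "risk_alleles": 0.4, "adversity": 1.5}

def predicted_probability(person):
    score = -1.0 + sum(WEIGHTS[k] * person[k] for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-score))    # logistic link: a probability

for p in people:
    print("predicted probability: %.2f" % predicted_probability(p))
```

Note what such a pipeline outputs: probabilities, never certainties, in keeping with the intrinsically probabilistic character of such analyses noted above.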

Predictive neuroscience begins with imaging. An entire alphabet soup of neuroimaging techniques exists, each with its respective strengths and limitations. In combination they can be used to yield views of both the structure of the brain (its architecture and physiology), as well as its function (the activity, blood and/or electrical, involved in that structure over time). Such laboratory‐based imaging certainly has its uses, and in various medical contexts imaging devices can provide information that is immensely valuable (e.g., detecting blood clots in those who have suffered a stroke). However, when it comes to more contextual, socially embedded behavior, laboratory‐based imaging information can be misleading (Wiseman ). Indeed, the brain changes depending on where it is: “context matters” (Le ). But as means for empirical testing outside the laboratory become more reliable, new modes of social neuroscience and medical analysis of the brain in action have come to realize the value and importance of looking at complex neurological problems in situ, in the context in which actual problems occur.

Development is also a crucial factor in systems approaches, since it is regulatory processes that are of particular interest. By extension, predictive neuroscience must also contend with the manner in which the brain changes over the lifespan. And it is here that the convergence of scientific disciplines, and the need to view the brain as a system that exists within other systems, arises (or, as David Depew put it, as an ecosystem that is itself made up of various ecosystems). A systemic understanding of neural operations and their relationships with factors at other scales is absolutely required. As such, it will rarely do to simply think one can point to a given structure of the brain and say one has located the reference point of specific behavioral outcomes. As soon as one grasps the complexities of the interactions between the brain's interacting parts (itself increasingly viewed as a nonlinear system of modules whose operations occur primarily through communicative feedback loops), its genetic and epigenetic expression, and the developmental and environmental systems in which a given person's brain is but one moving part, it becomes clear that such neuroreductive explanations are highly impoverished, and require a broader multilevel framework if they are to be understood at all.

Given this need for a systemic and multiscale view, genomic analysis is taken as a complement to imaging analysis. If brain structure and function give the menu for brain expression, so to speak, then, according to Giordano, genomics gives us the ingredients. Herein, neurogenetic approaches “may help improve probabilistic prediction by providing information about hereditary and population patterns of neural structure and function that contribute to neuropsychological states” (Giordano et al. 2014, 78–79). Such a combination of data is welcomed, just as with systems biology generally, because of the limitations of any one given information stream. It is simply assumed (without sufficient explanation) that the more information one can integrate, despite the very different nature of all these data sources, the more likely one is to get an accurate picture. Giordano writes:

Shortcomings can be delimited and compensated by the use of co‐registered, combinatory neuroimaging techniques, and the conjoinment of other biological approaches, such as neurogenetics, neuroproteomics (e.g., phylogenetic and cladistics analyses), and biomarker assays…. When coupled to large‐scale arrays of individual and group/populational databases, such convergent scientific methods enact rapid (if not real‐time) accessibility of information that may allow formulation of comparative and normative indices and may provide considerable diagnostic potential. (Giordano 2012, 54)

In short, when it comes to data, the more the merrier is the rule. It remains to be asked whether combining all such data sources, given their wildly different natures, their differential levels of fidelity, and their variable susceptibility to error in collection, is more likely to lead to a Chinese‐whispers type of aggregation than to a more accurate picture—because both accuracies and inaccuracies are multiplied through data aggregation (this is part of the problem known as garbage in, garbage out). Attempting to translate and integrate too many different kinds of data might therefore lessen, rather than increase, our capacity to understand the behavior and traits in question. If so, then with such data convergence ideologies more is actually less, and less is more, and the entire ideology of data convergence is severely undermined. Less data, but better data, might be much more useful than simply throwing absolutely everything into the same analytic pot. What is required is a better understanding of a more limited scope of crucial interactions in system dynamics, rather than an attempt simply to quantify and integrate everything.
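The “more is less” worry can be illustrated with a small simulation: a classifier given one genuinely informative measurement plus a growing pile of noisy, irrelevant ones. The setup is entirely invented and deliberately simple, but it shows how indiscriminate aggregation can degrade rather than sharpen prediction.

```python
import random

random.seed(42)

def make_person(label, n_noise):
    # One informative feature separates the groups; the rest is pure noise.
    informative = random.gauss(1.0 if label else -1.0, 1.0)
    return [informative] + [random.gauss(0, 3.0) for _ in range(n_noise)]

def accuracy(n_noise, n=400):
    train = [(make_person(l, n_noise), l) for l in (0, 1) for _ in range(n)]
    test = [(make_person(l, n_noise), l) for l in (0, 1) for _ in range(n)]
    dims = n_noise + 1
    # Nearest-centroid classifier: average each group's training vectors.
    cent = {lab: [sum(v[d] for v, l in train if l == lab) / n
                  for d in range(dims)] for lab in (0, 1)}
    def nearest(v):
        return min((sum((v[d] - cent[lab][d]) ** 2 for d in range(dims)), lab)
                   for lab in (0, 1))[1]
    return sum(nearest(v) == l for v, l in test) / len(test)

for n_noise in (0, 10, 100):
    print("%3d noisy extra features -> accuracy %.2f"
          % (n_noise, accuracy(n_noise)))
```

As the irrelevant features pile up, accuracy falls, even though every added column is “more data.” Better data, not simply more data, is what improves the picture.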

CONTRARY MESSAGES, AGENCY, AND DEPERSONALIZATION

An Increase in Agency—Medical and Social Interventions

Predictive neuroscience is currently applied to various medical, mental health, and social issues (and there is not always a clear line between them). Be it dementia or antisocial behavior, schizophrenia or alcoholism, medical and social problems alike are taken to be well within its purview.

One of the benefits of predictive neuroscience is that it has no place for the crude, nothing‐buttery frameworks that proliferate in the public space (e.g., we are nothing but a bunch of neurons, we are nothing but the activities of genes, and so on). It is not going to be acceptable, from a multiscale, dynamic view, to just say that aggression exists because of a malfunctioning amygdala, that psychopaths are the product of the MAOA gene mutation, or that this sadistic person lacks empathy because he was bathed in too much testosterone in utero. One of the advantages of multiscale approaches is that they provide a rigorous framework with which we can respond to proponents of such hard reductionisms and instead encourage a more systemic or dynamically interactional way of conceptualizing and responding to human complexities.

The brain can no longer be understood just as an assemblage of parts produced directly from a genetic blueprint, whose functions then merely play out according to its preset structure. Nor is social determinism acceptable, for it is the interaction of scales that matters. And this is precisely why we need a nonlinear systems approach that understands complex difficulties as neither monocausal nor merely summative aggregates of linear causes, but rather as synergistic systems of feedback and regulatory processes.

As such, some form of systems thinking regarding biosocial interaction does not just provide a better conceptual framework for describing how profoundly interconnected the various features of our biology are with the more human level of choice and agency. With the medical advances that predictive neuroscience proffers, we can come to expect an increasing preponderance of multiscale, biopsychosocial modes of medical intervention too (qua P4 medicine). A biopsychosocial approach to medicine, that is, seeing biology as one essential element of disease, but not the totality thereof, represents an important shift in the way we think about medical treatments. For one thing, the shift involves a greater emphasis on preventing disease before it comes about. By understanding the relationships between the expression of diseases (particularly genetically related diseases) and their related environmental and biological factors, more responsibility is placed on individuals to act in ways that can stave off illness before it occurs. Testing for genetic predispositions and understanding environmental triggers go together to form a broader treatment context in which individual persons have to take a much more active role than with the all‐too‐prevalent attitude that one should get ill first, and then simply seek out medication for ongoing treatment after the fact.

This rediscovery of the active role of persons in their own treatment and health has its dangers too, because once the patient is held accountable for his or her lifestyle, a blame‐game becomes possible wherein patients become subject to normative critique for becoming ill (an example is the recent denial of nonemergency surgery to the obese and to smokers by the UK's National Health Service). On the one hand, it is unambiguously good that persons regard themselves as having an active stake in their own health, rather than thinking of themselves as passive receivers of treatments from medical technicians. On the other hand, there is the need for this participatory element not to be turned into the more sinister sense that all illness might be the patients’ own fault, for which patients are to be held accountable by medical practitioners and insurers alike.

A clear example of the shift toward more systemic and interactive thinking can be found in research into treating ailments like Alzheimer's. Contrary to tabloid effusing about cures for Alzheimer's that massively overstates the power of developing medications, the growing consensus is that Alzheimer's should no longer be understood as a matter of seeking out some miracle cure, and one should not be hoping for some pharmaceutical or bioenhancement technology to solve the problem outright. Indeed, uniscale, pharmaceutical approaches will always be inadequate for treating problems like dementia because, as Steven Rose () points out, aging, and our assessments of what counts as good memory at which age, are socially relative and shifting judgments, and as such are (at least in part) socially constructed phenomena anyway. Social remedies must be part of the treatment of maladies that are themselves, in part, socially defined.

The rise of P4 medicine (predictive, preventive, personalized, and participatory) embodies the need to deal with problems, such as those related to aging and memory loss, across numerous scales. Dementia is a perfect exemplar of a medical condition that benefits from multilevel treatment protocols. As Rose suggests, the best suggestion is a use it or lose it understanding for dementia prevention (Rose , 170)—staying active and sleeping well, combined with dietary measures and social interactions. Instead of seeking out the cause of dementia, as if one will be able to point to one singular hobgoblin producing the malady, biopsychosocial intervention and prevention help stave off the various processes of decline to begin with, before any kinds of pharmaceutical approaches are even necessary. It should be noted that this article was written during the media furor regarding the testing of the new miracle drug for Alzheimer's, solanezumab. In November 2016, the drug failed testing, showing no significant improvement in the rate of patients’ decline (The Guardian notes that “between 2002 and 2012, 99.6% of drugs studies aimed at preventing, curing or improving Alzheimer's symptoms were either halted or discontinued”; Devlin 2016). Despite such media claims, the general shift is away from monocausal explanations and the magic bullet approach they lend themselves to, and toward an attitude that attempts to deal with complex problems, from the outset, along as many dimensions as feasible.

When one is dealing with such complex ailments (as opposed to, say, breaking a leg), monocausal thinking perpetuates a serious misapprehension of the nature of the difficulties themselves. It matters a great deal whether one thinks of memory as something just to be treated by pharmaceuticals or technologies, which are inherently patient‐passive, or as something to supplement the formation of broader good habits. More than anything, it is the participatory element of such P4 medicine that is significant here. The aim is to remove the idea of patients being merely passive consumers of medicine toward generating the notion of patients being active participants in their own disease prevention and recovery—though preferably without falling toward the other extreme, making the patient feel entirely responsible, and generating an unhelpful and excessive culture of blaming patients for illnesses that might have simply beset them, or disproportionately punishing persons for simply becoming ill.

As such, the dominating scientific rhetoric has social ramifications that need to be considered. This is one reason why the underlying rhetoric embedded within scientific paradigms needs to be made explicit. A public that has been educated to understand that complex human problems need multiscale approaches from the outset is less likely to think in terms of magic bullet approaches, and more likely to understand that medical and social problems alike, from dementia to alcoholism, are all the sorts of things they have an active personal stake in. Properly conveyed, such a message might lend itself to an increase in agency and self‐responsibility across the various populations in which such science discourse is expressed. In this way, the very language of multiscale biology has some potential to be itself healing by implicitly conveying a more agency‐enhancing social message in medical context.

Depersonalization: Public Health Interventions and the Janusian Quality of Predictive Neuroscience

Such frameworks can also proffer unhelpful messages. Predictive neuroscience is clearly related to systems biology, but the major difference between the two has to be the manner in which predictive neuroscience traverses both biological and nonbiological data. This is the crucial observation that will concern the latter part of this article: the overall integrative scientific paradigm that embraces both systems biology and predictive neuroscience does not draw a qualitative distinction between analyzing the elements of biological systems that contain no persons or agents, and those systems that involve the interactions of persons, as agents, with other social agents and their environments.

When one is exploring the systemic feedback relationships between, say, the epigenomic components of corn when exposed to differing light conditions, I would suggest that one is doing something of a radically different order from exploring the feedback systems between persons’ demographic, genetic, and neural complements. Yet the quantitative methodologies share an underlying frame and approach that is identical (aggregate, analyze, correlate, predict). Predictive neuroscience is interested in human agents, and precisely as agents, whereas systems biology is more interested in the interactions between nonpersonal biota more generally. This represents a crucial category difference between their objects of inquiry. Yet the overall integrative project, returning to the software team analogy drawn earlier, functions by reducing everything to the level of modules of the same sort, whose analyses can be parceled out in a uniform manner, and whose resulting data sets can be later recombined as if there were no important difference between the objects of inquiry themselves. What we have, then, is the generation of a new kind of reductionism, wherein no distinction is drawn at all between nonpersonal entities and human persons in the overall integrative project.

It was noted above that both biological and nonbiological factors are important for understanding human behavior. But predictive neuroscience, systems biology, and the larger AISC project are all overwhelmingly quantitative—human behavior is treated as if it were just another biological component to be quantified. This is a point that needs to be explored. It is all well and good observing that behavior cannot be understood in purely biological terms, but when one treats the nonbiological factors as if they are objects of the same order as biological data, just another module to be combined with biological data modules, then a very subtle but important confusion arises with profound implications. Given that predictive neuroscience's data traverse both the biotic and nonbiotic domains, one needs to be concerned about the way in which such data are integrated, and what means are available for qualitatively differentiating between these two fundamentally different kinds of system—one which involves human, personal agents, and the other that deals with biota qua nonpersonal, nonagent interacting parts.

This difference takes on importance once one recognizes the translational and practically oriented intentions of those applying predictive neuroscience. Such applications are being used to heal, but also to control and deter activities at the social level. Just as neuroscientific assessment and interventional neurotechnologies “are being increasingly regarded as viable techniques and tools within psychiatric research and practice” (Giordano 2012, 54), these same technical and assessment tools can be, and are being, taken from their medical context and put to use in public protection, public health, and even for deterrence purposes and for national security (Wurzman and Giordano , T60). The inextricability of these types of outputs produced by predictive neuroscience does need to be acknowledged.

There are certain calls for predictive neuroscience to be used in such a social capacity. Giordano writes, for example, that in the light of terrorism and very visible mass killings, there has been a public call for neuroscience and neurotech (neuroS/T) to be devised for purposes of public protection. Or, at least, if it is not the public crying out for such intervention, those within the neuroS/T domains (also in psychology and the social sciences) are taking it upon themselves to think about how their studies might be translated for tackling radicalism and mass public violence. NeuroS/T is being called upon (and is in fact being used) to define, assess, and potentially prevent wanton acts of aggression and violence (Wurzman and Giordano , T61; DiEullis and Cabayan 2013), and the neurobiology of aggression is one particularly prominent example of this interest in neuroS/T in helping form strategies for public intervention, paternalism, and encouraging deterrence at home and abroad (Wurzman and Giordano , T68; DiEullis and Cabayan 2013).

Various ethical concerns over the use of such technologies arise, but there is a more foundational problem—the dominance of computational analytic methodologies over more humanistic empirical modes. There is no question that a computational approach to analyzing human persons can yield important fruit, but when it comes to dominate as a methodology over more humanistic forms of inquiry, the effect is deleterious. Human persons need to be understood as qualitatively different from the sorts of beings and interactions one is used to investigating as part of the larger systems biology approaches. When there is no balance between the quantitative or computational modes of analyzing persons and a more qualitatively sensitive, humanistic, person‐centered mode, all sorts of problems and ethical dangers arise, not least of which is the implicit message that one is not being scientific unless one looks at human persons as if they were no different, in effect, from molecules, proteins, and the various other nonpersonal biota in their environment. With that we have returned to something very much like the old forms of reductionism we were hoping to have dispensed with.

Then, one is left with an excessively schematic way of regarding human persons. Such schematizing can be useful in its place, but the question is one of balance, of leavening such approaches with more humanistic approaches. Things are moving increasingly further into such imbalance, creating a self‐reinforcing mindset in which humanistic approaches become less and less relevant, less respected, less well funded, pursued less in favor of the overwhelming deluge of computational, quantitative, analytics‐based modes of describing the human person. The temptation to carry things in this direction is inherent in the AISC framework itself. Quantifiable data are more readily integrated, whereas more sensitive and humanistic empirical approaches are less readily integrated. And when the trend is toward integration, that which is less easily integrated gets sidelined. Such computational methods, the standardization processes, are precisely what are dominating (and overwhelmingly so) the scientific scene at present.

It is from here that the need for a double helical approach becomes apparent. The point to be underlined over and over is that the problem is not the use of computational methods per se (though problems do arise because such methods are tempting and seductive with respect to what they promise to provide). Rather, the problem is the current trend toward the increasing dominance of quantitative work at the cost of more humanistic scientific work. The assertion posed earlier was that systems biology should be called multiscientific rather than multidisciplinary (which is how it describes itself). Even a cursory look at the sorts of major funding grants available (e.g., the White House's current Brain Research through Advancing Innovative Neurotechnologies [BRAIN] project) shows that, while all call for a multidisciplinary approach, they make virtually no mention at all of philosophy, politics, or (least of all) theology. The humanities are largely excluded from most of these so‐called multidisciplinary investigations.

But there is a more worrying and subtle aspect to this increasing emphasis on schematic analyses. It is not just the humanities that are lacking; the human‐centered elements of the sciences themselves (particularly the social and human sciences) are also being sidelined in favor of the more schematic‐friendly elements of the various hard sciences. Human‐centered, person‐facing, and qualitative aspects of, say, the psychological sciences, because they are not readily reducible to schematic and computational format, get downgraded in importance. So what we have is a narrowing of the modes of scientific inquiry itself. The approach is indeed multiscientific, but only those strands within the sciences that can be readily rendered in standardized, computational terms are favored (and it is clear from the numerous complaints within the psychological discipline that research funding is going overwhelmingly toward measurement‐based, purely quantitative, standardization‐based kinds of applications; Raven ). The result is a diminution of scope within the human sciences themselves, wherein the computational, schematic‐based elements are massively overprivileged. Again, it is the domination of this mode at the cost of more humanistic perspectives that is the problem, and the sense it gives rise to that one is not being scientific unless one thinks about persons in purely numerical terms.

A double helical approach is very much needed right now—one which restores not just the broader, richer work of the humanities, but which restores the importance of qualitative work within the human sciences themselves. The image of the double helix is very generative in helping one understand the need for, and in describing the nature of, the kind of integration of approaches that is required if one is to provide a rich picture of human interaction. A double helical approach would be one in which the two strands of investigation (computational and schematic versus qualitative and person‐centered) are understood to be complementary approaches that need each other, separate and opposing approaches that are to be kept separate (McGilchrist , 93), but wrapped intimately around each other. Both are necessary in coding for the whole being.

CONCLUSION: QUANTIFICATION IN THE SERVICE OF THE UNDERSTANDING, THE DOUBLE HELIX IN PRACTICE

In the Beth Israel Deaconess Medical Center, Boston, there is a diagnostic computer used in the emergency rooms that analyzes colossal quantities of data from across the general population—250,000 medical records gathered over 30 years. The computer uses that data to form diagnoses for patients brought into the hospital. It analyzes quantities of information from so many streams that it would be utterly impossible for a physician to look over them, reflect upon them, or even to notice such correlations at all, let alone to draw diagnoses from such quantities of diverse, population‐wide data. The analytics software, in contrast, is capable of drawing novel diagnoses on the basis of this mountain of information that no human person could analyze.

That being so, should this machine be given the final word on diagnosing the patients whose variables it computes? The computer identifies correlations, but it does not tell you why they hold, what they mean, or what their wider significance might be. In order to discern which relations are significant, the why of the relations in the system, a more human element is required. The physician's judgment is required, for only the human professional can understand the situation of the patient in its qualitative reality. And he or she does this face to face, using quintessentially human discernment.

It should go without saying that problems arise when quality is neglected in the overriding pursuit of quantified data, when quantity is quite simply privileged over quality (as it overwhelmingly is in predictive neuroscience and related domains). Indeed, the very criterion quantity over quality might as well be part of the definition of predictive neuroscience, and of the entire project of AISC as a whole. The pejorative sense of this is entirely warranted, and something that needs to be discussed.

Systems approaches rely on gathering immense quantities of data. But the point of accumulating all these data must be to enhance our understanding of the systems in question, rather than simply to produce inordinately detailed cartographical descriptions of phenomena and their components. It is possible to have too much data, and for the sheer quantity of data to confound, rather than enhance, an understanding of a given system's dynamics and core relationships. Overemphasizing omics approaches, for example, which tend merely to list the components of systems rather than attempting to understand the core relationships within them, is in Leyser's words “part of a desperate hope of learning something about systems by merely cataloging parts without having to think about the relationships” (Leyser, personal communication, October 15, 2016). Systems need more than quantification, cataloging, and listing—they must be understood.

Such concerns are particularly relevant to predictive neuroscience, and its various siblings, whose interests lie in predicting and intervening in the lives of persons and groups. Meaningful interventions will have to hinge on understanding why such correlations arise, and what they might mean. When predictive neuroscience involves making statements about dangerous individuals, or, more pressingly, predictions about persons who might become dangerous individuals given the shapes of their brains and genetic predispositions, how much more crucial is it going to be to make sure that one understands why such predictions come about, why regularities occur, and what these regularities mean, or how far they extend through time and across societies? For this a more discerning approach is required, one in which not every single factor needs to be accounted for, but rather a human sense of the important relations, to the exclusion of what is extraneous, might be of greater benefit. Too much data, too much aggregation may very well be, in the end, counterproductive.

Yet, what one finds in the Beth Israel Center case is an indication of the appropriate harmony that needs to be struck between the need for qualitative human understanding and quantitative methodologies—their need to go hand in hand. Above all, one benefits by being constantly reminded that the whole point of systems methodologies is to refine, and contribute to, our understanding, to be at the service of understanding those systems, rather than obfuscating that understanding through the compilation of endless series of descriptions of the moving parts of the system in question.

In their proper place, in the service of the understanding, quantitative and computational systems approaches hold unfathomable positive potential for medical treatment and increasing social messages of responsibility and agency. But when these systems approaches are applied to persons, particularly within the context of predictive neuroscience, all these benefits rely on understanding humans as precisely human, as agents too, rather than as a schematically representable series of cogs in a feedback system of elements whose parts—genomic, neural, demographic, developmental—are merely quality‐less mechanical components to be homogenized, quantified, statistically correlated, and computationally analyzed. Both approaches, quantitative and qualitative, computational and humanistic, are required for a broad, rich vision of the human person and human interaction to be articulated. The two approaches are complementary, and need to be understood as being so.

ACKNOWLEDGMENTS

The author would like to thank Templeton World Charity Foundation for funding this article through the International Society of Science and Religion's grant “The New Biology: Implications for Philosophy, Theology, and Education.”

Notes

  1. This emphasis on understanding relationships is important. Leyser (Amsen 2011) argues that the purely bioinformatic kinds of systems biology, and the omics approaches which simply rely upon correlating massive amounts of data, constitute approaches that very much fail to understand the relationships in biological systems. Instead, they constitute a sort of black boxing wherein the only thing that really matters is not so much understanding the system but merely being able to correlate inputs and outputs in biological systems. Such a process involves simply describing what happens to a system when one makes a given change. But one does not understand why such a change happens until one takes a more integrative look at the linkages and the key dynamics between the variables in a multiscale manner.
  2. It should be noted that, although systems biology uses computational modeling, not all such modeling is systems biology.
  3. Because individual omics data have been seen to create false negatives and false positives, and because single annotations are generally not adequate for describing the function of biological elements (Ge, Walhout, and Vidal 2003, 551), more and more integration of data sets is seen as desirable.
  4. For example, with “collective mining” and nonnegative matrix factorization–based approaches as integration methodology (Devarajan 2008, 1; a brief illustrative sketch of nonnegative matrix factorization is given following these notes), or with advanced network analysis for providing multiplexed and functionally connected biomarkers (Bebek et al. 2012, 446). For a nontechnical overview of how different scales of experimental research can be combined with the appropriate computational modeling techniques, see Meier‐Schellersheim, Fraser, and Klauschen (2009, 4).
  5. See Wurzman and Giordano (2011) and Rose (2006) for a deeply disturbing catalog of ways in which neurotechnology and neuroscience have already been, and are being, developed for purposes of national security and protection, for predicting, and doing harm to, potential aggressors against the state.
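A brief sketch of the factorization mentioned in note 4 may be helpful. What follows is a generic, minimal formulation of nonnegative matrix factorization (NMF), not a reconstruction of Devarajan's (2008) particular methodology. Given a nonnegative data matrix $V \in \mathbb{R}^{m \times n}_{\geq 0}$, say, $m$ genes measured across $n$ samples, NMF seeks low-rank nonnegative factors $W \in \mathbb{R}^{m \times k}_{\geq 0}$ and $H \in \mathbb{R}^{k \times n}_{\geq 0}$, with $k \ll \min(m, n)$, such that

$$V \approx WH, \qquad \min_{W \geq 0,\; H \geq 0} \; \lVert V - WH \rVert_F^2 .$$

Because all entries are constrained to be nonnegative, the $k$ columns of $W$ can be read as additive “parts” (e.g., metagenes), and each column of $H$ as the loadings of one sample on those parts, which is what lends the method its interpretive value when integrating omics data.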

References

Amsen, Eva. 2011. “An Interview with Ottoline Leyser.” Development 138:4815–17.

Bebek, Gurkan, Mehmet Koyuturk, Nathan Price, and Mark Chance. 2012. “Network Biology Methods: Integrating Biological Data for Translational Science.” Briefings in Bioinformatics 13:446–59.

BU [Boston University] Neuroscience. 2016. “Computational Neuroscience.” Available at www.bu.edu/neuro/graduate/computational-neuroscience

Byrne, Helen M. 2010. “Dissecting Cancer through Mathematics: From the Cell to the Animal Model.” Nature Reviews Cancer 10:221–30.

Devarajan, Karthik. 2008. “Nonnegative Matrix Factorization: An Analytical and Interpretive Tool in Computational Biology.” PLoS Computational Biology 4:e1000029.

Devlin, Hannah. 2016. “Dismay as Alzheimer's Drug Fails in Clinical Trials.” The Guardian, 23 November. Available at https://www.theguardian.com/society/2016/nov/23/dismay-as-alzheimers-drug-solanezumab-fails-in-clinical-trials

DiEullis, Diane, and Hriar Cabayan. 2013. Topics in the Neurobiology of Aggression: Implications to Deterrence. Strategic Multilayer Assessment (SMA) Periodic Publication. Available at http://nsiteam.com/neurobiology-of-aggression-implications-to-deterrence/

Fang, Ferric, and Arturo Casadevall. 2011. “Reductionistic and Holistic Science.” Infection and Immunity 79:1401–04.

Frieboes, Hermann, Louis T. Curtis, Min Wu, Kian Kani, and Parag Mallick. 2015. “Simulation of the Protein‐Shedding Kinetics of a Fully Vascularized Tumor.” Cancer Informatics 14:163–75.

Fumanelli, Laura, Marco Ajelli, Stefano Merler, Neil Ferguson, and Simon Cauchemez. 2016. “Model‐Based Comprehensive Analysis of School Closure Policies for Mitigating Influenza Epidemics and Pandemics.” PLoS Computational Biology 12:e1004681.

Gatherer, Derek. 2010. “So What Do We Really Mean when We Say that Systems Biology Is Holistic?” BMC Systems Biology 4:1–12.

Ge, Hui, Albertha Walhout, and Marc Vidal. 2003. “Integrating ‘Omic’ Information: A Bridge between Genomics and Systems Biology.” Trends in Genetics 19:551–60.

Giordano, James. 2012. “Neuroimaging in Psychiatry: Approaching the Puzzle as a Piece of the Bigger Picture(s).” American Journal of Bioethics Neuroscience 3:54–55.

Giordano, James, Anvita Kulkarni, and James Farwell. 2014. “Deliver Us from Evil? The Temptation, Realities, and Neuroethico‐Legal Issues of Employing Assessment Neurotechnologies in Public Safety Initiatives.” Theoretical Medicine and Bioethics 35:73–89.

Grebogi, Celso, and Nuala Booth. 2016. “Nonlinear Dynamics in Flow Abnormalities Related to Cardiovascular Pathology.” Available at http://www.abdn.ac.uk/systemsbiology/research/blood

Gwinn, Tim. 2010. “Systems Biology, Holism and Reductionism.” Available at http://panmere.com/?p=105

Huerta, Michael, Florence Haseltine, Yuan Liu, Gregory Downing, and Belinda Seto. 2000. “NIH Working Definition of Bioinformatics and Computational Biology.”

Huser, Vojtech, and James J. Cimino. 2012. “Precision and Negative Predictive Value of Links between ClinicalTrials.gov and PubMed.” 2012 American Medical Informatics Association Annual Symposium Proceedings 400–08. Available at https://knowledge.amia.org/amia-55142-a2012a-1.636547?qr=1

Ideker, Trey, Timothy Galitski, and Leroy Hood. 2001. “A New Approach to Decoding Life: Systems Biology.” Annual Review of Genomics and Human Genetics 2:343–72.

Institute for Systems Biology. 2016. “Proteomics.” Available at www.systemsbiology.org/research/proteomics/

Kitano, Hiroaki. 2002. “Computational Systems Biology.” Nature 420:206–10.

Kumar, Anil, Rajesh Pathak, Sanjay Gupta, Vikram Gaur, and Dinesh Pandey. 2015. “Systems Biology for Smart Crops and Agricultural Innovation: Filling the Gaps between Genotype and Phenotype for Complex Traits Linked with Robust Agricultural Productivity and Sustainability.” OMICS: A Journal of Integrative Biology 19:581–601.

Laszlo, Ervin. 1972. Introduction to Systems Philosophy: Toward a New Paradigm of Contemporary Thought. New York, NY: Gordon and Breach.

Le, Tan. 2014. “Behavior and Brain Health.” TEDxBrussels. Available at http://tedxtalks.ted.com/video/Behavior-Brain-Health-Tan-Le-a;Belgium

McGilchrist, Iain. 2009. The Master and His Emissary: The Divided Brain and the Making of the Western World. New Haven, CT: Yale University Press.

Meier‐Schellersheim, Martin, Iain Fraser, and Frederick Klauschen. 2009. “Multiscale Modeling for Biologists.” WIREs Systems Biology and Medicine 1:4–14.

Mesarovic, Mihajlo, Sree Sreenath, and Jack Keene. 2004. “Search for Organizing Principles: Understanding in Systems Biology.” Systems Biology 1:19–27.

Nature. 2016. “Systems Biology.” Available at nature.com/subjects/systems-biology

Pecina‐Slaus, Nives, and Marko Pecina. 2015. “Only One Health, and So Many Omics.” Cancer Cell International 15:1–7.

Platt, Bettina, Marco Thiel, and Jürgen Kurths. 2016. “Recognition of Early Stages of Alzheimer's Disease in EEG Recordings.” Available at www.abdn.ac.uk/systemsbiology/research/eeg

Raven, John. 2016. “Genome Common Sense?” The Psychologist 29:87.

Rose, Steven. 2006. The 21st‐Century Brain: Explaining, Mending and Manipulating the Mind. London, UK: Vintage Books.

Sapolsky, Robert. 2013. “Introduction to the Neurobiology of Aggression.” In Topics in the Neurobiology of Aggression: Implications to Deterrence, edited by D. DiEullis and H. Cabayan. Strategic Multilayer Assessment (SMA) Periodic Publication.

Thompson, Tony. 2010. “Crime Software May Help Police Predict Violent Offences.” The Guardian, 25 July. Available at http://www.theguardian.com/uk/2010/jul/25/police-software-crime-prevention

Wiseman, Harris. 2016. The Myth of the Moral Brain: The Limits of Moral Enhancement. Cambridge, MA: MIT Press.

Wurzman, Rachel, and James Giordano. 2011. “‘NEURINT’ and Neuroweapons: Neurotechnologies in National Intelligence and Defense.” Synesis: A Journal of Science, Technology, Ethics and Policy 2:T55–T71.