The first true primates appeared in the early Cenozoic Era, around 55 million years ago (mya), although some scholars peg the time earlier, around 66 mya (Larsen , 253–60). When they emerged, primates were tree‐dwelling, agile, and intelligent, with a reduced sense of smell. They began developing stereoscopic vision early and, eventually, color vision, which is unusual among mammals. Humans, who came much later, retain all these characteristics, and while they are no longer arboreal, they still have the shoulder girdle that attests to their descent from apes that swung from limb to limb. Primate adaptations include dietary flexibility, significant parental investment of time and energy, and a variety of resulting specialized socialization patterns and forms of social organization. And, some have culture.

Primate sociality is widely seen as improving survival through the group's avoidance of predators and by providing readier access to mates for reproduction. These advantages probably began to accrue in the evolution of the earliest monkeys. In the past 55 million years, or more, primates have developed many different patterns of communication, foraging, and feeding, and some have learned to hunt. Primate social patterns involve relatively frequent and intense social interaction and high affectivity, and the wide distribution of these traits in the biological order testifies to their antiquity. It is important to remember that when we speak of sociality, we are referring to observable behavior: numbers and types of individual primates that interact in specific ways that can be described, charted, and compared. We also infer something about their emotional states from indications of demeanor and gesture, but we typically do not infer what they are thinking. On the other hand, the study of primate cognition is an area of enormous growth.

While sociality forms an important foundation from which to investigate the emergence of a capacity like morality in later hominins, it should be remembered that a great deal happened between the evolution of monkeys and hominins like us. The great apes and lesser apes (together, hominoids) emerged about 28 mya, and today's living great apes include the chimpanzees (Pan troglodytes and Pan paniscus), the gorilla, and the orangutan. The lesser apes include the gibbon. The capacity for culture probably flowered sometime after the emergence of the great apes, but to date, we have seen it primarily in the most cognitively sophisticated of the great apes, the chimpanzees, and in humans, both living and extinct forms. Because behavior patterns that could be called “cultural” (“sweet potato washing”) have been observed in Japanese macaques, it is possible that some (perhaps not all) biological foundations of a capacity for culture existed before the great apes. Or, the macaques may have developed some facet of culture in an example of parallel evolution. In general, monkeys do not evidence culture, although they are socially organized in a wide variety of ways.

It is important to distinguish sociality from cultural capacity, although that can be a difficult task, and many writers of late have taken to using the term “sociocultural” to cover them both. However, society and culture are very different, and we assume that the biological bases for them are very different, as well, down to the neurological components and genomic segments. As we have seen, sociality emerged in the primates 55 mya, but culture in primates emerged only some time after the great apes, or about seven mya.

“Culture” refers to specific patterns of belief (mentation) that, on the surface, may have no relation to improved survivability. Culturally based practices can be quite arbitrary and nonutilitarian, such as the chimpanzees who decorate their ears with blades of grass (Main ), or the hippies of the 1960s who wore bell‐bottom trousers. Furthermore, cultural patterns are just that: entire and inclusive systems of arrangements that can permeate every aspect of mental and material life, from the bifurcated divisions of dwellings in a village to parallel patterns of tattoos on women's thighs—as Lévi‐Strauss demonstrated in Tristes Tropiques (1973). Cultural patterns are displayed internally in the “way one thinks” and externally on cultural artifacts, and they vary substantially from social group to social group. While individual cultural elements may not enhance survivability, the sharing of an inclusive pattern of beliefs solidifies a sense of group belonging far more steadfastly than the emotions expressed in the social interaction of monkeys or apes. In a very real sense, people with the same culture “see the world” in the same way. It is difficult to imagine that anything could be more pervasive than primate sociality, but primate culture is. Culture evolved and flowered much later than sociality, yet the effects of human culture permeate far more widely and deeply than those of chimpanzee culture.

Irrespective of its level of sophistication, culture has been demonstrated most clearly in chimpanzee behavior and in the material archaeological finds of Homo erectus, Homo heidelbergensis, Homo sapiens idaltu, archaic Eurasian forms of Homo (including Homo neanderthalensis and the Denisovans), and in later forms of Homo sapiens migrating out of Africa 50,000–60,000 years ago, who became us, Homo sapiens sapiens. A common ancestor of the higher primates must have had some rudimentary, biologically based capacity for culture.

Our main point is that primate sociality may well have emerged many millions of years before the capacity for culture evolved. Most living primates are far more “social” than they are “cultural,” with the exception of humans, for whom culture has grown and dominated social life to such an extent that it can now act reflexively and change our biology (Colagè ; Rappaport and Corbally ). To those who use the term “sociocultural,” this can be a shock. Primates were intensely social before a capacity for culture was cemented sometime after the great apes evolved, shortly (in evolutionary terms) before the chimpanzees split off from our line perhaps 6–7 mya. That makes the differential between sociality and culture as much as 48 million years! We were social long before we were cultural beings.

UTILITY AND NONUTILITY OF SOCIALITY AS A FOUNDATION FOR MORALITY

We raise the wide gulf between the emergence of society and culture to emphasize that analyzing primate sociality as a way of looking for precursors of moral thinking and “knowing good” may not be the most logical course. Our view is that morality is a capacity that arose around the time of Homo erectus, about 1 mya. We suggest that morality is younger still than cultural capacity, and much younger than ancient primate sociality. One of morality's most salient features—the development of reading, and therefore the potentiation of a reflective, moral person (Colagè )—arose only 6,000 years ago. Next to sociality at 55 mya (or more), that is but the blink of an eye.

To summarize, primate sociality emerged around 55 mya, perhaps earlier; primate cultural capacity was present from around 6–7 mya; and hominin morality emerged, we have proposed, at the time Homo erectus learned to control fire and the social context of the “human hearth” emerged, around 1 mya (Rappaport and Corbally ). Searching for a primary foundation of morality and “knowing good” in sociality may not therefore be the most obvious and fruitful investigative path. If models are necessary, then models based on culture, and on both fire use and stone tool construction, would be far preferable when searching for the evolutionary origins of morality. Indeed, we place both the control of fire, and hand axe planning, construction, and use, in our cognitively based morality model (Rappaport and Corbally ).

The questions now are: Can the sociality of the great apes help us understand the emergence of the human capacity for morality more than the monkeys do, or at all? Does ape sociality lay a foundation for “knowing good” in later hominins? This is the starting point for much of today's work in the evolutionary study of morality's emergence. Indicators of the precursors to human moral thinking and behavior have been widely sought. Similarities have ostensibly been found in both the social behavior and the neuroscience of the great apes and their smarter cousins, extinct and living members of the human lineage. Once modern humans have achieved full adult cognition, they usually have some capacity for “knowing good,” although it has variable expression. Do other primates have this capacity? Probably most of them do not. Do culture‐bearers (after around 6–7 mya) have a capacity for morality and “knowing good”? Probably most apes did not and do not now because morality emerged relatively recently along the human lineage. We do not know for certain yet, but if modern apes represent earlier apes, then, we conclude, probably not. Do any culture‐bearers other than living humans “know good”? That remains an open question—ethological (as possibly in the study of dolphins), sociological, philosophical, and theological, but primarily, archaeological. Eventually, the sequencing and comparison of the genomes of living and extinct early humans will reveal which neurological components of morality emerged, and when.

At this point we should mention the enormous problem of intuiting the behavior and cognition of both other living primate species and extinct species who leave palaeo‐anthropological evidence for us to analyze. Indeed, humans have this particular problem with respect to their well‐loved pets, Canis lupus familiaris, which have been purposefully bred with specific traits for perhaps a million years, after Homo erectus learned to control fire and began its trek out of Africa to populate Eurasia. Our model suggests that the very long treks involved in Homo erectus’ colonization of the Old World outside of Africa required the control of fire, use of well‐made hand axes, and at least the partial domestication of the dog. Since that time, canines have been bred to react in ways that satisfy their masters, including sad (nonaggressive) looks, presumed emotions, wagging tails, and attentiveness to human emotions. We see the absence of these qualities in Canis lupus, the wolf, which shows little inclination to please human handlers.

This analogy is especially important when we begin to infer feelings and cognitive states of animals as close to humans as the great apes. No human can watch a video of a grieving chimpanzee and not be moved. Still, we must ask ourselves what we are seeing and what we are reading into their behavior. In modern cognitive science, laboratory tests provide a wealth of information on human cognition, but among our close primate relatives, cognitive testing is much more difficult and requires creative experimental designs.

NEURO‐COGNITIVE INDICATORS FOR MORALITY AND KNOWING GOOD

The assumption in our approach is that morality relies on some preexisting human biology. However, given the vast amount of genomic material available for adaptation on the human lineage, and the line's inherent plasticity, we are not, at this time, convinced that a specific great‐ape precursor to morality existed in the past or that it exists today. We are not convinced by the presumed “indicators” of morality in living great apes. There was plenty of time and an abundance of adaptive biology on which natural selection could work, so we feel that the proposal that something crucial for morality's emergence must have existed before the human lineage needs very careful re‐examination and much more research.

We propose that morality is a human lineage specific (HLS) trait that should find a place in biologists Ajit Varki and Tasha Altheide's “Table 1. Some phenotypic traits of humans for comparison with those of great apes” (2005, 1747). On that table, there are categories for physical, physiological, and developmental characteristics, as well as specific examples of human behavior, cognitive capacity, communication, social organization, and culture. Terms that might indicate morality, moral thinking, or moral culture find no place on this list. The closest trait to a moral feature is “social conventions” under the category of social organization. From a certain perspective, these two biologists’ list of human phenotypic traits may reflect a prevailing assumption that morality is categorically an aspect of sociality. However, if it is, then why is it not present in the social categories on their list? We feel morality is indeed implemented using human social organization and social mores, but that morality exists at a higher and more complex level that we have called “suprasocial.” If morality were a fundamental part of ape sociality, then why do we not see its probable emergence until a million and a half years after the origination of the genus Homo?

We hypothesize that for morality to emerge, early humans required a certain nurturant, emotionally and socially intensive, but adjudicative group cultural context that we have termed the “human hearth” (cf. Rappaport and Corbally ). We have defined the three components of morality that we are looking for, in terms of human, biologically based, neurocognitive capacities: (1) an ability to work mentally along a timeline; (2) an emergent, cognitive explanation‐maker, and so, an ability to generate explanations about how things work; and (3) an arbiter or evaluator, to sift mentally among options and make decisions. We have potentially located all of these cognitive requirements in the biology and culture of an early member of our genus, Homo erectus. (1) A timeline is logically deduced from planned, multi‐stage construction of Homo erectus hand axes. (2) An explanation‐maker could be found in an emergent left hemisphere interpreter (LHI), which churns out explanations in modern humans (Gazzaniga ). An early form of the LHI is proposed for Homo erectus (Rappaport and Corbally ), who had a much larger brain and a higher neocortex ratio than the australopithecines who came before (Aiello and Dunbar ), as well as a fundamentally different ecological niche (Coolidge and Wynn , 116–20). (3) An evaluator and decision maker for both modern humans and, we propose, Homo erectus, implicates the ventromedial prefrontal cortex (PFC), amygdala, and anterior cingulate cortex, all of which are involved together in mediating some affective executive functions and are thought to play a role in decision making and evaluation (Gazzaniga, Ivry, and Mangun ). (A schematic summary of these three components and the evidence proposed for each appears after the quotation below.) Note that decision making has a social and probably an emotional component, too, as we have hypothesized elsewhere (Rappaport and Corbally ). A neurological arbiter, or evaluating component, as a basis for morality finds some support even in early research results in genomics, as we will describe later, as part of “‘Knowing Good’ from the Perspective of New Sciences,” below. It is noteworthy that neuroscientist Michael Gazzaniga holds a holistic view of the human brain that appreciates its complexity and the discrete quality of its many components (especially “neural nets”) and capacities. He finds this entirely in line with the principles of biological evolution when he writes the following:

When we realize that specialized brain circuits arose through natural selection we understand that the brain is not a unified neural net that supports a general problem‐solving device. If we accept this, we can concentrate on the possibility that smaller, more manageable circuits produce awareness of a species’ capacities… holding fast to the notion of a unified neural net forces us to try to understand human conscious experience by figuring out the interactions of billions of neurons. That task is hopeless. ()
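For readers who find a compact summary helpful, the three cognitive requirements listed above (a timeline, an explanation‐maker, and an arbiter), together with the evidence we cite for each in Homo erectus, can be restated schematically. The sketch below is purely an editorial illustration of the model's structure; the Python names are hypothetical and do not come from any of the works cited.

```python
# Illustrative sketch only: the three proposed neurocognitive components of
# morality and the evidence cited for each in Homo erectus, restated as data.
MORALITY_COMPONENTS = {
    "timeline": {
        "description": "an ability to work mentally along a timeline",
        "proposed_evidence": "planned, multi-stage construction of hand axes",
    },
    "explanation_maker": {
        "description": "an emergent, cognitive explanation-maker",
        "proposed_evidence": "an early form of the left hemisphere interpreter (LHI)",
    },
    "arbiter": {
        "description": "an evaluator that sifts mentally among options and decides",
        "proposed_evidence": "ventromedial PFC, amygdala, and anterior cingulate "
                             "cortex mediating affective executive functions",
    },
}

if __name__ == "__main__":
    # Print the schematic so the three-part structure can be seen at a glance.
    for name, entry in MORALITY_COMPONENTS.items():
        print(f"{name}: {entry['description']}")
        print(f"  proposed evidence: {entry['proposed_evidence']}")
```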

Before we suggest how Homo erectus, Homo heidelbergensis, and archaic Homo sapiens came to have morality and an ability to “know good,” and why we think they all did, let us look briefly at less satisfying models that reach far back in time to sociality analogues in search of moral capacity. We will first examine theories based on the social brain network, and then go on to review theories of human morality derived from studies on humans who cannot “know good” because of brain dysfunction. In the final sections, we will propose a model for morality and describe its functioning among early and modern humans. This includes a short narrative of “morality in action,” 900,000 years ago, in East Africa.

THE SOCIAL BRAIN NETWORK IN MODELS OF MORALITY AND KNOWING GOOD

The social brain network (SBN) is a well‐verified structure, composed of clearly identified brain components, including: (1) mirror neurons in the inferior frontal cortex, (2) motor cortex, (3) insula, (4) superior temporal sulcus, (5) amygdala, (6) fusiform face area in the ventral temporal lobe, (7) anterior cingulate cortex, and (8) prefrontal cortex (Grossman and Johnson ; Shoemaker ). Studies of brain lesions involving these components lead to the conclusion that the SBN is somehow involved in the feelings of empathy, compassion, and love, as well as altruism and a sense of fairness with others, because lesions create an absence of these feelings and their resulting behaviors (Shoemaker ; ). This research is groundbreaking in terms of seating empathy, nurturance, and the recognition of faces in well‐defined brain components. The question remains whether it demonstrates morality, either its presence or absence.

We agree that the human tendency to have these feelings is a laudable accomplishment for the human species, and, it is inferred, for other primate species, as well, to varying degrees. It reflects a long evolutionary history of 55 million years of social living in troops and finally in human bands and skyscrapers. We agree that the SBN is intimately involved in human emotional interaction, and that positive emotions such as those listed are central to a great deal of what we call “social behavior.” Indeed, if we are searching for morality, then Christianity's “golden rule” would find a fine, if partial, basis in this foundation of feelings and dispositions. Taking an additional step into the realm of what we would call “morality,” fairness, especially, would find a solid basis in the Old Testament and the ancient texts of other religions. Does “fairness” find a foundation in the SBN? Maybe, and if so, only partly, since the concept of fairness appears to us to signal something more. It is not that the feelings implicated in the operation of the SBN have nothing to do with the way we treat others. They simply may not be enough to add up to morality, or even its foundation, which finds better underpinnings in cognitive science, genomic science, and information science.

For a species and subspecies that clearly has moral capacity, such as Homo sapiens sapiens, the investigation of the SBN's functions is important, especially in documenting the absence of social feelings and dispositions in humans with brain lesions in its components. This work continues to be a fine contribution to neuroscience and medicine. There may be treatments someday that can ameliorate the absence of a fully functioning SBN, although the application of those treatments takes us further into the realm of medical ethics and morality (cf. Tancredi ).

We wonder if the emotions and behaviors tested in developing the SBN model are sufficient to hold up against other species, living or extinct. Are the qualities described sufficiently precise to be tested in other species, such that positive results would imply that a species has moral capacity? This appears to us to be going too far. Important aspects of morality are missing from the social brain network and its associated emotions and dispositions. Together, expressing sympathy, exhibiting altruism (whatever that is), recognizing faces, and providing nurturance do not add up to morality. Shoemaker (), in adding one more feature to this list, may indeed cross over into features that we propose are implied by moral capacity. That feature is “envisioning outcomes of possible behaviors.” That one feature implies a timeline, without which morality makes no sense, now or in the past. Morality sifts information on past behaviors, evaluates them, projects possible consequences into the future, and makes decisions. This is close to, but not exactly, what Shoemaker describes, as the evaluation and adjudicative components are missing.

CAN WE KNOW ABOUT GOOD FROM NARCISSISTS, MACHIAVELLIANS, AND PSYCHOPATHS?

Shoemaker's work on the social brain network lies partially within a large body of literature on humans who cannot seem to “know good”—to think morally—or who choose not to do so. Our fascination with these unfortunate individuals is understandable. Like voyeurs, we view ourselves but with “something missing,” and we count ourselves fortunate. Determining the quality of that missing “something” has given rise to a fascinating branch of neuroscience focused on human deficits, rather than human gifts and capacities. The medical treatment of deficient individuals raises questions about free will, social conformity, and lifestyle options for people with no moral compass. They do exist.

Gazzaniga, a neuroscientist‐turned‐philosopher, has for decades researched “split‐brain” individuals whose left and right brain hemispheres cannot communicate, usually because the corpus callosum connecting them has been surgically severed to treat severe epilepsy (1999; 2006). He also raises fascinating questions about free will and brain function, for example, “When we become consciously aware of making a decision, the brain has already made it happen. This raises the question, Are we out of the loop?” Gazzaniga answers with insightful comments about humans as “responsible agents,” suggesting that no matter how much brain science we learn, people are still morally responsible for the actions they take. They retain decision‐making authority over their own lives. This is welcome reassurance from a scientist who knows so much about the workings of the human mind and brain. Gazzaniga's research has also given rise to knowledge of a left hemisphere interpreter, which we have found so essential in our model of morality's evolution. Other research by him points toward an arbiter or decision‐making mechanism in a discrete set of brain components (Gazzaniga et al. ). That satisfies yet another cognitive requirement for our morality model.

Other researchers have begun from a different perspective, analyzing patients who display behavior that initially appears to be defective in some aspect of moral thinking. The logic behind this large body of work appears to be to identify what makes humans morally defective, so they can be taught or assisted, or perhaps forced, to become morally whole. For example, in Laurence Tancredi's Hardwired Behavior: What Neuroscience Reveals about Morality (), the author considers a variety of serial killers and other violent offenders, pedophiles, sex addicts, gluttons, financial frauds, gamblers, and the unfaithful. Their moral failure is assumed directly from their behavior, and Tancredi (a psychiatrist‐lawyer) ascribes their moral lapses to everything from nutrition to drugs, genetic abnormalities, traumatic brain injuries, hormones, and neurotransmitters. In summary, much of what lies at the foundation of his clients’ bad behavior is brain‐biology‐gone‐bad. However fascinating these cases may be, it is never quite clear how all these details can be used to help produce a person who knows moral “good”—although the implication remains that this material can do so.

Other students of poor moral judgment are more finely attuned to the ultimate utility of their work, especially the investigators of deficiencies in empathy as a supposed aspect of morality. Indeed, some authors claim that lack of empathy is the “most telling narcissistic trait” (Kreger ), but does this make empathy a foundational feature of morality? Narcissism appears throughout this vast literature on the roots of moral failure, with the presumed goal of fixing it, medicating it, or training the unempathetic. The rise in autism diagnoses has been especially powerful in calling attention to the need to train some children to be empathetic. The movement to aid the sufferers of child abuse has led to efforts to help parents whose spouses abuse others, especially emotionally. Advocates argue that the lives and welfare of children depend on it, and this is certainly true. However, is someone who is unempathetic necessarily immoral? One could argue that this is somehow true, and it seems right, but does morality fundamentally depend upon empathy, even if they often go together? We think not. Nevertheless, the implications of both lack of empathy and moral deficiency run deep in modern as well as ancient lives along the human lineage.

WHAT IS HUMAN PHENOTYPIC MORALITY, AND HOW DOES IT ALLOW US TO KNOW GOOD?

If studies of the SBN and morally deficient individual cases do not reveal human phenotypic morality's origins or full scope—and we do not believe they do—then what exactly are we looking for? One can glimpse a partial answer to this question in a table in Shoemaker's recent formulation (2017), which describes morality as a set of factors and then asks if the SBN allows chimpanzees to do the following: (1) express empathy, (2) express altruism, (3) recognize faces, (4) provide nurturance, and (5) envision outcomes of possible behaviors.

We agree that chimpanzees can do all of these, except for the last. We are not asking whether chimpanzees have been trained, which might produce behavior that suggests the envisioning of consequences. We suggest that the higher apes do not have a sufficient grip on a timeline and their own place on it to envision the outcomes of their own or others’ actions. We understand that chimpanzees may remember “not to do” a particular behavior because of conditioning, but we do not understand from field studies that they step back and envision what might happen to themselves and others, and what the consequences or the implications would be for their social group. This is a step that chimpanzees have not taken. Harvard philosopher Christine Korsgaard writes,

Morality is not just a set of obstructions to the pursuit of our interests. Moral standards define ways of relating to people that most of us, most of the time, find natural and welcome.…The idea of self‐interest seems simply out of place when thinking about nonhuman action.…Nonhuman animals are not self‐interested. It seems more likely that they…act on the instinct or desire or emotion that comes uppermost…that is a different matter than calculating what is in your best interests and being motivated by a conception of your long‐term good. (2006, 101–03)

Let us now take a look at the features of human phenotypic morality that are most apparent in our own species, modern Homo sapiens, and most consistent with our model of a social and cultural context we call “the human hearth,” which developed about 1 mya (possibly earlier), when Homo erectus began to use fire (Rappaport and Corbally , Table 1). It is important to be able to imagine the usefulness of the following characteristics to a species of early human surviving in a niche of scavenging and gathering, where naturally occurring C4 grasses were prone to catching fire (cf. Attwell, Kovarovic, and Kendal ). Conflicts in norms and values must have been constant in bands of 100–110 fully bipedal Homo erectus individuals, but sometimes the consequences of a conflict were sufficiently important to call upon an emergent capacity for morality. That capacity requires the following neurocognitive features: (1) a mental step both back and up; (2) an arbitration mechanism that operates along a timeline; (3) an evaluation using a valence from good to bad; (4) a regretfully dispassionate reasoning; (5) a tentativeness in a mental balancing act; (6) a sad rejection of “wantonness”; (7) a capacity for empathy with someone receiving moral judgment; (8) the experience of a burden; (9) resolution on the part of the group; and (10) hope and faith in the future on the part of the group.
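Because we describe this model as something that can be investigated and tested (see “Human Phenotypic Morality: A Model for Investigation,” below), it may help to see the ten features expressed as a simple coding checklist that an observer could apply to a single episode, whether an ethnographic case or a reconstructed scenario like the story of Bo that follows. The sketch below is a hypothetical editorial illustration in Python, not an instrument drawn from our cited sources.

```python
# Hypothetical sketch: the ten proposed features of morality as a checklist an
# observer might use to code one episode of moral decision making.
TEN_FEATURES = [
    "a mental step both back and up",
    "an arbitration mechanism that operates along a timeline",
    "an evaluation using a valence from good to bad",
    "a regretfully dispassionate reasoning",
    "a tentativeness in a mental balancing act",
    "a sad rejection of wantonness",
    "a capacity for empathy with someone receiving moral judgment",
    "the experience of a burden",
    "resolution on the part of the group",
    "hope and faith in the future on the part of the group",
]

def code_episode(observations: dict) -> dict:
    """Return 'present', 'absent', or 'unobserved' for each of the ten features.

    `observations` maps a feature string to True (observed) or False (judged
    absent); features not mentioned are recorded as unobserved.
    """
    status = {True: "present", False: "absent"}
    return {f: status.get(observations.get(f), "unobserved") for f in TEN_FEATURES}

# Example: a partial, purely illustrative coding of a single episode.
coded = code_episode({
    "a mental step both back and up": True,
    "resolution on the part of the group": True,
})
for feature, value in coded.items():
    print(f"{feature}: {value}")
```

On this reading, comparative questions (for example, about chimpanzees) become questions about which of these items an observer could ever justifiably score as present.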

Instead of beginning with a sociality that we have shared with other primates for at least 55 million years, we began with the cognitive requirements for today's humans to do “moral thinking,” even if it occurs in private. Then, we projected those cognitive requirements backward and asked ourselves when and how they most likely emerged, and in what species. It is important to remember that even when we are arbitrating a moral question only with ourselves, we must evidence a certain minimum set of cognitive criteria. The tradition of thought experiments and solitary moral questioning has been well established among philosophers since ancient times. We concluded that primate sociality provided a good, solid foundation, and it remains important because morality is often implemented in a social context. However, sociality was not enough, and the ten essential features given above encompass the application of moral thinking in social context for both ancient and modern members of the genus Homo.

Before we describe our morality model for scientific investigation more fully, we introduce morality in its first context, at its roots, in a story that helps us imagine moral conflict in a band of Homo erectus. Readers will see our features of morality in action. The elements we have identified often stand in stark contrast to the output of studies on the SBN, investigations of psychopaths and Machiavellians, and research on nonhuman primates. After this “action example,” we will provide definitions and examples for our model. Readers will glimpse the features of morality in a prehistoric context that is strikingly similar to the modern one.

MORALITY IN ACTION, 900,000 YEARS AGO IN EAST AFRICA: THE STORY OF “BO”

Bo was hungry. His mother complained that he was always hungry these days, and she joked that he grew a little bit every day. He was already taller than his father, one of the band elders. His mother had another child and was busy feeding her, but his Nana, his mother's mother, slipped him morsels of meat and nuts when no one was looking.

With a stride that grew smoother as he grew taller, Bo rounded a copse of trees and headed along a worn path through the tall, dry, brown grass. He decided that morning to visit a spot where he knew some wild pigs lived. He hoped to be able to snatch a piglet, if he was lucky, or maybe he would run across the carcass of a larger pig that had been killed by one of the cats. This was dangerous business, as the cat might still be around, and he ought to take on such a thing with one of his age‐mates—Aba, or even Seer.

He stopped to think a moment, and grabbed a rock, turning it around in his smooth brown hand. He had only two summers of practice in making the knives that the adult men used to slice apart the remains of kills that other animals made. He might need a sharp cutter if he was lucky today, but he knew he didn't yet have the skill to make one. Still, if he came upon some meat and didn't have some way to butcher it, the band would never believe him. They would laugh at his story and tell him it wasn't true, it was just his wishing it to be true.

He tossed the rock aside and picked up another one that was already cracked. It occurred to him to strike it with the discarded rock, so he retrieved it, and struck the second rock—hard, with the full swing of his right arm. Unbelievably, it cracked again, leaving him with a sharp but jagged edge. It was unfinished, but it would have to do.

The sun grew higher in the sky as he neared a group of pigs foraging noisily off to his right in a stand of tall bushes where a small stream ran. Bo felt his heart begin to beat faster, and he slowed his approach. He saw no buzzards and assumed there was nothing for him to scavenge. He would have to be patient, and then very quick.

Bo took up a position downwind from the group of foraging pigs and waited. Silent, still, the sharp‐edged rock in his right hand, he waited, and finally, his patience paid off. A tiny piglet strayed from the group. Bo thought for a moment that the piglet would follow the smell of the group, like it should, but for some reason, it didn't. Stupid pig!

He pounced! He grabbed the piglet tightly and hit him hard in the head, whereupon the piglet fell limp. Bo felt no joy, only satisfaction that he would be roasting the piglet and telling the story of his accomplishment to the others—if the fire keepers had succeeded in keeping the flame from last week's lightning strike alive. Otherwise, he'd divide the meat between the families and give a bit to each one, to eat raw.

Bo walked back toward camp, the limp piglet in his grasp.

His friend Aba met him with a surprised expression. “You hunted alone?”

Bo nodded, the concern beginning to show on his face. He raised the half‐shaped stone tool as weak support for his actions. He had been ready, but only half‐ready.

Two elders followed Aba and confronted Bo. “You hunted alone?” one asked.

“I was hungry,” Bo replied.

“You should not hunt alone, away from the camp. We will need you in the future. You should not take a risk like that.”

Bo nodded, the enormity of his actions beginning to dawn on him. He might have wanted to satisfy an immediate hunger, but the group needed him more to feed them all. He understood that. It was an understanding that had begun to weigh heavily upon him in the past year.

“You could have died,” one of the elders said roughly.

Bo nodded again, more deeply than the first time.

The two elders withdrew to the side and conferred between themselves. Off to the right, Bo saw his Nana approach, and she joined them.

“Why didn't you call me to go with you?” Aba implored.

“I—” Bo glanced toward the elders, and saw his Nana speaking. Then she fell silent, and nodded.

The elders approached Bo, and the one who had not spoken turned to him. “You will forfeit the piglet,” he said with regret. “The portions will go to the other campfires. In the future, do not hunt alone. We cannot afford to lose you.”

Bo slumped, but he nodded. He knew he had made a mistake, one that he wouldn't make again. Next time, he would take Aba with him, and tell the elders where they were going. He still knew where the pigs could be found, and next time, he would take a bigger one.

HUMAN PHENOTYPIC MORALITY: A MODEL FOR INVESTIGATION

Now that we have seen one instance of how morality might have operated in a prehistoric band of Homo erectus, we bring our discussion up to date, placing it squarely in the modern world, while at the same time identifying analogous problems in the Homo erectus example. In the following ten essential features of morality, there is surely a great deal that can be investigated.

A mental step both back and up. The first essential feature of morality is an attitude or a stance, not so much a predisposition as a mental vantage point. Granted, it is an imaginary vantage point, but one that serves quite well to accomplish some of the tasks of human phenotypic morality. We call this “a mental step both back and up.” It could be called many things and has been, throughout recorded history. It is important to list this feature first, because human morality cannot usually be applied in the fog of daily interaction. It requires a distance that is usually symbolic with respect to someone receiving moral judgment, even ourselves. An elder, sage, or expert removes him‐ or herself from daily action and assumes a higher perspective. The judge does this. The priest does this. Parents, at one of those critical family discussions at a cleared dinner table, do this. Distance is created between the to‐and‐fro of rapid, normal, and often unconscious decision making, and the moral context that requires a more deliberate perspective. While children rough‐housing on a playground can decide, “That wasn't fair!” the moral context requires a quieter, more circumspect, and often symbolically formal assessment. The decision‐making context must slow down, become self‐conscious, and remove itself (often literally) from daily modes of decision making.

In Bo's story, we saw the elders step to the side to discuss Bo's possible infraction. We also saw an older woman, Bo's Nana (grandmother), join the two men. Our model of the first morality calls for the participation of multiple, elder individuals, both males and females, following closely the theories of menopause as an adaptation (Williams ) and the grandmother hypothesis (O'Connell, Hawkes, and Blurton Jones , ; Opie and Power ), which provide important roles for elder women. At the evolutionary stage of Homo erectus, females were almost as large as adult males (unlike the australopithecines, where the sex difference in size was more extreme). An interesting feature of the story is that the young men (Bo and his friend Aba) do not rush to the conferring elders to state their case. They hold back. They see the two males and the elder female conferring. They respect the distance that the elders have established to confer about Bo's fate, and wait for a decision. This emphasizes the restraint often required in moral decision making, a quality highlighted in other features, below.

An arbitration mechanism that operates along a timeline. The second essential feature of morality that we identify is a cognitive requirement that calls for moral application to be along a timeline. This timeline is the same as the cosmic timeline, and it is anchored in the multiple cognitive time‐keeping mechanisms now identified in modern humans (Madl et al. ). It is useful to think through the importance of time to human phenotypic morality. Time is one of the dimensions along which evaluation of human behavior occurs (the other being a valence from good to bad). Elders, sages, or experts look back at previous, similar instances and draw conclusions that affect current decision making and the future consequences of an application of morality. They consider the history of the person receiving moral judgment, and the history of others who have behaved similarly, or who might contemplate behaving similarly. In doing so, they cannot avoid confirming old, or developing new, aspects of a moral code that stands, as they themselves do when they engage in moral decision making, above and apart from daily interaction. The temporal context, like the vantage point, is very broad, almost always culture‐wide, and in some cases humanity‐wide (as in trials for war crimes). It tends not to apply to other animals, although separate codes can be developed to apply to them.

In Bo's story, the elders took a lengthy view of his place in the band, knowing that he was a finely developing youth who would be useful to many others in his ability to scavenge food. Bo, himself, had been realizing his own importance for some time, but not in a prideful way. It was a way that signaled gravity. The story reads, “It was an understanding that had begun to weigh heavily upon him in the past year.” This recognition was no doubt the basis for his acceptance of the elders’ decision. Other youth, who were not as mature as Bo, might have challenged the elders’ decision, but Bo did not and neither did his friend Aba. To use Korsgaard's terminology (), Bo was constructing a self‐identity that included the consequences that morality incorporates. That self‐consciousness places Bo firmly on a timeline. Although Bo could not yet form a good hand axe, he knew he would be able to do so in the future. He knew he had been, as the story states, “only half‐ready”: a youth with a jagged‐edged hand axe. He was utterly self‐aware of his infraction and its importance for the future. The jagged‐edged hand axe was a symbol of his lack of readiness, and he knew it.

An evaluation using a valence from good to bad. The third essential feature of morality is especially important for a species that is still evolving and gaining new capacities, like reading 6,000 years ago, or genetic protection against diabetes, which is still spreading in post‐agricultural human populations (Cochran and Harpending ). While the biological components of morality are quickly being identified, from brain parts to patterns in brain scans, morality is anything but a set of black‐and‐white injunctions, or a clearly defined cognitive and/or intellectual and/or emotionally based process. Indeed, we propose that morality, as a human capacity, is not completely “hard‐wired” yet, just as reading is not yet hard‐wired (Rappaport and Corbally ). If the human brain is not presented with the proper stimulus, reading is not learned. Some of the brain components involved in reading return to their more ancient functions, like reading faces (Colagè ). Similarly, some humans develop a moral sense very late, or seemingly not at all. Others develop it early and are called “precocious.”

The feature we call “An evaluation using a valence from good to bad” is one whose palaeoanthropological roots few authors have questioned. The assumption is that if morality does anything, it provides a way for knowing “good” human behavior from “bad.” Still, we ask: Where did this continuum first come from? We propose that, if morality arose in Homo erectus, the “good” end of the continuum had to have been survival, and those things that supported life itself and the continuation of the group.

In the story of Bo, the elders appear to be applying a criterion based on actions needed to perpetuate the group. Bo might be hungry today, but years from now he will help to feed many other members of the band, as well as himself. The elders indicate that they “cannot afford to lose” him. Here, the heavy hand of morality comes down on a youth's own freedom. Indeed, they constrain his freedom of action, while he, at the end, finds a way to get what he wants (a bigger piglet) within the parameters set by the elders. Very clever!

Neuroscientist‐turned‐philosopher Sam Harris proposes that “well‐being” is a broadly tested concept that provides an avenue toward measurement of the relative value of moral systems. Like us, Harris sees a fundamental sameness underlying all moral systems, even when some evaluations of “good” and “bad” seem horrific. He proposes that “well‐being” gives an empirical handle on what is “moral good” and what is “moral bad,” and that because it is clearly measurable, moral systems can be objectively compared (2010; 2011).

On the other hand, we suggest that these comparisons also fall back, as they did with Homo erectus, on common sense. If a moral decision furthers the gifts of life and well‐being, supports the continuation of the group, and helps to give all participants hope for the future, then it has served its purpose. The devil (not an arbitrary term here) is in the details, and the differences between cultures are found in those details, too. When issues of the “lesser of two evils” or “the greater of two goods” come into play, we see the flowering of differences by culture. Whether a human action is morally good or bad in a global sense seems to fall back on “you know it when you see it” (and when you feel it) criteria.

We propose that the solution to this conundrum falls somewhere between objective, empirical testing and “gut feelings.” Morality is variable. Moral capacity is expressed differently through time and around the globe. It does not and cannot conform perfectly to each and every instance of humans getting into trouble. They are simply too adept and creative at finding ways to err. That may be why morality is not fully hard‐wired, and may never be. To tie it down to specifics would render humans as little more than instinct‐driven birds.

Besides, we propose that this variability and plasticity constitute one of moral capacity's most salient characteristics, one reflected in so many of our ten features of morality. Knowing “right” from “wrong” emerges by filtering human behavior through the lens of a decision‐making capacity that works properly even if there are no concrete answers to right‐and‐wrong questions. Its emergence on the human lineage is a marvel of biological evolution.

A regretfully dispassionate reasoning. The fourth essential feature of morality involves the dilemma that emerges when moral reasoning is, as it should be, as dispassionate as possible and therefore fair, objective, and “at arm's length,” while, at the same time, the application of morality can evoke sadness and regret. We see this sometimes in the attitude of a judge, who regrets his or her need to rule on a case at all, and who may regret the transgressor's actions, finding them “sad” from a natural or global perspective. An elder's, an expert's, or a parent's application of morality can be a responsibility that an individual regrets having, especially if it involves punishment of someone loved. If human suffering on all sides cannot be avoided, then the application of morality can be filled with regret for everyone, not only on the part of the individual applying morality, but on the part of recipients and onlookers, too. Moral judgments sometimes evoke tears in onlookers. Ruling on moral lapses is rarely an enjoyable task, from any perspective, although it may be necessary on the grounds of social justice, equity, humaneness, and a need for retribution, redress, or recompense. Being “dispassionate” at times of moral decisions, when one may feel compelled to be involved, argumentative, and volatile, is a restraint felt by all who have experienced moral decision making.

Bo's Nana (his grandmother) had the role of one of three elders in the band, and while she may have spoken for him because he was her grandson, she knew (and he knew) that the decision of the elders as a group was paramount because the decision represented the band. This conflict could well have caused regret in her and Bo, too, not to mention his mother. Parents can be the ones most often unable to apply moral rules, because of the pain they feel they will cause, but they are nevertheless the ones called upon most routinely.

A tentativeness in a mental balancing act. The fifth essential feature of morality betrays a similar, inherent contradiction. Arriving at a decision about how to implement a moral finding and arrange for redress involves a sometimes complex mental balancing act, and it therefore carries a tentativeness or uncertainty. This mental calculation is facilitated by the removal of moral decision making to a separate, if only symbolic, place. Time is often provided for this deliberation to occur. In a trial, there are usually at least two “removals”: the sequestration of the jury and the judge's withdrawing to rule on the case. There is yet another removal in making a decision about redress: sentencing. The flow of court‐based proceedings is often based upon these built‐in “time‐outs” to reconsider and think through the decisions being made.

The elders in Bo's band took their time, and tentativeness was an essential feature of their deliberation. They perhaps heard the reasoning of his Nana, who was one of them. They could well have vacillated, if only briefly. After all, it was only a small piglet, one could argue. And, after all, Bo was almost to the point where he could make a “man's hand axe,” so he was almost ready to take greater risks. The elders weighed all of these factors against the essential recklessness of Bo's actions, in scavenging alone.

On his way down the trail, Bo went through a number of these considerations in his own mind. The individual receiving moral application sometimes confronts the same tentativeness. He considered that there were no buzzards in the area where the piglets foraged; buzzards would have signaled a major kill, and the action of one of the large cats. A cat might still be in the area, quite able to protect its kill from scavengers—which is what Bo was. He also considered his preparation. He needed a “cutter,” but when he tried to fashion one, he knew it wasn't a very good one and that he was only “half‐ready.” The elders very likely considered all these factors, too, although they needed to make a strong statement that would end up preserving Bo's life and the band's existence in the future. They operated on an essential timeline in considering the gravity of Bo's infraction. It was serious, but it was best used as a “cautionary tale” for others, who could learn from Bo's experience without having to take the same risks.

Values are weighed against each other, punishments are considered, and conclusions are drawn that are sometimes very difficult to articulate. Therefore, moral reasoning and “knowing” or “finding good” often involve hesitancy or tentativeness. As much as a participant might want the process to be cut‐and‐dried, black‐and‐white, it rarely is.

A sad rejection of wantonness. The sixth essential feature of morality points to the assumption of adult moral thinking as a developmental accomplishment. We were surprised to see philosophical commentators on morality use the word “wantonness.” However, it is one of a set of terms used by both classical and modern philosophers repeatedly to distinguish between human moral awareness and the presumed insouciance of other animals. “Wantonness” stands for a recklessness that flows from the absence of a clear hold on human moral responsibilities, along a timeline and within the context of human society (Kitcher ; Korsgaard ; Frankfurt ).

In the story of Bo, the youth failed to stop and think before deciding to scavenge for a pig alone. His own friend was surprised and asked, “Why?” His reply: “I was hungry.” As he travelled the trail, he knew that his quest was “dangerous business” and that large cats might be nearby, but nevertheless, he persevered. In light of the dangers he could have confronted, including a mauling or death, it is difficult to avoid the conclusion that his behavior was “reckless.” That recklessness is akin to the “wantonness” that philosophers use to distinguish self‐conscious human thought from animal behavior.

Could we simply interpret Bo's behavior as a “youthful indiscretion”? By all means, but that conclusion emphasizes that he was not thinking like an adult human. He was being reckless, a type of behavior that conformed to his less‐than‐adult developmental stage. However, Bo's immediate recognition that he had a problem when he met his friend while coming back into camp, his acquiescence to the decision of the elders, and his consideration of his responsibilities for the past year—all these factors suggest that Bo was nearing adulthood, a time when recklessness—wantonness—insouciance—would have to fall away. It is a happy event, but a sad one, too, and this inherent conflict is usually captured in puberty rites worldwide. In symbolically bridging youth and adulthood, puberty rites allow the group to go on, in spite of the co‐existing sentiments of happiness and regret.

Conforming to adult expectations, when it would be so much easier to “let loose” and forget them, can cause sadness, but it also creates, on reflection, a sense of accomplishment. Adulthood has been achieved. So, Bo had reason to be satisfied with the elders’ harsh treatment of him. It signaled his impending maturity, and that was a fine replacement for the sadness of leaving a carefree youth behind and relinquishing the freedom of taking excessive risks with his own life. He could no longer do that because it mattered to the group whether he lived or died. The same type of concern often leads modern parents of young children to purchase life insurance.

A capacity for empathy with someone receiving moral judgment. The seventh essential feature of morality is usually considered a feeling or emotion, one that implies a sensed commonality with another human and even taking on the other's feelings as one's own. Empathy is very often included in formulations of morality, for example, in the models emerging from work on the social brain network. However, here, we specify a certain type of empathy, and that feeling is toward the individual receiving moral judgment. This kind of empathy may or may not be elicited for an individual feeling the weight of a moral decision. In focusing on empathy for an individual being judged, we see the possibility of a broader understanding of the consequences of moral decision making for others in the social group besides the elders or experts implementing a moral decision. The understanding of the decision makers then spreads out and becomes the understanding of the group, in general. The implementation is noted, discussed, and becomes part of the culture. Bo's infraction and punishment would surely be the topic of discussion at every campfire that evening!

“Receiving moral judgment” refers to the actual implementation of a moral decision. This is the “action” portion of morality, and it can vary from a quiet prayer, to the death penalty, to nothing. If a punishment or redress is implemented, this can elicit a response in an elder, expert, or parent that involves “mirror neurons” (Rizzolatti and Craighero ; Keysers ), in which the individual making a moral judgment experiences the pain of the individual receiving it. The same effect can be elicited in onlookers. This feeling surely does not always attend moral judgment, but with the possibility of understanding another's suffering arises the potential for a better understanding of the basis for the moral decision—on the part of the judge, the judged, and others in the group. Like other characteristics we have noted above, there can be a sadness that attends this type of empathy. Feeling the emotions of another person means feeling the good and the bad.

In the story of Bo, the individual who was probably in the most conflicted emotional position was Bo's Nana. He was her grandson, and still, she was a member of the group of elders that made moral decisions, and she, like Bo, had to acquiesce to the “rightness” of the decision to punish him. This cannot have been a happy experience, but she, like Bo, could take satisfaction in the fact that the punishment was given to someone who was nearing adulthood and would have to consider his actions more broadly and carefully, for the sake of the group as well as his own sake. In Bo's nearing maturity, his Nana could find satisfaction in the situation, in spite of empathizing with him and taking some of his pain onto herself. She had reason to be proud.

The experience of a burden. The eighth essential feature of morality derives from many of the feelings and states mentioned to this point, but it highlights the willingness of experts, elders, and parents to take on a mentally and emotionally difficult role. All of the essential features together represent an assumption of moral responsibility and the task of thinking morally. This is most often experienced as isolation and as a burden. The sensation of a burden joins sadness, regret, and tentativeness for the individuals tasked to come to moral decisions.

In the story of Bo, it is noteworthy that his Nana did not come directly to him, but joined the other elders in a task she knew would be difficult. She represented Bo, to an extent, but in the role she played at the end, she represented the group. It took maturity to parse the difference between her two roles. This was not in any way enjoyable to her or to the others. The tone of the male elders was necessarily “rough” because their demeanor had to represent the displeasure of others in the band. If Bo were killed, they would lose a good provider for the future and the band would potentially be placed in jeopardy.

Resolution on the part of the group. The ninth essential feature of morality refers to the end product: the group's resolution. It is important to note that it is the resolution on the part of the group, not on the part of the individuals who make the moral decision or who receive its implementation. The individuals involved may remain torn, conflicted, angry, and disappointed for a long time to come. However, the group's level of tension is reduced. The rules have been clearly stated by example. There is no doubt that Bo made a mistake, and others should take heed and not make the same mistake. The survival of the group is paramount. The lesson is clear: no matter how talented and foolhardy a youth may be, he should follow the rules.

In Bo's story, he learns this lesson, but at the end he devises a way to get what he wants within the strictures of the group. Next time he scavenges, he will take Aba with him, they will tell the elders where they are going, and they will get a bigger piglet! By the end of the story, Bo is already well on his way to integrating his moral infraction into his future hunting strategies. Again, as Korsgaard notes, “Moral standards define ways of relating to people that most of us, most of the time, find natural and welcome” (2006, 101). As a system that reconciles norms, values, and the range of individual behaviors, morality works for the good of the group. Higher‐order infractions are often dealt with publicly, to inform all who watch what the rules are and why they exist. In Bo's story, the deliberation of the group of elders was private, but the announcement of Bo's judgment was public.

Hope and faith in the future on the part of the group. The tenth essential feature of morality also refers to the end product, and underscores the strong emotions involved in moral decision making, as well as its cognitive and intellectual difficulty. In human groups, morality does not constitute an emotionless, cold, “heartless” system of retribution. It takes human intellect and emotion and puts them to good use for the sake of the group. Decisions that call forth moral judgments are necessarily at a higher level than small infractions, like stealing a cookie from the cookie jar. For that reason, moral judgments crafted in isolation by elders, experts, and parents are often implemented in a public context (or, in a household, with all family members present).

Once moral decisions are made and announced, group members can return to their normal activities. They may not have agreed with the decision made, but usually they go along with it so they can focus on their daily existence. Moral decision making usually, if only eventually, imposes a quality of calm on others, and, as in Bo's case, it also points toward acceptable avenues for future action. By the end of the story, Bo was already "over" the decision and on to crafting a way to get what he wanted (another, bigger piglet, and praise for it), within the rules the elders set. Recall that at the beginning of Bo's story, what he wanted was group recognition, and he planned to use the scavenging of a piglet to get it. What he feared most was the group's scorn: "…if he came upon some meat and didn't have some way to butcher it, the band would never believe him. They would laugh at his story and tell him it wasn't true, it was just his wishing it to be true."

Depression among animals is well known, as it is among human children. In these cases, it is termed "a failure to thrive." Among adult humans, depression can prevent the accomplishment of important daily activities that keep others alive and well. Therefore, it is to the benefit of the group that moral infractions be managed in a public framework, so that social justice can be served, rules can be clearly stated by example, and dysfunctional emotional states can be reduced. Humans must have hope for the future. They operate along a timeline and they keep an eye on the past and the future. Their handy LHIs (left hemisphere interpreters) constantly churn out explanations for the way things work (Gazzaniga 1999). If higher level moral infractions go unmonitored and unaddressed, then there is a real possibility of depression on the part of adults. Therefore, the implementation of a moral system—neurologically based but operating as a cultural system—is an investment in maintaining hope and faith in the future. Without hope and faith, humans fail to thrive.

We have explored in depth our model of ten essential features of morality. Together, they form a good foundation for understanding, and testing for, the biologically based capacity in humans that we call morality. We have discussed a variety of ways in which "knowing good" (as well as defining good, finding good in everyday action, and judging behavior along a continuum from good to bad) can be seen as an essential part of the moral consciousness of both individuals and the group. Approaching morality only from a negative perspective is not enough; we cannot learn about "good" only from miscreants. We must rely on the examination of the successful operation of morality in intimate detail, in the lives of human beings. To date, literary artists and philosophers have been more adept at this than social scientists, although this is frequently true of faculties just emerging into the harsh glare of scientific examination.

The foundations of morality that we have discussed are biological, evolutionary, cognitive, and neurological, and, we would add, they are increasingly the focus of the emerging sciences of genomics, bioinformatics, and information science.

“KNOWING GOOD” FROM THE PERSPECTIVE OF NEW SCIENCES

To fully understand human phenotypic morality, we must explore its operation from the level of genes, to the specific biological components and networks in the brain that allow humans to think morally, to its operation in groups of social beings. The work on morality within the fields of genomics, bioinformatics, and information science will be an enormously complex task over the coming decades, even centuries.

We have written elsewhere (Rappaport and Corbally ) that the biological components that form the basis for moral decision making will eventually be identified, their genomic basis better understood, and the timing of their evolutionary emergence clarified. Specific types of biological tissue can be traced back to their origins in protein synthesis, to its regulation by other genes, to its modification through epigenetic factors, and to its functioning in organ systems, including neural networks. Moral decision making will likewise eventually be traced to an underlying genomic substratum.

Does this imply that we can reduce human morality to genes alone? No. No matter how thoroughly we understand the biological basis of morality and how it is coded genetically, its context is always changing, because culture is always changing. Humans have a near-unique capacity to influence their own biology and to change things for the good of the species. Moral decision making is a variable capacity that relies on the evolution of a human being with an enormous degree of genomic and phenotypic plasticity. Each instance of moral decision making, while it may look like another, is never fully the same, because culture has changed since the last similar case. The realization that moral decision making is always changing makes it all the more remarkable. Readers interested in the initial identification of genes associated with human decision making are referred to reports on a gene named miR-941, which is a regulatory gene. As such, it can be very powerful, both when it functions properly and when it does not. It should be noted that the gene evolved de novo, out of "junk DNA," which is unusual, since human lineage-specific genes more commonly evolve from other genes (E. Harris 2015; Rappaport and Corbally ; ScienceDaily 2012; Hu et al. 2012; Khaitovich et al. 2006).

Technically, consideration of morality from the point of view of information science follows genomics very closely, in the newly developing field of bioinformatics. All living species are related, and much of their genomic complement is similar or identical. The regulation of gene expression is the origin of much of the phenotypic variability we see. The analysis of the first chimpanzee genome required creating an entirely new (and very large) computer program (Pollard 2009). Identifying uniquely human genes is as much a task in information science as it is a topic of biological investigation.
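To make concrete why this is partly an information-science problem, the following minimal Python sketch compares two short, pre-aligned human and chimpanzee sequence fragments and flags the positions where they differ. It is purely illustrative: the fragments, the function name, and the position-by-position comparison are our own assumptions for the sake of the example, not the methods actually used in chimpanzee genome analysis or by Pollard (2009); real comparative genomics operates on billions of bases with sophisticated alignment and statistical screening.

```python
# Illustrative sketch only: real genome comparison uses whole-genome
# alignments and statistical screens, not toy strings like these.

def find_differences(human_seq: str, chimp_seq: str):
    """Return (position, chimp_base, human_base) tuples where two
    pre-aligned sequences of equal length differ."""
    if len(human_seq) != len(chimp_seq):
        raise ValueError("Sequences must be pre-aligned to the same length.")
    return [
        (i, c, h)
        for i, (h, c) in enumerate(zip(human_seq, chimp_seq))
        if h != c
    ]

# Hypothetical, made-up aligned fragments (not real genomic data).
human_fragment = "ATGGCGTACCTGAAT"
chimp_fragment = "ATGGCGTACTTGAAC"

for position, chimp_base, human_base in find_differences(human_fragment, chimp_fragment):
    print(f"Position {position}: chimpanzee {chimp_base} -> human {human_base}")
```

Even this toy comparison hints at the scale of the task: applied across roughly three billion base pairs, and complicated by insertions, deletions, and regulatory context, the identification of human-specific sequence quickly becomes a problem of data management and algorithm design as much as of biology.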

These new sciences will provide unique perspectives on the "biological basis for knowing good," but we should not be fooled into thinking that identification of a "morality gene" or a "God gene" is the ultimate goal. Moral functioning in human beings emerges out of layers of complexity that are, ultimately, always new, as the species seeks out new challenges and new environments.

As we see in the story of Bo, moral decision making involves emotions and cognitive capacities, to which we would add human perceptual capacities as well. Because moral decision making draws on so many human biological systems, from the intellectual to the perceptual, we can envision the use of information science in organizing the vast amount of data involved in single instances of moral decision making, and, from there, in tracing the accretion of moral codes. Information science and the biology of morality will be enormously helpful in identifying which kinds of information are important. With that greater knowledge base, humans can look forward to honing their skills of moral decision making, knowing that those skills will change, but remembering that they incorporate, biologically, a humanity and humaneness that we do not want to lose as the species confronts new challenges and itself continues to evolve.

Notes

  1. A version of this article was presented at the 62nd Annual Summer Conference of the Institute for Religion in an Age of Science (IRAS) entitled “How Can We Know? Co‐Creating Knowledge in Perilous Times” held on Star Island, New Hampshire, from June 25 to July 2, 2016.

References

Aiello, Leslie C., and Robin I. M. Dunbar. 1993. "Neocortex Size, Group Size, and the Evolution of Language." Current Anthropology 34(2):184–93.

Attwell, Laura, Kris Kovarovic, and Jeremy R. Kendal. 2015. "Fire in the Plio-Pleistocene: The Functions of Hominin Fire Use, and the Mechanistic, Developmental and Evolutionary Consequences." Journal of Anthropological Sciences 93:1–20.

Cochran, Gregory, and Henry Harpending. 2009. The 10,000 Year Explosion: How Civilization Accelerated Human Evolution. New York, NY: Basic Books.

Colagè, Ivan. 2015. "The Human Being Shaping and Transcending Itself: Written Language, Brain, and Culture." Zygon: Journal of Religion and Science 50:1002–21.

Coolidge, Frederick L., and Thomas Wynn. 2009. The Rise of Homo sapiens: The Evolution of Modern Thinking. Chichester, UK: Wiley-Blackwell.

Frankfurt, Harry. 1971. "Freedom of the Will and the Concept of a Person." Journal of Philosophy (January): 5–20.

Gazzaniga, Michael S. 1999. "The Interpreter Within: The Glue of Conscious Experience." Available at http://www.dana.org/Cerebrum/Default.aspx?id=39343#sthash.I7zCiFeL.dpuf

Gazzaniga, Michael S. 2006. The Ethical Brain: The Science of Our Moral Dilemmas. New York, NY: Harper.

Gazzaniga, Michael S., Richard B. Ivry, and George R. Mangun. 2013. Cognitive Neuroscience: The Biology of the Mind, 4th ed. New York, NY: W.W. Norton.

Grossman, Tobias, and Mark H. Johnson. 2007. "The Development of the Social Brain in Human Infancy." European Journal of Neuroscience 25:909–19.

Harris, Eugene E. 2015. Ancestors in Our Genome: The New Science of Human Evolution. Oxford, UK: Oxford University Press.

Harris, Sam. 2010. "Moral Confusion in the Name of 'Science'." HuffPost Books, The Blog. Original: 05/29/2010 05:12 am ET. Updated: May 25, 2011. Based on a 2010 TED conference presentation.

Harris, Sam. 2011. The Moral Landscape: How Science Can Determine Human Values. New York, NY: Free Press.

Hu, Hai Yang, Liu He, Kseniya Fominykh, Zheng Yan, Song Guo, Xiaoyu Zhang, Martin S. Taylor, Lin Tang, Jie Li, Jianmei Liu, Wen Wang, Haijing Yu, and Philipp Khaitovich. 2012. "Evolution of the Human-Specific microRNA miR-941." Nature Communications 3:1145. https://doi.org/10.1038/ncomms2146

Keysers, Christian. 2010. "Mirror Neurons." Current Biology 19(21):R971–73.

Khaitovich, Philipp, Wolfgang Enard, Michael Lachmann, and Svante Pääbo. 2006. "Evolution of Primate Gene Expression." Nature Reviews Genetics 7:693–702.

Kitcher, Philip. 2006. "Ethics and Evolution: How to Get Here from There." In Primates and Philosophers: How Morality Evolved, edited by Frans de Waal, 120–39. Princeton, NJ: Princeton University Press.

Korsgaard, Christine. 2006. "Morality and the Distinctiveness of Human Action." In Primates and Philosophers: How Morality Evolved, edited by Frans de Waal, 98–119. Princeton, NJ: Princeton University Press.

Korsgaard, Christine. 2009. Self-Constitution: Agency, Identity, and Integrity. Oxford, UK: Oxford University Press.

Kreger, Randi. 2012. "Lack of Empathy: The Most Telling Narcissistic Trait. Stop Walking on Eggshells." Psychology Today. Online blog, Jan 24.

Larsen, Clark Spencer. 2014. Our Origins: Discovering Physical Anthropology, 3rd ed. New York, NY: W. W. Norton.

Lévi-Strauss, Claude. [1955] 1973. Tristes Tropiques, translated by John and Doreen Weightman. New York, NY: Atheneum.

Madl, Tamas, Stan Franklin, Javier Snaider, and Usef Faghihi. 2016. "Continuity and the Flow of Time: A Cognitive Science Perspective." In Philosophy and Psychology of Time: Studies in Brain and Mind, Vol. 9, edited by Bruno Mölder, Valtteri Arstila, and Peter Øhrstrøm, 135–60. Cham, Switzerland: Springer International Publishing.

Main, Douglas. 2014. "Some Chimps Are Putting Grass in Their Ears for No Particular Reason." Available at Smithsonian.com. June 30.

O'Connell, James F., Kristen Hawkes, and Nicholas G. Blurton Jones. 1999. "Grandmothering and the Evolution of Homo erectus." Journal of Human Evolution 36:461–85.

O'Connell, James F., Kristen Hawkes, and Nicholas G. Blurton Jones. 2002. "Meat-Eating, Grandmothering and the Evolution of Early Human Diets." In Human Diet: Its Origin and Evolution, edited by Peter Ungar and Mark F. Teaford, 49–60. Westport, CT: Bergin & Garvey.

Opie, Kit, and Camilla Power. 2011. "Grandmothering and Female Coalitions: A Basis for Matrilineal Priority?" In Early Human Kinship: From Sex to Social Reproduction, edited by Nicholas J. Allen, Hilary Callan, Robin Dunbar, and Wendy James, 168–86. Malden, MA: Blackwell.

Pollard, Katherine S. 2009. "What Makes Us Human?" Scientific American 300 (May): 44–49.

Rappaport, Margaret Boone, and Christopher Corbally. 2016a. "The Human Hearth and the Dawn of Morality." Zygon: Journal of Religion and Science 51:835–66.

Rappaport, Margaret Boone, and Christopher Corbally, SJ. 2016b. "The Emotional Brain Hypothesis: Emotional, Social, and Religious Vetting in the Evolution of Rational Decision Making and Scientific Modeling." In Issues in Science and Theology: Do Emotions Rule the World? edited by Dirk Evers, Michael Fuller, Anne Runehov, and Knut-Willy Saether, 133–42. New York, NY: Springer.

Rappaport, Margaret Boone, and Christopher Corbally, SJ. 2017. "Update: Our State of Knowledge of the Genomic Basis for Human Specialness, with Implications." Studies in Science and Theology 16(2).

Rizzolatti, Giacomo, and Laila Craighero. 2004. "The Mirror-Neuron System." Annual Review of Neuroscience 27:169–92.

ScienceDaily. 2012. "Science News. New Brain Gene Gives Us Edge Over Apes, Study Suggests." Source: Dr. Martin Taylor, Institute of Genetics and Molecular Medicine, University of Edinburgh. Journal reference: Hu et al. 2012. Online, Nov 14.

Shoemaker, William J. 2012. "The Social Brain Network and Human Moral Behavior." Zygon: Journal of Religion and Science 47:806–20.

Shoemaker, William J. 2017. "The Evolution of Hominins and the Biological Foundation of Morality." Studies in Science and Theology 16(2).

Tancredi, Laurence R. 2005. Hardwired Behavior: What Neuroscience Reveals about Morality. New York, NY: Cambridge University Press.

Varki, Ajit, and Tasha K. Altheide. 2005. "Comparing the Human and Chimpanzee Genomes: Searching for Needles in a Haystack." Genome Research 15:1746–58.

Williams, George C. 1957. "Pleiotropy, Natural Selection, and the Evolution of Senescence." Evolution 11:398–411.