Notes on Utilitarianism by John Stuart Mill

Some Useful Terms

  • Ethics is the subfield of philosophy concerning the nature of right and wrong. 
  • Normative ethics is the subfield of Ethics concerning what standards to use when judging what we morally ought to do.
  • Consequentialism is a normative ethical theory that judges the rightness or wrongness of actions entirely on their consequences or effects.
  • Utilitarianism is a type of consequentialism holding that happiness ought to be maximized and unhappiness minimized.

Book Notes

Utilitarianism (1861) is the most famous book on the eponymous ethical theory. Thanks to its great influence on the study of ethics and its short length of just under 100 pages (which allows for its continued use in undergraduate classrooms), it has remained relevant to the present day. It has played a key role in the history of consequentialist ethical theories and can be credited, in part, for their popularity.

Across its five chapters, Mill describes what the theory of Utilitarianism is (and is not), how people might be motivated by it, and his proof for it, ending with an analysis of justice and its relationship to the theory. Throughout the first three chapters, it is notable how much time Mill spends deflecting canards, objections he does not consider to have merit because they rest on an inaccurate assessment of what Utilitarianism is, some of them addressed before he even provides an outline of the theory itself.

This outline begins in chapter 2. The key principle Mill directs us to is the much remarked upon Greatest Happiness Principle: acts are right insofar as they tend to increase overall happiness or decrease unhappiness. By happiness, Mill is eager to point out, he does not mean the trite notion of momentary bliss, but all the aspects of life that are satisfying or pleasurable. Additionally, in discerning which types of happiness are best, he uses a controversial criterion: of any pair of pleasures where everyone, or nearly everyone, who has experienced both prefers one over the other, the preferred one brings the greater happiness.

This, Mill believes, demonstrates that so-called higher pleasures of mental, moral, or aesthetic quality are better than lower, sensation-driven pleasures. It also leaves philosophizing and intellectual thinking as some of the greatest pleasures around — quite convenient for Mill, given that this is what he spent much of his time doing outside of political advocacy.

Furthermore, Mill notes, other principles often embedded in moral language, such as veracity or virtue, still have purchase in Utilitarianism. However, these are secondary principles, which, while good guideposts to moral behavior, are not the ultimate deciding factors of right and wrong. The ultimate judge of rightness and wrongness is the degree to which happiness has been increased or decreased.

In chapter 3, Mill dedicates significant time to describing how, in certain respects, Utilitarianism is no different from most other ethical theories. The same psychological and social sanctions can be used to prompt people to perform moral actions. While it may take time before the tenets of Utilitarianism spread through society via education and persuasion, the mental and social tools to prompt moral behavior are already there, even if what is considered moral is changing.

In chapter 4, we are asked to consider how Utilitarianism might be proved. As Mill notes, this is no direct proof, but it is the best that can be asked of a moral theory. Roughly, it goes:

  1. The only way to prove that something is desirable is to observe that people desire it
  2. Everyone desires their own happiness
  3. A person’s happiness is thus good for that person
  4. Therefore, the general happiness is good to the aggregate of people

Despite Mill’s warning that this is not a direct proof of mathematical strength, it still feels underwhelming. In particular, it is peculiar that Mill thinks one person’s happiness being good for that person logically entails that increasing the aggregate amount of happiness is good for the aggregate of people. Such a connection requires further assumptions about what the aggregate of persons means and whether something can be good for it.

In chapter 5, Mill considers the topic of justice. He searches for attributes common to conceptions of justice and finds them grounded in a set of emotions dealing with self-preservation, some of which are observed in other animals. These emotions, when constrained by social custom, motivate the creation of law. Mill points out that the etymology of justice demonstrates its deep connection to our legal foundations (jus means law in Latin). But justice, he states, runs deeper than law, as the law itself can be unjust.

So justice can be seen as the set of ways in which society protects our moral rights, sometimes through law. This means we all have a stake in the creation of just systems. Mill connects this to utility, and to the happiness principle, by noting that just systems secure people’s basic safety and alleviate many of the most fundamental concerns we have about harms that others might inflict upon us. However, justice is not systematic; it sits atop deeper intuitions concerning morality. At the base of our moral intuitions lies the notion of utility, and justice emerges from it. Furthermore, Mill argues, there are cases in which acting expediently, outside of what is just, remains within what is moral. This demonstrates that justice delineates a class of moral rules that emerge in societies to satisfy certain common emotions concerned with self-protection and fairness, a class less fundamental than the notion of what is moral, which Mill states is determined by the principle of utility.

Together, the chapters contain many influential and compelling arguments in favor of, at the very least, a prioritization of happiness in any ethical system, if not adherence to Mill’s version of Utilitarianism itself. Mill’s work has inspired a series of derivative ethical theories and has done much to advance the expanding moral circle, in which greater moral concern is extended to women, the impoverished, those in other countries, and non-human animals.

Bottlenecks to Progress in the Internet Age

I have been reading A New History of Western Philosophy by Anthony Kenny and it resurfaced thoughts that I have often had when learning about historical figures and everyday life in prior eras. In particular, how these figures were able to overcome the dual problems of censorship by political and religious elites and the limited availability of information will always fascinate me.

The lack of access to crucial historical texts was perhaps the major bottleneck preventing philosophical progress in medieval Europe. In fact, the capture of Constantinople by Ottoman forces in 1453 ended up being critical for the Renaissance: it forced the Greek scholars who had kept the philosophy of Plato and other ancients alive to flee to Italy, where Scholasticism (the rigid fusion of Christianity and Aristotelianism) dominated. The spark of the rediscovered classics was enough to light the flames of new philosophies that burned the Scholastic tradition to the ground.

Think about that. Works of Plato, lingering somewhere in Byzantine libraries for hundreds of years, simply needed to be transported across the Mediterranean and communicated by the scholars who kept them to unleash a wave of progress the world is still reverberating from. Obviously there were many factors behind the Renaissance, but it is a remarkable feature of this time that a relatively small set of books could cause such massive intellectual changes. In part, this is because there simply wasn’t that much new stuff to read. Something coming out was a big deal. Even if it was a re-release. In fact, it wasn’t really until the 19th Century that it became impossible to read everything worth reading in most subjects.

Beyond the scarcity of written material, religious and political persecution has been another persistent obstacle to progress in the Western philosophical tradition. The political turmoil in the lives of almost every major medieval and pre-modern philosopher is striking. Writers had to self-censor, and many were forced to flee or were outright killed. To name a handful:

  • Boethius (tortured and killed by the Ostrogothic King Theodoric)
  • Giordano Bruno (denounced and burned at the stake in Rome)
  • Baruch Spinoza (excommunicated and exiled from the Jewish community in the Netherlands)
  • John Locke (fled England to the Netherlands to avoid political persecution before returning)

In stark contrast to this is the extraordinary availability of information today and the ease with which new ideas can be articulated. This is perhaps the most remarkable fact about our era (and what makes you reading this possible at all). It also raises the question of why, since the invention and wide-scale adoption of the internet, productivity and economic growth haven’t sped up more. One theory (articulated by Tyler Cowen) is that we have already picked much of the low-hanging fruit that yielded the massive economic progress of the 1900s. Science, likewise, is using more people to make less progress than it did in the past.

If this is true, then it seems that we hit a sweet spot for GDP growth and scientific progress somewhere in the 20th Century. Our intellectual and political climates were just good enough to unleash discoveries and inventions that lay barely out of reach of previous generations, yet were much easier to find than those that would follow.

On the personal side, it might be hard to relate to GDP figures. But the relationship between personal productivity and economic productivity is a topic that still crosses my mind from time to time (despite how differently the two may be defined). Having been born in an age and place where the internet was nearly ubiquitous, and my capacity for distraction by it nearly endless, I wonder what its overall effect on our productivity has been.

On the one hand, learning has become unquestionably easier. Writing papers often involves cycles of typing, opening a new tab, searching Google, finding crucial information, and switching back to type up my findings and analysis mere seconds later. This would have taken orders of magnitude longer in the pre-internet age but is now a seamless feature of students’ and writers’ lives. Educational content producers and random helpful figures on the internet are easily found and often filtered by how useful their information is. Finally, Wikipedia (which yesterday turned 20!) is always there to provide an overview of just about anything.

But that is helpful only when I am working. An expression I have found most apt in describing my personal productive capacity is Parkinson’s law: “work expands so as to fill the time available for its completion”. The shorter the deadline, the more productive I become in meeting it. A longer deadline gives me time to slack off and fuels procrastination. And while procrastination has existed since the day man began working, the magnitude of its influence is larger than ever before.

The attractiveness of distractions has grown especially as our attention has been commodified, with a profit motive attached to our eyeballs. Devices and applications are extremely efficient not at improving your overall well-being, but at guiding your attention in whatever way software engineering teams see fit. This is a uniquely modern curse.

To bring this full circle, I must clarify that I would unquestionably rather face the current challenges of slowing growth and hyper-distraction than those of intellectual scarcity and persecution. We have traded away the incredibly cruel world of the past for good reason.

However, we must think harder about the questions posed by the information age. How should one deal with the experience of information overload and the increasing complexity of decisions (particularly major life decisions)? How should we design our relationship with our technology to leave us well informed, more in control, and less distracted? How should we think about the economy and our role in it — particularly if much of the low-hanging fruit has been plucked, and humans (with the same brains and bodies) are demanded to jump higher than before in order to achieve the same GDP growth achieved in the past?

The curses of the past have been traded away for lesser, and in some ways opposite, curses of the present. Acknowledging them and answering the questions they raise is something I will continue to attempt. Luckily, the internet has shown me that I am not alone.

Consciousness: Where it might not be

This is part two in a series on consciousness

Continuing from last week’s post, I shall explore how exactly one can doubt the consciousness of the objects one encounters. Again, by consciousness I mean any type of experience something or someone might have; or, what it is like to be something.

From the birthplace of modern philosophy, Descartes gives us irrefutable reason to say that conscious stuff exists, at least within anyone thinking the sentence ‘I think, therefore I am’. Beyond the odd solipsist, most everyone agrees that it is also reasonable to assume other people are conscious. Today, we further assume that dogs and other mammals are conscious. What about trees? Grass? Rocks? The Sun?

For most of my life, these sorts of questions had an obvious answer. While the exact nature of consciousness remained mysterious, it was obvious to me that it was a product of the brain. The mind is what the brain does, to use a neuroscientific quip. Consciousness is something like information being processed, or a byproduct of a working functional system.

Yet I began to doubt these answers as I considered the unity of nature: the fact that all things, including our bodies, are made of the same particles that stars are, emergent from the same quantum fields. The trajectory of history also seemed to point in the direction of decreasing human distinctiveness (from Copernicus to Darwin to Goodall to AlphaGo), an expanding circle of moral worthiness, and a wider range of animals considered conscious.

So I investigated the actual premises, the underlying reasons, for a belief in non-consciousness. A starting point is noticing that human consciousness is profoundly altered by changes in the brain. This was noticed as far back as the Roman physician Galen, who wrote about how gladiators who suffered head injuries were permanently psychologically harmed. It presaged the connections between brain activity and conscious states that modern neuroscience has done much to uncover.

From here, it could be assumed that the requirements for consciousness are found in certain properties the brain has (whether as an information processor or for the functional roles it plays in living organisms). After all, if the brain is harmed or sedated, you lose consciousness (or at least the memory of it). Every theory of consciousness therefore gets selected first by whether it explains the consciousness of those who can say that they are conscious. Right now, that’s just us humans.

But the problem is that you cannot get a restrictive theory of consciousness off the ground without an additional assumption: that anything not sufficiently similar to us humans isn’t conscious at all. Otherwise, there is no way to disprove countervailing theories of consciousness that describe non-human objects as conscious.

If you want to say consciousness emerges when brains, or similarly complex objects, are formed, I can come along and say, “yes, that is one example of consciousness, but consciousness also occurs when only relatively simple objects are present.” To rule this out, you have to fall back on an intuition that things not sufficiently similar to us are not conscious. No matter what restrictive theory you offer to explain consciousness, there is no way to refute a wider theory of consciousness without that intuition.

The following argument articulates how this line of reasoning works:

  1. I am conscious.
  2. I can sense many things that are not similar to me (or the body I consider mine).
  3. Things that aren’t (sufficiently) similar to me are non-conscious.
  4. Therefore, there are many things that are non-conscious.

The argument depends on the intuition expressed in premise 3 (as well as on a vague notion of similarity). Yet every theory that restricts consciousness to some subset of things similar to us relies on it. Where this intuition arises from is of interest to me.
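
For those who like their logic explicit, the argument’s validity can be checked mechanically. Below is a minimal sketch in the Lean theorem prover; the predicate names are my own, and I have weakened “many” to “at least one”:

    -- One possible formalization (names and simplifications are mine):
    example (Obj : Type) (Sensed Similar Conscious : Obj → Prop)
        (p2 : ∃ x, Sensed x ∧ ¬ Similar x)       -- 2. I sense things not similar to me
        (p3 : ∀ x, ¬ Similar x → ¬ Conscious x)  -- 3. dissimilar things are non-conscious
        : ∃ x, ¬ Conscious x :=                  -- 4. some things are non-conscious
      match p2 with
      | ⟨x, _, hdis⟩ => ⟨x, p3 x hdis⟩

Notice that premise 1 does no work in the proof; the conclusion hangs entirely on premises 2 and 3, which is exactly where the similarity intuition enters.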

In next week’s post, I will investigate how this intuition might itself be emergent from the physicalist worldview, creating a circular argument.

Consciousness: Why people think it might not be everywhere

This is part three in a series on consciousness

Last week, I introduced the intuition that things “that are not sufficiently similar to us are not conscious.” This intuition matters because, without it, there is no way to ground a restrictive theory of consciousness. Put another way, without this intuition, you would find it impossible to defend the position that anything at all is non-conscious. It is present, whether explicitly or implicitly, in every restrictive explanation someone gives for why consciousness is or isn’t present somewhere.

One could argue that the intuition can be avoided by falling back on some other defining feature of consciousness. For example, if you believe processing information is necessary for consciousness to exist, you might think the phenomenology grounding this belief (consciousness simply is information processing) justifies it, and thus justifies restricting consciousness to information processors. Ostensibly, falling back on this belief would remove the need to rely on the intuition described above. However, this falls apart when you look at the details.

For one, there is no single definition of information processing. It could be that everything in the universe is describable as an information processor (perhaps in the way a particle or an object enacts the laws of physics in order to interact with surrounding objects). But this ends up being an entirely non-restrictive theory.

To counter this, one might make the definition of information processing more restrictive. However, for any restrictive definition of information processing, the phenomenological grounding breaks down. I can see how my consciousness might be, in some loose sense, information being processed. But it is very unclear, phenomenologically, why any one restrictive definition of information processing should be the correct one. Without phenomenology to explain the choice of a specific restrictive definition, one would have to fall back, again, on the intuition that things insufficiently similar to us humans are non-conscious (as that type of information processing would happen to occur in our brains but not everywhere).

This brings us to the question: where does this intuition come from? Why believe that anything we experience is non-conscious? I believe it is a consequence of our current physicalist worldview. If the things in our environment move like clockwork, as physics tells us, they can be predicted without any mention of consciousness. In that case, the fact that we are conscious at all is something special that needs to be explained. The explanation usually ends up being a restrictive theory of consciousness (call it X). Because most things in the universe aren’t like you, you can then use this theory to explain why those things are, in fact, non-conscious. This, in turn, can be used to justify the version of physicalism you began with (the one which explains the world without reference to consciousness). The result, however, is a circular chain of justification.

In next week’s post, I will conclude my thoughts on consciousness by addressing some critiques and discussing why I think this topic is relevant in the first place. 

Consciousness: The relationship with the current physicalist worldview

This is part four in a series on consciousness

Last week, I discussed how one justifies a restrictive theory of consciousness (that is, any theory which says consciousness is not universal). I concluded that even if you try to ground your restrictive theory in your own phenomenology (your first-hand experience), you still cannot do so without holding the intuition that things that aren’t similar to you aren’t conscious. I shall call this the “similarity intuition,” or simply “the intuition,” in this post.

Put in argument form, here is a way you might try to avoid relying on the intuition.

  1. Consciousness requires X
  2. X doesn’t occur in things not similar to me
  3. Therefore, things that aren’t similar to me aren’t conscious
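
As before, the bare logic here is airtight; a quick Lean sketch (the encoding and predicate names are my own choices) confirms it:

    -- Validity of the schema above, under my own encoding:
    example (Obj : Type) (Conscious HasX Similar : Obj → Prop)
        (p1 : ∀ x, Conscious x → HasX x)         -- 1. consciousness requires X
        (p2 : ∀ x, ¬ Similar x → ¬ HasX x)       -- 2. X absent from dissimilar things
        : ∀ x, ¬ Similar x → ¬ Conscious x :=    -- 3. dissimilar things aren't conscious
      fun x hdis hcon => p2 x hdis (p1 x hcon)

The question is not validity but soundness: why believe the premises?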

Now you rely on (1) instead of the intuition. But you still need a way to believe X is required. This could be done phenomenologically.

  1. My consciousness has certain essential properties that I can discover phenomenologically
  2. These properties are essential to any other consciousness
  3. These conscious properties can be mapped onto certain properties X, which are present in certain physical systems
  4. Therefore, if X isn’t present in something, it is non-conscious

This argument appears to sidestep the intuition but relies on it nonetheless. First, premise (2) assumes that properties essential to your consciousness are present in any consciousness. In other words, all consciousness must be similar to your consciousness, at least insofar as it has certain properties.

The similarity intuition is more clearly present in premise (3). Any restrictive mapping of phenomenological properties to a physical or mathematical system requires an intrinsically self-centered approach, because it consists of humans mapping their experience onto their own brain states. To justify this mapping, one has to rely on the intuition that other, less restrictive mappings don’t describe consciousness; in other words, that things not sufficiently similar to me (where phenomenological states are mapped to a physical system dissimilar to mine) are not conscious.

One could argue more easily against the second main claim I introduced in last week’s post. Here, I linked the current physicalist worldview to this similarity intuition in a circular, self-justifying relationship. One could argue that physicalism is compatible with panpsychism, an expansive view of consciousness that sometimes describes consciousness as a physical property common to all particles or physical systems.

Moreover, some might claim that physicalism needn’t weigh in on the debate over exactly where consciousness exists at all. Simply put, the more dissimilar a physical system is from a human being, the less we know about whether it is non-conscious or conscious. 

If this were all that people claimed, I would have less of a problem. But most physicalists do not merely argue that there is no epistemic justification for believing that things dissimilar to us are or are not conscious. They do not sit in a state of agnosticism about this. They believe that such things (e.g. rocks, plants, waterfalls) are, in fact, non-conscious.

My claim is that there is an obvious connection between the common scientific-physicalist worldview, conceptualizing the world as clockwork, and the belief that most of the world is non-conscious. Furthermore, the similarity intuition is both justified by this worldview, and helps maintain it.

Some actual clockwork

I want to say here that science continues to be the best way we have for explaining much of the world. In countless ways it has made our lives easier to live. But it is also true that the questions scientists are asking do not try to answer what I am talking about. They usually ignore consciousness, and for good reason. Treating things in the world as clockwork puts us in a frame of mind to start making hypotheses, mapping out relations between cause and effect, and making predictions. This is an eminently useful endeavor. 

But success in treating objects in the world as clockwork should not permanently cloud our judgements about whether, at the ground level, everything in the universe actually is determined. And it certainly should not prompt us to permanently believe that consciousness is present only in systems similar to us; at least not without proper justification.

My attempt in this series on (non-)consciousness was to push back against a common dogma and identify a common intuition justifying physicalism. I don’t know how many readers I have convinced of this, but I hope to have at least pushed the conversation forward a little bit.

Best,

Alexander Pasch

Book Review: Homo Deus – A Brief History of Tomorrow by Yuval Noah Harari

Rating: 4 out of 5.

Some authors are capable of bringing so many disparate ideas to the table that you begin to wonder where the limits of their creativity lie. A few are able to turn the tangle of ideas they introduce into a coherent, compelling synthesis. In Homo Deus, Yuval Noah Harari has shown that he is capable of doing this not only for human history (as in his widely appreciated Sapiens) but for the human future as well. Where Homo Deus falters most is perhaps in its repetition. The first two sections contain several informative strands of thought, but it takes too long to reach the meat of the work: the section concerning the future of humanity.

Harari’s thesis will certainly be controversial to many readers. He claims that the past has seen the religions of old replaced by the story of humanism, which is concerned fundamentally with the experiences of human beings themselves. The liberal variety of humanism, which dominated the 20th Century and lives on in democratic societies today, is now under threat from improving technology.

Giving value to individuals makes sense when you need them to fight wars, run factories, and participate in a growing economy. Through artificial intelligence and genetic engineering, advanced machines and enhanced humans will likely be capable of these tasks in the future. This will result in the breakdown of the liberal humanist story. Harari suggests two alternatives: a form of techno-humanism, valuing the experiences of technologically modified humans, or Dataism, valuing the free exchange of information above all else.

Concerned more with convincing than assuaging the reader, Harari relies on analogies from the past and present. He starts by chronicling the shift away from pre-agricultural humans’ worship of animals. Not coincidentally, this shift occurred precisely when farmers began to domesticate animals. Suddenly, animals were either seen as a means for human gain or ignored completely.

Furthermore, rulers were often given divine status precisely to justify the unequal value given to them and provide a structure that society could operate under. When capitalism and mass-mobilization required that men perform additional economic and military duties, they were given more inherent value. When mass-mobilization required these men to leave the factories in World War I, the women who replaced them also gained inherent value in the eyes of society.

These and other examples lead Harari to the conclusion that history describes a web of stories that humans tell one another to justify their actions. These stories are not feeble bits of imagination. The stories we tell ourselves about Jesus, capitalism, science, France, and others, direct the lives of billions of people, altering the world in their wake. For readers of Sapiens, this will be a familiar concept. 

Harari makes it clear that the dominant story of our age, liberal humanism, is under threat. When the peaks of intelligence become uncoupled from regular human beings, the value we give humans will certainly change. There is ample evidence that technology will profoundly alter the value structures we now cling to. Liberal humanism, which values every person’s experiences enough to allow them to vote and speak their mind, is likely to change.

The question Harari leads the reader to ponder is: what story or value structure will come next? Harari suggests that the likeliest successor is Dataism, the belief in the value of connecting bits of information. I find this questionable. Harari does little to convince me that we will walk away from fundamentally valuing certain conscious experiences themselves. Perhaps this is because the work takes the contemporary materialist line on consciousness (which I will critique in a post next week). Regardless, I think the experiences of the most powerful beings around will likely dictate the value structure society operates under.

I do accept the likelihood that increasing information flow between people, cyborgs, and machines would usually provide net benefits to society at large. But I think the case for each such improvement will emphasize the amazing states of consciousness and harmony that increased information flow provides (as Harari in fact does when defending Dataism). This is different from valuing information flow a priori. The quadrillionaire cyborgs of tomorrow (perhaps a future Elon Musk) will likely not be pleased if increased information flow leads to their suffering and ultimate destruction.

Whether Dataism pans out or gets panned by its cyborg critics, Homo Deus will certainly expand your conception of what the future will look like and where we’re heading as a civilization. It stands as a creative, albeit lengthy, successor to Sapiens.

-Alexander Pasch

Book Review: The Closing of the American Mind by Allan Bloom

Rating: 4 out of 5.

Reading and digesting The Closing of the American Mind is an undertaking few books of its length (just under 400 pages) offer. For the curious and engaged, every page is a fulfilling series of passages. Together they chart an illuminating course through the history of philosophy and its connection to the modern American university, in all its eccentricities. The statistically minded and skeptical will undoubtedly be concerned by the confidence and ease with which Bloom draws connections between philosophical movements, university structure, and American culture circa 1987. Yet the purpose of the book is not to scientifically demonstrate a connection between any single philosophy and a specific outcome. Rather, it is to construct a comprehensive picture of the ideal role of higher education, and of how this role has been jettisoned. The problem Bloom articulates is that the university no longer fosters great intellectual minds curious about the great philosophical movements of the past and capable of integrating them into modern culture through a sense of perspective. This requires a broad exploration, the sort that could yield book-length responses to every chapter, if not every page.

At its core, the book attempts to demonstrate the philosophical nature of American higher education’s failure to properly educate its intelligent students in the subjects making up the Liberal Arts. Pervading universities is a derivative of democratic-minded thinking: extreme openness, relativism, an abandonment of any point of view, which paradoxically leads to a closed-mindedness. This cultural relativism eats away at its own intellectual grounding until even reason and rationality are sacrificed. What remains is a view of every intellectual and cultural system as fundamentally illegitimate.

Bloom makes clear that education must strive to remove prejudices. This must be done with a true openness that acknowledges the reality of knowledge and ignorance, providing a path to increase the former and decrease the latter. Universities must show students where their point of view originates in order to properly facilitate learning ideas within and outside their tradition. The alternative, relativism, is self-contradictory and impoverishes the mind. The relativistic maxim that everything is fundamentally equally untrue or amoral (neither moral nor immoral) leaves minds far out to sea, unaware of which direction to row.

It would be futile to delve, even briefly, into each of the specific topics Bloom discusses. Broadly speaking, however, he begins by describing the problem of relativism, articulates a broader malaise of the student body, connects the Enlightenment and continental philosophy (Nietzsche being particularly prominent) with contemporary American nihilism, chronicles the philosophies of learning leading to and constituting the university (from the endlessly curious soul of Socrates, to the Enlightenment creation of the modern university, to the counter-Enlightenment worries about corruption of the soul found in Swift, Rousseau, Kant, Heidegger, and others), and ends by delving into the university’s structure and purpose in the modern era. There is more than enough to disagree with, but Bloom’s work is never anything but engaging.

The most striking feature of this work is how acutely it describes the malaise of the American university I felt as a student some 30-odd years later: the hyper-compartmentalization of university departments, questionable campus political movements and the administration’s inability to stand up to them, racial self-segregation on campus, the splintering of students’ families, the prevalence of a self-contradictory relativism, and a dearth of perspective and deeper philosophical grounding. This is simultaneously heartening and, to a greater degree, depressing. We live in an age of increasing division pulling America apart. Insofar as we can look at prior periods and exclaim truthfully, “They had the same problems we do, and they stuck together long enough to produce me. Why can’t I do the same until my next of kin come about?”, we can have faith in perseverance itself.

Yet the fact that students, specifically in the humanities and social sciences, are continually let down does not inspire confidence in the state of American culture. If our university-educated students have little to no sense of purpose, place, or history, and fall into a nihilistic relativism, shouldn’t we expect our society to deteriorate? As costs rise, with student debt alongside; as diplomas matter more than the liberal arts education they purport to be attached to; as grade inflation increases and standards fall, universities won’t stagnate, but deteriorate. It remains to be seen whether the university can revive itself, particularly as a center of philosophical investigation designed to instill in the culture a sense of meaning, groundedness, perspective, and honest open-mindedness. Bloom strikes a solemn tone as he concludes, remarking that “the future of philosophy in the world has devolved upon our universities,” and the quality of their stewardship is in doubt. Had Bloom seen America from the vantage of 2020, it is hard to imagine this doubt would not balloon.