Why I am not a meat eater

I often find it interesting to notice points where people (whether myself or others) want to hold on to two conflicting ideas. One such conflict is apparent in the views Americans hold toward eating meat. On the one hand, 89% have a diet that includes meat. On the other, large majorities oppose the cruelest practices of factory farming, such as breeding chickens that grow so quickly they become crippled under their own weight.

These two views don’t necessarily need to contradict. It could be that someone thinks it’s ok to eat meat but that they should always avoid eating meat raised under horrific factory farming practices (this applies to fish too).

In fact, this is a position I still believe in theory and one I used to try to practice — buying grass-fed, organic, cage-free, and humanely raised meat and animal products. I thought that this was enough to entirely reconcile my disgust for factory farming with my desire to eat meat. But I began to realize that it was not the complete solution I had hoped for.

First, because I still labeled myself a “meat-eater,” I felt this label gave me a license to eat meat and animal products at restaurants or as ingredients in other products, despite having no idea where the meat came from. Given that 99% of farmed animals are factory farmed, it was safe to assume that the animals I was eating were raised that way.

Second, I didn’t really understand what most food labels meant. And, as this informative Vox article goes into, these labels don’t guarantee the ethical treatment of animals anyway.


Third, many US states have introduced ag-gag laws, which make it illegal to document practices at factory farms, even if such practices would be considered animal abuse off the farm. This means that it is extremely difficult to properly document the extent of the problems on farms of all types.

Therefore, it seems rational to believe that the default option for most farms, absent public oversight or scrutiny, is to ignore animal welfare concerns in a simple desire to reduce costs. In the long run, any farm that doesn’t do that is more likely to be overtaken by a farm that does. Further, without good evidence to the contrary, one’s default assumption should be that any meat they buy comes from a farm following the default option: one that follows the same cost-cutting, cruel, factory farming practices that exist throughout the country.

This is why I am not a meat-eater.

However, I know concern for animals isn’t exclusive to me and the people I know who don’t eat meat. Many meat-eaters make a point of telling me they share it.

But I often find them living a performative contradiction. I know very few meat-eaters who don’t eat meat at restaurants. I know very few meat-eaters who understand what the labels on their food actually mean. And I don’t know any meat-eaters who go out and research the conditions of the farms they eat meat from.

These are just anecdotes, but I’d bet almost all ethically concerned meat-eaters end up supporting factory farms anyway because of these factors.

I don’t eat meat, and try to avoid animal products in general, because I know that when I do eat them, they will almost certainly be factory farmed. This isn’t to say that prioritizing ethically raised meat makes no difference, but given the current, seemingly omnipresent reality of factory food production, eating animal products at all usually ends up perpetuating animal suffering.

However, it’s important to note that immediately becoming vegan is not the only way to help improve animals’ lives. It also should be said that excessive guilt is not healthy either. I do not eat vegan meals all the time and understand the difficulty of making animal welfare a priority.

That being said, there is a range of actions one can take to help animals. These include: not eating meat if you don’t know its source, cutting the worst-raised animals from your diet (chickens and eggs first), going vegetarian, or donating to animal welfare charities (you can see this video for ideas).

Also, if you eat meat you could ask yourself the following questions:

  • Do I know where my meat comes from? 
  • Have I seen any or read about any of the farms where the animals I eat are raised?
  • Are the animals I eat being raised and killed in a way I’m ok with? How could I find out?
  • What measures could I take to ensure I don’t eat factory-farmed products or reduce the amount significantly?

Note: I am trying to grow the readership of this blog. You can help by forwarding this to friends and family or anyone you think might find it interesting.

A Poem

Today I’m going to do something a bit different from usual. Here is a short poem I wrote about the way the world speaks to us and what to do when we listen properly.

The Speaking World
Alexander Pasch

The world does speak to us
through pain and bliss and song.
Its lyrics are sensation felt;
interpretation's wrong.

Through this we know of goods
which mindful moments show.
Bring these goods together now
In harmony and flow.

Thoughts on gratitude

Over the last several years, I have often reflected on the high levels of cynicism in my adolescence. That cynicism seems, in retrospect, to have been a poor psychological response to general insecurities and the academic challenges of high school. I now find that emphasizing the cruel and capricious parts of reality demonstrates neither strength nor coolness; it is usually a misallocation of attention.

This shift occurred as I began to read and hear more about the power of gratitude. It turns out that fostering a grateful and thankful mind is one of the best ways to battle depressive thoughts and feelings. It positions you to see the world in a way that emphasizes the positive aspects of life, and to experience more of the great things the world has to offer.

This is one thing that troubles me about the polarized, combative, and extreme media we consume today. It has almost become a cliché to say that we are being pulled, as a society, by the current state of media incentives to be less grateful, angrier, and less mentally stable. I have counteracted this in my personal life by focusing my attention on people who themselves focus their attention on the positive aspects of the world while remaining educated and realistic. Such figures include Tyler Cowen and Will MacAskill. This is no universal remedy, but it is something I recommend every infovore try for themselves.

However, as much as I would like to say that I have left the cynic behind, I still feel his grasp, pulling me back to my younger years. The most convincing idea that he left is the notion that I will inevitably miss certain parts of reality if I fail to fully grasp the terrible aspects. And to fully grasp the terrible aspects of life, I must experience misery myself. If not, I will remain ignorant and naïve.

Perhaps there is some truth to this, but the role of the cynic as a truth-seeker is easily overstated. I must understand that this part of my mind is in competition with all the other unique sub-personalities and thought patterns floating in my mind.

Furthermore, so long as I exist in this world, I will have the emotions of sadness, anger, fear, and grief to show me the negative side of reality. But this does not mean I should prioritize the negative and positive emotions equally. There are some experiences that are better than others, and many experiences worth being ignorant of. Knowledge, as it turns out, is not a master virtue that lies above all other considerations.

This Thanksgiving, I will remind myself again that gratitude is one of those “better experiences”, not only in the moment you experience it, but because of all of its positive externalities too. I wish you all the best of luck harnessing its power in your lives as well.


How to Increase Your Computer Efficiency With Shortcuts

Everyone knows the scene. The computer hacker, often with a ridiculous time constraint, is tasked with breaking into a virtual environment or stopping others from breaking into theirs. Often typing streams of green text, their fingers caress the keyboard in a seamless blur, putting thought into action and controlling the computer with simultaneous grace and speed. 

This is not reality, of course. Programmers and hackers do not break into defended systems in seconds; writing sophisticated code takes time and effort. But even if you never type streams of green text, you can still be far more efficient when you use a computer.

However, efficient programmers, software engineers, and hackers are still much more adept at manipulating their computers than the average person. Watching them can be reminiscent of the movie scene with the hacker. Screens shifting here and there. Text appearing, being highlighted, and moved around rapidly. 

Many of the lessons they follow are not exclusive to programming. Rather, anyone who wants to be able to more quickly plant their ideas onto the screen can and should take the time to learn how to become more productive on the computer. 

The following are three tips that I have learned over the past few years to decrease the amount of time I take doing menial tasks on computers and more effectively plant my ideas onto the screen. 

Some of you may already know these tips, but even if you do, it is doubtful you use them to their maximum effectiveness. In line with my previous article on personal productivity, I wanted to provide additional ideas to improve it. I hope they will help you.

  1. Shift your mindset around computer use

The first step to increasing computer efficiency is to shift your mindset around how to use a computer. Your goal should be to reduce the amount of time you spend using your mouse and shift that time towards the keyboard, and to shortcuts specifically. 

When you learn how to accomplish a computer task with a shortcut rather than a mouse, you significantly reduce the input time between a thought you have and an output. 

Usually the steps are as follows:

  1. Have a thought about what you want to do on the computer.
  2. Start moving the cursor toward the part of the screen associated with that thought.
  3. Finish moving the cursor (sometimes after over- or undershooting the place you wanted it to go).
  4. Click or double-click.
  5. Repeat from step 1.

If you know a keyboard shortcut, you replace the time spent moving your cursor across the screen with a rapid keyboard input.

While there is a learning curve (and it will take time upfront to learn the shortcuts), combined, the millions of times you might end up using shortcuts will save countless hours in the long run (8 days per year by one estimate)! 
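To get a feel for where such estimates come from, here is a back-of-the-envelope sketch. Every number below is an illustrative assumption of mine, not a measurement:

```python
# Back-of-the-envelope estimate of time saved by shortcuts.
# Every figure here is an assumption for illustration only.
mouse_seconds = 2.5      # assumed time to move the cursor and click
shortcut_seconds = 0.5   # assumed time for a practiced keystroke
actions_per_day = 500    # assumed shortcut-replaceable actions per day

saved_per_day = actions_per_day * (mouse_seconds - shortcut_seconds)  # in seconds
saved_per_year_hours = saved_per_day * 365 / 3600

print(f"~{saved_per_year_hours:.0f} hours per year (~{saved_per_year_hours / 24:.1f} days)")
```

With these made-up inputs the savings come to roughly a hundred hours a year; the “8 days per year” figure implies somewhat more aggressive assumptions, but the shape of the arithmetic is the same.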

  2. Memorize your shortcuts (and prioritize the important ones)

Once you convince yourself of the merits of using your mouse less, the obvious next step is to learn the shortcuts to all the tasks you want to keep on doing. 

To do this, you should prioritize learning shortcuts by their relative importance to your daily tasks on the computer. 

For example, if you spend a lot of time writing text, learn Ctrl-left/right arrow and Ctrl-shift-left/right arrow (Option instead of Ctrl on Mac). If you find yourself taking a lot of screenshots, learn Windows-shift-s (Cmd-shift-4 on Mac). And no matter what you do, you should be using Alt-tab and Ctrl-c, x, and v.

Below is a list of the most important shortcuts I use. 

*Note how the shift key works in shortcuts: it either reverses the direction the shortcut would otherwise go (1) or highlights text in a document (2).

E.g. (1): alt-tab vs alt-shift-tab (cmd-tab / cmd-shift-tab on macs)

E.g. (2): ctrl-shift-left arrow selects the word to the left, while ctrl-shift-right arrow selects the word to the right (option-shift-left arrow and option-shift-right arrow on macs)

General Computer Use

  • Undo / Redo: Ctrl-z / Ctrl-y (Windows); Cmd-z / Cmd-shift-z (Mac)
  • Switch applications (cycle right / left): Alt-tab / Alt-shift-tab (Windows); Cmd-tab / Cmd-shift-tab (Mac)
  • Find specific text on the page: Ctrl-f (Windows); Cmd-f (Mac)
  • Take a picture of part of your screen: Windows-shift-s (Windows); Cmd-shift-4 (Mac)
  • Open the task manager: Ctrl-shift-escape (Windows); Cmd-option-escape (Mac)

Browser Shortcuts

  • Go to the next / previous form field: Tab / Shift-tab (Windows and Mac)
  • Switch browser tabs (right / left): Ctrl-tab / Ctrl-shift-tab (Windows and Mac)

Document Shortcuts

  • Copy / Cut / Paste: Ctrl-c / Ctrl-x / Ctrl-v (Windows); Cmd-c / Cmd-x / Cmd-v (Mac)
  • Move cursor to previous / next word (while typing): Ctrl-left / Ctrl-right arrow (Windows); Option-left / Option-right arrow (Mac)
  • Select previous / next word: Ctrl-shift-left / Ctrl-shift-right arrow (Windows); Option-shift-left / Option-shift-right arrow (Mac)
  • Move to beginning of previous / next paragraph: Ctrl-up / Ctrl-down arrow (Windows); Option-up / Option-down arrow (Mac)
  • Select previous / next paragraph: Ctrl-shift-up / Ctrl-shift-down arrow (Windows); Option-shift-up / Option-shift-down arrow (Mac)
  3. Practice, practice, practice

Regardless of what shortcuts you memorize, their ultimate usefulness will emerge only when you integrate them seamlessly into your everyday keyboard use. To do this, you must actually practice them. Imagine their use case when you first see and learn them. As quickly as you can after that, start applying them before you forget. Like a new word, a shortcut can become an integral part of your vocabulary or forgotten forever. It just depends on how frequently you use it. 

So, in the moments after you learn a shortcut, it is crucial that you actually try it out for yourself. You can do some practice here. Even if it initially feels out of your way, I promise it will be worth it!

Are there any I missed? If so, please comment below or send me an email at alexanderpasch@gmail.com.

Notes on Utilitarianism by John Stuart Mill

Some Useful Terms

  • Ethics is the subfield of philosophy concerning the nature of right and wrong. 
  • Normative ethics is the subfield of Ethics concerning what standards to use when judging what we morally ought to do.
  • Consequentialism is a normative ethical theory that judges the rightness or wrongness of actions entirely on their consequences or effects.
  • Utilitarianism is a type of consequentialism holding that happiness ought to be maximized and unhappiness minimized.

Book Notes

Utilitarianism (1861) is the most famous book on the eponymous ethical theory. Due to its great influence on the study of ethics and its short length of just under 100 pages (allowing for its continual use in undergraduate classrooms), it has remained relevant to the present day. It has played a key role in the history of consequentialist ethical theories and can be credited, in part, for their popularity.

The book is divided into five chapters, in which Mill describes what the theory of Utilitarianism is (and is not), how people might be motivated by it, and his proof of it, before ending with an analysis of justice and its relationship to the theory. Throughout the first three chapters, it is notable how much time Mill spends deflecting canards: objections he considers meritless because they misconstrue what Utilitarianism is, some of them addressed before he even provides an outline of the theory itself.

This outline begins in chapter 2. The key principle Mill directs us to is the much-remarked-upon Greatest Happiness Principle: acts are right insofar as they tend to increase overall happiness or decrease unhappiness. By happiness, Mill is eager to point out, he does not mean the trite notion of momentary bliss, but all the aspects of life that are satisfying or pleasurable. Additionally, in discerning which types of happiness are best, he uses a controversial criterion: of any pair of pleasures, if everyone or nearly everyone who has tried both prefers one over the other, the preferred one brings the greater happiness.

This, Mill believes, demonstrates that so-called higher pleasures of mental, moral, or aesthetic quality are better than lower, sensation-driven pleasures. It also leaves philosophizing and intellectual thinking as some of the greatest pleasures around — quite convenient for Mill, given that this is what he spent much of his time doing outside of political advocacy.

Furthermore, Mill notes, other principles often embedded in moral language, such as veracity or virtue, still have purchase in Utilitarianism. However, these are secondary principles, which, while good guideposts to moral behavior, are not the ultimate deciding factors of right and wrong. The ultimate judge of rightness and wrongness is the degree to which happiness has been increased or decreased.

In chapter 3, Mill dedicates significant time to describing the ways in which Utilitarianism does not differ from most other ethical theories. The same psychological and social sanctions can be used to prompt people to perform moral actions. While it may take time for the tenets of Utilitarianism to seep into society through education and persuasion, the mental and social tools to prompt moral behavior are already there, even if what counts as moral is changing.

In chapter 4, we are asked to consider how Utilitarianism might be proved. As he notes, this is no direct proof, but it is the best that can be asked of a moral theory. Roughly, it goes:

  1. Everyone desires happiness
  2. The only way to prove what is desirable is to observe what people desire
  3. A person’s happiness is thus good for that person
  4. Therefore, the general happiness is good to the aggregate of people

Despite being warned that this was not a direct proof of mathematical strength, it does still feel underwhelming. Specifically, it is peculiar that Mill thinks it logical that one person’s happiness being good for them entails increasing the aggregate amount of happiness being good for the aggregate of people. Such a logical connection requires some other assumptions about what the aggregate of persons means and whether or not something can be good for them. 

In chapter 5, Mill considers the topic of justice. He searches for attributes common to conceptions of justice and finds them grounded in a set of emotions that deal with self-preservation, some of which are observed in other animals. These emotions, when constrained by social custom, motivate the creation of law. Mill points out that the etymology of justice demonstrates its deep connection to our legal foundations (jus means law in Latin). But justice, he states, is deeper than law, as the law itself can be unjust.

So, justice can be seen as the way society protects our moral rights, sometimes through law; this means we all have a stake in the creation of just systems. Mill connects this to utility, and the happiness principle, by noting that just systems secure people’s basic security and alleviate many of our most basic concerns about harms others might inflict upon us. But justice is not the foundation: it rests on deeper moral intuitions, and at the base of those intuitions lies the notion of utility. Furthermore, Mill argues, there are cases in which it would be moral to act expediently outside of what is just, yet within what is moral. Justice, then, delineates a class of moral rules that emerge in societies to satisfy certain common emotions of self-protection and fairness, and it is less fundamental than morality itself, which Mill states is determined by the principle of utility.

Together, the chapters lay out a series of passages that contain many influential and compelling arguments in favor of, at the very least, a prioritization of happiness in any ethical system, if not adherence to Mill’s version of Utilitarianism itself. Mill’s work has been followed by a series of derivative ethical theories and has done much to advance the expanding moral circle, where greater moral concern is given to women, the impoverished, those in other countries, and non-human animals.

How I (Try to) Stay Productive: A List of Tips


As I wrote about last week, the internet age has given us countless devices and apps designed to distract. It is sometimes hard to pinpoint exactly where an activity crosses from useful, or harmlessly entertaining, to full-on distracting. What seems clear to me is that, for most people using modern technology, this line is crossed often. Given this, I thought it might be useful to write about what I do to prevent distraction and increase productivity in my life.

I remember when I first started realizing that technology was going to be a serious problem for my academic prospects. In middle school, I often would procrastinate writing papers until the middle of the night before they were due. At this time I did not have a computer of my own, so in some sense I thought it a treat that I was able to use the family computer on a weeknight. I would usually end up watching Netflix until my monkey brain finally ceded control sometime in the wee hours of the morning and I began writing. 

Reflecting upon those experiences at the time, I knew it was a problem. I knew it would make it harder to succeed come high school, but I didn’t have a ready blueprint for dealing with it. I was also too confident in my own capacity for self-restraint to seriously ask for help.

Since then, I have gone through numerous strategies to help curtail the negative influence of technology in my life. Today, I rely on a combination of certain habits and certain restrictions on sites. The following are, I believe, the most important features of my current system.

I keep my laptop in the place where I do most of my work and, with almost no exceptions, do not bring it to where I sleep and do much of my reading. I do bring my phone with me, but I try to keep it away from where I sleep (or at least on the opposite side of the room if I need it for an alarm). I am still working on improving my phone habits, however.

In terms of technical steps to prevent distraction, I have found a few important apps, and some features of iOS 14.3, particularly useful. For my computer running Windows 10, I use an app called Cold Turkey to block every website on a list across every browser. It works by forcing you to install the Cold Turkey extension on each of your browsers in order for them to launch. You can then set schedules both for when websites on the list are blocked and for when you are able to edit the list. I have found this very effective at preventing me from accessing certain sites (e.g., YouTube, Reddit, and news sites) that I would otherwise gravitate to when bored and get sucked into.

On my phone I have a rather draconian system. The first thing I use is the built-in Content Restrictions setting on the iPhone. Here I allow access only to sites that I have whitelisted: Wikipedia, Google, and a handful of others. I have purposefully forgotten the Content Restrictions passcode so that I would need to reset it with my AppleID to change these settings.

This doesn’t work by itself, because it doesn’t prevent you from installing apps that easily evade the content restrictions. In response, I have deleted all apps that are even somewhat distracting and purposefully forgotten my AppleID password so it is more difficult to reinstall them. Hopefully Apple will eventually make it easier to self-regulate your usage and harder to bypass your own restrictions. I know my complicated setup isn’t for everyone, but you can still use the productivity features offered in iOS in less extreme forms.
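The whitelist logic described above can be sketched in a few lines. This is only a toy illustration of the idea, not how iOS implements it; the domain list and the function name are my own:

```python
from urllib.parse import urlparse

# Illustrative whitelist; the real one lives in the iOS Content Restrictions settings.
WHITELIST = {"wikipedia.org", "google.com"}

def is_allowed(url: str) -> bool:
    """Allow a URL only if its host is a whitelisted domain or a subdomain of one."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in WHITELIST)

print(is_allowed("https://en.wikipedia.org/wiki/Utilitarianism"))  # True
print(is_allowed("https://www.reddit.com/"))                       # False
```

Note that everything not on the list is blocked by default, which is what makes the approach draconian but effective.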

Here is a summary of my productivity tips:

Habits and Home Setup Tips

  • Separate your workspace from your sleeping and resting space.
  • Keep your computer in a different room from where you sleep.
  • Charge your phone in a different room (or at least the opposite side of the room) from where you sleep and where you work.
  • When reading, keep devices in a different room or put them where you can’t hear them.
  • Set times in your day when you can’t use the internet, particularly at night.

Tech Tips

  • Use Cold Turkey (or Self-Control for Mac) to block sites or apps that you think are distracting on your PC. Or, only allow yourself to use certain apps at specific times.
  • Use Content Restrictions on iOS to either block all non-whitelisted sites, or block specific sites you find distracting.
  • Turn off notifications from apps that you use too much.
  • Delete apps that you can’t stop using or that keep distracting you.
  • If you need a draconian measure, forget your passcodes and passwords that allow you to change these settings or download distracting apps.

Most importantly, I’ve found that this is a continuous process. You will not find the perfect setup for yourself immediately. The most important thing is that you don’t give up. Instead, accept incremental progress as you learn more about yourself and your habits.

Good Luck,

Alexander Pasch

Bottlenecks to Progress in the Internet Age

I have been reading A New History of Western Philosophy by Anthony Kenny, and it resurfaced thoughts that I have often had when learning about historical figures and everyday life in prior eras. In particular, how these figures overcame the dual problems of censorship by political and religious elites and the limited availability of information will always fascinate me.

The lack of access to crucial historical texts was perhaps the major bottleneck preventing philosophical progress in medieval Europe. In fact, the capture of Constantinople by Ottoman forces in 1453 proved critical for the Renaissance: it forced the Greek scholars who had kept the philosophy of Plato and other ancients alive to flee to Italy, where Scholasticism (the rigid fusion of Christianity and Aristotelianism) dominated. The spark of newly arrived classics was enough to light the flames of new philosophies that burned the Scholastic tradition to the ground.

Think about that. Works of Plato, lingering in Byzantine libraries for hundreds of years, simply needed to be transported across the Mediterranean and taught by the scholars who kept them to unleash a wave of progress from which the world is still reverberating. Obviously there were many factors behind the Renaissance, but it is a remarkable feature of this time that a relatively small set of books could cause such massive intellectual changes. In part, this is because there simply wasn’t that much new material to read. Something coming out was a big deal, even if it was a re-release. In fact, it wasn’t really until the 19th century that it became impossible to read everything worth reading in most subjects.

Beyond the scarcity of written material, religious and political persecution has been another persistent obstacle to progress in the Western philosophical tradition. The political turmoil in the lives of almost every major medieval and early modern philosopher is striking. Each writer had to self-censor, and many were forced to flee or were outright killed. To name a handful:

  • Boethius (tortured and killed by the Ostrogothic king Theodoric)
  • Giordano Bruno (denounced and burned at the stake in Rome)
  • Baruch Spinoza (excommunicated and exiled from the Jewish community in the Netherlands)
  • John Locke (fled England to the Netherlands to avoid political persecution before returning)

In stark contrast to this is the extraordinary availability of information today and the ease with which new ideas can be articulated. This is perhaps the most remarkable fact about our era (and what makes you reading this possible at all). It also raises the question: why, since the invention and wide-scale adoption of the internet, haven’t productivity and economic growth sped up more? One theory (articulated by Tyler Cowen) is that we have already picked much of the low-hanging fruit that yielded the massive economic progress of the 1900s. Science, likewise, is using more people to make less progress than it did in the past.

If this is true, then it seems that we hit a sweet spot for GDP growth and scientific progress somewhere in the 20th Century. Our intellectual and political climates were just good enough to unleash discoveries and inventions just out of reach of previous generations, but much easier to find than those to follow.

On the personal side, it might be hard to relate to GDP figures. But the relationship between personal productivity and economic productivity is a topic that still sometimes crosses my mind (despite how differently they may be defined). For myself, having been born in an age and place where the internet was nearly ubiquitous, and my capacity for distraction by it nearly endless, I wonder what its overall effect on our productivity has been.

On the one hand, learning has become unquestionably easier. Writing papers often involves cycles of typing, opening a new tab, searching Google, finding crucial information, and switching back to type up findings and analysis mere seconds later. This would have taken orders of magnitude longer in the pre-internet age but is now a seamless feature of students’ and writers’ lives. Educational content producers and random helpful figures on the internet are easily found and often filtered by how useful their information is. Finally, Wikipedia (which yesterday turned 20!) is always there to provide an overview of just about anything.

But that is helpful only when I am working. An expression which I have found most apt in describing my personal productive capacity is Parkinson’s law: “work expands so as to fill the time available for its completion”. The shorter the deadline, the more productive I will be to finish it. A longer deadline gives me time to slack off and fuel procrastination. And while procrastination has existed since the day man began working, the magnitude of its influence has grown larger than ever before.

The attractiveness of distractions has grown particularly as our attention has been commodified, with a profit motive attached to our eyeballs. Devices and applications are extremely efficient, not at improving your overall well-being, but at guiding your attention in whatever way software engineering teams see fit. This is a uniquely modern curse.

To bring this back full circle, I must clarify that I would unquestionably submit to the current challenges of slowing growth and hyper-distraction rather than those of intellectual scarcity and persecution. We have traded away the incredibly cruel world of the past for good reason. 

However, we must think harder about the questions posed by the information age. How should one deal with the experience of information overload and the increasing complexity of decisions (particularly major life decisions)? How should we design our relationship with our technology to leave us well informed, more in control, and less distracted? How should we think about the economy and our role in it — particularly if much of the low-hanging fruit has been plucked, and humans (with the same brains and bodies) are demanded to jump higher than before in order to achieve the same GDP growth achieved in the past?

The curses of the past have been traded away for lesser, and in some ways opposite, curses of the present. Acknowledging them, and answering the questions they raise is something I will continue to attempt. Luckily, the internet has shown me that I am not alone.

Consciousness: What it is and why it matters

This is part one in a series on consciousness

I’ve desired for some time to begin writing about my view on philosophical topics in an approachable but serious manner. With the advent of a new year, I figured I would now begin publishing weekly posts in this vein, starting with a series of posts on consciousness. 

By consciousness, I mean something quite basic: the fact of experience, or what it is like to be something (in Thomas Nagel’s sense). I do not mean self-awareness, or the capacity to reflect, report, or remember. Non-consciousness, then, is simply the absence of consciousness. I avoid the term unconsciousness because it often refers to parts of the human mind that lie out of reach of consciousness, and I am talking about more than just us.

My rationale for writing about consciousness is twofold. First, consciousness is in many ways foundational to everything we care about, especially in ethics, another topic of great interest to me. Understanding how widespread consciousness is remains crucial for developing our moral frameworks (lest we vivisect dogs again because we believe they are soulless) and our general theories of the world. Second, I believe the current dominance of a physicalism (the belief that physical matter is the only fundamental substance that exists) that treats non-consciousness as the default is misguided. I find that this version of physicalism rests on shaky premises, which I wish to investigate and challenge. 

That challenge is what I will begin in this post. Positing that something is non-conscious is, on reflection, an odd endeavor: it involves using your own conscious states to try to demarcate what, outside your own mind, is not conscious. 

When you imagine a rock, or another entity you believe to be non-conscious, you are using your consciousness to imagine or sense it. You cannot go a further step and imagine its non-consciousness, for imagining entails consciousness of some sort. Instead, what appears to occur is that you are unable to utilize your theory of mind on such an object, and perhaps with the assistance of other beliefs (that consciousness requires a brain, information processing, or movement) you then have the thought: “this rock is not conscious”. 

However, when you drill down on these thoughts, they become difficult to justify. The capacity to use your theory of mind does not determine whether any given person, animal, or object is conscious: we can imagine what a dead person would be thinking, while failing to imagine what a bat is feeling. Furthermore, investigating other beliefs about what is and isn’t conscious often reveals that they rest on the premise: things that are not sufficiently like me are not conscious. 

This is what I shall write about in next week’s post.

-Alexander Pasch

Consciousness: Where it might not be

This is part two in a series on consciousness

Continuing from last week’s post, I shall explore how exactly one can doubt the consciousness of the objects one encounters. Again, by consciousness I mean any type of experience something or someone might have, or what it is like to be something.

From the birthplace of modern philosophy, Descartes gave us irrefutable reason to say that conscious stuff exists, at least within anyone thinking the sentence “I think, therefore I am”. Beyond the odd solipsist, most everyone agrees that it is also reasonable to assume that other people are conscious. Today, we further assume that dogs and other mammals are conscious. What about trees? Grass? Rocks? The Sun?

I have found, for most of my life, an obvious answer to these sorts of questions. While the exact nature of what consciousness is remains mysterious, it was obvious to me that it was a product of the brain. The mind is what the brain does, to use a neuroscientific quip. Consciousness is something like information being processed, or a byproduct of a working functional system. 

Yet I began to doubt these answers as I considered the unity of nature — the fact that all things, including our bodies, are made of the same particles as stars, emergent from the same quantum fields. The trajectory of history also seemed to point in the direction of decreasing human distinctiveness (from Copernicus to Darwin to Goodall to AlphaGo), an expanding circle of moral worthiness, and a wider range of animals considered conscious.


So I investigated the actual premises — the underlying reasons — for a belief in non-consciousness. A starting point is noticing that human consciousness is profoundly altered by changes in the brain. This was noticed as far back as the Roman physician Galen, who wrote about gladiators whose head injuries left them permanently psychologically harmed. This presaged the connections between brain activity and conscious states that modern neuroscience has done so much to uncover.

From here, one might assume that the requirements for consciousness lie in certain properties of the brain (whether as an information processor or for the functional roles it plays in living organisms). After all, if the brain is harmed or sedated, you lose consciousness (or at least the memory of it). Every theory of consciousness therefore gets selected first by whether it explains the consciousness of those who can say that they are conscious. Right now, that’s just us humans.

But the problem is that you cannot get a restrictive theory of consciousness off the ground without an additional assumption: that anything not sufficiently similar to us humans isn’t conscious at all. Otherwise, there is no way to disprove countervailing theories of consciousness that describe non-human objects as conscious.

If you want to say consciousness emerges when brains, or similarly complex objects, are formed, I can come along and say, “yes, that is one example of consciousness, but consciousness also occurs when only relatively simple objects are present.” You have to fall back on the intuition that things not similar enough to us are not conscious. No matter what restrictive theory you offer to explain consciousness, there is no way to refute a wider theory of consciousness without that intuition.

The following argument articulates how this line of reasoning works:

1. I am conscious. 

2. I can sense many things that are not similar to me (or the body I consider mine).

3. Things that aren’t (sufficiently) similar to me are non-conscious.

4. Therefore there are many things that are non-conscious. 

The argument’s soundness relies on the intuition expressed in premise 3 (as well as on a vague notion of similarity). Yet every theory that restricts consciousness to some subset of things similar to us relies on it. Where this intuition arises from is of interest to me.
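Since the argument above is a simple syllogism, its logical skeleton can be checked mechanically. Here is a minimal sketch in Lean 4, where `Conscious` and `SimilarToMe` are hypothetical predicates standing in for the argument’s informal notions, and premises 2 and 3 are taken as assumptions:

```lean
-- Hypothetical predicates over some type of things in the world.
variable {Thing : Type}
variable (Conscious SimilarToMe : Thing → Prop)

-- Premise 2: there are things (that I sense) not similar to me.
-- Premise 3: things not sufficiently similar to me are non-conscious.
-- Conclusion (4): there are non-conscious things.
example
    (p2 : ∃ x, ¬ SimilarToMe x)
    (p3 : ∀ x, ¬ SimilarToMe x → ¬ Conscious x) :
    ∃ x, ¬ Conscious x :=
  match p2 with
  | ⟨x, hx⟩ => ⟨x, p3 x hx⟩
```

The proof goes through trivially, which is the point: all the philosophical work is done by premise 3. Formalizing the argument makes plain that the conclusion stands or falls with that single intuition.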

In next week’s post, I will investigate how this intuition might itself be emergent from the physicalist worldview, creating a circular argument.

Consciousness: Why people think it might not be everywhere

This is part three in a series on consciousness

Last week, I introduced the intuition that things “that are not (sufficiently) similar to us are not conscious.” This intuition matters because, without it, there is no way to ground a restrictive theory of consciousness. Put another way, without this intuition, you would find it impossible to defend the position that anything at all is non-conscious. It is present, whether explicitly or implicitly, in every restrictive explanation someone gives for why consciousness is or isn’t present somewhere.

One could argue that the intuition could be ignored by instead falling back on some other defining feature of consciousness. For example, if you believe processing information is necessary for consciousness to exist, you might think the phenomenology grounding this belief (consciousness simply is information processing) justifies it, and thus justifies restricting consciousness to information processors. Ostensibly, falling back on this belief could remove the need to rely on the intuition described above. However, this falls apart when you look at the details.

For one, there is no single definition of information processing. It could be that everything in the universe is describable as an information processor (perhaps in the way a particle or an object enacts the laws of physics in interacting with surrounding objects). But this ends up being an entirely non-restrictive theory. 

To counter this, one might make the definition of information processing more restrictive. However, for any restrictive definition of information processing, the phenomenological grounding breaks down. I can see how my consciousness might be, in some loose sense, information being processed. But it is very unclear, phenomenologically, why any one restrictive definition of information processing is the correct one. Without phenomenology to explain the choice of a specific restrictive definition, one would have to fall back, again, on the intuition that things insufficiently similar to us humans are non-conscious (as that type of information processing would happen to occur in our brains but not everywhere). 

This brings us to the question: where does this intuition come from? Why believe that anything we experience is non-conscious? I believe it is a consequence of our current physicalist worldview. If the things in our environment move like clockwork, as physics tells us, they can be predicted without any mention of consciousness. In that case, the fact that we are conscious at all becomes something special that needs to be explained. The explanation usually ends up being a restrictive theory of consciousness. Because most things in the universe aren’t like you, you can then use this theory to explain why those things are, in fact, non-conscious. This, in turn, can be used to justify the version of physicalism you began with (the one that explains the world without reference to consciousness). That, however, creates a circular chain of justification.

In next week’s post, I will conclude my thoughts on consciousness by addressing some critiques and discussing why I think this topic is relevant in the first place.