Thoughts Had while High on Coffee near DuBois, Pennsylvania

by Jasper Gilley

I’m currently sitting in the passenger seat of a Honda Odyssey on a road trip from Chicago to eastern Pennsylvania. Since some of my highest-quality blog posts have been incubated in similar situations (see Highway Rest Stops, From Worst to Best), I thought that such success might be replicated at the moment, which is why I got out my laptop just now. Unfortunately, no unified topic on which a blog post might be written comes to mind, so in place of hard-hitting journalism like the aforementioned post about highway rest stops, you’ll have to be content with a compilation of random thoughts akin to those I often have on such road trips. Kind of like stand-up comedy in its spontaneous jumps from topic to topic, but less funny. Perhaps a coherent subject will emerge, but that remains to be seen. Worst case scenario, you’ll get a taste of what it is like to be in the brain of Jasper during a road trip when there’s nothing else to think about; that is, however, pretty bad as far as case scenarios go.

Also, I had three or four cups of coffee about an hour ago. This might already have been discernible. The hotel at which I stayed last night had, in addition to the usual “regular” and “decaf”, a “robust” blend of coffee, which I tried and liked. However, my present state of cardiovascular over-stimulation leads me to wonder what exactly was done to that coffee in order to make the term “robust” applicable. I haven’t done cocaine yet in my life, but TV would lead me to believe that the experience is comparable to that which I’m currently experiencing.

As I was writing the above paragraph, I passed an encampment of shacks which initially seemed too trashy to be inhabited, but presently appeared to have smoke issuing from the chimney of one of them. Very quintessentially Appalachian. The following may sound classist, but it’s actually anti-classist: little terrifies me as much as the idea of being the person living in that shack. Not because it’d be a terrible life experience – on the contrary, it not being a terrible life experience is precisely what’s terrifying. If you’re the person living in the shack that I initially thought too trashy to be habitable, the only reason you’d have to be conscious of the fact that you’re living a terrible life is the fact that you’re by an interstate on which nice cars occasionally drive by. So, you’d certainly be jealous of the people who are driving by, but really only because they have structurally sound houses and nice cars in which to drive. You have no idea why those people have structurally sound houses and why you don’t (if you did, you wouldn’t be living in said shack.) You probably also have no idea what the Opium Wars were, why/where/when they happened, or how they might relate to you, someone who likely has no reason not to overdose on fentanyl. This just sounds like the most awful thing in the world to me. What’s even worse is the idea that someone – maybe they actually exist, maybe they’re just the product of a thought experiment – is thinking the same thing about you.

That was a depressing train of thought.

Anyway, I’m a mile away from the exit for DuBois, Pennsylvania. There are a lot of trucks on the road out here by DuBois, Pennsylvania, and their continued existence/proliferation is rather baffling to me. Trucking seems like a vastly inefficient method of shipping goods, given the combination of its physical inefficiency (one truck engine pulls a lot less freight than, say, a train engine does) and its manual labor intensiveness. That’s before you even get to the fact that trucks all still use internal combustion engines. I suppose a self-driving, electric truck is definitely better, but having such discrete, relatively small vehicles for shipping goods still seems suboptimal. I’m certainly aware that trucks’ existence is subsidized by the federal government in the form of highway upkeep (and also gas subsidies!), but it still seems like a pretty 1970 way of doing things. Technological progress disparity is absolutely at play here. After all, the Saturn V had been built and extensively utilized by 1970. It’s weird that a system as massive, complex, and elegant as the Saturn V would be fully formed by the late 1960s but that gas-burning trucks (essentially unchanged since the 1930s) would still be in use after one-fifth of the twenty-first century has elapsed.

I’ve been thinking lately that I’d love to try and write emotionally non-ambiguous nonfiction. Or maybe a much more abstract version of fiction. Or maybe a version of fiction with greatly increased information content. I read in a Paul Graham essay or tweet somewhere that the reason he doesn’t read much fiction anymore is because its information content is generally much lower than that of nonfiction (that is, on any given page, there’s simply less to process.) It seems like we’re actually in somewhat of a golden age for nonfiction – not for nonfiction books, but for nonfiction blog posts and/or internet-distributed essays. As far as I know, however, there’s no similar internet-native textual medium for fiction. So maybe that’s what I’d like to work on. It might end up being more like poetry (not Robert Frost-style poetry as much as T.S. Eliot-style poetry), but that wouldn’t necessarily be a problem. This is because I personally have basically 0 idea how to write characters (that is, story-type characters, not like Chinese characters), so anything fiction-esque that I write would necessarily be more abstract, if still emotionally non-ambiguous.

There’s still a bit of the Robust Coffee from the hotel sitting in a paper travel cup next to me. Since beginning the Word document in which this blog post is contained (as of this writing) – this is getting very meta – my coffee-induced delirium has subsided, which is tempting me to have some more coffee, because it’s not bad coffee and I like the taste of coffee, even if it’s bad coffee, which this coffee really isn’t. At the moment, I’m trying to figure out whether or not I enjoy being very high on coffee, in general. I usually think I do, especially if I have something productive to do, because coffee is great at making you be more productive. But I certainly wasn’t enjoying being high on coffee half an hour ago. Maybe this was because I didn’t have anything to do? I was, after all, sitting in the passenger seat of a Honda Odyssey several miles outside of DuBois, Pennsylvania.

Here’s an insight that sounds like it was incubated when I was high not exclusively on coffee: one might view the life and relationship to coffee that white-collar workers have as being one of aliens who are condemned to consume a certain drug every day in order to alter their mental state and achieve things they’re not biologically meant to achieve. That is, white-collar workers regularly bludgeon their minds into doing unnatural things (like working 9 to 5) by consuming psychotropic substances, which happen to be known as the benign “coffee.” Or, as the case may be, no-collar (e.g., sweatshirt-wearing) college students on winter break bludgeon their minds into the unnatural? act of writing silly blog posts by overdosing on the potent, psychotropically active drug known as Robust Coffee™.

It’s now been a couple of hours since the coffee dalliance that originally inspired this post. While I’m still somewhat affected by the coffee, I’m not sure I can any longer claim to be high on it. This, combined with the fact that I’m no longer terribly near DuBois, Pennsylvania, forces me to conclude that the two criteria specified in the title of this post – namely, being high on coffee and being near DuBois, Pennsylvania – are no longer really satisfied. As a result, subsequent thoughts had by me will not be Thoughts Had while High on Coffee near DuBois, Pennsylvania; they’ll just be Thoughts Had. Hence, the necessity of hereby concluding this post.

Why People Enjoy Impeding Human Progress

Nietzsche wrote in Thus Spoke Zarathustra of der letzte Mensch and der Übermensch – the last man and the ultimate man. The last man is the impulse, both in society and in individuals, to stagnate, to fail to desire, and to seek the cheap half-pleasure that comes from thinking the same things as others and as one’s temporally-static self. The ultimate man represents the burning willingness to seek betterment, to desire more, and to live.

On first reading, the notion that societies and/or the people within them could actively strive against their own betterment seems preposterous. It is certainly counterintuitive. But it’s actually very prominent – in some ways, the rule, not the exception. This post is an exploration of the factors that might go into creating real world last men.

Every society and every individual is either engaged in an internal struggle between these two impulses or in a state of utter concession to the last man. One of the most beautiful aspects of the film Interstellar is this struggle’s centrality to its plot. Nietzsche’s letzter Mensch is embodied in the citizens of Earth, who insist that the Moon landings were faked, who are content to practice agriculture, and who prove incapable of saving themselves from self-wrought existential calamity. Der Übermensch is represented not by any particular character, but by the overriding spirit with which the main characters carry out their mission of redeeming those on Earth (it is important to note that every character involved in the saving of humanity makes a crucial, self-defeating error at some point.) One of the most fascinating aspects of the film is that there is no public support for the space program which eventually saves humanity. Thus, Earth’s citizens are in many ways already dead at the beginning of the film, and NASA’s saving of humanity is tantamount to their resurrection.

Real-world humanity, of course, is embroiled in its own instance of climate change, which threatens to (at the very least) necessitate significant modifications to the way in which humans survive. Interstellar’s elegance, however, lies not so much in its parallels to real-world climate change as in the analogical framework it provides with which to assess many areas of humanity’s being. Read between the lines just a little, and it’s disturbingly clear the extent to which the day-to-day actions/thoughts of even the most scientifically-minded of us mirror those of Interstellar’s plebeians. The latter’s insistence on the Moon landings’ fraudulence, for instance, could be viewed as similar to many real-world humans’ belief in entirely empirically unjustified superstitions, such as: the (shockingly pervasive) aversion to genetically modified foods, belief in conspiracy theories like that of the flat Earth (to cite a particularly widely circulated example), or belief in self-abnegating health fads, such as many diets, fat jigglers, etc. Meanwhile, the blindness to existential catastrophe exhibited by Interstellar’s plebeians can be compared not only to real-world humanity’s considerable indifference to climate change, but to blindness to smaller-scale detrimental phenomena such as the Victorian-era fad for arsenic-laced wallpaper, antibiotic resistance, or the truly, purely illogical aversion to civilian nuclear power. In short, we tend to care about avoiding GMOs instead of avoiding antibiotic resistance and about saving the whales instead of saving the humans. A particularly spectacular moment of such silliness was on display during a webcast Tesla shareholders’ meeting, when a PETA activist asked executives from the company doing the single most to save humanity from climate change whether leather would be removed from their cars’ seats anytime soon.*

Ultimately, however, what appears to be a monumental defect in humanity’s brain programming when viewed from a macroscopic perspective turns out to be quite understandable when viewed microscopically. Perhaps unsurprisingly, issues like nuclear power, GMOs, and saving the whales take precedence in the public forum simply because they present more concrete bogeymen than do the other issues mentioned. Because these fallacious views are powered by nothing other than the evolutionary brain programming inherent in every human, perhaps the only way to improve matters along these lines in the short term is to provide better science education.

The crux of the problem of public attitudes towards scientific/technological progress therefore resides not in anything anthropogenic so much as in the natural selection-designed reward functions upon which our brains operate. Firstly, this should serve as a reminder that enlightenment (i.e., sentience) is a retroactive hack applied to very primitive biological structures. Further, there seems to be a very narrow set of parameters under which this hack can even be applied – enlightenment has historically flamed up and died down again many times (compare the inhabitants of classical Athens to those of 1000 AD, or consider the fact that China had vast ocean-faring capabilities during the early Ming dynasty and then proceeded to lose them, or the fact that the Vikings reached North America around 1000 AD and then proceeded never to return.) It is by no means unthinkable that the current batch of enlightenment should die out, perhaps permanently (as noted above, unenlightenment of some form or another is constantly brewing in every corner of human society.) Therefore, if the barrier to permanent enlightenment is ultimately the structure of the human brain, why not change said structure?

This could be accomplished in one of several ways. For instance, one could enable the human brain to have instant auto-installing access to all of the world’s information via a brain-computer interface (Wikipedia is a trove of the world’s information, but it must be manually installed by the user via the process of what we term learning, which is difficult, tedious, and time-consuming.) It also bears reiterating that since intelligence (i.e., learning) was never really a selected-for trait in humans’ evolutionary history, the human brain’s reward function doesn’t give out direct hits for engaging in it. If one changed the human brain’s reward function (either crudely or subtly) so that humans got a hit of dopamine whenever they learned something new, one could vastly change the ways that humans spend their time and energy.

Of course, to do so is to make an implicit (or maybe explicit) judgement about the value of the ways in which we should spend our time. While this is obviously something that many a Luddite will bemoan, it’s really worth noting that if we don’t make those value judgements for ourselves, they’ll be made for us – either in the form of sensory input manipulation a la social media (go on Instagram and try to avoid being told how to spend your time!) or in the form of time value incentivization (something that’s always been done by every economy, pre- or post-capital.) Indeed, giving humans the ability to change what their brain’s reward function selects for might eventually be viewed as the point at which humans became fully human! (Or, at least, viewed as a significant milestone before which living would seem incomprehensible to someone born after it – just like the evolution of brains beyond the limbic system, for instance.)

Until then, however, I’m not actually terribly optimistic about the ways in which human society will progress, given the increasing manipulation of the empirically experienceable world by many of the platforms through which we choose to look at it. I think there’s often a sort of zeitgeist of every invention, in which its specific solution becomes increasingly necessary/obvious only in the years preceding its release. Consider (for example) the rise of electric vehicles – the problem they solve has slowly gone from bad but not urgent (and therefore something that only holier-than-thou hippies care about) to dire. Correspondingly, the proposed solutions have gone from absurd and moralistic (“save the whales!”) to legit (electric vehicles, solar power.) I think the problem of sensory input manipulation is probably just getting to the stage of absurd/moralistic solutions (GDPR, “you own your data”.) The zeitgeist for climate change solutions is just beginning (insofar as real solutions will be deployed at scale starting now and running through the next 10-20 years) and the zeitgeist of sensory input manipulation is still a good 30-40 years away, at least. Within those 30-40 years, any number of very problematic things could happen, made worse by the only proposed solutions being the absurd/moralistic ones. (On a related note: I’d really love it if the era of brain input awareness brought an end to people’s affinity for absurd/moralistic solutions. This seems like it’d be possible with the tech necessary to bring about brain input awareness, but I feel like it’s almost too much to hope for. Maybe that’s just me being old, though.)


*I’ve always cringed when people refer to Millennials as “cynical”, because the only justification you could possibly have for this claim is entirely superficial. I guess you could say that someone is cynical (albeit wrong) who insists that the moon landings were faked, or that GMO foods are inherently bad for you. But, especially in the case of the slightly more plausible latter example, “cynicism” is really not much more than a repudiation of the values espoused by previous generations. This is only really cynicism when viewed from the perspective of someone older. On the contrary, I think many millennials who gladly participate in what I’ll term bougie activism are very credulous. Cynicism is by definition contrarian, so a generation with strong, uniform moral views (like Millennials) cannot possibly be cynical. Victorian morality was cynical, until it wasn’t.

Experience Machine Ethics

The following post was originally written as a paper for a class I took at Northwestern University this past spring.

 

One of the most enduring themes in science fiction has been that of an experience machine: a system that gives its users immersive, customized experiences on demand. From Star Trek to The Matrix to X-Men to Doctor Who, viewers and readers have for decades struggled with the implications of the immense, potentially double-edged power such machines would wield. Since the dawn of the 21st century, however, such dialogues have taken on a new urgency, fueled by the rise of sophisticated digital systems that, while by no means capable of creating entirely immersive customized experiences, have begun to be used in ways that suggest they are a sort of proto-experience machine: consider recent developments in virtual reality, gaming, and customizable digital entertainment, for instance. Therefore, to give thought to the philosophical, ethical, and practical ramifications of the genesis of experience machines at this point in time is no longer analogous to speculating about the implications of interstellar human travel — it is closer to mid-20th century philosophers considering the potential implications of a future humanity connected to a universal communications network.

This paper will seek to examine the ethical problems that would be presented by the ultimate emergence of a complete experience machine, and subsequently present a code of ethics intended to guide institutional responses to these ethical problems. As we will see, the experience machine will ultimately represent not just a mind-boggling new technology, but a critical milestone in the macro-scale development of humanity.

 

One of the first commentators to seriously consider the implications of the genesis of an experience machine was Robert Nozick, in a brief portion of his 1974 book Anarchy, State, and Utopia. Nozick immediately points out in this book perhaps the most obvious ethical dilemma that would be created by experience machines: people might very well have no reason to leave the experience machine. After all, why should one waste one’s time living in the real world (where many things are difficult and unpleasant) when one could simply live in a world that feels real in which things are guaranteed to be easy and pleasant? It is perhaps likely that even Nozick understates the sheer magnitude of this potential problem. Much of the first several paragraphs of Nozick’s treatment of the experience machine concerns whether one should plug in to it. Ample evidence, however, shows that humans reliably and voraciously take any opportunity they can to escape reality: the average American, for instance, escapes from reality for over five hours per day by watching television in its various forms. And this is simply escape from reality using a medium as relatively primitive as a moving picture screen — imagine how much more seductive a complete experience machine would be. Ultimately, the ethical problem that people would never leave the machine leads to a variety of sub-problems, too: why reproduce with a human (a very complicated affair) when you can simply have the experience of sex in a machine on demand? Why work, or do any of the real-world things that can be rewarding, but demand effort? These questions are largely practical ones, but they can easily become ethical issues, too, when a utilitarian moral lens is applied to them; issues that coming generations of humans will almost certainly need to grapple with.

Another potential ethical problem associated with the experience machine is the way it might affect how humans think. If one spends all day in a pliable world built to bring one happiness, how will that person begin to view the real world differently? In the real world, for instance, one often must interact with difficult individuals, a class of person that would not exist in the experience machine. It is not unforeseeable that people used to interacting with experience machine “people” would have no patience whatsoever for difficult real-world individuals, lacking the (important) skills necessary for conflict resolution, mediation, et cetera. More insidiously, someone who spent a lot of time in the experience machine would almost certainly be used to sex on demand, without any way of navigating the complicated (but incredibly important) topics like consent that surround real-world sex. A humanity plugged into the experience machine en masse might therefore be a humanity full of sexual assaulters with nonexistent people skills.

The experience machine might also generate perverse financial incentives. In the modern world, the primary method of disseminating technological advances has been (and will almost assuredly continue to be) capital markets. Depending on the market structure used to deliver experiences to the end user (e.g., if experience machines are themselves sold, or if time in them is sold), the companies selling experiences might very well have an incentive to keep the user in the machine as much as possible — potentially at the cost of the user’s physical/emotional health. It is not hard to imagine an experience machine in which experiences are designed to keep the user using the machine, much like today’s social media platforms are designed to keep the user clicking, scrolling, and liking. Just as Facebook has deployed advanced behavioral psychology to make its product as addicting as possible, so might experience machine companies make experiences as addicting as possible (and imagine how much more addicting an experience machine might be than a social networking website!) The only foreseeable solution to this perverse incentive structure would be selling experience machines themselves directly to the end user, so that the experience machine company’s revenue is not directly tied to its product’s addictiveness. However, depending on how expensive each experience machine is, this business model might not make sense.

A final ethical problem likely to arise from the genesis of experience machines is their potential usage for unpleasant experiences. One can easily imagine experience machines replacing waterboarding as the world’s interrogation technique of choice. The possibilities along these lines are incredibly frightening: imagine being trapped in a nightmare that feels fully real and from which one cannot possibly wake up. Not much more need be said about this possibility, but it is worth bearing in mind that experience machines could be used as the ultimate torture as well as the ultimate entertainment.

 

To guide mass usage of experience machines, a code of ethics is in order. The following one is largely motivated by the aforementioned ethical problems associated with experience machines, and seeks to balance consideration of what is right with what is feasible. It is primarily addressed to corporate makers of experience machines and government regulators.

 

The Mostly Comprehensive Experience Machine Code of Ethics

 

  1. No entity — government, corporate, or otherwise — should seek to impose a limit on the amount of time individuals can use the experience machine or seek to alter the way in which the user desires to use the experience machine, with the following exceptions.
    a. Interference is permissible if the user displays signs of experience machine-induced mental illness. Every experience machine should come with a built-in evaluator of the emotional/psychological/mental health of its users that uses machine learning algorithms to detect behavior known to be anomalous. If the evaluator raises any red flags, the user should be gently returned to the real world, and designated for psychiatric help.
    b. Experiences that are very likely to make their users dangerous to others in the real world may be pre-censored, so that no user may obtain that experience. Such experiences may include those likely to alter the user’s underlying psychology in a detrimental way.
  2. Full cognizance of the power of the experience machine should be incorporated into the machine’s design and operating norms.
    a. Participation in an experience machine should require express written consent.
    b. Participants should never be held in an experience machine against their will.
    c. Every experience machine should have a “kill switch” that, when activated, immediately returns the user to the real world.
    d. Participants should have full veto control over whatever experiences they may be having. While machine-generated experiences are permissible, they should always be able to be overridden by the user.
  3. Experience machines should be operated with awareness of the financial incentives they may generate.
    a. Experiences should be entirely anonymous. Under no circumstances should data from the experience machine be mined and re-sold to advertisers, or provided to government agencies.
    b. The utmost efforts should be taken to promote individual ownership of experience machines, potentially even in the form of a government subsidy.

 

While some aspects of this code of ethics may seem self-explanatory, others are likely to require extensive normative justification and descriptive explanation. What thus follows is a clause-by-clause discussion of the values represented in the code of ethics as well as how those clauses are likely to interface with the ethical problems laid out at the beginning of this paper.

 

There can be said to be two core values underlying the Mostly Comprehensive Experience Machine Code of Ethics. The first is techno-liberalism: that is, an enduring commitment to liberal values in light of new technological developments. The second is anti-parochialism: an unwillingness to view technological developments outside of their historical context (in the case of the experience machine, that of the history of entertainment, very broadly.) These core values are elaborated upon in the clause explanations to which they are most relevant.

The first clause of the Mostly Comprehensive Experience Machine Code of Ethics is largely founded in the idea that the right of human individuals to make choices should be respected, in accordance with fundamental liberal values. It thus delineates only two specific cases in which oversight of individuals’ usage of the experience machine is permissible. While it may seem that this first clause sets a dangerously high bar for interference in such a manner (to the point that it would likely not actually prevent the first ethical problem — over-usage — from occurring), it should be noted that to take a more conservative approach to the experience machine’s usage is to take a strong stance on the moral value of entertainment in general. Any argument that might be given in favor of restricting usage of the experience machine (such as the fact that it wastes time, isn’t productive, etc.) could also be applied towards restricting usage of Netflix, for example. It is important to note that the experience machine would be the terminal innovation in entertainment, not an entirely new class of entertainment — this is the core insight of anti-parochialism as applied to the experience machine. That is, the experience machine would unite and improve upon many different aspects of entertainment that humans currently consume — including television, video games, and pornography — and yet those current iterations are distributed more-or-less uninhibited (largely due to contemporary society’s adoption of liberal values.) The class of ethical problems at which the first clause is aimed, therefore, is likely to remain unsolved in the era of the experience machine.

The second clause of the Code of Ethics is motivated largely by concern for the well-being of experience machine users, as well as by techno-liberalism. With regard to the likelihood of implementation: subclauses a., c., and d. seem very likely to be implemented naturally by market incentives, whereas, especially if the technology becomes widespread, subclause b. seems rather unlikely to be adhered to. As long as there are governmental regimes with a need for extracting information from prisoners, the experience machine seems likely to be used as the ultimate enhanced interrogation technique, unfortunately. Nonetheless, there is undoubtedly value in making a normative statement on the subject. This code of ethics’ prohibition of experience machine torture again rests in techno-liberalism: respect for the human rights of detainees, even in the era of experience machines. Hence, the subclause’s inclusion, despite the relative unlikeliness of its implementation.

Finally, the third clause of the Mostly Comprehensive Experience Machine Code of Ethics is motivated directly by a techno-liberal commitment to preserving the sanctity of human minds — thus, the clause’s efforts to combat addiction financially. If experience machines are too expensive for most people to afford, but cheap enough that most people could afford to rent time in them, their individual purchase might be subsidized by the government (for qualified buyers, at least.) Some might balk at this proposition, given its nontraditional use of public funds. However, this reaction likely understates the extent to which the experience machine would be the ubiquitous terminal innovation in entertainment. The ability to have customized, seamless entertainment on demand might be so compelling that access to it might begin to be seen as a fundamental human right — in which case its subsidization might well be seen as perfectly sensible. Thus, subsidization of access to the experience machine would kill two birds with one stone: it would ensure that experiences are not made unnecessarily addicting, and that most people would have equal access to the machine.

 

It is hardly deniable that the above code of ethics, if implemented, would do little to hamper usage of the experience machine, potentially even usage en masse. What if, one might argue, virtually everyone plugged into the experience machine, and was never heard from again in the base reality? Wouldn’t that cause problems both ethical — billions of people would live in a “false” reality — and practical — no one would be around to feed everyone?

Firstly, it must be pointed out that the potential practical problems associated with this possibility are ultimately secondary to the ethical ones, since it is naïve to think that free markets would not furnish a method of (for instance) supplying the plugged-in humans with nutrients, given the scale of economic demand that would exist for such a service. The ultimate question is thus whether a scenario almost akin to a self-imposed version of The Matrix (in which billions of humans are permanently in a near-vegetative state while plugged into the experience machine) is ethically desirable.

To reach such a conclusion, it is helpful to once again invoke anti-parochialism. As mentioned in an earlier section of this paper, the average Westerner dedicates a very significant amount of time to entertainment — an amount that our not-so-distant ancestors (in addition to citizens of third-world countries) would find obscene. Since the experience machine will really just be the consummate version of the entertainment we now consume, those of us in 2018 might do well to remove the proverbial log from our own eye before attending to the speck in our distant successors’. That is, given our contemporary culture of entertainment — we spend some 33% of our waking hours consuming it, after all — one might begin to suspect that detractors wouldn’t really have an ethical objection to plugging into the experience machine (even for long periods of time) if they had the option to.

Secondly, if every human plugged into the experience machine on a quasi-permanent basis, they would spend their lives living in a world explicitly designed to bring them happiness. Since the experience machine could (in principle) emulate the base reality to the smallest iota, one would have no reason to believe it could do any worse at bringing happiness to the user than the base reality. Therefore, since it is a commonly held principle that one should generally respect others’ pursuit of happiness (it is, after all, enshrined as a human right in such terms in the United States’ Declaration of Independence), who is to say that one’s neighbor has no right to pursue happiness by permanently plugging into an experience machine?

Finally, to assign superior ethical value to the base reality is to fundamentally misunderstand some of its attributes. In order to make the argument that a virtual reality has less inherent value than the base one, one must resort to denigrating it on the basis that it is only made "real" in the user’s mind. But the same can easily be said of the base reality! One can interpret Descartes’ famous observation cogito, ergo sum as consisting precisely of the recognition that reality is only real insofar as we make it so. In the Critique of Pure Reason, Kant likewise observed that "all appearances are together to be regarded as mere representations and not things in themselves, and accordingly that time and space are only sensible forms of our intuition, but not determinations given for themselves or conditions of objects as things in themselves. To this [idea] is opposed transcendental realism, which regards space and time as something given in themselves." If one were to spend one’s life from birth in an experience machine identical to reality, and were then removed from it in middle age and forced to spend the rest of one’s life in the base reality, the results would be no different than if the realities had switched places. More concisely, both the base reality and a virtual one would be based entirely on axioms — admittedly, axioms of varying obviousness — but all axioms nonetheless. It is therefore extremely difficult to maintain in a philosophically consistent manner the position that it is unethical for the world’s humans to plug into the experience machine en masse.

 

Upon its genesis, the experience machine is likely to be a technology that introduces new ethical problems, compounds old ones, and forces a reconsideration of many aspects of what it means to be human. A code of ethics, therefore, that puts the experience machine in its appropriate historical context and keeps in mind what so many humans hold to be the most important liberal values is of paramount importance in bringing this new technology to a stable, safe, and powerful fruition.

Nonetheless, it is perhaps somewhat paradoxical to speak of putting the experience machine in historical context, for while the ways in which humans use it may have historical context, the technology itself most certainly will not. It has been put forward as the terminal innovation in entertainment — but entertainment itself may prove to have an expiration date sometime in the not-too-distant future. As we continue to discover the intricacies of the human brain, and as digital systems continue to grow in sophistication and scale, the two seem likely to join in an ever closer union, to the point where no distinction between man and machine can truthfully be drawn. It is not unlikely that those future humans, capable of editing their brain-function at will, would have little use for something as pointless as entertainment. Therefore, the experience machine may represent the final technological invention of the pre-cyborg era, and thus should be viewed in the most serious of lights. It might be said that as humanity gradually assembles the parts for a complete experience machine, it will simultaneously assemble a monument to the twilight of its primitive era.

Transcendental Idealism

I was writing a paper for a class during the past academic quarter, and while researching for said paper, I came across an idea originally posited by Kant called transcendental idealism. Here is Kant’s definition of transcendental idealism, from the Critique of Pure Reason:

I understand by the transcendental idealism of all appearances the doctrine that they are all together to be regarded as mere representations and not as things in themselves…consequently, we can only cognize objects in space and time, appearances. We cannot cognize things in themselves.

I interpret this rather confusingly-worded paragraph as essentially consisting of the recognition that, since we can only perceive the universe through the medium of our senses (i.e., we can’t feel the universe directly), everything that to us forms reality can only be said to exist in our brains. While this may seem like an insight that could even be described as trivial, I believe it applies to a pretty wide range of important phenomena, perhaps precisely because it constitutes a way of looking at reality that is inherently (and understandably) foreign to us.

For instance, this notion can go a long way towards explaining the sometimes seemingly bizarre qualities that mass interpersonal interactions can take on. Transcendental idealism would hold that the people in our lives are only made real to us as representations in our brains, meaning that we have a natural inclination to fail to cognize the inherent humanity of strangers and those with whom we have spent little time.

As an example of why this matters, consider the now-ubiquitous interpersonal interactions on social media. A Twitter user spouting views you (perhaps rightly) perceive to be ignorant is just that, for all you know — a Twitter user, not a person. As a result, one is inclined to treat such Twitter users very differently than we would treat a close friend who espoused similarly ignorant views. Most people would simply explain this discrepancy by pointing to the fact that you’re “closer” to your close friend, and thus more tolerant of their views. In reality, I think, there’s more to it than that: there’s a fundamental difference in the way that you view their existence.

Transcendental idealism can thus explain a lot of why mass politics can be so rabid and/or unintellectual. It’s easy to berate those awful Wall Street investment bankers when they’re just despicable, selfish people with no concern for others. Likewise, it’s easy to trim the welfare budget when its recipients are just lazy people mooching off the government. The thing is that those bankers do care about strangers just about as much as you do, and that welfare recipients are really just as self-serving as you are (in aggregate, at least.) In other words, this blog post might as well be an advertisement for the school of classical economics that views all humans as rational, profit-maximizing actors. What’s really counterintuitive about this economic approach is that it treats other people in a vastly different way than we are accustomed to in real life. This is part of why I think that the viewing of life through an "economic" lens, as useful (and ultimately humanitarian) as it is, is really quite rare.

A slight tangent to back up my thesis about economics being humanitarian: it took until the 1970s or so for crime to be viewed in economic circles as nothing more than the rational actions of self-interested actors (usually those of lesser means.) Until then, policymakers (who by definition were not the ones committing crime) tended to view crime as an example of moral failure, which is understandable, since they couldn’t really cognize the fundamental humanity of criminals. Gary Becker’s famous paper which popularized this economic view of crime was thus incredibly counterintuitive, revolutionary, and humanitarian (and ultimately exemplary of the liberal worldview.) My guess is that the United States’ failure to treat crime in such an economic manner accounts for 90% of the problems inherent in its criminal-justice system.

Unfortunately, this also sheds light on just why the economic view is generally un-pervasive: the illiberal view (that is, the viewing of humans as anything other than rational, fully human actors) is supported by almost every interaction we have! It’s particularly bad in online interactions, but it’s also the case in most in-person interactions in industrialized society. You have no communication whatsoever with 99% of the people with whom you interact on a day-to-day basis beyond looks/stereotypes, etc. Therefore, it’s understandably difficult to think of an entire group of people with whom you have little interaction as fully human. As usual, it’s helpful to remember that humans were evolutionarily designed to live in tight-knit groups of ~100 people, and that the way we currently interact with each other bears almost no resemblance to this.

Is there a solution in sight to this massive systemic problem with every interaction we have? I’d like to think that the advent of brain-computer interfaces will facilitate the development of some technology analogous to the Vulcan mind-meld from Star Trek, and that that technology will finally force us to cognize the humanity of every other human. Knowing humans, though, we’ll probably find some way to create an even bigger problem out of this potential solution. A lot of people thought the internet would perform some function analogous to this, and while they may have been right in some ways, they were very wrong in others.

A few final things: it could be pretty easy to take some aspects of this post out of context and cite them as evidence of my extreme moral depravity. Even if you don’t do this, you might have a hard time stomaching my assertion, for example, that both bankers and welfare recipients are (in aggregate) rational, profit-maximizing, fundamentally human humans. Either way, you’ve missed the entire point of this post, and you should read it again.

Also, sorry if this post read somewhat like a slightly more intellectual version of “can’t we all just get along?” I maintain a high level of derision for people who legitimately believe this, and fail to undertake any sort of a rigorous analysis of just exactly why people never do seem to get along. Be aware that this post was simply an attempt to explain in analytic terms an aspect of the world that I’ve been thinking about lately, not any sort of a normative statement.

Also, sorry for the long hiatus prior to this post. I haven’t stopped maintaining this blog, and hope to deliver more posts shortly.

Thermoeconomics

by Jasper Gilley

The house in which I grew up has a driveway, which slopes down from the house to the street at a very modest incline of perhaps 1 or 2 degrees below the horizontal. Ordinarily, this is almost unnoticeable, but a few times every year, someone would use a garden hose at the top of the driveway. On such occasions, the water would travel down the ~30 feet of driveway at an average rate of something like 2 inches/second, taking sometimes meandering and sometimes direct paths to get to the drain at the curb. Observing this process was inevitably a source of infinite entertainment for me, noting the water’s progress and sometimes aiding it in its quest.

As it turns out, mathematicians, physicists, and computer scientists are equally infinitely entertained by observing and modeling analogous processes, which in their disciplines go by the somewhat confusing name of gradient descent. Whether the gradient being descended is a map of differential temperature zones, a literal map of mountains’ elevations, or a map of mechanical strain as a function of applied stress, gradient descent is a powerful method for optimizing systems.

A sidenote: the mathematical construct used to analyze the “slope” of multidimensional functions is simply called the gradient. If you remember high school calculus, the gradient is literally just the multidimensional analog of the derivative. That is, if the derivative is the slope of a single-variable function, like so:
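In symbols, the derivative of a single-variable function f(x) is the familiar limit of rise over run:

$$f'(x) = \frac{df}{dx} = \lim_{h \to 0} \frac{f(x+h) - f(x)}{h}$$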

…then the gradient is the slope of a multidimensional function, like so:
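In symbols, for a function of two variables, the gradient just collects the partial derivatives into a vector pointing in the direction of steepest ascent:

$$\nabla f(x, y) = \left( \frac{\partial f}{\partial x}, \frac{\partial f}{\partial y} \right)$$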

This means that the seemingly complicated mathematical process of gradient descent can be visualized in an extremely intuitive way:
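To make the driveway analogy concrete, here is a minimal Python sketch of gradient descent on a made-up, bowl-shaped "elevation" function – the function, step size, and iteration count are purely illustrative choices, not anything canonical:

```python
import numpy as np

def grad(p):
    # Gradient of the toy "elevation" f(x, y) = x**2 + 2*y**2,
    # whose lowest point (the drain at the curb) is at (0, 0).
    x, y = p
    return np.array([2 * x, 4 * y])

def gradient_descent(start, step=0.1, iters=200):
    p = np.array(start, dtype=float)
    for _ in range(iters):
        p -= step * grad(p)  # take a small step "downhill"
    return p

print(gradient_descent([3.0, -2.0]))  # ends up very near the minimum at (0, 0)
```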

This is literally just the same process as water flowing down the driveway. The best part about the gradient operator, though, is its symbol, which is known as the nabla symbol: ∇

You’d be lying if you said that wasn’t the most elegant mathematical operator you’ve ever seen.

Anyway, one of the fields to which the gradient is most applicable is thermodynamics. (The term thermodynamics literally means something like "heat in motion" – thermo = heat, dynamics = the study of changing systems – and moving heat requires the existence of temperature differences, which can be expressed either by the derivative or by the gradient, depending on the system’s dimensionality.) When temperature differences exist, work can be done via what is known as a "heat engine":

The way to read this diagram is that the heat engine exploits the difference in temperature between the "hot reservoir" and the "cold reservoir" to generate some form of useful energy, which in the diagram is the blue outflow on the right. However, not all of the temperature difference can be converted to energy – by necessity, some heat flows straight to the cold reservoir and the temperatures partially equalize (this is, formally, a consequence of the Second Law of Thermodynamics.)
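For what it’s worth, the standard upper bound on how much of that temperature difference can be turned into work is the Carnot efficiency, which depends only on the two reservoir temperatures (measured in kelvin):

$$\eta_{\max} = 1 - \frac{T_{\text{cold}}}{T_{\text{hot}}}$$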

Here’s the interesting part: in economics, a firm can be viewed as exactly analogous to a heat engine.

Let me break this down. Let’s say you’re a consumer looking to buy an iPhone. You have determined that you’d rather have an iPhone than an extra $500 in your bank account (or $1000, if you’re in the market for an iPhone X.) So in the diagram above, you’re an individual with too much cash. You pay Apple $500 (the red arrow), which Apple counts as revenue. Apple then distributes about $250 to a large number of people in China (this is the Expenses arrow), who build your iPhone and ship it to you. (Essentially, these people have determined that they’d rather have about $2.50 than an hour of their life, so they’re the people with too little cash.) Apple then pockets the other $250 (the Profit arrow.)
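To put that flow of cash in one line:

$$\underbrace{\$500}_{\text{consumer's payment}} = \underbrace{\$250}_{\text{expenses (producers)}} + \underbrace{\$250}_{\text{profit (Apple)}}$$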

If you just consider this one exchange, you’re looking at a single-variable difference in supply and demand. So it could be effectively analyzed using single-variable calculus (i.e., the derivative.) Of course, thousands of people are doing the same thing as you every minute, which is why Apple has about $285 billion lying around. If you imagine every individual having their own unique point in xy-space, you might begin to be able to visualize it more like this:

The pink points are people who are buying iPhones, and the blue points are the people who are producing them. Apple simply exists to facilitate cash exchange between the two groups of people. What is really interesting, of course, is that this map is constantly changing as people transact. So immediately after buying an iPhone, your own pink point goes back down to 0, but as your iPhone gradually becomes obsolete, your point begins creeping back up.

You can also look at it in the reverse way:

That is, the individual with too much supply (in the case of Apple, of labor, or any of the materials needed to make an iPhone, like glass, steel, or microchips) gives said supply to the firm, which uses its unique advantages (which I have termed proprietaries) to bring the product to the consumer. Proprietaries are essentially whatever makes a business more valuable than its competitors: usually some combination of unique technology, network effects, economies of scale, and/or branding. It’s incredibly important to note that in the above diagram, the blue consumer gets more product than the red producer put in, since proprietaries add to the value of the end product. That is, in the case of an iPhone, the producer gives $250 worth of product to Apple, and Apple gives $500 worth of product to you. So Apple’s proprietaries (mostly branding and economies of scale) are worth exactly $250 per unit sold. Or, they’re worth $285 billion in total, which is obviously a lot.

Looking at economic transactions through a thermodynamic lens also yields an explanation of what is sometimes known as Schumpeter’s Law of Creative Destruction. As transactions occur, variance in the gradient field of supply and demand tends to cancel out, as discussed above. But a firm’s intrinsic value derives from its providing a conduit for supply/demand value equalization. This explains why firms inevitably have a limited lifespan – the process of their cashing in on value alters the value differential landscape, eventually destroying future value, unless the firm adapts to accommodate the landscape. (Viewed through this lens, being an entrepreneur is nothing more than being able to recognize supply/demand value differentials and creating a channel to equalize them.)

What’s really interesting is the manner in which the proprietaries are created. Namely, the cash value of creating the proprietaries is not equal to the cash value of the proprietaries themselves! So when Steve Jobs (or some unknown designer) designed the Apple logo, for instance, they didn’t do anything other than some clicking in Adobe Illustrator. Even when you factor in the cost of paying that designer and the cost of Adobe Illustrator, there’s no way it adds up to the literal monetary value that that logo has provided Apple over the years. So that designer literally created value ex nihilo! This might be the first time the analogy between firms and heat engines breaks down, because a fundamental tenet of all physics is that energy (the analog to value) can neither be created nor destroyed. Barring hitherto-undiscovered quantum mechanical subtleties making the conservation of energy not exactly true, this is a prime example of the fact that humans are not quite the same as hydrogen atoms. Perhaps the real question, however, should be why humans aren’t the same as hydrogen atoms – because for most of human history, economic growth was stagnant, and the analogy would have applied all but perfectly.

Fiction, Reality, and Nihilism – Part II

by Jasper Gilley

Since writing the last post, I have realized that music videos are perhaps the best example of just absurdly blatant emotional non-ambiguity. Watch, for example, the music video for Taylor Swift’s You Belong With Me: it’s a fantastic video, in my opinion, but it’s incredibly emotionally non-ambiguous.

Anyway, since the last post, Fiction, Reality, and Nihilism (which can now be retroactively labeled as Fiction, Reality, and Nihilism – Part I), I’ve had some more thoughts on the same subject. To start with, a few loose ends.

I realized that dreams, especially of the deep REM sleep sort, are almost without exception emotionally non-ambiguous. Even if it’s not a dream about anything that in your waking life you’d consider emotionally non-ambiguous, your experience during the dream is reliably anything but nihilistic. I honestly have no idea why this would be the case – is it just a recharging mechanism, or a hint at the broader neurochemical structure and meaning of dreams? Considering the fact that we have almost no idea why dreams occur or how they work (or what they were doing, to paraphrase Stonehenge from Spinal Tap), I don’t know if anyone is really qualified to speak to this.

Secondly, I think that the emotional non-ambiguity of novels in particular, and of fiction more generally, is related to what I consider to be a fallacy regarding the rationale for consumption of fiction. I was in a class about a week ago, when the professor, a reasonably famous literary critic, suggested that the reason one should read realist novels such as Anna Karenina is to gain experience looking at the world through different human lenses, thus becoming a wiser human. I agree with this idea, but is it really justifiable to claim that this is the only, or even the pre-eminent, reason one should read novels? I think not: for this argument really only applies well to realist works like those by Tolstoy and Dostoevsky, and not as well to absurdist or surrealist works such as Waiting for Godot, Don Quixote, or The Phantom Tollbooth (which happens to be one of my favorite books.) Does reading The Lord of the Rings really give you life experience? I’d argue that a large part of the value of a book derives not from its practical applications, but from its intrinsic emotional non-ambiguity. To humanities scholars, who represent a discipline currently in a bit of an existential crisis, this might seem a less compelling argument, but I think modifying the rationale for the humanities to fit a more essentially utility-maximizing STEM-y framework destroys some of the humanities’ value. We established in the last post that telling emotionally non-ambiguous stories through books, music, or art appears to be one of the most basic human needs. I think the humanities exist simply to fulfill that need, not necessarily to help us in real life.

To illustrate this point, I will point to the TV show The Office. Of TV shows that have aired in recent memory, I think it’s probably the closest in artistic approach to the realist novels. For it is a show about essentially lifelike characters in a humdrum office in a humdrum town doing a humdrum job. All of the episodes center around essentially mundane scenarios that would be very likely to occur in real life. I think one of the big giveaways is that the show has no background music, so that when something mundane happens, you’re not told how to feel about it – just like in real life. The very lifelike nature of the show is, I think, why some people (such as my mom) “just don’t get it.” If you only watch a bit, it can seem entirely emotionally ambiguous, and who would consume emotionally ambiguous fiction?

I think the brilliance of the show, though, resides in its ability to generate emotional non-ambiguity out of realistic scenarios. This allows for greater subtlety than would be possible if the viewer’s disbelief had to be more strongly suspended. Also, The Office’s lack of an obvious intended emotional state heightens its subjectivity, another essential ingredient in subtle art. For these reasons, I argue that anthropologists of the year 3000 will watch The Office both as a tool for learning what life was like in the year 2010 and as a tool for gaining entrance to the psyche of people in 2010.

This also gets into the difference between abstract and representational art. Both are (or should be) emotionally non-ambiguous, with their only difference being the amount left up to the interpretation of the observer. It’s not remotely obvious, for example, how a Jackson Pollock piece is supposed to make you feel, whereas one can discern instantly what reaction cheap art like Thomas Kinkade’s is supposed to elicit from you. To be fair, it’s not at all obvious how the Mona Lisa is supposed to make you feel, even if it’s not technically abstract art. So perhaps the distinction between abstract and representational is almost synonymous with the distinction between good and bad art – and perhaps it would be more accurate to consider the Mona Lisa a form of abstract art.

Finally, on the post Fiction, Reality, and Nihilism [Part I], an astute commenter wrote:

Isn’t nihilism when you think life has no meaning? Just because something is emotionally ambiguous doesn’t mean it does not have meaning. One could say that it is existentialist in that you can make your own meaning from the ambiguity. Isn’t it even more meaningful when you choose what the meaning is rather than have it on a silver platter?

It seems that an important question is whether or not “meaning” and “emotional non-ambiguity” are synonymous. As a thought experiment, consider what one might mean when one says one’s life has meaning. For a broad example, consider the stereotypical millennial who decides that their meaning in life is to follow their passion (whatever their passion might be.) Then this millennial’s meaning is their passion. But “passion” is a very emotionally charged word, to the extent that one might as well define it as “that which one perceives as emotionally non-ambiguous.” A musician’s passion, for instance, is the music, the whole point of which is to be emotionally non-ambiguous. So for all intents and purposes, “meaning” is the same thing as “emotional non-ambiguity.”

So a definition of nihilism as “the belief that life has no meaning” is essentially the same thing as “the belief that life is emotionally ambiguous.” Analogously, existentialism might be alternately defined as the belief that humans must choose their own means of viewing the world in an emotionally non-ambiguous fashion. Of course, if the world is fundamentally emotionally ambiguous*, this makes no sense. But since perception is the intermediary between humans and the world, if one chooses to view the world in an emotionally non-ambiguous fashion, can it really be said to be emotionally ambiguous?

Maybe so. It depends on how completely one can enforce the dogma of emotional non-ambiguity. Perhaps the important thing is not that one actually believes the world to be emotionally non-ambiguous, but that one strives to believe as much.

This is a very Nietzschean conclusion, but I’m not sure I agree with it. Honestly, I have no idea where this post went. It didn’t really answer any questions, but maybe the point of thinking about these kinds of things isn’t to generate answers but to raise questions.


*Emotion is a very anthropogenic construct. Why would the universe have any non-anthropogenic emotional bias?

Fiction, Reality, and Nihilism

Fiction, Reality, and Nihilism

by Jasper Gilley

Here is a video of the Star Wars throne room scene without music. Please watch it:

The first time you watch this video, it’s hilarious. The second time, kind of hilarious, but not as much so. The third time, it gets a little disquieting.

With music, the scene is triumphant and fun and mildly funny and you can kind of forgive the extreme cheesiness. Once the novelty of there being no music in the above video wears off, the scene is about ten times cheesier than it otherwise would be, but it’s also morally and emotionally ambiguous. We don’t know if Han and Luke and Chewie are the good guys or the bad guys; whether they’re getting a reward from the Rebel Alliance or the Galactic Empire.

In other words, the scene would be a lot more like real life. Because for almost all real-life political award ceremonies, nobody cares. Can you name the most recent recipient of the Congressional Medal of Honor? Even if you do care (or perhaps because you care), someone is bound to see it in precisely the opposite manner. Which award is “good” and which award is “bad”: the Congressional Medal of Honor or the Order of Lenin? For most of the 20th century, the population of the world would have been very close to evenly split on this question (the population of the world that cared, at least.) An awareness of this reality is, by some accounts, the very definition of nihilism.

I think the role of film scores gets at the heart of the difference between real life and entertainment, be that entertainment film, literature, or even (sung) music. In entertainment, there is an emotional non-ambiguity from which the consumer derives enjoyment. Specifically, I would argue that the consumer derives enjoyment from the process of observing a universe which has an unmistakable emotional direction and pretending that their life does too.

Consider the genre of romantic comedies. At their core, such films inevitably possess a very blatant emotional non-ambiguity: two likable characters fall in love and live happily ever after. Everyone sees themselves as likable and thus identifies with the characters, pretending that they are a character and that their life is equally non-ambiguous. Ultimately, the comedy aspect of romantic comedy is just orange juice to help the vodka go down – that is, it is designed to distract moviegoers from just how non-ambiguous the film is (and possibly also to help make it socially acceptable for men to see the film.)

Romantic comedies are an extreme example. But most everything else that one might term entertainment ultimately relies upon emotional non-ambiguity. The non-ambiguity of action movies: person blows up bad guys, is hero. Watcher identifies with hero and feels heroic. Love songs (especially country love songs) employ the exact same mechanism as romantic comedies, usually sans comedy. Party songs (such as Taio Cruz’s Dynamite) are designed to give their listeners a little taste of non-ambiguous party whenever they are listened to. Many books are emotionally non-ambiguous, as anyone who has read teen fiction will know. Even bad paintings do something similar. Consider the following painting by Thomas Kinkade:

Could it be any more emotionally non-ambiguous? (I sincerely apologize for exposing you to such bad art, by the way. I can’t look at this painting for too long without wanting to vomit.) Even Thomas Kinkade’s slogan (“the Painter of Light™”) screams non-ambiguity. On an unrelated note, how bad could your taste in art possibly be if you think buying a painting from a painter who has a slogan is a good idea? On another unrelated note, here is a chart I made which sorts artists by quality and abstraction:

However, not every work of fiction fits nicely into this mold, especially when you start considering high-quality fiction. Romeo and Juliet, for instance, certainly has a lot more going on in it than a simple tale of emotional non-ambiguity. But anyone who has seen it performed will certainly attest that it is emotionally non-ambiguous, albeit in a much more subtle way than The Hunger Games. Likewise, The Matrix employs the standard action-movie emotional non-ambiguity (as discussed above), but builds on that core with what I believe to be a first-rate consideration of the relationship between humans and machines, which is why I consider it the defining film of its era (that of the computational-technology revolution.) Not everyone who saw The Matrix picked up on such subtleties, obviously, but the majority certainly picked up on its emotional non-ambiguity, leading to its box office success.

By no means should this blog post be read as a dismissal of the value of emotionally non-ambiguous entertainment. If The Matrix were not pleasurable to watch, there would have been little audience to ponder its thoughtful message, let alone finance its $63 million budget. Indeed, I believe this to be the formula upon which all great art (be it film, music, or literature) is built. Emotionally non-ambiguous content provides the audience; creative individuals provide the artistic quality. This is why I believe that the contemporary music that will be remembered by posterity will not be that which is written with the intention of preservation for posterity. Much contemporary “classical” music, for instance, is not obviously pleasurable to listen to. (Footnote: I use quotes around the term classical in this sentence because I consider classical music to be by definition that which is good enough to be remembered by posterity.) Most EDM, however, is. The pleasurable base (and the audience and thus financing it provides) can be built upon by creative individuals to create meaningful (but still emotionally non-ambiguous) music. Indeed, genuinely great music inextricably intertwines the two.

Hamilton epitomizes this phenomenon. It is undeniably founded in the blatantly emotionally non-ambiguous tradition of Broadway musicals. But it uses this foundation to create something fantastic, not to mention incredibly popular (along the way, it also does a much better job of telling history than most history books do.) This is also probably why the genre of opera has served and continues to serve as an important catalyst for new musical developments: the emotional non-ambiguity is largely taken care of by the story, allowing composers great freedom in how they choose to present the story. I make no distinction, by the way, between Hamilton and opera. Listen to The Barber of Seville and Hamilton and tell me without prejudice that the former has any greater emotional depth than the latter. (For those hung up on the fact that Hamilton is partially rapped, consider that much of The Barber of Seville consists of proto-rap-like recitativo, speech-like singing that helps to advance the plot.)

Unfortunately, unlike fiction, real life is rarely, if ever, emotionally unambiguous. As argued at the beginning of this post, even in those exceptional circumstances when somebody believes there is a villain or a hero, there is inevitably somebody else who believes the opposite. And no real-life relationship is anything close to as perfect as those depicted in romantic comedies. That reality is fundamentally emotionally ambiguous is, I think, the central insight of nihilism – no more, no less. Oddly enough, to a society as saturated in emotional non-ambiguity as contemporary human society is, the fundamental emotional ambiguity of real life begins to seem exceptional. How might this saturation affect us psychologically? As far as I know, not much thinking has been done on the subject.

Voyager: Making America Great Again Since 1977

Voyager: Making America Great Again Since 1977

by Jasper Gilley

Of the many shocking things associated with the 2016 American presidential election, not the least was Donald Trump’s campaign slogan: Make America Great Again. All else about the election aside, I find this slogan fascinating because it expresses a very prevalent sentiment in contemporary politics, whether implicit or explicit. No matter the policies one advocates, insinuating that the application of said policies will return America to some mythologized great past gives one’s policy recommendations a strong emotional urgency. Using the word great to describe a nation, especially in a temporally dynamic context, raises the question: what makes a nation great?

Unfortunately, the answer to this question is almost always defined in a way that invokes pathos much more than logos. Make America Great Again was an effective slogan for the Trump campaign because it allowed voters to conjure up a nostalgified memory of a bygone era of their choice, and believe that Trump would bring back said era. Even if politicians answer the question what makes a nation great? directly, answers diverge on each side of the aisle. Republicans might say that liberty and unbridled capitalism make a nation great, whereas Democrats might say that a nation is great which ensures the welfare of its citizens.

Is there any comprehensive definition of what constitutes a great nation? I will argue that a comprehensive summary of America’s greatness, at least, lies in a school bus-sized metal contraption which is currently 13.15 billion miles away from the United States itself.

I am referring to the Voyager 1 unmanned space probe. Along with its confusingly named sister probe, Voyager 2, it was launched in 1977 to make scientific observations of the Solar System’s outer planets – namely Jupiter, Saturn, Uranus, and Neptune. Here is a photo of Voyager:

My bad; wrong Voyager. The Voyager from the 20th century (as opposed to the 24th) looks like this:

A good bit less exciting, but on the upside, it’s not fictional. Anyway, the Voyager probes returned far more detailed scientific data about Jupiter and Saturn than the Pioneer probes that preceded them, and were the first to visit Uranus and Neptune at all. Along the way, they snapped some iconic photos that you’ve almost definitely seen without realizing which probe took them. Here’s a photo that Voyager 1 took of Jupiter:

Here’s a photo that Voyager 2 took of Saturn:

And here’s a photo that Voyager 2 took of Neptune:

It’s worth noting that any decent photos you see of Uranus or Neptune were taken by Voyager 2, because no other probes have been there since. Finally, here is the iconic Pale Blue Dot photo, which was snapped by Voyager 1 in 1990 as it headed out of the Solar System, already well beyond the orbit of Neptune. Earth appears as the little point in the middle of the rightmost streak of light:

After concluding their observations of the gas and ice giants (the latter being a recently-coined term for Uranus and Neptune), the Voyager probes headed for interstellar space. Of the two probes, Voyager 1 is going faster, so that on August 25th, 2012¹, it became the first terrestrially-made object to cross the heliopause and actually enter interstellar space. Here is the link to a ridiculously cool NASA site that displays in real-time how far the Voyager probes are from the Earth and the Sun, along with their velocities, and how long it takes light to travel from them to Earth (as of the writing of this post, the better part of a day in the case of Voyager 1.)
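For the curious, the “better part of a day” figure is easy to sanity-check with a couple of lines of arithmetic. Here is a minimal sketch in Python; the numbers are rough, hard-coded values rather than live data from the NASA tracker:

```python
# Rough sanity check on the one-way light-travel time to Voyager 1.
# (Approximate, hard-coded values; not live data from the NASA tracker.)
distance_miles = 13.15e9              # Voyager 1's distance from Earth, as quoted above
speed_of_light_mph = 186_282 * 3600   # miles per second converted to miles per hour

light_time_hours = distance_miles / speed_of_light_mph
print(f"One-way light time: {light_time_hours:.1f} hours")  # ~19.6 hours, i.e. the better part of a day
```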

All this is very cool, you might say, but how is it relevant to us back on terra firma who must contend with the incessant flux of human politics? For one, adherents to the Make America Great Again emotional dogma might consider that there is right now an American flag speeding towards the stars 190 times faster than a bullet train. If humanity were to drive itself to total extinction (by irreversibly making Earth inhospitable for life, say), then billions of years in the future, long after the death of anyone who might have even known of the existence of humans, there would still be an American flag speeding around the Milky Way galaxy. Indeed, there is so much space in between the objects in the galaxy that neither Voyager probe is even remotely likely to crash into anything, ever. So when the last star fizzles out on the way to the heat death of the universe, some 10^100 years from now², the Voyager probes will still be zooming around empty space with their American flags, long after every life-form that ever lived in the universe is dead. To me, that is more than enough to pronounce the United States a permanently great nation. It is certainly a better criterion for being a great nation than whatever Donald Trump or anyone else has in mind.

Unfortunately, Voyager isn’t a very good compass to dictate concrete policies in most areas that need policy-dictating. Perhaps, though, it should simply keep us aware of how cosmically ephemeral we are, let alone our governments’ policies.

I usually hate it when science is hijacked and used to draw questionable conclusions about the social sciences or the humanities (social Darwinism comes to mind as an extreme negative example of this phenomenon.) Forgive my apparent hypocrisy on this matter by noting that I’m not attempting to advocate a policy agenda, but simply arguing that an awareness of a particular astronomical reality should factor into our collective emotional orientation. Which, it could be argued, is the root of whatever malaise, if any, may be pervading the US at this point in time.


1 – Weirdly, Neil Armstrong also died on August 25th, 2012.

2 – This is an unimaginably huge number. For comparison, there are only about 10^80 particles in the observable universe.

The End of Academia

The End of Academia

by Jasper Gilley

“Mathematics is, to a large extent, the invention of better notations.” – Richard Feynman

Elon Musk recently started his most ambitious company yet. Considering he is currently the CEO of a company that seeks to make humanity a multi-planetary species, a company that wants to halt anthropogenic climate change, and a company that wants to eliminate urban traffic congestion, such a statement should (at the very least) elicit incredulity.

Elon’s new company, Neuralink, seeks to develop a brain-machine interface (BMI) – a device that would allow the human brain to communicate directly with computers (and vice versa.) I’ll let Wait But Why explain:

Everyone working on BMIs is grappling with either one or both of these two questions:

1) How do I get the right information out of the brain?

2) How do I send the right information into the brain?

The first is about capturing the brain’s output—it’s about recording what neurons are saying.

The second is about inputting information into the brain’s natural flow or altering that natural flow in some other way—it’s about stimulating neurons.

These two things are happening naturally in your brain all the time. Right now, your eyes are making a specific set of horizontal movements that allow you to read this sentence. That’s the brain’s neurons outputting information to a machine (your eyes) and the machine receiving the command and responding. And as your eyes move in just the right way, the photons from the screen are entering your retinas and stimulating neurons in the occipital lobe of your cortex in a way that allows the image of the words to enter your mind’s eye. That image then stimulates neurons in another part of your brain that allows you to process the information embedded in the image and absorb the sentence’s meaning.

Inputting and outputting information is what the brain’s neurons do. All the BMI industry wants to do is get in on the action.

The potential implications of the mass commercialization of BMI technology are incredibly significant. We could, for instance, alter our brains at will, consume experiences-on-demand (à la the Holodeck or the Matrix), and merge our consciousness with AI. These possibilities, however, are really too enormous for us to fully comprehend, especially as so much of the underlying technology has yet to be developed. I would actually argue that, because of the speculative nature of such possibilities, reading speculation on them gives you a better insight into the emotional predispositions of the speculator than the actual content itself. Thus, for a non-emotion-based discussion, it may be more helpful to consider a better-defined consequence of the advent of BMIs.

Notations

If you found a 10-year-old and asked him to evaluate the following mathematical expression, he’d probably be stumped:
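Say, the definite integral that comes up again later in this post,

$$\int_0^{10} 2x \, dx$$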

However, most 10-year-olds probably have an intuitive understanding of the concept of area, and he’d probably be able to find the area of a triangle (which is essentially the same thing as evaluating the above integral.) Over the next eight years of his life (at least), this 10-year-old will dedicate a substantial amount of time to learning ways to express intuitive facts about the structure of the universe (which he already knows) in terms of abstract notations like the above integral symbol. If he’s on a standard math track, he’ll learn how to evaluate the above integral during his senior year of high school. But it’s important to note that he won’t have learned anything new about the structure of the universe − he’ll simply have learned a new way of doing an old trick.

Consider a slightly more concrete example. If I asked you to give a parameterization of a circle with radius 1, you’d probably do one of three things:

  1. You’d not know what a parameterization is
  2. You’d know what a parameterization is, but you wouldn’t know the parameterization of a circle with radius 1 off the top of your head
  3. You’d tell me that a circle with radius 1 can be parameterized as p(t)=<cos(t),sin(t)>.

If either of cases 1 or 2 applied to you, the equation p(t)=<cos(t),sin(t)> would mean very little. After reading case 3, you now know that it is a parameterization of a circle with radius 1, but it is by no means obvious why that is so. More importantly, it’d be an abstract fact residing in your brain, not an intuitive reality, and you’d probably forget it in 30 seconds.
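(For what it’s worth, the reason boils down to a one-line identity:

$$\cos^2(t) + \sin^2(t) = 1 \quad \text{for every } t,$$

so every point p(t) = <cos(t), sin(t)> sits at distance exactly 1 from the origin, and as t runs from 0 to 2π it sweeps once around the circle.)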

Now consider the following GIF:

The elegance of this GIF lies in the way it translates abstract notation (such as cos(t)) into intuitive reality (that is, concept.) Obviously, students don’t learn math by GIFs − but doing lots of problems effectively achieves the same goal of giving one an intuitive understanding of a physical reality.

The same could be said of writing. Memorizing verb conjugations isn’t fun or interesting or intellectually stimulating by itself, but when you stop having to think about verb conjugations (because they’re intuitively obvious), you can begin using them to communicate (that is, transfer neuron firing patterns from your brain to someone else’s.) Essentially, mathematical notation is to physical reality as language is to ideas.

This is why math and writing are so reviled by some students. It’s not that they “don’t like school,” or whatever other explanation they or society might believe. It’s simply that they still see notations as something to be mastered for their own sake, and are responding rationally to their worldview (the human brain is built not to memorize mathematical notations or language conventions, both of which are arbitrary, but to have ideas and observe the universe, both of which are very non-arbitrary.)

The Death of Notations

At its core, a brain-machine interface will be a device that implants neuron firing patterns from one brain into another. Fundamentally, ideas and intuitions about the universe are nothing more than neuron firing patterns − when both you and I understand the parameterization of a circle, identical (or at least similar) patterns of neurons fire in our brains. Therefore, a BMI connection would allow a math professor to give a farmer a perfect understanding of the parameterization of a circle, instantly, without any arduous math classes (which the farmer would probably be rather loath to undergo.) Likewise, an economist could give said farmer an understanding of inflation. More weirdly still, a musician could give the farmer the experience of listening to Tchaikovsky’s Sixth Symphony. This would probably be the most mind-bending for our poor farmer, because he previously had no idea what classical music really is. He might have considered it a method of entertainment for the bored urban elite, but he ended up experiencing the suicide note of a homosexual man in late 19th-century Russia. I know from personal experience that one simply does not look at reality the same way again after listening to the entirety of Tchaikovsky’s Sixth Symphony.

If you can learn without studying, why would you have it any other way? That is, if you could intuitively know that the integral from 0 to 10 of the function 2x is equal to 100, and (of course) the significance of such a computation, why would you memorize the (ultimately mechanical) algorithm that derives that result or the (ultimately arbitrary) notation used to symbolize it?
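For reference, the mechanical algorithm in question amounts to the power rule plus the Fundamental Theorem of Calculus:

$$\int_0^{10} 2x \, dx = \left[ x^2 \right]_0^{10} = 10^2 - 0^2 = 100,$$

which is also just the area of the triangle under the line y = 2x between x = 0 and x = 10 (base 10, height 20, area one-half times 10 times 20).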

I think that there is no good reason for you to do so. While all but the most diehard math fans will rejoice at this pronouncement, you quickly run into thorny problems when you start considering the implications of BMIs for your discipline of choice. Would there be no reason for literature fans to read Crime and Punishment, or music fans to listen to Tchaikovsky’s Sixth Symphony, or art fans to see Guernica?

Ultimately, it depends on the underlying difference between Guernica and the Fundamental Theorem of Calculus. The point of something subjective, like art, literature, or music, is that everyone experiences it in a different way. Two people looking at Guernica very well may draw very different nontrivial conclusions about it − and that is precisely why it is a great painting. For the most part, however, you either understand the Fundamental Theorem of Calculus or you don’t. So, in a BMI-equipped world, you might use BMIs to learn calculus, but still see Guernica in person, because any experience you get of Guernica via a BMI will ultimately be someone else’s, not your own.

That being said, perhaps there is a nontrivially subjective element to math/science/social science, etc., especially once you start attempting to produce new math/science/social science (that is, do research.) Perhaps, then, the mathematicians of the future will learn the Fundamental Theorem of Calculus using a BMI, but learn to prove Fermat’s Last Theorem more traditionally (whatever “traditionally” means in the context of math.) Or, better yet, perhaps they’ll be forced to prove Fermat’s Last Theorem from scratch (remind me not to be a future mathematician if that’s the case.)

One way or another, the advent of the brain-machine interface will bring about the biggest changes to human education, communication, and consumption of art since the printing press first appeared nearly 600 years ago.




Featured image is the painting Guernica by Pablo Picasso.

An Exception to AI Exceptionalism

An Exception to AI Exceptionalism

by Jasper Gilley

It’s not often that as much is made of a nascent technology as has been made in recent years of artificial intelligence (AI.) From Elon Musk to Stephen Hawking to Bill Gates, all the big names of technology have publicly asserted its incomparable importance, with various levels of apocalyptic rhetoric. The argument usually goes something like this:

“AI will be the most important technology humans have ever built. It’s in an entirely different category from the rest of our inventions, and it may be our last invention.”

I usually love thinking about the future, and I make a habit, for instance, of reminding anyone who will listen that in 50 years, the by-then-defunct petroleum industry will seem kind of funny and archaic, like the first bulky cell phones.

When I read quotes like the above, however, I feel kind of uncomfortable. In 1820, mightn’t it have seemed like the nascent automation of weaving would inevitably spread to other sectors, permanently depriving humans of work? When the internal combustion engine was invented in the early 20th century, mightn’t it have seemed like motorized automatons would outshine humans in all types of manual labor, not just transportation? We’re all familiar with the quaint-seeming futurism of H.G. Wells’ The War of the Worlds, Fritz Lang’s Metropolis, and Jules Verne’s From the Earth to the Moon. With the benefit of hindsight, it’s easy to spot the anthropomorphization that makes these works seem dated, but is it a foregone conclusion that we no longer anthropomorphize new technologies?

Moreover, I suspect that when we put AI in a different category than every other human invention, we’re doing so by kidnapping history and forcing it to be on our side. We know the story of the internal combustion engine (newsflash: it doesn’t lead to superior mechanical automatons), but we don’t yet know the story of AI – and therein lies the critical gap that allows us to put it in a different category than every other invention.

By no means do I endeavor to argue that AI will be unimportant, or useless. It’s already being used en masse in countless ways: self-driving cars, search engines, and social media, to name a few. The internal combustion engine, of course, revolutionized the way that people get around, and could probably be said to be the defining technology of the 20th century. But I will bitterly contest the exceptionalism currently placed on AI by many thinkers, even if one of them is His Holiness Elon Musk. (Fortunately for me, Elon Musk is famous and I’m not, so if he’s right and I’m wrong, I don’t really lose much, and if I’m right and he’s wrong, I get bragging rights for the next 1,000,000 years.) I will contest the exceptionalism of AI on three fronts: the economic, the technological, and the philosophical.

The Economic Front


As you can see on these graphs, human total population and GDP per capita have been growing exponentially since around 1850, a phenomenon that I termed in a previous post the ongoing Industrial Revolution. I further subdivided that exponentialism into outbreaks of individual technologies, which I termed micro-industrializations, since the development curve of each technology is directly analogous to the graph of GDP per capita since 1850.

Since micro-industrializations occur only in sequential bursts during exponential periods (such as the ongoing Industrial Revolution), it would be fair to infer that they have a common cause: in the case of the Industrial Revolution, that cause would be the genesis of what we would call science. Though the specific technologies that cause each micro-industrialization might be very different from one another (compare the internal combustion engine to the Internet), since they share a cause, they might be expected to produce macroeconomically similar results. Indeed, this has been the case during the Industrial Revolution. Each micro-industrialization replaces labor with capital in some form (capital is money invested to make more money, which is a shockingly new concept in mass application.) In the micro-industrialization of textiles, for instance, the capital invested in cotton mills (they were expensive at the time) replaced the labor of people sitting at home, knitting. This is absolutely an area in which AI is not exceptional. Right now, truly astonishing amounts of capital, invested by companies like Google, Tesla, Microsoft, and Facebook, threaten to replace the labor of people in a wide variety of jobs, from trucking to accounting.

Of course, if job losses were the only labor aspect of a micro-industrialization, the economy wouldn’t really grow. In the Industrial era, the jobs automated away have inevitably been more than made up for by growth in adjacent areas. Haberdashers lost their jobs in the mid-1800s, but many more jobs were created in the textiles industry (who wants to be a haberdasher anyway?) Courier services went bankrupt due to the Internet, but countless more companies were created by the Internet, more than absorbing the job losses. It’s too early to observe either job losses or job creation from AI, but there are definitely authoritative sources (such as Y Combinator and The Economist) that seem to think AI will conform to this pattern. AI will have a big impact on the world economy – but the net effect will be growth, just like every other micro-industrialization. Economically, at least, AI seems to be only as exceptional as every other micro-industrialization.

The Technological Front

But Elon Musk isn’t necessarily saying that AI might be humanity’s last invention because it puts us all out of a job. He’s saying that AI might be humanity’s last invention because it might exterminate us after developing an intelligence far greater than our own (“superintelligence,” to philosopher Nick Bostrom.) If this claim is true, it alone would justify AI exceptionalism. To examine the plausibility of superintelligence, we need to wade deeper into the fundamentals of machine learning, the actual algorithms behind the (probably misleading) term artificial intelligence.

There are three fundamental types of machine learning algorithms: supervised learning, unsupervised learning, and reinforcement learning. The first two generally deal with finding patterns in pre-existing data, while the third does something more akin to improvising its own data and taking action accordingly.

Supervised learning algorithms import “training data” that has been pre-categorized by a human. Once trained on that data, the algorithm will tell you which category any additional data falls into. Examples include spam-checking algorithms (“give me enough spam and not-spam, and I’ll tell you if a new email is spam or not”) and image-recognition algorithms (“show me enough school buses and I’ll tell you if an arbitrary image contains a school bus.”)

Unsupervised learning algorithms import raw, uncategorized data and categorize it on their own. The most common type of unsupervised learning is clustering: algorithms that break data into similar chunks (e.g., “give me a list of 1 billion Facebook interactions and I’ll output a list of distinct communities on Facebook.”)

Reinforcement learning algorithms require a metric by which they can judge themselves. By randomly discovering actions that improve their performance on that metric, they gradually eliminate the randomness, becoming “skilled” at whatever task they have been trained to do, with “skilled” defined as better performance on the given metric. Recently, Elon Musk’s OpenAI designed a reinforcement learning algorithm that beat the world’s best human players at Dota 2, a video game: “tell me that winning at the video game is what I am supposed to do, and I’ll find patterns in the game that allow me to win.”¹
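For the concretely minded, here is a minimal toy sketch of what each of the three flavors looks like in code. It assumes numpy and scikit-learn, and the data (two Gaussian blobs) and the two-armed “game” are illustrative stand-ins, not the actual spam filters or Dota bots described above:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Supervised learning: pre-categorized training data in, categories for new data out.
X_train = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])
y_train = np.array([0] * 50 + [1] * 50)           # human-supplied labels ("spam" / "not spam")
clf = LogisticRegression().fit(X_train, y_train)
print(clf.predict([[3.8, 4.1]]))                  # which category does this new point fall into?

# Unsupervised learning: raw, unlabeled data in, self-discovered clusters out.
clusters = KMeans(n_clusters=2, n_init=10).fit_predict(X_train)   # no labels given;
print(clusters[:5], clusters[-5:])                # the two blobs are recovered, labels arbitrary

# Reinforcement learning: a metric to judge itself by, random exploration early,
# "skill" (here, preferring the better of two slot machines) later.
payout = [0.3, 0.7]                               # hidden win probabilities of the two arms
value = [0.0, 0.0]
counts = [0, 0]
for step in range(1000):
    eps = max(0.05, 1.0 - step / 500)             # randomness is gradually eliminated
    arm = rng.integers(2) if rng.random() < eps else int(np.argmax(value))
    reward = float(rng.random() < payout[arm])    # the metric the algorithm judges itself by
    counts[arm] += 1
    value[arm] += (reward - value[arm]) / counts[arm]
print(value)                                      # should end up near [0.3, 0.7]
```

The point is only the shape of each approach: the first two find patterns in data they are handed, while the third generates its own data by acting and watching the metric.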

The first two types of algorithms aren’t terribly mysterious, and rather obviously won’t lead, in and of themselves, to superintelligence. When superintelligence arguments are made, they most frequently invoke advanced forms of reinforcement learning. Tim Urban of the (incredibly awesome) blog Wait But Why tells an allegory about AI that goes something like this:

A small AI startup called Robotica has designed an AI system called Turry that writes handwritten notes. Turry is given the goal of writing as many test notes as fast as possible, to improve her handwriting. One day, Turry asks to be connected to the internet in order to vastly improve her language skills, a request which the Robotica team grants, but only for a short time. A few months later, everyone on Earth dies, being killed by Turry, who simply is doing what it takes to accomplish her goal of writing more notes. (Turry accomplished this task by building an army of nanobots and manipulating humans to take actions that would, unbeknownst to them, further Turry’s plan.) Turry subsequently converts the earth into a giant note-manufacturing facility, and begins colonizing the galaxy, to generate even more notes.

In the allegory, Turry is a reinforcement learning algorithm: “tell me that writing signatures is what I am supposed to do, and I’ll find patterns that allow me to do it better.”

Unfortunately, there are two technical problems with the notion of even an advanced reinforcement learning algorithm doing this. First, gathering training data at the huge scales necessary to train reinforcement learning algorithms in the real world is problematic. At first, before they can gather training data, reinforcement learning algorithms simply take random actions. They improve specifically by learning which random actions worked and which didn’t. It would take time, and deliberate effort on the part of the machine’s human overlords, for Turry to gather enough training data to determine with superhuman accuracy how to do things like manipulate humans, kill humans, and build nanobots. Second, as Morpheus tells Neo in The Matrix, “[the strength of machines] is based in a world that is built on rules.” Reinforcement learning algorithms become incredibly good at things like video games by learning the rules inherent in the game and proceeding to master them. Whether the real world is based on rules probably remains an open philosophical question, but to the extent that it isn’t (and it certainly isn’t to the extent that Dota 2 is), it would be extremely difficult for a reinforcement learning algorithm to achieve the sort of transcendence described in the Turry story.

That being said, these technical reasons probably don’t rule out advanced AI being used for nefarious purposes by terrorists or despots. “Superintelligence” as defined in the Turry allegory may be problematic, but insofar as it might be something akin to nuclear weapons, AI still might be relatively exceptional as micro-industrializations go.

The Philosophical Front

Unfortunately, a schism in the nature of the universe presents what is perhaps the biggest potential problem for superintelligence. The notion that superintelligence could gain complete and utter superiority over the universe and its humans relies on the axiom that the universe – and humans – are deterministic. As we shall see, to scientists’ best current understanding, the universe is not entirely deterministic. (The term deterministic, when applied to a scientific theory, describes whether that theory eliminates all randomness from the future development of the phenomenon it describes. Essentially, if a phenomenon is deterministic, then what it will do in the future is predictable, given the right theory.)

Right now, there are two mutually incompatible theories to explain the physical universe: the deterministic theory of general relativity (which explains the realm of the very large and very massive) and the non-deterministic theory of quantum mechanics (which explains the realm of the very small and very un-massive.) So, at least some of the physical universe is decidedly non-deterministic.

The affairs of humans, however, are probably equally non-deterministic. Another interesting property of the universe is the almost uncanny analogy between the laws of physics and economics. Thermodynamics, for instance, is directly analogous to competition theory, with perfect competition and the heat death of the universe being exactly correspondent mathematically. If the physics-economics analogy is to be used as a guide, at least some of the laws governing human interactions (e.g., economics) are also non-deterministic. This stands to reason: physics becomes non-deterministic when you begin to examine the fundamental building blocks of the universe – that is, subatomic particles like electrons, positrons, quarks, and so on. Economics would therefore also become non-deterministic when you examine its fundamental building blocks – individuals. It would take a philosophical leap that I don’t think Musk and Company are prepared to make to claim that all human actions are perfectly predictable, given the right theories.

If neither the universe nor its constituent humans are perfectly predictable, a massive wrench is thrown in the omnipotence of Turry. After all, how can you pull off an incredible stunt like Turry’s if you can’t guarantee humans’ actions?

The one caveat to this line of reasoning is that scientists are actively searching for a theory of quantum gravity (QG) that would (perhaps deterministically) reconcile general relativity and quantum mechanics. If a deterministic theory of quantum gravity is found, a deterministic theory of economics might also be found, which a sufficiently powerful reinforcement learning algorithm might be able to discern and use for potentially harmful ends. That being said, if a theory of quantum gravity is found, we’ll probably be able to build the fabled warp drive from Star Trek, so we’ll be able to seek help from beings more advanced than Turry (plus, I’d be more than OK with a malign superintelligence if it meant we could have warp drives.)

So if AI is only as exceptional as every other micro-industrialization, where does that leave us? Considering that we’re in the middle of one of the handful of instances of exponential growth in human history, maybe not too poorly off.





1 – OpenAI

Featured image is of HAL 9000, from Stanley Kubrick’s 1968 film 2001: A Space Odyssey.