Indra’s Net via Nonlinear Optics: DMT Phenomenology as Evidence for Beamsplitter Holography and Recursive Harmonic Compression

[Epistemic Status: Speculating on a key implementation detail within the paradigm of the Brain as a Non-Linear Optical Computer (BaaNLOC) – specifically, how the optical function of beam splitting could be used to compose the contents of a conscious simulation scene with principles of cel animation and holography. In particular, this may explain both how local phenomenal binding is implemented as well as the uncanny sense of being a multitude that is common in DMT-induced states of consciousness; featured image source]

Alternative Title: One Screen, Many Contributors: Explaining the One + Many nature of Experience with Non-Linear Optical Circuits

Background Readings – and key takeaways of each:

  • The Constructive Aspect of Visual Perception (by Steven Lehar): We learn that vision is a constructive process that uses bottom-up and top-down resonance as its generator. Of special note: a gestalt (when features become “more than the sum of their parts”) has spectral properties: it resonates in a specific way as a combination of frequencies and can click with, interface with, and even drag other gestalts. Waves inside gestalts collide with each other in a way that conveniently (and efficiently) abstracts their symmetries (e.g. how the “reverse grassfire algorithm” can be used to abstract the symmetries of shapes).
  • The Brain as a Non-Linear Optical Computer: Reflections on a 2-Week Jhana Meditation Retreat: Where I introduce the overall picture of BaaNLOC based on phenomenological observations I gathered at a Jhana retreat. The core idea is that the world simulation is rendered using optical elements (cf. Ising Machines: Non-Von Neumann Computing with Non-linear Optics). I hypothesized that there is a trade-off between how much we can experience sensations in a localized way vs. experiencing frequency-domain information. Jhana absorption is akin to pushing all of the information to the frequency domain: you’re a vibration rather than a location. We can hypothesize that the sense of simultaneity and non-locality comes from us being a standing wave pattern trapped in Total Internal Reflection (TIR) in the brain. The quality of experience, especially pertaining to each of the Jhanas, can be described in terms of an optical circuit that modulates the consonance, dissonance, and noise signature of gestalts, each of which is an optical “soliton” within the larger TIR pocket that delimits a moment of experience. Jhana meditation involves, among other things, interacting with gestalts in such a way that you harmonize them, and eventually build up to a level of coherence that allows the entire world simulation to achieve (one of several types of) global coherence.
  • The Electrostatic Brain: How a Web of Neurons Generates the World-Simulation that is You (by Fakhri, Percy, Gómez-Emilsson): “We propose that objects in your world simulation are made of patches in the neuronal lattice with distinct electrostatic parameters. The interaction of light with matter is governed by the material’s electrostatic parameters permittivity and permeability. Light propagates undisturbed through a uniform medium but reflects and refracts when these properties vary spatially, which is the principle behind how lenses manipulate light.” In other words, a theory for how “phenomenal objects” (and gestalts more broadly) acquire their solidity and individuation at the implementation level. Waves inside each gestalt behave differently than “outside” (but still within the world simulation) of them, due to literal electrical properties modulating the speed of wave propagation.
  • DMT and Hyperbolic Geometry: The core ideas to import deal with how DMT hallucinations can be explained in terms of a field of experience with an energy function: the simultaneous maximization of how “recognizable” and how “symmetrical” (both being “energy sinks”) a gestalt is. DMT energizes the world simulation, and the hallucinations we experience are downstream of the system trying to get rid of this excess energy. A psychedelic trip, therefore, is explained in terms of thermodynamics and as an annealing process that may, along the way, favor hyperbolic (and non-Euclidean, broadly) geometry. The world becoming a kind of kale surface (cf. “worldsheet”) is the result of the system “stitching together” an excess number of gestalts (that fail to dissipate quickly; cf. tracer effects). The gestalts are all trying to predict each other in a process of energy minimization that may do some useful compute along the way if we figure out how to harness it properly (cf. Cube Flipper’s recent ideas on the matter).
  • From Neural Activity to Field Computing: The key takeaway here is that we can modulate the topology of a field by parametrizing a network of coupled oscillators in such a way that you can “tune into” the resonant modes of the system and in turn interact with the field in a coherent way. If the field responds to the oscillators in a physical way (e.g. interpreting the oscillators as electrical in nature, and the field as the shape of the magnetic field, as one of many possible examples) attractors of the system of coupled oscillators may in turn instantiate specific and predictable topological structures in the field. The way this is relevant to the current post is that we see how e.g. electric oscillations (in gestalts) can create genuine boundaries in a field and allow entire regions to “behave as one” in turn.
  • Cel Animation as a Key Metaphor to Model DMT Hallucinations: This may be the most important background read – it outlines how both Laser Chess and Cel Animation can be used as system metaphors for how a wave-like non-local experience can interface with (and be part of) a system with “classical” local parts. In the case of Laser Chess, we have a game where there is a local “classical” step (moving a piece) and then a non-local “holistic” step (shining the laser and seeing what standing wave pattern emerges as a result). The brain’s “slow” neural activity might be “placing” the classical optical elements as constraints at millisecond speed; then a “global” and “near instantaneous” interference pattern that solves the path integral of all possible trajectories within the pocket takes over as an ultra-parallel medium of compute. In turn, Cel Animation (the way cartoons used to be made: transparent sheets that depict what is changing and leave everything else intact) can be used as a metaphor to describe how “awareness wraps around and moves around” in a field of gestalts. Our world simulation is akin to a projector that shines on a 3D diorama populated with holograms. The experience is the emergent light-field that stabilizes when light is shined on this diorama. Typically our diorama has a clear center, but depending on the kind, alignment, vibration, and symmetries of the gestalts present, more than one, or even no, “phenomenal center” might emerge: the light does not need to converge at a point, even if it usually does.
  • The Emergence of Self-Awareness: Conscious Holography as an Evolved Hardware Accelerator: Finally, this recent video explains how dimensionality reduction implemented at a physical level (with e.g. holograms a quintessential example) could be associated with moments of experience via a precise computational role of consciousness. Namely, we’re conscious because dimensionality reduction in holograms feels like something, and evolution found really good use for this physical process. That is, coordinating information in sensory fields of different dimensionalities in order to construct a coherent internal state that efficiently and accurately encodes both information types. This is reasonable because holographic compressions, at a physical implementation level, are a kind of distributed spatial knowledge that uses path integrals and superposition to encode large amounts of information. We could make the case that the point of dimensionality reduction is where “reality can meet itself” by collapsing in on itself.
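The coupled-oscillator idea from “From Neural Activity to Field Computing” can be illustrated with the standard Kuramoto model, in which a population of oscillators with different natural frequencies phase-locks once their coupling is strong enough. This is a generic toy model, not the specific mechanism proposed in that post, and every parameter below is arbitrary:

```python
import math
import random

# Toy Kuramoto model: N coupled phase oscillators. With strong enough
# coupling K, the population phase-locks -- a minimal stand-in for how a
# network of oscillators can "tune into" a shared resonant mode and make
# a whole region "behave as one". Parameters are made up for illustration.

random.seed(0)
N = 50
K = 2.0                                               # coupling strength
dt = 0.05                                             # Euler time step
omega = [random.gauss(0.0, 0.3) for _ in range(N)]    # natural frequencies
theta = [random.uniform(0, 2 * math.pi) for _ in range(N)]  # initial phases

def order_parameter(phases):
    """Kuramoto order parameter r in [0, 1]: 0 = incoherent, 1 = phase-locked."""
    re = sum(math.cos(t) for t in phases) / len(phases)
    im = sum(math.sin(t) for t in phases) / len(phases)
    return math.hypot(re, im)

r_start = order_parameter(theta)
for _ in range(600):  # integrate dtheta_i/dt = omega_i + (K/N) sum_j sin(theta_j - theta_i)
    mean_field = [sum(math.sin(tj - ti) for tj in theta) / N for ti in theta]
    theta = [ti + dt * (w + K * m) for ti, w, m in zip(theta, omega, mean_field)]
r_end = order_parameter(theta)
# r_end approaches 1: the oscillators have settled into a single global mode.
```

Because the coupling here is well above the critical value, the order parameter climbs from near zero toward one, which is the precise sense in which a parametrized network of oscillators can impose a coherent, predictable structure on the field it drives.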

Putting it all together: we have a model of moments of experience as a standing wave pattern inside a non-linear optical system. It is conceptually elegant, but still largely unspecified. We have noted how this conceptual framework would solve many philosophical problems while articulating the nature of otherwise extremely puzzling phenomenology (e.g. DMT breakthroughs). What follows is further speculation, specifically on how beam splitters could play a role in this framework. In particular, I’m going to describe the phenomenology of DMT’s autonomous entities as well as Indra’s Net (at the extreme), and then explain how a non-linear optical circuit with the right characteristics could give rise to these corner cases. In fact, as we will see, it makes sense to think of every experience as a kind of Indra’s Net but with significant opaque components. More on this later.

Context

I recently had the chance to talk to Michael Levin and Elan Barenholtz (thanks to Ekkolapto at University of Toronto!) on the topic of phenomenal binding and the Platonic Realm (hear also the conversation I had with Levin last year):

I recommend listening to the whole conversation, but I figured I’d share what I presented at the beginning to establish some context for further discussions. The talk was an interesting challenge for me because I was given exactly 5 minutes to present a case at the beginning of the panel. In general, I love being challenged to deliver a specific insight or argument on a time limit. Although it was a fun exercise, I also realize that there is quite a bit of background needed to really get what I’m talking about. So this post will go over both the content of my presentation as well as its further implications. There will be a lot more QRI content on the topic of non-linear optical circuits in relation to consciousness coming in the future.

What Needs to be Explained

Two key phenomenological realities need to be explained. No matter how weird and absurd they may sound (they do happen, as phenomena), we need to take them seriously if our theory of consciousness is any good. The key idea we will circle back to is that we can explain this exotic phenomenology using non-linear optics as a substrate (at least conceptually). So, what is it that we ought to explain?

First is the sense of autonomous entities while on DMT. While 5-MeO-DMT tends to generate a sense of global coherence that hints at Open Individualism, DMT instead tends to feel as if you’re being thrown into a deep ecosystem of rogue mindforms. More so, it is often reported that these entities not only feel like they are _not you_ but they also feel controlled by a variety of different agencies with disparate goals. It is also not the case that these agencies are in agreement about how to interact with you, as oftentimes fierce competition for attention and other cognitive or energetic resources ensues. It is for this reason we like to say DMT pushes you to a “competing clusters of coherence” attractor. More so, each of these clusters seems to have its own agenda and objective function. It often takes quite a bit of negotiating before the “parts” of the organism can “pull together” in one direction during the otherwise fragmented state of DMT intoxication.

And if that wasn’t enough of a mystery, the second is an even stranger but certainly no less real phenomenon: Indra’s Net. This is the feeling and felt sense that “everything reflects everything else”. Many people use the term to refer to an implicit quality of reality: interdependence. But when I use the term in this context, I’m pointing to a very real, very vivid, and very computationally non-trivial state of consciousness. It is _true_ that the state gives you the feeling that it has a lesson, message, or implicit insight to deliver, namely, that we’re all connected at a deep fractal level somehow. But leaving aside this impression, the immediate phenomenology of Indra’s Net is really something worth exploring and explaining in its own right.

I believe that Indra’s Net is a window into how consciousness works at a fundamental level, and in this essay you will see how we might be able to explain it in terms of non-linear optical circuits. But the deeper insight (note: don’t take a twig from the Dharma Tree, says Rob Burbea; instead go for the big flowers, the big fruits, the jewels of the path) is that perhaps “everything reflects everything else” is not a strange corner case you have to work to arrive at. On the contrary, the sense that each part of experience has a clear identity, location, and boundary relative to every other part of experience is itself the strange corner case: you have to twist and torque Indra’s Net just right so that its projection _looks_ like a normal everyday type of experience. By default, consciousness is profoundly interconnected in overt and explicit ways. If so, a lot of the energy the brain spends goes into maintaining the illusion that non-Indra’s-Net states are somehow the default.

Another problem is that Indra’s Net sounds so outlandish and incredible that it is easy to dismiss as “recollection or confabulation after the fact”. The epistemological poverty of our predicament is further exacerbated by the fact that people tend to confuse semantic content and phenomenal character, in turn delivering fantastically confused and knotted trip reports.

So, let’s cut to the chase: what is so special about Indra’s Net and how does it actually manifest? Here is the essence of it: any gestalt in your visual/tactile field (which can be synesthetic, and typically is) can be an expression of the whole experience after a certain kind of transformation or information processing pipeline. Let me elaborate. In the classic case where Indra’s Net is expressed as a web of water droplets, what you will see is that the content of every reflection (the light emitted by each droplet) is itself the whole scene, but transformed. Indeed, it is _what the scene looks like_ from that point of view (more or less). In turn, this is happening to every one of the elements in the scene. Each element is itself expressing what the rest of the scene looks like from its point of view. Each element is taking the whole scene, applying a transformation to it, and then expressing it back into the field for everyone else to see.

This is agnostic to the specific semantic content of the scene (though perhaps not entirely orthogonal, as content and shape are ultimately correlated). You could have an Indra’s Net experience of countless heavenly Jewels reflecting on each other in beautiful ways. Or you could have an experience of looking at hundreds of demon eyes, each one reflecting every other one. Or you could experience something much more computationally crazy, like a maze of mirrors and diffraction rays, where everything reflects everything else in highly non-trivial ways, along maze paths you didn’t even know were mathematically possible. The point is that the mind seems to have this attractor state we can broadly point to with the term Indra’s Net, which corresponds to a state in which the geometric content of every gestalt reflects, and is connected to, the content of every other gestalt and of the scene as a whole.

The question that naturally arises here is: why do we experience this on DMT? Seriously, why is this a common attractor state? Importantly, the feature that “the whole scene hangs together as an irreducible whole”, in which “moving any part results in the whole state shifting and adjusting”, is not predicted by current computational models of the mind (or is it?). What would a theory that predicts Indra’s Net look like?

The core insight I want to share for the time being is that if we allow the whole experience to somehow “project onto itself” a transformed version of itself after undergoing non-linear optical filters, then some of these features start to emerge for free.

At the limit, both DMT autonomous entities and Indra’s Net become sort of one and the same (!). In effect, it is not uncommon for the sense of the multiple entities to coalesce into a gigantic god-like hivemind that incorporates many gestalts at multiple scales and makes it very clear that it is “one and all of them at the same time”. Indeed, one can perhaps re-interpret a lot of classic iconography (e.g. the hundreds of arms of the Hindu Gods) as a pictographic representation of the phenomenology of Indra’s Net. (See also how the improper stitching together of the holograms can result in misaligned Cronenberg-like DMT Shoggoths, too).

Both deep in a DMT experience and at high levels of meditative concentration (cf. hard Jhanas), Indra’s Net is really common. I want to emphasize how this is not a vague poetic metaphor. It is a concrete structure, in which the phenomenological “screen” that makes up access consciousness (the part of your experience you can report on) is filled with clusters of agentic constructs (“entities”) that seem to be mutually inspecting and modifying each other. They behave like holographic cel animation layers, arranged with depth, with dynamically interacting subcomponents that reflect the whole.

What we want is a conceptual framework that would make DMT autonomous entities as well as Indra’s Net a perfectly natural outcome. Indeed, perhaps even expected and obvious in retrospect. To do this, I will introduce a number of core ideas, all of them orbiting a central one: perhaps our “screen of consciousness” is being “beamed” to multiple semi-independent modules at the same time, each specialized in different aspects of information processing. In turn, these modules transform the beamed image, pull it together with the post-processed images from the other modules, and project it back onto the original screen. This is reminiscent of recurrent neural networks and of non-linear optical networks, but above all of the core idea that intelligent dimensionality reduction is central to a well-behaved mind. Let’s dive in!

One Screen, Many Contributors: Explaining the One + Many nature of Experience with Non-Linear Optical Circuits

Non-Linear Optical Circuitry at the core of the current iteration of BaaNLOC. The central screen beams copies of its content to semi-independent modules. Then each module applies learned non-linear optical transformations such as birefringence, diffraction, refraction, etc. The post-processed images are then pulled together and, after a final symmetry group transform (to know how to fit them onto the screen), are re-projected back onto the original screen. The experience that emerges is the steady-state standing-wave pattern of Total Internal Reflection (TIR) trapped in the loop. The key idea is that the images projected from each module back to the main screen can interact with each other in a quasi-physical way there.
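As a caricature of this loop, one can iterate: copy the screen to each module, apply each module’s transformation, merge the results, and re-project onto the screen until the pattern stops changing. The sketch below uses made-up linear “modules” on a three-component “screen” purely to show how such a recursive project-back loop settles into a stable standing pattern; the actual proposal involves non-linear optics and is not specified at this level of detail:

```python
# Toy "one screen, many contributors" loop: the screen state is beam-split
# to several modules, each applies its own fixed transformation, and the
# renormalized average is projected back onto the screen. Iterating
# converges to a stable pattern (the dominant eigenvector of the averaged
# map). The specific transforms are placeholders, not claims about brains.

def mat_vec(M, v):
    """Apply a matrix (list of rows) to a vector."""
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def normalize(v):
    """Rescale to unit length, mimicking a fixed total 'brightness'."""
    norm = sum(x * x for x in v) ** 0.5
    return [x / norm for x in v]

# Three "modules", each a different fixed 3x3 transformation of the screen.
modules = [
    [[0.5, 0.2, 0.0], [0.2, 0.5, 0.2], [0.0, 0.2, 0.5]],   # smoothing
    [[0.0, 1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]],   # component swap
    [[1.0, 0.0, 0.0], [0.0, 0.5, 0.0], [0.0, 0.0, 0.25]],  # selective damping
]

screen = [1.0, 0.0, 0.0]  # initial content of the screen
for _ in range(200):
    beams = [mat_vec(M, screen) for M in modules]            # beam-split + transform
    merged = [sum(vals) / len(modules) for vals in zip(*beams)]  # pull together
    screen = normalize(merged)                               # re-project

# `screen` is now (approximately) a fixed point of the whole loop:
# transforming and merging it once more barely changes it.
```

The point of the sketch is only the qualitative behavior: whatever the modules do, the recursive re-projection loop relaxes toward a self-consistent pattern, which is the structural role the text assigns to a “moment of experience”.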

I start by portraying the overall geometry of a moment of experience, as illustrated by Steven Lehar:

Source: Cartoon Epistemology by Steven Lehar

Consider this “diorama-like shape” that contains phenomenal properties we can point to and discuss. It is deeply interconnected. An experience is not “just” a 2.5D screen of pixels, because something is actively integrating and interrelating all of those pixels under a shared “umbrella”: a point of view, or subject of experience. Whatever the true mathematical object is that corresponds to a moment of experience (cf. qualia formalism), it must be able to connect variables in ways that produce the specific patterns of binding we observe. The patterns of binding must somehow allow us to reconstruct the geometry of the experience as a whole. But the patterns of binding are complex. A cup is not merely a blue object – it has intricate structures like a handle and a floor and perhaps liquid content, features which are all put together into a coherent multi-level representation for us to interact with. Indeed, we have to ultimately provide a mathematical structure rich enough to model and account for all types of phenomenal binding. Worth mentioning is QRI’s long-standing idea of modeling experiences as graphs with nodes that represent qualia values and edges that represent the flow of attention. In this case, the nodes you attend to are salient for reasons having to do with graph centrality (cf. PageRank). Why? Because PageRank tracks the probability of landing on a given node if you are doing a kind of random walk from node to node, using the directed edge weights as transition probabilities. The nodes with high PageRank are those for which “the flow of attention” leads to lakes where it pools and concentrates.
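The attention-as-PageRank idea can be made concrete with a plain power-iteration PageRank over a tiny hypothetical qualia graph. The node names and edge weights below are invented for illustration (loosely following the cup example above); they are not QRI’s actual model:

```python
# PageRank over a toy "attention graph": nodes are gestalts, directed edge
# weights are probabilities that attention flows from one gestalt to the
# next. High-PageRank nodes are the "lakes" where the random walk of
# attention pools. Node names and weights are made up for illustration.

damping = 0.85  # standard PageRank damping factor

# Out-edges: node -> {neighbor: transition probability} (each row sums to 1).
graph = {
    "cup":    {"handle": 0.5, "liquid": 0.5},
    "handle": {"cup": 1.0},
    "liquid": {"cup": 0.7, "handle": 0.3},
}

nodes = list(graph)
rank = {n: 1.0 / len(nodes) for n in nodes}  # uniform initial distribution

for _ in range(100):  # power iteration
    new = {n: (1.0 - damping) / len(nodes) for n in nodes}  # random teleport
    for n, edges in graph.items():
        for m, p in edges.items():
            new[m] += damping * rank[n] * p  # attention flows along edges
    rank = new

# "cup" receives attention from every other gestalt, so it pools the most.
most_salient = max(rank, key=rank.get)
```

Since every gestalt’s out-flow eventually routes back to the cup, the cup accumulates the highest stationary probability, matching the intuition that attention “pools” at the most central phenomenal object.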

As explained already, we suspect that the psychedelic sense that “everything is connected to everything else” may not be an anomaly, but rather a feature of experience that is always present, only rarely made explicit. This kind of PageRank of attention is always ongoing. The geometry of experience seems to be a kind of stable equilibrium that results from systems observing each other and creating representations with relative distances to each other. Naturally, experience is “self-reflective” for this reason (and not only due to introspection!). But Indra’s Net is a deeper kind of structure that is still way more interconnected than e.g. PageRank would suggest. We need something new:

The core idea is that the non-linear optical circuit diagram above might capture some of the more exotic and intricate aspects of phenomenology (as mentioned: autonomous entities and Indra’s Net). The sketch you see at the start of this section (“One Screen, Many Contributors: Explaining the One + Many nature of Experience with Non-Linear Optical Circuits”) aims to capture key structural insights for the generation of moments of experience, in which beam splitters, birefringence, and image-teleporting TV stones (cf. “How does Television Stone Work?”; Ulexite) feed a recursive optical loop. This loop allows many “sub-agents” to see the same field, alter it independently, and feed their changes back into the whole in real time. The equilibrium state of this process is what we experience as a moment of consciousness.

In the recorded discussion, Michael Levin offered an elegant metaphor for how self-organizing systems can “pull” you toward them, where constraints in the medium act like attractors and make parts of the problem solve themselves once enough structure is in place. One of his examples was a triangle: if the fittest shape for a given problem involves a certain triangle (e.g. a triangular alga needs to have three specific angles at its corners to succeed at a certain navigation task), you evolve the first angle, then the second, and the third is automatically determined by the laws of geometry (a free gift from Euclidean geometry; or, more broadly, from the geometry of the network of relationships between the parts, when we talk about intrinsic geometry). This kind of regularity is an example of how complex systems can bootstrap themselves, where knowing part of the whole locks in the rest: symmetry reduces degrees of freedom, and constraint propagation allows the global pattern to self-assemble without exhaustive search. In Levin’s framework and worldview, these “free lunches” live in pattern-space or morphogenesis space (as we’ll see), so that once your system points to the right place, the rest of the pattern ingresses “into the physical”.

Indra’s Net might be one of these patterns. The state of consciousness where everything _explicitly_ reflects everything else, from this point of view, does not have to be built in its entirety from the bottom up; once parts of it crystallize, and high-level symmetries are locked in place, the rest already knows how to relax into its attractor. It’s worth mentioning that Levin also pointed out that in his work with Chris Fields he extends the logic of navigation in pattern-space to “morphogenesis space”. That is, the configuration space in which cells navigate to build and repair anatomies. Applying least action laws (perhaps the true building blocks of reality? Or the true underlying laws of reality?) not to physical three-dimensional space (which may itself be emergent) but to the implicit geometries that shape biological growth and repair, may explain how an organism navigates its possible self-organization and converges on an energy minimum that is very holistic in nature.

In the toy model I presented, a non-linear optical circuit containing beam splitters, birefringence, and image-teleporting TV stones feeds a recursive loop that allows many “sub-agents” to see the same field, alter it independently, and feed their changes back into the whole. The equilibrium of this process corresponds to a moment of consciousness: it’s the topologically closed standing-wave pattern that emerges when the non-linear optical circuit reaches a point of stability, with what it is like to be it perhaps corresponding to “the superposition of all points of view” within it (see Cube Flipper’s recent efforts to describe this way of “reading off” an experience out of a physical system).

The energy function locally rewards gestalts that succeed at being explanatory, meaning they can anticipate, compress, and model the behavior of other gestalts. This generates an ecosystem in which gestalts compete and cooperate by predicting one another, and some develop the capacity to swallow the entire scene and then re-express it in transformed form. The medium where these interactions occur (the phenomenal screen) is not a passive display (a common misconception) but an active site of computation, where interferences between gestalts are identified and workshopped. It also plays the role of being a “metric” or “gauge” for the various other gestalts. The screen gives gestalts a kind of “radar” so that by emitting waves they can find each other “in 3D”. From this perspective, experience involves lifting the content of the field into higher dimensions (internal states of the modules), applying transformations there, and then re-projecting it back as a coherent standing wave onto 3D (or 2.5D). In fact, several semi-independent modules do this in parallel and then respond to each other’s transformations. The result is often deeply interdependent and “enmeshed”, irreducible-seeming, as the process transforms experiences recursively mid-flight and converges on gestalts that get along well with each other, are explanatory, and can predict sensory input.

Beam Splitters

Let’s try to imagine this more concretely. First, let’s talk about beam splitters. A beam splitter is typically a piece of glass or plastic that allows a certain percentage of the light through and reflects the rest. They’re one of the pieces in the game Khet 2.0 (a variant of Laser Chess), where the laser effectively splits in two and has more chances to do damage to the other player’s King (or Pharaoh). This multiplies the number of beams, and at least in some arrangements, can lead to combinatorial explosions. Beam splitters, I suspect, are ubiquitous in our brain’s information processing pipeline. The ability to carbon-copy a gestalt so that you can work on it in multiple streams in parallel is extremely empowering, and no doubt a core step in any serious implementation of non-linear optical computation. Think about the phenomenology of shifting around the content of a working memory module. Doesn’t it feel like you’re copy/pasting information from one part of your field to another? Beam splitters are also, I reckon, a key optical component of our world simulation that allows for parallel processing streams to get unified into the coherent experience we mistake for a single “simple” witness.
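For reference, the textbook behavior of a lossless 50/50 beam splitter is a unitary 2×2 matrix acting on two complex field amplitudes; the reflected beam picks up a 90° phase, and unitarity is exactly what makes the device energy-conserving. A quick sketch of this standard optics result (no claim about how a brain would implement it):

```python
# A lossless 50/50 beam splitter as a unitary 2x2 transfer matrix acting on
# the complex amplitudes of its two input ports. The factor of i (a
# 90-degree phase) on the reflected beam keeps the matrix unitary, i.e.
# power-conserving across the two output ports.

s = 1 / 2 ** 0.5
BS = [[s, 1j * s],
      [1j * s, s]]

def split(a_in, b_in):
    """Return the two output amplitudes for inputs (a_in, b_in)."""
    a_out = BS[0][0] * a_in + BS[0][1] * b_in
    b_out = BS[1][0] * a_in + BS[1][1] * b_in
    return a_out, b_out

# One beam into port a, nothing into port b: the power splits 50/50 but
# the total is conserved.
a_out, b_out = split(1.0, 0.0)
power_out = abs(a_out) ** 2 + abs(b_out) ** 2  # equals the input power, 1.0

# Feeding both outputs into a second identical splitter recombines them:
# the accumulated phases make the light interfere and exit entirely from
# one port (a Mach-Zehnder interferometer in miniature).
a2, b2 = split(a_out, b_out)
```

The second stage illustrates why splitting is not destructive in a wave medium: the two copies retain phase relationships and can later be recombined coherently, which is the property that would make beam-split “carbon copies” of a gestalt recoverable.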

Teleprompters allow you to have “split vision” so that you can look at the camera while you read your speech (cf. DIY Teleprompter). They’re a kind of highly functional beam/image splitter.


In an effort to make the above more relatable, let’s talk about a really cool invention: the holographic broadcasting system. It doesn’t exist yet, but it could. It should, in fact, for aesthetic, social, and computational reasons. What is this I’m talking about? Check this out:

The Holographic Broadcasting System

Imagine this: in front of you is a special table. A table that shows an image. There are hundreds of other tables like it and they are all connected to each other. When you place something on the table, it appears as a hologram on every other table like it. You can use this to play board games with people in other countries in real time, or for strategizing, delivering presentations, or even solving a maze as a team.

Here is the twist: the object that you place on the table can itself be an object that holds a transformed image of the table. Say, the object you place on the table is an iPad that shows what the table looks like from your point of view (e.g. your glasses have cameras that beam data to the iPad). You can even do projection mapping on the table and overlay a digitally transformed version of what it looks like on top of itself.

Projection mapping: you use a model of the 3D scene so that you can “paint it” with a projector that displays a video of the very scene it’s illuminating, after processing it with digital tools.

Each person with access to (a parallel version of) the table might specialize in a different kind of transformation: some specialize in adding edge detectors that highlight the corners and sharp angles of what’s in it. Others perhaps do color enhancement. Yet another one does shape rotation, where it overlays rotated images of the table (or a region thereof) on top of itself. The result is that the table is a live hologram that gets to be edited in real time by many different groups of people, each looking for something different, and capable of emphasizing different features of this collective work of art.

But here’s what makes this system truly extraordinary: each hologram carries its own unique spectral signature (remember how you can do analogue Fourier transforms in optical circuits!). From the point of view of the system, each gestalt/hologram is a kind of molecule with distinct “vibratory modes” that interact with other nearby gestalts that share such frequencies. When an edge detector sharpens a visual element, it doesn’t just change the shape, it also “stamps” a vibratory signature, so to speak, onto the hologram metadata for the system to work with. From the point of view of the system as a whole, what may at first seem like a simple object carries rich spectral (i.e. frequency/vibration in addition to position) information. Whether holograms in the table “get along with each other” is a function of how they resonate together, as a group (with other gestalts), and as a whole (how the whole state can self-harmonize, or not, with the presence of such features). Collectively, the local and global vibrations define how the system “wants” to settle, and how each region interferes and interacts with neighboring holograms.
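The “spectral signature” point can be illustrated digitally: decompose a signal into frequencies with a discrete Fourier transform, and observe that an edge-sharpening filter leaves the DC component alone while tripling the highest-frequency component, i.e. it “stamps” a characteristic change onto the spectrum. A toy 1-D sketch (illustrative only; optical Fourier transforms are done in 2-D with lenses, and the filter here is a generic unsharp mask):

```python
import cmath

# A gestalt's "spectral signature" is its frequency decomposition. A naive
# DFT shows that an edge-sharpening filter (signal minus its discrete
# Laplacian) boosts the high-frequency components of a 1-D signal -- the
# digital analogue of edge detection "stamping" a vibratory signature.

def dft(signal):
    """Naive discrete Fourier transform (O(n^2), fine for a sketch)."""
    n = len(signal)
    return [sum(signal[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n)) for k in range(n)]

def sharpen(signal):
    """Unsharp mask: boost each sample relative to its circular neighbors."""
    n = len(signal)
    return [2 * signal[t] - 0.5 * (signal[(t - 1) % n] + signal[(t + 1) % n])
            for t in range(n)]

x = [1.0, 2.0, 1.5, 3.0, 0.5, 2.5, 1.0, 2.0]  # arbitrary toy signal
spectrum = [abs(c) for c in dft(x)]
spectrum_sharp = [abs(c) for c in dft(sharpen(x))]

# The filter's gain at frequency bin k is 2 - cos(2*pi*k/n): exactly 1 at
# DC (k=0, the average is untouched) and exactly 3 at the Nyquist bin
# (k=n/2, the fastest oscillation is amplified threefold).
```

So the sharpened signal is recognizably “the same gestalt” at low frequencies while carrying an amplified high-frequency stamp, which is the sense in which a processing module can leave a readable signature in the spectrum.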

Importantly, I think this is happening all the time. What is different about high dose DMT or hard Jhanas with prominent Indra’s Net phenomenology is the extent to which individual gestalts express information about the whole experience. Consider the spectrum that goes from a completely dark and uninteresting room, to a room that is filled with parallel mirrors, beam splitters, diffraction gratings, polarizers, etc. What the room looks like doesn’t change very much as a function of lighting and head position in the first room. But in the second room, subtle changes in lighting can change the look and feel of the whole scene, as can subtle changes in head positioning or even the direction in which the eyes are pointed. In both cases the rooms are ultimately made of the same kind of “material” (atoms, physically speaking; qualia, subjectively speaking). But the second room has implicit connections and relationships that make it highly sensitive to things like the angle of lighting. The punch line, as it were, is that both physical systems are kinds of Indra’s Net, at least in a raw physical sense: every part of the dark room does indeed reflect every other part, it’s just that the information has been scrambled and largely lost. But just because the materials are not reflective or smooth doesn’t mean that on a deep physical level we don’t find a web of interdependent physical fields giving rise to the room as a whole as a “point of stability” of the system. This requires “everything reflecting everything else”. It’s just that many of these reflections aren’t very interesting or coordinated! Yet they are always there.

Likewise, even very boring and prosaic “contents of the visual field” (say, a banana, an orange… a stim toy) without any “trippiness”, I would argue, do implicitly contain the “everything reflects everything else” quality. When you see a banana contextualized by being next to an orange, the very _meaning_ of the banana changes. It becomes, in look and feel, a “banana next to an orange” rather than a “banana plain and simple”. What’s more, now that this contextual relationship has been established, we see that the same is the case for the orange. And once more, with recursion, we find that the banana starts to look like a “banana that’s next to an orange, which is next to a banana” and so on. In principle this sounds redundant. But it is not. On DMT trips, this “transitivity of context” may in fact break down. So, for example, you might find yourself contextualizing the banana by an orange, but the orange might feel like it’s coming from a space that _is not_ contextualized by the banana. At least not directly. It’s often as if the various gestalts on DMT could exist in semi-independent geometric spaces that can only interact with one another through joint attention. Thus, the Indra’s Net quality of experience is in some sense much more robust in “normal everyday life” than in the depths of an ayahuasca journey. And that is because under normal circumstances our phenomenal objects do in fact properly contextualize each other in a way that achieves closure.

On high doses of DMT, it is possible for the entirety of one’s experience to be “compressed” into a triangle, and for that triangle to then be projected back onto our experience. You see how this would be a rather unusual and special kind of mathematical object, right? We’re dealing with a situation in which materializing a projection of the whole space onto a part of it radically changes the nature of the geodesics of the space. The triangle becomes a shortcut between various points that find their shortest distance by jumping into it. Now, in really exotic states, when multiple parallel streams are re-projecting the whole experience back onto itself after each doing a unique transformation to it (say, one “rolls up the experience into a tube”, another “turns it into the surface of a sphere”, and yet another “does this weird Hopf-fibration-like foliation of the space”), you get the emergence of phenomenal spaces that are extremely interconnected and will for the most part be a once-in-a-lifetime encounter, since the combinatorial explosion of these feedback processes is so large that we often have no hope of reconstructing specific and weird corner cases.

Harmonic Simplification

Hundreds of spectral holograms can coexist in the shared screen at once. They do not need to collide directly. They are controlled by different modules, but they do “collapse” and get pushed into the same screen, which tries to reconcile/compile them into a single “point of view”. There are two steps. First, the system tries to flatten all of the holograms in the main screen. Then, the system lifts all of the subsystems that didn’t find a clear fit with each other into a higher-dimensional workspace where the more fine-grained information is computed (and where many more kinds of rotations are available to do so). This way, the screen, in light of the multiple commenting parallel streams that “lift it”, can dynamically transform in much more general ways than what the screen itself could afford geometrically on its own. In that space their spectra interact more directly: modes beat against modes, and compatible components find strange projections (along higher-dimensional transformations) that allow them to click together. The screen’s own low-frequency harmonics act as a constraint (they amplify the 2D and 3D symmetries found among the gestalts as presented in the screen, cf. our computational model of cessation) and work as selection pressures for patterns that fit the logic of 3D space. Anything that persists must couple to, and be consistent with, the global modes of the screen (imposing familiar geometry), as well as the constraints carried in/imported by each of the semi-independent modules.
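A cartoon of the two-step reconciliation, under the (assumed) reading that “flattening” means projecting each gestalt onto the screen’s two dimensions, and “lifting” means escalating the ones whose projection loses too much of their structure. The threshold and the projection are stand-ins I chose for illustration:

```python
import numpy as np

def flatten_then_lift(gestalts, fit_threshold=0.9):
    """Toy two-step reconciliation: (1) project each high-dimensional
    gestalt onto the 2D 'screen'; (2) any gestalt whose projection
    keeps too little of its energy (a poor flat fit) is 'lifted' to
    the full-dimensional workspace for further processing."""
    flattened, lifted = [], []
    for g in gestalts:
        proj = np.zeros_like(g)
        proj[:2] = g[:2]  # keep only the screen's two dimensions
        fit = np.linalg.norm(proj) / (np.linalg.norm(g) + 1e-12)
        (flattened if fit >= fit_threshold else lifted).append(g)
    return flattened, lifted

flat = np.array([3.0, 4.0, 0.1, 0.0])  # lives mostly in the screen plane
deep = np.array([0.1, 0.2, 5.0, 5.0])  # carries mostly off-screen structure
screen, workspace = flatten_then_lift([flat, deep])
assert len(screen) == 1 and len(workspace) == 1
```

The point of the sketch is only the sorting criterion: gestalts that survive flattening stay on the screen, and the rest get the extra rotational freedom of the higher-dimensional space.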

When a stable configuration that ties together multiple other gestalts in a clean composition is found, the circuit produces a simplified gestalt that stands in for the group. In some cases it replaces them, but more typically the “summary representation” works as a kind of leader of the gestalts it’s summarizing. Alas, all gestalts are decaying, so the visible and impactful ones are only the most recent summaries. The summary gestalt also carries spectral content that matters for downstream coupling (how to “get along with the current screen as a whole”) and drops detail that would only introduce new conflicts. That surrogate then re-enters the loop as a new gestalt with its own spectral signature. The process is recursive, which makes most of experience a strange process where summaries compose with other summaries, and the screen converges toward a standing wave that is both globally coherent and locally consistent. The “infinite reflections in the eyes of beings” inside Indra’s Net, e.g. “spider eyes” (eyes reflecting eyes, etc.), move in a way that is consistent both with the local geometry of the main screen (of access consciousness) and with the geometry of the network of connections and reflections. When you move an eye in an Indra’s Net, you move the _whole_ Net.
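As a toy model of this summarization loop (thresholds, decay rates, and the consonance measure are all mine, purely illustrative): gestalts are amplitude-tagged spectra, everything decays each cycle, faded gestalts drop out, and any pair that resonates strongly enough spawns a fresh full-amplitude summary (their mean spectrum) that re-enters the pool, so the most visible gestalts are always the most recent summaries.

```python
import numpy as np

def consonance(a, b):
    # toy: cosine similarity between two gestalts' spectra
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def compression_step(gestalts, threshold=0.9, decay=0.5, floor=0.05):
    """One cycle of the toy 'compression engine'.

    gestalts: list of (amplitude, spectrum) pairs. Everything decays,
    faded gestalts drop out, and each strongly-resonant pair emits a
    fresh full-amplitude summary that re-enters the pool."""
    survivors = [(amp * decay, g) for amp, g in gestalts if amp * decay > floor]
    summaries = [
        (1.0, (survivors[i][1] + survivors[j][1]) / 2)
        for i in range(len(survivors))
        for j in range(i + 1, len(survivors))
        if consonance(survivors[i][1], survivors[j][1]) > threshold
    ]
    return survivors + summaries

pool = [(1.0, np.array([1.0, 0.0])), (1.0, np.array([0.99, 0.1]))]
pool = compression_step(pool)
# two decayed parents plus one fresh summary standing in for them:
assert len(pool) == 3 and pool[2][0] == 1.0
```

Iterating `compression_step` is what makes the process recursive: summaries of summaries accumulate while their parents fade below the floor.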

In ordinary mindstates gestalts have short half-lives, so the loop clears quickly and the screen doesn’t tend to have long-range temporal self-interactions. High-energy conditions such as high-dose DMT or hard Jhanas extend those half-lives (cf. Tracer Effects). More gestalts remain in the screen for longer, more summaries are formed, and more couplings between gestalts become possible. The result is a scene where parts model each other and the whole, and then re-express it in transformed forms that interact with one another. This is the functional core of Indra’s Net phenomenology as I currently see it. And I believe it can come about naturally in such an optical circuit.

The Multitude Behind the Screen

We typically think of the screen of consciousness like this: you are just one witness looking at it. But what if it’s actually being broadcast to hundreds of different locations at once? And what if every one of those locations has a specialized intelligence that knows how to identify faces/mechanisms/connections on the screen and overlay that information on top of it for everyone else to see?

Neither recurrence nor resonance can solve the phenomenal binding problem, but if consciousness is a standing wave pattern trapped in a TIR pocket, then beam splitters that allow different modules to work simultaneously on a shared space just might.

From Lehar’s Cartoon Epistemology

Each of these specialized processing locations generates its own “interpretation of the scene”. Effectively, taking the shared space and applying specialized filters (try to resonate with it in a bunch of ways and see what sticks!), in turn modifying it in real time and contributing additional gestalts to the collective mix. Face recognition modules stamp facial harmonics onto visual patterns. Motion detection systems add their characteristic rhythms. Mood modules add jitter or laminar flow to attention. Memory systems contribute resonant modes that connect current perceptions to stored patterns. Emotional processing centers overlay affective spectral information that colors the entire scene (cf. citta).

The beam splitter is multimodal. The signal gets split and sent simultaneously to somatic processing modules, auditory systems, and other sensory domains. Each domain receives the same fundamental holistic information (the _entire_ experience!) but processes it according to its own characteristic geometry, topology, and harmonic features. There’s likely a master screen that combines the three primary modalities (visual, somatic, and auditory), each contributing its own spectral signature to the unified conscious experience.
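A sketch of the broadcast step under the simplest possible reading: every module receives a full copy of the screen, applies its own transform, and the returns are superposed back with a small coupling gain. The two module transforms here are stand-ins I made up, not claims about what the actual modules compute:

```python
import numpy as np

# hypothetical module filters: each receives the WHOLE screen and
# returns a transformed copy (its "commentary")
def edge_module(screen):    # sharpens: emphasizes local contrast
    return screen - np.roll(screen, 1, axis=0)

def smooth_module(screen):  # laminar flow: local averaging
    return (screen + np.roll(screen, 1, axis=1)) / 2

def broadcast_and_merge(screen, modules, gain=0.1):
    """Beam-splitter step: every module gets a full copy of the screen,
    transforms it, and the returns are superposed back onto the shared
    screen (each weighted by a small coupling gain)."""
    returns = [m(screen.copy()) for m in modules]  # parallel full copies
    merged = screen + gain * sum(returns)
    return merged / (np.abs(merged).max() + 1e-12)  # renormalize amplitude
```

The key design choice the sketch highlights is that each module sees the whole scene rather than a tile of it: the split is in routing, not in content.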

Crucially, this conceptual framework might articulate the phenomenology we observe in tactile-visual synesthesia through spectral principles (cf. Roger Thisdell on Pure Perception). Synesthetic states can be thought of as “solitons” of the system: self-reinforcing wave packets that maintain their coherence while propagating their spectral information to the rest of the field across modalities. These solitons resonate with one another and with the broader spectral ecosystem in the screen, integrating their interactions and, in turn, locking the gestalts contributed by different modules into stable multi-modal gestalts.

The sense of “Autonomous Entities”, and even more strikingly, the feeling of being a multitude on DMT might come from this mechanism becoming more “transparent”. The screen is always broadcast to many locations, but at baseline only a few have edit rights, with a strong and smart filter gating what reaches the authoritative version. On DMT many (perhaps most?) streams gain editing privileges at once, so an ecosystem of patterns grows in the shared space and coordinates through the screen without the intermediate central organizer (ego?) filtering who talks to whom. This results in complex subagents interacting through the medium that can plot for and against you. Thus the framework that accounts for Indra’s Net also explains Autonomous Entities: the competing clusters of coherence on DMT form hierarchical networks that bootstrap semi-parallel agency. As Steven Lehar hypothesizes (personal communication), these entities are facets of yourself: the central screen is being beamed to separate modules, each of which “witnesses” the whole scene, processes it, and then comments by beaming transformed gestalts back to the screen. Under normal conditions few streams are active; with DMT’s coupling kernel you may be “opening half the streams at once” (chaotically and hierarchically), creating literally “more witnesses of your experience.” Streams come together that usually don’t co-exist, and must thus negotiate how agency will be distributed among them.
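The “edit rights” picture can be caricatured in a few lines (the stream names and gain are hypothetical): every stream produces a commentary on the broadcast, but only gated streams get to write back onto the authoritative screen, and the DMT condition simply opens the gate for (nearly) everyone at once.

```python
import numpy as np

def merge_with_gate(screen, returns, edit_rights, gain=0.1):
    """All streams see the broadcast; only streams with edit rights get
    their commentary superposed back onto the shared screen."""
    for stream_id, commentary in returns.items():
        if edit_rights.get(stream_id, False):
            screen = screen + gain * commentary
    return screen

screen = np.zeros(4)
returns = {"faces": np.ones(4), "motion": np.ones(4), "entity_37": np.ones(4)}
baseline = {"faces": True, "motion": True}  # few streams can edit
dmt = {name: True for name in returns}      # (nearly) all can edit
# more editors means more commentary reaching the authoritative version:
assert merge_with_gate(screen, returns, dmt).sum() > \
       merge_with_gate(screen, returns, baseline).sum()
```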

A bit like the kid behind a reporter saying “mom! I’m on tv!” – many subagents can now broadcast their existence to the whole organism and seek like-minded shards to work on (artistic? political? cosmic?) projects with. Not all the shards understand each other’s communication style, so there is a lot of cross-talk that goes unrecognized by the whole yet is happening beneath the surface.

This way, the entities we encounter can be thought of as different parts of yourself gaining editing privileges on a shared space whose control room is usually locked and safeguarded. It is a multitude in the same way that you’re always already a multitude. But you’re usually following an algorithm that prevents “multiple parts talking at once”; with DMT that system is gone.

The Tracer Effect in Light of the Hologram Collective

As briefly touched upon already, on DMT (and other psychedelics/exotic states of consciousness), sensations (and gestalts) don’t decay at their normal rate. Every sensation you experience tends to flicker at a high frequency and linger for a while (depending on dose, this could be over several seconds). These “tracers” hang around as afterimages that characteristically flicker in the 10–40 Hz range as they interact with one another. When the process that effectively works as a “compression engine” (gestalts summarizing pre-existing gestalts) tries to replace a cluster of gestalts with their simplified proxy, the older ones are still present and spectrally active (meaning their vibrations still condition the screen and one another). The screen now contains both the compressed summary AND its constituent parents, so the next compression cycle captures the recursive echoes of patterns that would have vanished under normal circumstances (cf. don’t look at cauliflowers while on DMT!). It doesn’t take much imagination to see how this could lead to “fractal-like” patterns.

Overall, this creates a spectral feedback loop, where each new compression inherits more and more afterimages from previous cycles (until it reaches a dose-dependent homeostatic level). Instead of an orderly hierarchy of representations with conventional order, you get a sprawling pattern of self-referential holograms and time-loops, each quoting fragments (and partial impressions) of earlier generations, all resonating and cross-modulating each other. The compression engine, as it were, starts feeding on its own history, creating recursive patterns that reference themselves in increasingly complex ways. One of the key ingredients for the fractal quality of Indra’s Net!
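A toy that isolates just the half-life ingredient (the decay rates and visibility floor are arbitrary choices of mine): one new sensation enters per step, every afterimage decays multiplicatively, and the pool of still-visible echoes — the raw material available to the next compression cycle — swells sharply as the half-life lengthens.

```python
def visible_tracers(decay, steps=30, floor=0.05):
    """Count afterimages still above the visibility floor at steady state.

    Each step: all existing gestalts decay by `decay`, faded ones drop
    out, and one fresh sensation (amplitude 1.0) enters the screen."""
    pool = []
    for _ in range(steps):
        pool = [a * decay for a in pool if a * decay > floor]
        pool.append(1.0)
    return len(pool)

# ordinary decay clears the loop quickly; DMT-like decay leaves a long
# tail of spectrally active parents for the compression engine to quote
assert visible_tracers(decay=0.95) > visible_tracers(decay=0.5)
```

With fast decay the pool settles at a handful of echoes; with slow decay essentially every past sensation within the run is still spectrally active, which is the precondition for the self-referential feedback described above.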

Collective Harmony in Emergent Gestalts

Finally, any discussion of this process would be incomplete without at least mentioning valence. Individual holograms both float independently and organize themselves into gestalt collectives. These collectives develop their own characteristic resonant modes, creating new spectral patterns that can influence the entire system from the top down. When you recognize a face, you are doing more than combining features such as eyes, nose, and mouth. Really, the face is a higher-order gestalt: a collectively interlocked “metagestalt” that has genuine causal power over how subsequent processing goes. The gestalts that make it up compromise a little on their own characteristic frequencies so that they can interlock as a group and genuinely form something more (and different) than the mere superposition of the parts. Importantly, each gestalt (of any order) tends to have both an intrinsic valence and a valence in relation to the other gestalts present. I would posit that the intrinsic valence is the result of the gestalt’s internal consonance, dissonance, and noise signature (CDNS). Namely, how would this vibrate if it were the only element in the screen? Whereas valence in relation to other gestalts is the result of mutual consonance, dissonance, and noise between the gestalts.
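As a cartoon of the “intrinsic valence from internal consonance” idea, here is an ad-hoc stand-in for a CDNS score, loosely inspired by just intonation: score each pair of a gestalt’s modes by how well their frequency ratio is approximated by a small-integer fraction, and average over pairs. The scoring function and its weights are inventions for illustration, not the actual CDNS formalism.

```python
from fractions import Fraction

def pair_consonance(f1, f2, max_den=8):
    """Toy consonance of two frequencies: high when their ratio is close
    to a simple small-integer fraction (as in just intonation)."""
    ratio = max(f1, f2) / min(f1, f2)
    approx = Fraction(ratio).limit_denominator(max_den)
    error = abs(ratio - float(approx))
    complexity = approx.numerator + approx.denominator
    return 1.0 / (1.0 + 20 * error + 0.1 * complexity)

def intrinsic_valence(modes):
    """CDNS-flavored intrinsic valence: mean pairwise consonance of the
    gestalt's own modes ('how would this vibrate if it were alone?')."""
    pairs = [(a, b) for i, a in enumerate(modes) for b in modes[i + 1:]]
    return sum(pair_consonance(a, b) for a, b in pairs) / len(pairs)

harmonic = [200.0, 300.0, 400.0]  # 2:3:4 — simple ratios, consonant
crowded = [200.0, 207.0, 213.0]   # near-unison cluster — rough, dissonant
assert intrinsic_valence(harmonic) > intrinsic_valence(crowded)
```

Relational valence would then be the same kind of score computed across the modes of *different* gestalts rather than within one.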

Indra’s Net valence tends to be pretty extreme. Usually positive (or very positive), but at times negative or very negative. Yes, it is likely the case that if you want to pack as much consonance (mystical choruses, interdimensional massages, etc.) as possible in a finite volume like our screen of consciousness, creating a complex web of fractal connections probably allows you to maximize the number of pleasant relationships. Alas, be warned that fractal dissonances lurk in Indra’s Net too, and a “fractured”, not-quite-complete Indra’s Net can be really disconcerting in some ways. It’s possible that peak positive valence resides in minimal-information-content experiences (as Michael Johnson’s Symmetry Theory of Valence posits), so high-energy, high-symmetry states like those of 5-MeO-DMT are more likely to be leads for peak pleasure states than those catalyzed by DMT or similar. In either case, both the valence (and specifically aesthetic!) value as well as the computational significance of Indra’s Net keep it on the short list of most interesting states to study.

Discussion and Conclusions

Let’s recap. In our non-linear optical circuit, each iteration runs the same loop: the screen copies the whole scene to many modules, they transform their copies, the returns are then projected back onto the screen, and what fits with everything else stays. This iteration-by-iteration “handoff” between the modules and the screen as a whole gives continuity: small overlaps between iterations keep motion smooth. The system tends toward a few stable objects because it keeps spectra that cooperate with each other and lets go of the rest. The screen is not just a display, because it turns out to be where useful compute happens. Namely, it is where different modules can see each other’s work in real time and negotiate together how to transform the scene in order to fit the constraints of both the screen and each other.

Radical state changes affect how this loop behaves. With altered coupling dynamics, streams running at their own speed can lock to one another in the presence of strong kernel changes (e.g. when the “DMT coupling kernel” is applied indiscriminately to many systems at once). With tracers, the feedback intensifies across iterations and the negotiation becomes visible on the screen: edges, colors, textures, posture, points of view, all trying to fit with each other. By default this tends towards hyperbolic geometry (as the gestalts drift into a more relaxed metric so that all of their idiosyncratic distances to one another can be embedded in some space and the gestalts get stitched together). But even more interestingly, when many modules hold the whole scene at once and write back versions that still predict it, you get Indra’s Net: each patch shows the whole through its own lens, and pulling on any part pulls on the rest too. When more streams get edit rights at the same time, in tandem with the tracer effects, the modules negotiate domains of influence by communicating through the screen and developing agent-like qualities. They all see the same broadcast, process it in their own way, and comment on it by projecting their gestalts back onto the screen. They feel alien because the usual gate that merges commentaries is relaxed, so their “signatures” stay distinct and you can watch them interact and develop new kinds of languages mid-flight.

We are in early days of BaaNLOC, but I am optimistic that it won’t take long for us to be able to code simulations of this optical circuit (and many variants) and then test whether they generate recognizably-DMT-like dynamics. From playing with toy models (to be released soon), I think we’re on track. But much remains to be done. Stay tuned 🙂

On the Evolution of the Phenomenal Self (and Other Communications from QRI Sweden)

By Maggie Wassinge and Anders Amelin (QRI Sweden volunteer coordinators; see letters I & II, and letters III, IV, & V)


“QRI Law of Transhumanism”: The overall motivation of humans to solve social and mental problems will remain much higher than the motivation to solve physics problems. The human performance in solving social and mental problems will remain much lower than the performance in solving physics problems. This continues until social and mental problems become physics problems.

– Anders & Maggie


Letter VI: The Evolution of the Phenomenal Self

Re: Mini-Series on Open Individualism

A follow-up for the more nerdy audience could perhaps be how QRI seeks to resolve the confusion about individualism:

It often turns out that parsimony is a more useful guiding principle in science than naïve realism. This includes naïve realism about what constitutes parsimony. All relevant conditions must be taken into account, and some conditions are unknowns, which blurs the picture. Occam’s razor is powerful but more like a Samurai sword: you need great skill to use it well.

Compare the state-space of consciousness with the state-space of chemistry known to humans: there is biochemistry and there is other chemistry. They manifest quite differently. However, parsimony favors that at the fundamental level of organization things reduce to a small set of rules which are the same for all of chemistry. This is now known to indeed be the case but was not always so. Rather, it tended to be assumed that some extra factor, a “life-force”, had to be involved when it comes to biochemistry.

Biochemistry has been evolutionarily selected for performance on a most formidable problem: that of self-replicating a self-replicator. It takes a large number of steps and high precision at each step. Only particular sequences of steps lead to normal cell function, and things are always open to getting corrupted. Take viruses, for instance.

Normal function of a brain is somewhat analogous to normal function of a cell. Evolution has selected for brains which produce the experience of continuity as a unique agent self. This is probably one of the harder tasks that conscious intelligence has solved, corresponding to the advanced parts necessary for reproduction in a cell. It is probably about as unusual in the state-space of consciousness as cellular replication is in the state-space of chemistry. However, the state naïvely feels like it is foundational to everything, which can make you confused when reflecting upon it. It can get even more confusing when you consider the strong possibility that valenced experiences of “good or bad” are much more commonplace in the state-space, perhaps more like transfer of electric charge is commonplace in chemistry.


Self-replicating a self-replicator

You can test this by altering (mental) system properties via meditation or psychedelics. Is “individuality” or “valence” more persistent under perturbation? It’s much harder to get rid of valence, and indeed, the highly altered state of a brain on high doses of 5-MeO-DMT gets rid of the agent self altogether but preserves and even enhances valence, interestingly more often in the positive than the negative direction. It’s like jumping from biochemistry to pyrotechnics.


Self-less 5-MeO-DMT “void”: The state is as different and exotic from normal everyday evolved consciousness as the chemistry of explosive pyrotechnics is to evolved biochemistry.

Naïve realism would hold that the sensations of “one-ness” experienced in certain highly altered states of consciousness feel the way they do because they somehow expand to include other entities into a union with yourself. What is likely to really be going on could be the opposite: there is no “self” as a reality fundament but rather a local complex qualia construct that is easy to interfere with. When it (and other detail) goes away there is less mental model complexity left. A reduction in the information diversity of the experience. Take this far enough and you can get states like “X is love” where X could be anything. These can feel as if they reveal hidden truths, for you obviously had not thought that way before, right? “X is love, wow, what a cosmic connection!”


Letter VII: Fractional Crystallization to Enhance Qualia Diversity

Some more chemistry: is there in qualia state-space something analogous to fractional crystallization? When a magma solidifies relatively rapidly, most of the minor elements stay in solid solution within a few major mineral phases. You get a low-diversity assemblage. When the magma solidifies slowly, it can yield a continuum of various unique phases all the way down to compounds of elements that were only present at ppb levels in the bulk. Crucially, for this to work well, a powerful viscosity reducer is needed. Water happens to fit the bill perfectly.

Consider the computational performance of the process of solidification of a thousand cubic kilometer plutonic magma with and without an added cubic kilometer of water. The one with the added water functions as a dramatically more efficient sorting algorithm for the chemical element constituents than the dry one. The properties of minor minerals can be quite different from those of the major minerals. The spectrum of mineral physical and chemical properties that the magma solidification produces is greatly broadened by adding that small fraction of water. Which nature does on Earth.

It resembles the difference between narrow and broad intelligence. Now, since the general intelligence of humans requires multiple steps at multiple levels, which takes a lot of time, there might need to be some facilitator that plays the role water does in geology. Water tends to remain in liquid form all the way through crystallization, which compensates for the increase in viscosity that takes place on cooling, allowing fractional crystallization to go to completion in certain pegmatites.

It seems that, in the brain, states become conscious once they “crystallize” into what an IIT-based model might describe as feedback loops (some physicalists model this as standing waves). Each state could be viewed as analogous to a crystal belonging to a mineral family and existing somewhere on a composition spectrum. For each to crystallize as fast and distinctly as possible, there should be just the right amount of a water-activity equivalent. Too much and things stay liquid; too little and no unique new states appear.

It may perhaps be possible to tease out such “mental water” by analyzing brain scan data and comparing them with element fractionation models from geochemistry?

Eliezer Yudkowsky has pointed out that something that is not very high hanging must have upgraded the human brain so that it became able to make mental models of things no animal would (presumably) even begin to think of. Something where sheer size would not suffice as an explanation. It couldn’t be high hanging since the evolutionary search space available between early hominids and homo sapiens is small in terms of individuals, generations, and genetic variability. Could it be a single factor that does the job as crystallization facilitator to get the brain primed to produce a huge qualia range? For survival, the bulk of mental states would need to remain largely as they are in other animals, but with an added icing on the cake which turned out to confer a decisive strategic advantage.

It should be low hanging for AI developers, too, but in order to find it they may have to analyze models of qualia state-space and not just models of causal chains in network configurations…


Letter VIII: Tacking on the Winds of Valence

We just thought of something on the subjects of group intelligence and mental issues. Consider a possible QRI framing: valence realism is key to understanding all conscious agency. The psyche takes the experienced valence axis to be equal to “the truth” about the objects of attention which appear experientially together with states of valence. Moment to moment.

Realism coupled with parsimony means it is most likely not possible for a psyche to step outside their experience and override this function. (Leaving out the complication of non-conscious processes here for a moment). But of course learning does exist. Things in psyches can be re-trained within bounds which differ from psyche to psyche. New memories form and valence set-points become correspondingly adjusted.

Naïvely it can be believed that it is possible to go against negative valence if you muster enough willpower, or some such. Like a sailboat moving against the wind by using an engine. But what if it’s a system which has to use the wind for everything? With tacking, you can use the wind to move against the wind. It’s more advanced, and only experienced sailors manage to do it optimally. Advanced psyches can couple expectations (strategic predictive modeling) with a high valence associated with the appropriate objects that correlate with strategic goals. If strong enough, such valence gives a net positive sum when coupled with unpleasant things which need to be “overcome” to reach strategic goals.

You can “tack” in mental decision space. The expert psycho-mariner makes mental models of how the combinatorics of fractal valence plays out in their own psyche and in others. Intra- and inter-domain valence summation modeling. We’re not quite there yet, but QRI is the group taking a systematic approach to it. We realize that’s what social superintelligences should converge towards. Experiential wellbeing and intelligence can be made to work perfectly in tandem for, in principle, arbitrarily large groups.

It is possible to make a model of negative valence states and render the model to appear in positive valence “lighting”. Sadism is possible, and self-destructive logic is possible. “I deserve to suffer so it is good that I suffer”. The valence is mixed but as long as the weighted sum is positive, agency moves in the destructive direction in these cases. Dysfunction can be complicated.

But on the bright side, a formalism that captures the valence summation well enough should be an excellent basis for ethics and for optimizing intelligences for both agency and wellbeing. This extends to group intelligences. The weight carried by various instantiations of positive and negative valence is then accessible for modeling and it is no longer necessary to consider it a moral imperative to want to destroy everything just to be on the safe side against any risk of negative experience taking place somewhere.


Is it possible to tack on the winds of group valence?

At this early stage we are however faced with the problem of how influential premature conclusions of this type can be, and how much is too much. Certain areas in philosophy and ideology are, to most people, more immediately rewarding than science and engineering, and cheaper, too. But more gets done by a group of scientists who are philosophically inspired than by a group of philosophers who are scientifically inspired.

Could this be in the ballpark-ish?

Stay safe and symmetric!

– Maggie & Anders

QC Coronavirus Edition: Preventing Pandemics by Living on Toroidal Planets and Other Cocktail Napkin Ideas

Here is what we’ve gotta do.

I want every strategy we’ve got on Near Earth Object Collision, OK?

Any ideas, any programs, anything you’ve sketched on a pizza box or a cocktail napkin…

Armageddon (1998 film, when NASA realizes that there are 18 days left before the asteroid hits the Earth)

This Whole Thing

On January 20th someone shared, in a facebook group that I’m a part of, four facts about an emerging viral infection in China: (1) high death rate, (2) high contagion rate, (3) long incubation periods, and (4) the fact that it appeared uncontained. Despite the (at the time) relatively low number of cases, those four facts did not seem to paint a pretty picture of what was about to happen.

This was immediately alarming to a lot of people in my circles, and for good reason. Matthew Barnett, Justin Shovelain, Dony Christie, and Louis Francini sounded alarms as early as mid-January, and the rest of the EA and rationalist cluster followed suit. It makes sense that people in this cluster would be concerned early on, as many of them have looked at global catastrophic risk scenarios for years and were already well aware that the world was unequipped to deal with an infectious disease with all of the above four properties. Pandemic preparedness programs have so far relied on luck. For instance, in his 2015 TED talk “The next outbreak? We’re not ready”, Bill Gates uses as an example the 2013 Ebola outbreak: “The problem wasn’t that there was a system that didn’t work well enough. The problem was that we didn’t have a system at all.” Accordingly, that particular outbreak avoided becoming a disaster through sheer luck: the disease only becomes contagious when you are already very sick, and it didn’t hit a major urban area, so containing it was possible. But this time around we don’t seem to have the same luck.

Since then, I’ve seen many thought leaders I respect succumb to focusing on this topic: Robin Hanson, Eliezer Yudkowsky, Paul Graham, Tyler Cowen, Sarah Constantin, Scott Alexander, Scott Aaronson, Joscha Bach, Ryan Carey, William Eden, Robert Wiblin, etc. Not to mention the way these people are publicly responding to each other and building a parallel narrative on a higher level of complexity than most everybody else. These and many other well-respected intellectuals have been going on and on about the situation for over a month now. An exponentially growing curve in its early stages may not be alarming to most people, but it certainly was to people like this (P.s. 3Blue1Brown, Kurzgesagt, and Mark Rober also recently joined the conversation).

89891338_10158153144883554_5533215167824789504_n

Image by Evan Gaensbauer (March 2020 Dank EA Memes banner)

This all adds up to a vibe of a countdown to Armageddon: “X days until hospitals are overwhelmed, Y days until a million people die, Z days until a vaccine will be found”. In line with this perceived, if not frighteningly real, urgency, we’ve seen countless Facebook groups, subreddits, and forums scouting for novel ideas and projects to help above and beyond what the governments of the world are already doing (e.g. Covid19RiskApp, Give Directly Response, Covid Accelerator [of technology to decelerate the spread [possibly a terrible or brilliant branding]], List of Predictors, and Corona Variolation).

march_19_2020_spread_pandemic

As of March 20 2020

I personally gave a lot of thought to pandemics several years ago (in college I was on the fence between working on pandemic prevention and consciousness research as a career), so my immediate thought when learning about the virus and its properties was “we are screwed, this can’t be contained with how the world is currently set up”. While containment might have been possible at the very beginning with some luck, it very quickly becomes unmanageable. That said, I’d like to explore here ways in which the world could be realistically modified in order to contain, mitigate, and ultimately reverse the spread of novel contagious diseases including this one. After all, the WHO director general said on March 9th: “The rule of the game is: never give up.” So, well, let’s give it some more thought. I hence offer my ‘sketches on a cocktail napkin’ type of ideas in case they find any application:

Introduction

Let us start by breaking down “social networks” into (1) contact networks, and (2) information networks:

  1. Contact networks are weighted undirected graphs where each node is a person and each edge encodes the frequency and intensity of the contact between the people it connects.*
  2. Information networks are weighted directed graphs that encode the amount of information transmission that there is between pairs of people. To a large extent, contact networks are subsets of information networks.**
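As a minimal sketch of the two definitions above (plain Python; the names, weights, and the subset check are all illustrative assumptions of mine, not part of any real dataset):

```python
# (1) Contact network: weighted UNDIRECTED graph.
#     Key = unordered pair of people, value = contact intensity
#     (say, hours of physical proximity per week).
contact = {
    frozenset({"ana", "bo"}): 5.0,
    frozenset({"bo", "cy"}): 1.5,
}

# (2) Information network: weighted DIRECTED graph.
#     Key = (sender, receiver), value = information flow.
information = {
    ("ana", "bo"): 5.0, ("bo", "ana"): 5.0,  # contact implies two-way info flow
    ("bo", "cy"): 1.5, ("cy", "bo"): 1.5,
    ("dee", "ana"): 9.0,                     # one-way broadcast: info without contact
}

def contact_subset_of_information(contact, information):
    """Check the claim that contact networks are (roughly) subsets of
    information networks: every contact edge should appear as a pair
    of directed information edges."""
    return all(
        (u, v) in information and (v, u) in information
        for u, v in (tuple(e) for e in contact)
    )

print(contact_subset_of_information(contact, information))  # True
```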

Contact networks are what matters for modeling infectious disease transmission. Despite the constitutionally granted freedom of assembly, one can posit that if the risks to the public are high enough, it is justified to place some constraints on the nature and properties of contact networks. In a free society that truly grasps the danger of pandemics and is determined to squash them at the very beginning, contact networks might require some degree of top-down control. Perhaps, if we are serious about future pandemic prevention, we could re-conceptualize freedom of assembly as pertaining to information, rather than contact, networks.***

d3297191270ea5bca8db652e977a6d57

So in what ways could a contact network be pandemic-safe? As an intuition pump for what I’ll be discussing further below, I’d like you to consider what it might be like to live in the ringworld of the original “Halo” (and in Ringworld too). Assume that unrestricted travel in Halo is limited to land roaming with a maximum speed, and that in order to use a spacecraft or tube across an arc of the circle, you need to be thoroughly tested and quarantined in-between. With these constraints, we would naturally infer that the structure of the contact network of the people in this world would be embedded in the ring itself. Meaning that if an infectious disease originates somewhere on the ringworld, containing its spread would be as easy as blocking movement on two small fronts around the epicenter of the outbreak. This even allows you to control and ultimately fully suppress diseases with long incubation periods. It is just a matter of estimating how long the incubation period is, and quarantining the entire region of “furthest possible transmissibility”.

What’s more, given the overall circular geometry of the world, after a brief period of quadratic growth of the epidemic (as concentric circles expand around the epicenter) one would expect to see a threshold after which there is merely linear growth in the number of cases as a function of time!
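The ring intuition can be sanity-checked with a toy worst-case spread model (plain Python; both graphs and the deterministic "everyone infects all neighbors each step" rule are my own simplifying assumptions):

```python
def spread_curve(adj, source, steps):
    """Deterministic worst-case spread: every infected node infects
    all of its neighbors each step. Returns cumulative cases per step."""
    infected = {source}
    curve = [1]
    for _ in range(steps):
        infected |= {w for v in infected for w in adj[v]}
        curve.append(len(infected))
    return curve

def ring(n):
    """Ringworld-like contact network: a single large cycle."""
    return {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}

def binary_tree(depth):
    """A tree-like network where the infection frontier doubles each step."""
    adj, nodes = {}, 2 ** (depth + 1) - 1
    for i in range(nodes):
        adj.setdefault(i, [])
        for c in (2 * i + 1, 2 * i + 2):
            if c < nodes:
                adj[i].append(c)
                adj.setdefault(c, []).append(i)
    return adj

print(spread_curve(ring(1000), 0, 5))       # [1, 3, 5, 7, 9, 11]: linear growth
print(spread_curve(binary_tree(10), 0, 5))  # [1, 3, 7, 15, 31, 63]: exponential growth
```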

Network Geometry as a Containment Strategy

To a first approximation, the single most important problem to overcome for containment is the exponential growth of the early stages of an outbreak. Of course, in some cases exponential growth is not itself the problem: an R0 = 1.001 leads to exponential growth, but one so slow that it can be easily dealt with. Likewise, sub-exponential growth can still be unruly, as with polynomial growth with an exponent of 20. But to a first approximation, I would argue that if you can get rid of exponential growth you can manage an outbreak. The example above of a Ringworld shows that exponential growth in contact networks can be slowed all the way down to linear growth at relatively early stages. Similarly, “thin” toroidal planets would also enable easy containment of outbreaks (Anders Sandberg‘s amazing work on the physics of toroidal planets finally pays off! It remains to be seen when his work on stacking high-dimensional polytopes finds real-world applications).
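To make the growth-rate comparison concrete, a few lines of arithmetic (plain Python; the numbers are purely illustrative, not epidemiological):

```python
# Cases after t generations under exponential growth with reproduction number r0.
def exp_cases(r0, t):
    return r0 ** t

# R0 = 1.001 is technically exponential, but cases take ~694 generations
# to double, so in practice it is easy to manage.
doublings = next(t for t in range(10**5) if exp_cases(1.001, t) >= 2)
print(doublings)  # 694

# A degree-20 polynomial is sub-exponential, yet explosive early on:
poly = [t ** 20 for t in (1, 2, 3)]
print(poly)  # [1, 1048576, 3486784401]
```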

torusdonut2-thumb (1)

Toroidal World

But we don’t have to go all the way to high sci-fi scenarios to encounter sub-exponential growth of infections in human contact networks. You see, the Black Death happened at a time when the contact network of humanity had a quasi-quadratic structure at the largest of scales. Villages almost certainly had a scale-free structure (e.g. the priest touching everyone once a week and the lone serf perhaps only interacting with two people a week), but once you look at the structure at scales above the village, you would find routes between neighboring villages weaving a planar graph with a 2D Euclidean geometry. The trade routes, though, provided an exception, and in the end they turned out to be key for the spread of the plague. That said, in the absence of cars, trains, or airplanes, the maximum speed of transmission was seriously limited. Historians can tell when different parts of Europe got the plague because it really took a long time to spread; we are talking about years rather than weeks.

1920px-1346-1353_spread_of_the_Black_Death_in_Europe_map.svg

So imagine having a contact network structure characteristic of medieval times, but with an information network structure akin to the ones we currently have. Then controlling the Black Death would be a piece of cake! You would simply need to close the central trade routes, track down which villages are already infected, and put a perimeter around them.

Ok, so how do we generalize this idea to modern times in a realistic way? I think we should perhaps think outside the box here. Remember, the core intention is to make the spread of an infectious disease not behave in an exponential way at the beginning, so that we can “segment out” the part of the network affected (i.e. quarantine) because the “surface area” of the region is not very large. Now, most analyses of disease spread on networks focus on how realistic network features affect disease spread. For example, clustering coefficients, the steepness of the slope of power-law degree distributions, the distribution of betweenness centrality of the nodes, and so on.

In a perhaps high-modernist style approach to network engineering, one can ask how the spread of a disease would change depending on alterations we could make to the network. The simplest real-world case is the reasoning behind adding travel restrictions, which aim to block the spread between very large clusters (i.e. countries), and the closing of schools, universities, and large gatherings, which decreases the interconnectivity of each region of the network. A slightly more sophisticated version of this approach would be to come up with a “Pandemic Klout Score” for each person based on their “network influence”, and pay them to quarantine early on during an outbreak.

I actually worked at Klout as an intern in 2010, and my contributions were mostly on the following (unfortunately slightly evil, because it’s marketing) problem: “How do you maximize the spread of a commercial campaign by giving free products to people?” Klout had what they called “perks”, which was how they made money. They had contracts with other companies to give free products to “influencers” so that they talked about the perks on their social media accounts. To maximize the spread of a commercial campaign meant to distribute perks in such a way that the largest number of people made mentions of the campaign on their networks (including people who didn’t receive the free products). This is how they measured success (at least when I was there) and what the companies paid them for.

The “basic approach” would be simply to distribute the perks to people with the highest Klout scores, with the additional constraint that those people were influential on the relevant topic (e.g. if you had a popular Twitter account about “beauty and personal care” you might be a prime candidate to get a free “anti-aging sunscreen stick”, or whatever). But since you can’t actually, you know, entice Justin Bieber (the person with the highest Klout score for several years) with a free Virgin America flight and expect him to either care or talk about it on his Twitter feed, the problem ends up being substantially more complex than just “give people with high Klout the free products”. I am under an NDA about the specific algorithms and research I conducted there. But I mention this because the problem of pandemic prevention could in some sense be thought of as the inverse of the problem Klout was trying to solve. Namely: how do you use the node features of a network in order to minimize the spread of a contagious disease? The low-hanging-fruit idea here would be to simply allot money to pay people with high Pandemic Klout Scores to stay home or cut their human touch in half whenever an outbreak arises. I would expect this to be significantly better at reducing the reproductive rate of a contagious disease than choosing people at random (or even just based on how many people they interact with on a daily basis).
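The inverse-Klout idea can be sketched with a toy simulation. To be clear, this is my own illustrative construction, not Klout’s (NDA-protected) algorithms: a crude preferential-attachment graph, a deterministic worst-case spread rule, and raw degree as a stand-in for a real “Pandemic Klout Score”:

```python
import random

def ba_graph(n, m, seed=0):
    """Toy preferential-attachment ('scale-free-ish') contact network."""
    rng = random.Random(seed)
    adj = {i: set() for i in range(n)}
    repeated = []                # each node appears once per edge endpoint
    targets = set(range(m))
    for new in range(m, n):
        for t in targets:
            adj[new].add(t); adj[t].add(new)
            repeated += [new, t]
        targets = {rng.choice(repeated) for _ in range(m)}
    return adj

def cases_after(adj, removed, source, steps):
    """Deterministic worst-case spread, with `removed` nodes fully quarantined."""
    infected = {source}
    for _ in range(steps):
        infected |= {w for v in infected if v not in removed
                       for w in adj[v] if w not in removed}
    return len(infected)

G = ba_graph(2000, 3, seed=1)
src = min(G, key=lambda v: len(G[v]))    # a low-degree 'ordinary' person
hubs = set(sorted(G, key=lambda v: len(G[v]), reverse=True)[:40])  # top 'Pandemic Klout'
randoms = set(random.Random(2).sample([v for v in G if v != src], 40))

print(cases_after(G, set(), src, 4))     # no intervention
print(cases_after(G, randoms, src, 4))   # quarantine 40 random people
print(cases_after(G, hubs, src, 4))      # quarantine the 40 biggest hubs
```

Quarantining the hubs typically slows the spread far more than quarantining the same number of random people, which is the intuition behind paying high-score individuals to stay home.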

That said, given the risks and costs involved with pandemics, especially in the long term in light of bioterrorism, we should not close off the possibility of making drastic changes to humanity’s contact network for the sake of our collective wellbeing. That is, merely asking some people to stay home may not be enough. We should contemplate what it would really take to be able to fully contain any future pandemic.

In terms of large-scale network geometry, rather than just dealing with one node at a time, perhaps the key point to make is that we should really not fetishize and romanticize the “six degrees of separation” that result from the small-world structure of the modern human contact network. Yes, “it’s a small world after all“, but the song forgot to mention “and that’s what will get us all killed in the end.” Let’s not allow misguided network idealism to murder grandma. We need to make the contact network a large world, and save the small world exclusively for the information network.

Screen-Shot-2012-04-05-at-19.26.38

Intuitively, it is precisely the small-world property of our contact network that allows us to meet many new people on a regular basis, collaborate with people around the world, attend large gatherings, raves, and festivals, and travel care-free across the planet. Meaning, most people might think that changing the contact network structure to make it pandemic-proof would come at the cost of sacrificing what makes society so interesting and worth living in. I disagree. I think that such a line of thinking is just the result of a failure of the imagination. We can, I posit, have contact networks that allow you to do all of that and yet be pandemic-proof. I will argue that with intelligent top-down network engineering you can in fact achieve this. Here is my case:

Scale-Dependent Geometry

The main concept that one needs to understand for my argument is that the options for large-scale network structure go far beyond the textbook examples of small worlds, scale-free, random, planar graphs, etc. In fact, one can create all kinds of fascinating hybrid networks where the properties vary by region and scale. The examples I am about to show you play with the notion of scale-dependent geometry. Meaning that the network properties depend on the number of interconnected nodes that you are considering. In particular, I’ll break down networks in terms of their micro (1 to 1,000 nodes), meso (1,000 to 1,000,000 nodes), and macro (1,000,000 to 1,000,000,000 or more nodes) structure:

QLE_ELQ

QLE and ELQ

QLE

The first example is one where the structure of the network leads to quadratic spread at the micro level, linear spread at the meso level, and exponential spread at the macro level. We achieve this by arranging the nodes along a rectangular grid at the micro level. As one zooms out, the grid hits a limit on two fronts, so that the advance of an infectious disease will start growing linearly as it only has two directions to grow in (for the sake of symmetry you can glue the two fronts to make a tube, for a meso network structure akin to that of a toroidal planet). Finally, at the largest scale this network looks like a binary tree, where the growth can reach an exponential rate.

The same scheme will apply to all of the following networks. That is, the letters indicate the ordering of the types of growth for the micro, meso, and macro scales. What I will instead focus on is explaining the advantages of these structures. In this case, the case of QLE, the primary advantage is that the spread can be entirely contained by cutting connections around the epicenter. And the best part is that even if you hit the exponential scale (i.e. you start spreading from “one arm to another”) you will still have long periods of linear growth, as each “arm” will grow linearly, so cutting it will remain an option at any point. The “surface area of the spread” will remain tiny relative to the size of the network.

ELQ

A very nice property of this network is that you can have “villages” of up to 1,000 people where everyone can interact with and touch each other. Within each of these villages you have super-efficient in-person information transmission and contact hedonism without restrictions. Each of these villages would then be connected to two neighboring villages, perhaps not unlike how kids in grade school often make friends with other kids in the grades immediately above and below (and only rarely with grades that are further apart). The spread of disease would very quickly engulf each village, but thankfully that would be it. After that you would have a very slow village-by-village take-over that could be stopped by ‘cutting’ the contact channels between two pairs of villages (or four if the outbreak started at an intersection of the macro structural grid). What’s more, you could conceive of a “conveyor belt” approach where every month half of the village moves in one direction while the other half stays put. This way, over the course of years, you would still be able to get to know tens of thousands of people, party like crazy in raves touching everybody, and retain long-term friendships by coordinating with them to either move or stay. And you could do all of this while living in a pandemic-proof world!
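The village-chain idea can be prototyped in a few lines, under the simplifying assumptions (mine, not the author’s literal spec) that each village is a clique and that neighboring villages share exactly one contact edge:

```python
def elq_chain(n_villages, village_size):
    """ELQ sketch: each village is a clique (everyone touches everyone);
    villages sit on a line, joined by a single bridge edge per neighbor pair."""
    adj = {}
    def node(v, i):
        return v * village_size + i
    for v in range(n_villages):
        members = [node(v, i) for i in range(village_size)]
        for a in members:
            adj[a] = set(members) - {a}
    for v in range(n_villages - 1):
        a, b = node(v, village_size - 1), node(v + 1, 0)  # one contact point per pair
        adj[a].add(b)
        adj[b].add(a)
    return adj

def spread_curve(adj, source, steps):
    """Deterministic worst-case spread: cumulative cases per step."""
    infected = {source}
    curve = [1]
    for _ in range(steps):
        infected |= {w for v in infected for w in adj[v]}
        curve.append(len(infected))
    return curve

# One step engulfs the starting village (the E part), after which growth
# is roughly one village per two steps in each direction (the L part):
print(spread_curve(elq_chain(10, 50), source=5 * 50 + 10, steps=6))
# [1, 50, 52, 150, 152, 250, 252]
```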

LQE_EQL

LQE

This one is perhaps the least viable because it relies on most people only having contact with two others. That said, the spread would start very, very slowly, and so it might be ideal for the worst possible pandemics. At the macro level the network looks like high-dimensional cardboard boxes, where each “cardboard side” is glued at the edge with one or multiple other sides.

A “continuous” version of LQE could use hyperbolic geometry at the macro level, such as what you get when you sneak a pentagon here and there into an otherwise rectangular grid: locally you have quadratic spread, which slowly turns into exponential spread as the infection begins swallowing pentagons. (Or a few heptagons in a grid of hexagons.)

EQL

This one is pretty similar to ELQ, and you can do pretty much the same things I mentioned about ELQ. The main difference is that this structure is safer at the macro level but riskier at the meso level. So if you expect diseases to be really really contagious, then this structure might prevent “the end of the world” but it might be somewhat susceptible to “pretty bad scenarios”, while ELQ works the other way around.

LEQ_QEL

LEQ

I find this network very interesting because to build it I had to come up with the idea of connecting lots of cycles of different lengths with each other by having them share nodes. You can also easily construct a network like this by starting with a scale-free network and replacing the edges with long chains of nodes.
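The second construction mentioned above, taking a scale-free network and replacing its edges with long chains of nodes, is just edge subdivision, and can be sketched directly (the star graph below is a stand-in for a real scale-free network):

```python
def subdivide_edges(adj, chain_len):
    """Replace every edge (u, v) with a path u - c1 - ... - ck - v.
    Locally this forces linear spread (L) while preserving the original
    macro structure, e.g. a scale-free core (E)."""
    new = {v: set() for v in adj}
    fresh = max(adj) + 1          # first id available for chain nodes
    seen = set()
    for u in adj:
        for v in adj[u]:
            edge = frozenset({u, v})
            if edge in seen:
                continue          # handle each undirected edge once
            seen.add(edge)
            path = [u] + list(range(fresh, fresh + chain_len)) + [v]
            fresh += chain_len
            for a, b in zip(path, path[1:]):
                new.setdefault(a, set()).add(b)
                new.setdefault(b, set()).add(a)
    return new

# Tiny scale-free-ish example: hub 0 touching leaves 1..4.
star = {0: {1, 2, 3, 4}, 1: {0}, 2: {0}, 3: {0}, 4: {0}}
G = subdivide_edges(star, chain_len=10)
print(len(G))      # 5 original + 4 * 10 chain nodes = 45
print(len(G[0]))   # the hub keeps its macro degree of 4
```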

QEL

This would perhaps be the steel-manned version of the toroidal or ring world we discussed in the introduction. Here the infections would spread first slowly at a quadratic rate, then quickly accelerate once you reach the edges of the planar graph you start in, and finally there is a massive linear bottleneck at the macro scale. It’s like Ringworld, but where you interact with people in an interlaced braided mesh embedded inside the Ringworld rather than only in its meager inner surface.

Because each of these examples contains a “linear bottleneck” at some scale, an outbreak of a disease would be easy to contain. Which network is ideal for which kind of disease will depend on things like its incubation period and its contagion probability. But any of these examples is vastly safer, pandemic-wise, than our current contact network.

Is Biology Doing This Already?

One thing this exercise has made me wonder is whether our bodies are already using this kind of strategy. I mean, looking at QLE reminds me of the structure of blood vessels in the kidney and liver. It would make sense for evolution to identify great micro, meso, and macro network structures in order to give each organ an appropriate contact network at the scale that matters for its function, while creating network bottlenecks at other scales for protection against pathogens and the spread of cancer. In contrast, the immune system would have every reason to maximize spread at the largest scale while having compartmentalized spread at the micro scale (example: Topological Small-World Organization of the Fibroblastic Reticular Cell Network Determines Lymph Node Functionality). Finding the sub-exponential chokepoints in the human body would, I posit, give us a new angle for understanding it more deeply.

Creating a Global Human Organism

If this analysis pans out, we could perhaps think of the challenge being presented to us by SARS-CoV-2 and future pandemics as a wake-up call to “scale up the network-protective measures our bodies are taking to combat disease while maintaining functionality” all the way up to the structure of all of human society. Indeed, wouldn’t it be amazing if we coordinated to be a harmonious large-scale global organism?

Now, I am not saying we should simply adopt one of these network structures. They are just proofs of concept to show it is possible to have humanly-desirable properties that come with highly interconnected networks along with a linear (or at least sub-exponential) bottleneck at some scale. The bottleneck does not even need to be visible or detectable from the point of view of each individual!

Even if we cannot construct an ideal world from scratch, we could still try to bootstrap it from within our current world. To do so we have a number of options. I will mention two and then dive into them in greater depth. The first is the strategy of “network modification”: developing gradient-descent-style algorithms that point us to the modification of the network that would maximize a scale-specific sub-exponential bottleneck. Of course this could get stuck in local optima, but we don’t care about achieving the best configuration, just the closest one that is “good enough”. The second approach is that of “network nucleation”: bootstrapping a pandemic-protected contact network by connecting people who can prove that they do not have the disease. They could all get to know each other, and then submit a list of “people they would like to hang out with on a regular basis”. An algorithm would then optimize the network so that each person can hang out with as many others as possible while making sure the overall geometry of the network is desirable for disease containment. If we are lucky, we could even bootstrap this system all the way up to the entire planet, starting from a mixture of people who’ve demonstrably been quarantined for a long time and people who have already recovered from the disease. And since, of course, people would eventually get sick of hanging out with a restricted list of friends, they could periodically re-submit another list, and the algorithm would take this dynamic into account so that the geometry can remain stable over time.

My prediction is that the current strategies being used to reduce the spread of disease would show up as a tiny subset of the set of possible effective strategies, many of which are currently invisible (and in some sense inconceivable) to us. This is in part because, as far as I know, nobody is thinking in terms of scale-specific network geometry, and because little is known about the actual empirical structure of the human contact network. In this sense, removing super-spreaders or closing schools may be re-conceptualized as pointing in this direction, and yet perhaps may not even make the top-10 list of most cost-effective strategies. Just removing high-degree nodes in a scale-free network won’t automatically prevent exponential growth; since exponential growth is the killer, strategies directly targeted at it will probably be vastly more effective. Let’s investigate these strategies in more detail:

Option 1: Network Modifications

The first thing we should do is find out what actual contact networks look like, so that we can identify the smallest possible modifications to them that would create sub-exponential bottlenecks at some scale. I have not found a good study on this, since there really aren’t public datasets of “who is physically hanging out with whom”. Though if you were to combine, perhaps, the datasets of the USA’s NSA, the UK’s GCHQ, Russia’s FSB, China’s MSS, cellphone location information, census responses, and commercial surveillance camera data, you might be able to get a very decent version of it. In fact, there is reason to believe Israel is already in the process of constructing this dataset.

In the absence of contact network data, we can nonetheless learn from other social and information networks. In particular, the best research I’ve read about the macro-structure of complex networks comes from the lab of Jure Leskovec (I recommend watching his CS224W lectures from past years, which are all available online):

We study over 100 large real-world social and information networks. Our results suggest a significantly more refined picture of community structure in large networks than has been appreciated previously. In particular, we observe tight communities that are barely connected to the rest of the network at very small size scales; and communities of larger size scales gradually “blend into” the expander-like core of the network and thus become less “community-like.” This behavior is not explained, even at a qualitative level, by any of the commonly-used network generation models.

Leskovec et al. 2008, “Community Structure in Large Networks: Natural Cluster Sizes and the Absence of Large Well-Defined Clusters”

As you can see, large-scale analyses of real-world networks indicate that they are not adequately described by the classic, most well-known textbook structures. Rather, there seems to be a kind of “galactic shape” at the macro scale, where a highly connected giant core of overlapping communities is surrounded by loosely connected superstructures (nicknamed ‘whiskers’):

Given this structure (and assuming it generalizes to contact networks), one could divide the problem into two rough components: (1) how do you deal with the ‘whiskers’?, and (2) what do you do about the ‘galactic core’? I do not have answers here, but I do think that having more people who are good at math and computer science think about this would be very good. For what it’s worth, I have the hunch that the following two network analysis techniques in particular will be useful for tackling this problem:

  1. Spectral Graph Theory: This is a set of techniques that can help us ‘see diffusion bottlenecks in graphs’ at a glance. For instance, these techniques reveal the presence of network “chokepoints” that create insulation in heat flow. Clearly heat flow does not behave in the same way as the spread of disease, but the similarity makes it worth highlighting.
  2. Discrete Differential Geometry: An emerging field that blends differential geometry with network analysis and has shown amazing applications for graphics which can help us ‘see the curvature and dimensionality of a network around each of its nodes’ at a glance. Note: As much as I love hyperbolic spaces, I must admit that from the point of view of early pandemic prevention living in a contact network with hyperbolic geometry is a terrible idea.
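As a minimal illustration of the spectral idea, here is a sketch (numpy only) of using the Fiedler vector of the graph Laplacian; the two-clique "barbell" graph is my own stand-in for a real contact network with a chokepoint:

```python
import numpy as np

def fiedler_split(adj):
    """The Laplacian eigenvector with the second-smallest eigenvalue (the
    Fiedler vector) exposes the sparsest 'chokepoint' cut at a glance: a
    small second eigenvalue (algebraic connectivity) signals a bottleneck,
    and the vector's sign pattern names the two sides of the cut."""
    nodes = sorted(adj)
    idx = {v: i for i, v in enumerate(nodes)}
    L = np.zeros((len(nodes), len(nodes)))
    for v in nodes:
        L[idx[v], idx[v]] = len(adj[v])     # degree on the diagonal
        for w in adj[v]:
            L[idx[v], idx[w]] = -1.0        # -1 for each edge
    vals, vecs = np.linalg.eigh(L)          # eigenvalues in ascending order
    return vals[1], {v: vecs[idx[v], 1] >= 0 for v in nodes}

def clique(members):
    members = set(members)
    return {a: members - {a} for a in members}

# Two tightly-knit 'villages' joined by a single contact edge:
adj = {**clique(range(10)), **clique(range(10, 20))}
adj[9].add(10)
adj[10].add(9)

lam2, side = fiedler_split(adj)
pos = {v for v in side if side[v]}
print(lam2)  # small: the bridge is a diffusion bottleneck
print(pos)   # one side of the cut: exactly one of the two villages
```

The same quantity used here to detect a bottleneck could, in principle, be used as the objective when engineering one in.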

Flatten the Network!

One additional interesting approach for Option 1 would be to apply topological clustering techniques to the contact network so that we can identify the hubs with the least desirable network geometry and try to “flatten” them. And policy-wise, I imagine that in the long run we could improve the flattening of the contact network by encouraging people to use things like the Bumble app for dating, where you find people physically near you with whom you could form a healthy relationship.

Option 2: Network Nucleation

Green and Red Countries

Countries_Recognizing(Green)_Not_recognizing(Red)_Kosovo

Imagine green are virus free, red are virus uncontrolled, and grey have unreliable statistics. (This actual map is about something unrelated I’m not going to name; it is just used as an example of what the world might look like).

Joscha Bach predicts that in a couple months there will be “green and red” countries, meaning that the outbreak will be completely under control in some countries, and completely out of control in others. I’d also add “grey” to refer to “unreliable statistics”, as many countries might just choose to not monitor the situation. You can imagine what the travel restrictions may be between green, red, and grey countries, as green countries would not find it worthwhile (or at least not politically viable) to accept the risk of reigniting the spread. Grey countries may end up also avoiding red countries while not being allowed to enter green countries.

Speculatively, this would perhaps lead to a worldwide Sakoku phenomenon, but where rather than just Japan, we would have all of the countries of each color becoming economic and cultural blocks.

What I’ll describe below is a kind of generalization of this possibility. Namely, that the blocks don’t need to be country-based.

A very interesting question to ask is “what possible partitions of humanity could create sets of people for whom a green/red/grey dynamic would successfully create clusters of wholly virus-free people?” The existence of at least some greens opens up the possibility of:

Reversing The Pandemic

I address you tonight, not as the president of the United States, not as the leader of a country, but as a citizen of humanity. We are faced with the very gravest of challenges. The Bible calls this day Armageddon. The end of all things. And yes, for the first time in the history of the planet, a species has the technology to prevent its own extinction. All of you praying with us need to know that everything that can be done to prevent this disaster is being called into service. The human thirst for excellence, knowledge, every step up the ladder of science, every adventurous reach into space, all of our combined technologies and imaginations, even the wars that we’ve fought, have provided us the tools to wage this terrible battle. Through all the chaos that is our history, through all of the wrongs and the discord, through all of the pain and suffering, through all of our times, there is one thing that has nourished our souls and elevated our species above its origins, and that is our courage. Dreams of an entire planet are focused tonight on those 14 brave souls traveling into the heavens. May we all citizens of the world over see these events through, Godspeed, and good luck to you.

– Armageddon (1998 film, when the president of the US announces the plans to avert an asteroid that would destroy the earth) [See also: what if they don’t come back?]

Nucleating Whole Virus-Free Communities

The simplest way to create a virus-free community would be to think of verifiable self-quarantining as an investment. If you can prove you’ve been physically disconnected from everyone for 30 days, you would be let into a club of people near you who have done the same already. This could become a large set of people, especially if it turns out that cash handouts are insufficient for millions of people who might end up needing to work in a month or two and defy any kind of large-scale quarantine. Those who can afford (and prove!) that they’ve been diligently quarantining would be allowed in. For a stricter “inner set” there might be stricter criteria, where you would need to submit an unfakeable biosample to prove you are not infected (which would be tricky but not impossible given pre-existing DNA databases like 23andMe). Then the algorithm would group you with a subset of members that you can realistically physically meet, and then allow you to make friends with them. Finally, as you submit a list of people you do want to hang out with long-term, the algorithm would run an optimization process to make as many people as possible happy, and return the curated list of people you could hang out with so that the network as a whole has convenient scale-dependent sub-exponential chokepoints. I know this sounds like a lot. And it is. But again, pandemics can be really bad. And we have the technology, so why not try?

In a way this idea is the complementary problem to “keeping the virus out of the general population”. In the latter you start out in a fully virus-free situation and try to keep it that way, while in the former you start out with a highly contaminated population and try to “spread health” from the standpoint of a verifiably healthy core. That is: how do you create pockets of health in a virus-saturated general population and grow them as much as possible?

Another approach in this vein I can think of is to seed a location with an excess of people who already have immunity and cannot transmit. The people there who haven’t gotten the disease would in a sense be lucky to find themselves around people who won’t transmit it, and thus be blessed with spontaneous herd immunity. That said, the key sacrifice here would be the potential damage elsewhere, where herd immunity would be reached later due to the removed group of immune people. This and the previous approach incur the cost of having to associate with new people, and the relocation challenges would be a logistical nightmare. But perhaps worth doing.

Finally, another approach to this problem would be to use an app with a personality test that is hard to fake, so that only healthy people who score in the top 2% of both introversion and conscientiousness could join the club. It would tell you where to go live with other people who meet the same criteria, and to get a comprehensive test of all major transmissible diseases and treat those you have before relocation. Given the temperament selected for, everyone who becomes part of the community would be extremely diligent about not physically meeting people outside the group and follow the contact network prescriptions dictated by the algorithm. If this sounds like hell to you, well, perhaps it is not for you. But at least this way there would be some pockets of fully healthy people, and that would have a lot of value. (Cf. Rat-free Alberta).

To Summarize:

What are your options for modifying a network in order to remove (or at least tame) exponential growth? The ones I’ve considered are:

  1. Remove nodes with a high “Pandemic Klout Score”
  2. Create sub-exponential chokepoints:
    1. Option 1: Gradient descent methods:
      1. You make piecemeal modifications to the contact network, one connection at a time, in order to improve the prospects of the entire network.
      2. Each person would receive a set of options for mild modifications to their contacts so that whichever they chose would lead to an improvement of the network geometry.
    2. Option 2: Network nucleation:
      1. You create criteria for what constitutes “infection-free”, such as:
        1. Self-enforced quarantine on one extreme, and
        2. Provable DNA-matched tests on the other extreme.
      2. Allow people who qualify to meet each other.
      3. Everyone submits a list of people they’d like to hang out with.
      4. The algorithm would optimize the connections to make everyone happy and at the same time maximize the sub-exponential chokepoints of the network (such as by making it a planar graph with a high clustering coefficient, etc.).
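As a toy illustration of the first strategy, here is a self-contained sketch (all parameters hypothetical, and the random graph is only a stand-in for a real contact network): it builds a random contact network, removes the top-1% highest-degree nodes as a crude proxy for a high “Pandemic Klout Score”, and measures the largest connected component as a rough proxy for how far an infection could percolate.

```python
import random
from collections import defaultdict

random.seed(42)

# Toy contact network: an Erdős–Rényi-style random graph
# (hypothetical size and density).
N, p = 500, 0.02
adj = defaultdict(set)
for i in range(N):
    for j in range(i + 1, N):
        if random.random() < p:
            adj[i].add(j)
            adj[j].add(i)

def largest_component(adj, nodes):
    """Size of the largest connected component restricted to `nodes`
    (a crude proxy for how many people one infection could reach)."""
    seen, best = set(), 0
    for s in nodes:
        if s in seen:
            continue
        seen.add(s)
        stack, comp = [s], 0
        while stack:
            u = stack.pop()
            comp += 1
            for v in adj[u]:
                if v in nodes and v not in seen:
                    seen.add(v)
                    stack.append(v)
        best = max(best, comp)
    return best

nodes = set(range(N))
before = largest_component(adj, nodes)

# Remove the 1% highest-degree nodes (the "Pandemic Klout" hubs).
hubs = sorted(nodes, key=lambda u: len(adj[u]), reverse=True)[:5]
after = largest_component(adj, nodes - set(hubs))
```

On a real contact network one would of course use measured contacts rather than a random graph, and a richer epidemic model than component size; the point here is only the shape of the procedure.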

Now, perhaps if all of this sounds insane and like too much trouble, there is always the option of, er, becoming comfortable with no human touch…

Future Cultures

A Religion of Abstinence of Human Touch

I know how hard it is, what is being demanded of us.

Especially in times of need such as these, we like to be close to one another.

We understand care and affection in terms of human closeness and human touch. 

But at the moment the exact opposite is the case, and everybody really must understand that.

At the moment, the only real way of showing you care is keeping your distance.

– German Chancellor Angela Merkel, at a Nationwide TV Address (March 18 2020)

Have you ever noticed that it is possible to reproduce without any human touch? Artificial insemination conducted with robotic arms is not a far-fetched prospect. A further question is: can we do away with human touch entirely for all functions of life?

You don’t need to be anywhere to be everywhere.

– John C. Lilly

You may say: wouldn’t a community of touch-free individuals somehow lack the most basic of human qualities, i.e. interpersonal intimacy? I reckon you would be wrong on more than one count. First of all, insofar as touch-based intimacy runs on endorphin and oxytocin release in conjunction with nervous system entrainment under the hood, there is no reason why one couldn’t engineer a brain-stimulation technology ecosystem in which people receive the same kind of physically, psychologically, and spiritually rewarding feelings of connection by merely acknowledging each other’s presence or synchronizing with each other’s brainwaves. Perhaps you could even achieve this while doing away with technology, as the power of deep metta meditation would suggest. Perhaps we could all cultivate a loving temperament that embraces the entire universe of sentient beings. Here, the commitment to each other’s physical wellbeing would not require sacrificing the emotional richness of communion; in principle both could be simultaneously satisfied. Alas, the evolutionary roots of human touch are deep, and trying to mess with them with humans as they currently are is far-fetched. But just wait until a virus with a 0.98 fatality rate and an R0 of 6 shows up and see what people are willing to do to survive.

This concludes my presentation of the cocktail napkin ideas I’ve considered so far to deal with pandemics. But I still have a couple more things to say about this topic, so I’ll take advantage of the soap box I’m standing on and add:


Now That The World Is Paying Attention


From the 1998 film “Armageddon”

I’d like to draw your attention to the following highly relevant goals that the current crisis highlights:

1) Recognize the existence of extreme suffering so that we focus our efforts on its prevention (asphyxiation, which is how people die of COVID-19, is an example of extreme suffering).

2) Investigate what makes MDMA and 5-MeO-DMT so special and useful for treating PTSD (as people recover from the disease it will become apparent that many experience PTSD associated with the episode – this will need to be addressed on a massive scale).

3) Get factory farms banned (for real, they are the breeding grounds of future pandemics – and of course they also cause the bulk of easily preventable suffering, so there is that too. Every animal product you put on your plate is a probabilistic pandemic on its way. Sorry!).


Let’s make the best of this situation (More Dakka!)


A Few Final Thoughts

The Framing Effect

Recall the “Framing Effect” – the cognitive bias where we prefer an option when the problem is framed in a certain way, and a different option when it’s framed differently even though the corresponding options in each framing are of equal expected value.

I worry that a lot of the people in my friend network, and in fact worldwide, might be falling prey to the framing effect in the coronavirus situation:

Here is how the “containment vs. mitigation” problem is being “framed” right now (assume 5 million people will die worldwide if nothing is done, but you can choose to invest your resources on ‘containment’ or ‘mitigation’):

Option A: 10% chance 0 people die (i.e. successful containment), and 90% chance 5 million people die.
Option B: 100% chance 4 million people die.

Clearly option A is more ‘heroic’. Alas, it is the one that leads to more expected deaths.

Now consider the alternate framing that might make you feel differently about the options:

Option A: 10% chance of saving 5 million people (i.e. successful containment) and 90% chance of saving nobody.
Option B: 100% chance of saving 1 million people (i.e. mitigation prevents many deaths).

In both framings option B is better by a huge margin – by an expected 500,000 lives saved. Yet when framed the first way, option A seems a lot more attractive. Why? And should we try to get rid of this bias?

Of course in the real world you don’t have to choose between A and B entirely. You can try to do both containment and mitigation. But you *do* need to choose how to allocate resources, and I believe this framing issue does actually come up in our current situation.
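The arithmetic behind the two framings can be made explicit with a quick sketch (the 5-million figure is the stipulated assumption from above):

```python
# Both framings of the containment-vs-mitigation choice, in expected lives.
total = 5_000_000

# "Deaths" framing:
ev_deaths_A = 0.10 * 0 + 0.90 * total   # containment gamble
ev_deaths_B = 1.00 * 4_000_000          # mitigation

# "Lives saved" framing (arithmetically identical options):
ev_saved_A = 0.10 * total + 0.90 * 0
ev_saved_B = 1.00 * 1_000_000

# Option B wins in both framings, by the same 500,000 expected lives.
print(ev_deaths_A - ev_deaths_B)
print(ev_saved_B - ev_saved_A)
```

The two framings are mathematically the same decision problem; only the psychology differs.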

I do want to say that, as Robin Hanson suggests, if we are doing the containment strategy we need buy-in from the population. Some personally costly and dramatic public display of commitment from many people would be useful. I am personally very happy to commit in public to hard-core quarantine if it’s ethically necessary.


Social Withdrawal and Behavioral Enrichment

Social distancing is painful because we are all opioid addicts, namely, addicts to the endogenous opioids released when socializing. With a quarantine in place, we can anticipate that people who are on the threshold of being depressed might cross that threshold as an effect of reduced in-person socializing. Likewise, we can anticipate collective health decline at a statistical level due to reduced exercise, sunlight exposure, and sensory diversity (cf. white torture).*****

Possible solutions? Besides being very bullish on at-home exercise routines and HEPA filters, I would also point out the following. I think that we should not be afraid of comparing ourselves with other animals. Bear with me. Humans, not unlike domestic dogs and cats, benefit from being exposed to a wide variety of novel sensory inputs. If you enjoy scents, for example, it would be advisable to order a set of essential oils or perfume samples in order to trick your brain into thinking you are exploring a larger area than you are. Apparently, for example, big cats in captivity are more engaged and less depressed when you spray Calvin Klein perfumes on their territory. Alternatively, if scent is not something you care about, think of perhaps increasing the repertoire of visual art, dance, food, touch, and music you are exposed to on a daily basis. This, I suggest, will help you keep depression away (for a while longer).

Caption: Just a little bit of behavioral enrichment for you! 🙂

Finally (self-promotion ahead), if you have time on your hands, and you’ve been meaning to dive deeper into Qualia Computing, this might be your chance. I’d suggest you start out with the following three resources:

  1. Top 10 Qualia Computing Articles
  2. Glossary of Qualia Research Institute Terms
  3. Every Qualia Computing Article Ever

And if you are really hard core, feel free to reach out to the Qualia Research Institute to help with volunteer work. Also we are going to be doing virtual internship cycles in April, May, and June, so you can stay home and safe and still collaborate with us. But shh! It’s a secret! (Wait, how come it’s a secret but you now know about it? Well, because you’ve scrolled all the way here, that’s some commitment!).

The End


* A more accurate representation might require the use of directed edges to encode asymmetrical contact relationships. For example: the cleaning crew of a hotel might be more exposed to the guests than the guests are exposed to the crew. Also, when two people who have very different habits of hygiene meet, the cleaner person is more likely to get the short end of the stick transmission-wise.

** It is worth pointing out that for information networks the “degree of interaction” between nodes is extremely skewed. You may have a thousand friends on Facebook, but the number of people you interact with on a daily basis will be a tiny subset of them, perhaps on the order of 0 to 20. And among the people you do interact with, you are likely exchanging far more words with some than with others. Indeed, if you plot the number of words exchanged in private messages between people in an information network, the distribution is long-tailed.

*** In the long run, this may also have to apply to information networks. Whether information networks will also need some level of top-down control is a difficult question that requires a complex cost-benefit analysis beyond the scope of this article. The most important variables are (a) the benefits of fully-free communication, and (b) the density and severity of memetic hazards in idea-space, in conjunction with the nature of intellectual selection pressures in future societies. If it turns out that people above a certain level of education and intelligence in a future with far more advanced science and engineering are extremely likely to encounter what Nick Bostrom calls “black balls”, there might be no way around developing tight controls on information networks for the safety of everyone. If this happens, we could also use many of the strategies outlined in this article for contact networks. After all, viruses are to contact networks what memetic hazards are to information networks.

**** Of course, in some ways this is more about collective emotional processing than about object-level problem solving.

***** It is worth noting that the better air quality might buffer a bit against these negatives.

A Big State-Space of Consciousness

Kenneth Shinozuka of Blank Horizons asks: Andrés, how long do you think it’ll take to fully map out the state space of consciousness? A thousand or a million years?

The state-space of consciousness is unimaginably large (and yet finite)

I think we will discover the core principles of a foundational theory of consciousness within a century or so. That is, we might find plausible solutions to Mike Johnson’s 8 subproblems of consciousness and experimentally verify a specific formal theory of consciousness before 2100. That said, there is a very large distance between proving a certain formal theory of consciousness and having a good grasp of the state-space of consciousness.

Knowing Maxwell’s equations gives you a formal theory of electromagnetism. But even then, photons are hidden as an implication of the formalism; you need to do some work to find them in it. And that’s just the tip of the iceberg; you would also find hidden in the formalism an array of exotic electromagnetic behaviors that arise in unusual physical conditions, such as those produced by metamaterials. The formalism is a first step that establishes the fundamental constraints on what’s possible. What follows is filling in the gaps between the limits of physical possibility, which is a truly fantastical enterprise considering the range of possible permutations.

A useful analogy here might be: even though we know all of the basic stable elements and many of their properties, we have only started mapping out the space of possible small molecules (e.g. there are an estimated ~10^60 drug-like molecules that have never been synthesized or tested), and have yet to begin in earnest the project of understanding what proteins can do. Or consider the number of options there are for making high-entropy alloys (alloys made with five or more metals). Or all the ways in which snowflakes of various materials can form – meaning that even when you are studying a single material, it can form crystal structures of an incredibly varied nature. And then take into account the emergence of additional collective properties: physical systems can display a dazzling array of emergent exotic effects, from superconductivity and superradiance to Bose-Einstein condensates and fusion chain reactions. Exploring the state-space of material configurations and their emergent properties entails facing a combinatorial explosion of unexpected phenomena.

And this is the case in physics even though we know for a fact that there are only a bit over a hundred possible building blocks (i.e. the elements).

In the province of the mind, we do not yet have even that level of understanding. When it comes to the state-space of consciousness we do not have a corresponding credible “periodic table of qualia”. The range of possible experiences in normal everyday life is astronomical. Even so, the set of possible human sober experiences is a vanishing fraction of the set of possible DMT trips, which is itself a vanishing fraction of the set of possible DMT + LSD + ketamine + TMS + optogenetics + Generalized Wada Test + brain surgery experiences. Brace yourself for a state-space that grows supergeometrically with each variable you introduce.

If we are to truly grasp the state-space of consciousness, we should also take into account non-human animal qualia. And then further still, due to dual-aspect monism, we will need to go into things like understanding that high-entropy alloys themselves have qualia, and then Jupiter Brains, and Mike’s Fraggers, and Black Holes, and quantum fields in the inflation period, and so on. This entails a combinatorial explosion the likes of which I don’t believe anyone is really grasping at the moment. We are talking about a monumental “monster” state-space far beyond even the wildest dreams of full-time dreamers. So, honestly, I’d say that mapping out the state-space of consciousness is going to take millions of years.

But isn’t the state-space of consciousness infinite, you ask?

Alas, no. There are two core limiting factors here – one is the speed of light (which entails the existence of gravitational collapse and hence limits to how much matter you can arrange in complex ways before a black hole arises) and the second one is quantum (de)coherence. If phenomenal binding requires fundamental physical properties such as quantum coherence, there will be a maximum limit to how much matter you can bind into a unitary “moment of experience“. Who knows what the limit is! But I doubt it’s the size of a galaxy – perhaps it is more like a Jupiter Brain, or maybe just the size of a large building. This greatly reduces the state-space of consciousness; after all, something finite, no matter how large, is infinitely smaller than something infinite!

But what if reality is continuous? Doesn’t that entail an infinite state-space?

I do not think that the discrete/continuous distinction meaningfully impacts the size of the state-space of consciousness. The reason is that at some degree of similarity between experiences you reach “just noticeable differences” (JNDs). Strictly speaking, even the tiniest hint of true continuity in consciousness would make the state-space infinite. But the vast majority of those differences won’t matter: they can be swept under the rug because they can’t actually be “distinguished from the inside”. To make a good discrete approximation of the state-space, we would just need to divide it into regions of equal size such that their diameter is one JND.
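A toy back-of-the-envelope version of this discretization (the dimensionality and JND size below are purely illustrative, not empirical claims): treat the state-space as a unit cube and count the cells whose diameter is one JND.

```python
import math

def distinguishable_states(dimensions, jnd):
    """Number of JND-sized cells needed to tile a unit hypercube.

    A sub-cube of side s has diameter s * sqrt(D), so we solve for the
    side length whose diameter equals one JND and count the cells.
    """
    side = jnd / math.sqrt(dimensions)
    return math.ceil(1.0 / side) ** dimensions

# A continuum collapses to a finite (if astronomical) number of
# from-the-inside distinguishable states...
print(distinguishable_states(dimensions=3, jnd=0.01))
# ...and the count grows supergeometrically with each added dimension:
print(distinguishable_states(dimensions=10, jnd=0.01))
```

The numbers themselves are meaningless; the point is that any finite JND turns a continuous space into a finite catalogue, and that the catalogue explodes with dimensionality.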

Conclusion

In summary, the state-space of consciousness is insanely large but not infinite. While I do think it is possible that the core underlying principles of consciousness (i.e. an empirically-adequate formalism) will be discovered this century or the next, I do not anticipate a substantive map of the state-space of consciousness to be available anytime soon. A truly comprehensive map would, I suspect, only be possible after millions of years of civilizational investment in the task.

Binding Quiddities

Excerpt from The Combination Problem for Panpsychism (2013) by David Chalmers


[Some] versions of identity panpsychism are holistic in that they invoke fundamental physical entities that are not atomic or localized. One such view combines identity panpsychism with the monistic view that the universe itself is the most fundamental physical entity. The result is identity cosmopsychism, on which the whole universe is conscious and on which we are identical to it. (Some idealist views in both Eastern and Western traditions appear to say something like this.) Obvious worries for this view are that it seems to entail that there is only one conscious subject, and that each of us is identical to each other and has the same experiences. There is also a structural mismatch worry: it is hard to see how the universe’s experiences (especially given a Russellian view on which these correspond to the universe’s physical properties) should have anything like the localized idiosyncratic structure of my experiences. Perhaps there are sophisticated versions of this view on which a single universal consciousness is differentiated into multiple strands of midlevel macroconsciousness, where much of the universal consciousness is somehow hidden from each of us. Still, this seems to move us away from identity cosmopsychism toward an autonomous cosmopsychist view in which each of us is a distinct constituent of a universal consciousness. As before, the resulting decomposition problem seems just as hard as the combination problem.

Perhaps the most important version of identity panpsychism is quantum holism. This view starts from the insight that on the most common understandings of quantum mechanics, the fundamental entities need not be localized entities such as particles. Multiple particles can get entangled with each other, and when this happens it is the whole entangled system that is treated as fundamental and that has fundamental quantum-mechanical properties (such as wave functions) ascribed to it. A panpsychist might speculate that such an entangled system, perhaps at the level of the brain or one of its subsystems, has microphenomenal properties. On the quantum holism version of identity panpsychism, macrosubjects such as ourselves are identical to these fundamental holistic entities, and our macrophenomenal properties are identical to its microphenomenal properties.

This view has more attractions than the earlier views, but there are also worries. Some worries are empirical: it does not seem that there is the sort of stable brain-level entanglement that would be needed for this view to work. Some related worries are theoretical: on some interpretations of quantum mechanics the locus of entanglement is the whole universe (leading us back to cosmopsychism), on others there is no entanglement at all, and on still others there are regular collapses that tend to destroy this sort of entanglement. But perhaps the biggest worry is once again a structural mismatch worry. The structure of the quantum state of brain-level systems is quite different from the structure of our experience. Given a Russellian view on which microphenomenal properties correspond directly to the fundamental microphysical properties of these entangled systems, it is hard to see how they could have the familiar structure of our macroexperience.

The identity panpsychist (of all three sorts) might try to remove some of these worries by rejecting Russellian panpsychism, so that microphenomenal properties are less closely tied to microphysical structure. The cost of this move is that it becomes much less clear how these phenomenal properties can play a causal role. On the face of it they will be either epiphenomenal, or they will make a difference to physics. The latter view will in effect require a radically revised physics with something akin to our macrophenomenal structure present at the basic level. Then phenomenal properties will in effect be playing the role of quiddities within this revised physics, and the resulting view will be a sort of revisionary Russellian identity panpsychism.

Qualia Productions Presents: When AI Equals Advanced Incompetence

By Maggie and Anders Amelin

Letter I: Introduction

We are Maggie & Anders. A mostly harmless Swedish old-timer couple only now beginning to discover the advanced incompetence that is the proto-science — or “alchemy” — of consciousness research. A few centuries ago a philosopher of chemistry could have claimed with a straight face to be quite certain that a substance with negative mass had to be invoked to explain the phenomenon of combustion. Another could have been equally convinced that the chemistry of life involves a special force of nature absent from all non-living matter. A physicist of today may recognize that the study of consciousness has even less experimental foundation than alchemy did, yet be confident that at least it cannot feel like something to be a black hole. Since, obviously, black holes are simple objects and consciousness is a phenomenon which only emerges from “complexity” as high as that of a human brain.

Is there some ultimate substrate, basic to reality and which has properties intrinsic to itself? If so, is elementary sentience one of those properties? Or is it “turtles all the way down” in a long regress where all of reality can be modeled as patterns within patterns within patterns ending in Turing-style “bits”? Or parsimoniously never ending?

Will it turn out to be patterns all the way down, or sentience all the way up? Should people who believe themselves to perhaps be in an ancestor simulation take for granted that consciousness exists for biologically-based people in base-level reality? David Chalmers does. So at least that must be one assumption it is safe to make, isn’t it? And the one about no sentience existing in a black hole. And the one about phlogiston. And the four chemical elements.

This really is good material for silly comedy or artistic satire. To view a modest attempt by us in that direction, please feel encouraged to enjoy this youtube video we made with QRI in mind:

When ignorance is near complete, it is vital to think outside the proverbial box if progress is to be made. However, spontaneous creative speculation is more context-constrained than it feels like, and it rarely correlates all that beautifully with anything useful. Any science has to work via the baby steps of testable predictions. The integrated information theory (IIT) does just that, and has produced encouraging early results. IIT could turn out to be a good starting point for eventually mapping and modeling all of experiential phenomenology. For a perspective, IIT 3.0 may be comparable to how Einstein’s modeling of the photoelectric effect stands in relation to a full-blown theory of quantum gravity. There is a fair bit of ground to cover. We have not been able to find any group more likely than the QRI to speed up the process whereby humanity eventually manages to cover that ground. That is, if they get a whole lot of help in the form of outreach, fundraising and technological development. Early pioneers have big hurdles to overcome, but the difference they can make for the future is enormous.

For those who feel inspired, a nice start is to go through all that is on or linked via the QRI website. Indulge in Principia Qualia. If that leaves you confused on a higher level, you are in good company. With us. We are halfway senile and are not information theorists, neuroscientists or physicists. All we have is a nerdy sense of humor and work experience in areas like marketing and planetary geochemistry. One thing we think we can do is help bridge the gap between “experts” and “lay people”. Instead of “explain it like I am five”, we offer the even greater challenge of explaining it like we are Maggie & Anders. Manage that, and you will definitely be wiser afterwards!

– Maggie & Anders


Letter II: State-Space of Matter and State-Space of Consciousness

A core aspect of science is the mapping out of distributions, spectra, and state-spaces of the building blocks of reality. Naturally occurring states of things can be spontaneously discovered. To gain more information about them, one can experimentally alter such states to produce novel ones, and then analyze them in a systematic way.

The full state-space of matter is multidimensional and vast. Zoom in anywhere in it and there will be a number of characteristic physics phenomena appearing there. Within a model of the state-space you can follow independent directions as you move towards regions and points. As an example, you can hold steady at one particular simple chemical configuration. Diamond, say. The stable region of diamond and its emergent properties, like high hardness, extends certain distances in other parameter directions such as temperature and pressure. The diamond region has neighboring regions with differently structured carbon, such as graphite. Diamond and graphite make for an interesting case since the property of hardness emerges very differently in the two regions. (In the pure carbon state-space, the dimensions denoting amounts of all other elements can be said to be there but set to zero.) Material properties like hardness can be modeled as static phenomena. According to IIT, however, consciousness cannot. It’s still an emergent property of matter, though, so just stay in the matter state-space and add a time dimension to it. Then open chains and closed loops of causation emerge as a sort of fundamental level of what matter “does”. Each elementary step of causation may be regarded as producing, or intrinsically being, some iota of proto-experience. In feedback loops this self-amplifies into states of feeling like something. Many or perhaps most forms of matter can “do” these basic things at various regions of various combinations of parameter settings. Closed causal loops require more delicate fine-tuning in parameter space, so the state-space of nonconscious causal structure is larger than that of conscious structure. The famous “hard problem” has to do with the fact that both an experientially very weak and a very strong state can emerge from the same matter (shown so far to be the case only within brains) – a bit like the huge difference in mechanical hardness of diamond and graphite, both emerging from the same pure carbon substrate (a word play on “hard” to make it sticky).

By the logic of IIT it should be possible to model (in arbitrarily coarse or fine detail) the state-space of all conscious experience whose substrate is all possible physical states of pure carbon. Or at room temperature in any material. And so on. If future advanced versions of IIT turn out to be a success, then we may guess there will be a significant overlap to allow for a certain “substrate invariance” for hardware that can support intelligence with human-recognizable consciousness. Outside of that there will be a gargantuan additional novel space to explore. It ought to contain maxima of (intrinsic) attractiveness, none of which need reside within what a biological nervous system can host. Biological evolution has only been able to search through certain parts of the state-space of matter. One thing it has not worked with on Earth is pure carbon. Diamond tooth enamel or carbon nanotube tendons would be useful, but no animal has them. What about conscious states? Has biology come close to hitting upon any of the optima in those? If all of human sentience is like planet Earth, and all of Terrestrial biologically-based sentience is like the whole Solar System, that leaves an entire extrasolar galaxy out there to explore. (Boarding call: Space X Flight 42 bound for Nanedi Settlement, Mars. Sentinauts please go to the Neuralink check-in terminal).

Of course we don’t currently know how IIT is going to stand up, but thankfully it does make testable predictions. There is, therefore, a beginning of something to be hoped for with it. In a hopeful scenario IIT turns out to be like special relativity, and what QRI is reaching for is like quantum gravity. It will be a process of taking baby steps, for sure. But each step is likely to bring benefits in many ways.

Is any of this making you curious? Then you may enjoy reading “Principia Qualia” and other QRI articles.

– Maggie & Anders

Materializing Hyperbolic Spaces with Gradient-Index Optics and One-Way Mirrors

Burning Man is one week away, so I figured I would share a neat idea I’ve been hoarding that could lead to a kick-ass Burning Man-style psychedelic art installation. If I have the time and resources to do so, I may even try to manifest this idea in real life at some point.

Around the time I was writing The Hyperbolic Geometry of DMT Experiences (cf. Eli5) I began asking myself how to help people develop a feel for what it is like to inhabit non-Euclidean phenomenal spaces. I later found out that Henry Segerman developed an immersive VR experience in which you can explore 3D hyperbolic spaces. That is fantastic, and a great step in the right direction. But I wanted to see if there was any way for us to experience 3D hyperbolic geometry in a material way without the aid of computers. Something that you could hold in your hand, like a sort of mystical amulet that works as a reminder of the vastness of the state-space of consciousness.

What I had in mind was along the lines of how we can, in a sense, visualize infinite (Euclidean) space using two parallel mirrors. I thought that maybe there could be a way to do the same but in a way that visualizes a hyperbolic space.

One-Way Mirrors and 3D Space-Filling Shapes

Right now you can use one-way mirrors on the sides of a polyhedron whose edges are embedded with LEDs to create a fascinating “infinite space effect”:

This works perfectly for cubes in particular, given that the cube is a symmetrical space-filling polyhedron. But as you can see in the video above, the effect is not quite perfect when we use dodecahedra (or any other Platonic solid): the corners simply do not align properly. This is because the solid angles of non-cube Platonic solids cannot perfectly cover the 4π steradians around a point (which is what 8 cubes do when meeting at a corner):


This is not the case in hyperbolic space, though: arbitrary regular polyhedra can tessellate 3D hyperbolic space. For instance, one can use dodecahedra by choosing their size appropriately, in such a way that they all have 90-degree corners (cf. Not Knot):
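The Euclidean obstruction above can be checked numerically. For a vertex where three faces meet, the solid angle equals the spherical excess of the vertex figure: the sum of the three dihedral angles minus π. A short sketch:

```python
import math

def corner_solid_angle(dihedral_angles):
    """Solid angle (steradians) at a polyhedron vertex, computed as the
    spherical excess of the vertex figure: sum of the dihedral angles
    along the meeting edges, minus (k - 2) * pi for k faces."""
    k = len(dihedral_angles)
    return sum(dihedral_angles) - (k - 2) * math.pi

# Cube: three faces meet at each vertex, dihedral angle 90 degrees.
cube = corner_solid_angle([math.pi / 2] * 3)   # = pi/2

# Regular dodecahedron: three faces meet, dihedral angle acos(-1/sqrt(5)).
dodeca = corner_solid_angle([math.acos(-1 / math.sqrt(5))] * 3)

# Eight cube corners tile the full 4*pi steradians around a point...
print(4 * math.pi / cube)     # ≈ 8
# ...but dodecahedron corners do not divide 4*pi evenly:
print(4 * math.pi / dodeca)   # ≈ 4.24, not an integer
```

The non-integer ratio is exactly why the reflections misalign at the corners in Euclidean space; in hyperbolic space the dodecahedron’s corner angle can be tuned down to 90 degrees so that eight of them meet cleanly.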

Gradient-Index Optics

Perhaps, I thought to myself, there is a way to physically realize hyperbolic curvature and enable us to see what it is like to live in a place in which dodecahedra tesselate space. I kept thinking about this problem, and one day while riding the BART and introspecting on the geometry of sound, I realized that one could use gradient-index optics to create a solid in which light-paths behave as if the space was hyperbolic.

Gradient-index optics is the subfield of optics that specializes in the use of materials that have a smooth non-constant refractive index. One way to achieve this is to blend two transparent materials (e.g. two kinds of plastic) in such a way that the concentration of each type varies smoothly from one region to the next. As a consequence, light travels in unusual and bendy ways, like this:
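To get a feel for how this works, here is a minimal ray-tracing sketch. It Euler-integrates the geometric-optics ray equation d/ds(n·t) = ∇n for an assumed index profile that increases linearly with height; the profile and step sizes are illustrative, not a real material:

```python
import numpy as np

# Assumed (illustrative) index profile: n grows linearly with height y,
# so a horizontal ray gradually bends upward, toward the denser region.
def n(p):
    return 1.5 + 0.1 * p[1]

GRAD_N = np.array([0.0, 0.1])  # analytic gradient of the profile above

def trace_ray(pos, direction, steps=1000, ds=0.01):
    """Euler-integrate the ray equation d/ds (n * t) = grad(n),
    rewritten for the unit tangent t as
    dt/ds = (grad(n) - (grad(n) . t) t) / n."""
    t = direction / np.linalg.norm(direction)
    path = [pos.copy()]
    for _ in range(steps):
        t = t + ds * (GRAD_N - np.dot(GRAD_N, t) * t) / n(pos)
        t /= np.linalg.norm(t)
        pos = pos + ds * t
        path.append(pos.copy())
    return np.array(path)

path = trace_ray(np.array([0.0, 0.0]), np.array([1.0, 0.0]))
print(path[-1][1] > 0)  # True: the ray curved toward the higher index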

Materializing Hyperbolic Spaces

By carefully selecting various transparent plastics with different indices of refraction and blending them in a 3D printer in precisely the right proportions, one can in principle build solids in which the gradient-index properties of the end product instantiate a hyperbolic metric. If one were to place the material with the lowest refractive index at the very center of a dodecahedron and add materials of increasingly large refractive indices all the way up to the corners, then the final effect could be one in which the dodecahedron has an interior in which light moves as if it were in a hyperbolic space. One can then place LED strips along the edges and seal the sides with one-way window film. Lo and behold, one would then quite literally be able to “hold infinity in the palm of your hand”:
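More concretely: by Fermat’s principle, light paths in a medium with index n(x) are geodesics of the “optical metric” n(x)² times the flat metric, and the Poincaré ball model of hyperbolic 3-space is conformally flat with a factor proportional to 1/(1 − r²). So an index profile of that shape would, in principle, make light trace hyperbolic geodesics. A sketch (the base index of 1.5 is an assumed plastic, and the function name is mine):

```python
def hyperbolic_index(r, n_center=1.5):
    """Refractive index needed at radius r (unit ball, 0 <= r < 1) so
    that light paths follow Poincare-ball geodesics; normalized so that
    n(0) equals the assumed base material index."""
    assert 0.0 <= r < 1.0
    return n_center / (1.0 - r**2)

for r in (0.0, 0.5, 0.9, 0.99):
    print(r, round(hyperbolic_index(r), 2))
# n(0) = 1.5, n(0.5) = 2.0, n(0.9) ~ 7.9, n(0.99) ~ 75.4:
# the index diverges toward the boundary, so a real build could only
# realize a truncated core of the ball with available materials.
```

The divergence near the boundary is worth noting: it is one quantitative reason why the corners, which sit closest to the ideal boundary, are the hardest part to get right.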


I think that this sort of gadget would allow us to develop better intuitions for what the far-out (experiential) spaces people “visit” on psychedelics look like. One could then, in addition, generalize this to make space behave as if its 3D curvature were non-constant. One might even, perhaps, be able to visualize a black hole by emulating its event horizon with a region of extremely high refractive index.


Challenges

I would like to conclude by considering some of the challenges we would face in trying to construct this. For instance, finding the right materials may be difficult: they would need to span a wide range of refractive indices, all be similarly transparent, blend smoothly with each other, and have low melting points. I am not a materials scientist, but my gut feeling is that this is not impossible with current technology. Modern gradient-index optics already has a rather impressive level of precision.

Another challenge comes from the resolution of the 3D printer. Modern 3D printers lay down layers with a thickness between 0.025 and 0.2 mm. It’s possible that this is simply not fine enough to avoid visible discontinuities in the light-paths. At least in principle, this could be surmounted by melting the previously deposited layer so that the new layer smoothly diffuses and partially blends with it, in accordance with the desired hyperbolic metric.
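A back-of-the-envelope check (all numbers assumed purely for illustration) suggests the per-layer index jump could be quite small at the finest resolutions:

```python
# Hypothetical numbers: how large is the per-layer refractive-index
# jump if the gradient is spread evenly across the whole print?
object_size_mm = 100.0   # assumed 10 cm tall print
layer_mm = 0.025         # finest layer thickness quoted above
n_min, n_max = 1.5, 1.9  # assumed usable index range of plastics

layers = object_size_mm / layer_mm
step = (n_max - n_min) / layers
print(layers, step)  # 4000 layers, an index jump of ~1e-4 per layer
```

Whether a 10⁻⁴ index step per layer is visually seamless would, of course, depend on the actual gradient near the corners, which is much steeper than this uniform average.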

An important caveat is that the medium in which we live (i.e. air at atmospheric pressure) is not very dense to begin with. In the example of the dodecahedra, this may represent a problem, considering that the corners need to form 90-degree angles from the point of view of an outside observer. This would imply that the surrounding medium needs to have a higher refractive index than the transparent medium at the corners. This could be fixed by immersing the object in water or some other dense medium (and designing it under the assumption of being surrounded by such a medium). Alternatively, one could simply use appropriately curved sides in lieu of flat faces. This may not be as aesthetically appealing, though, so it may pay off to brainstorm other clever approaches that I haven’t thought of.

Above all, perhaps the most difficult challenge would be that of dealing with the inevitable presence of chromatic aberrations:

Since the degree to which a light-path bends in a medium depends on its frequency, the bending produced by gradient-index optics varies across the visible spectrum. If the LEDs placed at the edges of the polyhedron are white, we could expect very visible distortions and crazy rainbow patterns to emerge. This would perhaps be for the better when taken for its aesthetic value. But since the desired effect is to faithfully materialize the behavior of light in hyperbolic space, it is undesirable here. The easiest way to deal with this problem would be to show the gadget in a darkened room and use only monochrome LEDs on the edges, with their frequency tuned to the refractive gradient for which the metric is hyperbolic. More fancifully, it might be possible to overcome chromatic aberrations with the use of metamaterials (cf. “Metasurfaces enable improved optical lens performance“). Alas, my bedtime is approaching, so I shall leave the nuts and bolts of this engineering challenge as an exercise for the reader…
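The size of the effect can be estimated with a simple empirical dispersion model such as Cauchy’s equation, n(λ) = A + B/λ². The coefficients below are illustrative, roughly in the range of common optical glass:

```python
def cauchy_index(wavelength_um, A=1.5046, B=0.00420):
    """Cauchy's empirical dispersion relation n = A + B / lambda^2,
    with the wavelength in micrometers. Coefficients are illustrative,
    roughly borosilicate-glass-like."""
    return A + B / wavelength_um**2

n_blue = cauchy_index(0.45)  # ~1.525 at 450 nm
n_red = cauchy_index(0.65)   # ~1.515 at 650 nm
print(n_blue - n_red)  # ~0.01: enough to visibly split white light
```

An index spread of roughly 0.01 between blue and red is comparable to the gradient steps the design relies on, which is why white LEDs would smear the effect into rainbows.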

The Appearance of Arbitrary Contingency to Our Diverse Qualia

By David Pearce (Mar 21, 2012; Reddit AMA)

 

The appearance of arbitrary contingency to our diverse qualia – and undiscovered state-spaces of posthuman qualia and hypothetical micro-qualia – may be illusory. Perhaps they take the phenomenal values they do as a matter of logico-mathematical necessity. I’d make this conjecture against the backdrop of some kind of zero ontology. Intuitively, there seems no reason for anything at all to exist. The fact that the multiverse exists (apparently) confounds one’s pre-reflective intuitions in the most dramatic possible way. However, this response is too quick. The cosmic cancellation of the conserved constants (mass-energy, charge, angular momentum) to zero, and the formal equivalence of zero information to all possible descriptions [the multiverse?] means we have to take seriously this kind of explanation-space. The most recent contribution to the zero-ontology genre is physicist Lawrence Krauss’s readable but frustrating “A Universe from Nothing: Why There Is Something Rather Than Nothing“. Anyhow, how does a zero ontology tie in with (micro-)qualia? Well, if the solutions to the master equation of physics do encode the field-theoretic values of micro-qualia, then perhaps their numerically encoded textures “cancel out” to zero too. To use a trippy, suspiciously New-Agey-sounding metaphor, imagine the colours of the rainbow displayed as a glorious spectrum – but on recombination cancelling out to no colour at all. Anyhow, I wouldn’t take any of this too seriously: just speculation on idle speculation. It’s tempting simply to declare the issue of our myriad qualia to be an unfathomable mystery. And perhaps it is. But mysterianism is sterile.

Open Individualism and Antinatalism: If God could be killed, it’d be dead already

Abstract

Personal identity views (closed, empty, open) serve in philosophy the role that conservation laws play in physics. They recast difficult problems in solvable terms, and by expanding our horizon of understanding, they likewise allow us to conceive of new classes of problems. In this context, we posit that philosophy of personal identity is relevant in the realm of ethics by helping us address age-old questions like whether being born is good or bad. We further explore the intersection between philosophy of personal identity and philosophy of time, and discuss the ethical implications of antinatalism in a tenseless open individualist “block-time” universe.

Introduction

Learning physics, we often find wide-reaching concepts that simplify many problems by using an underlying principle. A good example of this is the law of conservation of energy. Take for example the following high-school physics problem:

An object that weighs X kilograms falls from a height of Y meters on a planet without an atmosphere and a gravity of Zg. Calculate the velocity with which this object will hit the ground.

One could approach this problem using Newton’s laws of motion: integrate the constant acceleration to obtain the object’s velocity and position as functions of time, solve for the time at which it has fallen Y meters, and evaluate the velocity at that moment.

Alternatively, you could simply note that, since energy is conserved, all of the potential energy of the object at a height of Y meters will have been transformed into kinetic energy by the time it reaches the ground. Setting the two equal and solving for the velocity makes the problem much easier.
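The energy-balance shortcut fits in a couple of lines; the mass cancels, so only the height and the gravity matter (the function name is mine):

```python
import math

def impact_speed(height_m, gravity_factor=1.0, g=9.81):
    """Equate potential and kinetic energy, m*g'*Y = (1/2)*m*v**2,
    and solve for v = sqrt(2*g'*Y): the mass X cancels out entirely."""
    return math.sqrt(2 * gravity_factor * g * height_m)

print(impact_speed(20.0))       # ~19.8 m/s for a 20 m drop at 1 g
print(impact_speed(20.0, 0.5))  # ~14.0 m/s on a half-gravity planet
```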

Once one has learned “the trick”, one starts to see many other problems differently. Grasping these deep invariants opens up new horizons: many problems that seemed impossible become solvable, and one can also ask new questions, which in turn reveal new problems that cannot be solved with those principles alone.

Does this ever happen in philosophy? Perhaps entire classes of difficult problems in philosophy may become trivial (or at least tractable) once one grasps powerful principles. Such is the case, I would claim, of transcending common-sense views of personal identity.

Personal Identity: Closed, Empty, Open

In Ontological Qualia I discussed three core views about personal identity. For those who have not encountered these concepts, I recommend reading that article for an expanded discussion.

In brief:

  1. Closed Individualism: You start existing when you are born, and stop when you die.
  2. Empty Individualism: You exist as a “time-slice” or “moment of experience.”
  3. Open Individualism: There is only one subject of experience, who is everyone.


Most people are Closed Individualists; this is the default common sense view for good evolutionary reasons. But what grounds are there to believe in this view? Intuitively, the fact that you will wake up in “your body” tomorrow is obvious and needs no justification. However, explaining why this is the case in a clear way requires formalizing a wide range of concepts such as causality, continuity, memory, and physical laws. And when one tries to do so one will generally find a number of barriers that will prevent one from making a solid case for Closed Individualism.

As an example line of argument, one could argue that what defines you as an individual is your set of memories, and since the person who will wake up in your body tomorrow is the only human being with access to your current memories then you must be it. And while this may seem to work on the surface, a close inspection reveals otherwise. In particular, all of the following facts work against it: (1) memory is a constructive process and every time you remember something you remember it (slightly) differently, (2) memories are unreliable and do not always work at will (e.g. false memories), (3) it is unclear what happens if you copy all of your memories into someone else (do you become that person?), (4) how many memories can you swap with someone until you become a different person?, and so on. Here the more detailed questions one asks, the more ad-hoc modifications of the theory are needed. In the end, one is left with what appears to be just a set of conventional rules to determine whether two persons are the same for practical purposes. But it does not seem to carve nature at its joints; you’d be merely over-fitting the problem.

The same happens with most Closed Individualist accounts. You need to define what the identity carrier is, and after doing so one can identify situations in which identity is not well-defined given that identity carrier (memory, causality, shared matter, etc.).

But for both Open and Empty Individualism, identity is well-defined for any being in the universe. Either all are the same, or all are different. Critics might say that this is a trivial and uninteresting point, perhaps even just definitional. Closed Individualism seems sufficiently arbitrary, however, that questioning it is warranted, and once one does so it is reasonable to start the search for alternatives by taking a look at the trivial cases in which either all or none of the beings are the same.

Moreover, there are many arguments in favor of these views. They indeed solve and usefully reformulate a range of philosophical problems when applied diligently. I would argue that they play a role in philosophy similar to that of conservation of energy in physics. The energy conservation law has been empirically tested to extremely high levels of precision, which is something we will have to do without in the realm of philosophy; instead, we shall rely on powerful philosophical insights. In addition, these views make a lot of problems tractable and offer a powerful lens through which to interpret core difficulties in the field.

Open and Empty Individualism either solve or bear on: decision theory, utilitarianism, fission/fusion, mind-uploading and mind-melding, panpsychism, etc. For now, let us focus on…

Antinatalism

Antinatalism is a philosophical view that posits that, all considered, it is better not to be born. Many philosophers could be adequately described as antinatalists, but perhaps the most widely recognized proponent is David Benatar. A key argument Benatar considers is that there might be an asymmetry between pleasure and pain. Granted, he would say, experiencing pleasure is good, and experiencing suffering is bad. But while “the absence of pain is good, even if that good is not enjoyed by anyone”, we also have that “the absence of pleasure is not bad unless there is somebody for whom this absence is a deprivation.” Thus, while being born can give rise to both good and bad, not being born can only be good.

Contrary to popular perception, antinatalists are not more selfish or amoral than others. On the contrary, their willingness to “bite the bullet” of a counter-intuitive but logically defensible argument is a sign of being willing to face social disapproval for a good cause. But along with the stereotype, it is generally true that antinatalists are temperamentally depressive. This, of course, does not invalidate their arguments. If anything, sometimes a degree of depressive realism is essential to arrive at truly sober views in philosophy. But it shouldn’t be a surprise to learn that experiencing suffering, or having experienced it in the past, predisposes people to vehemently argue for the importance of its elimination. Having a direct acquaintance with the self-disclosing nastiness of suffering does give one a broader evidential base for commenting on the matter of pain and pleasure.

Antinatalism and Closed Individualism

Interestingly, Benatar’s argument, and those of many antinatalists, rely implicitly on personal identity background assumptions. In particular, antinatalism is usually framed in a way that assumes Closed Individualism.

The idea that a “person can be harmed by coming into existence” is developed within a conceptual framework in which the inhabitants of the universe are narrative beings. These beings have both spatial and temporal extension. They also have the property that, had the conditions prior to their birth been different, they might not have existed. But how many possible beings are there? How genetically or environmentally different do they need to be to count as different beings? What happens if two beings merge? Or if they converge towards the same exact physical configuration over time?

 

This conceptual framework has counter-intuitive implications when taken to the extreme. For example, the amount of harm you do is measured by how many people you allow to be born, rather than by how many years of suffering you prevent.

For the sake of the argument, imagine that you have control over a sentient-AI-enabled virtual environment in which you can make beings start existing and stop existing. Say that you create two beings, A and B, who are different in morally irrelevant ways (e.g. one likes blue more than red, but on average they both end up suffering and delighting in their experience with the same intensity). With Empty Individualism, you would consider giving A 20 years of life and not creating B vs. giving A and B 10 years of life each to be morally equivalent. But with Closed Individualism you would rightly worry that these two scenarios are completely different. By giving years of life to both A and B (any amount of life!) you have doubled the number of subjects who are affected by your decisions. If the gulf of individuality between two persons is infinite, as Closed Individualism would have it, by creating both A and B you have created two parallel realities, and that has an ontological effect on existence. It’s a big deal. Perhaps a way to put it succinctly would be: God considers much more carefully the question of whether to create a person who will live only 70 years versus whether to add a million years of life to an angel who has already lived for a very long time. Creating an entirely new soul is not to be taken lightly (incidentally, this may cast the pro-choice/pro-life debate in an entirely new light).

Thus, antinatalism is usually framed in a way that assumes Closed Individualism. The idea that a being is (possibly) harmed by coming into existence casts the possible solutions in terms of whether one should allow animals (or beings) to be born. But if one were to take an Open or Empty Individualist point of view, the question becomes entirely different. Namely, what kind of experiences should we allow to exist in the future…

Antinatalism and Empty Individualism

I think that the strongest case for antinatalism comes from a take on personal identity that is different than the implicit default (Closed Individualism). If you assume Empty Individualism, in particular, reality starts to seem a lot more horrible than you had imagined. Consider how in Empty Individualism fundamental entities exist as “moments of experience” rather than narrative streams. Therefore, every time that an animal suffers, what is actually happening is that some moments of experience get to have their whole existence in pain and suffering. In this light, one stops seeing people who suffer terrible conditions (e.g. kidney stones, schizophrenia, etc.) as people who are unlucky, and instead one sees their brains as experience machines capable of creating beings whose entire existence is extremely negative.

With Empty Individualism there is simply no way to “make it up to someone” for having had a bad experience in the past. Thus, out of compassion for the extremely negative moments of experience, one could argue that it might be reasonable to try to avoid this whole business of life altogether. That said, this imperative does not come from the asymmetry between pain and pleasure that Benatar talks about (which as we saw implicitly requires Closed Individualism). In Empty Individualism it does not make sense to say that someone has been brought into existence. So antinatalism gets justified from a different angle, albeit one that might be even more powerful.

In my assessment, the mere possibility of Empty Individualism is a good reason to take antinatalism very seriously.

It is worth noting that the combination of Empty Individualism and Antinatalism has been (implicitly) discussed by Thomas Metzinger (cf. Benevolent Artificial Anti-Natalism (BAAN)) and FRI‘s Brian Tomasik.

Antinatalism and Open Individualism

Here is a Reddit post and then a comment on a related thread (by the same author) worth reading on this subject (indeed these artifacts motivated me to write the article you are currently reading):

There’s an interesting theory of personal existence making the rounds lately called Open Individualism. See here, here, and here. Basically, it claims that consciousness is like a single person in a huge interconnected library. One floor of the library contains all of your life’s experiences, and the other floors contain the experiences of others. Consciousness wanders the aisles, and each time he picks up a book he experiences whatever moment of life is recorded in it as if he were living it. Then he moves onto the next one (or any other random one on any floor) and experiences that one. In essence, the “experiencer” of all experience everywhere, across all conscious beings, is just one numerically identical subject. It only seems like we are each separate “experiencers” because it can only experience one perspective at a time, just like I can only experience one moment of my own life at a time. In actuality, we’re all the same person.

 

Anyway, there’s no evidence for this, but it solves a lot of philosophical problems apparently, and in any case there’s no evidence for the opposing view either because it’s all speculative philosophy.

 

But if this were true, and when I’m done living the life of this particular person, I will go on to live every other life from its internal perspective, it has some implications for antinatalism. All suffering is essentially experienced by the same subject, just through the lens of many different brains. There would be no substantial difference between three people suffering and three thousand people suffering, assuming their experiences don’t leave any impact or residue on the singular consciousness that experiences them. Even if all conscious life on earth were to end, there are still likely innumerable conscious beings elsewhere in the universe, and if Open Individualism is correct, I’ll just move on to experiencing those lives. And since I can re-experience them an infinite number of times, it makes no difference how many there are. In fact, even if I just experienced the same life over and over again ten thousand times, it wouldn’t be any different from experiencing ten thousand different lives in succession, as far as suffering is concerned.

 

The only way to end the experience of suffering would be to gradually elevate all conscious beings to a state of near-constant happiness through technology, or exterminate every conscious being like the Flood from the Halo series of games. But the second option couldn’t guarantee that life wouldn’t arise again in some other corner of the multiverse, and when it did, I’d be right there again as the conscious experiencer of whatever suffering it would endure.

 

I find myself drawn to Open Individualism. It’s not mysticism, it’s not a Big Soul or something we all merge with, it’s just a new way of conceptualizing what it feels like to be a person from the inside. Yet, it has these moral implications that I can’t seem to resolve. I welcome any input.

 

– “Open individualism and antinatalism” by Reddit user CrumbledFingers in r/antinatalism (March 23, 2017)

And on a different thread:

I have thought a lot about the implications of open individualism (which I will refer to as “universalism” from here on, as that’s the name coined by its earliest proponent, Arnold Zuboff) for antinatalism. In short, I think it has two major implications, one of which you mention. The first, as you say, is that freedom from conscious life is impossible. This is bad, but not as bad as it would be if I were aware of it from every perspective. As it stands, at least on Earth, only a small number of people have any inkling that they are me. So, it is not like experiencing the multitude of conscious events taking place across reality is any kind of burden that accumulates over time; from the perspective of each isolated nervous system, it will always appear that whatever is being experienced is the only thing I am experiencing. In this way, the fact that I am never truly unconscious does not have the same sting as it would to, for example, an insomniac, who is also never unconscious but must experience the constant wakefulness from one integrated perspective all the time.

 

It’s like being told that I will suffer total irreversible amnesia at some point in my future; while I can still expect to be the person that experiences all the confusion and anxiety of total amnesia when it happens, I must also acknowledge that the residue of any pains I would have experienced beforehand would be erased. Much of what makes consciousness a losing game is the persistence of stresses. Universalism doesn’t imply that any stresses will carry over between the nervous systems of individual beings, so the reality of my situation is by no means as nightmarish as eternal life in a single body (although, if there exists an immortal being somewhere in the universe, I am currently experiencing the nightmare of its life).

 

The second implication of this view for antinatalism is that one of the worst things about coming into existence, namely death, is placed in quite a different context. According to the ordinary view (sometimes called “closed” individualism), death permanently ends the conscious existence of an alienated self. Universalism says there is no alienated self that is annihilated upon the death of any particular mind. There are just moments of conscious experience that occur in various substrates across space and time, and I am the subject of all such experiences. Thus, the encroaching wall of perpetual darkness and silence that is usually an object of dread becomes less of a problem for those who have realized that they are me. Of course, this realization is not built into most people’s psychology and has to be learned, reasoned out, intellectually grasped. This is why procreation is still immoral, because even though I will not cease to exist when any specific organism dies, from the perspective of each one I will almost certainly believe otherwise, and that will always be a source of deep suffering for me. The fewer instances of this existential dread, however misplaced they may be, the better.

 

This is why it’s important to make more people understand the position of universalism/open individualism. In the future, long after the person typing this sentence has perished, my well-being will depend in large part on having the knowledge that I am every person. The earlier in each life I come to that understanding, and thus diminish the fear of dying, the better off I will be. Naturally, this project decreases in potential impact if conscious life is abundant in the universe, and in response to that problem I concede there is probably little hope, unless there are beings elsewhere in the universe that have comprehended who they are and are taking the same steps in their spheres of influence. My dream is that intelligent life eventually either snuffs itself out or discovers how to connect many nervous systems together, which would demonstrate to every connected mind that it has always belonged to one subject, has always been me, but I don’t have any reason to assume this is even possible on a physical level.

 

So, I suppose you are mostly right about one thing: there are no lucky ones that escape the badness of life’s worst agonies, either by virtue of a privileged upbringing or an instantaneous and painless demise. They and the less fortunate ones are all equally me. Yet, the horror of going through their experiences is mitigated somewhat in the details.

 

– A comment by CrumbledFingers in the Reddit post “Antinatalism and Open individualism“, also in r/antinatalism (March 12, 2017)

Our brain tries to make sense of metaphysical questions in wet-ware that shares computational space with a lot of adaptive survival programs. It does not matter if you have thick barriers (cf. thick and thin boundaries of the mind): the way you assess the value of situations as a human will tend to over-focus on whatever would allow you to go up Maslow’s hierarchy of needs (or, more cynically, achieve great feats that signal your genetic fitness). Our motivational architecture is implemented in such a way that it is very good at handling questions like how to find food when you are hungry and how to play social games in a way that impresses others and leaves a social mark. Our brains utilize many heuristics based on personhood and narrative-streams when exploring the desirability of present options. We are people, and our brains are adapted to solve people problems. Not, as it turns out, general problems involving the entire state-space of possible conscious experiences.

Prandium Interruptus

Our brains render our inner world-simulation with flavors and textures of qualia to suit their evolutionary needs. This, in turn, impairs our ability to aptly represent scenarios that go beyond the range of normal human experiences. Let me illustrate this point with the following thought experiment:

Would you rather (a) have a 1-hour meal, or (b) have the same meal, but at the half-hour point be instantly transformed into a simple, amnesic, and blank experience of perfectly neutral hedonic value that lasts ten quintillion years, and then, after that extremely long stretch of neither-happiness-nor-suffering, resume the rest of the meal as if nothing had happened, with no memory of the neutral period?

According to most utilitarian calculi these two scenarios ought to be perfectly equivalent. In both cases the total amount of positive and negative qualia is the same (the full duration of the meal) and the only difference is that the latter also contains a large amount of neutral experience too. Whether classical or negative, utilitarians should consider these experiences equivalent since they contain the same amount of pleasure and pain (note: some other ethical frameworks do distinguish between these cases, such as average and market utilitarianism).
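In toy form (all hedonic numbers are, of course, made up), the equivalence these calculi assert looks like this:

```python
# Toy hedonic accounting for the thought experiment (assumed numbers):
# each minute of the meal is worth +1 hedon, and the inserted neutral
# period is worth exactly 0 hedons per unit time.
meal_minutes = 60
neutral_years = 10**19

total_a = meal_minutes * 1                        # uninterrupted meal
total_b = meal_minutes * 1 + neutral_years * 0    # meal + neutral eon

print(total_a == total_b)  # True: the calculi cannot tell them apart
```

Any number of neutral years multiplied by zero hedonic value contributes nothing to the sum, which is exactly why a purely additive calculus treats (a) and (b) as identical.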

Intuitively, however, (a) seems a lot better than (b). One imagines oneself having an awfully long experience, bored out of one’s mind, just wanting it to end, get it over with, and get back to enjoying the nice meal. But the very premise of the thought experiment presupposes that one will not be bored during that period of time, nor will one be wishing it to be over, or anything of the sort, considering that all of those are mental states of negative quality and the experience is supposed to be neutral.

Now this is of course a completely crazy thought experiment. Or is it?

The One-Electron View

In 1940 John Wheeler proposed to Richard Feynman the idea that all of reality is made of a single electron moving backwards and forwards in time, interfering with itself. This view has come to be regarded as the One-Electron Universe. Under Open Individualism, that one electron is you. From every single moment of experience to the next, you may have experienced life as a sextillion different animals, been 10^32 fleeting macroscopic entangled particles, and gotten stuck as a single non-interacting electron in the inter-galactic medium for googols of subjective years. Of course you will not remember any of this, because your memories, and indeed all of your motivational architecture and anticipation programs, are embedded in the brain you are instantiating right now. From that point of view, there is absolutely no trace of the experiences you had during this hiatus.

The above way of describing the one-electron view is still just an approximation. In order to see it fully, we also need to address the fact that there is no “natural” order to all of these different experiences. Every way of factorizing it and describing the history of the universe as “this happened before this happened” and “this, now that” could be equally inapplicable from the point of view of fundamental reality.

Philosophy of Time


Presentism is the view that only the present moment is real. The future and the past are just conceptual constructs useful to navigate the world, but not actual places that exist. The “past exists as footprints”, in a manner of speaking. “Footprints of the past” are just strangely-shaped information-containing regions of the present, including your memories. Likewise, the “future” is unrealized: a helpful abstraction which evolution gave us to survive in this world.

On the other hand, eternalism treats the future and the past as always-actualized always-real landscapes of reality. Every point in space-time is equally real. Physically, this view tends to be brought up in connection with the theory of relativity, where frame-invariant descriptions of the space-time continuum have no absolute present line. For a compelling physical case, see the Rietdijk-Putnam argument.

Eternalism has been explored in literature and spirituality extensively. To name a few artifacts: The Egg, Hindu and Buddhist philosophy, the videos of Bob Sanders (cf. The Gap in Time, The Complexity of Time), the essays of Philip K. Dick and J. L. Borges, the poetry of T. S. Eliot, the fiction of Kurt Vonnegut Jr (Timequake, Slaughterhouse Five, etc.), and the graphic novels of Alan Moore, such as Watchmen:

Let me know in the comments if you know of any other work of fiction that explores this theme. In particular, I would love to assemble a comprehensive list of literature that explores Open Individualism and Eternalism.

Personal Identity and Eternalism

For the time being (no pun intended), let us assume that Eternalism is correct. How do Eternalism and personal identity interact? Doctor Manhattan in the above images (taken from Watchmen) exemplifies what it would be like to be a Closed Individualist Eternalist. He seems to be aware of his entire timeline at once, yet recognizes his unique identity apart from others. That said, as explained above, Closed Individualism is a distinctly unphysical theory of identity. One would thus expect Doctor Manhattan, given his physically-grounded understanding of reality, to espouse a different theory of identity.

A philosophy that pairs Empty Individualism with Eternalism is the stuff of nightmares. Not only would some beings, as under Empty Individualism alone, exist entirely as beings of pain; such unfortunate moments of experience would also be stuck in time. Like insects in amber, their expressions of horror and their urgency to run away from pain and suffering are forever crystallized in their corresponding spatiotemporal coordinates. I personally find this view paralyzing and sickening, though I am aware that such a reaction is not adaptive for the abolitionist project. Namely, even if “Eternalism + Empty Individualism” is a true account of reality, one ought not to be so frightened by it that one becomes incapable of working towards preventing future suffering. In this light, I adopt the attitude of “hope for the best, plan for the worst”.

Lastly, if Open Individualism and Eternalism are both true (as I suspect is the case), we would be in for what amounts to an incredibly trippy picture of reality. We are all one timeless spatiotemporal crystal. But why does this eternal crystal (who is everyone) exist? Here the one-electron view and the question “why does anything exist?” could both be simultaneously addressed with a single logico-physical principle. Namely, that the sum-total of existence contains no information to speak of. This is what David Pearce calls “Zero Ontology” (see: 1, 2, 3, 4). What you and I are, in the final analysis, is the necessary implication of there being no information; we are all a singular pattern of self-interference whose ultimate nature amounts to a dimensionless unit-sphere in Hilbert space. But this is a story for another post.

On a more grounded note, Scientific American recently ran an article that could be placed in this category of Open Individualism and Eternalism. In it the authors argue that the physical signatures of multiple-personality disorder, which explain the absence of phenomenal binding between alters that share the same brain, could be extended to explain why reality is one and yet appears as many. We are, in this view, all alters of the universe.

Personal Identity X Philosophy of Time X Antinatalism

Sober, scientifically grounded, and philosophically rigorous accounts of the awfulness of reality are rare. On the one hand, temperamentally happy individuals are more likely to think about the possibilities of heaven that lie ahead of us, and their heightened positive mood will likewise make them more likely to report on their findings. Temperamental depressives, on the other hand, may both investigate reality with less motivated reasoning than the euthymic and also be less likely to report on the results due to their subdued mood (“why even try? why even bother to write about it?”). Suffering in the Multiverse by David Pearce is a notable exception to this pattern. David’s essay highlights that if Eternalism is true together with Empty Individualism, there are vast regions of the multiverse filled with suffering that we can simply do nothing about (“Everett Hell Branches”). Taken together with a negative utilitarian ethic, this represents a calamity of (quite literally) astronomical proportions. And, sadly, there simply is no off-button to the multiverse as a whole. The suffering is/has/will always be there. And this means that the best we can do is to avoid the suffering of those beings in our forward light-cone (a drop relative to the size of the ocean of existence). The only hope left is to find a loophole in quantum mechanics that allows us to cross into other Everett branches of the multiverse and launch cosmic rescue missions. A counsel of despair or a rational prospect? Only time will tell.

Another key author that explores the intersection of these views is Mario Montano (see: Eternalism and Its Ethical Implications and The Savior Imperative).

A key point that both of these authors make is that however nasty reality might be, ethical antinatalists and negative utilitarians shouldn’t hold their breath about the possibility that reality can be destroyed. In Open Individualism plus Eternalism, the light of consciousness (perhaps what some might call the secular version of God) simply is, everywhere and eternally. If reality could be destroyed, such destruction is certainly limited to our forward light-cone. And unlike in Closed Individualist accounts, it is not possible to help anyone by preventing their birth; the one subject of existence has already been born, and will never be unborn, so to speak.

Nor should ethical antinatalists and negative utilitarians think that avoiding having kids is in any way contributing to the cause of reducing suffering. It is reasonable to assume that the personality traits of agreeableness (specifically care and compassion), openness to experience, and high levels of systematizing intelligence are all over-represented among antinatalists. Insofar as these traits are needed to build a good future, antinatalists should in fact be some of the people who reproduce the most. Mario Montano says:

Hanson calls the era we live in the “dream time” since it’s evolutionarily unusual for any species to be wealthy enough to have any values beyond “survive and reproduce.” However, from an anthropic perspective in infinite dimensional Hilbert space, you won’t have any values beyond “survive and reproduce.” The you which survives will not be the one with exotic values of radical compassion for all existence that caused you to commit peaceful suicide. That memetic stream weeded himself out and your consciousness is cast to a different narrative orbit which wants to survive and reproduce his mind. Eventually. Wanting is, more often than not, a precondition for successfully attaining the object of want.

Physicalism Implies Existence Never Dies

Also, from the same essay:

Anti-natalists full of weeping benignity are literally not successful replicators. The Will to Power is life itself. It is consciousness itself. And it will be, when a superintelligent coercive singleton swallows superclusters of baryonic matter and then spreads them as the flaming word into the unconverted future light cone.

[…]

You eventually love existence. Because if you don’t, something which does swallows you, and it is that which survives.

I would argue that the above reasoning is not entirely correct in the grand scheme of things*, but it is certainly applicable in the context of human-like minds and agents. See also: David Pearce’s similar criticisms of antinatalism as a policy.

This should underscore the fact that in its current guise, antinatalism is completely self-limiting. Worryingly, one could imagine an organized contingent of antinatalists conducting research on how to destroy life as efficiently as possible. Antinatalists are generally very smart, and if Eliezer Yudkowsky‘s claim that “every 18 months the minimum IQ necessary to destroy the world drops by one point” is true, we may be in for some trouble. Pearce’s, Montano’s, and my own take is that even if something akin to negative utilitarianism is the case, we should still pursue the goal of diminishing suffering in as peaceful a way as possible. The risk of trying to painlessly destroy the world and failing to do so might turn out to be ethically catastrophic. A much better bet would be, we claim, to work towards the elimination of suffering by developing commercially successful hedonic recalibration technology. This also has the benefit that both depressives and life-lovers will want to team up with you; indeed, the promise of super-human bliss can be extraordinarily motivating to people who already lead happy lives, whereas the prospect of achieving “at best nothing” sounds stale and uninviting (if not outright antagonistic) to them.

An Evolutionary Environment Set Up For Success

If we want to create a world free from suffering, we will have to contend with the fact that suffering is adaptive in certain environments. The solution here is to avoid such environments, and foster ecosystems of mind that give an evolutionary advantage to the super-happy. What is more, we already have the basic ingredients to do so. In Wireheading Done Right I discussed how, right now, the economy is based on trading three core goods: (1) survival tools, (2) power, and (3) information about the state-space of consciousness. Thankfully, the world right now is populated by humans who largely choose to spend their extra income on fun rather than on trips to the sperm bank. In other words, people are willing to trade some of their expected reproductive success for good experiences. This is good because it allows the existence of an economy of information about the state-space of consciousness, and thus creates an evolutionary advantage for caring about consciousness and being good at navigating its state-space. But for this to be sustainable, we will need to find a way to make positive valence gradients (i.e. gradients of bliss) both economically useful and power-granting. Otherwise, I would argue, the part of the economy that is dedicated to trading information about the state-space of consciousness is bound to be displaced by the other two (i.e. survival and power). For a more detailed discussion on these questions see: Consciousness vs. Pure Replicators.


Can we make the benevolent exploration of the state-space of consciousness evolutionarily advantageous?

In conclusion, to close down hell (to the extent that is physically possible), we need to take advantage of the resources and opportunities granted to us by merely living in Hanson’s “dream time” (cf. Age of Spandrels). This includes the fact that right now people are willing to spend money on new experiences (especially if novel and containing positive valence), and the fact that philosophy of personal identity can still persuade people to work towards the wellbeing of all sentient beings. In particular, scientifically-grounded arguments in favor of both Open and Empty Individualism weaken people’s sense of self and make them more disposed to care about others, regardless of their genetic relatedness. On its natural course, however, this tendency may ultimately be removed by natural selection: if those who are immune to philosophy are more likely to maximize their inclusive fitness, humanity may devolve into philosophical deafness. The solution here is to identify the ways in which philosophical clarity can help us overcome coordination problems, highlight natural ethical Schelling points, and ultimately allow us to summon a benevolent super-organism to carry forward the abolition of as much suffering as is physically possible.

And only once we have done everything in our power to close down hell in all of its guises will we be able to enjoy the rest of our forward light-cone in good conscience. Until then, we ethically-minded folks shall relentlessly work on building universe-sized fire-extinguishers to put out the fire of Hell.


* This is for several reasons: (1) phenomenal binding is not epiphenomenal, (2) the optimal computational valence gradients are not necessarily located on the positive side, sadly, and (3) wanting, liking, and learning can be disentangled.

John von Neumann

Passing of a Great Mind

John von Neumann, a Brilliant, Jovial Mathematician, was a Prodigious Servant of Science and his Country

by Clay Blair Jr. – Life Magazine (February 25th, 1957)

The world lost one of its greatest scientists when Professor John von Neumann, 54, died this month of cancer in Washington, D.C. His death, like his life’s work, passed almost unnoticed by the public. But scientists throughout the free world regarded it as a tragic loss. They knew that Von Neumann’s brilliant mind had not only advanced his own special field, pure mathematics, but had also helped put the West in an immeasurably stronger position in the nuclear arms race. Before he was 30 he had established himself as one of the world’s foremost mathematicians. In World War II he was the principal discoverer of the implosion method, the secret of the atomic bomb.

The government officials and scientists who attended the requiem mass at the Walter Reed Hospital chapel last week were there not merely in recognition of his vast contributions to science, but also to pay personal tribute to a warm and delightful personality and a selfless servant of his country.

For more than a year Von Neumann had known he was going to die. But until the illness was far advanced he continued to devote himself to serving the government as a member of the Atomic Energy Commission, to which he was appointed in 1954. A telephone by his bed connected directly with his AEC office. On several occasions he was taken downtown in a limousine to attend commission meetings in a wheelchair. At Walter Reed, where he was moved early last spring, an Air Force officer, Lieut. Colonel Vincent Ford, worked full time assisting him. Eight airmen, all cleared for top secret material, were assigned to help on a 24-hour basis. His work for the Air Force and other government departments continued. Cabinet members and military officials continually came for his advice, and on one occasion Secretary of Defense Charles Wilson, Air Force Secretary Donald Quarles and most of the top Air Force brass gathered in Von Neumann’s suite to consult his judgement while there was still time. So relentlessly did Von Neumann pursue his official duties that he risked neglecting the treatise which was to form the capstone of his work on the scientific specialty, computing machines, to which he had devoted many recent years.


His fellow scientists, however, did not need any further evidence of Von Neumann’s rank as a scientist – or his assured place in history. They knew that during World War II at Los Alamos Von Neumann’s development of the idea of implosion speeded up the making of the atomic bomb by at least a full year. His later work with electronic computers quickened U.S. development of the H-bomb by months. The chief designer of the H-bomb, Edward Teller, once said with wry humor that Von Neumann was “one of those rare mathematicians who could descend to the level of the physicist.” Many theoretical physicists admit that they learned more from Von Neumann in methods of scientific thinking than from any of their colleagues. Hans Bethe, who was director of the theoretical physics division at Los Alamos, says, “I have sometimes wondered whether a brain like Von Neumann’s does not indicate a species superior to that of man.”


The foremost authority on computing machines in the U.S., Von Neumann was more than anyone else responsible for the increased use of the electronic “brains” in government and industry. The machine he called MANIAC (mathematical analyzer, numerical integrator and computer), which he built at the Institute for Advanced Study in Princeton, N.J., was the prototype for most of the advanced calculating machines now in use. Another machine, NORC, which he built for the Navy, can deliver a full day’s weather prediction in a few minutes. The principal adviser to the U.S. Air Force on nuclear weapons, Von Neumann was the most influential scientific force behind the U.S. decision to embark on accelerated production of intercontinental ballistic missiles. His “theory of games,” outlined in a book which he published in 1944 in collaboration with Economist Oskar Morgenstern, opened up an entirely new branch of mathematics. Analyzing the mathematical probabilities behind games of chance, Von Neumann went on to formulate a mathematical approach to such widespread fields as economics, sociology and even military strategy. His contributions to the quantum theory, the theory which explains the emission and absorption of energy in atoms and the one on which all atomic and nuclear physics are based, were set forth in a work entitled Mathematical Foundations of Quantum Mechanics which he wrote at the age of 23. It is today one of the cornerstones of this highly specialized branch of mathematical thought.
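The minimax result at the heart of Von Neumann’s “theory of games” can be made concrete with a toy computation. As a hedged illustration only (the payoff matrix, function name, and closed-form formula below are my own choices for the sketch, not anything from the article), here is the value and optimal mixed strategy of a 2×2 zero-sum game with no saddle point:

```python
from fractions import Fraction

# Row player's payoff matrix for "matching pennies" (an assumed example):
# the row player wins 1 on a match, loses 1 on a mismatch.
A = [[Fraction(1), Fraction(-1)],
     [Fraction(-1), Fraction(1)]]

def value_2x2(A):
    """Game value and the row player's optimal mixed strategy for a
    2x2 zero-sum game, via the standard closed-form solution.
    Valid only when the game has no saddle point (denominator != 0)."""
    a, b = A[0]
    c, d = A[1]
    denom = (a + d) - (b + c)
    p = (d - c) / denom          # probability of playing row 1
    v = (a * d - b * c) / denom  # value of the game
    return v, (p, 1 - p)

v, (p, q) = value_2x2(A)
print(v, p, q)  # matching pennies: value 0, optimal mix (1/2, 1/2)
```

The minimax theorem guarantees that this value is simultaneously the best the row player can secure and the worst the column player must concede, which is why a single number summarizes the whole game.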

For Von Neumann the road to success was a many-laned highway with little traffic and no speed limit. He was born in 1903 in Budapest and was of the same generation of Hungarian physicists as Edward Teller, Leo Szilard and Eugene Wigner, all of whom later worked on atomic energy development for the U.S.

The eldest of three sons of a well-to-do Jewish financier who had been decorated by the Emperor Franz Josef, John von Neumann grew up in a society which placed a premium on intellectual achievement. At the age of 6 he was able to divide two eight-digit numbers in his head. By the age of 8 he had mastered college calculus and as a trick could memorize on sight a column in a telephone book and repeat back the names, addresses and numbers. History was only a “hobby,” but by the outbreak of World War I, when he was 10, his photographic mind had absorbed most of the contents of the 46-volume works edited by the German historian Oncken with a sophistication that startled his elders.

Despite his obvious technical ability, as a young man Von Neumann wanted to follow his father’s financial career, but he was soon dissuaded. Under a kind of supertutor, a first-rank mathematician at the University of Budapest named Leopold Fejer, Von Neumann was steered into the academic world. At 21 he received two degrees – one in chemical engineering at Zurich and a PhD in mathematics from the University of Budapest. The following year, 1926, as Admiral Horthy’s rightist regime had been repressing Hungarian Jews, he moved to Göttingen, Germany, then the mathematical center of the world. It was there that he published his major work on quantum mechanics.

The young professor

His fame now spreading, Von Neumann at 23 qualified as a Privatdozent (lecturer) at the University of Berlin, one of the youngest in the school’s history. But the Nazis had already begun their march to power. In 1929 Von Neumann accepted a visiting lectureship at Princeton University and in 1930, at the age of 26, he took a job there as professor of mathematical physics – after a quick trip to Budapest to marry a vivacious 18-year-old named Mariette Kovesi. Three years later, when the Institute for Advanced Study was founded at Princeton, Von Neumann was appointed – as was Albert Einstein – to be one of its first full professors. “He was so young,” a member of the institute recalls, “that most people who saw him in the halls mistook him for a graduate student.”


Although they worked near each other in the same building, Einstein and Von Neumann were not intimate, and because their approach to scientific matters was different they never formally collaborated. A member of the institute who worked side by side with both men in the early days recalls, “Einstein’s mind was slow and contemplative. He would think about something for years. Johnny’s mind was just the opposite. It was lightning quick – stunningly fast. If you gave him a problem he either solved it right away or not at all. If he had to think about it a long time and it bored him, his interest would begin to wander. And Johnny’s mind would not shine unless whatever he was working on had his undivided attention.” But the problems he did care about, such as his “theory of games,” absorbed him for much longer periods.

‘Proof by erasure’

Partly because of this quicksilver quality Von Neumann was not an outstanding teacher to many of his students. But for the advanced students who could ascend to his level he was inspirational. His lectures were brilliant, although at times difficult to follow because of his way of erasing and rewriting dozens of formulae on the blackboard. In explaining mathematical problems Von Neumann would write his equations hurriedly, starting at the top of the blackboard and working down. When he reached the bottom, if the problem was unfinished, he would erase the top equations and start down again. By the time he had done this two or three times most other mathematicians would find themselves unable to keep track. On one such occasion a colleague at Princeton waited until Von Neumann had finished and said, “I see. Proof by erasure.”

Von Neumann himself was perpetually interested in many fields unrelated to science. Several years ago his wife gave him a 21-volume Cambridge History set, and she is sure he memorized every name and fact in the books. “He is a major expert on all the royal family trees in Europe,” a friend said once. “He can tell you who fell in love with whom, and why, what obscure cousin this or that czar married, how many illegitimate children he had and so on.” One night during the Princeton days a world-famous expert on Byzantine history came to the Von Neumann house for a party. “Johnny and the professor got into a corner and began discussing some obscure facet,” recalls a friend who was there. “Then an argument arose over a date. Johnny insisted it was this, the professor that. So Johnny said, ‘Let’s get the book.’ They looked it up and Johnny was right. A few weeks later the professor was invited to the Von Neumann house again. He called Mrs. von Neumann and said jokingly, ‘I’ll come if Johnny promises not to discuss Byzantine history. Everybody thinks I am the world’s greatest expert in it and I want them to keep on thinking that.'”

Once a friend showed him an extremely complex problem and remarked that a certain famous mathematician had taken a whole week’s journey across Russia on the Trans-Siberian Railroad to complete it. Rushing for a train, Von Neumann took the problem along. Two days later the friend received an air-mail packet from Chicago. In it was a 50-page handwritten solution to the problem. Von Neumann had added a postscript: “Running time to Chicago: 15 hours, 26 minutes.” To Von Neumann this was not an expression of vanity but of sheer delight – a hole in one.

During periods of intense intellectual concentration Von Neumann, like most of his professional colleagues, was lost in preoccupation, and the real world spun past him. He would sometimes interrupt a trip to put through a telephone call to find out why he had taken the trip in the first place.

Von Neumann believed that concentration alone was insufficient for solving some of the most difficult mathematical problems and that these are solved in the subconscious. He would often go to sleep with a problem unsolved, wake up in the morning and scribble the answer on a pad he kept on the bedside table. It was a common occurrence for him to begin scribbling with pencil and paper in the midst of a nightclub floor show or a lively party, “the noisier,” his wife says, “the better.” When his wife arranged a secluded study for Von Neumann on the third floor of the Princeton home, Von Neumann was furious. “He stormed downstairs,” says Mrs. von Neumann, “and demanded, ‘What are you trying to do, keep me away from what’s going on?’ After that he did most of his work in the living room with my phonograph blaring.”

His pride in his brain power made him easy prey to scientific jokesters. A friend once spent a week working out various steps in an obscure mathematical process. Accosting Von Neumann at a party he asked for help in solving the problem. After listening to it, Von Neumann leaned his plump frame against a door and stared blankly, his mind going through the necessary calculations. At each step in the process the friend would quickly put in, “Well, it comes out to this, doesn’t it?” After several such interruptions Von Neumann became perturbed and when his friend “beat” him to the final answer he exploded in fury. “Johnny sulked for weeks,” recalls the friend, “before he found out it was all a joke.”

He did not look like a professor. He dressed so much like a Wall Street banker that a fellow scientist once said, “Johnny, why don’t you smear some chalk dust on your coat so you look like the rest of us?” He loved to eat, especially rich sauces and desserts, and in later years was forced to diet rigidly. To him exercise was “nonsense.”

Those lively Von Neumann parties

Most card-playing bored him, although he was fascinated by the mathematical probabilities involved in poker and baccarat. He never cared for movies. “Every time we went,” his wife recalls, “he would either go to sleep or do math problems in his head.” When he could do neither he would break into violent coughing spells. What he truly loved, aside from work, was a good party. Residents of Princeton’s quiet academic community can still recall the lively goings-on at the Von Neumann’s big, rambling house on Westcott Road. “Those old geniuses got downright approachable at the Von Neumanns’,” a friend recalls. Von Neumann’s talents as a host were based on his drinks, which were strong, his repertoire of off-color limericks, which was massive, and his social ease, which was consummate. Although he could rarely remember a name, Von Neumann would escort each new guest around the room, bowing punctiliously to cover up the fact that he was not using names in introducing people.

Von Neumann also had a passion for automobiles, not for tinkering with them but for driving them as if they were heavy tanks. He turned up with a new one every year at Princeton. “The way he drove, a car couldn’t possibly last more than a year,” a friend says. Von Neumann was regularly arrested for speeding and some of his wrecks became legendary. A Princeton crossroads was for a while known as “Von Neumann corner” because of the number of times the mathematician had cracked up there. He once emerged from a totally demolished car with this explanation: “I was proceeding down the road. The trees on the right were passing me in orderly fashion at 60 miles an hour. Suddenly one of them stepped out in my path. Boom!”

Mariette and John von Neumann had one child, Marina, born in 1935, who graduated from Radcliffe last June, summa cum laude, with the highest scholastic record in her class. In 1937, the year Von Neumann was elected to the National Academy of Sciences and became a naturalized citizen of the U.S., the marriage ended in divorce. The following year on a trip to Budapest he met and married Klara Dan, whom he subsequently trained to be an expert on electronic computing machines. The Von Neumann home in Princeton continued to be a center of gaiety as well as a hotel for prominent intellectual transients.

In the late 1930s Von Neumann began to receive a new type of visitor at Princeton: the military scientist and engineer. After he had handled a number of jobs for the Navy in ballistics and anti-submarine warfare, word of his talents spread, and Army Ordnance began using him more and more as a consultant at its Aberdeen Proving Ground in Maryland. As war drew nearer this kind of work took up more and more of his time.

During World War II he roved between Washington, where he had established a temporary residence, England, Los Alamos and other defense installations. When scientific groups heard Von Neumann was coming, they would set up all of their advanced mathematical problems like ducks in a shooting gallery. Then he would arrive and systematically topple them over.

After the Axis had been destroyed, Von Neumann urged that the U.S. immediately build even more powerful atomic weapons and use them before the Soviets could develop nuclear weapons of their own. It was not an emotional crusade, Von Neumann, like others, had coldly reasoned that the world had grown too small to permit nations to conduct their affairs independently of one another. He held that world government was inevitable – and the sooner the better. But he also believed it could never be established while Soviet Communism dominated half of the globe. A famous Von Neumann observation at the time: “With the Russians it is not a question of whether but when.” A hard-boiled strategist, he was one of the few scientists to advocate preventive war, and in 1950 he was remarking, “If you say why not bomb them tomorrow, I say why not today? If you say today at 5 o’clock, I say why not 1 o’clock?”

In late 1949, after the Russians had exploded their first atomic bomb and the U.S. scientific community was split over whether or not the U.S. should build a hydrogen bomb, Von Neumann reduced the argument to: “It is not a question of whether we build it or not, but when do we start calculating?” When the H-bomb controversy raged, Von Neumann slipped quietly out to Los Alamos, took a desk and began work on the first mathematical steps toward building the weapon, specifically deciding which computations would be fed to which electronic computers.

Von Neumann’s principal interest in the postwar years was electronic computing machines, and his advice on computers was in demand almost everywhere. One day he was urgently summoned to the offices of the Rand Corporation, a government-sponsored scientific research organization in Santa Monica, Calif. Rand scientists had come up with a problem so complex that the electronic computers then in existence seemingly could not handle it. The scientists wanted Von Neumann to invent a new kind of computer. After listening to the scientists expound, Von Neumann broke in: “Well, gentlemen, suppose you tell me exactly what the problem is?”

For the next two hours the men at Rand lectured, scribbled on blackboards, and brought charts and tables back and forth. Von Neumann sat with his head buried in his hands. When the presentation was completed, he scribbled on a pad, stared so blankly that a Rand scientist later said he looked as if “his mind had slipped his face out of gear,” then said, “Gentlemen, you do not need the computer. I have the answer.”

While the scientists sat in stunned silence, Von Neumann reeled off the various steps which would provide the solution to the problem. Having risen to this routine challenge, Von Neumann followed up with a routine suggestion: “Let’s go to lunch.”

In 1954, when the U.S. development of the intercontinental ballistic missile was dangerously bogged down, study groups under Von Neumann’s direction began paving the way for solution of the most baffling problems: guidance, miniaturization of components, heat resistance. In less than a year Von Neumann put his O.K. on the project – but not until he had completed a relentless investigation in his own dazzlingly fast style. One day, during an ICBM meeting on the West Coast, a physicist employed by an aircraft company approached Von Neumann with a detailed plan for one phase of the project. It consisted of a tome several hundred pages long on which the physicist had worked for eight months. Von Neumann took the book and flipped through the first several pages. Then he turned it over and began reading from back to front. He jotted down a figure on a pad, then a second and a third. He looked out the window for several seconds, returned the book to the physicist and said, “It won’t work.” The physicist returned to his company. After two months of re-evaluation, he came to the same conclusion.

In October 1954 Eisenhower appointed Von Neumann to the Atomic Energy Commission. Von Neumann accepted, although the Air Force and the senators who confirmed him insisted that he retain his chairmanship of the Air Force ballistic missile panel.

Von Neumann had been on the new job only six months when the pain first struck in the left shoulder. After two examinations, the physicians at Bethesda Naval Hospital suspected cancer. Within a month Von Neumann was wheeled into surgery at the New England Deaconess Hospital in Boston. A leading pathologist, Dr. Shields Warren, examined the biopsy tissue and confirmed that the pain was a secondary cancer. Doctors began to race to discover the primary location. Several weeks later they found it in the prostate. Von Neumann, they agreed, did not have long to live.

When he heard the news Von Neumann called for Dr. Warren. He asked, “Now that this thing has come, how shall I spend the remainder of my life?”

“Well, Johnny,” Warren said, “I would stay with the commission as long as you feel up to it. But at the same time I would say that if you have any important scientific papers – anything further scientifically to say – I would get started on it right away.”

Von Neumann returned to Washington and resumed his busy schedule at the Atomic Energy Commission. To those who asked about his arm, which was in a sling, he muttered something about a broken collarbone. He continued to preside over the ballistic missile committee, and to receive an unending stream of visitors from Los Alamos, Livermore, the Rand Corporation, Princeton. Most of these men knew that Von Neumann was dying of cancer, but the subject was never mentioned.

Machines creating new machines

After the last visitor had departed Von Neumann would retire to his second-floor study to work on the paper which he knew would be his last contribution to science. It was an attempt to formulate a concept shedding new light on the workings of the human brain. He believed that if such a concept could be stated with certainty, it would also be applicable to electronic computers and would permit man to make a major step forward in using these “automata.” In principle, he reasoned, there was no reason why some day a machine might not be built which not only could perform most of the functions of the human brain but could actually reproduce itself, i.e., create more supermachines like it. He proposed to present this paper at Yale, where he had been invited to give the 1956 Silliman Lectures.

As the weeks passed, work on the paper slowed. One evening, as Von Neumann and his wife were leaving a dinner party, he complained that he was “uncertain” about walking. Doctors furnished him with a wheelchair. But Von Neumann’s world had begun to close in tight around him. He was seized by periods of overwhelming melancholy.

In April 1956 Von Neumann moved into Walter Reed Hospital for good. Honors were now coming from all directions. He was awarded Yeshiva University’s first Einstein prize. In a special White House ceremony President Eisenhower presented him with the Medal of Freedom. That same month the AEC gave him the Enrico Fermi award for his contributions to the theory and design of computing machines, accompanied by a $50,000 tax-free grant.

Although born of Jewish parents, Von Neumann had never practiced Judaism. After his arrival in the U.S. he had been baptized a Roman Catholic. But his divorce from Mariette had put him beyond the sacraments of the Catholic Church for almost 19 years. Now he felt an urge to return. One morning he said to Klara, “I want to see a priest.” He added, “But he will have to be a special kind of priest, one that will be intellectually compatible.” Arrangements were made for special instructions to be given by a Catholic scholar from Washington. After a few weeks Von Neumann began once again to receive the sacraments.

The great mind falters

Toward the end of May the seizures of melancholy began to occur more frequently. In June the doctors finally announced – though not to Von Neumann himself – that the cancer had begun to spread. The great mind began to falter. “At times he would discuss history, mathematics, or automata, and he could recall word for word conversations we had had 20 years ago,” a friend says. “At other times he would scarcely recognize me.” His family – Klara, two brothers, his mother and daughter Marina – drew close around him and arranged a schedule so that one of them would always be on hand. Visitors were more carefully screened. Drugs fortunately prevented Von Neumann from experiencing pain. Now and then his old gifts of memory were again revealed. One day in the fall his brother Mike read Goethe’s Faust to him in German. Each time Mike paused to turn the page, Von Neumann recited from memory the first few lines of the following page.

One of his favorite companions was his mother Margaret von Neumann, 76 years old. In July the family in turn became concerned about her health, and it was suggested that she go to a hospital for a checkup. Two weeks later she died of cancer. “It was unbelievable,” a friend says. “She kept on going right up to the very end and never let anyone know a thing. How she must have suffered to make her son’s last days less worrisome.” Lest the news shock Von Neumann fatally, elaborate precautions were taken to keep it from him. When he guessed the truth, he suffered a severe setback.

Von Neumann’s body, which he had never given much thought to, went on serving him much longer than did his mind. Last summer the doctors had given him only three or four weeks to live. Months later, in October, his passing was again expected momentarily. But not until this month did his body give up. It was characteristic of the impatient, witty and incalculably brilliant John von Neumann that although he went on working for others until he could do no more, his own treatise on the workings of the brain – the work he thought would be his crowning achievement in his own name – was left unfinished.
