Bayesian Generalized Simulation Argument and Calculator
Episode Synopsis
I’ve composed a Bayesian formulation of the generalized Simulation Argument. To help guide you through it, I construct the formulation step by step, beginning with Nick Bostrom’s popular formulation of the standard Simulation Argument, and proceeding through Brian Eggleston’s and my own refinements to the argument. Finally, I generalize a composite of the previous formulations, applying them to all feasible creation mechanisms.

The generalized Simulation Argument is a principal component of the Creation Argument of the New God Argument. But it’s not identical with the Creation Argument. The Creation Argument also includes two pragmatically motivated assumptions: that we will not become extinct before evolving into superhumanity, and that superhumanity probably would create many worlds emulating its evolutionary history. The idea is that trusting actively in such possibilities actually increases their probabilities, as we work to realize them.

This article is technical – more so than most of my articles. If you’re interested in metaphorical rabbit holes, you’ll probably enjoy it. But if you just want the gist, I recommend reading only the opening paragraphs of each section. Then, if you’re reading the article online, you can jump to the end of the article and use the Simulation Argument calculator to help you decide whether we’re destined for DOOM or living in a world created by SUPERS.

Bostrom Simulation Argument

Nick Bostrom’s formulation of the Simulation Argument formalizes a mathematical relationship between the potential computational power of superhuman civilizations and the probability that our world is computed (avoiding inaccurate connotations of “simulated”). The argument demonstrates that if human-like civilizations typically evolve to a superhuman stage and compute emulations of their evolutionary ancestors, computed observers would vastly outnumber non-computed observers.
This results in a trilemma where civilizations either universally perish before reaching this capacity, systematically abstain from using it, or generate a population distribution in which we almost certainly live in a computed world. By applying principles of indifference to connect objective frequencies with subjective beliefs, the argument compels us to expect that our world is computed, unless we assume abstinence or doom awaits us.

1) P(SUPERS): fraction of all human worlds that become superhuman
2) E[WORLDS]: average total number of human worlds that are ever computed directly or indirectly by a single superhuman world
3) E[HUMANS]: average total number of humans that ever live in a single human world
4) E[HUMANS | COMPUTED]: average total number of humans that ever live in all human worlds that are ever computed directly or indirectly by a single superhuman world
5) E[HUMANS | COMPUTED] = P(SUPERS) * E[WORLDS] * E[HUMANS]
6) E[HUMANS | NONCOMPUTED]: average total number of humans that ever live in a single human world that was not computed directly or indirectly by any superhuman world
7) E[HUMANS | NONCOMPUTED] = E[HUMANS]
8) P(COMPUTED): fraction of all humans that live in worlds that are computed directly or indirectly by any superhuman world
9) P(COMPUTED) = E[HUMANS | COMPUTED] / (E[HUMANS | COMPUTED] + E[HUMANS | NONCOMPUTED])
10) P(COMPUTED) = (P(SUPERS) * E[WORLDS] * E[HUMANS]) / ((P(SUPERS) * E[WORLDS] * E[HUMANS]) + E[HUMANS])
11) P(CHOOSE): fraction of all superhuman worlds that choose to compute human worlds
12) E[WORLDS | CHOOSE]: average total number of human worlds that are ever computed directly or indirectly by a single superhuman world that chooses to compute human worlds
13) E[WORLDS] = P(CHOOSE) * E[WORLDS | CHOOSE]
14) P(COMPUTED) = (P(SUPERS) * P(CHOOSE) * E[WORLDS | CHOOSE] * E[HUMANS]) / ((P(SUPERS) * P(CHOOSE) * E[WORLDS | CHOOSE] * E[HUMANS]) + E[HUMANS])
15) P(COMPUTED) = ((P(SUPERS) * P(CHOOSE) * E[WORLDS | CHOOSE]) * E[HUMANS]) / (((P(SUPERS) * P(CHOOSE) * E[WORLDS | CHOOSE]) + 1) * E[HUMANS])
16) P(COMPUTED) = (P(SUPERS) * P(CHOOSE) * E[WORLDS | CHOOSE]) / ((P(SUPERS) * P(CHOOSE) * E[WORLDS | CHOOSE]) + 1)
17) (E[WORLDS | CHOOSE] ≫ 1) ⇒ P(SUPERS) ≈ 0 or P(CHOOSE) ≈ 0 or P(COMPUTED) ≈ 1
18) Cr(DOOM): subjective credence that our world will never become superhuman
19) x: specific fraction of all human worlds that become superhuman
20) Cr(DOOM | P(SUPERS) = x) = 1 - x: predictive indifference principle that you have insufficient information to distinguish the probable fate of our world from that of typical human worlds
21) (E[WORLDS | CHOOSE] ≫ 1) ⇒ Cr(DOOM) ≈ 1 or P(CHOOSE) ≈ 0 or P(COMPUTED) ≈ 1
22) Cr(ABSTAIN): subjective credence that our world will become a superhuman world that abstains from computing human worlds
23) y: specific fraction of all superhuman worlds that choose to compute human worlds
24) Cr(ABSTAIN | P(SUPERS) = x, P(CHOOSE) = y) = x * (1 - y): predictive indifference principle that you have insufficient information to distinguish the probable values of our world, if it becomes superhuman, from those of typical superhuman worlds
25) (E[WORLDS | CHOOSE] ≫ 1) ⇒ Cr(DOOM) ≈ 1 or Cr(ABSTAIN) ≈ 1 or P(COMPUTED) ≈ 1
26) Cr(COMPUTED): subjective credence that our world was computed directly or indirectly by a superhuman world
27) z: specific fraction of all humans that live in worlds that are computed directly or indirectly by any superhuman world
28) Cr(COMPUTED | P(COMPUTED) = z) = z: indexical bland indifference principle that you have insufficient information to distinguish your experience in our world from typical experience in computed worlds
29) (E[WORLDS | CHOOSE] ≫ 1) ⇒ Cr(DOOM) ≈ 1 or Cr(ABSTAIN) ≈ 1 or Cr(COMPUTED) ≈ 1

Eggleston Simulation Argument with Self-Exclusion

Brian Eggleston refines the Simulation Argument by applying the principle of self-exclusion, which asserts that a civilization cannot compute itself and restricts the set of our possible computers to other worlds in our past. This constraint clarifies that the probability of our world being computed depends not on our own future potential, but on whether previous civilizations successfully reached a superhuman stage before us. The logic changes the trilemma by adding a temporal filter: we exist in a computed world only if previous worlds avoided abstinence and doom long enough to compute emulations of their evolutionary history. Consequently, the force of the Simulation Argument depends on our rejecting the possibility that our world is the only or first to survive where all others failed.

0E) P(OTHERS): fraction of all worlds that become human before our own
1E) P(SUPERS | OTHERS): fraction of all human worlds that become superhuman before our own
2) E[WORLDS]: average total number of human worlds that are ever computed directly or indirectly by a single superhuman world
3) E[HUMANS]: average total number of humans that ever live in a single human world
4E) E[HUMANS | COMPUTED, OTHERS]: average total number of humans that ever live in all human worlds that are ever computed directly or indirectly by a single superhuman world before our own
5E) E[HUMANS | COMPUTED, OTHERS] = P(OTHERS) * P(SUPERS | OTHERS) * E[WORLDS] * E[HUMANS]
6) E[HUMANS | NONCOMPUTED]: average total number of humans that ever live in a single human world that was not computed directly or indirectly by any superhuman world
7) E[HUMANS | NONCOMPUTED] = E[HUMANS]
8E) P(COMPUTED | OTHERS): fraction of all humans that live in worlds that are computed directly or indirectly by any superhuman world before our own
9E) P(COMPUTED | OTHERS) = E[HUMANS | COMPUTED, OTHERS] / (E[HUMANS | COMPUTED, OTHERS] + E[HUMANS | NONCOMPUTED])
10E) P(COMPUTED | OTHERS) = (P(OTHERS) * P(SUPERS | OTHERS) * E[WORLDS] * E[HUMANS]) / ((P(OTHERS) * P(SUPERS | OTHERS) * E[WORLDS] * E[HUMANS]) + E[HUMANS])
11) P(CHOOSE): fraction of all superhuman worlds that choose to compute human worlds
12) E[WORLDS | CHOOSE]: average total number of human worlds that are ever computed directly or indirectly by a single superhuman world that chooses to compute human worlds
13) E[WORLDS] = P(CHOOSE) * E[WORLDS | CHOOSE]
14E) P(COMPUTED | OTHERS) = (P(OTHERS) * P(SUPERS | OTHERS) * P(CHOOSE) * E[WORLDS | CHOOSE] * E[HUMANS]) / ((P(OTHERS) * P(SUPERS | OTHERS) * P(CHOOSE) * E[WORLDS | CHOOSE] * E[HUMANS]) + E[HUMANS])
15E) P(COMPUTED | OTHERS) = ((P(OTHERS) * P(SUPERS | OTHERS) * P(CHOOSE) * E[WORLDS | CHOOSE]) * E[HUMANS]) / (((P(OTHERS) * P(SUPERS | OTHERS) * P(CHOOSE) * E[WORLDS | CHOOSE]) + 1) * E[HUMANS])
16E) P(COMPUTED | OTHERS) = (P(OTHERS) * P(SUPERS | OTHERS) * P(CHOOSE) * E[WORLDS | CHOOSE]) / ((P(OTHERS) * P(SUPERS | OTHERS) * P(CHOOSE) * E[WORLDS | CHOOSE]) + 1)
17E) (E[WORLDS | CHOOSE] ≫ 1) ⇒ P(OTHERS) ≈ 0 or P(SUPERS | OTHERS) ≈ 0 or P(CHOOSE) ≈ 0 or P(COMPUTED | OTHERS) ≈ 1
18) Cr(DOOM): subjective credence that our world will never become superhuman
19E) x: specific fraction of all human worlds that become superhuman before our own
20E) Cr(DOOM | P(SUPERS | OTHERS) = x) = 1 - x: predictive indifference principle that you have insufficient information to distinguish the probable fate of our world from that of typical human worlds before our own
21E) (E[WORLDS | CHOOSE] ≫ 1) ⇒ P(OTHERS) ≈ 0 or Cr(DOOM) ≈ 1 or P(CHOOSE) ≈ 0 or P(COMPUTED | OTHERS) ≈ 1
22) Cr(ABSTAIN): subjective credence that our world will become a superhuman world that abstains from computing human worlds
23) y: specific fraction of all superhuman worlds that choose to compute human worlds
24E) Cr(ABSTAIN | P(SUPERS | OTHERS) = x, P(CHOOSE) = y) = x * (1 - y): predictive indifference principle that you have insufficient information to distinguish the probable values of our world, if it becomes superhuman, from those of typical superhuman worlds before our own
25E) (E[WORLDS | CHOOSE] ≫ 1) ⇒ P(OTHERS) ≈ 0 or Cr(DOOM) ≈ 1 or Cr(ABSTAIN) ≈ 1 or P(COMPUTED | OTHERS) ≈ 1
26E) Cr(COMPUTED | OTHERS): subjective credence that our world was computed directly or indirectly by a superhuman world before our own
27E) z: specific fraction of all humans that live in worlds that are computed directly or indirectly by any superhuman world before our own
28E) Cr(COMPUTED | OTHERS, P(COMPUTED | OTHERS) = z) = z: indexical bland indifference principle that you have insufficient information to distinguish your experience in our world from typical experience in computed worlds before our own
29E) (E[WORLDS | CHOOSE] ≫ 1) ⇒ P(OTHERS) ≈ 0 or Cr(DOOM) ≈ 1 or Cr(ABSTAIN) ≈ 1 or Cr(COMPUTED | OTHERS) ≈ 1
30E) Cr(UNIQUE): subjective credence that our world is the first or only to become human
31E) w: specific fraction of all worlds that become human before our own
32E) Cr(UNIQUE | P(OTHERS) = w) = 1 - w: indexical indifference principle that you have insufficient information to distinguish the temporal location of our world from that of typical human worlds before our own
33E) (E[WORLDS | CHOOSE] ≫ 1) ⇒ Cr(UNIQUE) ≈ 1 or Cr(DOOM) ≈ 1 or Cr(ABSTAIN) ≈ 1 or Cr(COMPUTED | OTHERS) ≈ 1

Cannon Simulation Argument with Self-Exclusion and Technological Uniformity

My formulation strengthens the Simulation Argument by introducing the principle of technological uniformity, which posits that feasible technologies tend to emerge repeatedly across civilizations situated within similar physics. This constraint refines the reference class to focus on potential computers that resemble our own world, avoiding speculation about radically unique or magical physics. By applying the principle of mediocrity, the argument asserts that if computing worlds is feasible for us, it was likely feasible for predecessors operating within similar laws of physics. Consequently, the force of the original argument is restored, compelling us to expect that our world is computed, unless we assume abstinence or doom awaits us.
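To make the trilemma arithmetic concrete, the bottom-line equations of the two derivations above (steps 16 and 16E) can be sketched in a few lines of Python. This is a minimal illustrative sketch: the function names and sample input values are my own assumptions, not part of the original argument.

```python
def p_computed(p_supers, p_choose, e_worlds_choose):
    """Bostrom step 16: fraction of all humans that live in computed worlds."""
    k = p_supers * p_choose * e_worlds_choose
    return k / (k + 1.0)

def p_computed_others(p_others, p_supers_others, p_choose, e_worlds_choose):
    """Eggleston step 16E: self-exclusion restricts our possible computers
    to worlds before our own, adding the factors P(OTHERS) and
    P(SUPERS | OTHERS)."""
    k = p_others * p_supers_others * p_choose * e_worlds_choose
    return k / (k + 1.0)

# Step 17: if E[WORLDS | CHOOSE] >> 1, then P(COMPUTED) ~ 1
# unless P(SUPERS) ~ 0 or P(CHOOSE) ~ 0.
print(p_computed(0.5, 0.5, 1_000_000))  # very close to 1
print(p_computed(0.5, 0.5, 0))          # exactly 0
```

Note that with P(OTHERS) = 1, step 16E reduces term by term to step 16, which is why Eggleston's refinement weakens the argument only insofar as we doubt that prior human worlds existed.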
0CU) P(OTHERS | UNIFORM): fraction of all worlds that become human before our own, given uniform physics
1CU) P(SUPERS | OTHERS, UNIFORM): fraction of all human worlds that become superhuman before our own, given uniform physics
2) E[WORLDS]: average total number of human worlds that are ever computed directly or indirectly by a single superhuman world
3) E[HUMANS]: average total number of humans that ever live in a single human world
4CU) E[HUMANS | COMPUTED, OTHERS, UNIFORM]: average total number of humans that ever live in all human worlds that are ever computed directly or indirectly by a single superhuman world before our own, given uniform physics
5CU) E[HUMANS | COMPUTED, OTHERS, UNIFORM] = P(OTHERS | UNIFORM) * P(SUPERS | OTHERS, UNIFORM) * E[WORLDS] * E[HUMANS]
6) E[HUMANS | NONCOMPUTED]: average total number of humans that ever live in a single human world that was not computed directly or indirectly by any superhuman world
7) E[HUMANS | NONCOMPUTED] = E[HUMANS]
8CU) P(COMPUTED | OTHERS, UNIFORM): fraction of all humans that live in worlds that are computed directly or indirectly by any superhuman world before our own, given uniform physics
9CU) P(COMPUTED | OTHERS, UNIFORM) = E[HUMANS | COMPUTED, OTHERS, UNIFORM] / (E[HUMANS | COMPUTED, OTHERS, UNIFORM] + E[HUMANS | NONCOMPUTED])
10CU) P(COMPUTED | OTHERS, UNIFORM) = (P(OTHERS | UNIFORM) * P(SUPERS | OTHERS, UNIFORM) * E[WORLDS] * E[HUMANS]) / ((P(OTHERS | UNIFORM) * P(SUPERS | OTHERS, UNIFORM) * E[WORLDS] * E[HUMANS]) + E[HUMANS])
11) P(CHOOSE): fraction of all superhuman worlds that choose to compute human worlds
12) E[WORLDS | CHOOSE]: average total number of human worlds that are ever computed directly or indirectly by a single superhuman world that chooses to compute human worlds
13) E[WORLDS] = P(CHOOSE) * E[WORLDS | CHOOSE]
14CU) P(COMPUTED | OTHERS, UNIFORM) = (P(OTHERS | UNIFORM) * P(SUPERS | OTHERS, UNIFORM) * P(CHOOSE) * E[WORLDS | CHOOSE] * E[HUMANS]) / ((P(OTHERS | UNIFORM) * P(SUPERS | OTHERS, UNIFORM) * P(CHOOSE) * E[WORLDS | CHOOSE] * E[HUMANS]) + E[HUMANS])
15CU) P(COMPUTED | OTHERS, UNIFORM) = ((P(OTHERS | UNIFORM) * P(SUPERS | OTHERS, UNIFORM) * P(CHOOSE) * E[WORLDS | CHOOSE]) * E[HUMANS]) / (((P(OTHERS | UNIFORM) * P(SUPERS | OTHERS, UNIFORM) * P(CHOOSE) * E[WORLDS | CHOOSE]) + 1) * E[HUMANS])
16CU) P(COMPUTED | OTHERS, UNIFORM) = (P(OTHERS | UNIFORM) * P(SUPERS | OTHERS, UNIFORM) * P(CHOOSE) * E[WORLDS | CHOOSE]) / ((P(OTHERS | UNIFORM) * P(SUPERS | OTHERS, UNIFORM) * P(CHOOSE) * E[WORLDS | CHOOSE]) + 1)
17CU) (P(OTHERS | UNIFORM) ≈ 1 and E[WORLDS | CHOOSE] ≫ 1) ⇒ P(SUPERS | OTHERS, UNIFORM) ≈ 0 or P(CHOOSE) ≈ 0 or P(COMPUTED | OTHERS, UNIFORM) ≈ 1
18) Cr(DOOM): subjective credence that our world will never become superhuman
19CU) x: specific fraction of all human worlds that become superhuman before our own, given uniform physics
20CU) Cr(DOOM | P(SUPERS | OTHERS, UNIFORM) = x) = 1 - x: predictive indifference principle that you have insufficient information to distinguish the probable fate of our world from that of typical human worlds before our own, given uniform physics
21CU) (P(OTHERS | UNIFORM) ≈ 1 and E[WORLDS | CHOOSE] ≫ 1) ⇒ Cr(DOOM) ≈ 1 or P(CHOOSE) ≈ 0 or P(COMPUTED | OTHERS, UNIFORM) ≈ 1
22) Cr(ABSTAIN): subjective credence that our world will become a superhuman world that abstains from computing human worlds
23) y: specific fraction of all superhuman worlds that choose to compute human worlds
24CU) Cr(ABSTAIN | P(SUPERS | OTHERS, UNIFORM) = x, P(CHOOSE) = y) = x * (1 - y): predictive indifference principle that you have insufficient information to distinguish the probable values of our world, if it becomes superhuman, from those of typical superhuman worlds before our own, given uniform physics
25CU) (P(OTHERS | UNIFORM) ≈ 1 and E[WORLDS | CHOOSE] ≫ 1) ⇒ Cr(DOOM) ≈ 1 or Cr(ABSTAIN) ≈ 1 or P(COMPUTED | OTHERS, UNIFORM) ≈ 1
26CU) Cr(COMPUTED | OTHERS, UNIFORM): subjective credence that our world was computed directly or indirectly by a superhuman world before our own, given uniform physics
27CU) z: specific fraction of all humans that live in worlds that are computed directly or indirectly by any superhuman world before our own, given uniform physics
28CU) Cr(COMPUTED | OTHERS, UNIFORM, P(COMPUTED | OTHERS, UNIFORM) = z) = z: indexical bland indifference principle that you have insufficient information to distinguish your experience in our world from typical experience in computed worlds before our own, given uniform physics
29CU) (P(OTHERS | UNIFORM) ≈ 1 and E[WORLDS | CHOOSE] ≫ 1) ⇒ Cr(DOOM) ≈ 1 or Cr(ABSTAIN) ≈ 1 or Cr(COMPUTED | OTHERS, UNIFORM) ≈ 1

Cannon Generalized Simulation Argument with Self-Exclusion and Technological Uniformity

My generalized Simulation Argument expands the scope of the original, recognizing that computation is just one of many potential mechanisms (along with terraforming or cosmoforming) for creating worlds. This broader vision maintains the expectation that world creation is a scalable process, allowing a single parent civilization to create many child worlds through various technological means. By integrating the previous constraints of time and uniformity, the argument asserts that if creation is typically feasible and scalable, then created worlds will vastly outnumber uncreated worlds. Therefore, unless we assume that all forms of creation are impossible or ethically prohibited, we should conclude with high credence that our own world is created.
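Before generalizing, it may help to see how the uniform model's bottom line (step 16CU above) behaves numerically as E[WORLDS | CHOOSE] grows. The following is an illustrative Python sketch with assumed input values; with P(OTHERS | UNIFORM) fixed near 1, only a near-zero P(SUPERS | OTHERS, UNIFORM) or P(CHOOSE) keeps P(COMPUTED | OTHERS, UNIFORM) away from 1.

```python
def p_computed_uniform(p_others_uniform, p_supers_ou, p_choose, e_worlds_choose):
    """Step 16CU: like 16E, but with every term conditioned on uniform physics."""
    k = p_others_uniform * p_supers_ou * p_choose * e_worlds_choose
    return k / (k + 1.0)

# Step 17CU in action: P(OTHERS | UNIFORM) = 1, moderate survival and
# choice fractions, and an increasing number of computed worlds.
for e_worlds in (1, 100, 10_000, 1_000_000):
    print(e_worlds, p_computed_uniform(1.0, 0.5, 0.5, e_worlds))
```

The printed probabilities climb rapidly toward 1 as the number of computed worlds grows, which is the restored force of the argument that the section describes.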
0CU) P(OTHERS | UNIFORM): fraction of all worlds that become human before our own, given uniform physics
1CU) P(SUPERS | OTHERS, UNIFORM): fraction of all human worlds that become superhuman before our own, given uniform physics
2CG) E[WORLDS]: average total number of human worlds that are ever created directly or indirectly by a single superhuman world
3) E[HUMANS]: average total number of humans that ever live in a single human world
4CG) E[HUMANS | CREATED, OTHERS, UNIFORM]: average total number of humans that ever live in all human worlds that are ever created directly or indirectly by a single superhuman world before our own, given uniform physics
5CG) E[HUMANS | CREATED, OTHERS, UNIFORM] = P(OTHERS | UNIFORM) * P(SUPERS | OTHERS, UNIFORM) * E[WORLDS] * E[HUMANS]
6CG) E[HUMANS | NONCREATED]: average total number of humans that ever live in a single human world that was not created directly or indirectly by any superhuman world
7CG) E[HUMANS | NONCREATED] = E[HUMANS]
8CG) P(CREATED | OTHERS, UNIFORM): fraction of all humans that live in worlds that are created directly or indirectly by any superhuman world before our own, given uniform physics
9CG) P(CREATED | OTHERS, UNIFORM) = E[HUMANS | CREATED, OTHERS, UNIFORM] / (E[HUMANS | CREATED, OTHERS, UNIFORM] + E[HUMANS | NONCREATED])
10CG) P(CREATED | OTHERS, UNIFORM) = (P(OTHERS | UNIFORM) * P(SUPERS | OTHERS, UNIFORM) * E[WORLDS] * E[HUMANS]) / ((P(OTHERS | UNIFORM) * P(SUPERS | OTHERS, UNIFORM) * E[WORLDS] * E[HUMANS]) + E[HUMANS])
11CG) P(CHOOSE): fraction of all superhuman worlds that choose to create human worlds
12CG) E[WORLDS | CHOOSE]: average total number of human worlds that are ever created directly or indirectly by a single superhuman world that chooses to create human worlds
12.1CG) E[WORLDS | CHOOSE, COMPUTED]: average total number of human worlds that are ever created using computation directly or indirectly by a single superhuman world
12.2CG) E[WORLDS | CHOOSE, NONCOMPUTED]: average total number of human worlds that are ever created using mechanisms other than computation (e.g., terraforming or cosmoforming) directly or indirectly by a single superhuman world
12.3CG) E[WORLDS | CHOOSE] = E[WORLDS | CHOOSE, COMPUTED] + E[WORLDS | CHOOSE, NONCOMPUTED]
13) E[WORLDS] = P(CHOOSE) * E[WORLDS | CHOOSE]
14CG) P(CREATED | OTHERS, UNIFORM) = (P(OTHERS | UNIFORM) * P(SUPERS | OTHERS, UNIFORM) * P(CHOOSE) * E[WORLDS | CHOOSE] * E[HUMANS]) / ((P(OTHERS | UNIFORM) * P(SUPERS | OTHERS, UNIFORM) * P(CHOOSE) * E[WORLDS | CHOOSE] * E[HUMANS]) + E[HUMANS])
15CG) P(CREATED | OTHERS, UNIFORM) = ((P(OTHERS | UNIFORM) * P(SUPERS | OTHERS, UNIFORM) * P(CHOOSE) * E[WORLDS | CHOOSE]) * E[HUMANS]) / (((P(OTHERS | UNIFORM) * P(SUPERS | OTHERS, UNIFORM) * P(CHOOSE) * E[WORLDS | CHOOSE]) + 1) * E[HUMANS])
16CG) P(CREATED | OTHERS, UNIFORM) = (P(OTHERS | UNIFORM) * P(SUPERS | OTHERS, UNIFORM) * P(CHOOSE) * E[WORLDS | CHOOSE]) / ((P(OTHERS | UNIFORM) * P(SUPERS | OTHERS, UNIFORM) * P(CHOOSE) * E[WORLDS | CHOOSE]) + 1)
17CG) (P(OTHERS | UNIFORM) ≈ 1 and E[WORLDS | CHOOSE] ≫ 1) ⇒ P(SUPERS | OTHERS, UNIFORM) ≈ 0 or P(CHOOSE) ≈ 0 or P(CREATED | OTHERS, UNIFORM) ≈ 1
18) Cr(DOOM): subjective credence that our world will never become superhuman
19CU) x: specific fraction of all human worlds that become superhuman before our own, given uniform physics
20CU) Cr(DOOM | P(SUPERS | OTHERS, UNIFORM) = x) = 1 - x: predictive indifference principle that you have insufficient information to distinguish the probable fate of our world from that of typical human worlds before our own, given uniform physics
21CG) (P(OTHERS | UNIFORM) ≈ 1 and E[WORLDS | CHOOSE] ≫ 1) ⇒ Cr(DOOM) ≈ 1 or P(CHOOSE) ≈ 0 or P(CREATED | OTHERS, UNIFORM) ≈ 1
22CG) Cr(ABSTAIN): subjective credence that our world will become a superhuman world that abstains from creating human worlds
23CG) y: specific fraction of all superhuman worlds that choose to create human worlds
24CU) Cr(ABSTAIN | P(SUPERS | OTHERS, UNIFORM) = x, P(CHOOSE) = y) = x * (1 - y): predictive indifference principle that you have insufficient information to distinguish the probable values of our world, if it becomes superhuman, from those of typical superhuman worlds before our own, given uniform physics
25CG) (P(OTHERS | UNIFORM) ≈ 1 and E[WORLDS | CHOOSE] ≫ 1) ⇒ Cr(DOOM) ≈ 1 or Cr(ABSTAIN) ≈ 1 or P(CREATED | OTHERS, UNIFORM) ≈ 1
26CG) Cr(CREATED | OTHERS, UNIFORM): subjective credence that our world was created directly or indirectly by a superhuman world before our own, given uniform physics
27CG) z: specific fraction of all humans that live in worlds that are created directly or indirectly by any superhuman world before our own, given uniform physics
28CG) Cr(CREATED | OTHERS, UNIFORM, P(CREATED | OTHERS, UNIFORM) = z) = z: indexical bland indifference principle that you have insufficient information to distinguish your experience in our world from typical experience in created worlds before our own, given uniform physics
29CG) (P(OTHERS | UNIFORM) ≈ 1 and E[WORLDS | CHOOSE] ≫ 1) ⇒ Cr(DOOM) ≈ 1 or Cr(ABSTAIN) ≈ 1 or Cr(CREATED | OTHERS, UNIFORM) ≈ 1

Simulation Argument Calculator

Calculate the probability that you are living in a computed world. Adjust input values to see how different assumptions change the odds that superhumanity created our world. If you run into any issues, please let me know.
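The core computation behind the calculator can be sketched as follows, for the generalized model with P(OTHERS | UNIFORM) = 1. This is my own illustrative sketch, not the calculator's actual code: the function name, dictionary keys, and sample inputs are assumptions, and the webpage calculator may differ in detail.

```python
def simulation_credences(x, y, e_worlds_computed, e_worlds_noncomputed):
    """Generalized model, assuming P(OTHERS | UNIFORM) = 1.
    x = P(SUPERS | OTHERS, UNIFORM); y = P(CHOOSE)."""
    # 12.3CG: total created worlds sum over creation mechanisms
    e_worlds_choose = e_worlds_computed + e_worlds_noncomputed
    k = x * y * e_worlds_choose
    cr_created = k / (k + 1.0)   # 16CG, adopted as credence via 28CG
    cr_doom = 1.0 - x            # 20CU
    cr_abstain = x * (1.0 - y)   # 24CU
    return {
        "Cr(DOOM)": cr_doom,
        "Cr(ABSTAIN)": cr_abstain,
        "Cr(CREATED)": cr_created,
        "sum": cr_doom + cr_abstain + cr_created,
    }

# Moderate assumptions with many computed worlds and no other creations.
r = simulation_credences(x=0.5, y=0.5, e_worlds_computed=1_000_000,
                         e_worlds_noncomputed=0)
print(r["sum"])  # about 1.75
```

The sum exceeding 1 mirrors the calculator's coherence check: indifference-based credences for DOOM, ABSTAIN, and CREATED cannot all be held at such moderate values simultaneously, which is the incoherence the "Conservative" preset scenario below is meant to reveal.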
Configuration

Start here by choosing the model of the Simulation Argument that you want to use. The Bostrom model is the basic formulation of the argument. The Eggleston model applies the principle of self-exclusion (the idea that we cannot be a civilization that came before us). The Cannon models apply the principle of technological uniformity (the idea that physics is the same everywhere), and generalize the argument to include all means for creating worlds (computation, terraforming, cosmoforming, or others).

Model: Bostrom (Basic), Eggleston (Self-Exclusion), Cannon (Technological Uniformity), or Cannon (Generalized)

Help Text: Show or Hide. Turn explanatory text on or off. Keep it on "Show" if you want to learn what each input does.

Custom Scenario

P(OTHERS): What are the odds that other human-like civilizations existed before us? A value of 0 means we are definitely the only or first human-like civilization. A value of 1 means many came before us.

P(OTHERS | UNIFORM): Applying the principle of technological uniformity, we assume we are typical rather than special. Therefore, it is highly likely that other human-like civilizations existed before us. This value is fixed at 1.

P(SUPERS): Enter the fraction of human-like civilizations that manage to survive long enough to reach a superhuman level of technology. A value of 0 means none become superhuman, while 1 means all become superhuman.

P(SUPERS | OTHERS): Of the human-like civilizations that started before us, enter the fraction that survived to become superhuman. A value of 0 means none became superhuman, while 1 means all became superhuman.

P(SUPERS | OTHERS, UNIFORM): Assuming physics is the same everywhere, enter the fraction of past human-like civilizations that survived to become superhuman. A value of 0 means none became superhuman, while 1 means all became superhuman.

P(CHOOSE): If a civilization becomes superhuman, what fraction decides to compute new worlds with people like us? A value of 0 means no one computes new worlds, perhaps due to ethics or lack of interest. A value of 1 means everyone chooses to compute new worlds.

P(CHOOSE) (generalized models): If a civilization becomes superhuman, what fraction decides to create new worlds with people like us via computation, terraforming, cosmoforming, or other means? A value of 0 means no one creates new worlds, perhaps due to ethics or lack of interest. A value of 1 means everyone chooses to create new worlds.

E[WORLDS | CHOOSE]: If a superhuman civilization decides to compute new worlds, how many worlds do they compute on average? This number is often assumed to be very large, potentially in the millions or even much higher.

E[WORLDS | CHOOSE] (generalized models): If a superhuman civilization decides to create new worlds, how many worlds do they create on average? This includes all means for creating new worlds. Leave this blank if you prefer to use the specific numbers below.

E[WORLDS | CHOOSE, COMPUTED]: Specifically, how many new worlds do superhuman civilizations compute on average? This number is often assumed to be very large, potentially in the millions or even much higher. Leave this blank if you prefer to use the total number above.

E[WORLDS | CHOOSE, NONCOMPUTED]: Specifically, how many new worlds do superhuman civilizations create on average using other means, such as terraforming planets or cosmoforming baby universes? Leave this blank if you prefer to use the total number above.

Click "Calculate" to see the results. The calculator will display the probability for each possible fate of our world, based on your input values, along with a check on whether the resulting credences sum to approximately 1 (coherence) or not (incoherence). You can also reset the calculator to start over.

Preset Scenarios

Don't want to guess the numbers? Try one of these common scenarios: Conservative, Optimistic, or Skeptical. The "Conservative" scenario uses the same input values as the default custom scenario, and reveals an incoherence in common assumptions.

[The interactive calculator is available on the webpage.]