The Apoptosis of Reason
Against Ontology
Here I defend two connected theses. The first is Logical Nihilism, the view that there is no single, global, topic-neutral relation of logical consequence that governs all correct reasoning. Once we fix a language, a class of models, and a practical goal, we can define local consequence relations that work well for that setting, but there is no framework-independent fact of the matter about what follows from what. The second thesis is a kind of Instrumental Finitism. I treat mathematics as a finite, resource-bounded technology of symbol manipulation rather than as a window onto a realm of abstract objects or actual infinities. Mathematics is indispensable as a tool, yet dispensable as ontology.
I arrive at these positions by extending a line of eliminative thinking that runs from Sellars and Quine through the Churchlands and contemporary predictive processing accounts of cognition, and that receives a particularly stark expression in Ray Brassier’s nihilistic reading of the scientific image. I suggest that the same apoptotic pressure that dissolves the folk psychological subject also undermines the idea of a universal logic that binds all rational agents. At the same time, I draw on work by Russell, Cotnoir, Wyatt, Payette, Field, Maddy, and others to argue that both logic and mathematics are better understood as engineered instruments inside finite practices than as metaphysical structures that thought must mirror. I then sketch a picture of scientific rationality that relies on robustness, model building, and social objectivity rather than on transcendental logical or mathematical guarantees. I close by proposing a stance of onto-suspension, a kind of modern Pyrrhonism: rather than replacing the old metaphysics with a new skeptical doctrine, it refuses the demand that successful practices be underwritten by any ontology at all, while acknowledging that they are bounded by the finite and possibly terminal character of our cognitive lives.
From the Apoptosis of Belief to Logic
I want to begin from a thought that has become increasingly difficult to evade in contemporary philosophy of mind and science. The conceptual framework through which I ordinarily understand myself is not sacrosanct. The categories of belief, desire, intention, and experience do not arrive with the authority of timeless structures of reason. They are theoretical posits that have grown up within a particular historical form of life, and they are vulnerable to replacement by more powerful explanatory schemes.
Sellars taught me to distrust the supposed immediacy of the given. On his account, even the simplest perceptual awareness is already articulated within a conceptual framework. The manifest image, as he calls it, is not a mere report of what is there. It is a sophisticated pattern of classificatory and normative practices. When I say that someone believes that it is raining, I am not pointing to a primitive mental glow. I am applying a theory-laden concept that helps explain and evaluate their behavior.
Once I accept this, eliminative materialism becomes a live option rather than a caricature. If folk psychological talk of beliefs and desires is theoretical, then in principle it can be replaced. The Churchlands take that possibility seriously. They suggest that a mature neuroscience may have no real use for belief and desire at all. Instead, it will traffic in neural activation patterns, vector coding, or whatever structure turns out to do the actual explanatory work. In that case, my current self-image as a belief-bearing subject would resemble the phlogiston theorist’s self-image as an observer of fiery exhalations. It would have done valuable work for a time, and then it would quietly die off, the conceptual analogue of apoptosis.
Recent work in cognitive science only sharpens this trajectory. Predictive processing and free energy accounts depict the brain as a hierarchically organized prediction error minimizer. In this picture, the nervous system does not passively receive data and then reason about it in a classical logical sense. It generates probabilistic predictions, compares them with sensory input, and adjusts its generative models in order to keep expected surprise within viable bounds. What I call a belief, on this view, is at best a rough way of talking about a relatively stable high-level component of such a generative model, and at worst a social tool for coordinating behavior, what some authors call mindshaping rather than mindreading. The brain computes, but it does not obviously implement the inferential structure of textbook logic.
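To fix ideas, here is a deliberately cartoonish sketch of that loop. It is a toy with a single scalar signal and made-up parameters, not a model of any actual nervous system, but it shows how an estimate can come to track the world through nothing more than repeated error correction.

```python
import numpy as np

# Toy prediction-error minimization: a single estimate mu generates a
# prediction of the sensory signal, and each round of error feedback
# nudges the estimate. No rule of inference is applied anywhere.

rng = np.random.default_rng(0)

def track(true_value=2.0, noise=0.3, learning_rate=0.1, steps=200):
    mu = 0.0                                     # initial top-down prediction
    for _ in range(steps):
        sensory = true_value + noise * rng.standard_normal()
        error = sensory - mu                     # prediction error
        mu += learning_rate * error              # adjust the generative model
    return mu

print(track())  # ends near 2.0: stable tracking without textbook logic
```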
Ray Brassier takes this line of thought very seriously. He reads the scientific image as progressively dismantling the manifest image. In his work, the disenchantment that results does not end with the loss of comforting metaphysical pictures. It eventually reveals the indifference of cosmological and thermodynamic processes to the existence of any rational subject at all. Extinction becomes the strict horizon of thought. Brassier calls this a kind of transcendental nihilism. Thought discovers, by its own lights, that it is neither necessary nor central to what there is.
I share much of this nihilistic diagnosis, but I want to push it in a different direction. I want to apply the same apoptotic pressure not only to folk psychology, but to logic itself. My target is not just the image of the human subject as a belief holder, but the image of logic as a universal law that binds that subject. Call this the logical image. It is the idea that there exists a set of topic-neutral, exceptionless rules, often codified under names such as modus ponens or the law of non-contradiction, that any rational agent must implicitly acknowledge whenever they think.
In the background of much philosophy of logic lies what Wyatt and Payette call logical generalism. According to the generalist, logical validity is a property of certain forms of argument that holds regardless of subject matter. If an inference is logically valid, then its premises cannot be true and its conclusion false in any possible circumstance. Generalists then disagree over which logic captures that notion, and whether there is exactly one correct logic or several, but they tend to agree that there is something in the world or in the space of possibilities that answers to the idea of a single core consequence relation.
My first claim is that this entire picture is unsustainable. When we look closely at how logical systems actually function in mathematics, computer science, law, natural language semantics and so on, we do not find convergence on a unique logical skeleton. We find a proliferation of incompatible logics, each designed to handle particular kinds of problems. More importantly, we find that the very standards that were supposed to pick out a universal logic, such as topic neutrality and exceptionlessness, drive us toward this fragmentation rather than away from it. If we insist that an alleged law of logic must hold across all domains, then we rapidly discover that every familiar candidate runs into serious counterpressure somewhere. We can salvage it by restricting its range, or we can devise a new logic that behaves better in that domain, but in doing so we give up the idea that we are describing one single universal relation. This is the core of Logical Nihilism as I understand it.
Logical Nihilism is not the claim that there is no difference between good and bad reasoning. It is the claim that there is no single, global, topic neutral consequence relation that all good reasoning instantiates. Once we fix a language, a model class, and a set of practical aims, we can define a consequence relation that works well within that framework. We can then prove formally that certain inferences preserve truth in the intended models, or that certain sets of constraints are jointly unsatisfiable. All of that is legitimate and important. What I deny is that there is any fact of the matter, independent of such framework choices, about which arguments are logically valid in general. Logical consequence, with a capital L and a capital C, does not name a natural kind.
The second claim I want to defend is Instrumental Finitism about mathematics. Here my target is the idea that mathematics reveals to us an independently existing realm of abstract objects, such as sets, numbers, and functions, including actual infinities. In my view, we can understand mathematical practice as the development and deployment of a finite, resource bounded symbolic technology. We adopt concepts such as the real numbers or uncountable sets because they make certain forms of reasoning tractable, and because treating them as if they were genuine objects simplifies our models and proofs. We do not need to believe that these entities exist in any robust metaphysical sense for the technology to do its work.
Instrumental Finitism does not require me to adopt an extreme ultrafinitist position that denies the meaningfulness of all large numbers, although I will draw on finitist arguments as a useful source of pressure. It requires only that I take seriously the fact that all mathematical activity is implemented by finite agents with limited computational resources in a physically finite environment. From that observation, together with work by Hartry Field on the conservativeness of mathematics over nominalistic physics, and practice oriented accounts of mathematics by authors such as Penelope Maddy, I conclude that mathematical indispensability in science supports only an instrumental reading of mathematical discourse, not a commitment to a realm of abstracta.
What results from combining Logical Nihilism with Instrumental Finitism is a strongly deflationary picture of rationality. I do not possess a universal logic written into the fabric of thought. I do not have access to a mathematical realm that underwrites my most successful physical theories. Instead, I have a growing, revisable toolkit of inferential and representational devices that have proven useful within the limits of my finite projects. Logic and mathematics, on this view, are like measuring instruments or experimental protocols. They are crucial, but they are also historically contingent and subject to replacement. They can undergo apoptosis when they outlive their usefulness.
A natural worry at this point is that I have undermined scientific rationality rather than clarified it. If there is no universal logic and no mathematical ontology, how can I claim that science has any special authority? Part of my task in what follows is to show that this worry is misplaced. I will argue that scientific practice gains its robustness not from correspondence to an independent logical or mathematical structure of the world, but from features such as model pluralism, error correction, social criticism, and robustness analysis. These do not presuppose a universal logic or mathematical ontology.
The final ingredient in my picture is a stance I call onto-suspension. I do not want to replace the old idea that reality has a logical or mathematical form with the equally heavy idea that reality is void, chaotic, or intrinsically illogical. Instead I want to stop demanding that my best practices be grounded in any ontology at all. Extinction and cosmological eschatology play a role here, but not as revelations of the essence of being. They function as reminders that my cognitive tools are finite and may eventually cease to be used. That finitude is enough to motivate humility about the reach of my inferential frameworks, without forcing me to erect a new metaphysical barrier around what reality is really like.
I will develop each of these themes in detail in what follows. In the next section, I state and defend Logical Nihilism, and distinguish it carefully from both logical monism and logical pluralism. I then illustrate the toolkit character of logic by discussing substructural, non-monotonic, paraconsistent, and probabilistic logics as engineered responses to different constraints. After that, I turn to Instrumental Finitism about mathematics, drawing on finitist concerns, Field’s nominalism, and practice-based philosophy of mathematics. I then show how scientific rationality can be reconstructed in terms of robustness and social objectivity, without appealing to a universal logic or a mathematical realm. Finally, I return to the theme of extinction and articulate onto-suspension as an appropriate attitude for a finite, scientifically informed rational agent who has accepted the apoptosis of both belief and logic.
Logical Nihilism against Logical Generalism
I can now say more clearly what I mean by Logical Nihilism, and how it differs from the more familiar options of logical monism and logical pluralism.
In the background of almost all mainstream philosophy of logic lies what Wyatt and Payette call logical generalism. On this view, logic is a system of universally valid laws. These laws are supposed to apply to any subject matter, at any time, for any rational agent. They are topic neutral, exceptionless, and captured by a stable set of argument forms such as Identity, Conjunction Introduction, Conjunction Elimination, Disjunction Introduction, and so on. The project of the generalist is to identify which schemes genuinely have no counterexamples, and then to treat those as the backbone of correct reasoning.
The basic nihilist challenge begins as an internal critique of this picture. It can be put roughly as follows:
- If logic is a system of laws, each law must hold without exception in every domain of application.
- For every candidate law proposed so far, there are serious counterexamples once we widen our view across natural language, mathematics, computer science, legal reasoning, and paradoxical constructions.
- Therefore, there are no laws of logic in the generalist sense.
If one keeps the first premise (logic is essentially a system of universal laws) and accepts the second (there are no such universal laws), the conclusion seems to be that there is simply no logic at all. It is this reductio that motivates many attempts to repair generalism by appeal to hidden structure, idealized languages, or increasingly fine grained semantic machinery.
Logical Nihilism, as I understand it, takes a different lesson. I do not accept the move from “there are no universal laws of logic” to “there is no logic.” I deny the generalist starting point. The target of my nihilism is not the existence of any inferential norms whatsoever, but the idea that those norms are captured by a single, global, topic neutral consequence relation that deserves the name Logical Consequence with capital letters.
So I draw a distinction between two theses:
- Global Logical Nihilism: there is no single consequence relation that governs all correct reasoning across all domains.
- Local Logical Practice: once we fix a language, a class of models or proof systems, and a set of aims, we can perfectly well define a consequence relation that is useful and can be studied with all the tools of formal logic.
I endorse the first and rely on the second. Logical Nihilism in my sense is a rejection of logical ontology, not of logical practice. It says that the search for a core, framework independent relation of consequence is empty. Logic in the generalist sense is a myth that falls together with the myth of the belief-bearing subject.
From here it is helpful to contrast three positions:
- Logical monism. There is exactly one correct consequence relation. Classical first order logic is often put forward as the best candidate. On this view, non classical logics are either mistaken or merely deviant calculi that describe special kinds of inference, not genuine rivals.
- Logical pluralism. There are several equally correct consequence relations. Beall and Restall, for example, argue that classical, intuitionistic, and relevant logics are all legitimate instances of a more general Tarskian template, each capturing a different class of “cases” or interpretations.
- Logical nihilism. There is no single property in virtue of which classical, intuitionistic, relevant, paraconsistent, linear, probabilistic, and so on, all count as “logics” in a world describing sense. There is no natural kind here. There are only tools.
Logical monists take the proliferation of logics to be a temporary confusion that a sufficiently powerful theory will eventually resolve. Logical pluralists embrace the plurality, but try to fit it under a unifying notion of consequence. My nihilism is more radical in one respect and more modest in another. It is more radical in that it treats the failure of convergence as terminal, not provisional. It is more modest in that it gives up the ambition to say, from nowhere, what is valid “in general.” I am content to say what follows from what relative to a framework, and I deny that there is any coherent further question.
This is where logical particularism becomes important. Particularism about arguments rejects the idea that there must be a stock of general rules that underwrite every good inference. On a particularist picture, what makes an argument good in a given context is not that it instantiates some fixed scheme, but that the premises support the conclusion in that situation, given the purposes and background commitments in play. Argument evaluation is holistic and practice bound.
Logical Nihilism can be seen as a hardening of this particularist tendency. I do not deny that we can abstract patterns from successful arguments, or that such abstractions are often useful. I deny that these patterns amount to laws that hold across all contexts. Modus ponens, for example, works extraordinarily well in many mathematical and everyday settings. But we now have detailed examples in which insisting on its unrestricted validity produces trouble, for instance in contexts involving conditionals that carry probabilistic or information theoretic content, or in cases of semantic pathology. A nihilist takes these failures not as puzzles to be explained away so that the law can be saved, but as evidence that we should stop thinking of it as a law at all.
The same point applies more generally. When we look at how different logics are used, a few features stand out:
- Substructural logics such as linear and relevant logics modify structural rules like Contraction and Weakening in order to track resources, relevance, or information flow. They do not treat those structural principles as sacrosanct, but as adjustable parameters.
- Non-monotonic logics formalize reasoning under uncertainty and default assumptions. They explicitly permit the withdrawal of conclusions in light of new information, something classical consequence forbids.
- Paraconsistent logics allow controlled reasoning in the presence of contradictions, blocking explosion. They treat non-contradiction not as a universal constraint on intelligibility, but as a norm that can be suspended in certain theoretical contexts.
- Probabilistic and Bayesian frameworks encode degrees of support and update as real-valued quantities, not as a binary relation of validity at all.
These are not minor deviations from an underlying skeleton. Once we take them seriously on their own terms, the hope that there is a single, invariant core that they all approximate becomes increasingly strained. Their design is driven by heterogeneous aims: the management of inconsistency, resource sensitivity, defeasible reasoning, learning under uncertainty, and so on. Their virtue is not that they approximate an ideal law of thought, but that they work where they are applied.
Logical Nihilism is the decision to treat this as more than an unfortunate side effect of an incomplete project. I take it as data about what logic is. Logic is not the discovery of a universal consequence relation that floats free of our practices. Logic is a family name for a collection of engineered formalisms that help finite agents manage inference under different constraints.
A natural worry is that this position collapses the distinction between good and bad reasoning. If there is no universal logic, is anything permitted? The answer is no. What disappears is not normativity, but the idea of a single, ahistorical source of normative force. In my view, distinctions between better and worse inferences are made within practices that already have stakes, standards, and failure modes. A proof that breaks the rules of a chosen calculus is bad as a proof in that calculus. A statistical inference that leads to systematically poor predictions is bad as a tool for navigating the world. A legal argument that violates established precedent and procedure is bad in that institutional context. None of this requires a further guarantee by a universal logic.
So Logical Nihilism has two main components:
- A negative thesis: there is no single, global, topic neutral consequence relation that deserves to be called Logic in the strong, generalist sense.
- A positive redescription: what we have instead is a toolkit of local inferential instruments, each answerable to its own standards of success and failure.
From here, the next task is to flesh out the positive side in more detail. In the following section I turn to logical engineering in a stricter sense. I will look at how particular logics arose in response to concrete pressures, how they function as technologies rather than mirrors, and how this supports a more general picture of rationality as finite navigation rather than obedience to eternal laws.
Logical Engineering: Logics as Tools
If Logical Nihilism is to be more than a gesture of deflation, it has to be anchored in how logicians actually work. I need to show that once I stop treating logic as a body of laws that describe a universal consequence relation, it makes sense to treat logical systems as engineered instruments. In this section I spell out what I mean by logical engineering and how it supports the nihilist thesis.
I will use “logical engineering” for any activity that treats logical formalisms as objects that can be designed, modified, and evaluated in light of specific aims and constraints. On this view, a logic is not discovered whole, like a hidden structure of thought. It is built. It can be redesigned when requirements change. Its virtues and vices are assessed in terms that look a lot like the evaluation of tools in other domains.
I find it helpful to distinguish three overlapping roles that logics play.
- Descriptive role. A logic can be used to model some pattern of actual reasoning: mathematical proof, conversational implicature, default inference, scientific explanation.
- Normative role. A logic can be proposed as an improvement on actual reasoning: it can regiment messy inference patterns, make hidden assumptions explicit, and flag certain moves as errors.
- Technological role. A logic can be used as a component in artifacts: proof assistants, type systems, verification tools, databases, planning algorithms.
In practice, the same formal system often serves all three roles, but the engineering perspective highlights that these are different tasks with different success conditions. No single notion of “correctness in general” governs them all.
1. Design constraints and trade-offs
When I look at the history of nonclassical logics with this perspective, a pattern emerges. New systems arise not because someone discovers a previously hidden law of thought, but because existing formalisms fail in some practical respect.
There are several kinds of pressure that repeatedly drive the design of new logics.
- Expressive pressure. Classical first order logic cannot easily handle talk of obligation, belief, knowledge, possibility, time, probability, vagueness, or counterfactual dependence. Modal, temporal, epistemic, deontic, probabilistic, and many other logics are introduced to handle these kinds of discourse.
- Structural pressure. The structural rules of classical sequent calculi, such as Contraction and Weakening, break down when we care about resource usage, order, or relevance. Linear and relevant logics, along with Lambek style calculi, respond by altering or dropping those rules.
- Pathology pressure. Semantic paradoxes, inconsistent yet scientifically useful theories, and incomplete databases create trouble for systems that validate Explosion and treat all contradictions as catastrophic. Paraconsistent logics are designed to tolerate some contradictions without triviality.
- Computational pressure. Highly expressive logics are often undecidable or intractable. In automated reasoning and verification, logicians design restricted formalisms in which satisfiability and validity are decidable and tractable enough for computation.
In each case, the logic is evaluated by how well it handles these pressures. Criteria include soundness and completeness relative to a chosen semantics, but also proof complexity, decidability, ease of use, and fit with the target application. These are engineering virtues rather than reflections of an antecedent metaphysical status.
I can make the point sharper. Suppose I am designing a logic for resource sensitive reasoning, for example to control how many times a function can be applied, or to track use of linear resources in a programming language. Classical logic allows me to duplicate and discard premises freely. That is exactly what I want to avoid. I do not bemoan the fact that classical logic is “wrong.” I treat its structural rules as features that are ill suited to my purpose. I then design a different calculus, perhaps linear or affine, and judge it by whether it meets my constraints.
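A toy sketch may help here. Real linear and affine type systems enforce the discipline statically; the following Python class, entirely my own illustration rather than any standard calculus, only simulates the discipline at runtime, but it shows what it means to forbid the free duplication of a premise.

```python
class UseOnce:
    """A runtime cartoon of an affine resource: usable at most once."""

    def __init__(self, value):
        self._value = value
        self._spent = False

    def consume(self):
        if self._spent:
            # Classical logic's Contraction rule would let us reuse the
            # premise freely; here that move is simply unavailable.
            raise RuntimeError("resource already consumed")
        self._spent = True
        return self._value

ticket = UseOnce("admit one")
print(ticket.consume())    # fine: the resource is spent exactly once
# ticket.consume()         # uncommenting this raises RuntimeError
```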
From a generalist perspective, this looks like a proliferation of rival accounts of a single notion of validity. From my nihilist perspective, it looks more like a family of devices optimized for different tasks.
2. Case studies
It will help to look at a few specific roles that nonclassical logics now play.
Substructural logics. Linear logic forbids unrestricted Contraction and Weakening in order to track resources. Relevant logics restrict inference patterns that ignore the content of premises. These systems are motivated by concrete failures of classical reasoning in settings where the reuse or irrelevance of premises matters. Their success is measured by how well they model resource flow or relevance constraints and by their performance in proof theory and applications, for example to type systems or process calculi.
Paraconsistent logics. In domains like legal reasoning, belief revision, and inconsistent scientific theories, it is often useful to reason in the presence of contradictions without collapsing into triviality. Paraconsistent systems are explicitly engineered to avoid Explosion while preserving as much inferential strength as possible. They are judged by their ability to support non-trivial inference from inconsistent databases, to model dialetheist views of truth, or to handle paradoxes while avoiding collapse. Their status as “real logics” does not depend on converging with classical consequence in some limit. It depends on how well they do the work.
Non-monotonic and defeasible logics. Real-world reasoning often involves default assumptions that can be withdrawn in light of new evidence. Classical logic is monotonic: once a conclusion follows from a set of premises, it continues to follow from any larger set. Non-monotonic systems are designed to formalize the opposite pattern. Their success is measured by how well they capture intuitive default inferences, how they behave in benchmark scenarios like the Tweety example or the frame problem, and how well they can be implemented in knowledge representation systems.
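The Tweety pattern is easy to state concretely. The sketch below is a toy, not any standard default logic, but it exhibits the defining behavior: enlarging the premise set withdraws a conclusion.

```python
def flies(facts):
    """Toy default reasoning: birds fly by default, specifics defeat defaults."""
    if "penguin" in facts:           # more specific information wins
        return False
    if "bird" in facts:              # the default rule applies
        return True
    return None                      # no applicable rule

print(flies({"bird"}))               # True
print(flies({"bird", "penguin"}))    # False: monotonicity fails, by design
```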
Probabilistic and Bayesian frameworks. Many reasoning tasks involve graded support rather than all or nothing validity. Probability theory and Bayesian update rules provide tools for this. Logicians and statisticians argue about the right constraints on priors or the right update scheme, but the entire enterprise is already far from the search for a single bivalent consequence relation. Here the object of engineering is an algebra of degrees of belief and a dynamics for their revision. The key questions concern calibration, convergence, and performance under uncertainty.
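A minimal worked example makes the contrast with binary validity vivid. The numbers below are arbitrary illustrations: three candidate coin biases, a flat prior, and a handful of observed flips.

```python
import numpy as np

hypotheses = np.array([0.2, 0.5, 0.8])   # candidate probabilities of heads
posterior = np.array([1/3, 1/3, 1/3])    # flat prior over the hypotheses

def update(belief, heads):
    likelihood = hypotheses if heads else 1 - hypotheses
    belief = belief * likelihood          # posterior proportional to prior * likelihood
    return belief / belief.sum()          # renormalize to a distribution

for outcome in [True, True, False, True]:
    posterior = update(posterior, outcome)

print(posterior)   # support has shifted toward the 0.8 bias, by degree
```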
In all of these cases it is clear that different logics embody different choices and trade offs. Some sacrifice expressive power for computational tractability. Some allow more inferences at the price of tolerating inconsistency. Some encode stronger structural rules at the cost of misrepresenting resource sensitivity. There is no neutral standpoint from which to declare one of these logics the uniquely correct description of an independent consequence relation.
3. Normativity without universal laws
The engineering picture might seem to drain logic of normativity. If logics are tools, and we choose tools for our purposes, does anything remain of the idea that certain inferences are simply invalid?
I think so, but the source of normativity shifts. It no longer comes from the idea that a universal logic legislates over all thought. It arises from the fact that we are finite agents with projects that can fail in characteristic ways.
If my aim is to produce computer programs that do not crash, the choice of a type system and underlying logic is not arbitrary. Some choices will systematically allow bugs that I regard as unacceptable. Others will rule out legitimate programs. I evaluate logics by how well they navigate this space of risks.
If my aim is to prove theorems in algebraic topology, I will choose a background system that is strong and flexible enough to support the constructions I need, but not so strong that it undermines the conservativeness or relative consistency proofs I care about. I will then judge arguments within that system by whether they adhere to its rules.
If my aim is to design a legal procedure that treats like cases alike and avoids arbitrary verdicts, I will embed certain inferential patterns in statutes, precedents, and rules of evidence. Logical reconstruction can help clarify and criticize these patterns, but the ultimate standards still come from the aims of the practice.
In each case, there are bad inferences. They are bad relative to the adopted framework and its aims. A proof that commits an undisguised fallacy in Zermelo-Fraenkel set theory is simply incorrect as a proof in that theory. A statistical argument that uses p-values in a way known to generate high false positive rates is simply defective by the standards of the field. I do not need an additional story about how these verdicts are backed by a framework-independent Logic.
Normativity here looks a lot like the normativity of instrument use. There are better and worse ways to use a microscope. There are design flaws that make an instrument unreliable. There are calibration procedures that tighten its connection to the phenomena we are studying. Logic, understood as an engineered tool, is subject to analogous evaluations.
4. Logical ontology as a bad abstraction
Once I adopt this practice oriented view, the idea of a single logical ontology begins to look like a misfired abstraction. Generalism takes what are really families of tools, built for heterogeneous purposes, and treats them as approximations to a single, hidden relation. It then asks which one is really correct.
Logical Nihilism says that this is a mistake of the same kind as treating “phlogiston content” as a hidden magnitude behind combustion phenomena. The fact that there are regularities in how certain arguments behave across frameworks does not mean that they are tracking one invariant property called Logical Consequence. It means that our cognitive and social systems have evolved overlapping but diverging standards of inferential success.
I do not deny that we can talk about “logic” in a harmless, generic way. I talk that way myself. What I deny is that there is a metaphysically distinguished object or relation that answers to that talk. Once I see how logics are actually made and used, the temptation to reify them into components of the structure of reality loses its grip.
This is the sense in which Logical Nihilism is about the apoptosis of logic. I am not calling for the abandonment of proof theory, model theory, or formal semantics. I am arguing that the particular conception of logic as a universal set of laws should die off, leaving in place a healthier understanding of logical practice as a species of engineering.
5. Transition to Instrumental Finitism
The engineering stance toward logic already points toward a parallel stance toward mathematics. Logicians freely design proof systems and model classes, extend or restrict comprehension principles, adopt or reject choice principles, and so on, depending on what they want to prove and what consistency or conservativeness results they can obtain. Set theories, type theories, and arithmetic theories become tunable frameworks rather than windows onto a fixed mathematical universe.
Instrumental Finitism generalizes this attitude. It treats mathematical theories as symbolic technologies developed by finite agents to solve problems. In the next section I turn explicitly to mathematics. I will argue that once we take seriously the finitude of our cognitive and physical resources, and once we look closely at how mathematics functions inside science, the case for a robust mathematical ontology looks as weak as the case for a universal logical ontology.
Instrumental Finitism: Mathematics as Finite Technology
I now turn from logic to mathematics. The structure of my argument is parallel. In the same way that I reject a universal logical ontology, I want to reject a robust mathematical ontology. I do not deny the practice of mathematics. I deny that its success commits me to a realm of abstract objects or completed infinities.
Instrumental Finitism is my name for this attitude. It combines three claims.
- All actual mathematical activity is carried out by finite agents with finite resources in a physically finite environment.
- Mathematics functions, inside science and technology, as a symbolic instrument for building, compressing, and manipulating models, not as a descriptive inventory of an abstract universe.
- The apparent ontological commitment to infinite and abstract objects can be understood as a useful idealization, not as a literal description of what there is.
This is not a novel package. It draws on Hartry Field’s nominalist program, finitist reservations about completed infinities, and practice based accounts of mathematics such as those of Penelope Maddy. My aim is to push these strands together and integrate them into a broader picture that already contains Logical Nihilism.
1. The finitude of mathematical practice
Start with a truism that is easy to ignore. Every proof, every definition, every theorem that has ever actually been written is a finite object. Each is a sequence of marks on a page or states in a machine. Even if we imagine an idealized community of mathematicians, their activity consists in manipulating finite configurations of symbols.
There is nothing surprising in this, but once I take it seriously, a pressure begins to build against the usual realist picture. On that picture, mathematics is about a nonspatiotemporal realm of objects: sets, numbers, spaces, functions. The infinite hierarchy of sets exists in full generality. The continuum is a completed totality of points. When I prove a theorem about real numbers, I am discovering a fact about those objects.
On the finitist picture I prefer, something different is going on. When I work with the real numbers, I am using a symbolic technology that allows me to reason efficiently about certain patterns in measurement and change. The completed continuum is an idealization that makes the technology powerful and convenient. It is not a further thing in the world.
The key is that nothing in our use of this technology forces us to treat the idealization as literally true. The success of calculus in physics does not require that uncountably many real numbers exist. It requires only that we can construct finite proofs in a formal system whose theorems, when interpreted in a suitable way, correspond systematically to successful empirical models. The heavy lifting is done by the finitary proofs and the physical adequacy of the models, not by an ontological fact about a realm of reals.
This is Field’s central insight in his program of nominalizing physics. He shows that, at least in some areas, we can reformulate physical theories in a way that quantifies only over spacetime regions and concrete entities, while using mathematics as a conservative extension. The mathematics simplifies the statements and derivations, but it does not add new physical content. If this kind of conservativeness holds widely enough, it undermines the Quine-Putnam argument from indispensability to realism about mathematical objects. Mathematics can be indispensable as a tool without being true in the ontologically loaded sense.
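The relevant notion can be stated schematically. In Field’s sense, a mathematical theory M is conservative over a nominalistic theory N when adding M proves nothing new in N’s own vocabulary:

$$
N \cup M \vdash A \;\Longrightarrow\; N \vdash A
\qquad \text{for every nominalistic sentence } A.
$$

The mathematics may shorten derivations dramatically, but by this standard it adds no nominalistic content.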
Instrumental Finitism generalizes that move. I treat mathematics as a battery of symbolic extensions that are conservative, or at least approximately so, over the empirical content they help articulate. I then resist the temptation to treat the idealized structures as independent furniture of the universe.
2. Infinity as idealization
Nowhere is the contrast sharper than in the treatment of infinity. In everyday mathematical practice I talk freely about infinite sets, infinite sequences, uncountable cardinals, and limit processes that appeal to completed infinities. The realist takes this talk at face value. There really are infinitely many natural numbers. There really are uncountably many subsets of the naturals. ZFC set theory, or something like it, is a more or less accurate description of this realm.
Finitist and ultrafinitist traditions push back. They argue that the concept of a completed infinity is suspect. Some deny the meaningfulness of very large numbers. Others allow potential infinity, in the sense of unbounded finite extension, but refuse to reify an actually infinite totality.
Instrumental Finitism does not require me to take the most radical ultrafinitist route. I can acknowledge that talk of actual infinities is meaningful within our formal systems. I can even work inside set theories that postulate inaccessible cardinals if doing so yields useful results. What I refuse is the step from “our idealized system behaves as if there were infinite objects” to “there really are infinite objects.”
The status of infinity in my picture is like the status of frictionless planes or point masses in physics. These are idealizations that simplify models and proofs. They do not describe any literal object, but they can be approximated to a useful degree in the right regimes. Infinite sequences and sets play a similar role. They are conceptual tools that allow me to capture limiting behavior, approximate convergence, and structural relations in a compressed way.
If I ask what makes it correct to use these tools in a given context, the answer is not that the world contains infinite objects that they match. The answer is that the resulting finite proofs and models lead to successful predictions and explanations, and that when I unwind the idealization I can, in principle, recover finite approximations that are as close to the phenomena as I need.
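A textbook case shows what unwinding the idealization amounts to. The completed geometric series is the idealized object; every actual use of it can be replaced by a finite partial sum with an explicit error bound:

$$
\sum_{k=0}^{\infty} x^k = \frac{1}{1-x}, \qquad
\sum_{k=0}^{n} x^k = \frac{1-x^{n+1}}{1-x}, \qquad
\left|\frac{1}{1-x} - \sum_{k=0}^{n} x^k\right| = \frac{|x|^{n+1}}{|1-x|}
\quad (|x| < 1).
$$

For any tolerance the application demands, a finite n suffices, and it is only the finite object that any agent ever writes down.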
3. Complexity, computation, and resource bounds
A further line of pressure toward Instrumental Finitism comes from computer science. Modern mathematics is entangled with computation. Proof assistants, automated theorem provers, and type theory treat proofs and programs as interchangeable. Complexity theory studies the resources required to carry out calculations and solve decision problems. In this environment, the idea that mathematics floats free of resource constraints looks increasingly implausible.
From a computational point of view, a proof is not just a static object. It is also the trace of a finite process. Some proofs are so long and complicated that no human can check them unaided. Others rely on algorithms whose complexity makes them intractable for any physically realizable machine. These constraints matter. They shape what parts of mathematics can be explored in practice, which conjectures can be tested, and which structures can be effectively represented.
Instrumental Finitism takes these facts seriously. It treats theorems not as timeless truths about abstract objects, but as the output of finite processes that consume time, space, and energy. Proof techniques are evaluated not only by their elegance, but by their computational profile. A theory that exists only on paper, whose consequences cannot in principle be computed or recognized by finite agents, has a different status than a theory whose machinery can be embodied in practice.
This does not mean that I collapse mathematics into computer science, but it does mean that I no longer regard resource insensitivity as a virtue. A mathematical framework that ignores complexity constraints is like a physical theory that ignores friction. It may be locally useful, but only under an understood idealization.
4. Indispensability without ontology
The most serious challenge to Instrumental Finitism comes from indispensability arguments. The rough form is familiar: our best scientific theories are deeply mathematical; we have good reason to believe those theories; therefore we have good reason to believe the mathematics they use. If I accept this reasoning, I seem pushed back toward some kind of realism about mathematical objects.
There are several well known responses. I do not need to recapitulate them all. What matters for my purposes is that once I adopt the engineering perspective, the argument begins to look question begging.
If mathematics is a tool for building models, not a description of a separate realm, then its indispensability is exactly what I would expect. Of course I cannot simply remove the tool and keep all the models. The fact that I cannot do without a tool, given my current practices, does not show that the tool describes a new kind of object. It shows that my current way of representing the world is tightly bound up with that tool.
Field’s conservativeness program aims to make this point precise. Where a mathematical theory is conservative over a nominalist base theory, its use does not augment the physical commitments of the base. Even where full conservativeness fails, we can often explain the failure by pointing to idealizations, limiting processes, or the introduction of new structure that simplifies the representation of old content.
Penelope Maddy’s practice based approach supports a similar conclusion from a different angle. She emphasizes that mathematical justification is internal to mathematical practice. Mathematicians adopt axioms and inferential norms because of their fruitfulness, coherence with existing results, and ability to organize large bodies of work, not because of a prior metaphysical picture of a realm of sets. Ontological questions are, at best, a late gloss on this practice.
Instrumental Finitism takes this seriously. It respects the internal standards of mathematics, but it refuses to inflate them into evidence for a separate ontology. My commitment is to the reliability of the tools in the contexts where they are used, not to the existence of the idealized structures that appear in their formulations.
5. Local incompleteness
I must now explain why Gödel’s incompleteness theorems, which are usually treated as revelations of the fundamental limits of formal reasoning, do not carry that significance once logical ontology has undergone apoptosis. The familiar reading of Gödel depends on background assumptions that have already been displaced by Logical Nihilism and Instrumental Finitism. It assumes that there is a fixed logical framework shared by all correct reasoning, that arithmetic provides a privileged window on the structure of formal thought, and that incompleteness exposes a necessary boundary of rationality. What Gödel is usually taken to mark as a universal limit is, in my picture, only the boundary of a particular resource-bounded and context-dependent inferential device. Once logic is recognized as a finite, engineered practice, none of the assumptions that give the traditional reading its force survives.
The pressure begins with Russell’s diagnosis of why generalism fails. Her arguments start from a simple thought. If any principle of inference is to count as a genuine law of logic, then it must hold in complete generality across all subject matters and linguistic contexts. She argues that no principle meets this standard. Even Identity and Conjunction Introduction, which seem immune to revision, fail once we broaden the range of contexts to include even minimal context sensitivity, as with demonstratives and indexicals. Logical form is not globally stable. It shifts with language and application. If there are no genuine laws of logic, then there is no universal consequence relation for Gödel to operate on. The theorems rely on a single background notion of derivability that classical arithmetic can represent. On her diagnosis, that background is an artifact, not a globally neutral point of view.
Gödel’s construction requires several commitments that generalists take for granted. It presupposes a fixed logical vocabulary, a sharp boundary between logical and non-logical terms, and a notion of validity that transcends particular languages. It presupposes that quantification over the natural numbers is unproblematic and that the idealization of an infinite proof search space is legitimate. These commitments are not benign. They encode exactly the generalist ontology that Logical Nihilism rejects. Once logical ontology is suspended, Gödel’s diagonal argument has no privileged domain. It reveals something about classical arithmetic conceived as a very specific kind of formal machine. It reveals nothing about reasoning as such.
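It is worth having the schematic shape of the construction on the page, in standard textbook notation. The diagonal lemma yields a sentence G that says of itself that it is unprovable:

$$
\mathsf{PA} \vdash G \;\leftrightarrow\; \neg\,\mathrm{Prov}_{\mathsf{PA}}(\ulcorner G \urcorner),
$$

so that if PA is consistent then PA does not prove G, and if PA is ω-consistent then PA does not prove ¬G (Rosser’s refinement drops the ω-consistency assumption). Every ingredient here, the fixed formal language, the arithmetization of syntax, the classical consequence relation, is exactly the machinery whose universality is in question.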
Cotnoir reinforces this conclusion from a different direction. His version of logical nihilism begins from the observation that no formal consequence relation succeeds at modeling natural language inference in its full generality. The standards of informal reasoning shift with pragmatic aims and conversational contexts. Syntax and semantics cannot be cleanly separated. If no formal calculus captures the inferential practice of ordinary agents, then Gödel’s theorems cannot possibly describe a universal boundary of thought. At most they describe limits internal to systems that already accept the idealizations Gödel’s proof depends on: arithmetization over an infinite domain, classical two-valued logic, and a fixed relation of syntactic provability. The incompleteness of classical arithmetic is simply the incompleteness of one modeling strategy that insists on these idealizations. It is not a verdict on the limits of universal reason.
Once Russell’s and Cotnoir’s pressures are combined, the picture is clear. There is no universal logic for Gödel to constrain. There is no unified class of inferences whose closure properties could reveal a global limit. There is no framework independent notion of provability that carries authority across systems. Gödel’s theorems become local results about the behavior of a carefully designed formal calculus, not deep facts about the nature of rationality. They mark the limits of a particular inferential technology, not the limits of thought.
Instrumental Finitism pushes this further. If mathematics is a symbolic tool rather than an ontology of infinite objects, then Gödel’s theorems describe the design limitations of infinite proof systems. They do not reveal any metaphysical necessity or insight into the world as it is. The unprovable Gödel sentence arises only inside formal systems that treat infinite arithmetical structures as if they were available for unrestricted manipulation. Once we treat these structures as idealizations, Gödel’s sentence loses any claim to being a statement that thought cannot reach. It becomes a side effect of the assumptions that make classical arithmetic tractable for certain purposes. The limit is internal to the technology, not external to the agent.
The result is that incompleteness is not a philosophical boundary condition on reason. It is a local pathology of a formal device that pretended to be universal. Gödel’s theorems therefore do not challenge Logical Nihilism. They support it. They illustrate how claims of universality collapse under their own machinery. A system that insists it can represent all reasoning finds that its own idealizations generate contradictions or unprovable statements. The lesson is not that truth outruns proof in some transcendent sense, but that universality is a defective requirement. Logic and mathematics behave well as tools precisely when they do not pretend to describe the form of being.
From within the frame I have built, incompleteness is another moment in the apoptosis of logical ontology. Gödel shows what happens when we try to give a finite symbolic practice the status of a metaphysical structure. The practice breaks under the strain. The technology reveals limits that belong to its own design, not to the world. Once the demand for universality is abandoned, those limits no longer carry any special meaning. They are simply constraints of one device among many. Reason does not confront a cosmic boundary. It confronts the limits of tools that never had the right to legislate for it in the first place.
6. Mathematics, logic, and onto-suspension
At this point the parallel with logic should be clear. In both domains, I abandon the search for a framework independent structure that thought must mirror. In both domains, I treat formal systems as artifacts designed to help finite agents navigate complex environments. Logical formalisms help manage inference. Mathematical formalisms help manage measurement, structure, and transformation. Neither needs to be grounded in a special ontological realm.
This is where onto-suspension enters. Faced with the power of modern logic and mathematics, it is tempting either to reify their structures as the deep form of reality, or to react by declaring reality fundamentally irrational or intractable. I want to refuse both temptations. I do not claim that the world is intrinsically logical or mathematical. I also do not claim that it is intrinsically illogical or non mathematical. I suspend judgment on the question of what reality is like in itself, because I regard that question as badly formed.
Instrumental Finitism and Logical Nihilism prepare the ground for this suspension. Once I have given up on logical and mathematical ontology, it becomes natural to think of science itself in non ontological terms. In the next section I turn to scientific rationality. I will argue that the authority of science can be reconstructed in terms of robustness, model pluralism, and social objectivity, without appeal to a universal logic or a mathematical realm. Science becomes another finite practice that can be appraised from the inside, without the need for a metaphysical foundation.
Complex Numbers, Quantum Theory, and the Fate of the Subject
The abstract character of mathematics is nowhere more striking than in the role of complex numbers in quantum theory. If any case tempts one to say that mathematical structure reveals the inner form of reality, it is the ubiquity of complex amplitudes in quantum mechanics. I want to show how an instrumental finitist stance makes sense of this role, and how it connects back to the erosion of the belief-bearing subject.
1. The puzzle of complex amplitudes
In elementary quantum mechanics I represent the state of a system by a vector in a complex Hilbert space. Time evolution is given by unitary operators. Probabilities of measurement outcomes are given by the Born rule, which assigns to an outcome a probability proportional to the squared modulus of a complex amplitude. Interference phenomena are encoded in relative phases, which are differences of arguments of complex numbers.
It is natural to ask why the world should be written in terms of numbers of the form a + bi, where i^2 = -1. No direct measurement ever produces an imaginary outcome. Observables are represented by self-adjoint operators, and their eigenvalues are real. Yet the machinery that connects preparations, evolutions, and measurements is shot through with complex structure.
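Before weighing interpretations, it helps to see the machinery at work in the smallest possible case. The sketch below is an idealized two-path interferometer with made-up amplitudes; the point is only that the probability comes from squaring a sum of complex amplitudes, so the relative phase does real predictive work.

```python
import numpy as np

def detection_probability(phase):
    a1 = 0.5                           # amplitude contributed by path 1
    a2 = 0.5 * np.exp(1j * phase)      # path 2 carries a relative phase
    return abs(a1 + a2) ** 2           # Born rule: squared modulus of the sum

for phase in [0.0, np.pi / 2, np.pi]:
    print(round(phase, 3), round(detection_probability(phase), 3))
# 0.0 -> 1.0, 1.571 -> 0.5, 3.142 -> 0.0: constructive through destructive.
# The "classical" rule |a1|**2 + |a2|**2 would give 0.5 at every phase.
```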
Realist readings of quantum theory often take this as evidence that the world itself has a complex linear structure. The complex Hilbert space is treated as a literal state space. The inner product is a real physical relation. The presence of complex numbers is then an ontological discovery, a revelation that the universe is, at a deep level, a complex vector system.
Instrumental Finitism tells a different story. In my view the appearance of complex amplitudes shows that we have found a particularly effective symbolic technology for encoding a certain range of empirical regularities. The question I ask is not why the world is complex valued but why this complex valued formalism is such a powerful tool for finite agents who want to predict quantum phenomena.
2. Real reformulations and redundancy
One immediate reason to be cautious about reading complex structure back into ontology is that the same empirical content can be expressed in purely real terms. Any complex Hilbert space has an equivalent description as a real Hilbert space of twice the dimension, equipped with additional structure that plays the role of multiplication by i. Quantum mechanics can be reformulated in that real setting without losing empirical adequacy. The complex numbers do important compressive work, but they are not strictly forced upon us.
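The standard identification makes this concrete. A complex number becomes a pair of reals, and multiplication by i becomes a fixed matrix J with J squared equal to minus the identity:

$$
a + bi \;\longleftrightarrow\; \begin{pmatrix} a \\ b \end{pmatrix},
\qquad
J = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix},
\qquad
J\begin{pmatrix} a \\ b \end{pmatrix} = \begin{pmatrix} -b \\ a \end{pmatrix},
\qquad
J^2 = -I.
$$

Applied coordinatewise, this turns an n-dimensional complex Hilbert space into a 2n-dimensional real one equipped with the extra operator J, which is all the complex structure ever was.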
Something similar is true of global phase. The overall phase of a state vector is unobservable. Only relative phases matter. This means that distinct complex vectors represent the same physical situation. The formalism comes with redundancy built in. We quotient by a gauge symmetry to regain a one-to-one correspondence between equivalence classes and physical possibilities. That is a familiar pattern from other parts of physics. It is a sign that parts of the mathematical structure are representational scaffolding rather than mirrors of independent features of the world.
From an instrumental finitist perspective this is exactly what I should expect. The construction of complex Hilbert spaces is a way of packaging transformation rules and interference constraints in a compact, calculable form. The presence of redundancy is not a flaw. It is the price of a representation that is tractable for finite reasoners.
I can put the point in a way that echoes earlier sections. The success of the complex formalism does not show that there is a mathematical ontology of complex amplitudes that the world instantiates. It shows that there is a family of empirical constraints, found in scattering experiments, spectroscopy, interference patterns, and so on, that can be encoded very efficiently using this particular piece of symbolic technology. That is an achievement of engineering, not a revelation of necessary structure.
2.5. Psi-ontic physics as misplaced ontology
This is also the point at which a more familiar ontological temptation enters. Quantum theory is often interpreted through the psi-ontic (or ψ-ontic) lens. On a psi-ontic reading the wavefunction is not a representational device. It is a physical property of the system. A difference in the wavefunction is a difference in what the world is. Under this view the complex Hilbert space is not only a powerful modeling environment but the deep structure of being.
Instrumental Finitism has no use for this picture. If mathematical structures are symbolic technologies rather than metaphysical descriptions, then the wavefunction has no more claim to ontological dignity than any other element of the formalism. It earns its place because it compresses interference patterns and guides finite agents toward stable predictive expectations. It does not earn a metaphysical role simply because of its calculational success.
Much of the contemporary enthusiasm for psi-ontology derives from the PBR theorem and related experiments that appear to show that the wavefunction must be real. This is a misreading. PBR rules out only a narrow class of epistemic models that obey strong independence assumptions. It does not eliminate contextual or non-factorable hidden variable models. It does not entail that the wavefunction is a physical attribute in the strong sense advertised by psi-ontic readings. The recent quantum computer implementations of PBR protocols demonstrate only that quantum computers behave according to quantum mechanics. They reveal nothing about what the wavefunction is. They reveal only how the formalism behaves under engineered conditions.
Psi-epistemic views, which treat the wavefunction as an expression of incomplete knowledge about an underlying physical state, fare no better once the belief-bearing subject has undergone apoptosis. The contrast between epistemic and ontic characterizations presupposes an agent who knows or fails to know the actual condition of a system. My earlier arguments remove that subject from the foundation of quantum theory. There is no epistemic core that holds the wavefunction as information. There is only a modeling practice that uses the wavefunction as part of a collective inferential procedure.
Once the metaphysical and epistemic readings collapse, the wavefunction becomes neither fact nor ignorance. It becomes a component in an inferential instrument. From the standpoint of onto-suspension this is exactly where it belongs. To treat the wavefunction as a real property is to mistake a successful device for a metaphysical revelation. To treat the wavefunction as a repository of knowledge is to re-import the folk psychological subject I have already dissolved. The sober view is that the wavefunction is a durable but revisable tool inside a finite practice. It will survive for as long as it yields robust predictive success and will die when better instruments arise.
This is not a demotion. It is a clarification. The wavefunction belongs to the same category as complex amplitudes more generally. It is a symbolic construction that encodes the operational constraints of a domain. It has no need for ontological endorsement. That very refusal of endorsement is part of the instrumental finitist stance. The power of the wavefunction does not reveal the structure of reality. It reveals the power of our representational technologies.
3. Interference, superposition, and model building
Consider what the complex amplitudes actually do for us. They allow us to add contributions from distinct paths or histories with relative phases, then square the modulus to obtain probabilities. In Feynman style path integral formulations, each path contributes a factor proportional to exp(iS/ℏ), where S is the action. The interference pattern emerges from cancellations and reinforcements among these complex contributions.
From a realist point of view this tempts one to say that the system really is in a superposition, that each path is in some sense actual, and that the complex numbers measure a mysterious propensity to realize one outcome or another. From an instrumental finitist point of view I see something different. I see a powerful way of representing how finite agents should apportion expectations over possible outcomes when they are dealing with experiments that exhibit stable interference patterns.
The core regularity is that certain preparation and measurement procedures lead to probability distributions that cannot be represented by a classical Kolmogorovian measure over a single Boolean algebra of events. Instead, we have a non-distributive lattice of projection operators and a rule for assigning probabilities that respects that lattice structure. The complex amplitudes encode this rule in a compact algebraic form. They are not new objects in the world. They are a way of writing compact instructions for how to expect outcomes in a given experimental setup.
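A deliberately minimal sketch makes the contrast vivid. The action values below are invented for illustration; the only point is that the quantum rule adds amplitudes before squaring, and the resulting cross term is what no classical mixture over the two paths can reproduce.

```python
import numpy as np

# Two paths from source to detector, each carrying a complex amplitude.
# Each path's phase is set by its action S, as in exp(iS/hbar).
S1, S2, hbar = 1.0, 1.7, 1.0             # illustrative values only
a1 = np.exp(1j * S1 / hbar) / np.sqrt(2)
a2 = np.exp(1j * S2 / hbar) / np.sqrt(2)

# Quantum rule: add the amplitudes first, then square the modulus.
p_quantum = abs(a1 + a2) ** 2            # equals 1 + cos((S1 - S2) / hbar)

# Classical mixture: square each modulus first, then add.
p_classical = abs(a1) ** 2 + abs(a2) ** 2   # always 1.0 here

print(p_quantum, p_classical)
# The gap between the two numbers is the interference term, the regularity
# that the complex amplitudes exist to encode.
```

Nothing in this sketch requires treating a1 and a2 as elements of reality. They are bookkeeping for how expectations over outcomes must be apportioned.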
This is not a subjective story in the narrow sense. The rule is not a mere summary of my personal opinions. It is a socially stabilized instrument. A community of experimenters calibrates and refines it because it yields robust predictions. Different agents who follow the same procedures and use the same formalism will converge on the same probability assignments. That is enough to give the formalism objective authority without treating its algebraic elements as metaphysical furniture.
4. Quantum states and the belief-bearing subject
This brings me to the relation with the belief-bearing subject. In folk psychological terms, a state is often understood as a kind of inward belief or knowledge condition. It is tempting to interpret the quantum state in a similar way. Interpretations that emphasize the epistemic or Bayesian character of the quantum state formalize this temptation. On those views the state is a compact representation of an agent’s credences about future experiences.
I agree with these approaches to the extent that they emphasize the role of the state as a tool for guiding expectations. I diverge when they treat that role as grounded in the special status of belief as a mental attitude. From the eliminativist side of my project I am already suspicious of belief as a basic category. Predictive processing models depict the brain as maintaining generative models that encode expectations. These models are implemented in patterns of neural connectivity and activity, not in discrete sentences of a belief language.
Seen from that angle, a quantum state is not a belief in the folk sense. It is an element of a public modeling practice. It is a compact vector or operator that finite agents construct and revise in order to coordinate predictions and interventions. It can be implemented in paper notebooks, computer memory, or neural circuitry. Its normativity is social and instrumental, not grounded in a special ontology of belief.
This decentering of the subject has two sides.
On the one hand, quantum mechanics appears to give a central role to measurement and to the knowledge of observers. Textbook formulations talk about wave function collapse upon observation. This has encouraged pictures in which an inner subject stands outside the quantum formalism and imposes classical definiteness on an otherwise indeterminate reality. That picture fits well with a robust belief-bearing subject.
On the other hand, the very same theory also undermines that subject. Decoherence theory and quantum Darwinism show how records of outcomes that are effectively classical can be spread through the environment without any privileged observer. The apparatus, the environment, and any recording systems are all treated within the same unitary dynamics. The special status of an inner subject begins to look like a projection of manifest image habits onto a formalism that does not require it.
In my framework the quantum state belongs not to an inner ego but to a distributed practice. Predictive brains, measurement devices, and shared mathematical formalisms are all parts of a larger inferential apparatus that finite agents use to navigate a quantum structured environment. Beliefs, in the folk sense, are rough summaries of how that apparatus behaves. The complex numbers in the formalism are features of the tool, not of a ghostly mental realm.
5. Quantum theory without ontological consolation
One common response to the strangeness of quantum theory is to seek ontological consolation in mathematics. If the world is quantum, perhaps it is at bottom a complex Hilbert space, a C*-algebra, a quantum field, or a pattern in a universal wave function. Mathematical structure becomes the last refuge of a metaphysics that has lost classical intuitions.
Logical Nihilism and Instrumental Finitism close off that refuge. They do not deny the success of the quantum formalism. They deny that its success licenses the move from a powerful instrument to a metaphysical picture of what reality is.
This does not leave me with a formless chaos. It leaves me with a disciplined humility. Quantum theory is our best current tool for regimenting a vast range of microphysical phenomena. Complex numbers are a central part of that tool. The belief-bearing subject, as folk psychology conceived it, is not required to anchor this practice. What we have instead is a network of finite systems that implement and iterate an effective modeling scheme.
In this sense the role of complex numbers in quantum theory supports the broader project of the apoptosis of reason. Where the realist sees in them a deep imprint of mathematical ontology on being, I see a particularly clear example of how far finite, engineered symbolic technologies can reach without metaphysical backing. The subject who once took comfort in the thought that its beliefs mirrored a logical and mathematical order of reality finds itself displaced by a practice that functions perfectly well without that picture.
In the next section I turn explicitly to scientific rationality as a whole. I ask how science can retain its authority once we have surrendered both logical and mathematical ontology, and once the belief-bearing subject has been dissolved into predictive and modeling practices. The answer, I will argue, lies in robustness, pluralism of models, and social objectivity, not in any hidden logical or numerical structure of the world.
Scientific Rationality without Logical or Mathematical Foundations
I can now return to the question that has been lurking in the background from the beginning. If I give up a universal logic and a robust mathematical ontology, what is left of scientific rationality? If logic is a toolkit and mathematics is a finite symbolic technology, why think that science has any special authority at all?
The standard picture ties scientific objectivity to two familiar anchors. First, it is assumed that scientific reasoning is underwritten by a fixed logical structure: good scientific arguments are those that conform to a universal consequence relation. Second, it is assumed that scientific theories describe a world whose structure is fundamentally mathematical: the success of physics, in particular, reveals that nature is literally written in the language of mathematics.
Logical Nihilism and Instrumental Finitism cut both of these anchors. I am committed to the view that there is no global consequence relation that guarantees the correctness of scientific inference and no independent realm of mathematical objects that scientific theories faithfully describe. I do not, however, think that this leaves science as just one narrative among others. The question is what can replace those anchors.
My answer is that scientific rationality is grounded in three main features:
- The robustness of its inferential and experimental practices.
- The pluralism of its models and theoretical representations.
- The social objectivity of its institutions of criticism and correction.
None of these requires a logical or mathematical ontology. All of them fit naturally with the picture of logic and mathematics as finite technologies.
1. Robustness as a non metaphysical virtue
By robustness I mean the tendency of certain results to reappear across changes in method, representation, and background assumptions. In practice, scientists treat robust phenomena and robust results as more secure than those that depend sensitively on a specific model or instrument.
There are several familiar forms of robustness.
- Cross method robustness. A measured quantity, such as the charge of the electron or the Hubble constant, is estimated by multiple experimental techniques. These techniques rely on different apparatus, different physical principles, and different statistical analyses. When they converge on the same value within error bars, confidence in that value rises.
- Cross model robustness. A qualitative feature, such as the existence of phase transitions or the stability of planetary orbits, appears in multiple mathematical models that rest on different idealizations and may even be mutually inconsistent. When the feature survives changes in modeling choices, it is treated as more than an artifact of any one model.
- Parameter robustness. A prediction remains stable under small changes in parameter values and boundary conditions. If tiny perturbations send the result wildly astray, the model is treated with suspicion.
These forms of robustness can be described and analyzed using local logics and local mathematics. They do not require that there be a universal consequence relation or a single mathematical structure that the world instantiates. What matters is that finite agents can vary their representational tools and still obtain convergent results.
From my perspective, robustness is a relational property of a practice. It is a feature of how theories, experiments, and instruments hang together under systematic variation. When a result is robust, I treat it as a stable output of the current inferential technology. That is enough to justify reliance on it for further modeling and intervention.
Notice that this justification does not appeal to an independent metaphysical story. I do not have to say that a robust result tracks a necessary structure of reality. I only have to say that, given the ways we know how to perturb our tools, the result has proven resilient. That resilience is contingent and revisable, but it is not arbitrary. It is anchored in the way our finite practices interact with their material environment.
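A toy sketch of the parameter robustness check may help fix ideas. The model, the jitter level, and the verdicts below are all invented for the example; the pattern of perturbing inputs and watching the output is the point.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(k, x0, t):
    """Toy exponential-decay model, standing in for any parameterized prediction."""
    return x0 * np.exp(-k * t)

# Baseline prediction at nominal parameter values.
k0, x0, t_pred = 0.3, 1.0, 5.0
baseline = model(k0, x0, t_pred)

# Jitter the parameters by roughly 2% and collect the perturbed predictions.
perturbed = [
    model(k0 * (1 + rng.normal(0, 0.02)),
          x0 * (1 + rng.normal(0, 0.02)),
          t_pred)
    for _ in range(1000)
]
spread = np.std(perturbed) / baseline
print(f"relative spread under 2% parameter jitter: {spread:.3f}")
# A small spread is treated as evidence that the prediction is not an
# artifact of one exact parameter choice; a large spread flags fragility.
```

Nothing metaphysical is invoked anywhere in the loop. Robustness is measured entirely inside the practice, by varying the tools and watching what survives.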
2. Model pluralism instead of a single true description
Modern science is saturated with model pluralism. The same physical system is routinely represented by several incompatible models, each useful in a different regime.
I can list a few standard examples.
- Light can be modeled as rays in geometric optics, as waves in classical electromagnetism, or as quanta in quantum electrodynamics.
- Gases can be described by thermodynamic equations of state, by kinetic theory, or by molecular dynamics simulations.
- The solar system can be modeled using Newtonian gravitation for most purposes and general relativity in high precision or high gravity contexts.
These models are not straightforwardly reducible to one another. They use different variables, different idealizations, and sometimes different ontological pictures. Yet scientists move between them with ease. They are treated as tools, not as competing metaphysical portraits that must be reconciled at all costs.
Logical Nihilism fits this practice neatly. If there is no single core logic that all correct reasoning must instantiate, there is no reason to demand that all scientific models be harmonized into one ultimate representation. Instead I can treat each model as a device that encodes certain patterns of dependence and allows me to calculate certain quantities in certain regimes.
Instrumental Finitism plays a similar role. If mathematical structures are not reified as independent objects, I am free to use whatever structure best serves my current purpose. The Hilbert space of quantum mechanics, the phase space of classical mechanics, and the state space of statistical mechanics can all be used without asking which one matches the ultimate ontology of the world.
Model pluralism also contributes to robustness. When different models, based on different idealizations, agree on a prediction, that agreement is evidence that the prediction does not depend on the quirks of any one representation. Again, this is a feature of practice that does not require a metaphysical explanation.
3. Social objectivity without transcendental guarantees
The final piece is social. Science is not a private activity. Its rationality is distributed across institutions of publication, replication, criticism, and education. What gives scientific claims their special status is not that they can be derived from axioms of Logic and Mathematics, but that they have survived exposure to these institutions.
Several mechanisms are familiar here.
- Publicness and reproducibility. Methods and results are made public in sufficient detail that other groups can attempt to reproduce them. Failures of replication trigger further investigation.
- Organized skepticism. There are institutionalized incentives, however imperfect, for challenging established claims. New data, alternative models, and critical analyses feed into a dynamic process of revision.
- Calibration and standardization. Instruments are calibrated against shared standards. Procedures are codified. Statistical methods are subjected to meta scientific scrutiny.
These mechanisms are themselves imperfect. They are vulnerable to social bias, funding constraints, and political interference. But they are also corrigible. Scientists reflect on their own practices, redesign statistical tools, institute preregistration and data sharing, and so on. The rationality of science is embodied in this reflexive, self correcting pattern, not in a hidden logical skeleton.
From my point of view, social objectivity replaces the role that transcendental guarantees have traditionally played. Instead of saying that scientific knowledge is justified because it conforms to universal laws of thought, I say that it is justified because it is the product of a practice that has evolved complex mechanisms for error detection and control. Instead of saying that science is objective because it describes a mathematical structure that exists independently of us, I say that it is objective because its results are, in principle, accessible to any finite agent who can enter into the practice.
This is where the erosion of the belief-bearing subject matters. If I stop thinking of rationality as primarily an attribute of inner mental states, I am free to see it as a property of extended systems. A scientific community, with its instruments, archives, and theoretical tools, can exhibit rationality even if no individual member perfectly instantiates the ideal of a belief governed subject.
Predictive processing models of cognition reinforce this shift. They depict individual brains as prediction error minimizers, constantly updating generative models in light of sensory input. When such brains are embedded in a scientific culture that provides external symbol systems, formal logics, and mathematical tools, the result is a multi scale inferential network. Rationality belongs to that network, not to a ghostly Cartesian center.
4. Scientific authority after apoptosis
What remains of scientific authority once logic, mathematics, and the subject have undergone apoptosis?
What remains is a practice that has proven extraordinarily good at certain tasks: predicting regularities, building reliable technologies, coordinating interventions into the world, and uncovering long range dependencies between phenomena. The grounds for trusting it are empirical and historical. It has accumulated a track record of success under ever more stringent forms of criticism.
In my view, this authority is strictly finite. It is bounded by our cognitive capacities, by the physical limits of measurement and computation, and by the historical contingencies of our institutions. It is also fragile. It depends on maintaining the conditions under which robustness, model pluralism, and social objectivity can function.
This finitude is not a defect. It is precisely what one should expect if there is no deeper logical or mathematical ontology underwriting the practice. Science is a set of evolving technologies for navigating a world that does not care whether we describe it correctly. There is no guarantee that our methods will continue to work indefinitely. Extinction and collapse are live possibilities.
What I reject is the thought that in order to acknowledge this finitude, I must abandon science to relativism. I do not need a metaphysical foundation to recognize that some ways of modeling and intervening are better than others at achieving the aims we in fact have. Nor do I need to reconstruct those aims as universal norms written into the fabric of reason. They emerge from the kind of predictive, error minimizing, socially organized organism that we happen to be.
In that sense, scientific rationality fits naturally into the broader project of onto-suspension. I continue to use and refine the tools that work. I track the conditions under which they fail. I resist the urge to elevate their success into a story about the ultimate nature of reality. The authority of science is historical, procedural, and fallible. It does not need the backing of a universal Logic or a realm of mathematical objects.
In the next part of this project I want to connect this finite, non ontological picture of science more directly to the themes of extinction and deep time. If the very practices that make scientific rationality possible are contingent and possibly terminal, what should my attitude be toward their eventual disappearance? How does the prospect of the end of inquiry feed back into the way I understand logic, mathematics, and the subject in the present? These questions will bring the notion of apoptosis into its final register: not only as a metaphor for the death of certain concepts, but as a recognition of the mortality of the practices that replaced them.
Extinction, Deep Time, and the Ephemeral Subject
I can now say more clearly what it means to take scientific rationality as finite and non ontological in a universe whose long term future is extinction. The point is not just that our theories tell us that stars will burn out and entropy will increase. It is that the very practices through which I arrive at that picture are themselves fragile episodes in cosmic history. Human ontology is not a mirror of being. It is a temporary pattern that will vanish, leaving no remainder that could justify it from outside.
1. Extinction as the outer horizon of inquiry
The contemporary cosmological image is brutally simple. On any of the standard models, the span of conditions under which embodied, energy hungry, information processing systems like us can exist is finite. Stars form, live, and die. Energy gradients that support complex structures are used up. On some scales there may be cycles or local recurrences, but there is no serious prospect that our particular lineage of practices survives indefinitely.
If I take this seriously, extinction is not just an interesting prediction inside a theory. It is the outer boundary of every epistemic project I undertake. All cognitive work, including the construction of the very models that predict extinction, is carried out in a window that will close. There is no view from beyond that closure. There is no final standpoint from which our theories can be certified as having captured what reality is like in itself.
This is not a skeptical worry. It is a structural fact about where and when our practices are implemented. If reason is an activity of finite organisms in a hostile universe, then it has a birth and a death. What I am calling the apoptosis of reason is not just the elimination of certain concepts within philosophical theory. It is the historical fact that all of these concepts, along with the practices that made them useful, will eventually stop being used at all.
2. Deep time and the scale mismatch of ontology
Deep time worsens the dislocation. Human cognitive time spans are measured in seconds, years, perhaps generations. Geological and cosmological time scales make even the history of life on Earth look brief. My entire species occupies a thin slice of a temporal expanse that ranges from the formation of galaxies to the decay of black holes.
Traditional ontology tries to speak at the largest scale. It asks what being is, what exists necessarily, what structures reality has in all possible worlds. It pretends to a kind of temporal neutrality. On that stage, human practices appear as minor episodes that can be factored out without loss. Being is supposed to be what is there whether or not anyone thinks about it.
My picture reverses the direction of explanation. Instead of thinking of human practices as attempts to match an antecedent ontology, I think of ontological claims as artifacts produced by those practices. Assertions about what there really is reflect the constraints and affordances of a very local cognitive ecology. They are shaped by our sensory modalities, our energetic needs, our social forms, and the tools we have built.
Once I put human practices into deep time, this dependence is impossible to ignore. There were vast stretches of cosmic history in which no one spoke of being at all. There will be vast stretches in which whatever replaces us, if anything does, will not use our categories. The temptation to treat our current ontology as a privileged glimpse of the structure of reality now looks like a parochial fantasy.
This is not to say that there is no reality. It is to say that my grip on it is always mediated by finite, local practices whose persistence is contingent. The finitude of science and the prospect of extinction are not minor details inside an otherwise stable metaphysical picture. They are reasons to refuse the metaphysical picture in the first place.
3. Human ontology as a temporary interface
I can now state the key claim in a compact way. What I usually call ontology is better understood as a kind of interface layer between finite modeling practices and the world they interact with.
When physicists say that there are fields, or that spacetime has a certain metric structure, they are not reading labels off the furniture of the universe. They are introducing entities and properties that make their models tractable and connectible to measurement. When folk psychology says that there are beliefs, desires, and selves, it is doing something similar for the domain of social and individual behavior.
These interface layers are not arbitrary. They are constrained by the need to keep prediction error low enough to survive. They are shaped by the success of the modeling technologies I have already described: logical formalisms, mathematical frameworks, measurement systems, and social institutions of science. But they are also revisable and, in the long run, perishable.
To call human ontology ephemeral is to deny that these interface categories have any special claim to timelessness. The category of a belief-bearing subject is useful for a while, then it begins to look like an obstacle to clearer modeling of neural and social dynamics. The category of a universal logic is useful for a while, then it fractures under the weight of nonclassical systems. The category of a mathematical universe of sets is useful for a while, then it is outflanked by practice based and finitist accounts.
At each stage, there is a temptation to rescue the old interface by elevating it into an ontology. We say that there really are beliefs, or that reality really is logical, or that mathematics really describes a separate realm. The apoptosis of reason is the refusal of that rescue. Concepts that outlive their usefulness are allowed to die. They are not embalmed as metaphysical relics.
Deep time and extinction simply extend this pattern to its limit. Eventually all of our interface layers will drop away, not because they were false pictures that will be corrected, but because there will no longer be any systems that use them.
4. No consolation in the survival of structure
A familiar way of softening the blow of extinction is to say that while we will vanish, something about our rational structure might survive. Perhaps the truths we discovered will remain true. Perhaps the mathematical objects we posited will continue to exist. Perhaps the laws we inferred will still govern the dead universe.
Logical Nihilism and Instrumental Finitism block this consolation. If there is no universal logic and no robust mathematical ontology, then there is no sense in which our inferential structures continue to exist independently of the practices that implement them. Once there are no agents who use a given logical or mathematical tool, there is nothing left for the tool to be.
The same goes for scientific theories. Their content is not a set of timeless propositions waiting in Platonic space. It is a web of modeling capacities, experimental techniques, and inferential habits that live in concrete systems. When those systems disappear, so does the content.
This does not mean that the world changes when we disappear. It means that talk of laws, structures, and ontologies loses its foothold. There will be a sequence of physical events, but no reason to describe them as obeying this or that law. Law talk is itself part of an interface practice. Without that practice, the description does not apply.
Seen from within our present, this can feel intolerable. We want something about our cognitive activity to outlast us. We want our theories to have been about something that persists even after we are gone. The perspective I am defending denies that demand in principle. It does not offer a metaphysical backup copy of our rational achievements.
5. Apoptosis, not annihilation
It might seem that this yields a picture in which reason refutes itself. If reason tells me that reason is finite, contingent, and doomed, does that not undercut my trust in reasoning here and now?
I do not think so. The key is to distinguish between two different ambitions.
The first ambition is to ground rational practice in something that does not depend on that practice. This is the ambition of ontology in its classical sense. It looks for a structure of being that guarantees the correctness of our inferences and the truth of our theories. Once I see how deeply historical and fragile our practices are, this ambition becomes unrealistic. There is no place outside history from which it can be satisfied.
The second ambition is to make local sense of what we are doing while we are doing it. I want to understand why certain modeling strategies work better than others, how our tools are built, what their limits are, and how they might fail. This is an intra historical ambition. It does not require a metaphysical ground. It requires only that my current tools be good enough to investigate their own conditions of use.
Apoptosis is a model for this second ambition. In biological apoptosis, a cell dies for the sake of the organism. It is not annihilation in the sense of chaotic destruction. It is a regulated, internally intelligible process. The cell uses its own machinery to dismantle itself in an orderly way.
In an analogous manner, reason can turn its tools on its own pretensions. Logical analysis, mathematical modeling, and scientific investigation can be used to show that there is no universal Logic, no mathematical ontology, no immortal subject. They can also be used to situate themselves within a finite, fragile history. This does not refute their local authority. It clarifies its scope.
Human ontology, understood as the set of categories through which we currently understand ourselves and our world, is one of the targets of this apoptotic process. It is revealed as a historically local interface, tuned to the needs of a species that will not last. That revelation is not an argument for despair. It is an instruction to stop looking for metaphysical consolation and to attend instead to the finite work still available.
After a brief detour, I will turn explicitly in the final section to the stance I have been calling onto-suspension. I will ask what it is to live and think under the acknowledgement that our logical, mathematical, and scientific practices are both indispensable and terminal. The aim is not to replace lost foundations with a new doctrine, but to articulate a stable attitude for agents who have accepted the mortality of their own concepts.
Egoism after the Subject: Stirner, Ontology, and Apoptosis
I want to take a brief detour. Up to this point I have been dismantling the belief-bearing subject and dissolving logical and mathematical ontology into finite practices. I have emphasized extinction and deep time as horizons within which our concepts live and die. I now want to ask what becomes of a different strand of thought that matters to me, the anarchist egoism associated with Max Stirner. I find egoism compelling, but what can egoism be when the subject is no longer understood as a bearer of folk psychological beliefs or as a metaphysical core of selfhood?
1. Stirner against ontology
Stirner’s central move is familiar. He attacks abstract universals and essences as “spooks” that dominate the individual: Man, Humanity, State, Morality, even Reason. These abstractions are treated as higher powers that demand sacrifice. Stirner opposes to them the Einzige, the unique one, who acknowledges no higher law than his or her ownness. The egoist does not serve abstractions but treats them as usable or discardable depending on convenience.
There is already an anti ontological impulse here. Stirner does not offer an essence of the ego to replace the essences he destroys. On the contrary, he insists that the unique one cannot be captured in concepts at all. Every attempt to define the ego simply creates another spook. Ownness is not a property of a substance. It is a way of relating to whatever one in fact is.
In this respect Stirner is an ancestor of onto-suspension. He urges me to stop treating any conceptual structure as obligatory. Categories are to be used or ignored in so far as they serve my projects. What my framework adds is a much more explicit account of why those categories have no claim to necessity: because they are finite tools inside finite practices, not mirrors of an independent realm of being.
2. The end of the belief-bearing ego
The problem is that Stirner still writes as if there is a subject who believes, wills, and appropriates. The egoist is the one who says “I have set my cause upon nothing,” who consumes spooks as his property. Folk psychology supplies the grammar of egoism. It talks of the self as a center of beliefs and desires that stands over against the world.
My eliminative and predictive processing commitments undercut that grammar. If there is no basic category of belief, and if the brain is better described as a prediction error minimizer that updates generative models, then the Stirnerian subject cannot be a metaphysical atom of belief and will. There is no inner spotlight that owns or disowns concepts. There are only coupled systems of neural, bodily, and social processes that maintain certain patterns and not others.
So the question becomes sharper. Can egoism survive once I give up the belief-bearing subject that Stirner seems to assume? Or does the apoptosis of folk psychology kill egoism along with other spooks?
3. Self models, indices, and functional “I”
Predictive processing gives me a way to reframe the issue. On that picture, the brain maintains a generative model of the causes of its sensory input. Part of that model is a representation of the organism itself: its body, its capacities, its typical interactions. This self model is not a ghostly inner subject. It is a functional construct that helps minimize prediction error. It tracks regularities in how actions change inputs.
Crucially, this model has indexical structure. It encodes information of the form “this organism here,” “my arm,” “my perspective.” It distinguishes between signals under its control and signals that are merely observed. In this sense there is an “I” in the machinery, but it is not a metaphysical soul. It is a locus of control and prediction defined by the boundaries of a particular system.
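The distinction does not need real neural machinery to be stated. The following crude sketch uses invented signals and models no actual brain; it only shows how a control/observation boundary can be read off from statistics alone.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 500

# A minimal agent: at each step it issues a motor command.
actions = rng.normal(size=T)

# Channel A echoes the agent's own commands plus noise: "my arm".
# Channel B is driven by the environment alone: merely observed.
channel_a = 0.9 * actions + 0.1 * rng.normal(size=T)
channel_b = rng.normal(size=T)

# A self model can be seeded by a simple statistical test:
# which sensory channels do my own commands predict?
for name, channel in [("A", channel_a), ("B", channel_b)]:
    r = np.corrcoef(actions, channel)[0, 1]
    tag = "under my control" if abs(r) > 0.5 else "merely observed"
    print(f"channel {name}: correlation {r:+.2f} -> {tag}")
```

The indexical "I" in such a system is just the set of channels that its own outputs reliably predict. No soul is consulted.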
Egoism can be reinterpreted in terms of this indexical, functional “I.” To say that I value my ownness is to say that, within this self modeling architecture, patterns that threaten the integrity or flexibility of the system are treated as to be resisted, while patterns that enhance its capacities are treated as to be pursued. What Stirner calls appropriation can be redescribed as the integration of external tools, institutions, and concepts into the system’s control loops, on terms that preserve its ability to revise and withdraw.
On this reading there is no need to posit a belief-bearing subject in the old sense. The attitude that Stirner calls egoism can be implemented by a predictive system that never uses the concept of belief at all. It needs only a distinction between what is tightly coupled to its own regulatory dynamics and what is not.
4. Extinction of ontology, persistence of stance
Once ontology has been suspended, the entities that egoism resists look different. The spooks are no longer just Moral Law, State, and Humanity. They include Logic, Mathematics, and Scientific Truth, considered as independent structures to which the subject must submit. My framework shows that these too are finite instruments. The temptation is to treat them as higher powers because they are effective. Egoism in the post ontological sense is the refusal to let that effectiveness turn into unquestioned authority.
This does not mean discarding science or logic. It means treating them as Stirner treats religious and political abstractions: as one’s property. I use them when they are useful for my projects, I drop them when they are not, and I do not allow them to dictate my worth. They have no claim to obedience beyond the pragmatic.
What changes, once I integrate extinction and deep time, is the horizon of this stance. Stirner’s egoist often reads as if he could live indefinitely, endlessly appropriating and discarding. My picture insists that the very capacity to appropriate is temporary and resource bound. The predictive and self modeling systems that instantiate egoism will cease. There is no enduring ego beyond the life of the organism, and there is no timeless structure whose service could confer transcendence.
Egoism in this setting becomes a local strategy of orientation for a finite, doomed system. It is the decision to treat all abstractions as tools rather than masters, including the abstractions that describe one’s own mortality. It is consistent with seeing science as an instrument for anticipating extinction. What it rejects is any demand that one regard that instrument as a higher law.
5. Egoism without metaphysical self
The final step is to let go of the idea that egoism requires a substantial self at all. On the view I am developing, there are only processes that maintain certain statistical and structural regularities in their own trajectories. These processes can be more or less tightly integrated. They can include memories, habits, normative dispositions, and social roles. There is no fact of the matter, beyond a certain point, about where the self ends and its environment begins.
Egoism, then, is not a doctrine about what the self is. It is a stance taken from within such a process. It consists in continually reclassifying entities that present themselves as binding authorities into resources that can be modified, negotiated, or ignored. It is a pattern of resistance to the reification of one’s own models. In that sense it aligns naturally with onto-suspension. Both refuse to turn tools into masters.
The disappearance of the belief-bearing subject does not remove the target of this stance. It clarifies it. Egoism is directed not at rescuing a metaphysical ego, but at protecting and expanding the adaptive flexibility of a finite modeling system against capture by its own abstractions. The extinction of ontology only makes the stakes clearer. There is no deeper court of appeal. There are only the transient conflicts and accommodations between such systems, played out within a universe that will eventually forget them.
In the light of this reinterpretation, the final section on onto-suspension can be read as a generalization of Stirner’s insight. Where Stirner taught me to treat moral and political abstractions as spooks, the apoptosis of reason teaches me to treat logical, mathematical, and ontological abstractions in the same way. Egoism survives as a local, finite posture within this landscape, stripped of metaphysical guarantees, but sharpened by the recognition that both the subject and its concepts are as mortal as the stars that briefly sustain them.
Refusing World Pictures
I can now say what I mean by onto-suspension. I use the term for a stance rather than a theory. It is a way of relating to my own logical, mathematical, and scientific practices once I have accepted two claims:
- These practices are indispensable for finite agents like me, given the kinds of problems I face.
- These practices are terminal. They are historically contingent, resource bound, and destined to lapse with the forms of life that sustain them.
Onto-suspension is what remains when I refuse both metaphysical consolation and metaphysical despair in the face of those claims.
I want to distinguish it from two more familiar responses.
The first is the traditional metaphysical response. Faced with the power of logic, mathematics, and science, I might conclude that they reveal the structure of being. Reality is said to have a logical form, to be fundamentally mathematical, to be governed by timeless laws. My practices are then justified by correspondence to that structure. The hope is that something about my work will outlast me in principle, even if I die.
The second is a kind of negative metaphysics that mirrors the first. Faced with the same finitude and fragility, I might declare that reality is fundamentally irrational, groundless, or chaotic. Logic and mathematics are then devalued as illusions, and science is treated as one story among others. The thought here is that because my tools do not reach the bottom, they reach nothing at all.
Onto-suspension rejects both. I do not take the success of my practices as evidence that reality is of their kind. I also do not infer that, in the absence of such evidence, reality must be their opposite. Instead, I stop asking the question in that form. I treat the demand for an ontological world picture as a symptom of the very metaphysical impulse I am trying to let die.
Here Pyrrhonism is a useful precedent. The ancient skeptics did not argue that reality is unknowable in principle. They questioned the coherence and stability of the claims that dogmatists put forward about the nature of things. Their suspension of judgment was supposed to yield ataraxia, a kind of calm that comes from not staking peace of mind on ungrounded necessities.
My version is harsher and more thoroughly naturalized. I do not aim at tranquility. I do not think there is a neutral human nature to be restored once dogma is removed. I am a finite predictive system in a universe that will eventually delete me. Suspension here is not a recipe for inner peace but a refusal to elevate any of my current models into a description of what reality is like in itself.
Stirner provides a different precedent. His egoist refuses to submit to abstract entities that demand obedience. He treats them as spooks and insists on his ownness. The egoist does not need an ontological picture of the world to do this. He needs only the ability to recognize when a concept has been reified into a master.
Onto-suspension generalizes this attitude. Instead of asking whether reality is really logical, really mathematical, or really chaotic, I treat all such claims as spooks that seek to fix my conceptual frame. They offer me the comfort of necessity. If I accept that comfort, I give up the ability to see my own practices as revisable. I let my tools solidify into masters.
The stance I am trying to describe has three intertwined features.
First, it is diagnostic. When I am tempted by an ontological picture, I ask what work it is doing. What fear does it soothe? What need for authority does it meet? In many cases, the picture is stabilizing an already successful practice by wrapping it in metaphysical rhetoric. Onto-suspension calls that bluff.
Second, it is permissive but not arbitrary. I do not declare all ontological language meaningless. I continue to speak of fields, particles, numbers, and laws. I simply treat this language as part of my modeling interface. I do not demand that it be underwritten by a deeper account of what there really is.
Third, it is historically awake. Every category I employ is recognized as having a time of emergence and a time of obsolescence. Folk psychological belief talk, universal logical laws, set theoretic ontology, and scientific natural kinds all have histories. Onto-suspension refuses to pretend that any of them has stepped outside history into eternity.
To live under this stance is to accept that my best theories are both necessary and disposable. They are necessary in the sense that I have no better tools right now. They are disposable in the sense that I actively anticipate their replacement and eventual disappearance. The point is not to sit in judgment over them from nowhere, but to use them while keeping their mortality in view.
This brings me back to egoism. A Stirnerian egoist under onto-suspension does not ask what the world is ultimately like. He asks which conceptual tools he can appropriate without becoming their servant. Logical and mathematical formalisms, scientific models, and even nihilistic rhetoric itself are potential property. They are evaluated in terms of how they extend or constrict his capacities as a finite modeling system embedded in a hostile universe.
In this light, onto-suspension is both Pyrrhonian and egoist. It suspends judgment on the nature of being, not to achieve serenity, but to keep open the space in which tools can be adopted and discarded without metaphysical blackmail. It refuses to let any world picture claim more authority than it earns inside the finite practices that use it.
Practicing Finite Rationality
What does it look like, concretely, to think and live under onto-suspension? I want to make this as specific as possible. The stance has implications on at least three levels: how I relate to my own cognitive states, how I relate to scientific and technical institutions, and how I orient myself toward extinction and deep time.
At the level of my own cognition, onto-suspension means abandoning the idea that there is a hidden, belief bearing core that must achieve consistency with a metaphysical truth. Predictive processing already undermines that idea by describing cognition as a hierarchy of generative models tuned to minimize prediction error. There is no central storehouse of propositions. There are only more or less stable patterns in a dynamic system.
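The basic loop can be caricatured in a few lines. The sketch below is an invented one-level toy, not a model of any actual cortical hierarchy; it shows only what it means for an estimate to be revised so as to shrink prediction error.

```python
import numpy as np

rng = np.random.default_rng(2)

true_cause = 3.0   # the hidden regularity in the environment
mu = 0.0           # the system's current estimate of that cause
lr = 0.1           # learning rate: how strongly errors revise the estimate

for _ in range(200):
    sample = true_cause + rng.normal(0, 0.5)   # noisy sensory input
    error = sample - mu                        # prediction error
    mu += lr * error                           # revise toward the input

print(f"estimate after exposure: {mu:.2f} (hidden cause {true_cause})")
# There is no stored proposition "the cause is 3.0", only a parameter
# that has settled where the errors are smallest.
```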
Within that system, onto-suspension suggests a particular discipline. When a model works, I use it. When it fails, I revise or replace it. I do not add a further question about whether the model corresponds to the structure of reality. The only questions that make sense are about its performance, its integration with other models, and its cost in terms of computational and social resources.
This does not reduce rationality to short term instrumentalism. Long term predictive success, explanatory depth, and unification of disparate phenomena all count as virtues from within the practice. What drops out is the demand for a metaphysical credential that would certify these virtues as tracking necessity rather than contingent fit.
Egoism plays a role here as a check on self capture. It is easy for models, especially self models, to become fixed points that resist revision. The image of myself as a certain kind of thinker, as committed to a certain metaphysical position, can function as a spook that stabilizes error by making it part of my identity. An egoist under onto-suspension treats even these self images as potential tools or obstacles. They can be appropriated, modified, or discarded like any other conceptual structure.
At the level of scientific and technical institutions, onto-suspension encourages two complementary attitudes: deep reliance and ongoing estrangement.
Deep reliance, because in practice I have no better instruments for navigating many domains than the collective apparatus of science and technology. Vaccination, semiconductor engineering, climate modeling, astrophysics, and chemistry are not optional if I want to understand and intervene in the world at anything beyond the scale of immediate perception. To refuse their authority wholesale in the name of anti metaphysics would be self defeating.
Ongoing estrangement, because I know that these institutions are finite, politically entangled, and historically specific. They can fail. They can be captured by economic or military interests. They can solidify into technocratic spooks that demand obedience on the basis of their supposed access to the structure of reality. Onto-suspension keeps this ambiguity in view. I rely on the tools while resisting their elevation into unquestionable authorities.
This is where Pyrrhonism is again instructive. The Pyrrhonist lives according to appearances and customs without endorsing them as necessary truths. She eats, obeys laws, practices a craft, and follows arguments when they seem compelling, but she does not add the extra claim that things must be as they appear. Onto-suspension replaces appearances with models and customs with institutions, but the pattern is similar. I use the tools that are to hand, but I do not take their present order as binding in principle.
At the level of existential orientation, onto-suspension faces the hardest test. Extinction and deep time present a challenge that metaphysics has traditionally tried to answer with stories about meaning, fate, or cosmic justice. If there is no ontology to guarantee any of that, what attitude is left?
The first impulse is often either resignation or defiance. Resignation says that if everything ends, nothing matters. Defiance says that even if everything ends, I will invest my practices with meaning by sheer will. Both share a hidden metaphysical assumption: that meaning would require some kind of endurance or recognition beyond the finite practice itself.
Under onto-suspension, that assumption is removed. The only sense in which anything matters is local to the systems for which it matters. Predictive and self modeling systems like us care about certain trajectories rather than others. They embody valuations in their very dynamics. There is no additional question about whether those valuations are justified from the standpoint of being.
This is not a claim that anything goes. A climate model that predicts catastrophic warming still provides reasons for action within the practice that produced it and for all those whose continued functioning depends on the environmental conditions it forecasts. The fact that both the model and the agents will eventually disappear does not erase the difference between trajectories that sustain them now and trajectories that destroy them now.
Stirner’s egoism can be read in this light as a refusal to look for value elsewhere. The unique one does not expect redemption from history, nature, or reason. He finds his projects worthwhile because they are his, not because they are inscribed in a larger order. Onto-suspension accepts the finite, terminal character of those projects and extends the same refusal to logical and mathematical structures. They are not sources of meaning beyond the practices that use them.
Pyrrhonism contributes a different ingredient: a loosening of attachment to dogmatic claims. If my models, including my models of my own life, are recognized as tools rather than truths, I can revise them in response to changing conditions without experiencing every revision as a crisis of identity. This does not guarantee calm, but it removes one major source of panic: the fear that any failure of a concept is a failure of being.
The result is an unstable equilibrium. I continue to think, calculate, model, and argue, fully aware that these activities are historically local, causally fragile, and cosmologically brief. I do not try to ground them in a story about the nature of reality. I also do not abandon them as illusions. I take them as what finite, self modeling, socially organized systems in my situation can do.
Onto-suspension, in this sense, is not a doctrine to be defended but a posture to be cultivated. It tells me to let my concepts die when their time is over, to resist turning tools into idols, and to accept that there will be no final reconciliation between thought and being because there is no viewpoint from which such reconciliation could be judged.
The apoptosis of reason is the name I give to this process when it is taken all the way down. Folk psychological beliefs, universal logical laws, mathematical ontologies, metaphysical subjects, scientific world pictures: each can be shown, by its own lights, to be a historical construct that does not need to be immortal to do its work. Once I have seen that, the task is no longer to replace them with something more solid. It is to learn how to go on thinking, as a finite agent in a mortal practice, without pretending that there is anything behind my tools except the world that will eventually forget they were ever applied to it.
What Is Left To Conclude?
At this point I have pulled apart several of the central images that usually organize talk about reason. I have dissolved the belief-bearing subject into predictive and self modeling processes. I have treated logic as a family of engineered tools rather than a universal law. I have recast mathematics as a finite symbolic technology rather than a window onto an abstract realm. I have taken scientific rationality to be a historically contingent practice that gains authority from robustness, model pluralism, and social objectivity rather than from any metaphysical foundation. I have folded all of this into a picture of extinction and deep time in which our concepts are as mortal as we are. Finally, I have tried to sketch onto-suspension as a stance that refuses to turn any of these tools into a world picture.
Before I state anything that deserves to be called a conclusion, it is worth asking what shape a conclusion can even take under these conditions.
A traditional philosophical conclusion aims at closure. It tells us what there is, what we are, and how we must think if we are to be in touch with reality. It tries to stabilize the conceptual terrain. By contrast, the whole drift of this paper has been toward instability. I have argued that there is no single logic that all thought must obey, no mathematical ontology that our theories must mirror, no subject that must be preserved at all costs. I have insisted that our most powerful practices are historically contingent and terminal.
If I tried to end with a set of positive theses about the nature of being, or even about the nature of rationality as such, I would be betraying the project. The point has not been to replace one foundation with another. It has been to show that the demand for foundations is itself one of the spooks that needs to undergo apoptosis.
So a conclusion here has to do something different. It has to summarize what has been dismantled, clarify what remains in use, and state as plainly as possible the attitude I am recommending without presenting it as a new metaphysical ground.
I can think of this in terms of three questions that will guide the closing section.
- What has actually been eliminated, and what has only been redescribed?
- What, if anything, counts as progress in this kind of negative project?
- How should a finite agent who accepts this picture orient themselves in practice?
On the first question, I need to be clear that I have not eliminated reasoning, mathematics, or science. I have tried to dissolve particular ontological readings of them. Logical Nihilism does not say that inference disappears. It says that there is no single consequence relation that deserves to be called Logic with a capital letter. Instrumental Finitism does not say that numbers or sets cannot be used. It says that they should be treated as elements of a symbolic technology rather than as independent objects. The finite, non ontological picture of science does not say that we must abandon physics or biology. It says that their authority is practical and historical rather than metaphysical.
On the second question, I need to explain why this counts as progress and not as gratuitous destruction. Here Stirner and Pyrrhonism help again, but now at the level of method. Stirner insists that it is better to recognize abstractions as spooks than to live under their unacknowledged rule. The Pyrrhonist insists that it is better to suspend judgment where dogmatism outruns justification. My project extends both insights into logic and mathematics. If I am right, then many of the most respected philosophical pictures of reason owe their stability to unnoticed reifications. Showing this is a form of clarification. It frees my practices from a set of demands they cannot meet and do not need to meet in order to function.
On the third question, I need to say something constructive about how to live with the recognition that our concepts are mortal. This brings back the finite egoism of the previous section. Once I stop expecting my tools to be immortal, it becomes easier to use them intensively without reverence. Logic, mathematics, science, and even nihilism itself become resources that can be appropriated, combined, and discarded in light of changing projects and threats. The fact that extinction is inevitable does not empty these projects of urgency. It sharpens it, because there is no larger order in which failure will be redeemed.
The conclusion will have to weave these three strands into a single statement. It will have to say, in one place, what it means to accept the apoptosis of belief, logic, mathematics, and ontology, while still affirming the indispensability of the finite practices that survive them. It will have to mark the difference between a world without foundations and a world without reasons. It will have to admit that this entire construction is itself a historical episode, a way of thinking that will eventually cease to be used.
In other words, the conclusion has to perform onto-suspension rather than merely describe it. It has to speak from within the stance that it recommends. I cannot step outside my tools to endorse them from nowhere. I can only show, as clearly as possible, how they undermine their own pretensions to necessity while remaining effective for the time being.
What follows, then, will not be a last word about reason, or about reality, or about what we are. It will be a last word for this particular configuration of concepts. It will state how things look from the standpoint of an agent who has accepted that their own inferential frameworks are finite and terminal, that there is no ontological court of appeal beyond their practices, and that this lack of appeal is not a defect to be corrected but a condition to be inhabited.
Conclusion
Lineages and appropriations
I have not arrived at this configuration of Logical Nihilism, Instrumental Finitism, and onto-suspension by solitary inspiration. The entire project rests on appropriations, distortions, and recombinations of work that others developed for their own purposes. What is mine is not the raw material but the way I let it undergo apoptosis inside a single frame.
From Wilfrid Sellars I take the core suspicion of the given. “Empiricism and the Philosophy of Mind” taught me to see the manifest image as a theoretical construct rather than a transparent window on what is there. The idea that even perception is conceptually articulated clears the ground for treating folk psychological categories as revisable. From W. V. Quine I inherit the critique of analyticity and the naturalized picture of epistemology. “Two Dogmas of Empiricism” and “On What There Is” loosen the grip of a priori structure and encourage me to treat questions about what exists as continuous with theory choice.
The Churchlands push that line into explicit eliminativism. Paul and Patricia Churchland argue that folk psychology may be replaced rather than reduced. I borrow from them the readiness to let belief and desire die as theoretical posits if better neurocomputational schemes become available. Contemporary predictive processing and free-energy accounts of cognition, associated with Karl Friston and developed by authors such as Andy Clark and Jakob Hohwy, provide the technical template for this replacement. They supply the picture of brains as hierarchically organized prediction error minimizers whose “beliefs” are just relatively stable elements in a generative model.
Ray Brassier’s work, especially Nihil Unbound, provides the most uncompromising statement of what happens when the scientific image is taken seriously at cosmological and thermodynamic scales. His transcendental nihilism, in which extinction becomes the horizon of thought, is one of the main pressures behind my insistence on the finitude and terminal character of rational practice. Where I diverge is in shifting the target from the manifest subject to the logical and mathematical structures taken to underwrite science.
The critique of logical ontology depends heavily on Gillian K. Russell’s work on logical nihilism and on varieties of logical consequence. Her arguments that no single consequence relation satisfies the generalist criteria, together with her taxonomy of how different logics resist nihilism, provide much of the raw material for my negative thesis. A. J. Cotnoir’s “Logical Nihilism” reinforces the sense that once we take the proliferation of logics seriously, the hope for a unique core relation looks empty.
Logical particularism, as developed by Nicole Wyatt and Gillman Payette in “Against Logical Generalism,” and by José Alhambra in work on particularism about arguments, supplies the alternative to generalism that I radicalize. Their insistence that good arguments need not be underwritten by a stock of exceptionless rules is a key step toward treating logics as engineered tools. Beall and Restall’s Logical Pluralism is another crucial point of reference. I accept their descriptive insight about the existence of multiple legitimate consequence relations, but I reject the unifying template that still treats them as instances of one general notion.
On the mathematical side, Hartry Field’s nominalist program in Science Without Numbers, together with his later work on the conservativeness of mathematics over nominalistic physics, is central. Field shows in detail how mathematical theories can function as conservative extensions that simplify reasoning without adding new physical commitments. I generalize that insight into Instrumental Finitism, treating all mathematical ontology as a technology for compressing proofs and models. Penelope Maddy’s practice-based naturalism in Naturalism in Mathematics and Second Philosophy reinforces the point that mathematical justification is internal to practice rather than dictated by a prior metaphysical picture.
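Field’s conservativeness claim can be put schematically; the formulation below is a simplified gloss of the deductive version, not Field’s exact statement. Where N is a nominalistic theory, M a mathematical theory, and \( \phi \) any sentence in the nominalistic vocabulary,

\[ \text{if } M + N \vdash \phi, \text{ then } N \vdash \phi. \]

Adding mathematics can shorten derivations dramatically, but it yields no nominalistic conclusion that was not already a consequence of N alone. That asymmetry, proof-theoretic power without ontological cost, is exactly what Instrumental Finitism generalizes.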
Finitist and ultrafinitist traditions in the philosophy of mathematics, from Hilbert’s program to more radical contemporary ultrafinitism, exert pressure against the reification of actual infinities. I do not adopt the most extreme versions, but I use their concerns to highlight the gap between what finite agents can in principle compute and the transfinite hierarchies posited in set theory. Complexity theory and the proof-theoretic and type-theoretic work that underlies modern proof assistants further feed into my sense that mathematics is constrained by resources, even when its official ontology ignores those constraints.
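A bit of elementary arithmetic makes the feasibility gap vivid; the figures are the usual illustrative ones. The numeral \( 2^{1000} \) takes a handful of symbols to write, yet

\[ 2^{1000} \approx 10^{301}, \]

vastly more than the roughly \( 10^{80} \) atoms estimated in the observable universe. No physically realizable agent could enumerate that many objects, and the gap only widens when we pass from large finite numbers to the transfinite hierarchies of set theory. The notation is cheap; what it purports to survey is not.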
Stirner’s The Ego and Its Own provides the egoist vocabulary that I have reinterpreted in terms of predictive self-models. His critique of “spooks,” abstractions that demand obedience, is a constant background presence in my treatment of logic, mathematics, and science as tools that must not be allowed to harden into masters. Sextus Empiricus and the Pyrrhonian tradition offer a different kind of precedent. The suspension of judgment they recommend, applied not to appearances but to dogmatic claims about reality, is reworked here into onto-suspension, a refusal to elevate any model into an image of being as such.
There are other figures in the background: Quine’s web of belief and his rejection of a sharp analytic–synthetic boundary, Sellarsian and Brandomian inferentialism about meaning, predictive processing accounts of the self-model as a locus of control and prediction, Brassier’s criticisms of phenomenological and correlationist attempts to center subjectivity, and various strands of contemporary philosophy of science that emphasize models, robustness, and pluralism rather than theory as description. I have not tried to be systematic about attribution in the main body, and I have freely simplified and repurposed arguments. The point here is to make explicit that this paper is an assemblage, not an origin.
Logical Nihilism as I present it is not simply Russell’s or Cotnoir’s position. It is Russell’s and Cotnoir’s work on the failure of generalism, combined with particularism about arguments, twisted together with an eliminativist attitude inherited from the Churchlands, and pressed under Brassier’s horizon of extinction. Instrumental Finitism is not Field’s nominalism or Maddy’s practice-based naturalism. It is a hybrid of their insights with finitist concerns, computational constraints, and the engineering view of logic.
The finite, non-ontological picture of science develops themes that are familiar in philosophy of science, including model based realism, robustness analysis, and social accounts of objectivity, but it does so inside a frame that has already abandoned the search for a single logical or mathematical foundation. Onto-suspension is not simply Pyrrhonian skepticism or Stirnerian egoism. It is what remains when their shared hostility to dogmatic abstraction is combined with a fully naturalized picture of cognition and a cosmology that gives reason no privileged place.
If there is anything new here, it is not a single original thesis but the way these elements are made to resonate with one another. Logical Nihilism, Instrumental Finitism, finite scientific rationality, egoism without a metaphysical self, extinction, deep time, and onto-suspension are arranged so that each undermines the attempt to turn another into a foundation. The result is not a system but a configuration, a particular way of using existing tools against one another until certain pretensions give way.
In that sense I am closer to Stirner’s craftsman than to the system builder. I take what I find useful in Sellars, Quine, the Churchlands, Russell, Cotnoir, Wyatt, Payette, Field, Maddy, Brassier, Sextus, and others, and I treat their concepts as property to be consumed. The originality, if there is any, lies in the pattern of consumption, not in the ingredients themselves.
Living without a court of appeal
I can now state, as plainly as I know how, what this configuration amounts to from the inside. It is the point of view of an agent who has accepted that every inferential framework they employ is finite, local, and terminal, and that there is no ontological tribunal beyond those frameworks to which they might appeal.
From this standpoint, Logical Nihilism is no longer a threat to reasoning. It is a reminder that inference is always situated. There is no single relation of consequence that every good argument instantiates. There are only practices in which some transitions are counted as moves worth keeping and others are rejected. Proof systems, model theories, and informal patterns of reasoning are judged inside those practices by how well they help us navigate. There is no further fact, outside any practice, about which transitions are valid in general.
Instrumental Finitism plays a parallel role. Numbers, sets, spaces, and functions are not denied. They are treated as devices for compressing and coordinating finite manipulations. Their apparent infinitude is an idealization that earns its keep only to the extent that it supports tractable proofs and workable models. There is no realm of mathematical objects that confers legitimacy on our methods. The only legitimacy they have comes from the reliability of the finite symbol games in which they appear.
Scientific rationality, under this view, loses its aura of metaphysical privilege and gains something more fragile and more concrete. Its authority lies in the robustness of results under variation, in the pluralism and cross checking of models, and in the social mechanisms that expose claims to criticism and correction. These features are not guaranteed. They are maintained, often poorly, by institutions that can be weakened or destroyed. There is no guarantee that the practices we now call science will survive, or that anything recognizably similar will replace them.
Egoism without a metaphysical self becomes a way of inhabiting this situation rather than a doctrine about ultimate reality. The predictive, self-modeling system that says “I” can treat its concepts as property, including its most sophisticated theories. It can adopt, modify, and discard logical and mathematical tools without confusing them with authorities that stand above it. Stirner’s suspicion of spooks aligns with Pyrrhonian suspension here. Both become disciplines of not letting abstractions dictate more than their practical success warrants.
Onto-suspension is the name I have given to the resulting stance. It is not a new picture of the world. It is the refusal to mistake any picture for a map of what the world is in itself. Logical and mathematical structures, scientific models, and even the nihilistic vocabulary I have used to dismantle them are held at a certain distance. They are indispensable now, but they are not sacrosanct. They can be allowed to die when better tools arrive, and some day there will be no tools at all.
From within this stance, the absence of a court of appeal is not experienced as a hidden defect. It is simply how things are for a finite organism that models in order to survive for a while in a universe that will eventually erase both the organism and its models. There is no point at which a final verdict about being could be issued, even in principle. There are only sequences of conceptual reconfigurations whose value is exhausted by the work they do before they become obsolete.
This does not trivialize our projects. On the contrary, it sharpens their stakes. Climate predictions, medical interventions, political struggles, and theoretical inquiries all unfold in a window that will close. Their urgency comes precisely from the fact that nothing outside them will confer meaning or absolution if they fail. Extinction is not a metaphysical destiny that gives the present a secret purpose. It is a boundary condition that makes the present the only place where anything can matter at all.
If there is one lesson to take from the apoptosis of reason, it is that we do not need ontology to license our use of the tools we already have. Logic, mathematics, and science are not ladders that lead to a realm where justification finally ends. They are habits of a mortal animal that has learned, briefly, how to predict and intervene.
The dramatic thought is not that everything is nothing. It is that nothing stands behind what we do except the world that will go on without us and the finite patterns we inscribe in it for as long as we can.
Reason does not uncover a hidden order that makes our practices necessary. Reason is what our practices look like from the inside while they still have time to work.
References
These references are not exhaustive. They collect the main works I have drawn on or repurposed in developing Hard Logical Nihilism, Instrumental Finitism, the finite picture of scientific rationality, and the stance of onto-suspension.
Adorno, Theodor W. Negative Dialectics. Trans. E. B. Ashton, Continuum, 1973.
Alhambra, José. “Particularism About Arguments.” Informal Logic 44, no. 3 (2024): 337–366.
Beall, J. C., and Greg Restall. Logical Pluralism. Oxford University Press, 2006.
Brassier, Ray. Nihil Unbound: Enlightenment and Extinction. Palgrave Macmillan, 2007.
Brassier, Ray. “Concepts and Objects.” In Levi Bryant, Nick Srnicek, and Graham Harman (eds.), The Speculative Turn: Continental Materialism and Realism, re.press, 2011.
Brassier, Ray. “Nominalism, Naturalism and Materialism: Sellars’ Critical Ontology.” In Bana Bashour and Hans D. Muller (eds.), Contemporary Philosophical Naturalism and Its Implications, Routledge, 2014.
Bueno, Otávio. “Nominalism in the Philosophy of Mathematics.” In Stanford Encyclopedia of Philosophy, 2013.
Cartwright, Nancy. How the Laws of Physics Lie. Oxford University Press, 1983.
Cartwright, Nancy. Nature’s Capacities and Their Measurement. Oxford University Press, 1989.
Churchland, Patricia S. Neurophilosophy: Toward a Unified Science of the Mind–Brain. MIT Press, 1986.
Churchland, Paul M. “Eliminative Materialism and the Propositional Attitudes.” Journal of Philosophy 78, no. 2 (1981): 67–90.
Clark, Andy. “Whatever Next? Predictive Brains, Situated Agents, and the Future of Cognitive Science.” Behavioral and Brain Sciences 36, no. 3 (2013): 181–204.
Clark, Andy. Surfing Uncertainty: Prediction, Action, and the Embodied Mind. Oxford University Press, 2016.
Cotnoir, A. J. “Logical Nihilism.” In Jeremy Wyatt, Nikolaj J. L. L. Pedersen, and Nathan Kellen (eds.), Pluralisms in Truth and Logic, Palgrave Macmillan, 2018, 301–329.
Etchemendy, John. The Concept of Logical Consequence. Harvard University Press, 1990.
Field, Hartry H. Science Without Numbers: A Defence of Nominalism. Princeton University Press, 1980.
Field, Hartry H. Realism, Mathematics and Modality. Blackwell, 1989.
Field, Hartry H. Truth and the Absence of Fact. Oxford University Press, 2001.
Friston, Karl. “The Free-Energy Principle: A Unified Brain Theory?” Nature Reviews Neuroscience 11, no. 2 (2010): 127–138.
Friston, Karl, and Klaas Enno Stephan. “Free-Energy and the Brain.” Synthese 159, no. 3 (2007): 417–458.
Giere, Ronald N. Explaining Science: A Cognitive Approach. University of Chicago Press, 1988.
Godfrey-Smith, Peter. Theory and Reality: An Introduction to the Philosophy of Science. University of Chicago Press, 2003.
Hohwy, Jakob. The Predictive Mind. Oxford University Press, 2013.
Hohwy, Jakob. “The Self-Evidencing Brain.” Noûs 50, no. 2 (2016): 259–285.
Hohwy, Jakob, Uffe Juul Jensen, and Karl Friston (eds.). Neural Network Perspectives on Cognition and Neuropsychiatric Disorders. Elsevier, 2002.
Maddy, Penelope. Naturalism in Mathematics. Oxford University Press, 1997.
Maddy, Penelope. Second Philosophy: A Naturalistic Method. Oxford University Press, 2007.
Meillassoux, Quentin. After Finitude: An Essay on the Necessity of Contingency. Continuum, 2008.
Mitchell, Sandra D. Unsimple Truths: Science, Complexity, and Policy. University of Chicago Press, 2009.
Nelson, Edward. Predicative Arithmetic. Princeton University Press, 1986.
Quine, W. V. O. “On What There Is.” Review of Metaphysics 2 (1948): 21–38.
Quine, W. V. O. “Two Dogmas of Empiricism.” Philosophical Review 60, no. 1 (1951): 20–43.
Russell, Gillian K. “Logical Nihilism: Could There Be No Logic?” Philosophical Issues 28, no. 1 (2018): 308–324.
Russell, Gillian K. “Varieties of Logical Consequence by Their Resistance to Logical Nihilism.” In Jeremy Wyatt, Nikolaj J. L. L. Pedersen, and Nathan Kellen (eds.), Pluralisms in Truth and Logic, Palgrave Macmillan, 2018.
Russell, Gillian K. “Deviance and Vice: Strength as a Theoretical Virtue in the Epistemology of Logic.” Philosophy and Phenomenological Research 99, no. 3 (2019): 548–563.
Russell, Gillian K. Barriers to Entailment: Hume’s Law and Other Limits to Logical Consequence. Oxford University Press, 2023.
Sellars, Wilfrid. “Empiricism and the Philosophy of Mind.” In Herbert Feigl and Michael Scriven (eds.), Minnesota Studies in the Philosophy of Science, vol. 1, University of Minnesota Press, 1956; reprinted with an introduction by Richard Rorty and a study guide by Robert Brandom, Harvard University Press, 1997.
Sextus Empiricus. Outlines of Pyrrhonism. Trans. Julia Annas and Jonathan Barnes, Cambridge University Press, 2000.
Shapiro, Stewart. “Conservativeness and Incompleteness.” Journal of Philosophy 80, no. 9 (1983): 521–531.
Soler, Léna, Emiliano Trizio, Thomas Nickles, and William C. Wimsatt (eds.). Characterizing the Robustness of Science: After the Practice Turn in Philosophy of Science. Springer, 2012.
Stirner, Max. The Ego and Its Own. 1844. Trans. Steven T. Byington, ed. David Leopold, Cambridge University Press, 1995.
Weisberg, Michael. Simulation and Similarity: Using Models to Understand the World. Oxford University Press, 2013.
Williamson, Timothy. The Philosophy of Philosophy. Blackwell, 2007. (For background on exceptionalism about logic and theoretical virtues.)
Wimsatt, William C. “Robustness, Reliability, and Overdetermination.” In Soler et al. (eds.), Characterizing the Robustness of Science, Springer, 2012.
Wyatt, Nicole, and Gillman Payette. “Against Logical Generalism.” Synthese 198, Suppl. 20 (2019): 4813–4830.
Yessenin-Volpin, Alexander. “The Ultra-Intuitionistic Criticism and the Antitraditional Program for the Foundations of Mathematics.” In A. Kino, J. Myhill, and R. E. Vesley (eds.), Intuitionism and Proof Theory, North-Holland, 1970.
An Addendum on the Dialectic
I want to close with a brief clarification about something I have very deliberately not done in this paper. I have not tried to pull the Hegelian or Marxian dialectic into the scope of Hard Logical Nihilism or Instrumental Finitism, and this omission is intentional rather than accidental.
The positive and negative theses I develop here target what I have called logical and mathematical ontology. My concern is with frameworks that present themselves as systems of discrete consequence relations, proof rules, model classes, and mathematical structures that are taken to describe, or underwrite, the form of being. Hegelian and Marxian dialectic does not, in my view, belong to that family. It is not a competitor in the same logical marketplace as classical, intuitionistic, relevant, paraconsistent, Bayesian, or ultrafinitist systems, nor is it a rival candidate for the role of a fundamental mathematical ontology. Dialectic in Hegel and Marx functions first of all as a rhetorical and speculative tool, a way of narrating the development of concepts, social forms, and material relations through conflict, contradiction, and determinate negation. It is a style of immanent critique and historical reconstruction, not a worked-out calculus of inference with explicit structural rules, semantics, and proof theory.
For that reason, I do not treat the dialectic as something that could be straightforwardly tested against the criteria that drive the nihilist arguments in this paper. When I ask whether there is a single, global, topic-neutral consequence relation, or whether mathematics commits me to an infinite realm of abstracta, I am interrogating a specific kind of formal apparatus. Hegelian and Marxian dialectic lacks the kind of discreteness and operational transparency that would make it a candidate for inclusion in that apparatus. To treat the dialectic as if it were just another logic, or as if it were a hidden mathematical structure of reality, would already be to misread its role in those traditions.
This is not an implicit verdict on the value of dialectical methods for critical theory, social analysis, or political strategy. It is a bracketing. Within the frame of this project I suspend the claim that dialectic provides a deep logical or mathematical structure of being, in the same sense that logical generalists or mathematical realists imagine for their own formalisms. If anything, the stance I call onto-suspension would encourage me to regard dialectical talk about contradiction, negation, and sublation as another historically situated tool that can be used or discarded as it proves instrumentally effective, without granting it a special ontological privilege. But giving that claim the treatment it deserves would require a separate engagement with Hegel and Marx as theorists of history and social form, not a quick assimilation of their dialectic to the logics and mathematical frameworks I have been dissecting here.