Since the emergence of computing as a mode of investigation in the sciences, computational approaches have revolutionised many fields of inquiry. Recently in philosophy, the question has begun rendering bit by bit—could computation be considered a deeper fundamental building block to

The online version contains supplementary material available at

Given the cross-disciplinary nature of the topic, terms used in this paper which may carry different meanings in different fields are clarified here. The intended interpretation of each term in this paper is typically the mathematical definition:

It is a hallmark of human behaviour to use the most advanced technology of the era to analogise our living experience: from the industrial age, when we put the “wheels in motion” for a mechanised vernacular of body parts, to when the thought “sparked” that we might instead consider our “genetic programming” in view of our biology. As we continue using our inventions as our conceptualisations, perhaps we reach a level of technological maturity where the analogy in fact becomes reality. The question of whether our reality could in fact be a high-fidelity computer simulation is fast becoming a subject of interest in philosophy, and more recently in physics and computer science. Whether or not reality is

The concept of a simulated reality was first posed formally in a publication [

Computer generated fractal tree

Bostrom’s work does not directly argue that we inhabit a simulation (nor does this work). However, the argument supposes,

Alternatively, a large number of simulated universes could be initialised at once, and evolve in parallel. Other academics and authors have expanded upon this idea. Busey [

These modern scientific descriptions echo an earlier philosophical proposal of

Other physicists and computer scientists have theorised more specifically on the construct of spacetime as arising from computation. Most notably, MIT professor Seth Lloyd postulates that the geometry of spacetime is a construct which can be derived from underlying quantum information processing [

Brian Whitworth has published on “the emergence of the physical world from information processing" [

The conflict of relativity theory with quantum mechanics is widely considered to be

This paper proposes a new construction for the way in which the physics of our reality may inherently emerge from laws of computation. The proposed model is not one of a virtual reality, but rather, a model rooted in the algorithms applied to modern computational physics. The model goes further than providing analogies to classical physics in that it offers a mathematical underpinning for exploring consistencies between the nature of computed properties and the nature of observable reality. Like Whitworth, the continuum computing theory proposed in this paper is predicated on a concept of

This work adopts as its basis equations only the most fundamental physical laws, in the form of the non-relativistic conservation equations. Numerical solution of the differential equations given by the governing conservation laws, via the proposed computing construct, is outlined through a simple set of premises predicated on numerical stability and logical optimisation. This inherently constructs a fused spacetime, and relativistic effects on the macroscale, qualitatively consistent with our observational reality.

The conceptualisation underlying this work diverges in an important way from previous works in simulation theory. Previously, Bostrom’s “Ancestor Simulation" implies the

This work therefore proposes an explanation for the fused nature of time and space on the basis of numerical stability constraints derived from computational continuum mechanics. By constructing a numerical method which is both computationally stable and logically optimised, it is possible to demonstrate mathematically qualitative congruities with relativity, as is explored in this paper. This model also gives rise to the concept of a continuum-quantum border at the level of the computational cell, which offers a means of conceptually reconciling the disjunctive physics of the classical and quantum scales.

Though the proposed model must be classed as speculative, it provides a framework for reconsidering many of the puzzling paradigms of modern physics, namely the coupling of time and space in relativity. Ultimately, the proposed model aims to provide a new explanation for the observed physics of our universe, and a computing conceptualisation which could offer utility to other physicists and philosophers in the development of new theory.

At the heart of computational science lies the problem: how does one model a continuous world-view as a discrete set of data?

If we run our finger along a line in space, and stop at any given point, there are very many quantities ascribed to that point. If we were to write a long list of all of the kinds of descriptions we can assign to that point (e.g. location, temperature, state of matter, etc.) and then move our finger to any other point in space, we can add a column, turn our list into a table and populate the cells. No matter where we point, we can describe the position by assigning values to the same set of quantity types. We can point infinitely many times between those first two finger placements and assign a value; however, to what resolution are those values unique? As two data points become closer and closer together they may become indistinguishable depending on the nature of the property and our means of perception. That is, given our human perceptive limits, we would reach a resolution threshold where the representation via discrete data is indistinguishable from the continuous world.

What then if we replaced our finger pointing with a microscope? Take the example of pressure in a gas: generally at the macroscopic scale, pressure can be represented by a scalar field. Zooming into the microscopic or atomistic scale, where the length scale of view approaches the mean-free particle path, the thermodynamic description of pressure fundamentally shifts to a description of the motion (mass and velocity) of individual molecules. In other words, on different scales, the list of descriptors fundamentally shifts. However, the macroscale is always an emergent view of the microscale.

Consider then, all of space being divided into a finite number of cells. Like pixels on a screen containing a single scalar quantity (colour), except each cell contains a large number of properties which collectively describe a complete state in that region of space. This cell exists at the resolution limit where its data gives rise to all emergent properties at every broader scale. This is our defined

Continuous quantities defined in space, which evolve in time, can be described mathematically via partial differential equations (PDEs). Firstly, take the simplified example of the motion of a rigid object or particle (think of a perfectly rigid stone): its position can be determined through 6 parameters (6 degrees of freedom: 3 translational and 3 rotational) at every point in time. The dynamics of this rigid object occur within a finite-dimensional configuration space. The configuration of a continuous medium, however (think of a fluid), occurs within an infinite-dimensional configuration space. This is the key difference between a particle-based description and a continuum-based description. It renders the latter much more difficult to solve, typically going beyond the limits of pen-to-paper mathematics and requiring numerical methods (and computing power) for practical solution.

The numerical (computational) solution can only ever achieve an approximation of the true continuous solution. However, the higher the resolution, the closer this approximation converges to reality.

A PDE for a single given variable

The most fundamental laws of nature, which govern how quantities evolve from processes and whether processes can occur, are the conservation laws. A conservation law is a continuity equation, expressed mathematically via PDEs. These equations define the relationship between the “amount” of a quantity and the “transport” of a quantity:

In classical physics, these laws must minimally include the conservation of: mass (matter), momentum, energy, and electric charge.

A generalised conservation law is given by:
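In the standard form used throughout the finite-volume literature (the symbols $u$ for the conserved density and $\mathbf{f}(u)$ for its flux are illustrative choices), such a law reads:

```latex
% Differential form: local balance of a conserved quantity u with flux f(u)
\frac{\partial u}{\partial t} + \nabla \cdot \mathbf{f}(u) = 0
% Equivalent integral form over a fixed control volume V with boundary \partial V
\frac{\mathrm{d}}{\mathrm{d}t}\int_{V} u \,\mathrm{d}V
  + \oint_{\partial V} \mathbf{f}(u)\cdot\mathbf{n}\,\mathrm{d}S = 0
```

The integral form states precisely the “amount”/“transport” relationship described above: the quantity inside a region changes only by the flux through its boundary.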

A conservation law applied across two neighbouring computational cells, defined with initial data, naturally defines a Riemann problem [

In one dimension we define our computational cells in

Finite volume discretisation with one spatial dimension and one temporal axis, with positions in space defined as

For a set of interdependent variables (i.e. which describe a state), we obtain a

The integral form of equations applied over the defined finite volume can be equated to a surface integral via Green’s theorem:
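Integrating over one cell and one time step yields the conservative update formula, reconstructed here in its textbook first-order form (the cell-average and flux symbols are illustrative):

```latex
\mathbf{U}_i^{\,n+1} = \mathbf{U}_i^{\,n}
  - \frac{\Delta t}{\Delta x}\left(\mathbf{F}_{i+1/2} - \mathbf{F}_{i-1/2}\right)
```

Here $\mathbf{U}_i^{\,n}$ is the cell-averaged state in cell $i$ at time level $n$, and $\mathbf{F}_{i\pm 1/2}$ are the intercell fluxes at the cell boundaries (e.g. obtained from local Riemann solutions, as introduced above).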

It is clear from this time-explicit numerical update formula how all subsequent time solutions evolve from given initial data (

If the time step is too large, and the state data of a given cell propagates beyond the adjacent cell within a single time step, then state information is lost. This leads to error and, ultimately, instability. The dimensionless CFL number which ensures stability depends on the numerical scheme and the number of dimensions. For the simplest one-dimensional problems it is derived as
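As a sketch, assuming the textbook one-dimensional condition $\Delta t \le \mathrm{CFL}\,\Delta x / |a|_{\max}$ (the function name and interface here are illustrative):

```python
def stable_timestep(dx, wave_speeds, cfl=0.9):
    """Largest stable explicit time step for a 1D cell of width dx.

    The CFL condition prevents the fastest characteristic wave from
    crossing more than one cell per step, so no state information is
    lost between a cell and its neighbours.
    """
    a_max = max(abs(a) for a in wave_speeds)  # fastest signalling wave
    return cfl * dx / a_max
```

With cfl = 1 the fastest wave exactly reaches the cell boundary in one step, the maximum stable time step described below.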

Therefore the maximum time step in a computational cell is the time at which the fastest characteristic wave crosses a cell boundary (Fig.

The construction of the proposed continuum computational model is laid out in the following premises:

Governing equations and computability

Firstly, it is important to note that the system being modelled is constituted only of the most fundamental physical laws: the full set of non-relativistic conservation laws. That is, relativistic phenomena (and the corresponding form of the equations) are not assumed a priori. The stable and optimised numerical solution of the system (locally across cells) under the proposed computing construct leads to the resultant relativistic effects macroscopically, as will be presented.

Before presenting the computational construct applied to the solution of these system equations, we first consider the computability of such equations. The mathematical nature of the governing equations determines the class of numerical methods required for solution. These different classes of numerical methods can be assessed in terms of their computational complexity. To define this concept clearly: computational complexity, or algorithmic complexity, refers to the amount of resources required to execute an algorithm. Memory is the resource for the storage of data (space complexity), and computational time is the resource for the mutation of that data (time complexity) and depends upon the number of elementary operations required to compute the data update [

Therefore, as a 0th step to the outlined premises, we first assess any constraints or requirements on the governing non-relativistic continuity equations in terms of their computational complexity and computability. To put it another way: could all conceivable laws of physics be computed, or rather, are there computational constraints on the form of the governing laws of physics as we observe them? Specifically, we explore the computational basis for the existence of a finite maximum signal velocity (the speed of light—

As a counterpoint, let us first consider the case where the fastest signalling wave, i.e. the maximum speed of information propagation, is permitted to be infinite. Take as an example the rearrangement of the electromagnetic wave equation in the limit of

Let us now consider the case where the fastest signalling wave of the system has a finite maximum value. This defines the fundamental laws of physics of the universe as hyperbolic. The domain of dependence for a specified point in a hyperbolic system of equations is the space–time envelope enclosed by the maximum and minimum speeds of information propagation; the maximum stable time-step therefore depends upon the absolute maximum wave speed. This is the concept enforced by the CFL condition. Limiting the time-step in this way therefore permits the construction of a computational stencil, which spans a small local collection of contiguous cells which contains the physical domain of dependence, and which the numerical method uses as input data for the computation of a cell-state update. The exact numerical method defines the size of the computational stencil (

Further, for a time-explicit hyperbolic system, where the applied numerical update scheme is of the form of Eq.
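A minimal instance of such a time-explicit stencil update, using first-order upwind fluxes for linear advection (an illustrative choice of scheme, not one prescribed by this paper):

```python
def upwind_step(u, a, dx, dt):
    """One explicit update of du/dt + a*du/dx = 0 with a > 0, periodic domain.

    Each new cell value depends only on a compact stencil (the cell and
    its upwind neighbour): the discrete domain of dependence is local,
    as the hyperbolic character of the equation requires.
    """
    c = a * dt / dx  # Courant number; stable for 0 <= c <= 1
    return [u[i] - c * (u[i] - u[i - 1]) for i in range(len(u))]
```

With the Courant number at its stability limit (c = 1), the update reduces to an exact shift of the data by one cell per step.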

From this, let us therefore consider a fundamentally computational universe, where all of space

To summarise (where:

Underlying discretisation of space and time

The underlying computational construct assumes all of space is a continuous medium which is discretised by a mesh of computational cells. In three-dimensional space this represents a three-dimensional finite volume cell. Therefore, the cell represents a finite region of space, containing a fundamental data set of discrete quantities, which collectively gives rise to all emergent properties at the continuum macroscale (Fig.). The data associated with each computational cell is evolved through some set of elementary operations, which results in the mutation of that data. This data mutation represents a discrete evolution in time. This treatment of space and time as inherently different things—space as the underlying data structure, time as the actual computation—is a conceptual cornerstone of a simulation theory grounded in computational physics. The key issue in this defined systemisation is the apparent de-coupled treatment, therefore, of space and time. It is through the subsequent premises of this work, based on principles of continuum computing, that a logical construct for spacetime coupling for such a discretisation is proposed.
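The decoupled treatment can be made concrete with a toy cell record (the field and method names here are purely illustrative assumptions, not part of the proposed construct):

```python
from dataclasses import dataclass, field

@dataclass
class Cell:
    """A finite region of space holding fundamental discrete state data.

    Space is the data structure (size, state); time is the computation:
    each mutation of `state` advances `local_time` by one discrete step.
    """
    size: float                                 # spatial extent of the cell
    state: dict = field(default_factory=dict)   # e.g. mass, momentum, energy
    local_time: float = 0.0                     # accumulated computed real time

    def update(self, new_state, dt):
        """One elementary mutation of the cell data: a discrete time step."""
        self.state = new_state
        self.local_time += dt
```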

This computational cell construct, in combination with the numerical update formulas used to compute the system solution, implies a model which is fully deterministic. That is, a computed state is determined completely from the preceding state. As per the 0th premise, the solution of hyperbolic continuity equations implies a finite domain of dependence (where

Necessary stability constraints

As per the introduction, the deterministic evolution of the system in time from the initial state data defined in space is governed by the system of conservation equations. For the system to evolve in a computationally stable manner, the CFL condition must be satisfied. The CFL condition requires that a maximum wave speed be identified, and that it serve to limit the discrete time steps. As detailed in the 0th premise, the finiteness of a maximum signalling velocity for information can be assumed, based on underlying computability requirements on the governing system equations.

The value of the maximum wave-speed depends on the system being modelled. Considering the defined computational cell which contains the fundamental state data set, the wave speeds of information propagation depend upon the characteristics of the full set of conservation equations computed across the cells. The fastest wave speed (

Optimisation of the computational step

In a standard continuum computing model, the time step is restricted globally by the overall smallest stable time step computed in a given cell, in order to achieve numerical stability and such that time evolves in uniform steps across the domain. After every evolution we have global time equivalence. Therefore, every cell which has a

Due to this observed time-step inefficiency, mixed-resolution meshes with local time-stepping were proposed by Osher in [
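The global restriction can be sketched as follows (the interface is illustrative): a single refined cell throttles every other cell in the mesh.

```python
def global_timestep(cell_sizes, a_max, cfl=1.0):
    """Time-equivalent stepping: every cell advances by the single
    smallest stable step found anywhere in the mesh."""
    return cfl * min(cell_sizes) / a_max

def wasted_factor(cell_sizes):
    """How far the largest cell is throttled below its own stable limit."""
    return max(cell_sizes) / min(cell_sizes)
```

For a mesh of unit cells containing one cell ten times finer, every large cell is forced to advance at a tenth of its own stable limit.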

Diagram of a typical local-time-stepping scheme. Cells are refined spatially by a refinement factor, and a corresponding time reduction factor achieves global time equivalence with the base cell after sub-cycled smaller time steps

It is proposed here that the system is not constrained at all by time equivalence. By relaxing this constraint we are able to maximally optimise computation over freely defined cell-sizes. By grounding the algorithm in the logical basis of (i) numerical stability, (ii) freely varying cell-size, and (iii) computationally optimised time-stepping, the construction permits information fluxes evolved by time steps of different sizes at cell boundaries. Though the sizes of the time steps may differ, all cells evolve their integer steps simultaneously. That is, we may consider the simulation to evolve in iterates as discrete global updates, with the stable time step being computed at every cell boundary.
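Relaxing time equivalence, a stable step can instead be computed at each cell boundary (the convention of limiting by the smaller of the two adjacent cells is an assumption made for this sketch):

```python
def boundary_timesteps(cell_sizes, a_max, cfl=1.0):
    """Stable time step at each interior cell boundary, limited by the
    smaller of the two adjacent cells. No global minimum is imposed;
    every boundary still advances by one step per global iterate."""
    return [cfl * min(left, right) / a_max
            for left, right in zip(cell_sizes, cell_sizes[1:])]
```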

The idea can be summarised as enforcing computational optimisation instead of enforcing time-equivalence:

Under a continuous time-value axis, for a non-linear system or non-uniform grid, the maximum stable time step varies between cells, and in a time-equivalent simulation, becomes limited by the smallest global

Under a time-iterate axis, the characteristics can be considered scaled with respect to

As shown in the diagram of Fig.

The derivation of the conservative numerical update formula presented earlier in Eqs.

The physics of every individual cell pertains to its own locally determined

This initial analysis presents a mathematically qualitative assessment of how the proposed continuum computing model naturally gives rise to aspects of observed reality, consistent with Einstein’s theory of relativity. This paper explores the premise of fused spacetime, aspects of special relativity, and some general relativistic effects as a minimal basis for further explorations.

One of the fundamental conceptual difficulties in proposed simulated reality theories is that computational simulations tend strongly to decouple the concepts of space and time, whereas relativity implies that space and time should not be decoupled. The core postulate of this paper is a logical construction whereby stability constraints naturally enforce a strict coupling between space, as the underlying data structure, and time, as the computational evolution.

This coupling emerges as follows: within a reality comprised of continuum-type computational cells, the interdependency of time and space arises necessarily from the stability constraints of the numerical method applied to solve conservation equations via a discrete quantity formulation. If we suppose reality behaved as a continuum-type computation, then the time step must become a function of discrete space under the CFL condition. The dependency of time on space must therefore also be a function of the fastest speed of

The coupling of space and time in forming 4-dimensional fused spacetime underlies the relativistic phenomena of our macroscale universe. One could consider this postulate in more philosophical terms, noting the key difference to other works in simulation theories. Rather than explicitly

The concept of a

We consider first the flat spacetime in the absence of gravitational fields. The core postulates of special relativity are that the laws of physics are invariant in all inertial frames of reference and that the speed of light in a vacuum is the same for all observers [

With reference to our unit cell, we consider a flat spacetime 1D mesh comprised of constant sized

(Left) 1D computational unit-cell as defined above. (Right) 1D cell containing a non-zero matter-velocity with reduction in stable computed

For a cell region containing a non-zero matter-velocity (

The general qualitative argument is as follows: where a non-zero matter velocity increases the total rate of information propagation, computed real time must dilate to maintain the numerical stability condition. The exact manner in which the matter-velocity and the speed of light are positively constructive
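The qualitative argument can be sketched numerically, assuming for illustration Euler-like characteristics whose fastest wave is $|v| + c$ (the exact manner of constructive combination is left open above):

```python
def cell_timestep(dx, c, v=0.0, cfl=1.0):
    """Stable computed step for a cell whose matter moves at speed v.

    Illustrative assumption: the fastest signalling wave is |v| + c, as
    for Euler-like characteristics. A moving cell then computes a smaller
    real time step than a stationary one: time dilation from stability.
    The ratio c / (c + |v|) is qualitative only, not the Lorentz factor.
    """
    return cfl * dx / (c + abs(v))
```

Any non-zero matter velocity strictly reduces the computed real time step relative to the stationary cell, which is the dilation effect argued for above.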

In relation to the core postulates, it is important to note the following: in an inertial reference frame where a local domain of cells contains a uniform matter velocity, all cell data evolves uniformly in discrete steps of real

The flux associated with base matter-velocities is non-zero where there is a velocity differential across a cell border. Where information about an event occurring in a fast-moving reference frame crosses the cell border into a stationary reference frame (or vice-versa), information flux and stability depend upon these velocities. The processing of the information (as it passes across cell borders) defines a local Riemann problem between fixed computational cells. In the time explicit model defined by the numerical update formula of Eq.

Additionally, within locally computed stable time steps, the speed of light is constant within every cell,

It is important to note here that the arguments are presented in a qualitatively mathematical form to demonstrate the rudimentary consistencies with relativity theory. Further developments, such as deriving Lorentz factor scaling, proving that general covariance is preserved, and quantitatively demonstrating synchronisation within frames, would require the definition of specific update schemes for the simulation, and the construction of an underlying mesh topology. The arguments presented in this work are limited to these core qualitative and conceptual bases. However, further explorations are invited, such as including different, more complex, and even unstructured mesh topologies paired with various numerical update schemes in order to derive further consistencies with properties of relativity.

In this section we explore how the computational construction produces gravitational time dilation and the gravitational deflection of light. Again, it is somewhat futile to reason

(Left) 2D visual depiction of spacetime curvature in the presence of massive bodies, source: [

General relativity describes the manner in which stress or energy in the universe (specifically in the presence of large masses) curves and warps spacetime. The geometry of spacetime is described through the metric tensor

In regions of a numerical simulation where higher precision is required, this can be achieved through higher resolution (or higher order methods). Where higher precision is required in regions of high complexity, greater resolution then results in greater information density. Though it seems reasonable to assume that regions in space of high density and energy are associated with regions of greater complexity (requiring higher numerical precision), a definitive relationship cannot be drawn without a deeper understanding of the specific computational construction. Therefore, it is simply assumed, from the logical computing argument, that in order to optimise computational resources, regions of higher precision (smaller cell size) and lower precision (larger cell size) are freely permitted. Regions of refinement and dispersement of the underlying computational mesh create the topology of the fabric of space in a computational reality. Where a uniform mesh represents flat spacetime, a non-uniform mesh produces a warped spacetime. The prototypical depiction of the fabric of space, warped in the presence of large masses, is shown in Fig.

Equivalent wave speeds

The computational explanation of gravitational time dilation can be laid out very simply: where we have defined local real time to be computed as a function of cell size and maximum wave speed, we observe that the smaller the finite computational cell volume the smaller the local computed stable time step
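A sketch of this argument, under the assumed local rule $\Delta t_i = \mathrm{CFL}\,\Delta x_i / c$ (the function interface is illustrative):

```python
def accumulated_real_time(cell_sizes, c, n_iterates, cfl=1.0):
    """Local real time accrued after a number of global iterates.

    Refined (smaller) cells compute smaller stable steps, so they
    accumulate less real time per iterate: the computational analogue
    of gravitational time dilation in regions of mesh refinement.
    """
    return [n_iterates * cfl * dx / c for dx in cell_sizes]
```

After the same number of global iterates, a cell a quarter the size of its neighbour has accumulated a quarter of the real time.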

Thus far, we have presented all of the theory in one spatial dimension and one time dimension. If local time is computed as the optimum time step based on the given cell dimension, what then when the mesh is extended to multiple spatial dimensions? We extend the theory logically in a second dimension. Maintaining the same principle, consider local time to be computed optimally at the borders defining

Under the proposed computational model and its logical extension to higher dimensions, we observe that matter dynamics are influenced by a non-uniform mesh, and furthermore, light automatically bends according to regions of refinement and dispersement. The dynamics and stability of this model are best demonstrated through an example two-dimensional numerical simulation.

Reducing the complex system of conservation equations to the simplest possible demonstration equation, let’s consider the conservative linear advection of a massless photon at the speed of light. Our test equation is given by:
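In conservative form, reconstructed here in standard notation ($\phi$ denotes the advected scalar and $\mathbf{v}$ the constant advection velocity, with $|\mathbf{v}| = c$):

```latex
\frac{\partial \phi}{\partial t} + \nabla \cdot (\mathbf{v}\,\phi) = 0,
\qquad |\mathbf{v}| = c
```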

For this single advection equation there is simply one characteristic wave speed

The scalar advected variable

Setting CFL = 1 and the photon to have a velocity

The real local time step and scaled wave speeds are computed for both
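A one-dimensional analogue of this iterate-stepped evolution can be sketched as follows (the actual demonstration is two-dimensional on a distorted mesh; the periodic domain and the CFL = 1 shift update are simplifying assumptions):

```python
def advect_iterates(u, cell_sizes, c=1.0, n_iter=1):
    """Advect a profile at CFL = 1 under time-iterate stepping.

    Each global iterate moves the state exactly one cell downstream
    (the upwind update with Courant number 1 reduces to a shift), while
    every cell advances its own local real time dt_i = dx_i / c.
    Non-uniform cells therefore accumulate different real times.
    """
    t_local = [0.0] * len(u)
    for _ in range(n_iter):
        u = [u[i - 1] for i in range(len(u))]  # exact upwind shift (periodic)
        for i, dx in enumerate(cell_sizes):
            t_local[i] += dx / c               # local real time per iterate
    return u, t_local
```

The returned `t_local` is the evolving real-time landscape: the pulse crosses each cell in one iterate, but traversing a refined region costs less accumulated real time.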

In Fig.

(Left) initial condition for test simulation with photon region set to 1.0. (Right) final time iterate solution. Mesh consists of 160

Subsequent time steps, from iterate number 15 to iterate number 89. A ‘tail’ of 0.5 is added in order to visualise the path traced by the photon

As can be seen, the motion of the photon takes a curved path through space, and distorts in shape moving through the mesh geometry. The evolution of local real time is tracked, and the final real time landscape across the domain is represented in Fig.

Evolved local real time for cells across the domain in 2D (left) and projected as a surface over the given mesh (right)

This simulation demonstrates how the advection of any quantity (including light) takes a path which is dependent on the underlying mesh construction of space. Regions of compression, expansion and distortion of the mesh directly impact the path of propagating quantities. It also demonstrates that the formulation of the numerical method is stable. That is, where space is non-uniform, real time is computed as a function of space and wave speed; under the time-iterate transformation the advection constant is scaled; and the solution is shown to evolve explicitly and in a stable manner by discrete time iterates.

This particular mesh geometry was not representative of anything physical. The purpose was to demonstrate that mesh distortion influences the path of light (and masses) but not the specific hyperbolic path light takes around a massive body due to gravitational effect. In theory, an underlying mesh geometry could be constructed as consistent with observed gravitational effect, causing light to take a hyperbolic deflected path around massive bodies. Such a construction would require a relationship to be defined between energy and mesh resolution. As such, there may not be a unique solution, rather a viable solution space. The solution space may consider different mesh types (not necessarily a Cartesian structured mesh as shown), different finite volume cell types (not necessarily comprised of quadrilateral cells) and different numerical update schemes. This should be the subject of future work.

This paper proposes a new explanation for the fused nature of space and time as we observe it. By exploring the idea of reality as generated from laws of computation, it proposes how spacetime may be a construct invoked from numerical stability constraints as they arise in continuum computing. Such explorations, and the congruities they derive with relativistic physics, raise intriguing questions both scientifically and philosophically.

A continuum-type numerical method is described and proposed as a logical and viable construction of an underlying computational model. From established computing theory, the Courant–Friedrichs–Lewy stability restriction creates a dependency of time on space. The dependency is also related to the fastest wave of information propagation, which, when applied to the full set of (non-relativistic) conservation laws, derives a dependency on matter velocity and its fastest signalling wave: the speed of light. By relaxing time equivalence to therefore optimise every local computational step, time steps are computed at cell boundaries and vary across the domain depending on both the spatial resolution and maximum wave speed. According to the cell-data which describes the state in space and time, local characteristics are preserved whilst scaled characteristics apply relative to the time

The construction of such a numerical method leads to observations consistent with our known reality: fast-travelling objects or reference frames produce time dilation (a special relativistic phenomenon), as does the compression or expansion of the fabric or mesh of space (a general relativistic effect). In both cases the time dilation is produced as a condition of maintaining numerical stability within a construct which optimises computational efficiency.

Extending the model logically to multiple dimensions renders the real time step, computed at dimensional cell interfaces, effectively a vector quantity relative to the cell. Where time steps are computed according to the topology of the spatial mesh, the consequent state scaling causes light (and other entities) to move along paths influenced by the geometry of the mesh defining space. The time dilation and path of advection are demonstrated through an example simulation programmed via the described method. Though a deflected path of motion was demonstrated, the question remains—is it possible for a mesh topology to produce the type of unbound hyperbolic orbits light is observed to take around massive bodies? There is potentially a large solution space of viable meshes which produce dynamics consistent with gravitational effect. Further explorations on this are warmly invited.

In terms of mesh topology there is a logical argument for mesh refinement in regions of massive bodies. It is standard in continuum computing to have high resolution in regions where high precision is required (typically related to peak property gradients). Regions requiring high precision imply regions of greater complexity; the local computational cells are of higher resolution there, and of lower resolution elsewhere, to best optimise computational resources. In a perfectly optimised simulation, mesh refinement and dispersement would be related to regions of high and low information density. Where matter is fundamentally stored as data, a notable advancement would be to develop a logical relationship between matter and complexity, whereby the corresponding optimally refined mesh contained regions of refinement and dispersement related to the vicinity of massive bodies. Such a relationship could be tested for its prediction of numerical time-stepping in line with gravitational time dilation. The proposed continuum computational model therefore invites a new perspective when considering some of the deeper unanswered questions invoked by Einstein’s theory of relativity, such as:

Intrinsic to this proposed model is the implication that time evolves as discrete iterates. That is, locally each cell computes a real time step, and globally every cell evolves simultaneously by a time

The implication of the proposed construct is that the computational stability constraints of the individual cells result in necessary spacetime coupling, observed as relativistic effects on the broader scale. The philosophical interpretations of this are intriguing. Rather than programming the physics of the universe as we observe and understand—as is implied by the “Ancestor Simulation" of Bostrom’s Simulation Argument—the premise is reversed. That is, this work implies that aspects of our observed reality could arise inherently

The proposed computing construct provides a new basis upon which further developments can be built in producing effects in alignment with observable physical phenomena. Quantitative developments will require supposing specific mesh topologies and numerical update schemes in order to derive further consistencies with properties of relativity. This paper focusses specifically on the outlined congruities as a minimal basis for such further explorations.

Given the core contribution is that of a new

I would like to acknowledge and thank Viv Bone, whose ongoing support and discussions ultimately enabled these ideas to become a paper. I would also like to thank Joris Witstok for his advice and support through the writing process.

The code developed to generate the results of Sect.

Below is the link to the electronic supplementary material.

Electronic supplementary material 1 (TEX 2 kb)
