The religion of prediction and the knowledge of the slave

In calculus, infinitesimal quantities are an idealization, and the concept of limit, provided to support the results obtained, is a rationalization. This dynamic, moving from idealization to rationalization, is inherent in the liberal-materialism or material liberalism of modern science. Idealization is necessary for conquest and expansion; rationalization, to colonize and consolidate what has been conquered. The first reduces in the name of the subject, which is always more than any object x, and the second reduces in the name of the object, which becomes nothing more than x.

But going to the extremes by no means guarantees that we have captured what lies in between, which in the case of calculus is the constant differential 1. To perceive what does not change in the midst of change, that is the great merit of Mathis’ argument; it recognizes, at the core of the concept of function, that which is beyond functionalism, since physics has taken for granted to such an extent that it rests on the analysis of change that it no longer seems to consider what that change refers to.

Think about the problem of knowing where to run to catch fly balls, which amounts to evaluating a three-dimensional parabola in real time. It is an ordinary skill that even recreational baseball players perform without knowing how they do it, but its imitation by machines triggers the whole usual arsenal of calculus, representations, and algorithms. However, McBeath et al. demonstrated more than convincingly in 1995 that what outfielders do is move in such a way that the ball remains in a constant visual relation (at a constant relative angle of motion) instead of making complicated estimates of time and acceleration, as the calculus-based heuristic models intended [65]. Can there be any doubt about this? If the runner makes the correct move, it is precisely because he does not even consider anything like the graph of a parabola. Mathis’ method amounts to putting this in numbers.
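A toy numerical check of that constant visual relation (a sketch in the spirit of optical acceleration cancellation, with made-up launch parameters; it is not McBeath et al.’s data or model): for a projectile watched from a fixed position on the ground, the tangent of the ball’s elevation angle grows at a strictly constant rate only if that position is the landing point; standing short or long makes the rate drift, and that drift is all the runner needs to detect.

```python
import numpy as np

# Projectile in a vertical plane; hypothetical launch values for illustration only.
g, v = 9.8, 30.0                          # gravity (m/s^2), launch speed (m/s)
theta = np.radians(45)                    # launch angle
vx, vy = v * np.cos(theta), v * np.sin(theta)
T = 2 * vy / g                            # time of flight
R = vx * T                                # landing distance

t = np.linspace(0.05, 0.7 * T, 200)       # watch most of the flight
x, y = vx * t, vy * t - 0.5 * g * t**2

for d, label in [(0.8 * R, "standing short"), (R, "at landing point"), (1.2 * R, "standing long")]:
    tan_alpha = y / (d - x)                      # tangent of the elevation angle seen from d
    rate = np.diff(tan_alpha) / np.diff(t)       # how fast that tangent grows
    print(f"{label:16s} spread of growth rate = {rate.max() - rate.min():.4f}")
```

Only the middle case prints a spread of essentially zero; no parabola is ever reconstructed, only the drift of a single visual quantity.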


Questions of program —and of principle, again: calculus, dimensional analysis and chronometrology

In physics and mathematics, as in all areas of life, we have principles, means and ends. The principles are the starting points; the means, from a practical-theoretical point of view, are the different branches of calculus; and the ends are the interpretations. The latter, far from being a philosophical luxury, determine the whole contour of a theory’s representations and applications.

As for principles, we have already noted that, if we want to see more closely where and how the continuous proportion emerges, we should attend as closely as possible to the ideas of continuity, homogeneity and reciprocity. This includes the consideration that all systems are open, since if they are not open they cannot comply with the third principle in a way worthy of being called “mechanical”. This is the main difference between Nature and man-made machines.

These three or four principles are non-specifically included in the principle of dynamic equilibrium, which is the way to dispense with the principle of inertia and, incidentally, with the principle of relativity. When we speak of continuity, we are not claiming that the physical world must necessarily be continuous, but that what appears to be a natural continuity should not be broken without need.

Actually, the principles also determine the scope of our interpretations, although they do not specify them.

As for calculus, which in the form of prediction has become almost the exclusive purpose of modern physics, it has always been a matter of justifying how to reach results known in advance, so reverse engineering and heuristics have always won the day over considerations of logic or consistency. There is, of course, the laborious foundation of calculus by Bolzano, Cauchy and Weierstrass, but it is more concerned with saving the results than with making them more intelligible.

On this point we could not agree more with Mathis, who stands alone in a battle to redefine these foundations. What Mathis proposes can be traced back to umbral calculus and the calculus of finite differences, but these are treated as sub-domains of standard calculus and ultimately have not brought a better understanding of the field.

An instantaneous speed remains an impossibility that reason rejects, and besides there is no such thing on a graph. If some physical theories, such as special relativity, unnecessarily break with the continuity of the classical equations, here we have the opposite case, with another equally disruptive effect: a false notion of continuity, a pseudo-continuity, is created that nothing justifies. Modern calculus has given us an illusion of dominion over infinity and motion by subtracting at least one dimension from physical space, hardly honoring the term “analysis” of which it is so proud. And this, naturally, should have consequences in all branches of mathematical physics [45].

Mathis’ arguments are absolutely elementary and irreducible; they too are questions of principle, but not only of principle, since the problems of calculus are eminently technical. The original calculus was designed to compute areas under curves and tangents to those curves. It is evident that the curves of a graph cannot be confused with the real trajectories of objects, and that on such curves there are no points or instants; hence all the generalizations of these methods carry this dimensional conflation with them.

Finite calculus is also closely related to the problem of particles with extension, without which it is nearly impossible to move from the ideal abstraction of physical laws to the apparent forms of nature.

Mathis himself is the first to admit, for example in his analysis of the exponential function, that there is still a great deal to be done for a new foundation of calculus, but this should be good news. At any rate the procedure is clear: the derivative is not to be found in a differential that approaches zero, nor in limit values, but in a sub-differential that is constant and can only be 1, a unit interval. A differential can only be an interval, never a point or a subtraction of points, and it is to the interval that the very definition of limit owes its range of validity. In physical problems this unit interval must correspond to an elapsed time and a distance traveled.
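A minimal numerical illustration of that unit interval (this is ordinary finite differences at step 1, offered only for orientation, not as a reproduction of Mathis’ own tables): tabulate integer powers at unit steps and take repeated differences; the table bottoms out in a constant line, 2 for x², 6 for x³, 24 for x⁴, a constant differential hiding beneath the change.

```python
# Repeated unit-step differences of x^n bottom out in the constant n!.
# Plain finite-difference illustration, not a reproduction of Mathis' tables.

def difference_table(values, depth):
    rows = [values]
    for _ in range(depth):
        prev = rows[-1]
        rows.append([b - a for a, b in zip(prev, prev[1:])])
    return rows

for n in (2, 3, 4):
    values = [x ** n for x in range(10)]        # the function tabulated at unit intervals
    constants = difference_table(values, n)[n]  # the n-th difference row
    print(f"x^{n}: level-{n} differences -> {constants}")
```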

Trying to see beyond Mathis’ efforts, it could be said that, if curves are defined by exponents, any variation in a function should be expressible in the form of a dynamic equilibrium whose product is unity; and in any case by a dynamic equilibrium based on a constant unit value, which is the interval. If classical mechanics and calculus grew up side by side as almost indistinguishable twins, even more so should this be the case in a relational mechanics where inertia always dissolves into motion.

The heuristic part of modern calculus is still based on averaging or error compensation; the foundation is rationalized in terms of the limit, but it works because of the underlying unit interval. The parallel between the beam of a scale and a tangent is obvious; what is not seen is precisely what has to be compensated. Mathis’ method does not work with averages; standard calculus does. Mathis has found the beam of the scale; it now comes down to setting the pans and fine-tuning the weights. We will come back to this later.

The disputes that still arise from time to time, even among great mathematicians, over standard and non-standard calculus, or over the various ways of dealing with infinitesimals, at least reveal that different paths are possible; but for most of us they remain discussions far removed from the most basic questions, which should be analyzed first.

We mentioned earlier Tanackov’s formula for calculating the constant e far faster than the classic “direct method”; the fact is that amateur mathematicians such as the already mentioned Harlan Brothers found, barely twenty years ago, many different closed expressions that compute it faster while being more compact. The mathematical community may treat this as a curiosity, but if it happens with the most basic rudiments of elementary calculus, what might not happen in the dense jungle of higher-order functions?
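For a sense of the kind of improvement at stake, here is one comparison (the accelerated expression is of the Brothers-Knox type, quoted from memory and only as an illustration; it is not Tanackov’s formula): both expressions converge to e, but the second gains roughly twice as many correct digits for the same n.

```python
from math import e

# Two closed expressions approaching e; illustrative comparison only.
for n in (10, 100, 1000):
    classic = (1 + 1 / n) ** n                       # error shrinks roughly like 1/n
    accelerated = ((n + 1) / (n - 1)) ** (n / 2)     # error shrinks roughly like 1/n^2
    print(f"n={n:5d}  classic error={abs(classic - e):.2e}  "
          f"accelerated error={abs(accelerated - e):.2e}")
```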

A somewhat comparable case would be that of symbolic calculus or computer algebra, which already 50 years ago found that many classical algorithms, including much of linear algebra, were terribly inefficient. However, as far as we can see, none of this has affected calculus proper.

“Tricks” like those of Brothers belong to the realm of heuristics, although it must be recognized that neither Newton, nor Euler, nor any other great name of calculus knew them; but even as heuristics they cannot fail to point in the right direction, since simplicity tends to be indicative of truth. With Mathis, however, we are talking not only about the very foundations, which none of the revisions of symbolic calculus has dared to touch, but even about the validity of the results, which crosses the red line of what mathematicians are willing to consider. At the end of the chapter we will see whether this can be justified.

In fact, to pretend that a differential tends to zero is equivalent to saying that everything is permitted in order to obtain the desired result; it is the ideal condition of versatility for any heuristics and adhocracy. The fundamental requirement of a simplified or unitary calculus (plain differential calculus, indeed) may at first seem like throwing a spanner into works already in full gear, but it is truthful. No amount of ingenuity can replace rectitude in the search for truth.

Mathis’ attempt is not quixotic; there is much more here than meets the eye. There are reversible standards and standards irreversible in practice, such as the current inefficient layout of the letters on the keyboard, which seems impossible to change even though nobody uses the old typewriters anymore. We do not know whether modern calculus will prove to be another irreversible standard, but what is at stake here goes far beyond questions of convenience: it blocks a better understanding of countless issues, and overcoming it is an indispensable condition for the qualitative transformation of knowledge and of the ideas we have, or cannot have, of space, time, change and motion.

*

Today’s theoretical physicists, forced into a highly creative manipulation of the equations, tend to dismiss dimensional analysis as little more than pettifogging; surely this attitude is due to the fact that they must regard any revision of the foundations as out of the question, so that one can only look ahead.

In fact, dimensional analysis is above all inconvenient, precisely because it is by no means irrelevant: it can prove in just a few lines that charge is equivalent to mass, that Heisenberg’s uncertainty relations are conditional and unfounded, or that Planck’s constant should apply only to electromagnetism instead of being generalized to the entire universe. And since modern theoretical physics is in the business of generalizing its conquests to everything imaginable, any contradiction or restriction on its expansion through the only avenue it is allowed to expand is bound to be met with notorious hostility.

It is actually easy to see that dimensional analysis would be a major source of truth if it were allowed to play its part, since modern physics is a tower of Babel of highly heterogeneous units that reflect contortions made in the name of algebraic simplicity or elegance. Maxwell’s equations, compared with the Weber force law that preceded them, are the most eloquent example of this.

Dimensional analysis is also of interest when we delve into the relationship between intensive and extensive quantities. The disconnection between the mathematical constants e and φ may also be associated with this broad issue. In entropy, for example, we use logarithms to convert intensive properties such as pressure and temperature into extensive ones, turning multiplicative relations, for convenience, into more manageable additive relations. That convenience becomes a necessity only for those aspects that are already extensive, such as an expansion.

Ilya Prigogine showed that any type of energy is made up of an intensive and an extensive variable whose product gives us an amount; an expansion, for example, is given by the product P·V of pressure (intensive) and volume (extensive). The same can be applied to changes in mass/density with velocity and volume, and so on.
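A standard way to make this pairing explicit (textbook thermodynamics, recalled here only as a reference point, not as Prigogine’s own formulation) is the differential of internal energy, in which every term is an intensive factor multiplying the change of its extensive partner:

$$dU = T\,dS - P\,dV + \mu\,dN$$

with temperature, pressure and chemical potential intensive, and entropy, volume and particle number extensive. The logarithmic conversion mentioned above appears, for instance, in the isothermal expansion of an ideal gas, where a multiplicative change of volume becomes an additive change of entropy, $\Delta S = nR\ln(V_2/V_1)$.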

The unstoppable proliferation of measurements in all areas of expertise already makes simplification increasingly necessary. But, beyond that, there is an urgent need to reduce the heterogeneity of physical magnitudes if intuition is to win the battle against the complexity in which we are complicit.

All this is also closely related to finite calculus and to the equally finitist algorithmic measurement theory developed by A. Stakhov. Classical mathematical measure theory is based on Cantor’s set theory and, as we know, is neither constructive nor connected with practical problems, let alone with the hard problems of modern physical measurement theory. The theory developed by Stakhov, by contrast, is constructive and naturally incorporates an optimization criterion.

To appreciate the scope of the algorithmic measurement theory in our present quantitative Babel, we must understand that it takes us back to the Babylonian origins of the positional number system, filling an important gap in the current theory of numbers. This theory is isomorphic with a new number system and a new general theory of biological populations. The number system, created by George Bergman in 1957 and generalized by Stakhov, is based on the powers of φ. If for Pythagoras “all is number”, for this system “all number is continuous proportion”.
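As a flavor of Bergman’s system (a minimal greedy sketch, assuming nothing more than φ² = φ + 1; it is not Stakhov’s generalized codes): every positive integer admits a finite expansion in powers of the golden ratio, for instance 2 = φ + φ⁻² and 3 = φ² + φ⁻².

```python
# Greedy expansion of an integer in powers of the golden ratio (after Bergman, 1957).
# Minimal sketch; floating point is used only for illustration, hence the tiny residues.

PHI = (1 + 5 ** 0.5) / 2

def base_phi(n, lo=-20, hi=20):
    powers, rest = [], float(n)
    for k in range(hi, lo - 1, -1):          # from large powers of phi down to small ones
        if PHI ** k <= rest + 1e-12:
            powers.append(k)
            rest -= PHI ** k
    return powers, rest

for n in (1, 2, 3, 10):
    powers, rest = base_phi(n)
    expansion = " + ".join(f"phi^{k}" for k in powers)
    print(f"{n} = {expansion}   (residue {rest:.1e})")
```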

The algorithmic measurement theory also raises the question of equilibrium, since its starting point is the so-called Bachet-Mendeleyev problem, which curiously also appears for the first time in Western literature in Fibonacci’s Liber Abaci of 1202. The modern version of the problem is to find the optimal system of standard weights for a balance that has a response time or sensitivity.
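The classical version, without the response-time refinement that Stakhov adds, is well known (a small orientation sketch, not Stakhov’s algorithm): if weights may be placed on either pan, powers of 3 are optimal, so the four weights 1, 3, 9 and 27 suffice for every integer load up to 40.

```python
from itertools import product

# Classical Bachet weight problem: with weights on either pan, powers of 3 cover 1..40.
weights = [1, 3, 9, 27]

def measurable(load):
    # each weight goes on the load's pan (-1), stays off (0), or goes on the opposite pan (+1)
    return any(sum(w * s for w, s in zip(weights, signs)) == load
               for signs in product((-1, 0, 1), repeat=len(weights)))

print(all(measurable(load) for load in range(1, 41)))   # True
```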

According to Stakhov, the key point of the weight problem is the deep connection between measurement algorithms and positional numbering methods. My impression, however, is that it also supports a deeper connection between dynamic equilibrium, calculus and what it takes to adjust a function: the weights for the pans required by the beam of the scale that Mathis identified. Perhaps there is no need to use powers of φ or to change the number system, but very useful ideas might be developed about the simplest algorithms.

Alexey Stakhov, The Mathematics of Harmony

Of course, algorithmic complexity theory tells us that we cannot prove that an algorithm is the simplest for a task, but that does not mean we do not keep looking for it, regardless of any proof. Efficiency and formal proof are largely unrelated.

Human beings inevitably tend to optimize what they measure most; yet we have no theory that harmonizes the needs of metrology with those of mathematics, physics and the descriptive sciences, whether social or natural. Today there are many different measure theories, and each discipline looks for what suits it best. However, all metrics are defined by a function, and functions are defined by calculus or analysis, which wants nothing to do with the practical problems of measurement and pretends to be as pure as arithmetic even though it is far from that.

This may seem a somewhat absurd situation, and in fact it is; but it also places a hypothetical measure theory in direct contact with practical aspects and with the foundations of calculus and arithmetic, in a strategic position above the drift and inertia of the specialties.

Calculus or analysis is not pure math, and it is too much to pretend that it is. On the one hand, as far as physics is concerned, it involves at least a direct connection with questions of measurement that should be made more explicit within the mathematics itself; on the other hand, the highly heuristic nature of its most basic procedures speaks for itself. If arithmetic and geometry, being incomparably clearer, still have large gaps, it would be absurd to pretend that calculus cannot have far greater ones.

*

On the other hand, physics will never cease to have both statistical and discrete components (bodies, particles, waves, collisions, acts of measurement, etc.) besides continuous ones, which makes a relational-statistical analysis advisable.

An example of relational statistical analysis is the one proposed by V. V. Aristov, who introduces a constructive and discrete model of time as motion, using the ideas of synchronization and of the physical clock that Poincaré had already introduced precisely in connection with the problem of the electron. Here each moment of time is a purely spatial picture. But it is not only a matter of converting time into space, but also of understanding the origin of the mathematical form of physical laws: “The ordinary physical equations are consequences of the mathematical axioms, ‘projected’ into physical reality by means of the fundamental instruments. One can assume that it is possible to build different clocks with a different structure, and in this case we would have different equations for the description of motion”.

Aristov himself has provided clock models based on non-periodic, theoretically random processes, which are also of great interest. A clock based on a non-periodic process could be, for example, a piston engine in a cylinder; this could also include thermodynamic processes.

It should also be noted that cyclical processes, despite their periodicity, mask additional or environmental influences, as we have seen with the geometric phase. To this is added a deductive filter of unnecessarily restrictive principles, as we have already seen in the case of relativity. And as if all this were not enough, there is the fact, hardly recognized, that many processes considered purely random or “spontaneous”, such as radioactive decay, show discrete states during fluctuations in macroscopic processes, as S. Shnoll and his school have shown extensively for more than half a century.

Indeed, all kinds of processes, from radioactive decay to enzymatic and biological reactions, and even random number generators, show recurrent periods of 24 hours, 27 days and 365 days, which obviously correspond to cyclical astronomical factors.

We know that this regularity is filtered out and routinely discounted as “non-significant” or irrelevant, an example of how well researchers are trained to select data; but, beyond this, the question of whether such reactions are spontaneous or forced remains. An answer may be advanced: one would call them spontaneous even if a causal link could be demonstrated, since bodies contribute their own momentum.

The statistical performance of multilevel neural networks (ultimately a brute-force strategy) is increasingly hampered by the highly heterogeneous nature of the data and units with which they are fed, even though dynamic processes are obviously independent of the units. In the long run, purity of principles and criteria is irreplaceable, and the shortcuts that theories have sought for prediction accumulate a great deal of dead weight. And again, it matters little what conclusions machines can reach when we are already incapable of seeing through the simplest assumptions.

The performance of a relational network is also cumulative, but in exactly the opposite sense; perhaps it should be said, rather, that it grows in a constructive and modular way. Its advantages, like those of relational physics (and of information networks in general), are not obvious at first sight but increase with the number of connections. The best way to prove this is by extending the network of relational connections. And indeed, it is a matter of collective work and collective intelligence.

With arbitrary cuts to relational homogeneity, destructive interference and irrelevant redundancy increase; conversely, the greater the relational density, the greater the constructive interference. I do not think this requires demonstration: totally homogeneous relations allow higher-order degrees of inclusion without obstruction, just as equations made of heterogeneous elements can include equations within equations as opaque elements or knots still to be unraveled [46].

*

Let us go back to calculus, but from a different angle. Mathis’ differential calculus does not always yield the same results as the standard one, which would seem sufficient to rule it out. Since the principle is unquestionable, errors, where they occur, may be due to an incorrect application of the principle, the criteria for which remain to be clarified. On the other hand, it is a fact that there is a “dimensional reduction” of curves in standard calculus, though it is not widely recognized because graphs are nowadays supposed to be secondary and even dispensable.

Are they really? Without graphs and curves calculus would never have been born, and that is enough. David Hestenes, the great advocate of geometric algebra and geometric calculus, says that geometry without algebra is dumb, and algebra without geometry is blind. We should add that this holds not only for algebra but for calculus too, and to a greater extent than we think, provided we understand that there is more to “geometry” than what graphs usually tell us. We can now look at another type of graph, this time of vortices, due to P. A. Venis [47].

Peter Alexander Venis

In the transformation sequence Venis makes an estimate of its dimensionality that at first may seem arbitrary, although it is based on something as “obvious” as the transition from point to line, line to plane and plane to volume. The fractional dimensions seem striking at first sight, until we recognize that they are just an estimate of the continuity within the order of the sequence, which could not be more natural.

Although Venis does not look for a proof, his transformation sequence is self-evident and more compelling than a theorem. One needs only a minute of real attention to understand its evolution. It is a general key to morphology, regardless of the physical interpretation we want to give it.

For Venis the appearance of a vortex in the physical plane is a phenomenon of projection of a wave from a single field in which the dimensions exist as a compact whole without parts: a different way of expressing the primitive homogeneous medium that serves as reference for dynamic equilibrium. It is clear that a completely homogeneous medium cannot be characterized as either full or empty, and that we could say it has either an infinite number of dimensions or none at all.

Thus, the ordinary dimensions, as well as the fractional and even the negative ones, are a phenomenon of projection, of projective geometry. Physical nature is real insofar as it participates in this one field or homogeneous medium, and it is a projected illusion to the extent that we conceive it as an independent part or an infinity of separate parts.

Peter Alexander Venis

Negative dimensions are due to a projection angle below 0 degrees, and lead to toroidal evolution beyond the bulb in equilibrium in three dimensions, that is, to dimensions greater than the ordinary three. They thus form a complementary projective counter-space to the ordinary space of matter, which with respect to unity is no less a projection than the former. Light and electricity stand at opposite ends of manifestation, of evolution and involution in matter: light is the fiat, and electricity the extinction. Much could be elaborated on this, but we will leave something for later.

Arbitrary cuts in the sequence leave fractional dimensions exposed, coinciding with the shapes we can appreciate. Since Mathis himself attributes the differences in results between his calculus and the standard one to the fact that the latter eliminates at least one dimension, and since the sequence of transformations gives us a whole series of intermediate dimensions for basic functions, this would be an excellent workbench on which to compare the two.

Michael Howell considers that fractal analysis avoids the usual dimensional reduction and translates the exponential curve into “a fractal form of variable acceleration” [48]. It is worth noting that for Mathis standard calculus contains errors even in the elementary exponential function; the analysis of the dimensional evolution of vortices gives us a wide spectrum of cases with which to settle the differences. I am thinking of fractional derivatives and differentiable curves, rather than of fractals as non-differentiable curves. It would be interesting to see how the constant differential behaves with fractional derivatives.

The history of fractional calculus, which has gained great momentum in the 21st century, goes back to Leibniz and Euler, and it is one of the rare cases in which both mathematicians and physicists ask for an interpretation. Although its use has extended to intermediate domains in exponential, wave-diffusion and many other types of processes, fractional dynamics presents a non-local history dependence that deviates from the usual case, though there is also a local fractional calculus. To reconcile this divergence, Igor Podlubny proposed a distinction between inhomogeneous cosmic time and homogeneous individual time [49].
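To make that history dependence concrete, the Caputo form of the fractional derivative for 0 < α < 1 (the standard textbook definition, recalled here only as a reminder of where the memory comes from) weighs the whole past of the function with a power-law kernel:

$$ {}^{C}D^{\alpha} f(t) = \frac{1}{\Gamma(1-\alpha)} \int_{0}^{t} \frac{f'(\tau)}{(t-\tau)^{\alpha}}\, d\tau $$

so the value at t depends on the whole interval [0, t], whereas the ordinary derivative, recovered as α → 1, depends only on the immediate neighborhood of t; for a power function it gives $D^{\alpha} t^{k} = \frac{\Gamma(k+1)}{\Gamma(k+1-\alpha)}\, t^{k-\alpha}$.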

Podlubny admits that the geometrization of time and its homogenization are primarily due to calculus itself, since intervals of space can be compared simultaneously, whereas intervals of time cannot and can only be measured sequentially. What may be surprising is that this author attributes the non-homogeneity to cosmic time rather than to individual time, since in reality mechanics and calculus developed in unison under the principle of global synchronization and of the simultaneity of action and reaction. In this respect relativity is no different from Newtonian mechanics. According to Podlubny, individual time would be an idealization of the time created by mechanics, which is to put things upside down: if anything, the idealization is global time.

On the one hand, fractional calculus is seen as a direct aid for the study of all kinds of “anomalous processes”; on the other hand, fractional calculus itself is a generalization of standard calculus that includes the ordinary one and therefore also allows all of modern physics to be dealt with without exception. This makes us wonder whether, more than dealing with anomalous processes, it is ordinary calculus that enforces a normalization affecting all the quantities it computes, time among them.

Venis also speaks of non-homogeneous time and temporal branches, though his reasoning remains undecided between the logic of the sequence, which represents an individualized flow of time, and the logic of relativity. However, it is the sequential logic that should define time in general and individual or local time in particular —not the logic of simultaneity of the global synchronizer. We shall return to this soon.