*Since the times of the Manhattan Project, one cannot expect transparency in applied areas such as particle physics. Concepts such as the point particle, special relativity, or the segregation of thermodynamic problems, not to mention quantum mechanics itself, render a great service as a smokescreen for the real physical processes. This veil was found convenient and has been maintained to this day in the technology sectors of strategic value. We will give here both a historical perspective and future prospects*.

The electron theory, from Lorentz's works of 1895 onward, seemed to close definitively the problems of Maxwell's classical electrodynamics and to lead directly to the problems of Relativity; on the other hand, this particle is the very workshop in which quantum mechanics was developed. Thus the entire foundations of twentieth-century physics, at the microscopic and macroscopic levels, find here a common Gordian knot. Later, the relativistic extension of classical mechanics and quantum mechanics acquired such a development that the open questions of electromagnetism seemed too distant, if not irrelevant. That this was not so, and still is not so, is conclusively demonstrated by the fact that the main numbers of quantum electrodynamics, such as the electron's mass and charge or the fine-structure constant, are still put in by hand and derived from nothing, which leaves this whole outstanding construction hanging in the air.

That enormous unknown that we call «electron», the faithful and common servant of all our technology, so apparently ordinary that it leaves no room in us for suspicion, so rubbed, used and abused even in our most trivial devices, is a genie unlike any other: the only genie, perhaps, that would allow us to realize certain desires not by taking it out, but by taking it back to the lamp.

The basic problem of self-interaction in current models is that of radiative reaction: an accelerated charge radiates electromagnetic energy and momentum, and therefore there must be a reaction on the particle, or from its electromagnetic field on itself. The causes of radiation need not be external; they can also be due to interaction with other particles.

For some of the authors of QED, Feynman for example, the idea of self-interaction is just silly, and the only sensible thing to do is to get rid of it and ignore the whole problem by simply stating that an accelerated electron does not radiate at all. The issue has always been highly controversial, and here we would not like to take sides on any particular stance; but it seems to us that the aforementioned position is first and foremost a matter of convenience, and that there is no clear admission of the reasons for this convenience.

It is not said, for example, that Special Relativity (in reality the general case) is only a local framework for point events, in which extended particles make no sense, in the same way that in General Relativity (in reality a particular case, for gravity) it makes no sense to speak of point masses.

Maxwell's equations have general covariance, but this does not lead to the Lorentz transformation that gives rise to Special Relativity. The original Maxwell equations have integral form and contain more information than their later differential version, but even the latter does not yet consider discrete charges, which is the case Lorentz introduces. However, Maxwell's equations, in their most universal form and free of the metric expressly introduced for the electromagnetic force, have natural invariance, as Cartan, Kottler and van Dantzig recognized in their time.

Maxwell's theory, limited as it is, does not cease to be a theory of the physical (not geometric) Continuum, but in Special Relativity this is the first victim, no matter how much one speaks of a space-time Continuum. Its operationalist character entails just that: the discretionary cutting of that continuum by the introduction of arbitrary postulates incompatible with it, such as the invariance of the speed of light. This creates an additional contradiction with the principle of global synchronization, which is nowhere stated but which since Newton has always been assumed.

If in Newton this principle is just a metaphysical statement, in STR, since it is a local framework of point events, only local conservation laws can be considered. In fact, if even today an extended particle model does not seem viable, it is because Special Relativity excludes it right from the start. Such basic phenomena as inductance and self-inductance, which obviously require circuits with a defined extension, have no place in this theory. We then find ourselves in the strange situation that a global synchronization is assumed that is excluded by the theory itself, and that this very theory makes impossible to implement. Minkowski invariance is incompatible with any classical equation of motion; moreover, it does not seem that Special Relativity can give an appropriate transformation for accelerated frames of reference, and therefore for radiation.

The question here is not to make a diatribe against Special Relativity, but to see that it is incompatible with the idea of an extended particle and with other basic features of classical field theory such as acceleration and radiation. From the very beginning, STR is highly abstract and divorced from physical context, a sort of Solomonic judgement to move beyond the impasse of the frames-of-reference problem, later generalized without the slightest restraint to other cases with a rich material context. In all this, Special Relativity catches no mice, and a high price is paid by sacrificing everything to its formalisms. But this is not admitted, and attention is drawn to subordinate and secondary matters.

We do not really know whether an extended-particle model or a point-particle one would be preferable, whether to contemplate the problems of radiation or to forget about them; but in any case, to decide we would have to start from a sufficiently impartial frame. Special Relativity is not impartial, and neither is Maxwell's or Maxwell-Lorentz electrodynamics. In the classical electromagnetic field there is no place for point particles, and on the other hand, if Bohr was still proposing circular orbits in 1913, it was because Maxwell's equations do not tell us under which precise conditions radiation is or is not emitted.

There is a gap in the balance of forces in Maxwell's equations if charges radiate whenever accelerated, and the radiated energy is the compensation for the work done by the self-force (although Maxwell's original theory does not explicitly contain a force law). It is obvious that if we do not yet know when there is radiation and when there is not, self-force and self-energy constitute possible adjustment terms for the big numbers whose origin still remains unclear, besides enabling a reopening and a redefinition of the topics of thermodynamics and irreversibility, which were «outsourced» from fundamental physics. Both things seem as inconvenient for the QED version, which pretends to be the last word on the issue, as they are promising for other approaches free of such commitments.

On the other hand, the opposition between point particle and extended particle is surely not as sharp as we sometimes want to put it, because there has never been anything like a «point particle» to begin with, but a particle-in-the-field whose center admits a point on which a force can be applied. Hertz had already seen the need to distinguish between material particle and material point: the first is an indestructible point of application of forces, the second is an extended variable set internally connected. These are different meanings and application domains, in a sense very similar to the one later raised by de Broglie.

What is an electron supposed to be made of? Of electromagnetic fields, what else. This is why the particle cannot be outside the continuum, and this is also a good reason for self-interaction to arise. And as electromagnetic fields, electrons are not point particles; the vast majority of their energy lies within a radius of 2.8×10⁻¹⁵ meters, which at long distances makes the point-particle approximation a good one, though it cannot be maintained at short enough distances. The wave of an electron in a superconductor can occupy meters and even kilometers, showing to what extent a particle is what its environment allows it to be. In any case, the density of its field decreases as 1/r², and if the magnetic field around a current is appreciable at meters, the same goes for each electron. In the limit this field extends to infinity, to the greatest size and minimum structure; and it is in the opposite direction, with confinement, that the details are to be found, details that to some extent mirror the confinement itself.
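As a quick check on the scale involved, the classical electron radius follows from equating the electrostatic field energy outside a cutoff radius with the electron's rest energy; a minimal sketch in Python, using CODATA constants (the factor-of-two convention in the definition is the standard one):

```python
import math

# Physical constants (CODATA values)
e = 1.602176634e-19       # elementary charge, C
eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
m_e = 9.1093837015e-31    # electron mass, kg
c = 299792458.0           # speed of light, m/s

# Electrostatic field energy stored outside a sphere of radius R
# around a point charge e: U(R) = e^2 / (8*pi*eps0*R), falling as 1/R.
def field_energy_outside(R):
    return e**2 / (8 * math.pi * eps0 * R)

# Conventional definition: r_e = e^2 / (4*pi*eps0*m_e*c^2)
r_e = e**2 / (4 * math.pi * eps0 * m_e * c**2)
print(f"classical electron radius: {r_e:.3e} m")  # ~2.818e-15 m
```

The 1/R dependence of the external field energy is what makes the point approximation good at long distances and untenable at short ones.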

So there is no point particle without field. It is assumed that quantum electrodynamics was just about that, but then came the Aharonov-Bohm effect, and the confusion among the champions of local action was more eloquent than millions of words; and it is even more eloquent if we consider that the effect can be derived entirely from the classical Hamilton-Jacobi equation, and that Berry and others even found its exact analogue on the surface of water, with illustrations so reminiscent of those of Bjerknes at the Paris International Exposition of 1881. In spite of everything, and who knows why, it is still common to hear that the well-known effect is a clear exponent of the unique character of quantum potentials.

We are told again and again that quantum mechanics is the fundamental level and that classical electromagnetism necessarily follows from it, as it should be; but in practice, without a classical view quantum mechanics is basically blind, a standard calculation formalism that has to be told how to operate, and furthermore the transition zone between the two is extremely diffuse and lacks a general criterion. Quantum mechanics is not a vision, but a recipe.

**Self-interaction without radiation: Weber’s electrodynamics and retarded potentials**

Weber's electrodynamics, which precedes Maxwell's by quite a number of years, is a theory of direct action with many similarities to the direct-action electrodynamics arising after Special Relativity, but without any of its contraindications, which still makes its revision so recommendable today. Maxwell himself finally recognized in his *Treatise* the basic equivalence between field theories and the then so-called action-at-a-distance theories such as Weber's.

Wilhelm Weber developed in passing an elliptical atomic model fifty or sixty years before Bohr drew his circular-orbit model, and without the benefit of any of the data known to the Dane. Moreover, his nucleus remained stable without the need for nuclear forces.

Weber's law is the first case in which central forces are defined not only by distances but also by relative radial velocities and accelerations; beyond a critical distance, which converges with the classical electron radius, the inertial mass of the charge changes sign from positive to negative.

A decisive point is that Weber's electrodynamics, although largely equivalent to Maxwell's, allows us to see another side of self-interaction that does not necessarily pass through radiation. It is rather a self-interaction of the whole system under consideration, a global feedback of the circuit, than a self-interaction as local reaction. In Maxwell's original conception everything should be global and circuital as well, but there one cannot work with point charges. Weber's formulation allows both point and extended particles.

As Assis likes to recall, Weber's electric force law of 1846 is the first equation of motion that is completely homogeneous and relational, that is, expressed in known quantities of the same type. The generalized use of heterogeneous quantities already tells us how far we are from transparency, just as the use of dimensional or «universal» constants proclaims from the rooftops that a theory is not universal. Weber's force law of 1846 is fully relativistic 59 years before the not-so-relativistic Special Theory of Relativity, without any postulate or rupture with the continuum.
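For reference, Weber's 1846 force law can be written in modern notation (the form commonly quoted by Assis) as:

```latex
\mathbf{F} = \frac{q_1 q_2}{4\pi\varepsilon_0}\,\frac{\hat{\mathbf{r}}}{r^2}
\left(1 - \frac{\dot{r}^2}{2c^2} + \frac{r\,\ddot{r}}{c^2}\right)
```

where $r$ is the distance between the two charges and $\dot{r}$, $\ddot{r}$ are its first and second time derivatives. For $\dot{r} = \ddot{r} = 0$ it reduces to Coulomb's law, and since only relative quantities of the same kind appear, it is homogeneous and relational in the sense just described.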

It is also the first law in dynamics that is a natural extension, rather than an amendment, of Newton's law of central forces. As usual, Newton's followers were more Newtonian than Newton himself, but the first definition in the *Principia* does not require that the magnitude of central forces depend only on the distance between points, as with a string attached, so to speak. That requirement they took from a mere metaphor, like that of the slingshot, not from the definitions.

The extension of the law of central forces, apparently suggested by Gauss in 1835 and later reformulated by Weber, is in fact the most natural form of evolution of Newtonian mechanics: the latter involved only the static case (gravitostatics), and the extension introduces the dynamic case in which force is not invariable but depends on relative velocities and accelerations. This combination of dynamic and static factors, represented by kinetic and potential energy, will also be key in Maxwell's later formulation.

However, by the time physicists had come to grips with the idea of central forces, they had also needlessly overdetermined it. Weber's law was criticized by Helmholtz and Maxwell for not complying with energy conservation, since the delay of the potential grows with the increase in the relative velocity of the bodies, and more potential energy seems to be lost than is kinetically recovered. In 1871 Weber succeeded in demonstrating the conservation of energy in cyclic operations, but various events, such as Hertz's experimental findings, finally tipped the scales in favor of Maxwell's view.

The same type of force law and retarded potential was applied by Gerber to gravity in 1898 to explain the anomaly of Mercury's precession; to claim, as so many historians have done, that this was an empirical correction without theoretical basis is, unless one understands by theoretical grounds the introduction of arbitrary postulates, truth turned upside down.
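For reference, Gerber's 1898 retarded potential and the perihelion advance that follows from it (standard forms from the historical literature) are:

```latex
V(r) = -\frac{G M m}{r\left(1 - \dot{r}/c\right)^{2}},
\qquad
\Delta\varphi = \frac{6\pi G M}{c^{2}\,a\,(1 - e^{2})} \ \text{per orbit,}
```

the latter being numerically the same expression that General Relativity would give for Mercury seventeen years later.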

It has been said that the greatest limitation of Weber's model is that it does not include electromagnetic radiation, but the truth is that if it does not predict it, neither does it exclude it in any way. In fact, it is with Weber's formulas that the term for the speed of light is introduced for the first time, something Maxwell knew perfectly well. The dynamic extension of Weber's force has a discrepancy with respect to the static case of the same order as the Lorentz factor, which is why results similar to those of Special and General Relativity can be obtained in a much more natural way.

Weber's mechanics is not free of problems and ambiguities either. The most obvious, already noted by Poincaré, is that once the potential depends on the square of the velocity we no longer have a way of distinguishing between kinetic and potential energies, and even these cease to be independent of the internal energy of the bodies under consideration. On the other hand, if we are obliged to work with point particles, we can hardly consider internal energies or forces. If Relativity predicts an increase in mass with velocity, Weber's law, the purely relational one, predicts a decrease in force and an increase in internal energy, which we can perhaps attribute to frequency. Without this ambiguity there would be no non-trivial feedback cycle, no possibility of real self-interaction.

Of course, the increase in frequency is not Weber's prediction, but that of the nuclear engineer Nikolay Noskov, who took up the track of retarded potentials in a series of articles from 1991 on and gave it a universal range of validity. Since Weber's law gives correct predictions, and energy is after all conserved, Noskov assumes that the non-uniform character of the retarded potential causes longitudinal vibrations in moving bodies, which are a normal occurrence at the most diverse levels: «*This is the basis of the structure and stability of nuclei, atoms, and planetary and stellar systems. It is the main reason for the occurrence of sound (and the voices of people, animals and birds, as well as the sound of wind instruments), electromagnetic oscillations and light, tornadoes, hydrodynamic pulsations and wind blows. It finally explains the orbital motion, in which the central body is in one of the foci rather than in the center of the ellipse. Moreover, the ellipse cannot be arbitrary, since the lengths of the cyclic and longitudinal oscillations have different values with a resonance v1 = v2. This circumstance determines the ellipticity in each concrete case*.»

Remarkably, Noskov, who does not restrain himself from making generalizations either, dares to mention the case of the elliptical orbits of celestial mechanics, an inconvenient subject by nearly all standards. It has been repeated so many times by history and publicity that Newton definitively explained the shape of the ellipses that saying otherwise must be met with disbelief. But it is evident that Newton did not explain the case, which in turn gives rise to a very pertinent comment.

Whoever has become accustomed to the Newtonian description of orbits assumes that (for the two-body problem) ballistics and a force dependent only on distance are enough to grant them stability. But the truth is that in Newton the innate motion of the body, unlike the orbital velocity, is by its very definition invariable, which allows only two options. The first is that the planet increases and decreases its velocity like a rocket with an autonomous impulse, which I do not think anyone is willing to admit. This option also has a comic reverse: let us accept, instead of self-propulsion, that the centripetal vectors can stretch and shrink under a certain quantitative easing. The second option is the one Newton himself proposes with his sleight of hand, and which everyone has accepted: combining orbital velocity and innate motion in a single variable.

What is not noticed in this second option is that if the centripetal force counteracts the orbital velocity, and this orbital velocity is variable even though the innate motion does not change, then the orbital velocity is in fact already a result of the interaction between the centripetal force and the innate force, so that the centripetal force is also acting on itself. We do not see how self-interaction can be avoided. According to the modern relativistic field equations, gravity must be non-linear and able to couple with its own energy; yet all this is already present at the most elementary level in the old original problem of the ellipse, which General Relativity has never dared to touch.

So even Newton’s equations would be hiding a self-interaction and, whether we think in terms of action at a distance, or in terms of fields, what would distinguish the fundamental forces of Nature from those we humans apply by external contact in the Three Principles of Mechanics is precisely this self-interaction of the system as a whole. Hertz, the creator of contact physics, already noticed that precisely in celestial mechanics there is no way to verify the Third Principle.

Then, rather than disputing over which theory best «predicts» the tiny anomaly in the precession of Mercury, which after all is subject to many other influences, we could take the larger case of the planetary ellipse and its elementary asymmetry. For our so-called fundamental laws have still been unable to account for this most fundamental of nature's asymmetries in terms of contemporary forces rather than initial conditions, which here, beyond the innate force and its vector, are completely irrelevant.

The Lagrangian of an orbital system, the difference between kinetic and potential energy, is a positive value, and not zero as one would expect. The Lagrangian is the quantity conserved, but nobody explains to us why there is more motion than its corresponding potential energy. According to the logic of the *Principia*, kinetic and potential energy, directly derived from appreciable motion and position, should be as equal as action and reaction. Is the retarded potential of any use to explain the difference?
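The claim about the sign of the Lagrangian is easy to verify numerically. A minimal sketch for a bound Kepler orbit, in units where GM = 1 and m = 1, using the vis-viva equation v² = GM(2/r − 1/a):

```python
# For any bound Kepler orbit, T - V = GM*m*(2/r - 1/(2a)),
# which is positive everywhere since r <= a*(1 + e) < 2a.
GM, m = 1.0, 1.0       # gravitational parameter and orbiting mass
a, e = 1.0, 0.6        # semi-major axis and eccentricity (illustrative)

def lagrangian(r):
    """Kinetic minus potential energy at radius r on the orbit."""
    T = 0.5 * m * GM * (2.0 / r - 1.0 / a)   # vis-viva kinetic energy
    V = -GM * m / r                          # Newtonian potential energy
    return T - V

# Check from perihelion to aphelion: T - V is positive everywhere
for r in (a * (1 - e), a, a * (1 + e)):
    assert lagrangian(r) > 0
```

For a circular orbit the virial theorem gives T = −V/2, so T − V = −3V/2 > 0; the retarded-potential question raised above concerns the interpretation of this surplus, not the arithmetic.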

It serves to save the time lapse of the forces in terms of energy, like the Lagrangian but in a more explicit way. With retarded potentials it is the energy that is not present at a given moment, which is another way of demanding an action principle. So this seems to be the limit of application of contemporary forces themselves.

It is said of Newtonian gravity that it is the first fundamental law expressed as a differential equation, but in the specific case of the orbit it is the differential or contemporary description that fails notoriously. According to the vectors of an invariable force, the orbit should open and the planet move away; there is no way to grant stability. The Lagrangian version in terms of energy arises to obviate this difficulty, not because it is more convenient for complex systems. So in the strict sense there is no local conservation of forces here; what there are are discretionary derivatives based on the integral action, following from calculus, and this is the criterion by which a theory is considered «local».

The «unreasonable effectiveness of mathematics in the natural sciences», as Wigner put it, is only unreasonable if we forget what the procedure was. The fundamental forces of nature, independent of human mechanics, which we try to confine within the three principles of mechanics, satisfy them only indirectly: Newton's gravity and Maxwell's electromagnetism are theories with an indisputably integral origin and an interpretation as differential as their application. Nature does not obey our equations; our equations are rather the reverse engineering of nature, and it is only normal that our engineering, like our knowledge, always falls short.

Noskov's longitudinal oscillations depend on three variables: distance, force of interaction and phase velocity. Their length is directly proportional to the phase velocity and inversely proportional to the distance between the bodies and to the force of interaction. Two cardinal formulas can be derived naturally from this: Planck's radiation law and the de Broglie relation, which should include a phase velocity. This phase factor could also be applied to the so-called mass defects of nuclear physics.
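For reference, the standard de Broglie relations with the phase velocity made explicit (their identification with Noskov's longitudinal oscillation length is his own step, not standard theory):

```latex
\lambda = \frac{h}{m v}, \qquad E = h\nu, \qquad
v_{p} = \nu\lambda = \frac{c^{2}}{v},
```

so that the phase velocity $v_p$ and the particle (group) velocity $v$ satisfy $v_p\, v = c^2$.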

We know that the de Broglie relation applies without discussion to diffraction experiments showing waves in matter, from electrons to macromolecules. Schrödinger's equation itself is a hybrid of oscillations in a medium and in the moving body, which led Born, famous for his statistical interpretation, to consider it in terms of longitudinal waves too; Planck's constant is reduced to a local constant relevant only for the electromagnetic force and masses of the order of the electron's. Another proof that Weber's force has something to say about the electron at the atomic level is the direct derivation that Vannevar Bush made, as early as 1926, of the fine structure of the hydrogen levels using Weber's equations, without the assumption of mass change with velocity. Broadly speaking at least, it is not that hard to reconcile quantum mechanics and classical relational mechanics.

In this way, Weber's force and its retarded potential stand in the best possible position at the crossroads of Maxwell's field or classical continuum theory, Special and General Relativity, and Quantum Mechanics, being earlier and immensely simpler than all of them. And although it will surely be said that the aforementioned connection with quantum mechanics is too generic, the mere fact that it occurs without forcing things and without a chain of ad hoc postulates is already something.

There are, of course, many ways to rewrite quantum hieroglyphics and send them back to a classical conception; but few, if any, as direct as the royal road opened by Weber, which has historical and logical precedence over the others. One has complete freedom to tread other paths, but it is always advisable to contrast them with the original one.

Weber's law can easily be transformed into a field theory by integrating over a fixed volume, as shown by J. P. Wesley, and then it is revealed that Maxwell's equations are simply a particular case of the former. In this way, and with retarded potentials, high-speed variations and cases with radiation become fully workable. Wesley also proposed other modifications of Weber's law that we will not touch on here.

Noskov's connection with Planck's radiation formula is unavoidable, since the retarded potential by itself demands an action principle. On the other hand, if the case of the ellipse in the Newtonian terms of an invariable force and an invariable innate motion leads to a gradually opening orbit, what would be hopelessly lost here is the closed and reversible system, as if we had a dissipation rate. A virtual dissipation rate, it is understood, because we already know that the orbit is conserved. Since we have ellipses both in the micro- and macrocosm, this is therefore the most obvious form of connection between the reversibility of mechanics and thermodynamic irreversibility. Thus we can situate relational mechanics at the junction of classical mechanics, quantum mechanics and thermodynamics.

One might fear, as Robert Wald says, that an extended particle irreducible to the point-particle case will be lost in details and unable to yield a universal equation; but in Weber's electrodynamics, unlike Maxwell's, one can work with extended particles knowing that one can always get back to point particles with solutions that make sense. Such neutrality is absolutely desirable in every possible respect.

**Quantum thermodynamics and open meta-stable systems**

Physicists and mathematicians, in nearly equal measure, try to get away from thermodynamics as much as they can, and if they have to deal with it, it is at least expected to be the most habitual equilibrium thermodynamics, not that of open systems far from equilibrium and in exchange with the environment. This last case is supposed to be that of living beings, characterized by levels of complexity far away from anything germane to fundamental physics.

Distrust could be justified if what is at risk is the closed and strictly reversible character of the so-called fundamental laws —the most precious treasure of science. But on the other hand, there is no loss that cannot be turned into a gain; and in this case, the prize would be nothing less than the return of physics to the general current of fire and life.

Macroscopic irreversibility can in no way be derived from microscopic reversibility. Boltzmann and many after him have assumed a highly refined statistical argument, but there are also topological arguments, free of statistics, that show irreversible aspects in electromagnetism independently of scale or metric. Beyond any sophistication, statistical or not, we never see the emitted light rays returning to their source, which already tells us that theoretical physics has a very peculiar criterion about what is irreversible and what is fundamental.

Finally, experimental evidence has begun to arrive and could multiply in the next few years if interest really exists; and interest exists, for obvious technological reasons. Needless to say, the experimental and technological environment in microscopic physics has changed beyond recognition. New disciplines such as continuous quantum measurement, quantum feedback and quantum thermodynamics are flourishing, enabling an increasing filtering of noise and an ever better distinction between quantum and thermal fluctuations.

The imprint of atomic irreversibility has to be proportionally small and only becomes macroscopically relevant with the usual large numbers, but there are and will be more and more ways to measure and detect it. The subject is closely related to that of the point electron (one more idealization), which is an excellent approximation at long distances but which at close enough distances is bound to fail. Macroscopic irreversibility should be only «*the global superposition of the microscopic irreversibility due to the large number of atoms and molecules in matter*».

Following the logic of miniaturization, that of nanomachines or quantum information, there is much more interest in taking dissipation into account than in ignoring it, since it sets the very limits of operation for these devices. A different story, though, is that of reporting data transparently. Once again, applied physics will find new problems for theoretical physics, but we can predict that people will keep saying that everything is a fantastic new confirmation of the incredible quantum mechanics.

It seems, then, that it is the very interaction between photons and matter particles, what else, that affects in an irreversible way the electron and the center of mass of the atom; some have already advanced an estimate on the order of 10⁻¹³ joules. On the other hand, if we can only imagine electrons as being made of electromagnetic fields (or of the constitutive tensions and deformations that translate them into their most material limit), sooner or later it is inevitable to think, as has been done so often, that particles of matter are only trapped light transforming linear into angular momentum. At least, laboratories have been trying to create electrons and positrons from photons for years, and the prospect of doing so seems reasonable.

The idea that reversible mechanisms depend on irreversible dynamics, that closed systems are drawn on an open background, or that irreducible and ideal particles are not such, sounds to the ears of the theoretical physicist like a loss of status, a fall from the mathematical firmament of pure ideas. But it is the only outcome to be expected if things are taken far enough, even respecting the margin of validity of closed and reversible systems or point particles. Real things tend to have structure.

This does not mean that an electron should decay, photons aside, into other material particles. It is enough that its limiting surface and its spin have a differential structure, not necessarily geometric, that can account for its continuous evolution and its always ephemeral configurations. That is, the structure describes the momentary relationship with the environment, not an internal composition in terms of other equally abstract particles. Models for this have already been suggested. Once we have a limiting surface for a volume, we also have a framework for its changes: the structure of its spin, density, statistics, etcetera. I do not know why we should give all this up in the name of a mere principle divorced from physical reality.

As for the magical ring of reversibility, as long as physicists believe it is theirs by right, they will never be able to receive it as a gift; for the only thing that makes it valuable is the background from which it is conditionally, precariously constituted.

V. E. Zhvirblis studied osmotic and electrical rings in perpetual operation and came to the conclusion that systems in which there are stationary forces cannot be isolated. Quantum mechanics, which pretends to do just that, is, from a thermodynamic point of view, illegitimate. It is curious that this is considered completely normal at the quantum level but is banished from the macroscopic level in cases as clear as Lazarev's koltsar, when the only thing we have to accept is that isolated systems are simply not possible.

The problem is that in these cases the quantities, although measurable, are not controllable, and physics is based first and foremost on controllable quantities such as forces. The only way to solve the paradox, as Zhvirblis observes, is if the interaction forces in thermodynamic systems are described only in terms of thermodynamics itself.

Thus all systems, from particles and atoms to living beings and stars, could be seen as metastable systems, islands temporarily far from equilibrium supported by their own internal equilibrium laws.

Current physics constantly speaks of energy conservation, but atoms themselves look like unverifiable perpetual motion machines, not to mention energy and matter themselves, which are conserved but emerged from nowhere in a remote past. It would be more interesting to see what a system is capable of doing in the present than twelve billion years ago, to try to guess why a real entity such as a particle is not a point, than to imagine how the whole universe comes out of a point without extension. The two questions are more closely related than we think, but one leads to answers here and now, while the other just takes us as far away as possible.

It is not so difficult to find the thermodynamic imprint of particles and their origin in the continuum: it is enough to look for them with a zeal similar to the one that has been put into ignoring both, into separating them from what is really fundamental.

**Relational Clock, Piston Engine, Whirlwind Computer**

**Relational statistics**

For the ancients, to ask why the world existed was like imagining how fire came out of water; and to ask about life was like figuring out how water incorporated fire again without extinguishing it. Two apparent impossibilities that nevertheless balanced each other and acquired the intangible consistency of facts. In physics, the only way to create a similar relationship would be to try to see how the reversible comes out from the irreversible, and how a closed mechanism makes the omnipresent trace of heat disappear for a while. Surely we will never be able to explain one or the other, but perhaps their perspicuous balance would free us from the compulsive need for explanations.

Nothing can replace rectitude in reasoning, but in modern physics it is impossible to go very far without the use of a certain statistical apparatus that is, we might say, the inseparable companion of calculus. Not only quantum mechanics: even something as basic as the classical electromagnetic field has undeniable statistical aspects. And above all, we would like to find a different general position for thermodynamics.

The reasons for creating a competent statistical-relational apparatus are multiple and range from the most obvious to the deepest.

The purely relational, in its very emptiness, reminds us of reversibility itself; of heat, on the contrary, we are left only with its statistical fingerprint. To see how they interpenetrate we first need to make a broad detour. In physics we will never have too much perspective, since perspective itself is already the best part of knowledge.

Relational statistics can be seen, in its simplest version, as a modality of dimensional analysis, one which tries to bring the equations, constants and units of current theories as close as possible to Fourier's Principle of Homogeneity, generalized in more recent times by Assis as the Principle of Physical Proportions: the desideratum that all the laws of physics should depend only on ratios between known quantities of the same type, and therefore cannot depend on dimensional constants. Such were, for example, Archimedes' laws of statics, Hooke's original constitutive law, or, shifting towards dynamics, Weber's force law of electrodynamics.
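As an illustration of what the principle demands, Weber's force between two charges can be written (here in Gaussian units) so that every correction to the Coulomb term is a dimensionless ratio of like quantities:

```latex
% Weber's force between charges q_1, q_2 at separation r,
% with relative radial velocity \dot{r} and acceleration \ddot{r}:
F \;=\; \frac{q_1 q_2}{r^2}
\left( 1 \;-\; \frac{\dot{r}^2}{2c^2} \;+\; \frac{r\,\ddot{r}}{c^2} \right)
```

Each term in the parenthesis is a pure number, $\dot{r}/c$ and $r\ddot{r}/c^2$ being ratios of homogeneous quantities; $c$ appears not as an extraneous dimensional constant but as the quantity Weber and Kohlrausch measured, the ratio between electrostatic and electrodynamic units.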

It can be said that this principle of homogeneity is an ideal, not only of Greek science, but of science in general; it is a universal zenith or pole. Needless to say, nearly all the laws of modern physics violate this requirement, so the issue is not to discard them, which is out of the question, but to bear this factor in mind for what it might one day allow us to see.

It is not necessary to expand on the more standard modalities of dimensional analysis, already well known, which over time have extended into other fundamental combinatorial branches such as group theory. On the other hand, it is not superfluous to bear in mind that when we say «relational statistics» we are uniting in a single concept ideas that in physics are rather antagonistic: the purely relational is the most transparent, the purely statistical the most opaque, at the physical level.

In any case, it is always useful to start with dimensional analysis before proceeding to the greater complexities of statistical analysis. Even today the interaction between these two branches of analysis is scarce, due in part to the reputation for superficiality that dimensional analysis has among highly creative theoretical physicists, more concerned with the predictive capacity of their equations than with their cleanliness or legitimacy. Besides, an elementary dimensional analysis often calls into question the basis of many of their assumptions, such as the uncertainty principle.
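A minimal sketch of what such an elementary check looks like, with dimensions kept as exponent vectors over the mechanical base (M, L, T); the helper names here are ours, purely illustrative, not any standard library:

```python
# Dimensional bookkeeping: a dimension is a tuple of exponents
# over the base (mass M, length L, time T).
from fractions import Fraction

def dim(M=0, L=0, T=0):
    return (Fraction(M), Fraction(L), Fraction(T))

def mul(a, b):
    # dimensions of a product: exponents add
    return tuple(x + y for x, y in zip(a, b))

def div(a, b):
    # dimensions of a quotient: exponents subtract
    return tuple(x - y for x, y in zip(a, b))

MASS, LENGTH, TIME = dim(M=1), dim(L=1), dim(T=1)
VELOCITY = div(LENGTH, TIME)
MOMENTUM = mul(MASS, VELOCITY)
ENERGY   = mul(MOMENTUM, VELOCITY)
ACTION   = mul(ENERGY, TIME)   # the dimension of Planck's constant

# The relation Δx·Δp ≳ ħ is at least dimensionally homogeneous:
# position times momentum has the dimension of action.
print(mul(LENGTH, MOMENTUM) == ACTION)  # True
```

The same bookkeeping immediately exposes where a law leans on a dimensional constant: Coulomb's force written as q₁q₂/r² is homogeneous only after 1/4πε₀ is inserted by hand, which is exactly the kind of dependence the principle of homogeneity objects to.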

An example of relational statistical analysis is that proposed by V. V. Aristov. Aristov introduces a constructive and discrete model of time as motion, using the idea of synchronization and physical clocks that Poincaré introduced precisely for the electron problem. Here every moment of time is a purely spatial picture. But it is not only a question of converting time into space, but also of understanding the origin of the mathematical form of the laws of physics: «*The ordinary physical equations are the consequences of the mathematical axioms, which are “projected” onto physical reality by means of the fundamental instruments. One may assume that it is possible to construct different clocks with different structure, and in this case we would have different equations for the description of motion*.»

With a model of rigid rules for geometry and clocks for time, a space-time model of dimensionless variables is created. In the exposition of his idea Aristov deals basically with the Lorentz transformations, the axiomatic construction of a geometry, and the most basic relation of the quantum uncertainty principle; and if this is mandatory on one side of the map, on the other side we can put Weber's own relational mechanics with its derivations, just as we can place at the center, as a clock, an extensive model of the electron, using its vectors for many other correlations.

A chronometric approach to relational statistics is opportune because of the many discrete aspects that will never cease to exist in physics and that are independent of quantum mechanics: real particles, diverse aspects of waves, collisions, acts of measurement (time measurements in particular), and the cuts imposed at the axiomatic level are all discrete.

The performance of a relational network is cumulative. Its advantages, like those of the physics that bears that name (and of information networks in general), are not noticeable at first glance but increase with the number of connections. The best way to prove this is to extend the network of relational connections. All this is a matter of collective work and intelligence. With arbitrary cuts to relational homogeneity, destructive interference and irrelevant redundancy increase; conversely, the greater the relational density, the greater the constructive interference. I do not think this requires demonstration: totally homogeneous relationships allow higher-order degrees of inclusion without obstruction, whereas equations made of heterogeneous elements can include equations within equations only as opaque elements, as knots that cannot be unravelled.

From continuity and homogeneity comes the unwritten legitimacy of laws, just as from primordial waters came the sovereignty of ancient kings and emperors.

In his sketch of relational statistics Aristov gives no consideration to thermodynamics, but it is precisely thermodynamics that would have to give statistics a special relevance. Instead of a Poincaré-style clock, one could introduce, for example, a «piston engine» like the one exemplified by Zhvirblis, in order to generate forces without leaving the thermodynamic realm.

Let us return to our peculiar perspective exercise. Physics would have a «relational» north pole from which it aspires to explain everything as mere relations of homogeneous motion, and a «substantial» south pole from which it could give an authentic mechanical explanation of phenomena, generally with a kind of medium that provides continuity for the transmission of forces between separate parts of matter.

Trying to satisfy both extremes, the compromises of the history of physics left us in the middle, with Newton's physics of absolute magnitudes, or with modern field theories, which attempt to maintain continuity but make use of the wrongly termed universal constants, in fact absolute magnitudes just as in Newton.

This apparently philosophical disquisition contains the issue of the universal synchronizer. Between fire and water one would not put a cylinder and a piston, but perhaps a whirlwind, as in Newton's famous bucket experiment, distant heir to a more daring one already conceived by the father of the theory of the four elements: Empedocles noticed that swinging a bucket of water in a vertical circle prevented the water from falling, counteracting, in other words, gravity.

The Newtonian bucket experiment and its centrifugal force oblige us to take a stance. What is the reason for the curvature of the water's surface? Newton says absolute space; Leibniz, Mach and relational physics say the relationship with the rest of the objects, including the distant stars. For Newton, even if we could eliminate all the surrounding matter, the same phenomenon would occur; for relational physics such a thing is impossible. And what would be the substantialist position? It would say that an absolute medium of reference is required, and that this medium is in any case the only thing that can transmit the influence of the bodies in the environment, whether distant or not.

There do not seem to be more conceivable positions than these. And yet none of the three seems satisfactory to us. The assertion that there are absolute magnitudes independent of the environment, though preserved in all modern physics, is metaphysical in nature. On the other hand, pure kinematic relationships will never be able to explain physical reality, however desirable the principle of homogeneity may be. Finally, the determination of a framework independent of measuring devices seems to violate the relational principle, and the relativistic one that came later; apart from the fact that the non-uniqueness of the action principles precludes the identification of unique causes.

But a fourth position, such as that of Mario Pinheiro, can be held: one can affirm that there is no kinematics without irreversibility. Pinheiro observes that what matters here is the transport of angular momentum, which holds the balance between centrifugal force and pressure.

I believe this answer reveals at once the part of truth to be found in each of the three postures and their manifest insufficiency. Pinheiro advocates a new variational principle for out-of-equilibrium rotating systems, and a mechanical-thermodynamic time, in a set of two first-order differential equations. There is a balance between the minimum variation of energy and the maximum production of entropy that fits the simple classical examples such as free fall, and it would have to be relevant for dynamics in general and electrodynamics in particular, because the conversion between angular and linear momentum should be central to it, no matter that this is not how the story is usually told.

As a sign of the times, efforts to integrate dynamics with the entropy/information concept are increasing dramatically. The reasons for this are diverse but convergent: the ever-increasing tendency to consider matter as mere support for information, the exhaustion of viable dynamical models, the gradual introduction of statistical factors and of higher-level physical magnitudes. These trends are not going to subside. However, so far the use of entropy helps little or nothing to understand where the dynamical regularities we observe arise from.

It is also said that gravity takes entropy to its limit by area, in singularities, though our ordinary experience says quite the opposite: that it is the only force that seems to compensate and revert it. In that picture entropy reaches its extreme because the theory of gravity as an absolute force inescapably has to dispose of everything else; in a relational theory there is no place for gravitational singularities, because the force decreases as velocity increases. On the other hand, it is curious that in Newtonian physics the forces that produce deformation, which is the only thing one should expect, are considered pseudo-forces, while the fundamental force causes no deformation of bodies in motion and acts only through potential energy in the static case.

Surely it is impossible to understand the relationship between the reversible and the irreversible, that great key to nature, while subordinating nature exclusively to prediction. It is clear that what is predictable is regular, and that this tends to confer on it, to a certain extent, the rank of law. But to what extent? Without due contrast, we will never know. We pride ourselves on the predictive power of our theories, but prediction alone need not be knowledge; it can rather be blindness.

Yes, the power of prediction is also a power of obfuscation. There is nothing more practical than a good theory, and a good theory is the one least artificial and most respectful of the case presented. Our knowledge is always very limited, whatever our degree of information, and respect for the little we know, aware that it is intrinsically incomplete, is also respect for everything we do not know. It is clear that ad hoc postulates, the teleology of our action principles, or inverse mathematical engineering drastically reduce the quality of our generalizations, no matter what degree of universality we attribute to them. It is not a matter of detracting from the merits of any human achievement, but of being aware of its limitations.

R. M. Kiehn already spoke in 1976 of «retrodictive determinism»: «*It seems that a system described by a tensor field can be statistically predictive, but retrodictively deterministic*.» It is conceivable, for instance, that we can deduce initial conditions from the final global deformation of a solid, while conversely we can only calculate probabilities from the initial condition and the applied force.

The irreversible, dissipative part of electromagnetism seems to be included within the intrinsic covariance that Maxwell's (and Weber's) equations show in the language of exterior differential forms. There has to be another way of reading «the book of nature», and this form cannot be a simple inversion; it cannot be merely symmetrical or dual with respect to predictive evolution. Ultimately it is a matter neither of going forwards nor backwards, of future or past, but of what can be revealed between the two.

The whole of quantum electrodynamics can be retrospectively derived from the classical Huygens principle, which is a principle of homogeneity, just as the conservation of momentum derives directly from the homogeneity of space. However, everything suggests that gravity exists due to the heterogeneous character of time and space. This brings us back to our previous considerations in terms of extremes.
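The second half of that sentence is the textbook Noether-type argument, not anything specific to the authors cited here; it can be made explicit in one line for a Lagrangian coordinate $q$:

```latex
% Homogeneity of space: L(q+\epsilon,\dot{q}) = L(q,\dot{q})
% for any uniform translation \epsilon, hence
\frac{\partial L}{\partial q} = 0
\quad\Longrightarrow\quad
\frac{d}{dt}\,\frac{\partial L}{\partial \dot{q}}
\;=\; \frac{dp}{dt} \;=\; 0,
\qquad p \equiv \frac{\partial L}{\partial \dot{q}}
```

If a uniform translation leaves the Lagrangian unchanged, the Euler-Lagrange equation reduces to the conservation of the momentum conjugate to that translation; any inhomogeneity of space re-enters as a force term on the right-hand side.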

The enigma of the relationship between dissipation and usable mechanical work, seen externally as a problem of natural laws, is one and the same as that of the relationship between effort (which is not the same as work), work, and reversible transactions in the logic of capital. It is not surprising that thermodynamics developed in the very years of the vindication of work as an autonomous category, and that it did so between considerations of physiology, blood and heat (Mayer) and those of metal and machines (Joule), in the 1840s. There is still an unexpected circle to be closed here, and the more fully we manage to close it, the greater will be the repercussions on our vision of «external» nature (exploitable resources) and «internal» nature (society). And perhaps then we will begin to understand fully to what extent exploiting nature is the same as exploiting ourselves.

**The double-speak of modern physics**

It is well known that since the 17th century scientists have been pleased to encrypt their communications in order to claim priority while avoiding giving an advantage to their competitors. Moreover, even in the time of Galileo or Newton there was great awareness of the strategic, commercial and military value of knowledge as apparently detached from the ground as astronomy or the calculation of longitude at sea.

Thus, for centuries science developed in the West in a delicate balance between its eagerness to expand and communicate and the convenience of masking its procedures to a greater or lesser extent. If this happened even in the purest and most abstract disciplines, such as number theory, we can imagine a little of what happened in the applied sciences.

When quantum mechanics and the theory of relativity developed in the first decades of the twentieth century, although a reinvention of the public image of science was certainly already underway, it can still be assumed that physicists communicated their theories with some spontaneity, and that the rush to advance rapidly and at any cost created a kind of «natural selection» among the supply of methods, hypotheses and interpretations. But then the Second World War came, with things like the Manhattan Project (Big Science, in short), and scientists, as Oppenheimer admitted, lost whatever little innocence they had left.

It is to be assumed that it was around that time, or shortly thereafter, that it was fully understood that the two great new theories served as much to conceal as to communicate, which for areas as compromised as particle physics was especially convenient.

Why was someone as practical and well-informed as Bush wasting his precious time on the forgotten Weber theory in 1926, just when quantum mechanics was in full swing? The inevitable Feynman himself, spokesman for the new algorithmic style in physics and a Manhattan Project man, once admitted that the law of retarded potentials covered all cases of electrodynamics, including the relativistic corrections. Even Schwinger worked for the government on radar development. It is no exaggeration to say that the gestation years of QED mark the epoch when the dividing line between theoretical and applied physics, with all that this implies for both, fades away forever.

Simply put, it is too hard to believe that armies of talented physicists have not dared to leave the superfluous relativistic framework in particle physics when any amateur who considers the matter can see that it is a full-fledged embargo on any steady progress in the field. That this is not said is even more significant. Of course, in order to convince us otherwise, an unprecedented new specimen of physicist came out of nowhere: extroverted, casual, persuasive, fantastically gifted for communication and public relations. The spitting image of superficiality, and with a remarkable resemblance to Cornel Wilde. Each shall judge for himself.

We will pass over the things that have been said about the atomic bomb and the theory of relativity, because the atrocities of the advertising industry could still embarrass physicists; we can always blame the journalists. It is obvious that relativity has had practically nothing to do with the development of nuclear physics, and just as obvious should be how little quantum electrodynamics has had to do with the myriad applied achievements of all these last decades.

The famous «shut up and calculate» of the aforementioned physicist and spokesman conveys wonderfully what is expected of the new researcher. Calculation ought to be only a third of the work in physics, theoretical or not. If we divide the sequence of any human activity into principles, means and ends, in physics calculation or prediction would be only the means, standing between the intelligent use of principles (which are present at all times) and interpretations, which, far from being a subjective or philosophical luxury, are indispensable when it comes to making sense of the mass of empirical data, continuing research, and motivating the synthesis of applications. Ends, as purposes, are both interpretations and applications.

So one does not see why in physics interpretation should be of less practical interest than calculation, and I think these arguments are so basic that anyone can understand them. Those concerned only with the means will themselves be used only as a means. Besides, quantum mechanics and relativity, rather than facilitating calculations, often make them unusually difficult. Compare, for example, the calculations required for the simplest problem in general relativity with those of a Weber-type law for gravity, not to mention the somewhat more complicated cases where the former becomes completely unmanageable. The prescription of the relativistic particle has a very clear purpose, namely to make impossible any calculation with physical meaning. It goes without saying that experimental and applied achievements have been obtained in spite of this stumbling block, by ignoring it rather than observing its prescriptions.

«Shut up and calculate» simply means «calculate what I tell you and don't even think of anything else». A curious slogan coming from someone glorified as an incarnation of unrestricted originality and of the self-made genius. He could also have said, «Keep it superficial and forget about getting to the bottom of the matter forever.» Calculation is completely blind if it is not properly coordinated with principles and interpretations; one can even argue that this coordination is more important than everything else. But we should not see anything inconsistent in this; on the contrary, it is a faithful translation of Big Science double-speak and its well-defined priorities behind the ever-present screen of public relations.

It is well known how in 1956 Bohr and von Neumann came to Columbia to tell Charles Townes that the idea of the laser, which required the perfect phase alignment of a large number of light waves, was impossible because it violated the inviolable Heisenberg uncertainty principle. The rest is history. Now, of course, what is said is that the laser is just one more triumph of quantum mechanics, of which it is a trivial particular case.

The above case has been not the exception but the general trend. We are told that quantum mechanics is something very serious because its mass of experimental evidence surpasses that of any other theory; but to me what seems more serious is the work of engineers and experimenters trying very hard to figure out causal relationships and applications without the support, and even against the obstruction, of an interpretation that prohibits physical interpretation and seems expressly conceived to sabotage any attempt at concrete application.

And as for the incomparable predictive power of quantum mechanics and QED, which cannot even picture the collapse of the wave function it postulates, we must understand it as, what else, a power of a posteriori prediction. Today they also predict the Berry phase, which now turns up everywhere, although only with the opportune extensions of that which is supposedly not even «a theory», but hard facts only. It is clear that procedures that subtract infinity from infinity in a recurrent and interminable way can be bent to obtain practically any result; yet in spite of all this we are still told that it is a very restrictive theory. Perhaps it is, in the light of some abstract principle of symmetry among other principles still more abstract. It will be, no doubt, after all the imaginable nuts and bolts have been adjusted to the case in hand. Who said self-interaction? This is already a self-adjusting theory. And best of all, you can make an incoherent theory say anything.

Nothing could be more castrating for the physicist than to demand of him exclusive fidelity to the calculations. All the more so if those calculations are so unscrupulous that they are used in an openly teleological way to replicate certain results: pure reverse engineering, a hacking of nature to which one wants to give the category of Law.

If what is hacked from nature soon becomes Law, then it is not surprising that it is used as Law lest others hack into it in turn. And that is why today the Great Theories are used as the most effective blockade against technology transfer.

Nothing is more practical than a good theory. But those who are interested only in governing nature are not worthy of illuminating it, let alone understanding it. So surely we have the level of understanding that our social structure can tolerate, and this is only natural: first because scientific knowledge is a pure social construction, and second because every social construction is a second nature trying to isolate itself from a supposed first nature.

Today, transformation optics and the anisotropy of metamaterials are used to «illustrate» black holes, or to «exemplify» and even «design», so it is said, different space-times. And yet all that is being manipulated are the macroscopic properties of the old Maxwell equations. Can distortion and distraction be taken further? And the fact is that all this could serve to investigate the uncontrollable aspects of the electromagnetic continuum, which are but the classical form of the famous non-local aspects of quantum mechanics. Have we not seen that, if Schrödinger's wave function describes vibrations in a moving body and in the medium, classical electromagnetic waves are likewise a statistical average of both things?

In something as apparently well-trodden as the electron we can find not only the key to particle physics, but also the limits of application of nanotechnologies, quantum computing, and a legion of emerging new technologies. But if all these are matters of power and interest, we can already forget the truth.

That one cannot serve truth and power simultaneously is known to all. That scientific publicity develops its stories and narratives as if this conflict did not exist is only to be expected. If the current accounts delay scientific development a little or a lot, it is not something we should regret either, since more technological advances, in the absence of other things, can only mean more disorder.

Fortunately, those who in one way or another obstruct knowledge are not capable of developing second-order knowledge free from their own distortions either; the complexity of their compromises strictly limits them. All the cunning of the world, all the experimental-statistical-mathematical-computational arsenal, cannot replace the sense of rectitude, the only one capable of delving into the unlimited promise of simplicity.

**References**

N.K. Noskov, *The theory of retarded potentials versus the theory of Relativity*

N.K. Noskov, *The phenomenon of the retarded potentials*

J. P. Wesley, *Weber electrodynamics*

A. K. T. Assis, *Relational Mechanics: An Implementation of Mach's Principle with Weber's Gravitational Force*, Apeiron, Montreal, 2014

T. B. Batalhao, A. M. Souza, R. S. Sarthour, I. S. Oliveira, M. Paternostro, E. Lutz, R. M. Serra, *Irreversibility and the arrow of time in a quenched quantum system*

U. Lucia, *Macroscopic irreversibility and microscopic paradox: A Constructal law analysis of atoms as open systems*

V. E. Zhvirblis, *Stars and Koltsars*, 1996

N. Mazilu, M. Agop, *Role of surface gauging in extended particle interactions: The case for spin*

N. Mazilu, *Mechanical Problem of Ether*, Apeiron, Vol. 15, No. 1, January 2008

V.V. Aristov, *On the relational statistical space-time concept*

R. M. Kiehn, *Retrodictive Determinism*

M.J. Pinheiro, *A reformulation of mechanics and electrodynamics*