Golden mean, statistics and probability

It has been said for some time that in today's science "correlation supersedes causation", and by correlation we obviously mean statistical correlation. But ever since Newton, physics has not been much concerned with causation, nor could it be; so rather than a radical change we have only a steady increase in the complexity of the variables involved.

In the handling of statistical distributions and frequencies it makes little sense to talk about false or correct theories, but rather about models that fit the data better or worse, which gives this area much more freedom and flexibility with respect to assumptions. Physical theories may be unnecessarily restrictive, and conversely no statistical interpretation is truly compelling; but on the other hand, fundamental physics is increasingly saturated with probabilistic aspects, so the interaction between both disciplines continues to tighten.

Things become even more interesting if we introduce the possibility that the principle of maximum entropy production is present in the fundamental equations of both classical and quantum mechanics —not to mention if basic relations between this principle and the continuous proportion φ were eventually discovered.

Possibly the reflected wave/retarded potential model we have outlined for the circulatory system gives us a good idea of a virtuous correlation/causation circle that meets the demands of mechanics but suspends —if not reverses— the sense of the cause-effect sequence. In the absence of a specific study in this area, we will be content for now to mention some more circumstantial associations between our constant and probability distributions.

The first association of φ with probability, combinatorics, and the binomial and hypergeometric distributions is already suggested by the presence of the Fibonacci series in the polar triangle mentioned earlier.

When we speak of probability in nature or in the social sciences, two kinds of distribution come first to mind: the almost ubiquitous bell-shaped normal or Gaussian distribution, and the power-law distributions, also known as Zipf or Pareto distributions, or, in the discrete case, zeta distributions.
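
For reference, and merely as a reminder rather than anything new to the argument: the Gaussian density falls off as exp(-(x-μ)²/2σ²), while a discrete power law or zeta distribution assigns P(k) = k^(-s)/ζ(s) for k = 1, 2, 3…, so its tail decays only polynomially; this slow decay is what makes hierarchies, catastrophic events and scale-free structures so much more probable under it.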

Richard Merrick has spoken of a “harmonic interference function” resulting from harmonic damping, or in other words, the square of the first twelve frequencies of the harmonic series divided by the frequencies of the first twelve Fibonacci numbers. According to the author, this is a balance between spatial resonance and temporal damping.
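
A minimal sketch of one plausible reading of this description, with the first twelve harmonics squared and divided term by term by the first twelve Fibonacci numbers, might look as follows; the choice of starting values and the absence of any normalization are our own assumptions for illustration, not Merrick's actual procedure:

```python
# One plausible reading of Merrick's "harmonic interference function":
# the squares of the first twelve harmonics divided term by term by the
# first twelve Fibonacci numbers (illustrative only; starting values and
# lack of normalization are assumptions, not Merrick's own code).

def fibonacci(n):
    """Return the first n Fibonacci numbers: 1, 1, 2, 3, 5, ..."""
    seq = [1, 1]
    while len(seq) < n:
        seq.append(seq[-1] + seq[-2])
    return seq[:n]

harmonics = list(range(1, 13))   # harmonic numbers 1..12
fibs = fibonacci(12)             # 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144

interference = [h ** 2 / f for h, f in zip(harmonics, fibs)]

for h, f, v in zip(harmonics, fibs, interference):
    print(f"harmonic {h:2d} / Fibonacci {f:3d} -> {v:7.3f}")
```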

In this way he arrives at what he calls a “symmetrical model of reflexive interference”, formed from the harmonic mean between a circle and a spiral. Merrick insists on the transcendental importance for all life of its organization around an axis, which Vladimir Vernadsky had already considered to be the key problem in biology.

Richard Merrick, Harmonically guided evolution

Merrick’s ideas about thresholds of maximum resonance and maximum damping can be put in line with Pinheiro’s thermomechanical equations, and as we have indicated they would have a wider scope if they contemplated the principle of maximum entropy as conducive to organization rather than the opposite. Merrick also elaborates a sort of musical theory of the privileged proportion 5/6-10/12 at different levels, from the organization of the human torso to the arrangement of the double helix of DNA, seen as the rotation of a dodecahedron around a bipolar axis.

*

Power laws and zeta distributions are equally important in nature and in human events, and are present, among many other things, in fundamental laws of physics, in the distribution of wealth among populations, in the size of cities, and in the frequency of earthquakes. Ferrer i Cancho and Fernández note that “φ is the value where the exponents of the probability distribution of a discrete magnitude and the value of the magnitude versus its rank coincide”. It is not known at this time whether this is a mere curiosity or whether it will allow us to deepen our knowledge of these distributions [37].
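
The coincidence is easy to state if we assume the standard relation between the two exponents: when a discrete magnitude has a probability distribution P(k) ∝ k^(-β) and its rank plot follows k(r) ∝ r^(-α), the exponents are linked by β = 1 + 1/α; asking them to coincide, α = β, gives α² = α + 1, whose positive root is precisely (1+√5)/2 = φ.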

Zipf or zeta distributions are linked to hierarchical structures and catastrophic events, and also overlap with fractals in the space domain and with the so-called 1/f noise in the time domain. A. Z. Mekjian makes a broader study of the application of the Fibonacci-Lucas numbers to statistics that include hyperbolic power laws [38].

I. Tanackov et al. show the close relationship of the elementary exponential distribution with the value 2 ln φ, which makes them think that the emergence of the continuous proportion in nature could be linked to a special case of Markov processes —a non-reversible case, we would venture. It is well known that exponential distributions have maximum entropy for a given mean. An incomparably faster convergence to the value of the number e can be obtained with Lucas numbers, a generalization of the Fibonacci numbers, than with Bernoulli’s original expression, which is enough food for thought; non-reversible walks likewise converge faster than the usual random walk [38].

Edward Soroko proposed a law of structural harmony for the stability of self-organized systems, based on the continuous proportion and its series considering entropy from the point of view of thermodynamic equilibrium [39]. Although here we give preference to entropy in systems far from equilibrium, his work is of great interest and can be a source of new ideas.

It would be desirable to further clarify the relationship of power laws to entropy. The use of the principle of maximum entropy seems particularly suitable for open systems out of equilibrium and with strong self-interaction. Researchers such as Matt Visser think that Jaynes’ principle of maximum entropy allows a very direct and natural interpretation of power laws [40].

Normally one looks for either continuous or discrete power laws, but in nature we can appreciate a middle ground between the two, as Mitchell Newberry observes with regard to the circulatory system. As usual in such cases, the natural model calls for reverse engineering. The continuous proportion and its series offer an optimal recursive procedure for passing from continuous to discrete scales, and its appearance in this context could be natural [41].
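
A minimal way to see why a recursive cascade of scales fits this middle ground: if the discrete scales are taken in geometric progression, x_n = x₀·λ^n, then integrating a continuous density p(x) ∝ x^(-α) over each interval [x_n, x_(n+1)] gives masses proportional to x_n^(-(α-1)), so the discrete sequence inherits a power law from the continuous one; the continuous proportion, as the limit ratio of the Fibonacci series, provides precisely such a scale factor built from whole-number steps.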

The logarithmic average seems to be the most important component of these power laws, and we immediately associate the base of the natural logarithms, the number e, with the exponential growth in which a certain variable increases without restriction, something that in nature is only viable for very short periods of time. The golden mean, on the other hand, seems to arise in a context of critical equilibrium between at least two variables. But this would lead us rather to logistic or S-curves, which are a modified form of the normal distribution and also a shifted and rescaled hyperbolic tangent. Then again, exponential and power-law distributions look very different but can sometimes be directly connected, which is a subject in its own right.
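
The hyperbolic-tangent connection, at least, can be stated exactly: the logistic function σ(x) = 1/(1 + e^(-x)) satisfies σ(x) = (1 + tanh(x/2))/2, that is, it is a hyperbolic tangent shifted and rescaled to run between 0 and 1.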

As already noticed, we can also connect the constants e and Φ through the complex plane, as in the equality Φ_i = e^(±πi/3). Although entropy has always been measured with algebras of real numbers, G. Rotundo and M. Ausloos have shown that here too the use of complex values can be justified, allowing one to treat not only a “basic” free energy but also “corrections due to some underlying scale structure” [42]. The use of asymmetric correlation matrices could also be linked with the golden matrices generalized by Stakhov and applied to genetic code information by Sergey Petoukhov [43].
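
One reading of this equality, assuming that Φ_i denotes the "imaginary golden ratio", the root of the sign-flipped golden equation x² - x + 1 = 0 (so that x = 1 - 1/x instead of φ = 1 + 1/φ): its solutions are x = (1 ± i√3)/2 = cos(π/3) ± i·sin(π/3) = e^(±πi/3), complex numbers of unit modulus, which places the golden-type recurrence on the unit circle and thus connects it with the exponential.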

In the statistical-mechanical context, maximum entropy is only an extreme referred to the thermodynamic limit and to Poincaré’s immeasurably long recurrence times; but in many relevant cases in nature, and evidently in the thermomechanical context, it is necessary to consider a non-maximum equilibrium entropy, which may be defined by the coarse graining of the system. Pérez-Cárdenas et al. exhibit a non-maximum coarse-grained entropy linked to a power law, the entropy being lower the finer the grain of the system [44]. This graininess can be linked to the constants of proportionality in the equations of mechanics, such as Planck’s constant itself.

*

Probability is a predictive concept and statistics a descriptive, interpretative one, and both should be balanced if we do not want human beings to be increasingly governed by concepts they do not understand at all.

Just to give an example, the mathematical apparatus known as the renormalization group applied to particle and statistical physics is particularly relevant in deep learning, to the point that some experts claim both are the same thing. But it goes without saying that this group historically emerged to deal with the effects of the Lagrangian self-interaction in the electromagnetic field, a central theme of this article.

For prediction, the effects of self-interaction are mostly “pathological”, since they complicate calculations and often lead to infinities —although in fact the blame for this should be put on special relativity’s inability to work with extended particles, rather than on self-interaction. But for description and interpretation the problem is the opposite: it is about recovering the continuity of a natural feedback broken by layers and more layers of mathematical tricks. The conclusion could not be clearer: the search for predictions, and the “artificial intelligence” thus conceived, has grown exponentially at the expense of ignoring natural intelligence —the intrinsic capacity for self-compensation in nature.

If we want to somehow reverse the fact that man is increasingly governed by numbers that he does not understand —and even the experts are bound to trust in programs whose outputs are way beyond their understanding- it is necessary to work at least as hard in a regressive or retrodictive direction. If the gods destroy men by making them blind, they make them blind by means of predictions.

*

As Merrick points out, for the current theory of evolution, if life were to disappear from this planet or had to start all over again, the long-term results would be completely different, and if a rational species were to emerge it would be totally different from our own. That is what random evolution means. In a harmonically guided evolution conditioned by resonance and interference, as Merrick suggests, the results would be much the same, except for the uncertain incidence that the great cosmic cycles beyond our reach might have.

There is no such thing as pure chance; nothing is purely random. No matter how little organized an entity may be, be it a particle or an atom, it cannot fail to filter the “random” influences of the environment according to its own intrinsic structure. And the first sign of organization is the appearance of an axis of symmetry, which in particles is defined by axes of rotation.

The dominant theory of evolution, like cosmology, has emerged to fill the great gap between abstract, reversible and therefore timeless physical laws and the ordinary world, with its irreversible time, perceptible forms and sequences of events. Today’s whole cosmology is really based on an unnecessary and contradictory assumption, the principle of inertia. The biological theory of evolution is based on a false one, that life is governed by chance alone.

The present “synthetic theory” of evolution has only come into existence because of the separation of disciplines, and more specifically, due to the segregation of thermodynamics from fundamental physics despite the fact that there is nothing more fundamental than the Second Law. It is not by chance that thermodynamics emerged simultaneously with the theory of evolution: the first one begins with Mayer, who elaborated on work and physiology considerations, and the second one with Wallace and Darwin starting, according to the candid admission of the latter in the first pages of his main work, from Malthus’ assumptions of resources and competition, which in turn go back to Hobbes —one is a theory of work and the other of the global ecosystem understood as a capital market. The accumulated capital in this ecosystem is, of course, the biological inheritance.

Merrick’s harmonic evolution, due to the collective interference of waves-particles, is an updating of an idea as old as music; and it is also a timeless, purpose-free vision of the events of the world. But to reach the desired depth in time, it must be linked to the two other clearly teleological, yet spontaneous, domains of mechanics and thermodynamics, which we call thermomechanics for short.

It would be enough to unite these three elements for the present theory of evolution to start becoming irrelevant; not to mention that human and technological evolution is decidedly Lamarckian, beyond any speculation. Even DNA molecules are organized in the most obvious way along an axis. And as for information theory, one only has to remember that it came out of a peculiar interpretation of thermodynamics, and that it is impossible to do automatic computation without components with a turning axis. Whatever the degree of chance, the Pole rules and defines its sense and meaning.

However, in order to better understand the action of the Pole and the spontaneous reaction involved in mechanics it would be good to rediscover the meaning of polarity.

Questions of interpretation – and of principle

According to Proclus, when Euclid wrote the Elements his primary objective was to elaborate a complete geometrical theory of the Platonic solids. Indeed, it has been said on several occasions that behind the name “Euclid” there could be a collective with a strong Pythagorean component. The existence of only five regular solids is possibly the best argument to think that we live in a three-dimensional world.

Dodecahedron

Another great champion of the concept of harmony in science was Kepler, the astronomer who introduced ellipses into the history of physics —we only need to remember the title of his opus magnum, Harmonices Mundi. He discovered that the ratios of consecutive Fibonacci numbers converge to the golden ratio, combined Pythagoras’ theorem with that ratio in the triangle that bears his name, and theorized extensively about the Platonic solids, which he even placed between the orbits of the planets, and which a certain tradition has considered the arcana of the four elements plus the quintessence or Ether.
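
The triangle in question is the right triangle whose sides are in the progression 1, √φ, φ; it is a Pythagorean restatement of the golden identity, since 1² + (√φ)² = 1 + φ = φ².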

Icosahedron

The latter would be represented by the pentagon and the pentagram, and by the dodecahedron, the dual of the icosahedron, which in turn corresponds to the element water.

a/b = b/c = c/d = (1+√5)/2

The cascade of identical proportions at different scales immediately evokes the capacity to generate self-similar forms such as the logarithmic spiral, which displays its proportion indefinitely in a series of powers: φ, φ², φ³, φ⁴… If a recursion is that which allows something to be defined in its own terms, the pentagram shows us the simplest recursive process in the plane.
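
The recursion can be written in one line: since φ² = φ + 1, every power satisfies φ^n = φ^(n-1) + φ^(n-2), so each term of the geometric cascade is also the sum of the two preceding ones, the same property that makes the series of powers self-similar.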

As late as 1884 Felix Klein, the reconciler of analytic and synthetic geometry, gave a series of Lectures on the Icosahedron in which he considered it the central object of the main branches of mathematics: “Every single geometrical object is connected in one way or another to the properties of the regular icosahedron”. This is another of those “surprisingly well connected” objects, and it would be interesting to trace its links with the two representations of the Pole from which we started. The Rogers-Ramanujan continuous fraction, for example, plays a role for the icosahedron analogous to that of the exponential function for a regular polygon.

The orientation proposed by the influential Klein did not get as wide a reception as his famous Erlangen program; to deepen it properly required the dynamic guidance of nature, of applied physics and mathematics. Even so, current times are more propitious for recovering this program, in spite of the fact that physics itself has become more abstract by leaps and bounds. The best way to revive Klein’s second major program today might be through a very down-to-earth version of geometric algebra.

The golden section emerges under the unequivocal seal of the fivefold symmetries, which were believed to be exclusive to living beings until the discovery of quasi-crystals. That ordinary crystals, static structures by definition, exclude this type of symmetry seems to indicate long-range conditions of equilibrium. The optical properties of photonic quasi-crystals and other less periodic structures —often with a zero or double-zero index, in which the permittivity, the permeability, or both vanish— are being studied intensively in 1D, 2D and 3D. As expected, phases with spiral holonomy have been found. As in graphene, although the Berry curvature has to be zero, the phase shift can be zero or π [26].

It would be interesting to study all these new states from the point of view of thermomechanics, quantum thermodynamics and retarded potentials, the latter being able to offer a more convincing interpretation of, for example, the so-called relativistic effects of graphene, just as they allow singular points to be dealt with in a more logical way. On the border between periodic and random structures, the secrets of fivefold symmetry will not be unlocked without a careful, unbiased interpretation.

*

Except for confusing minds and things, the proscription of the Ether by special relativity is anything but relevant. First of all, because what makes special relativity work is the Lorentz-Poincaré transformation, conceived expressly for the Ether. Second, because although special relativity, which is the general theory, dispenses with the Ether, general relativity, which is the special theory for gravity, demands it, even if in a rarefied theoretical way.

Weber’s electrodynamics coincides closely with the Lorentz factor up to speeds of 0.85 c without having to dispense with the third principle of mechanics. But in any case, and even if we stick to Maxwell’s equations, which is the very crux of the matter, what we have is that there are not, and cannot be, electromagnetic waves moving in space with one component orthogonal to the other, but a statistical average of what occurs between space and matter. This is Mazilu’s conclusion, but we only need to acknowledge the complete failure of every attempt to specify the geometric description of the field and the waves [27].

The strange thing, once again, is that this was not seen clearly before. But it turns out that the idea of the Ether around 1900, at least the idea of Larmor, among others, was not only that of a medium between the particles of matter, but of something that also penetrated those particles, which in fact were seen as condensations of the Ether —as now we can see particles as condensations of the field. There was Ether outside in space and Ether inside matter, as in the electromagnetic waves, among which is light.

Ether is nothing but light itself, though it can also be things other than the light we see, indeed the whole electromagnetic spectrum. It is just that we cannot know anything without the help of light. Light is the mediator between a space that we cannot know directly, but which gives us the metric, and a matter that is in the same situation, but which is the subject of measurement.

And now that we know that we are in the midst of the Ether, like the bourgeois gentleman who discovered that he had always been speaking prose without knowing it, perhaps we can look at things more calmly. Only a consummately dualistic mentality that thinks in terms of “this or that” has been able to remain perplexed for so long on this question.

This idea of the Ether in medias res could not have been very clear at the beginning of the 20th century, otherwise physicists would not have opened their arms to relativity as they did. If it was welcomed, leaving other reasons aside, it was because it seemed to put an end once and for all to an endless series of doubts and contradictions —or so they thought at the time, until insoluble “paradoxes” began to emerge one after another. One might think that by then Weber’s law, which did not even need the existence of a medium because it did not consider waves either, had largely fallen into oblivion —though certainly not for researchers as exhaustive as Poincaré.

Of course, instead of Ether we can also use the word “field” now, as long as we do not understand it as the supplement of space that surrounds the particles, but as the fundamental entity from which they emerge.

In any case, special relativity proper practically does not come into contact with matter, and when it does through quantum electrodynamics, since Dirac, we are confronted with vacuum polarization and an even more crowded, strange and contradictory medium than in any of the previous avatars of the Ether.

Today, transformation optics and the anisotropies of metamaterials are used to “illustrate” black holes or to “design” —so it is said— space-times in different flavors. And yet only the macroscopic parameters of Maxwell’s old equations, such as permeability and permittivity, are being manipulated. And why should empty space have properties if it is really empty? But, again, what this is all about is statistical averages between space and matter. Only the prejudice created by relativity prevents us from seeing these things better.

It would be much more interesting to study the properties of the space-matter continuum accessible to our direct modulation than the exotic aspects of the hypothetical objects of a theory, general relativity, which is not even unified with classical electromagnetism.

And if the Ether of 1900 could prove inconvenient, what can we say about a theory that breaks with the continuity of the equations of classical mechanics, and that, to alleviate this, introduces an infinity of reference frames? It is certainly not an economical solution, and it looks even worse if we consider that with the principle of dynamic equilibrium we can dispense with inertia and with the distinction of reference frames —a move that is routinely used to rule out other theories and clear the field.

On top of this, Maxwell’s equations are only valid for extended portions of the field; special relativity is only valid for point-events, and in the field equations of general relativity point particles again become meaningless. Transformation optics takes advantage of this threefold incompatibility with a bypass that leaves special relativity in limbo, in order to link Maxwell with another incompatible theory. And yet, although this is not said, it is because of special relativity that quantum mechanics has been unable to work with extended particles. In contrast, starting from Weber’s mechanics there is no problem in working with both extended and point particles.

For those who still believe that the fundamental framework of classical mechanics must have four dimensions, it can be recalled that well into the 21st century, consistent gauge theories of gravity have been developed that satisfy the criterion formulated by Poincaré in 1902, namely to elaborate a relativistic theory in ordinary Euclidean space by modifying the laws of optics, instead of curving the space with respect to the geodesic lines described by light. Light is the mediator between space and matter; and if light can be deformed, which is obvious, there is no need to deform anything else.

Maxwell’s equations are not even a general case, but a particular case both of Weber’s and of Euler’s equations of fluid mechanics. Within fluid mechanics, Maxwell sought the case of a static or motionless medium; if Maxwell’s equations are not fundamental, the principle of relativity cannot be fundamental either [28]. The reciprocity of special relativity is purely abstract and kinematic, not mechanical, since it is not bound to centers of mass and does not allow a distinction between internal forces, which comply with the third law, and external forces, which need not comply with it. The principle of relativity, which asserts the impossibility of finding a privileged frame of reference, is valid if and only if there are no forces external to those considered within the system —but on the other hand, by neglecting the third principle, the internal forces are not mechanically defined either.

The so-called Poincaré stress that the French physicist introduced for the Lorentz force to comply with the third principle plays the same role in the relativistic context as Noskov’s longitudinal vibrations for the Weber force. The fact that such stress was later considered irrelevant for special relativity shows conclusively its total divorce from mechanics.

Maxwell’s equations, as Mazilu says, are a reaction to the partially or totally uncontrollable aspects of the Ether. In physical theories the quantities that matter are not those we can measure, but those we can control; but in this way we dispense with information that could be integrated into a broader theoretical framework.

Questions of interpretation inevitably bring us back to questions of principle; without changing the principles we are condemned to work for them.

The principle of relativity is contingent and therefore unnecessarily restrictive, depending also on arbitrary synchronization procedures. The equivalence principle of general relativity also does not put an end to the problems of reference frames, and, in combination with the principle of relativity, rather multiplies them.

The principle of dynamic equilibrium of relational mechanics radically simplifies this situation without creating unnecessary restrictions. Leaving generality aside, a principle should not be restrictive, but, first of all, necessary. On the other hand, the inability of the equivalence principle to get rid of the principle of inertia automatically subordinates it to the latter.

And it is no coincidence. If we know the same about inertia as we know about the Ether, it is rather because inertia itself inadvertently overlaps with the idea of the Ether and supplants it. Then, the Ether could only emerge without mystifications from a physics that completely dispenses with the idea of inertia, something that is perfectly feasible and compatible with all our experience.

No doubt each theory has its own virtues, but Maxwell’s theory and relativity have already been more than sufficiently extolled. Here we prefer to look at the theory that has historical precedence over both, since the presentation made today could not be more biased.

Returning to the past, we see that Lorentz’s non-dragging medium, Fresnel and Fizeau’s partial dragging medium, and Stokes’ total dragging medium are not contradictory and refer to clearly different cases. There are experiments, such as those of Miller, Hoek, Trouton and Noble, and many others, which can be carried out again under much better conditions and provide invaluable information from many points of view, provided that our theoretical framework allows us to contemplate them, which is not the case now [29]. In addition, these experiments are thousands of times less expensive, simpler and more informative than the current “confirmations” of special and general relativity.

There is also an inevitable complementarity between the constitutive aspects of electromagnetism in modern metamaterials, with their mixture of controllable and uncontrollable parameters in matter, and the measurement of uncontrollable parameters in a free environment out in space. But this complementarity cannot be appreciated without principles, and a framework that can make them compatible, to begin with. On the other hand, it goes without saying that between transformation optics and general relativity there is more a flimsy parallelism than real contact.

Another way of talking about a geometric phase is that it is a transformation or holonomy around a singularity. This singularity can be a vortex, which provides a natural connection with the entropy or attenuation of certain magnitudes, which obviously cannot reach infinite values.

An interesting case is the so-called transmutation of optical vortices, that is, the qualitative change of their most intrinsic feature, vorticity; this has recently been achieved even in free space [30], also involving pentagonal symmetries. Vortices occur in the four states of matter —solid, liquid, gas and plasma— which are our version of the ancient four elements. Taking into account that their characteristic behavior can be described in terms of constitutive stress/strain relationships, a quantitative description of the transmutation of the states of matter is also feasible, something quite different from the nuclear transmutation of the elements, notwithstanding that nuclei too can be described more or less classically with vortices such as skyrmions.

The so-called geometric phase, a phenomenon so universal that it even manifests itself as vorticity on the surface of water, becomes, when applied to classical electromagnetism, a sort of “Maxwell’s fifth equation”, since it brings into play and encompasses the four known ones. The very name “geometric phase” seems clearly a euphemism, since it is not geometers who usually deal with it, but physicists, and applied physicists for that matter. I also prefer to call it holonomy rather than anholonomy, since the latter refers to the fact that it cannot be integrated within the frame of a theory, while holonomy refers to a global aspect that can be recognized even by the naked eye.

Berry himself admits that the geometrical phase is a way of including the (uncontrollable) environmental factors that are not within the system defined by the theory [31]. In this sense, to get to Maxwell’s “fifth equation” we don’t need to add terms, but only refer to the “predynamics” of the less restricted or more general equations from which they originate —Weber’s and Euler’s in this instance. And the same applies to relativity.

No doubt the phenomenology of light is so vast that it never ceases to surprise us, but all this would have an incomparably greater transcendence if a parallel effort were dedicated to the uncontrollable but complementary aspects that are now outlawed or masked by the dominant theories. In fact, there is no part of physics that is not being viewed today through an unnecessarily distorted lens.

*

There is also room in the theory of black holes for the golden ratio to emerge, just at the critical turning point when the temperature goes from rising to falling: J²/M² = (1+√5)/2, with M and J being the mass and angular momentum when the constants c and G equal 1. The meaning and relevance of this are not clear, but it echoes this constant’s habit of appearing at critical points [32].

In Weber-type forces there is no room for theoretical objects such as black holes since the force decreases with the speed, and it would be interesting to see if transformation optics is able to find a laboratory replica for the evolution of these parameters. In relational mechanics the most that can be expected are different types of phase singularity, such as the optical vortices mentioned.

If there is any interest for us in the emergence of φ in black holes, even if they are purely theoretical calculations, it is because of the direct association with angular momentum, entropy and thermodynamics. It shows us at least that the continuous proportion can also emerge in accordance with the principle of maximum entropy that we consider fundamental for understanding nature, quantum mechanics, or the thermomechanical formulation of classical mechanics. If it can emerge here, it can also do so in other types of singularities, such as the phase of vortices, in the optical model that de Broglie built for the light ray, or in holograms.

For perhaps the greatest interest of the theoretical study of black holes has been to introduce this principle of maximum entropy into fundamental physics, albeit as a final term under extreme conditions, when surely it has a presence at all times, past and present. In modern theoretical physics this could be expressed through the so-called holographic principle, which makes sense since that principle makes extensive use of the light phase, and, after all, we have already seen that there is no knowledge of our physical world that does not pass through light. However, there are all sorts of doubts about how to apply such a principle to ordinary low-energy physics.

Black holes are extreme theoretical objects of maximum energy, but they have been reached through the “ordinary” physics of gravity, governed by action principles of minimum energy variation. Technically, this does not involve any contradiction, but makes us wonder about the very nature of the action principles, something that still worried a conservative physicist like Planck.

The action principle of Weber’s law, or Noskov’s extension of it, does not allow the existence of these extreme objects because, applying reciprocity in a strictly mechanical way to the centres of mass, force and speed become balanced. Something similar happens in Mathis’ breakdown of the Lagrangian into two forces. In Pinheiro’s thermomechanics, in which there is a balance between minimum energy variation and maximum entropy, this does not seem to be possible either, provided there is free energy available.

The ordinary Lagrangian is the one most devoid of causality, and Mathis’ theory, which wants to dispense with energy and action principles in order to remain only with vectors and forces, would obviously be the “fullest” model; whether it is viable is a different matter. The other two are in between. We know that with action principles univocal causes are impossible. One can choose the path one prefers and see how far it goes, but my position is that although a univocal determination of causes is not possible, we can have a statistical but certain sense of causality, related to the Second Law of Thermodynamics. Noskov’s and Pinheiro’s action principles seem compatible.

It is the third principle that defines what a closed system is, but the ironic twist is that this principle cannot be applied without the help of an environment with free energy contributing to close the balance. This happens even in Mathis’ model, where free charge is recycled by matter. Thus, any reversible mechanics emerges as an island from an irreversible background, and it is this superposition of levels that gives us our intuition of causality.

The propagation of light is based on the homogeneity of space, but the masses on which gravity acts involve a non-homogeneous distribution. If we assume a primitive homogeneous medium, no type of force, including gravity, can alter that homogeneity except in a transitory way. The self-correction of forces, which is already implicit in Newton and the original Lagrangian, leads in that direction and seems the only conceivable way, if there is any, to cancel out the infinities arising in calculations. The entropic and thermodynamic treatment of gravity would also necessarily have to follow that direction.

The emergence of the continuous proportion and its series in plant growth, in pinecones or sunflowers, makes us think of vortices made up of discrete units, while at the same time it brings us back to the considerations about the optimal, not maximum, use of resources, matter and data collection that nature seems to display.

Yasuichi Horibe demonstrated that the Fibonacci binary trees were subject to the principle of maximum information entropy production [33], something that might be extended to thermodynamic entropy and, perhaps, to other branches such as optics, holography, or quantum thermodynamics. The question is whether these series can simply emerge from the principle of maximum entropy or are at some variable optimum point between minimum energy and maximum entropy, as suggested by Pinheiro’s equations.
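
A small numerical illustration of the golden split that underlies such results, not Horibe's argument itself but a sketch under the usual definition of Fibonacci trees, in which the tree of order n has subtrees of orders n-1 and n-2: the fractions of leaves falling on the two sides tend to 1/φ and 1/φ², a pair that already sums to 1 and therefore defines a branching probability distribution.

```python
# Illustrative sketch (not Horibe's proof): in a Fibonacci tree of order n,
# whose two subtrees have orders n-1 and n-2, the leaves split between the
# sides in proportions that approach 1/phi and 1/phi^2.

def leaves(n):
    """Leaf count of the Fibonacci tree of order n (a Fibonacci number)."""
    if n <= 1:
        return 1
    return leaves(n - 1) + leaves(n - 2)

phi = (1 + 5 ** 0.5) / 2

for n in (5, 10, 15, 20):
    left, right = leaves(n - 1), leaves(n - 2)
    total = left + right
    print(f"order {n:2d}: left {left/total:.6f} (1/phi = {1/phi:.6f}), "
          f"right {right/total:.6f} (1/phi^2 = {1/phi**2:.6f})")
```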

Since Pinheiro begins by testing his mechanics with some very elementary models, such as a sphere rolling on a concave surface, or the period of oscillation of an elementary pendulum, it would be of great interest to determine the simplest problem, within this mechanics, in which the continuous proportion appears with a critical or relevant role. With this we could take up again the Ariadne’s thread of this ratio in action variables and optimization problems.

Planck was still concerned that the principles of action seemed to imply a purpose. And the same was true of the Second Law of Thermodynamics for Clausius, although he was not at all bothered by it. It is plain to see that both types of processes, apparently so far apart, are effectively teleological, and this is no coincidence since they are not even separate, as Pinheiro’s thermomechanics shows. It seems that the simultaneous inclusion of two undeniable propensities of nature is more natural than their separate treatment.

In the West there has been a strong rejection of any teleological connotation because teleology has always been confused either with theology and the providential invisible hand or with the intentional hand of man. Tertium non datur. However, it is clear that here, for both mechanics and thermodynamics, we are talking about a tendency as unquestionable as it is spontaneous. Understanding this third position, which already existed before the false dilemma of mechanism, leads us to radically change our understanding of Nature.

Two kinds of reciprocity

The Taijitu, the emblem of the action of the Pole with respect to the world, and of the reciprocal action with respect to the Pole, inevitably reminds us of the most universal figure in physics; we are naturally referring to the ellipse —or rather, it should be said, to the idea of the generation of an ellipse with its two foci, since here there is no eccentricity. The ellipse appears in the orbits of the planets no less than in the atomic orbits of the electrons, and in the study of the refractive properties of light it gives rise to a whole field of analysis, ellipsometry. Kepler’s old problem has scale invariance, and plays a determining role in all our knowledge of physics from the Planck constant to the furthest galaxies.

In physics and mechanics, the principle of reciprocity par excellence is Newton’s third principle of action and reaction, which is at the base of all our ideas about energy conservation and allows us to “interrogate” forces when we are obliged to assume the constancy or proportionality of other quantities. The third principle does not speak of two different forces but of two different sides of the same force.

Now, the story of the third principle is curious, because we are forced to think that Newton established it as the keystone of his system to tie up the loose ends of celestial mechanics —particularly in Kepler’s problem— rather than for down-to-earth mechanics based on direct contact between bodies. The third principle allows us to define a closed system, and closed systems have been the given of all fundamental physics since then —yet it is precisely in celestial orbits, such as that of the Earth around the Sun, that this principle can be least verified, since the central body is not at the center but at one of the foci. The force designated by the vectors would have to act on the void, where there is no matter.

From the very first moment it was argued on the Continent that Newton’s theory was more an exercise in geometry than in physics, although the truth is that, if physics and vectors were good for something, the first thing that failed was the geometry. That is, if we assume that forces act from and on centers of mass, instead of on mere mathematical points. But, despite what intuition tells us —that an asymmetric ellipse can only result from a variable force, or from a simultaneous generation from the two foci—, the desire to expand the domain of calculus prevailed over everything else.

In fact the issue has remained so ambiguous that attempts have always been made to rationalize it with different arguments: the system’s barycenter, the variation in orbital velocity, or the initial conditions of the system. But none of them separately, nor the combination of the three, solves the issue satisfactorily.

Since no one wants to think that the vectors are subjected to quantitative easing, lengthening and shortening at convenience, or that the planet accelerates and brakes on its own like a self-propelled rocket in order to keep the orbit closed, physicists finally came to accept the combination in one quantity of the variable orbital speed and the innate motion. But if the centripetal force counteracts the orbital velocity, and this orbital velocity is variable even though the innate motion is invariable, then the orbital velocity is in fact already a result of the interaction between the centripetal and the innate force, and the centripetal force is also acting on itself. Therefore, the other options being ruled out, what we have is a case of feedback or self-interaction of the whole system.

So it must be said that the claim that Newton’s theory explains the shape of the ellipses is at best a pedagogical resource. However, this swift pedagogy has made us forget that our so-called laws do not determine or “predict” the phenomena we observe, but at most try to fit them. Understanding the difference would help us to find our place in the overall picture.

The reciprocity of Newton’s third principle is simply a change of sign: the centrifugal force must be matched by an opposing force of equal magnitude. But the most elementary reciprocity of physics and calculus is that of the inverse product, as already expressed by the formula of velocity, (v = d/t), which is the distance divided by time. In this very basic sense, those who have pointed out that velocity is the primary fact and phenomenon of physics, from which time and space are derived, are absolutely right.

The first attempt to derive the laws of dynamics from the primary fact of velocity is due to Gauss, around 1835, when he proposed a law of electric force based not only on distance but also on relative velocities. The argument was that laws such as Newton’s or Coulomb’s were laws of statics rather than of dynamics. His disciple Weber refined the formula between 1846 and 1848 by including relative accelerations and a definition of potential —a retarded potential, in fact.

Weber’s electrodynamic force is the first case of a complete dynamic formula in which all quantities are strictly homogeneous or proportional [8]. Such formulas seemed to be exclusive to Archimedes’ statics, or to Hooke’s law of elasticity in its original form. In fact, although it is a specific formula for electric charges and not a field equation, it allows Maxwell’s equations and the electromagnetic fields to be derived as a particular case, simply by integrating over the volume.
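
In modern notation, and only as a reminder of the kind of expression under discussion, Weber's force between two charges q₁ and q₂ at separation r is usually written F = (q₁q₂/4πε₀r²)·(1 - (dr/dt)²/2c² + r·(d²r/dt²)/c²), where dr/dt and d²r/dt² are the first and second time derivatives of the relative distance; every term depends only on the relative separation and its variations, which is the sense in which all the quantities involved are homogeneous and relational.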

The logic of Weber’s law could be applied equally to gravity, and in fact Gerber used it to calculate the precession of Mercury’s orbit in 1898, seventeen years before the calculations of General Relativity. As is well-known, General Relativity aspired to include the so-called “Mach principle”, although in the end it did not succeed; but Weber’s law was entirely compatible with that principle in addition to explicitly using homogeneous quantities, well before Mach wrote about these issues.

It has been said that Gerber’s argument and equation were “merely empirical”, but in any other era not having to create ad hoc postulates would have been seen as the greatest virtue. In any case, if the new proportional law was used to calculate a tiny secular divergence, and not the generic ellipse, it was for the simple reason that in a single orbital cycle there was nothing to calculate for either the old or the new theory.

Weber’s purely relational formula cannot “explain” the ellipse either, since force and potential are simply derived from motion —but at least there is nothing unphysical in the situation, and the fulfillment of the third principle is guaranteed while permitting a deeper meaning.

Ironically, as this new law changed the prevailing idea of central forces, understood as if with a string attached, Helmholtz and Maxwell blamed Weber’s law for not complying with energy conservation, although finally, in 1871, Weber showed that it did so on the condition that the motion was cyclical —which in this respect was already the basic requirement for Newtonian or Lagrangian mechanics too. Conservation is a global property, not a local one, but the same was true for the orbits described in the Principia, no less than for those of Lagrange. Strictly speaking there is no local conservation of forces that can make physical sense. Newton himself used the analogy of a slingshot, following Descartes’ example, when he spoke of centrifugal motion, but nowhere in his definitions is there any talk of the central forces having to be understood as if connected by a string. However, posterity took the simile at face value.

Why claim that there is in any case feedback, self-interaction? Because all gauge fields, characterized by the invariance of the Lagrangian under transformations, are equivalent to a non-trivial feedback between force and potential —the eternal “information problem”, namely how does the Moon know where the Sun is and how does it “know” its mass to behave as it does.

Indeed, if the Lagrangian of a system —the difference between kinetic and potential energy— has a certain value and is not equal to zero, this is equivalent to saying that action-reaction is never immediately fulfilled. However, we usually assume that Newton’s third principle is immediate and takes place automatically and simultaneously, without the mediation of any time sequence, and the same simultaneity is assumed in General Relativity. The presence of a retarded potential indicates at least the existence of a sequence or mechanism, even if we cannot say anything else about it.

This shows us that additive and multiplicative reciprocity are notoriously different; and the one shown by the continuous proportion in the diagram of the Pole includes the second kind. The first is purely external and the second is internal to the order considered.

All the misunderstandings about what mechanics is come from here. And the essential difference between a mechanical system in the trivial sense and an ordered or self-organized system lies precisely at this point.

At the time it was believed that Hertz’s experiments confirmed Maxwell’s equations and disproved Weber’s, but that is another misunderstanding, because if Weber’s law —which was the first to introduce the factor of the speed of light— did not predict electromagnetic waves, it did not exclude them either. It simply ignored them. On the other hand, some perceptive observers have noted that the only thing Hertz demonstrated was the reality of action at a distance, not of waves, but that is another story.

As a counterpoint, it is worth remembering another fact that shows, among other things, that Weber had not fallen behind his time. Between the 1850s and 1870s he developed a stable model of the atom with elliptical orbits —many decades before Bohr proposed his model of the circular atom, without the need to postulate special forces for the nucleus.

Weber’s relational dynamics shows another aspect that may seem exotic in the light of present theories: according to its equations, when two positive charges approach within a critical distance, they produce a net attractive force rather than a repulsive one. But is not the very idea of an elementary charge exotic in the first place, or should we just say a mere convention? In any case, this fits very well with the Taijitu diagram, in which a polarized force can potentially become its opposite. Without this spontaneous reversal, we could hardly speak of truly living forces and potentials.

Physics and the continuous proportion

We already see that there are purely mathematical reasons for the continuous proportion to appear in the designs of nature independently of causality, be it physical, chemical or biological: in fact the convenience of logarithmic growth is independent even of the form itself, as is the elementary fact of the discrete and asymmetric division of cells.

In this light, it would be an emergent property, a plane merely parallel to physical causation and becoming. On the other hand, the idea of parallel planes with only a circumstantial connection to physical reality looks odd, and in any case very distant from what the diagram of the Pole expresses so well —that no form, nothing apparent, is free from dynamics.

The fact is that the connection between physics and the continuous proportion is very dim, to say the least. However we have important occurrences of this ratio even in the Solar System, where it is almost impossible to avoid celestial mechanics. A better understanding of the presence of the continuous proportion in nature should not ignore the framework defined by fundamental physical theories, nor what these can leave out.

We have three possible approaches with increasing degrees of risk and depth:

The continuous proportion in nature can be studied independently of the underlying physics, as a purely mathematical question; this would be the most prudent, but somewhat limited, position. The aforementioned A. Stakhov has developed an algorithmic theory of measurement based on this ratio that can in turn be used to analyze other metrological theories of cycles, continuous fractions and fractals, such as the so-called Global Scaling.

This proportion can also be studied from viewpoints compatible with known mainstream physics; for example, as Richard Merrick has done, in a neo-Pythagorean rereading of the collective harmonic aspects of wave mechanics, such as resonances, in which phi would be a critical damping factor [7]. These ideas are fully accessible to experiment, either in acoustics or in optics, so that they can be verified or falsified.

Merrick’s idea of harmonic interference is within everyone’s reach and understanding and it is not without depth. It can be naturally complemented with the holographic concept proposed by David Bohm and his distinction between the implicate and explicate order. Although Bohm’s interpretation is not standard, it is compatible with experimental data. The harmonic interference theory can also be combined with mathematical theories of cycles and scales such as those mentioned.

Or, finally, one can consider other, more classical theories that differ from the mainstream but may provide deeper insights into the subject. Within this category there are various degrees of disagreement with the standard theories: from merely a wider understanding of thermodynamics to in-depth revisions of classical mechanics, quantum mechanics and calculus. We could say that this third option is not so much speculative as divergent in spirit and interpretation.

Here we will focus more on the third level, which may also seem the most problematic. One could ask what need there is to question the best-established physical theories in order to find a better ground for the occurrence of a constant that may not require them. Furthermore, the first two levels already offer plenty of room for speculation. But this would be a very superficial way of looking at it.

We cannot delve into the presence of the continuous proportion in a symbol of perfect reciprocity while ignoring the question of whether our present theories are the best exponent of continuity, homogeneity or reciprocity —and in fact they are far from it.

POLE OF INSPIRATION – The spark and the thread

Those who like simple problems can try to demonstrate this relationship before moving on. It’s insultingly easy:

φ = 1/φ + 1 = φ⁻¹ + 1 = 1/(φ - 1)

We owe this fortunate discovery to John Arioni. The elementary demonstration, along with other unexpected relationships, is on the site Cut the Knot [1]. The number φ is, naturally, the golden ratio (1+√5)/2, in decimal figures 1.6180339887…, and φ⁻¹ is the reciprocal, 0.6180339887… . And since its infinite decimal places can be calculated by means of the simplest continuous fraction, here we will also call it the continuous ratio or continuous proportion, because of its unique role as mediator between discrete and continuous aspects of nature and mathematics.
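
The fraction in question is the one built entirely out of ones, φ = 1 + 1/(1 + 1/(1 + 1/(1 + …))), whose successive truncations are the ratios of consecutive Fibonacci numbers: 1/1, 2/1, 3/2, 5/3, 8/5…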

This looks like the typical casual association of recreational math pages. One can get φ in many different ways with circles, but to my knowledge this is by far the most elementary of them all, with the radius as the unit of reference. In other words, this relation seems too simple and direct not to contain something important. And yet it has only recently been discovered, almost by chance.

John Arioni

Since Euclid and probably much earlier, the entire history of findings on this proportion has been derived from the division of a segment “in the extreme and mean ratio”, and has developed with the construction of squares and rectangles. The most immediate cases involving the circle come from the construction of the pentagon and the pentagram, no doubt known to the Pythagoreans; but one does not need to know anything about mathematics to realize that the relationship contained in this symbol is of a much more fundamental order —just as, from the quantitative point of view, the 2 is closer to 1 than 5, or from the qualitative one, the dyad is closer to the monad than the pentad.

If the circle and its central point are the most general and comprehensive symbol of the monad or unit, we have here the most immediate and revealing proportion of reciprocity, or dynamic symmetry, presented after the division into two parts. The Taijitu has a double function, as a symbol of the supreme Pole, beyond duality, and as a representation of the first great polarity or duality. It is, as it were, halfway between both, and both are linked by a ternary relationship —precisely the continuous proportion.

A relation is the perception of a dual connection, while a proportion implies a third-order relationship, a “perception of perception”. Since at least the time of Kepler’s triangle, we have known that the golden mean articulates and conjugates in itself the three most fundamental means of mathematics: the arithmetic mean, the geometric mean, and the so-called harmonic mean between both.

We could ask ourselves what would have happened if Pythagoras had known about this correlation, which would certainly have exalted Kepler’s imagination as well. It will be said that, like any other counterfactual, there is no point to it. But the question is not so much about the past that might have been as about the possible future. Pythagoras could hardly have been as surprised as we are, since he knew nothing about the decimal values of φ or π. Today we know that they are two ratios running to an infinite number of decimal places, and yet they are linked exactly by the most elementary triangular relationship.

Mathematical truth is beyond time, but not its revelation and construction. This allows us to see certain things with the insight of a Geohistory, as it were, in four dimensions. There has been speculation about what would have happened if the Greeks had known and made use of the zero, and whether they might have developed modern calculus. This is very doubtful, since they would still have needed to make a series of great leaps far from their conception of the world, such as the numbering system, the zero and its positional use, the idea of the derivative, and so on. Double spirals were a common motif in archaic Greece, as were arithmetical speculations among the Pythagoreans, very similar in nature to those developed by the Chinese over time; but for whatever reason the Greeks did not intertwine the two spirals into one, and, in China itself, a diagram like the one we know today did not come into existence until the end of the Ming dynasty, and only after a lengthy evolution.

Which is just another example of how hard it is to see the obvious. It is not so much the thing itself as the context in which it emerges and in which it fits. Depending on how one looks at it, this can be encouraging or discouraging. In knowledge there is always a wide margin for simplification, but as in so many other things, that margin depends to a large extent on knowing how to make it happen.

The Taijitu, the symbol of the supreme Pole, is a circle, a wave and a vortex all in one. Of course, the vortex is reduced to its minimum expression in the form of a double spiral. Characteristically, the Greeks separated their double spirals, and eventually turned them into squares, in the motifs known today as grecas. It is just another expression of their taste for statics, a bent that set the general framework for the reception of the golden mean in mathematics and art, and which has come down to us through the Renaissance.

The series of numbers that approximate the continuous proportion ever more closely, known to us as the Fibonacci numbers, had already appeared long before in the numerical triangles consecrated in India to Mount Meru, "the mountain that surrounds the world", which is just another designation of the Pole. As is well known, a huge number of combinatorial properties, scales and sequences of musical notes can be derived from this figure, called Pascal's triangle in the West.
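As an illustration of how the Fibonacci numbers sit inside the Meru triangle, here is a minimal sketch (not from the original text): summing the entries along the shallow diagonals of the triangle reproduces the series.

```python
from math import comb

def meru_diagonal_sum(n):
    """Sum of the n-th shallow diagonal of the Meru / Pascal triangle."""
    return sum(comb(n - k, k) for k in range(n // 2 + 1))

# The shallow diagonals add up to the Fibonacci numbers:
print([meru_diagonal_sum(n) for n in range(10)])
# -> [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
```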

The polar triangle, known in other cultures as Khayyam's triangle or Yang Hui's triangle, is one of those "extraordinarily well connected" mathematical objects: from it one can derive the binomial expansion, the binomial and normal statistical distributions, the sin(x)^(n+1)/x transform of harmonic analysis, the matrix exponential and the exponential function, or the values of the two great gears of calculus, the constants π and e. It is almost incredible that the elementary connection with Euler's number was not discovered until 2012, by Harlan J. Brothers. Instead of adding up the entries in each row, one only needs to take their product and then the ratio of the ratios of consecutive products; the difference between sums and products is a motif that will emerge several times throughout this article.
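A minimal numerical sketch of Brothers' observation, assuming the usual statement of his result: if p(n) is the product of the entries in row n, the ratio of consecutive ratios of these products equals (1 + 1/n)^n and therefore converges to e.

```python
from math import comb, e

def row_product(n):
    """Product of the entries in row n of the polar (Pascal) triangle."""
    p = 1
    for k in range(n + 1):
        p *= comb(n, k)
    return p

# Brothers' "ratio of ratios": p(n-1) * p(n+1) / p(n)^2 = (1 + 1/n)^n -> e
for n in (5, 10, 50, 200):
    ratio_of_ratios = row_product(n - 1) * row_product(n + 1) / row_product(n) ** 2
    print(n, ratio_of_ratios)
print("e =", e)
```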

The polar triangle looks like an arithmetic and “static” representation, while the Taijitu is like a geometric snapshot of something purely dynamic. However, the rich implications for music of this triangle, partially explored by the work of Ervin Wilson, largely circumvent the separations created by adjectives such as “static” and “dynamic”. In any case, if the staircase of figures deployed in Mount Meru is an infinite progression, when we finally see the lines hidden in the circular diagram of the Pole we immediately know that it is something irreducible —the first offers us its arithmetical deployment and the second its geometric retraction.

The oldest known mention of the triangle, albeit a cryptic one, is found in the Chandaḥśāstra of Pingala, where Mount Meru appears as the formal archetype for the metric variants of versification. It is also fair to note that the first Chinese author to deal with the polar triangle was not Yang Hui but Jia Xian (ca. 1010-1070), a strict contemporary of the philosopher and cosmologist Zhou Dunyi (1017-1073), the first author to publicize the Taijitu diagram.

Nowadays very few people are aware that both figures are representations of the Pole. My conjecture is that all the mathematical relationships that can be derived from the polar triangle can also be found in the Taijitu, or at least generated from it, although under a very different aspect and with a certain twist that possibly involves φ. The two would be a dual expression of the same unity. Mathematicians will see whether there is any point to this.

Between counting and measuring, between arithmetic and geometry, lie the basic areas of algebra and calculus; but there is overwhelming evidence that the latter branches have developed in one particular direction more than in others: more in decomposition than in composition, more in addition than in multiplication, more in analysis than in synthesis. So the study of the relations between these two expressions of the Pole could be full of interesting surprises and of basic but not trivial results, and it points to a different orientation for mathematics.

It can be seen that the arithmetic triangle has close links with fundamental aspects of calculus and with the mathematical constant e, while the Taijitu and the constant φ lack relevant connections in this respect; hence the totally marginal character of the continuous proportion in modern science. It has been said that φ, unlike Euler's number with its intimate connection to change, is a static relationship. However, its appearance in the extremely dynamic yin-yang symbol already warns us of a general change of context.

For centuries calculus has been dissolving the relationship between geometry and change in favor of algebra and arithmetic, of numbers that are not so pure. Now we can turn this hourglass upside down and observe what happens in the upper bulb, the lower bulb and the neck.

*

The appearance of the golden mean between the yin and yang in a purely curvilinear fashion is anything but static; on the contrary, it could hardly be more dynamic and functional, and indeed the Taijitu is the most complete expression of activity and dynamism with the minimum number of elements. The diagram also has an intrinsic organic and biological connotation, inevitably evoking cell division, which is in fact an asymmetrical process and which, at least in plant growth, often follows a sequence governed by this ratio. In other words, the context in which the continuous proportion emerges here is the true antithesis of its Greek reception, which has lasted until today, and this can have far-reaching implications for our perception of this proportion.

Oleg Bodnar has developed an elegant mathematical model of plant phyllotaxis based on hyperbolic golden functions in three dimensions, with coefficients of reciprocal expansion and contraction, which can be seen in the great panoramic book that Alexey Stakhov devotes to the Mathematics of Harmony [2]. It is an example of dynamic symmetry that can be perfectly combined with the great diagram of polarity, regardless of the nature of the underlying physical forces.
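By way of illustration, and under the assumption that the functions involved are Stakhov's symmetric hyperbolic Fibonacci sine and cosine, which form the continuous backbone of Bodnar's model, here is a minimal sketch: at integer arguments these smooth "golden" functions return the Fibonacci numbers themselves.

```python
from math import sqrt

PHI = (1 + sqrt(5)) / 2

def sFs(x):
    """Symmetric hyperbolic Fibonacci sine: (phi^x - phi^-x) / sqrt(5)."""
    return (PHI ** x - PHI ** (-x)) / sqrt(5)

def cFs(x):
    """Symmetric hyperbolic Fibonacci cosine: (phi^x + phi^-x) / sqrt(5)."""
    return (PHI ** x + PHI ** (-x)) / sqrt(5)

# At even integers the sine, and at odd integers the cosine,
# reproduce the Fibonacci sequence 1, 1, 2, 3, 5, 8, ...
for n in range(1, 11):
    value = cFs(n) if n % 2 else sFs(n)
    print(n, round(value, 9))
```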

The presence in living beings of spiral patterns based on the continuous proportion and its numerical series does not seem mysterious. Whether in a nautilus or in vegetable tendrils, the logarithmic spiral, which is the general case, allows indefinite growth with no change of shape. Spirals and helices seem an inevitable result of the dynamics of growth, by the constant accretion of material on what is already there. At any rate, we should ask why, among all the possible proportions of the logarithmic spiral, those close to this constant arise so often.

And the answer would be that the discrete approaches to the continuous proportion also have optimal properties from several points of view, for cell growth ultimately depends on the discrete process of cell division and, at higher levels of organization, on other discrete elements such as tendrils or leaves. Since the rational convergence to the continuous proportion is the slowest of all, and plants tend to fill as much room as possible, this ratio allows them to emit the greatest number of leaves in the space available.
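A minimal sketch of the standard golden-angle (Vogel) model of a flower head, offered here only as an illustration of this optimal-filling argument; the scaling constant c and the number of elements are arbitrary choices.

```python
from math import pi, sqrt, cos, sin

PHI = (1 + sqrt(5)) / 2
GOLDEN_ANGLE = 2 * pi * (1 - 1 / PHI)   # 2*pi / phi^2, about 137.5 degrees

def florets(count, c=1.0):
    """Positions of the first `count` florets: each new element is rotated
    by the golden angle and pushed outward as the square root of its index,
    so that the available area is filled as evenly as possible."""
    points = []
    for k in range(count):
        r, theta = c * sqrt(k), k * GOLDEN_ANGLE
        points.append((r * cos(theta), r * sin(theta)))
    return points

print(round(GOLDEN_ANGLE * 180 / pi, 2), "degrees")
print(florets(5))
```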

This explanation seems sufficient from a descriptive point of view, and it makes it unnecessary to invoke natural selection or deeper physical mechanisms. However, in addition to the basic discrete-continuous relationship, it implicitly contains a powerful link between forms generated around an axis, such as pine cones, and the so-called "principle of maximum entropy production" of thermodynamics, which we will meet again later.

Needless to say, we do not think this proportion holds "the secret" of any universal canon of beauty, since such a canon surely does not even exist. However, its recurrent presence in the patterns of nature shows us different aspects of a spontaneous principle of organization, or self-organization, behind what we superficially call "design". On the other hand, the appearance of this mathematical constant, owing to its irreducible properties, in a great number of problems of optimization, of maxima and minima, and of parameters with critical points allows us to connect it both naturally and functionally with human design and its search for the most efficient and elegant configurations.

The emergence of the continuous proportion in the dynamic symbol of the Pole, of the very principle, augurs a substantive change both in the contemplation of Nature and in the artificial constructions of human beings. Contemplation and construction are antagonistic activities: one goes top-down and the other bottom-up, but there is always some sort of balance between the two. Contemplation allows us to free ourselves from the connections already built, and construction gets ready to fill the resulting void with new ones.

It is somewhat strange that the continuous proportion, despite its frequent presence in Nature, is so poorly connected with the two great constants of calculus, π and e, apart from anecdotal instances such as the "golden logarithmic spiral", which is only a particular case of the equiangular spiral. We know that both π and e are transcendental numbers, while φ is not, although it is indeed the "most irrational number", in the sense that it is the one with the slowest approximation by rational numbers or fractions. φ is also the simplest natural fractal.
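Both qualities, the slowest possible rational approximation and the simplest self-similarity, can be read off from the expansions of φ, which contain nothing but ones:

$$\varphi = 1+\cfrac{1}{1+\cfrac{1}{1+\cfrac{1}{1+\ddots}}} = \sqrt{1+\sqrt{1+\sqrt{1+\cdots}}},$$

and the convergents of the continued fraction are precisely the ratios of consecutive Fibonacci numbers: 1/1, 2/1, 3/2, 5/3, 8/5, and so on.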

Until now, the most direct link with the trigonometric functions has been through the decagon and the identity φ = 2cos 36° = 2cos(π/5). So far φ has not been associated with the imaginary numbers, i being the third great constant of calculus, concurrent with the other two in Euler's formula, of which the so-called Euler identity, e^(iπ) = −1, is a particular case.

The number e, the base of the function that is its own derivative, appears naturally in rates of change, in the ad infinitum subdivisions of a unit that tend to a limit, and in wave mechanics. Imaginary numbers, on the other hand, so common in modern physics, appeared for the first time with the cubic equations, and they pop up every time additional degrees of freedom are assigned to the complex plane.

Actually, complex numbers behave exactly like two-dimensional vectors: in the product of one number with the conjugate of another, the real part gives the inner or scalar product and the so-called imaginary part the cross or vector product; so imaginary numbers can only be associated with motions, rotations and positions in space or in additional dimensions, not with the physical quantities themselves.
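A minimal sketch of this correspondence, offered as an illustration rather than anything from the original text: multiplying the conjugate of one plane vector, written as a complex number, by another packs both products into a single complex result.

```python
# Plane vectors written as complex numbers: a = (3, 1), b = (2, 4).
a = complex(3, 1)
b = complex(2, 4)

product = a.conjugate() * b   # conj(a) * b
dot = 3 * 2 + 1 * 4           # scalar / inner product  -> real part
cross = 3 * 4 - 1 * 2         # z-component of a x b    -> imaginary part

print(product)        # (10+10j)
print(dot, cross)     # 10 10
```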

This is easier to say than to conceive, since it is even more "complex" to determine what a physical quantity or a mathematical variable might be independently of change and motion. Both to interpret geometrically the meaning of vectors and complex numbers in physics and to generalize them to any dimension, a tool like geometric algebra may be used: "the algebra flowing from geometry", as Hestenes put it; but even then there is much more to geometry than we may think.

Many problems become simpler on the complex plane, or so the mathematicians say. One of them, writing under the pseudonym Agno, posted in 2011 an entry on a mathematics forum with the title "Imaginary Golden Mean", which shows a direct connection with π and e: Φi = e^(±πi/3) [3]. Another anonymous author found the same identity in 2016, along with similar derivations, while looking for fundamental properties of an operation known as "reciprocal addition", of interest in the calculation of circuits and parallel resistances. As refraction is a kind of impedance, it may also have its place in optics. The relation in the polar diagram may be associated right from the start with geometric series, with the hypergeometric functions associated with continued fractions, with modular forms and Fibonacci series, and even with noncommutative geometry [4]. The imaginary golden ratio, in any case, mirrors many of the qualities of its real part.
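A short numerical check, assuming the identity refers to e^(±iπ/3): this number satisfies, with one sign changed, the same defining relations as the real golden mean, and it lies on the unit circle.

```python
from cmath import exp, pi

PHI = (1 + 5 ** 0.5) / 2
z = exp(1j * pi / 3)          # candidate "imaginary golden mean"

# Real golden mean:    phi**2 = phi + 1   and   phi - 1/phi = 1
# Mirrored relations:  z**2   = z   - 1   and   z   + 1/z   = 1
print(abs(z ** 2 - (z - 1)))  # ~0
print(z + 1 / z)              # ~(1+0j)
print(abs(z))                 # 1.0, on the unit circle
print(PHI ** 2 - PHI - 1, PHI - 1 / PHI)  # ~0 and 1 for the real constant
```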

The Taijitu is a circle, a wave and a vortex all in one. The synthetic genius of nature is quite different from that of man, and she does not ask for unifications because, for her, not separating arbitrarily is enough. Nature, as Fresnel said, does not care about analytical difficulties.

The diagram of the Taijitu becomes a flat section of a double spiral expanding and contracting in three dimensions, a motion that seems to give it an “extra dimension” in time. It is always a real challenge to follow the evolution of this process, both spiral and helical, within a vertical cylinder, which is but the complete representation of the indefinite propagation of a wave motion, the “universal spherical vortex” described by René Guenon in three short chapters of his work “The Symbolism of the Cross”. The cross of which Guenon speaks is certainly a system of coordinates in the most metaphysical sense of the word; but the physical side of the subject is by no means negligible.

The propagation of a wave in space is a process as simple as it is difficult to grasp in its entirety; one need only think of Huygens’ principle, the universal mode of propagation, which also underlies all quantum mechanics, and which involves continuous deformation in a homogeneous medium.

In that same year of 1931 in which Guenon was writing about the evolution of the universal spherical vortex, the first paper was published on what we know today as the Hopf fibration, the map of the connections between a sphere in three dimensions and a sphere in two dimensions. This enormously complex fibration is found even in the simple two-dimensional harmonic oscillator. Also in that year, Paul Dirac conjectured the existence of that unicorn of modern physics known as the magnetic monopole, which brought the same kind of evolution into the context of quantum electrodynamics.

Peter Alexander Venis gives us, in a wonderful work, a completely phenomenological approach to the classification and typology of the different vortices. There is nothing mathematical here, neither advanced nor elementary, but a sequence of transformations of 5 + 5 + 2, or 7, classes of vortices with many types and countless variants, which unfold from the completely undifferentiated only to return to the undifferentiated again, or to the infinity of which Venis prefers to speak. The transitions from ideal points with no extension to the apparent forms of nature seem quite arbitrary without the aid of vortices, hence their importance and universality.

Peter Alexander Venis

Venis does not deal with the mathematical and physical aspects of such a complex subject as vortices, and of course he does not apply the continuous proportion to them; on the contrary, he gives us the privilege of a fresh new vision of these rich processes, in which the insight of a pre-Socratic naturalist and the capacity for synthesis of a Chinese systematist meet effortlessly.

Even if the Venis sequence admits variations, it presents us with a morphological model of evolution that goes beyond the scope of the ordinary sciences and disciplines. The author includes under the term "vortices" flow processes that may or may not involve rotation, but there is a good reason for that, since it is necessary in order to cover key conditions of equilibrium. He also applies the theory of yin and yang in a way that is both logical and intuitive, and which probably admits a fairly elementary translation into the qualitative principles of other traditions.

The study of this sequence of transformations, in which questions of acoustics and image are closely linked, should be of immediate interest in order to deepen the criteria of morphology and design even without the need to enter into further considerations.

Peter Alexander Venis

A metric-free description would be precisely the perfect counterpoint for a subject as badly affected by arbitrariness in measurement criteria as the study of proportionality. Naturally, mathematics also has several tools essentially free of metrics, such as exterior differential forms, which allow the physical fields to be studied with maximum elegance. The metrics with which physics deals could then perhaps be used as a middle ground between the two extremes.

Thus, in this search to better define the context for the appearance of the continuous proportion in the world of phenomena, we can speak of three types of basic spaces: the ametric or metric-free space, the metric spaces, and the parametric or parameter spaces.

By metric-free space we understand the various spaces that are free of metrics and of the act of measurement, from the purely morphological sequence of vortices above to projective geometry or the metric-independent parts of topology and of differential forms. The projective, metric-independent space is the only true space; if we sometimes speak of these spaces in the plural, it is only because of their different connections with metric spaces.

By metric spaces we mean those of the fundamental theories of physics, not only the mainstream theories but also related ones, with special emphasis on the Euclidean metric space in three dimensions of our ordinary experience. They include physical constants and variables, but here we are particularly interested in theories that do not depend on dimensional constants and can be expressed in homogeneous proportions or quantities.

By parametric or parameter spaces we mean the spaces of correlations, data, and adjustable values that serve to define mathematical models, with any number of dimensions. We can also call it the algorithmic and statistical sector.

We are not going to deal here with the countless relationships that can exist between these three kinds of spaces. Suffice it to say that, in order to get out of this labyrinth of complexity in which all the sciences are already immersed, the only possible Ariadne's thread, if there is one, has to trace a retrograde path: from numbers back to phenomena, with the emphasis on the latter and not the other way around. And we are referring to phenomena not previously confined by a metric space.

Much has been said about the distinction between "the two cultures" of the sciences and the humanities, but it should be noted that, before attempting to close that by now seemingly insurmountable gap, we should first bridge the gap between the natural, descriptive sciences and a physical science that, justified by its predictions, becomes indistinguishable from the power of abstraction of mathematics while isolating itself from the rest of Nature, for which it would nevertheless like to serve as a foundation. Reversing this fatal trend is of the greatest importance for the human being, and all efforts in that direction are worthwhile.