From monopoles to polarity

Polarity has always been an essential component of natural philosophy and even of plain thought, but the advent of the theory of electric charge replaced a living idea with a mere convention.

Regarding the universal spherical vortex, we mentioned earlier Dirac’s monopole hypothesis. Dirac conjectured the existence of a magnetic monopole just for the sake of symmetry: if there are electric monopoles, why should magnetic monopoles not exist too?

Mazilu, following E. Katz, quietly suggests that there is no need to complete this symmetry, since we already have a higher order symmetry: magnetic poles appear separated by portions of matter, and electric poles appear separated only by portions of space. This is in full accordance with the interpretation of electromagnetic waves as a statistical average of what occurs between space and matter.

And this puts the finger on the point everybody tries to avoid: it is said that the current is the effect that occurs between charges, but actually it is the charge that is defined by the current. Elementary charge is a postulated entity, not something that follows from definitions. Mathis can rightly say that the idea of elementary charge can be completely dispensed with and replaced by mass, which is justified by dimensional analysis and greatly simplifies the picture —freeing us, among other things, from «vacuum constants» such as permittivity and permeability that are completely inconsistent with the word «vacuum» [50].
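
The dimensional part of this argument can at least be illustrated with a standard fact, independent of Mathis’ broader claims: in Gaussian units no vacuum constants appear in Coulomb’s law, and charge is not an independent dimension but reduces to mass, length and time:

```latex
F = \frac{q_1 q_2}{r^2}
\quad\Longrightarrow\quad
[q] = \mathrm{M}^{1/2}\,\mathrm{L}^{3/2}\,\mathrm{T}^{-1}
```

Replacing charge by mass outright goes beyond this, and is Mathis’ own step; but the example shows that permittivity and permeability are unit conventions rather than necessities.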

In this light, there are no magnetic monopoles, because there are no electric monopoles nor dipoles to begin with. The only things that would exist are gradients of a neutral charge, photons producing attractive or repulsive effects according to the relative densities and the screening exerted by other particles. And by the way, it is this purely relative and changing sense of shadow and light that characterized the original notion of yin and yang and of polarity in all cultures until the great invention of elementary charge.

So it can well be said that electricity killed polarity, a death that will not last long since polarity is a much larger and more interesting concept. It is truly liberating for the imagination and our way of conceiving Nature to dispense with the idea of bottled charges everywhere.

Yes, it is more interesting even for such a theoretical subject as the monopole. Theoretical physicists have even imagined global cosmological monopoles. But it is enough to imagine a universal spherical vortex like the one already mentioned, without any kind of charge but with self-interaction and a double motion, for the rotations associated with magnetism and the attractions and repulsions associated with charge to arise. The very reversals of the field in Weber’s electrodynamics already invited us to think that charge is a theoretical construct.

We should come to see electromagnetic attraction and repulsion as totally independent of charge, and, conversely, the unique field that includes gravity as capable of both attraction and repulsion. This is the condition, not to unify, but to approach the effective unity that we presuppose in Nature.

*

The issue of polarity leads us to think of another great theoretical problem for which an experimental correlate is sought: the Riemann zeta function. As we know, basic aspects of this function present an enigmatic similarity with the random matrices of subatomic energy levels and many other collective features of quantum mechanics. Science looks for mathematical structures in physical reality, but here on the contrary we would have a physical structure reflected in a mathematical reality. Great physicists and mathematicians such as Berry or Connes proposed more than ten years ago to confine an electron in two dimensions under electromagnetic fields to «get its confession» in the form of the zeros of the function.

Polar zeta

There has been a great deal of discussion about the dynamics capable of recreating the real part of the zeros of the Riemann zeta function. Berry surmises that this dynamics should be irreversible, bounded and unstable, which makes a big difference with the ordinary processes expected from the current view of fundamental fields, but brings it closer to quantum thermomechanics or, what is the same, irreversible quantum thermodynamics. Moreover, it seems that what is at stake here is the most basic arithmetic relationship between addition and multiplication, as opposed to the scope of multiplicative and additive reciprocities in physics.

Most physicists and mathematicians think that there is nothing left to scratch at in the nature of imaginary numbers or the complex plane; but as soon as they have to deal with the zeta function, there is hardly anyone who does not begin with an interpretation of the zeros of the function —especially when it comes to its connections with quantum mechanics. So it seems that only when there are no solutions does the interpretation matter.

Maybe it would be healthy to purge the calculus in the sense that Mathis asks for and see to what extent it is possible to obtain results in the domain of quantum mechanics without resorting to complex numbers or the stark heuristic methods of renormalization. In fact, in Mathis’ version of calculus each point is equivalent to at least one distance, which should give us additional information. If the complex plane allows extensions to any dimension, we should check what its minimum transposition into real numbers is, both for physical and arithmetic problems. After all, Riemann’s starting point was the theory of functions, rather than number theory.

Surely if physicists and mathematicians knew the role of the complex plane in their equations they would not be thinking of confining electrons in two dimensions and other equally desperate attempts. The Riemann zeta function is inviting us to inspect the foundations of calculus, the bases of dynamics, and even our models of the point particles and elementary charge.

The zeta function has a pole at unity and a critical line with a value of 1/2 on which all its known non-trivial zeros lie. The carriers of «elementary charge», the electron and the proton, both have a spin with a value of 1/2, and the photon that connects them a spin with a value equal to 1. But why should spin be a statistical feature and not charge? Possibly the interest of the physical analogies for the zeta function would be much greater if the concept of elementary charge were to be dispensed with.

That the imaginary part of the electron wave function is linked to spin and rotation is no mystery. But the imaginary part associated with the quantization of particles of matter or fermions —among which is the electron— has no obvious relation to spin. However, in classical electromagnetic waves we can deduce that the imaginary part of the electric component is related to the real part of the magnetic component, and vice versa. The scattering amplitudes and their analytical continuation cannot be separated from the spin statistics, and vice versa; and both are associated with timelike and spacelike phenomena respectively. There can also be different analytical continuations with different meanings and geometrical interpretations in the Dirac algebra.

In electrodynamics the whole development of the theory goes very explicitly from global to local. Gauss’s divergence theorem of integral calculus, which Cauchy used to prove his residue theorem of complex analysis, is the prototype of the cyclic or period integral. Like Gauss’s law of electrostatics, it is originally independent of the metric, though this is seldom taken into account. The Aharonov-Bohm integral, a prototype of the geometric phase, is very similar in structure to the Gauss integral.

As Evert Jan Post emphasizes time and again, the Gauss integral works as a natural counter of net charge, just as the Aharonov-Bohm integral works for quantized units of flux in the self-interaction of beams. This clearly speaks in favor of the ensemble interpretation of quantum mechanics, in contrast with the Copenhagen interpretation, which states that the wave function corresponds to a single individual system with special or non-classical statistics. The main statistical parameters here would be, in line with Planck’s 1912 work in which he introduced the zero point energy, the mutual phase and orientation of the ensemble elements [51]. Of course orientation is a metric-independent property.

Naturally, this integral line of reasoning shows its validity in the quantum Hall effect and its fractional variant, present in two-dimensional electronic systems under special conditions; this would bring us back to the above-mentioned attempts at electron confinement, but from the angle of classical, ordinary statistics. In short, if there is a correlation between this function and atomic energy levels, it should not be attributed to some special property of quantum mechanics but to the large random numbers generated at the microscopic level.

If we cannot understand something in the classical domain, we will hardly see it more clearly under the thick fog of quantum mechanics. There are very significant correlates of the Riemann zeta function in classical physics without the need to invoke quantum chaos; and in fact well known models such as those of Berry, Keating and Connes are semiclassical, which is a way to stay in between. We can find a purely classical instance of the zeta function in dynamical billiards, starting with a circular billiard, which can be extended to other shapes like ellipses or the so-called stadium shape, a rectangle capped with two semicircles.

The dynamics of a circular billiard with a bouncing point particle is integrable and allows one to model, for instance, the electromagnetic field in resonators such as microwave and optical cavities. If we open holes along the boundary, so that the ball has a certain probability of escape, we have a dissipation rate and the problem becomes more interesting. Of course, the probability depends on the ratio between the size of the holes and the size of the perimeter.

The connection with the prime numbers occurs through the angles of the trajectories, taken modulo 2π/n. Bunimovich and Dettmann showed that for a two-hole problem with holes separated by 0º, 60º, 90º, 120º or 180º the escape probability is uniquely determined by the pole and the non-trivial zeros of the Riemann zeta function [52]. The sum involved can be determined explicitly and contains terms that can be connected with Fourier harmonic series and hypergeometric series. I do not know if this result may be extended to elliptical boundaries, which are also integrable. But if the boundary of the circle is smoothly deformed to create a stadium-like shape, the system is no longer integrable and becomes chaotic, requiring an evolution operator.
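
To make the geometry concrete, here is a minimal Monte Carlo sketch of the open circular billiard in Python (assuming only NumPy; the function name and sampling choices are ours, and this only illustrates the setup, not the Bunimovich-Dettmann closed-form result). Since the reflection angle is conserved, the bounce map on the boundary is a rigid rotation, and escape simply means the orbit landing on a hole:

```python
import numpy as np

rng = np.random.default_rng(0)

def survival_probability(hole_centers, hole_half_width, n_bounces, n_samples=100_000):
    """Fraction of trajectories that have not escaped after n_bounces.

    In the circular billiard the reflection angle is conserved, so the
    bounce map on the boundary is a rigid rotation by a fixed angle phi;
    a trajectory escapes when a bounce point lands inside a hole.
    """
    s = rng.uniform(0.0, 2 * np.pi, n_samples)   # initial boundary point
    p = rng.uniform(-1.0, 1.0, n_samples)        # sin(reflection angle): invariant measure
    phi = np.pi - 2 * np.arcsin(p)               # boundary advance per bounce
    alive = np.ones(n_samples, dtype=bool)
    for n in range(1, n_bounces + 1):
        pos = np.mod(s + n * phi, 2 * np.pi)
        for c in hole_centers:
            # angular distance from the bounce point to the hole center
            d = np.abs(np.mod(pos - c + np.pi, 2 * np.pi) - np.pi)
            alive &= d >= hole_half_width        # landing in a hole kills the orbit
    return alive.mean()

# Two holes 90 degrees apart, one of the Bunimovich-Dettmann configurations:
for t in (10, 100, 1000):
    print(t, survival_probability([0.0, np.pi / 2], 0.005, t))
```

The sketch only reproduces the slow, power-law-like decay typical of the integrable case; in the exact treatment [52] it is in the asymptotics of t·P(t) that the pole and the non-trivial zeros appear.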

Dynamical billiards have multiple applications in physics and are used as paradigmatic examples of conservative systems and Hamiltonian mechanics; however, the «leakage» allowed to the particles is nothing more than a dissipation rate, and in the most explicit manner possible. Therefore, our guess that the zeta function should be associated with thermomechanical systems that are irreversible at a fundamental level is plain to understand —it is Hamiltonian mechanics that begs the exception to begin with. Since it is assumed that fundamental physics is conservative, there is a need for «a little bit open» closed systems when, according to our point of view, what we always have is open systems that are partially closed. And according to our interpretation, physics has the question upside down: a system is reversible because it is open, and it is irreversible to the extent that it becomes closed, differentiated from the fundamental homogeneous medium.

From a different angle, the most apparent aspect of electromagnetism is light and color. Goethe said that color was the frontier phenomenon between shadow and light, in that uncertain region we call penumbra. Of course his was not a physical theory but a phenomenology, which not only does not detract from it but adds to it. Schrödinger’s 1920 theory of color space, based on arguments of affine geometry though with a Riemannian metric, is somewhat halfway between perception and quantification and can serve us, with some improvements introduced later, to bring together visions that seem totally unconnected. Mazilu shows that it is possible to obtain matrices similar to those arising from field interactions [53].

Needless to say, the objection to Goethe was that his concept of polarity between light and darkness, as opposed to the sound polarity of electric charge, was not justified by anything; but we believe that it is just the opposite. All that exists are density gradients; light and shadow can create them, while a charge that is only a + or − sign attached to a point cannot. Colors are within light-and-darkness as light-and-darkness are within space-and-matter.

It is said, for example, that the Riemann zeta function could play the same role for chaotic quantum systems as the harmonic oscillator does for integrable quantum systems. Maybe. But do we know everything about the harmonic oscillator? Far from it, as the Hopf fibration or the monopole remind us. On the other hand, the first appearance of the zeta function in physics was with Planck’s formula for blackbody radiation, where it enters the calculation of the average energy of what would later be called «photons».
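
That first appearance is a standard computation worth recalling: integrating Planck’s spectral density over all frequencies produces a Bose-Einstein integral whose value is fixed by the gamma and zeta functions,

```latex
\int_0^\infty \frac{x^{s-1}}{e^{x}-1}\,dx=\Gamma(s)\,\zeta(s),
\qquad
u(T)=\frac{8\pi h}{c^{3}}\int_0^\infty\frac{\nu^{3}\,d\nu}{e^{h\nu/kT}-1}
=\frac{8\pi^{5}k^{4}}{15\,c^{3}h^{3}}\,T^{4},
```

where the case s = 4, with ζ(4) = π⁴/90, yields the Stefan-Boltzmann law.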

The physical interpretation of the zeta function always forces us to consider the correspondences between quantum and classical mechanics. Therefore, a problem almost as intriguing as this one should be finding a classical counterpart to the spectrum described by Planck’s law; however, there seems to be no apparent interest in this. Mazilu, again, reminds us of the discovery by Irwin Priest in 1919 of a simple and enigmatic transformation under which Planck’s formula yields a Gaussian, normal distribution with an exquisite correspondence over the whole frequency range [54].

In fact, the correspondence between quantum and classical mechanics is an incomparably more relevant issue on a practical and technological level than that of the zeta function. As for the theory, there is no clarification from quantum mechanics about where the transition zone would be. There are probably good reasons why this field apparently receives so little attention. However, it is highly probable that Riemann’s zeta function unexpectedly connects different realms of physics, making it a mathematical object of unparalleled depth.

The point is that there is no interpretation for the cube root of the frequency in Priest’s transformation. Referring to the cosmic background radiation, «and perhaps to thermal radiation in general», Mazilu ventures his best guess only to finish with this observation: «Mention should be made that such a result can very well be specific to the way the measurements of the radiation are usually made, i.e. by a bolometer». Leaving this transformation aside, various more or less direct ways of deriving Planck’s law from classical assumptions have been proposed, from the one suggested by Noskov on the basis of Weber’s electrodynamics to that of C.K. Thornhill [55]. Thornhill proposed a kinetic theory of electromagnetic radiation with a gaseous ether composed of an infinite variety of particles, where the frequency of electromagnetic waves is correlated with the energy per unit mass of the particles instead of just the energy —obtaining the Planck distribution in a much simpler way.
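
Priest’s observation can at least be probed numerically. A minimal sketch, assuming NumPy: take the dimensionless Planck density, change the variable to u = x^(1/3), normalize, and compare against a Gaussian with the same mean and variance. Priest’s exact plotting convention matters here, so rather than committing to one reading the sketch prints the deviation both for the curve simply replotted against u and for the density transformed with the Jacobian:

```python
import numpy as np

def planck(x):
    """Dimensionless Planck spectral density, x = h*nu/(k*T)."""
    return x**3 / np.expm1(x)

def gaussian_deviation(u, f):
    """Normalize f as a density on the uniform grid u, fit a Gaussian by
    matching mean and variance, and return the peak-relative deviation."""
    du = u[1] - u[0]
    f = f / (f.sum() * du)                      # normalize to unit area
    m = (u * f).sum() * du                      # mean
    v = ((u - m) ** 2 * f).sum() * du           # variance
    g = np.exp(-((u - m) ** 2) / (2 * v)) / np.sqrt(2 * np.pi * v)
    return np.abs(f - g).max() / f.max()

u = np.linspace(1e-3, 3.0, 20_000)              # u = x**(1/3); x runs up to 27

print(gaussian_deviation(u, planck(u**3)))              # curve replotted against u
print(gaussian_deviation(u, planck(u**3) * 3 * u**2))   # density with Jacobian dx = 3*u**2*du
```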

The statistical explanation of Planck’s law is already known, but Priest’s Gaussian transformation demands a physical explanation in terms of classical statistics. Mazilu makes specific mention of the measurement device used, in this case the bolometer, based on an absorptive element —the Riemann zeta function is said to correspond to an absorption spectrum, not an emission spectrum. If today metamaterials are used to «illustrate» space-time variations and mock black holes —where the zeta function is also used to regularize calculations— they could be used with much sounder arguments to study variations of the absorption spectrum and attempt a reconstruction by a sort of reverse engineering. The enigma of Priest’s formula could be approached from a theoretical as well as a practical and constructive point of view —although the very explanations for the performance of metamaterials are controversial and would have to be purged of numerous assumptions.

Of course, by the time Priest published his article the idea of quantization had already won the day. His work fell into oblivion, and little else is known about the author except for his dedication to colorimetry, in which he established a criterion of reciprocal temperatures for the minimum difference in color perception [56]; of course these are perceptive temperatures, not physical ones, and the correspondence between temperatures and colors was never found. If there is an elementary additive and subtractive algebra of colors, there must also be a product algebra, surely related to their perception. This brings to mind the so-called non-spectral line of purples between the red and violet ends of the spectrum, the perception of which is limited by the luminosity function. With a proper correspondence this could give us a beautiful analogy that the reader may try to guess.

On the other hand, it should not be forgotten that Planck’s constant has nothing to do with the uncertainty in the energy of a photon, though today they are routinely associated [57]. Noskov’s longitudinal vibrations in moving bodies immediately remind us of the zitterbewegung or «trembling motion» introduced by Schrödinger to interpret the interference of positive and negative energy in the relativistic Dirac equation. Schrödinger and Dirac conceived this motion as «a circulation of charge» generating the magnetic moment of the electron. The frequency of rotation of the zbw would be of the order of 10²¹ hertz, too high for detection except through resonances.
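
For reference, the standard numbers behind that estimate (the order 10²¹ refers to the angular frequency):

```latex
\nu_{B}=\frac{m_{e}c^{2}}{h}\approx 1.24\times10^{20}\,\mathrm{Hz},
\qquad
\nu_{zbw}=2\nu_{B}\approx 2.47\times10^{20}\,\mathrm{Hz},
\qquad
\omega_{zbw}=\frac{2m_{e}c^{2}}{\hbar}\approx 1.55\times10^{21}\,\mathrm{s}^{-1}.
```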

David Hestenes has analyzed various aspects of the zitter in his view of quantum mechanics as self-interaction. P. Catillon et al. conducted an electron channeling experiment on a crystal in 2008 to confirm de Broglie’s internal clock hypothesis. The resonance detected experimentally is very close to the de Broglie frequency, which is half the frequency of the zitter; the de Broglie period would be directly associated with mass, as has recently been suggested. There are several models for creating resonances with the electron that reproduce the zeta function in cavities and in Artin’s dynamical billiards, but they are not usually associated with the zitter or the de Broglie internal clock, since these do not fit into the conventional version of quantum mechanics. On the other hand, it would be advisable to consider a totally classical zero-point energy, as can be followed from the works of Planck and his continuators in stochastic electrodynamics, though all these models are based on point particles [58].

Math, it is said, is the queen of sciences, and arithmetic the queen of mathematics. The fundamental theorem of arithmetic places prime numbers at its center, as the most irreducible aspect of the integers. The main problem with prime numbers is their distribution, and the best approximation to their distribution comes from the Riemann zeta function. This in turn has a critical condition, which is precisely to find out whether all the non-trivial zeros of the function lie on the critical line. The passage of time and competition among mathematicians have turned the task of proving the Riemann hypothesis into the K-1 of mathematics, a sort of duel between man and infinity.

William Clifford said that geometry was the mother of all sciences, and that one should enter it bending down like children; it seems, on the contrary, that arithmetic makes us more haughty, because in order to count we do not need to look down. And that —looking down at the most elementary and forgetting about the hypothesis as much as possible— would be the best thing for the understanding of this subject. Naturally, this could also be said of countless questions where the overuse of mathematics creates too rarefied a context, but here at least a basic lack of understanding is admitted.

There seem to be two basic ways of understanding the Riemann zeta function: as a problem posed by infinity or as a problem posed by unity. So far modern science, driven by the history of calculus, has been much more concerned with the first aspect than with the second, even if both are inextricably linked.

It has been said that if a zero is found outside the critical line —if Riemann’s hypothesis turns out to be false— that would create havoc in number theory. But if the first zeros already evaluated by the German mathematician are well calculated, the hypothesis can be practically taken for granted, without the need to calculate trillions or quadrillions more of them. In fact, and in line with what was said in the previous chapter, it seems much more likely to find flaws in the foundations of calculus and its results than to find zeros off the line; and in addition the healthy creative chaos that this would produce would surely not be confined to a single branch of math.
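
Those first zeros can be reproduced today in a few lines; a small sketch with the mpmath library, whose zetazero routine returns the nth non-trivial zero on the critical line:

```python
from mpmath import mp, zetazero, zeta

mp.dps = 25                         # working precision, in decimal digits

for n in range(1, 6):
    rho = zetazero(n)               # nth non-trivial zero, Re(rho) = 1/2
    print(n, rho, abs(zeta(rho)))   # |zeta(rho)| vanishes to working precision
```

The first zero comes out as 1/2 + 14.134725141…i, and the imaginary parts of the next four follow at 21.02, 25.01, 30.42 and 32.94.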

Of course, this applies to the evaluation of the zeta function itself. If Mathis’ simplified calculus, using a unitary interval criterion, finds divergences even in the values of the elementary logarithmic function, these divergences would have to be far more important in calculations as convoluted as those of this function. In any case it gives us a different criterion for the evaluation of the function; furthermore, there might be a criterion to settle whether certain divergences and error terms cancel out.

The devil’s advocates in this case would not have done the most important part of their work yet. On the other hand, fractional derivatives of this function have been calculated, allowing us to see where the real and imaginary parts converge; this is of interest both for complex analysis and for physics. In fact it is known that in physical models the evolution of the system with respect to the pole and zeros usually depends on the dimension, which in many cases is fractional or fractal, and even multifractal for potentials associated with the numbers themselves.

Arithmetic and counting exist primarily in the time domain, and there are good reasons to think that methods based on finite differences should take a certain kind of preference when dealing with changes in the time domain —since with infinitesimals the act of counting dissolves. The fractional analysis of the function should also be concerned with sequential time. Finally, the relationship between discrete and continuous variables characteristic of quantum mechanics should also be connected with the methods of finite differences.
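
These two threads, finite differences and fractional analysis, actually meet in the Grünwald-Letnikov derivative, which is literally a limit of backward finite differences with binomial weights of non-integer order. A generic minimal sketch in Python (our own naming, and not specific to the zeta function; scipy.special.binom accepts real orders):

```python
import numpy as np
from scipy.special import binom

def gl_derivative(f, t, alpha, h=1e-3, n_terms=4000):
    """Grunwald-Letnikov fractional derivative of order alpha at t,
    truncating the backward series:
    D^alpha f(t) ~ h**(-alpha) * sum_k (-1)**k * C(alpha, k) * f(t - k*h)."""
    k = np.arange(n_terms)
    weights = (-1.0) ** k * binom(alpha, k)   # generalized binomial weights
    return (weights * f(t - k * h)).sum() / h**alpha

# The exponential is an eigenfunction of this operator for every order
# (with lower terminal at -infinity), so both lines should print a value
# close to e = 2.71828..., up to step and truncation error.
print(gl_derivative(np.exp, 1.0, 1.0))
print(gl_derivative(np.exp, 1.0, 0.5))
```

For alpha = 1 the weights vanish beyond k = 1 and the formula collapses to the ordinary difference quotient, which is the sense in which the definition keeps counting, and sequential time, at its base.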

Quantum physics can be described more intuitively with a combination of geometric algebra and fractional calculus for cases containing intermediate domains. In fact, these intermediate domains may be much more numerous than we think if we take into account both the mixed assignment of variables in orbital dynamics and the different scales at which waves and vortices can occur between the field and the particles in a different perspective like Venis’. The very self-interaction of the zitterbewegung calls for a much greater concreteness than hitherto achieved. This motion allows, among other things, a more directly geometric, and even classical, translation of the non-commutative aspects of quantum mechanics, which in turn allows for a key natural connection between discrete and continuous variables.

Michel Riguidel has made the zeta function the object of an intensive work of interaction in search of a morphogenetic approach. It would be great if the computing power of machines could be used to refine our intuition, interpretation and reflection, rather than the other way around. However, here it is easy to present two major objections. First, the huge plasticity of the function which, although completely differentiable, contains, according to Voronin’s universality theorem, any amount of information an infinite number of times.

The second objection is that if the function already has a huge plasticity, and graphics can in any case represent only partial aspects of it, further deformations and transformations, however evocative they may be, still introduce new degrees of arbitrariness. The logarithm can be transformed into a spiral halfway between the line and the circle, to create spiral waves and whatnot, but in the end these are just representations. The interest, at any rate, lies in the function-subject-representation interplay —the interaction between mathematical, conceptual and representational tools.

But there is no need for more convoluted concepts. The greatest obstacle to going deeper into this subject, as in so many others, lies in the stark opposition to examining the foundations of calculus and of classical and quantum mechanics. The more complex the arguments to prove or disprove the hypothesis, be it true or false, the less importance the result can have for the real world.

It is often said that the meaning of the Riemann hypothesis, and even of all the computed zeros of the function, is that the prime numbers have as random a distribution as possible, which of course leaves wide open how much randomness is possible. We may have no choice but to talk about apparent randomness.

But even so, there we have it: the highest degree of apparent randomness in a simple linear sequence generalizable to any dimension hides an ordered structure of unfathomable richness.

[Figure: Michel Riguidel, Morphogenesis of the Zeta Function in the Critical Strip by Computational Approach]

*

Let us return to the qualitative aspect of polarity and its problematic relationship with the quantitative realm. Not only is the relation between the qualitative and the quantitative problematic; the qualitative interpretation itself raises a basic question inevitably connected with the quantitative.

For P. A. Venis everything can be explained with yin and yang, seen in terms of expansion and contraction and of higher or lower dimensions. Although this interpretation greatly deepens the possibility of connection with physics and mathematics, the version of yin-yang theory he uses is that of the Japanese practical philosopher George Ohsawa. In the Chinese tradition, yin is basically related to contraction and yang to expansion. Venis surmises that the Chinese interpretation may be more metaphysical and Ohsawa’s more physical; later he suggests that the former could be more related to the microcyclical processes of matter and the latter to the mesocyclical processes more typical of our scale of observation; but both views seem to be quite divergent.

Without resolving these very basic differences we cannot expect to soundly connect these categories with quantitative aspects, although one may still speak of contraction and expansion, with or without relation to dimensions. But, on the other hand, any reduction of such vast and nuanced categories to mere linear relations with coefficients for separate aspects such as «expansion» or «contraction» runs the risk of becoming a poor simplification, dissolving the value of the qualitative in the appreciation of nuances and degrees.

Venis’ interpretation is not superficial at all; on the contrary, it is easy to see that it gives a much deeper dimension, quite literally, to these terms. The extrapolation to aspects such as heat and color may seem to lack the desirable quantitative and theoretical justification, but in any case it is logical, consistent with his general vision, and leaves the subject wide open for further inquiry. However, the radical disagreement on the most basic qualifications is already a challenge for interpretation.

It should be said right from the start that the Chinese version cannot be reduced to the understanding of yin and yang as contraction and expansion, nor to any pair of conceptual opposites to the exclusion of all the others. Contraction and expansion are only one of many possible pairs, and even if they are often used, like any other pair they depend entirely on the context. Perhaps the most common use is that of «full» and «empty», which in turn is intimately linked to contraction and expansion, although they are far from identical. Or also, depending on the context, the tendency towards fullness or emptiness; it is not for nothing that one commonly distinguishes between young and old yang, or young and old yin. Reversal is the way of the Tao, so it is only natural that these points of potential, spontaneous reversal are also conspicuous in the Taijitu.

On the other hand, qualities such as full and empty not only have a clear translation in differential terms for field theories, hydrodynamics or even thermodynamics, but also have an immediate, though much more diffuse, meaning for our inner sense, which is precisely the common sense or sensorium as a whole: our undifferentiated sensation prior to the imprecise «sensory cut» that seems to generate the field of our five senses. This common sensorium also includes kinesthesia, our immediate perception of movement and our self-perception, which can be of the body as well as of consciousness itself.

This inner sense or common sensorium is just another expression for the homogeneous, undivided medium that we already are —the background and tacit reference for feeling, perception and thought. Any kind of intuitive or qualitative knowledge takes that as its reference, which obviously goes beyond any rational or sensory criteria of discernment. Conversely, we could say that this background is obviated in formal thought but assumed in intuitive knowledge. Physicists often speak of a result being «counter-intuitive» only in the sense that it goes against expected or acquired knowledge, not against intuition, which it would be vain to define.

However, it would be absurd to say that the qualitative and the quantitative are completely separate spheres. Mathematics is both qualitative and quantitative. We are used to hearing that there are more qualitative branches, like topology, and more quantitative ones, like arithmetic or calculus, but on closer inspection this hardly makes any sense. Venis’ morphology is totally based on the idea of flow and on such elementary notions as points of equilibrium and points of inversion. Newton himself called his differential calculus the «method of fluxions», the analysis of flowing quantities, and the methods for evaluating curves are based on the identification of turning points. So there is a compatibility that is not forced but natural; that modern science has advanced in the opposite direction, towards an increasing abstraction which in turn is the just counterbalance to its utilitarianism, is another story.

Polarity and duality are quite different things, but it is useful to perceive how they were related before the convention of electric charge was introduced. The reference here cannot fail to be the electromagnetic theory, which is the basic theory of light and matter, and to a large extent also of space and matter.

Obviously, it would be absurd to say that a positive charge is yang and a negative charge is yin, since between the two there is only an arbitrary change of sign. In the case of an electron or a proton other factors come into play, such as the fact that one is much less massive than the other, or that one is peripheral while the other sits at the core of the atom. Let us take another example. At the biological and psychological level, we live between stress and pressure, which frame our way of perceiving things. But it would also be absurd to say that one or the other is yin or yang insofar as we understand tension only as a negative pressure, or vice versa. In other words, mere changes of sign seem trivial to us; they become more interesting, qualitatively and quantitatively, when they involve other transformations.

Whether everything or nothing is trivial depends only on our knowledge and attention; a superficial knowledge may judge as trivial things that are in fact full of content. The polarity of charge may seem trivial, as may the duality of electricity and magnetism, or the relationship between kinetic and potential energy. Actually none of them is trivial at all, but when we try to see everything together we already have a space-time algebra with a huge range of variants.

In the case of pressure and stress or tension, the most apparent transformation is the deformation of a material. Strain-stress-pressure relations define, for instance, the properties of the pulse, whether in the pulsology of traditional Chinese or Indian medicine or in modern quantitative pulse analysis; but this also leads us to the stress-strain relations that define the constitutive law in materials science. Constitutive relations, on the other hand, are the complementary aspect of Maxwell’s electromagnetic field equations, telling us how the field interacts with matter.
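
For reference, in their simplest linear and isotropic form these constitutive relations read

```latex
\mathbf{D}=\varepsilon\,\mathbf{E},\qquad
\mathbf{B}=\mu\,\mathbf{H},\qquad
\mathbf{J}=\sigma\,\mathbf{E},
```

formally parallel to Hooke’s law σ = Eε relating stress to strain in a material.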

It is usually said that electricity and magnetism, which are measured with dimensionally different units, are the dual expression of the same force. As we have already pointed out, this duality implies the space-matter relationship, both for the waves and for what is supposed to be the material support of the electric and magnetic polarity; in fact, and without going into further detail, this seems to be the key distinction.

All gauge field theories can be expressed in terms of forces and potentials, but also in terms of non-trivial pressure-strain-stress variations that involve feedback; and there is feedback in any case, because first of all there is a global balance, and only then a local one. These relations are already present in Weber’s force law, only there what is «deformed» is the force instead of matter. The great virtue of Maxwell’s theory is to make explicit the duality between electricity and magnetism that is hidden in Weber’s law. But we must insist, with Nicolae Mazilu, that the essence of gauge theory can already be found in Kepler’s problem.

Constitutive relations with definite values such as permittivity and permeability cannot occur in empty space, so they can only be a statistical average of what occurs between matter and space. Matter can sustain stress without exhibiting strain or deformation, and space can deform without stress or tension —this runs parallel to the basic signatures of electricity and magnetism, which are tension and deformation. Strain and tension are not yin or yang in themselves, but to yield easily to deformation is yin, and to withstand tension without deformation is yang —at least as far as the material aspect is concerned. Of course between the two there must be a whole continuous spectrum, often affected by many other considerations.

However, from the point of view of space, to which we have no direct access except through the mediation of light, we could see the opposite: expansion without coercion would be pure yang, while contraction may be seen as a reaction of matter to the expansion of space, or to the radiations that fill it. The waves of radiation themselves are an intermediate and alternating process between contraction and expansion, between matter and space, which cannot exist separately. However, a deformation is a purely geometrical concept, while a tension or a force is not, and it is here that the proper domain of physics begins.

Perhaps in this way a criterion for reconciling the two interpretations can be discerned, not without careful attention to the overall picture of which they are part; each may have its range of application, but they cannot be totally separate.

It is a law of thought that concepts appear as pairs of opposites, there being an infinity of them; finding their relevance in nature is something else, and the problem becomes nearly unsolvable when the quantitative sciences introduce their own concepts, also subject to antinomies but of a very different order and certainly much more specialized. However, simultaneous attention to the whole and to the details makes this a task far from impossible.

Much has been said about holism and reductionism in the sciences, but it must be remembered that physics, to start with, has never been described in rigorously mechanical terms. Physicists hold on to the local application of gauge fields only because that is what gives them predictions, but the very concept of the Lagrangian that makes all this possible is integral or global, not local. What is surprising is that this global character has no proper use in fields such as medicine or biophysics.

Starting from these global aspects of physics, a genuine and meaningful connection between the qualitative and the quantitative is much more feasible. The conception of yin and yang is only one of many qualitative readings man has made of nature, but even taking into account the extremely fluid character of these distinctions it is not difficult to establish the correspondences —for example, with the three gunas of Samkhya, or with the four elements and four humors of the Western tradition, in which fire and water are the extreme elements and air and earth the intermediate ones; these too can be seen in terms of contraction and expansion, of pressure, tension and deformation.

Needless to say, the idea of balance is not exclusive to the Chinese conception either, since the cross and the quaternary have always had a connotation of equilibrium that is totally elemental and universal in character. It is rather in modern physics that equilibrium ceases to have a central place, owing to inertia, although it cannot cease to be omnipresent and essential for the use of reason itself, as it is for logic and algebra. The possibility of contact between quantitative and qualitative knowledge depends both on the precise place we give to the concept of equilibrium and on the correct appreciation of the context and global features of the so-called mechanics.

Unlike the usual scientific concepts, which inevitably tend to become more detailed and specialized, notions such as yin and yang are ideas of the utmost generality, indexes to be identified in the most specific contexts; if we try to define them too much they lose the generality that gives them their value as an intuitive aid. But the most general ideas of physics have also been subject to constant evolution and modification depending on the context: one need only look at the continuous transformations of quantitative concepts such as force, energy or entropy, not to mention issues such as the criterion and range of application of the three principles of classical mechanics.

Vortices can be expressed in the elegant language of the continuum, of compact exterior differential forms or geometric algebra; but vortices speak above all a language very similar to that of our own imagination and the plastic imagination of nature. Therefore, when we observe the Venis sequence and its wide range of variations, we know that we have found an intermediate, but genuine, ground between mathematical physics and biology. In both, form follows function; but in the reverse engineering of nature that human science is, function should follow form to the very end.

In Venis’ account there is a dynamic equilibrium between the dimensions in which the vortex evolves. This widens the scope of the equilibrium concept but makes it more problematic to assess. Fractional calculus would have to be key to following this evolution through the intermediate domains between dimensions, but this also raises interesting points for experimental measurements.

How dimensions higher than three can be interpreted is always an open question. If instead of thinking of matter as moving in a passive space, we think of matter as those portions to which space has no access, matter itself would start from the point or zero dimension. Then the six dimensions of the evolution of vortices would form a cycle from the emission of light by matter to the retraction of space and light into matter again —and the three additional dimensions would only be the same process in the opposite direction, seen from an inverse optic, which circumvents repetition.

This is just one way of looking at some aspects of the sequence among many possible ways, and the subject deserves a much more detailed study than we can devote to it here. One thing is to look for some sort of symmetry, but there must be many more types of vortices than we now know, not to speak of the different scales of occurrence and the multiple metamorphoses. Only in Venis’ work can one find the due introduction to these questions. Venis assumes the number of dimensions to be infinite, so we could not find and count them all. An indication of this would be the minimum number of meridians necessary to create a vortex, which increases exponentially with the number of dimensions and which the author associates with the Fibonacci series.

We can speak of polarity as long as we can appreciate a capacity for self-regulation —that is to say, not when we merely count on apparently antagonistic forces, but when we cannot help noticing a principle above them. This capacity was present from Kepler’s problem onwards, and it is telling that science has failed to recognize it. Weber’s force and potential are explicitly polar; Newton’s force is not, but the two-body problem exhibits a polar dynamics in any case. To call the evolution of celestial bodies «mechanics» is just a rationalization; in fact we do not have a mechanical explanation of anything when we speak of fundamental forces, and probably we cannot have one. Only when we notice a self-regulating principle could we use the term dynamics, honoring the original intention still present in that name.
