INFORMATION THEORY AND THE ZETA FUNCTION

It has been said that if the Riemann hypothesis were solved, all the keys of cryptography and cybersecurity could be broken; no one has specified how a proof would lead to faster factorization methods, but the claim at least reminds us of the close relationship between a hitherto intractable problem, cryptography, and information theory.

Such speculations rest only on the fact that the Riemann zeta function establishes a connection between prime numbers —the basis of classical cryptography— and the zeros of an infinitely differentiable function that provides the most powerful method for exploring this field. But we have already seen that the zeta function is much more than this.

Today the technological revolution is synonymous with the digital revolution and with a category, information, which seems to pervade everything. All previous developments in science and technology converge in it and pass through it. Even physicists now think of the universe as a gigantic computer; there was much discussion about whether the world was made of atoms or of stories, but in the end it was decided that it was made of bits, and case closed.

What is important about information theory is not so much its definitions as the direction it imposes on everything; changing that direction is equivalent to changing the direction of technology as a whole. The unequivocal direction, which it obviously inherits from statistical mechanics, is that of decomposing everything into minimal elements that can then be recomposed at will.

For statistical mechanics there is no direction in time: if we do not see a shattered vase recompose itself and return to the table, it is only because we do not live long enough; if the pieces of a dismembered corpse do not get back together and walk again as if nothing had happened, it is only because we are not in a position to wait 10^1,000,000,000 years or something similar.

Information theory is not concerned at all with the reality of the physical world, but with the probability of its constituent elements, or rather with probability within its accounting of those elements. For it, the physical world is just a pool of resources for the sphere of computation, which aims to be independent of it.

Is this statistical view a neutral position or is it simply a pretext to manipulate everything unencumbered? There is no single answer to this. Statistical mechanics and information theory are successfully applied in countless cases, and in countless cases they could not be more irrelevant. What is worrying is the overall trend they create.

It is my conviction that a shattered corpse is not going to recompose itself, no matter how fabulous the period of time we choose. And this is not rhetoric; rather, large numbers are the rhetoric of probability; an inflated and poor rhetoric, as it ignores the interdependence of things.

This interdependence, this infinite network of interrelations, is what makes things what they are. Statistical mechanics, and its daughter information theory, are the most general framework to deal with random independent elements; the Riemann zeta function, the most direct and elegant way of encompassing an infinite series of elements, the numbers themselves, which are both independent and dependent, apparently random and simultaneously containing an infinite net of relationships.

Then it is only a matter of time before information theory and the zeta function meet. Today numbers no longer seem to exist to understand the world, but to crush and squeeze it, and with it all of us. Information theory has become the funnel, the fearsome event horizon for everything; but in turn the zeta function could become the event horizon for the Information Age itself as the very concept of information, so highly generic, is refined and given content.

However, somewhat surprisingly, there is virtually no literature on an explicit relationship between the zeta function and information theory. In our search we only found a brief note due to K. K. Nambiar, in which he showed a connection between channel capacity and the function, with a «Shannon series» and a «Shannon zeta function». The idea was worthy of much further elaboration, discussion and development. Besides, an equivalence between the zeta function and the classical sampling theorem, fundamental in signal processing, seems to have been proved [69].

The same author issued an even shorter note establishing an electrical equivalent of the Riemann hypothesis in terms of the power dissipated in an electrical network; of course more elaborate equivalents in terms of electrical potential have also been established, but here we are more focused on thermodynamics. Infinitely many waveforms can be created with radiation patterns containing the zeros; the point is how to modulate them. Already in 1947 Van der Pol had built an analog electromechanical device for computing zeros of the zeta function [70].

If not directly with information theory, at least there are many more works connecting the function to closely related aspects: entropy, statistics and probability [71]. Here we will focus on the concept of entropy.

Entropy and information have become almost synonymous, and if in thermodynamics entropy was seen as the loss of usable energy, it is now seen as information loss. Shannon’s information was initially called surprisal or self-information —a message should contain something new.
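
For reference, and in Shannon's own terms, the surprisal of an outcome and the entropy of a source are simply:

```latex
% Self-information (surprisal) of an outcome x, and Shannon entropy of a source X
I(x) = -\log_2 p(x), \qquad H(X) = -\sum_{x} p(x)\,\log_2 p(x)
```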

The confusion of entropy with disorder is due to Boltzmann’s rationalization in attempting to derive macroscopic irreversibility from mechanical reversibility, quite a tour de force by itself. It was Boltzmann’s use of the concept of «order» that introduced a subjective element. Clausius’ original energetic conception of entropy, stating that the entropy of the world tends to a maximum, was, if nothing else, far more natural.

Order is a subjective concept. But to say «world» is also to say «order», so the idea of world is inevitably subjective as well. However subjective they may be, «order» and «world» are not just concepts, they are vital aspects of every organized entity or «open system». Yet even our very intuitions of order and entropy are entangled and confused.

It is not that entropy increases disorder. On the contrary, the tendency towards maximum entropy is conducive to order, or better said, order has more potential for dissipation. As Rod Swenson put it: «The world is in the order production business, including the business of producing living things and their perception and action capacities, because order produces entropy faster than disorder» [72].

Information theory is a formal and objective framework, but it cannot escape the reflexive turn in the representation and use of data. Information as an object is one thing; information as an environment with which an open system interacts and helps to shape is another. Even if only in black-box mode, in many cases interpretation is already part of the system’s behavior.

The ambiguities and limitations of information theory have given way to more inclusive frameworks, such as Luciano Floridi’s philosophy of information, which, to put it very simply, considers semantic information as data + questions [73]. Floridi’s approach still remains a clear heir to Cartesian dualism and idealism, and in this decisive division we prefer to speak of two types of information and entropy, external and internal: there is a kind of information-entropy as a formal object, and there is an information-entropy environment including the open systems that interact with it. Boltzmann’s entropy and Shannon’s entropy are of the first type; Clausius’ entropy, and the one that engineers deal with in concrete physical applications, is closer to the second.

The connection between entropy and the zeta function is quite recent, and only in the 21st century has it begun to take shape. At a very basic level, one can study the entropy of the sequence of zeros: high entropy would indicate little structure, and vice versa. It is observed that the structure is high and the entropy low, and even shallow neural networks have already had some success in predicting the zeros [74]. Also, of course, we can study the entropy of the primes within the ordered sequence of numbers and their correlation with the zeros; and from there on we already have an unlimited spectrum of correlations.
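
As a minimal illustration of this kind of exercise —not the procedure of [74], just a sketch assuming the mpmath library— one can take the first nontrivial zeros, normalize the gaps between them by the local mean density, and estimate a Shannon entropy from their histogram:

```python
# Sketch: Shannon entropy of the normalized gaps between the first zeros of zeta.
# Assumes the mpmath library; illustration only, not the method of [74].
import math
from mpmath import zetazero

N = 100                                       # number of nontrivial zeros to use
heights = [float(zetazero(n).imag) for n in range(1, N + 1)]
gaps = [b - a for a, b in zip(heights, heights[1:])]

# Normalize each gap by the local mean spacing, which is ~ 2*pi / log(t / 2*pi)
norm = [g * math.log(t / (2 * math.pi)) / (2 * math.pi)
        for g, t in zip(gaps, heights)]

# Histogram-based entropy estimate (in bits)
bins = 20
lo, hi = min(norm), max(norm)
counts = [0] * bins
for x in norm:
    i = min(int((x - lo) / (hi - lo) * bins), bins - 1)
    counts[i] += 1
probs = [c / len(norm) for c in counts if c > 0]
entropy = -sum(p * math.log2(p) for p in probs)
print(f"estimated gap entropy: {entropy:.3f} bits (max {math.log2(bins):.3f})")
```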

There are other, non-extensive types of entropy that do not scale simply with the amount of material, such as Tsallis entropy, which can be applied to the zeta; these types of entropy are generally associated with power laws rather than exponential laws. But as in thermodynamics in general, the definition of entropy can vary from author to author and from application to application.
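
For reference, the Tsallis entropy of index q and its classical limit can be written as:

```latex
% Tsallis entropy of index q for a distribution {p_i}; the q -> 1 limit
% recovers the ordinary Boltzmann-Gibbs/Shannon form (units with k_B = 1).
S_q = \frac{1 - \sum_i p_i^{\,q}}{q - 1}, \qquad
\lim_{q \to 1} S_q = -\sum_i p_i \ln p_i
```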

In order to unify and generalize so many definitions of entropy, Piergiulio Tempesta has proposed a group entropy [75]. Whether in physical contexts or in complex systems, in economics, biology or the social sciences, the relations between different subsystems depend crucially on how we define the correlation. This is also the central issue in data mining, networks and artificial intelligence.

Each correlation law may have its own entropy and statistics, rather than the other way around. Thus, it is not necessary to postulate entropy; its functional emerges from the kind of interactions that one wants to consider. Each suitable universality class of group entropy is associated with a type of zeta function. The Riemann zeta function would be associated with Tsallis entropy, which contains classical entropy as a particular case. But these correlation laws are based on independent elements, while irreversibility at the fundamental level seems to reject that assumption. Fundamental irreversibility assumes universal interdependence.

We mentioned earlier Voronin’s theorem on the universality of the Riemann zeta function, which shows that any kind of information of any size can be approximated with arbitrary precision within this function —and not just once, but an infinite number of times. In this sense, the entropy of its mapping is infinite. It seems that there is a «zeta code» that could be the object of the algorithmic theory of complexity. But in turn the Riemann zeta function, by itself an infinite encyclopedia of correlations and correlation laws, is only the main case of an infinite family of related functions.
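
Voronin's theorem can be stated as follows: for any compact set K with connected complement inside the strip 1/2 < Re(s) < 1, and any continuous non-vanishing function f on K that is analytic in its interior,

```latex
% Voronin universality: vertical shifts of zeta approximate f on K,
% and the set of admissible shifts has positive lower density.
\forall \varepsilon > 0:\quad
\liminf_{T \to \infty} \frac{1}{T}\,
\mathrm{meas}\Bigl\{\, \tau \in [0, T] :
\max_{s \in K} \bigl|\zeta(s + i\tau) - f(s)\bigr| < \varepsilon \,\Bigr\} > 0
```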

One could also find any kind of information in white noise; but white noise lacks any structure, while the zeta function, being well-defined, has an infinite structure. Had Hegel known both cases he would have spoken of false and true infinity; we will see that these Hegelian apperceptions are not totally out of place here.

The seriousness of a problem such as the zeta demands that the problem of irreversibility be considered in depth. And considering it in depth means precisely including it at the most fundamental level. Conservative systems are still toy models for this.

The MIT or Keenan school of thermodynamics, especially with Hatsopoulos, Gyftopoulos and Gian Paolo Beretta, has developed a quantum thermodynamics in stark contrast with the rationalization of entropy by statistical mechanics. The dynamics is irreversible at the most fundamental level. The number of states is incomparably greater than in standard quantum mechanics, and only a few of them are selected. The selection principle is very similar to that of maximum entropy, although somewhat less restrictive: it is the attraction in the direction of steepest entropy ascent. No great changes are needed in the usual formalisms; what is transfigured is the sense of the whole [76].
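
Schematically, and without entering into Beretta's precise construction of the dissipative term, the steepest-entropy-ascent dynamics has the general form of a reversible Hamiltonian term plus a term that pulls the state in the direction of maximal entropy increase compatible with the conservation constraints:

```latex
% General structure of steepest-entropy-ascent dynamics (schematic):
% D(rho) is the dissipative term along the direction of steepest entropy ascent,
% subject to the conserved quantities (energy, particle number, ...),
% with tau a characteristic relaxation time.
\frac{d\rho}{dt} = -\frac{i}{\hbar}\,[H, \rho] \;+\; \frac{1}{\tau}\, D(\rho)
```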

The equilibrium formalisms are also retained, but their meaning changes completely. The uniqueness of the stable equilibrium states amounts to one of the deepest conceptual transformations of science in the last decades. The approach is contrary to the idealism of mechanics and much more in line with the daily practice of engineers, for whom entropy is a physical property as real as energy. The theory has many more advantages than sacrifices, and it can be applied to the entire domain of non-equilibrium and to all temporal and spatial scales.

But of course irreversibility can be introduced directly into dynamics, as we saw in M. J. Pinheiro’s reformulation of classical mechanics, which replaces the Lagrangian principle of action with a principle of balance between energy and entropy. This allows the study of the entropy generated along classical trajectories under different formalisms and criteria, establishing another general connection between different domains.

In general, for physicists it does not make sense to change equations that «already work perfectly well». And, in fact, what fundamental physics has always done is to leave aside issues such as friction and dissipation to better stick with the «ideal cases». The principle of inertia is perfectly suited to the task. In this way thermodynamics was carefully segregated as a by-product of ideal physics, which is a curious arrangement indeed.

But we should try to see it the other way around: seeing how the appearance of reversible behavior emerges from a background of irreversibility is the closest thing to finding the magic ring at the heart of nature. Physics today is not about nature, but about certain laws that affect it. Reversibility and irreversibility are like fire and water; only if they intermingle in the right way can one hope to get the real thing.

The ironic twist is that mechanics, with its idealist disposition, has segregated thermodynamics; but in the end it will be information theory, the most idealist offspring of mechanics, that will need to recover everything that has been eliminated if it is to regain its meaning. And the zeta function should play an essential role in that moulting and transfiguration of information theory. Information theory should be interested in irreversible mechanics… because it provides more information. And information of critical importance [77].

*

Idealization and rationalization are like the mythical Symplegades rocks that destroyed ships and navigators; only the one who understands their dangers and avoids them will be able to pass to the other side.

We have seen how this alternate process is inherent in the historical evolution of disciplines such as calculus, classical and quantum mechanics, or statistical mechanics. Hardly any experimental evidence can stop the overall trend of rationalization, since rationalization always finds ways to assimilate results and justify inconsistencies.

In philosophy, Hegel gives us the best example of the large-scale sway between idealization and rationalization. In an interesting article, Ian Wright attempts to explain the nature of the Riemann zeta function with Hegel’s Science of Logic, as the mediation between being and non-being through becoming within the realm of numbers [78]. Old-fashioned as this may sound to many, three things can be said:

First, that from the point of view of pure arithmetic this is admissible, since if there is a part of mathematics that can be considered a priori, it is arithmetic; while, on the other hand, the very definition of this function affects the totality of the integers and, through its extension, the real and complex numbers.

Second, that calculus is not pure mathematics like arithmetic, although many specialists seem to think so, but mathematics applied to change or becoming. Were it not, there would be no need to compute zeros. The zeta function is a relationship between arithmetic and calculus as close as it is uncertain.

Third, for experimental sciences such as physics the opposition between being and not being does not seem to have systemic implications, and for formal sciences such as information theory this is reduced at most to fluctuations between ones and zeros; however the distinction between reversible and irreversible, open and closed systems, which is at the very heart of the idea of becoming, is decisive and lies at the crossroads of the issue.

In problems such as the one posed by the zeta function in relation to fundamental physics, crucial experiments capable of unraveling the relationship between the reversible and the irreversible in systems could soon be conducted —provided that they are expressly sought. For instance, a Japanese team very recently showed that two distinct probes of quantum chaos, noncommutativity and temporal irreversibility, are equivalent for initially localized states. As is well known, this function has been studied a great deal from the point of view of non-commutative operators, in which the product is not always independent of order, so this would bring together two areas as vast as they have been distant until now [79].

With the rise of quantum computing and quantum thermodynamics, along with all the associated disciplines, there will be no shortage of opportunities to design key experiments. On the other hand, it is becoming increasingly clear that second quantization, which deals with many-body systems, demands a special spectral approach, as Alain Connes and many other researchers have been emphasizing for many years now [80]. In addition to its theoretical interest, the practical scope of such a spectral theory could be extraordinary —think only of chemistry; but it is our opinion that the segregation of thermodynamic irreversibility prevents us from getting to the bottom of the matter.

It is not that hard either. It is said that electromagnetism is a reversible process, but we have never seen the same light rays returning intact to the bulb. The irreversible is primary; the reversible, important as it is, is just a settling of scores. Even Maxwell’s equations belong to two different thermodynamic categories.

But physicists have always been fascinated by reversible artifacts, independent of time, which some have seen as the last legacy of metaphysics. And what else? There are no closed, reversible systems in the universe; they are just fictions. Metaphysics is an art of fiction, and physics has continued metaphysics by other means with the perfect excuse that it now descends into reality. And no doubt it has, but at what cost?

Previously we gave the example of the outfielder who catches the fly ball as a contrast for both the standard calculus and the now prevalent idea of artificial intelligence. This direct form of calculus, which we all perform without knowing it, could be the best illustration of Mathis’ constant differential method. Mathis is the first to admit that he has not been able to apply the principle even in many cases of real analysis, let alone complex analysis, which he does not even deal with. Needless to say, the mathematical community cannot stop to consider such limited methods.

And yet the constant differential is the naked truth of calculus, without idealization or rationalization, and we should not dismiss something so precious even if we do not succeed in seeing how it can be applied. If a constant differential is not found in the tables of values of the function, we can still estimate the dispersion; and whoever says dispersion may also say dissipation or entropy.
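
A minimal sketch of what is meant —with no claim to reproduce Mathis' own procedure— would be to take a table of values, compute repeated finite differences, and check whether some order of difference becomes constant; when it does not, the residual spread of the last differences plays the role of the dispersion just mentioned:

```python
# Sketch: look for a constant differential in a table of values by repeated
# finite differences; if none appears, measure the residual spread instead.
# Illustration only, not Mathis' own procedure.
import math

def differences(values):
    return [b - a for a, b in zip(values, values[1:])]

def constant_differential(values, tol=1e-9, max_order=8):
    """Return (order, value) if some order of differences is constant,
    otherwise (None, spread of the last differences computed)."""
    level = values
    for order in range(1, max_order + 1):
        if len(level) < 3:
            break
        level = differences(level)
        if max(level) - min(level) < tol:
            return order, level[0]
    return None, max(level) - min(level)

# A cubic table has a constant third difference (6, for unit steps of n**3)
print(constant_differential([n ** 3 for n in range(10)]))      # -> (3, 6)

# A logarithmic table never flattens out; the residual spread plays the role
# of the dispersion mentioned above, readable as dissipation or entropy.
print(constant_differential([math.log(n) for n in range(1, 20)]))
```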

In this way it is possible to obtain an intrinsic entropy of the function, from the process of calculus itself, rather than from correlations between different aspects or parts. This would be, at the most strict functional level, the mother of all entropies. It should be possible to apply this criterion to complex analysis, to which Riemann made such fundamental contributions, and from which the theory of the zeta function emerged.

The search for the constant differential may even remind us of the mean-value methods of elementary calculus or of analytic number theory. However, the core of this method is not averages; it is, on the contrary, standard calculus that relies on averages —which, without this reference, lack a common measure. Finite-difference methods have also been applied to the study of the zeta function by leading specialists. But to make things even more interesting, we should remember that the two methods do not always yield the same values.

This would simultaneously fulfill several major objectives. It would be possible to connect the simplest and most irreducible element of calculus with the most complex aspects. Something less appreciated, but no less important, is that it puts the idea of function in direct contact with that which is not functional —with that which does not change. Analysis is the study of rates of change, but change with respect to what? It should be with respect to what does not change. This turn is imperceptible but transcendental.

It should be remembered that Archimedes did not invent calculus, but he did invent the problem of calculus, by looking toward zero. Mathis is rectifying an approach that is now over 2,200 years old.

The direct method laughs in the face of the «computational paradigm» and its fervent operationalism. The now prevailing idea of intelligence as «predictive power» is reactive; in fact, it is no longer known whether it is the one we want to export to machines or the one humans want to import from them. Perceiving what does not change, even if it does not change anything, gives depth to our field. To perceive only what changes is not intelligence but confusion.

On the other hand, the definition of equilibrium conditions —as a sum and as a product, as an external and an internal perspective, as applicable to closed and open systems, as intensive and extensive quantities with their possible breakdowns, as reversibility and irreversibility— lies at the core of the algebraic aspects complementary to the analytical ones, and should help to make the intrinsic more explicit and so reach a deeper understanding of the zeta function.

If our view is somewhat correct, the cleaning and clarification of the enormous field of thermodynamics and entropy, with all the work that lies ahead, would be closely associated with the progress in understanding the zeta function. This does not seem a promising prospect, since it amounts to tons of dirty work far away from the exertions in the wondrous gardens of the theory of numbers. However, there is a hope that the connection between the two domains can be greatly smoothed out by analytical notions such as intrinsic entropy and the introduction of irreversibility in the foundation of analytical mechanics.

*

We have seen that the gauge fields of the standard theory, based on the invariance of the Lagrangian, clearly exhibit a feedback. They do not even hide it, and theorists have been struggling with terms like self-energy and self-interaction for generations, shooing them away like flies from their faces because they thought it was a «silly» idea. Now, the renormalization group at the base of the standard model and of statistical physics, which contains this self-interaction, is more or less the same one currently used in the neural networks of deep learning and artificial intelligence —so something the electromagnetic field already has built in is being used externally. Let us call it a missed natural intelligence.

Nobody made the integers, and all else is the work of man. The zeta function is an ideal candidate for creating a collective network of emerging artificial intelligence around it. Why? Because in it, as with addition and multiplication, the external and internal aspects of the act of counting —what some would call the object and the subject— coincide. In this type of network, what will be key is the extent to which the potential of the physical substrate is tapped, the connection between analog and digital components in a hybrid approach, the local and global equilibrium conditions, the algorithms to simulate the structure of the integers, the structures that try to perceive or filter them, the correlation spectrum, and a good number of other aspects we cannot list here.

Thus it may be possible to create a collective emerging artificial intelligence alien to human intelligence as defined by purpose, and yet with the possibility of communicating with humans at different levels through a common act, counting —an act that, since purpose limits our scope, also tends to exist for humans only at a very superficial level. Intellection is a simple act, and intelligence is a connection or contact of the complex with the simple, rather than the opposite.

As a prime example of an analytical totality indigestible for modern methods, the zeta function is already a great sign. Someone, in a turn typical of the Information Age, might say that, if any possible information is already in that function, we might as well look for the answers there instead of in the universe. But the opposite is true: to better understand the function we need to change our ideas of how the universe «works» and the range of application of our most general methods. That is the beauty of the subject.

Until now, the mathematical physics from which the entire scientific revolution emerged has been applying mathematical structures to physical problems in order to obtain a partial solution by means of an artful reverse engineering. Calculus itself emerged in this process. On the contrary, what we have here are physical processes that spontaneously reflect a mathematical reality for which there is no known solution. This already invites us to change our logic from top to bottom.

As for the information itself, if with Shannon the sequential aspect of the flow of information units prevails, as we move towards more massive amounts of data the relevance of correlations increases unstoppably, and they even tend to constitute an autonomous sphere.

The data we use directly is one thing, and the metadata that can be elaborated from that data and its multiple correlations with other data is quite another. The same can be said of a genetic sequence and the multifactorial analysis of the relationships between different genes, and so on. A Riemann sphere, or a Riemann surface with many layers like an onion, seems a much more suitable representation for the multidimensional reality of analysis than the increasingly insignificant relationship of causal sequences.

The zeta function, however, contains the most basic relationship between the simplest sequence, that of the integers, and the infinite world of correlations between its elements. So we should not forget the most primary aspect of the question; quite the contrary. We should not forget where problems come from, neither in information theory, nor in calculus, nor in mechanics.

In fact nearly everyone thinks that causation is something of the past; but this way we forget that even mechanical causation is a global issue, not a local one, and this may be relevant where we least expect it. If the unexplained dynamics of the zeta is not consistent with the foundations of mechanics, let us change the rules of mechanics, for the understanding of the problem deserves it. It is Nature that sings through random matrices and many other instances, not the spirit of the laws.

Today, it is no longer a question of mastering nature but of mastering technology, since the latter now has more destructive potential for human beings than the former. However, mastering technology demands freeing Nature from constraints imposed by a technoscience that is too instrumentalised from the outset. Considering the history of science as a whole, it is perhaps not so surprising that the avenger is now the most selfless of all sciences, the useless but eternal theory of numbers.

*

In a previous chapter we said that the concept of order is no less subjective than that of harmony; it goes without saying that this could be the subject of endless discussions. There are many formal definitions of order in mathematics for as many different cases, but it could hardly be disputed that the basis of them all is the natural numbers. In fact, without them mathematics would not be able to order anything.

The paradoxical aspects of entropy are related to the relative nature of the notion of order. Something with a highly visible order has more potential for disorder than what is already in complete disorder, which can be translated into the social paradox of entropy: the more complex the society, the more disorder it seems to produce. On the other hand, we also saw earlier that entropy is not necessarily the best measure of complexity, and that the energy rate density can be more revealing. These are far-reaching issues that should be attended to carefully.

From another angle one can say that pure randomness is the ultimate order, something suggested by the very arrangement of the primes within the number system, so chaotic at a local level and with such a striking global structure —or as one mathematician put it, growing like weeds and yet marching like an army.

The zeta function is a transformation of the traditional harmonic series (1+1/2+1/3+1/4…) so that it does not diverge. The harmonic series, attributed to the Pythagoreans and also known in China at roughly the same time, gives us the harmonics or overtones added to the fundamental wavelength of a vibrating string, and it is a path to understand musical intervals, scales, tuning and timbre.
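
In its original form the function is just that harmonic series with the terms raised to a power s, which converges for Re(s) > 1 and connects to the primes through Euler's product:

```latex
% Dirichlet series and Euler product; at s = 1 the series is the divergent
% harmonic series, and the analytic continuation keeps its single pole there.
\zeta(s) = \sum_{n=1}^{\infty} \frac{1}{n^{s}}
         = \prod_{p\ \mathrm{prime}} \frac{1}{1 - p^{-s}},
\qquad \operatorname{Re}(s) > 1
```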

It has been precisely within music theory that Paul Erlich has combined information entropy with the theory of harmony to define a relative harmonic entropy. Harmonic entropy, «the simplest model of consonance… asks the question of how confused my brain is when it hears an interval. It assumes only one parameter in answering this question» [81].

Erlich’s concept of harmonic entropy deepens the line of research opened by the engineer and psychoacoustician Ernst Terhardt, who had already introduced notions such as the virtual tone. Virtual tones, in contrast to spectral tones, are those that the brain extracts even when the signal is masked by other sounds. The ear has a strong propensity to fit what it hears into one or a few harmonic series. Harmonic entropy can be considered a kind of «cognitive dissonance», but it is also related to the intrinsic uncertainty of time series.
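
A toy version of the idea, in the spirit of Erlich's model but deliberately simplified —plain Gaussian weights over a Farey-like set of candidate ratios, without the mediant widths of the full model, and with a hypothetical width parameter— might look like this:

```python
# Sketch of a simplified harmonic entropy: how spread out is the brain's
# "guess" among candidate ratios for a heard interval? Simplification of
# Erlich's model for illustration only.
import math
from fractions import Fraction

def cents(ratio):
    return 1200 * math.log2(ratio)

# Candidate ratios between 1/1 and 2/1 with denominator up to N
N = 30
candidates = sorted({Fraction(p, q) for q in range(1, N + 1)
                                    for p in range(q, 2 * q + 1)})

def harmonic_entropy(interval_cents, s=17.0):
    """Entropy (bits) of the distribution over candidate ratios;
    s is the single width parameter of the model, in cents."""
    weights = [math.exp(-0.5 * ((cents(r) - interval_cents) / s) ** 2)
               for r in candidates]
    total = sum(weights)
    probs = [w / total for w in weights if w > 0]
    return -sum(p * math.log2(p) for p in probs)

# A just fifth (702 c) should come out less "confusing" than a tritone (600 c)
print(round(harmonic_entropy(702.0), 3), round(harmonic_entropy(600.0), 3))
```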

Harmonic entropy has a solid theoretical basis that makes extensive use of the Riemann zeta function. This closes a circle that has gone mostly unnoticed, since Riemann himself made a memorable, though unfinished, study of the mechanism of the ear that already considered the global aspects of auditory perception and was conceptually much more advanced than Helmholtz’s reductionist model, which was his starting point.

Riemann was struck, among many other things, by the incredible sensitivity of the detectors in the inner ear, currently known to be able to perceive displacements smaller than the size of an atom, or 1/10 of a hydrogen molecule. Would it be possible to illustrate the behavior of the zeta function with a precise acoustic analogy? Can we bring it to that area where understanding and perception merge?

Certainly, issues related to the senses are not among the most important for mathematicians; but here it is easy to see that we are dealing with a problem of high interest at different levels, for laymen and experts alike. If deep math reaches our perception, just as deeply it changes our perception of math and numbers, and that has never been more necessary than now. There is a wide range of psychoacoustic experiments with which to survey the matter. Erlich uses the Farey series, one of the simplest ways to illustrate the Riemann hypothesis.
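
The connection alluded to is the Franel–Landau theorem: if f_1 < f_2 < … < f_m are the Farey fractions of order n, the Riemann hypothesis is equivalent to their deviation from perfectly even spacing being small:

```latex
% Franel-Landau (1924): RH is equivalent to this bound on the displacement
% of the Farey fractions of order n from the equally spaced points k/m.
\sum_{k=1}^{m} \Bigl| f_k - \frac{k}{m} \Bigr|
= O\!\left(n^{\tfrac{1}{2} + \varepsilon}\right)
\quad \text{for every } \varepsilon > 0
```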

Well-known physicists have shown interest in the conversion of the zeta function and the prime numbers into sound in order to hear their «music»; however, the experiments we are talking about do not have to refer directly to this function, but first of all to the more generic questions of internal complementarity between spectral and virtual tones. How far the sonification of the arithmetic aspects remains of interest would depend on the extent of this complementarity.

The reflection of the zeta function in perception in terms of harmonic entropy is a quest in its own right, and it should be sought with as much or even more eagerness than the physical systems capable of replicating it, as it is a more unitary endeavor. But in fact, they are not entirely different problems, even though they appear to us as the subjective and objective ends of the question, which is still misleading.

From the more utilitarian point of view of neural networks and artificial intelligence, the view that there is a basic continuity between perception and the so-called «higher» cognitive aspects is increasingly accepted. Riemann’s ideas about hearing as an analytical abstraction are precisely those applied now, 150 years later, to the computer models that attempt to reproduce it.

Distributed networks such as the one mentioned above, for example, would lack something essential without an ability to filter or perceive numbers analogous to that of the ear and the brain when selecting and reinterpreting acoustic signals. Riemann’s unfinished study of the ear, which calls analogy «the poetry of hypothesis», dates from 1866. The Weber-Fechner psychophysical law, differential and logarithmic, dates from 1860; Helmholtz’s great work on auditory physiology, from 1863. In 1865 Clausius made public the first definition of entropy.

No doubt Riemann was after something deep and relevant, as was usual for him —although he would have had to wait at least a hundred years to start putting the pieces together. Today we can do so, although not without first making some simple but fundamental adjustments.

Nor is there any doubt that hearing has physical, but also cognitive and psychophysical limits, and that the latter act on the former in such a way that we are faced with variable thresholds defined by harmonic entropy. Within this interaction we would find the zeta function.

Is there a place for the mathematics of harmony, based on the continuous proportion, within this context of the harmonic series and its adventures in the unlimited space of analysis? Certainly not, if one thinks of explicit arithmetic relations; at least so far mathematicians have not found anything worth mentioning. This would lead us back to the hypothesis that from Euclid’s Elements, unmixed like water and oil, two different traditions flow.

In the world of appearances, water and fire do not come into contact except in the guise of a cloud or mist of vapor. Since the first chapter we have been wondering what the relationship of the continuous proportion to analysis might be, and we do not yet have a definitive answer on the subject; but one of the threads would certainly be the connections of this proportion with entropy, elementary calculus and the algorithmic theory of measurement.

Needless to say, the difference between algebraic and transcendental numbers, so important in analysis and arithmetic, is irrelevant to acoustics and to most practical problems.

Thus, the harmonic series and analysis, the world of waves that reflects forms, would be on the «water shore» of reversibility; the continuous proportion, with its capacity to combine the continuous and the discrete, on the «fire shore» of entropy as irreversibility.

Of course, analysis and synthesis presuppose each other, but there are uncountable levels for their interaction. The zeta function, an analytical totality with a single pole, is in itself a prodigious synthesis, which also lends itself to all kinds of transformations in alternating, symmetrical functions, and so on. Now, the main synthetic component in modern science is hidden in the very idea of the elements, as fundamental building blocks —in physics atoms and particles. On account of them, it was thought unnecessary to look for other constructive and synthetic resources.

This is a burden from the past of physical atomism that information theory has yet to overcome, since the idea of independent elements will always be less restrictive than the net of observable, empirical correlations.

We will leave this topic suspended in the mist of its own cloud, trying to recall what is beyond complementarity, analysis and synthesis, object and subject. The example of the informal calculus of the outfielder after the fly ball tells us that both the ball and the runner move mutually and correlatively with respect to what does not move —the constant mean.

This is the true invisible axis of activity, since the thinker is not less a thought than any other. What else can be said? But winged thought always flees from the zone of stillness, that is precisely what its life consists of. Surely this is too simple for us, and that is why we have discovered and invented the harmonic series, the continuous proportion, the entropy or the zeta function.

*

Bernhard Riemann’s eight-page paper On the number of primes less than a given magnitude was published in November 1859, the same month as The Origin of Species. That very same year saw the beginning of statistical mechanics with a pioneering work by Maxwell, and the starting point of quantum mechanics with spectroscopy and Kirchhoff’s definition of the black body.

I have always felt that in Riemann’s conscientious study, so free of further purpose, there is more potential than in all of quantum mechanics, statistical mechanics and its offspring information theory, and the theory of evolution combined; or that at least, in a secret balance sheet, it constitutes a fair counterweight to all of them. Although if it helps to redirect the doomed trend of infonegation of reality it will have already been enough.

These three or four developments mentioned are after all children of their time, circumstantial and more or less opportunistic theories. It is true that Boltzmann fought and despaired for the assumptions of the atomic theory to be admitted, but he had his whole battle with physicists, because chemists had already been working with molecules and atoms for a while. In mathematics, however, the lags operate on another scale.

A wordy bestseller, The Origin of Species was discussed in cafés and taverns from the very day of its presentation; but as for the Riemann hypothesis, which is just the strongest version of the prime number theorem, nobody knows how to go about it after 160 years. Clearly we are talking about different wavelengths.

And what does one text have to do with the other? Nothing at all, but they still present an elusive point of contact and extreme contrast. The usual reading of the theory of evolution says that what drives the appreciable order is chance. Riemann’s hypothesis says that the prime numbers are distributed as randomly as possible, but that this randomness hides within itself a structure of infinite richness.

Mathematicians, even living on their own planet, are gradually realizing that the consequences of proving or disproving Riemann’s hypothesis could be enormous, inconceivable. It is not something that happens every Sunday, or every millennium. But we do not need to ask so much: it would be enough to properly understand the problem for the consequences to be enormous, inconceivable.

And the inconceivable becomes more conceivable precisely as we enter the Information Age. The inconceivable could be something that simultaneously affects our idea of information, its calculation and its physical support. Both software and hardware: a full blown short-circuit.

Probably information is not destiny, but the zeta function may be the destiny of the Information Age, its final event horizon. This would take us away from the spectre of the «technological singularity» and bring us closer to a very different landscape. The zeta function has a single pole that is dual to the zeros —the zeros reflect information that the unit cannot give, because there are no finite results for s = 1. This suggests an enigmatic envelope of reflectivity for the information galaxy.
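
The duality between the pole and the zeros is visible in von Mangoldt's explicit formula of prime number theory, where the pole at s = 1 contributes the leading term and each zero an oscillation:

```latex
% Explicit formula for the Chebyshev function psi(x), x > 1 and not a prime
% power: the term x comes from the pole of zeta at s = 1, and the sum runs
% over the nontrivial zeros rho.
\psi(x) = x \;-\; \sum_{\rho} \frac{x^{\rho}}{\rho}
        \;-\; \ln 2\pi \;-\; \tfrac{1}{2}\ln\!\bigl(1 - x^{-2}\bigr)
```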

In all probability, the substance of the problem is not a technical issue. But many of its consequences are, and not only technical. This development has largely gone against the grain of other scientific developments, and its assimilation would profoundly affect the dynamics of the system as a whole.

Like the prime numbers themselves, in the chronology and the sequential order the events could not seem more fortuitous, but from another point of view they seem destined by the whole, pushed by the ebb from all shores to fit the very moment in which they took place. Riemann lived in Germany in the middle of Hegel’s century, but in its second half the opposition to idealism could not have been greater in all areas, and in the sciences in particular. The pendulum had swung with all its force, and the sixth decade of the century marked the peak of materialism.

Also a son of his time, Riemann could not help but feel acutely the contradictions of 19th-century liberal materialism; but the German mathematician, a theoretical and experimental physicist close to Weber and a profound natural philosopher, was an heir of Leibniz and Euler continuing their legacy by other means. His basic conviction was that we humans do not have access to the infinitely large, but at least we can approach it through the study of its counterpart, the infinitely small.

These were prodigiously fruitful years for physics and mathematics. Riemann passed away at thirty-nine and did not have time to articulate a synthesis in tune with his conceptions and his permanent search for unity; a synthesis understood not as an arbitrary construction, but as the unveiling of the indivisibility of truth. But he achieved something of the kind where it was least expected: in the analytic theory of numbers. A provisional synthesis, though, that has given rise to the most famous conditional in mathematics: «If the Riemann hypothesis is true, then…»

This unexpected and conditional synthesis came about through complex analysis, just in the same years that complex numbers were beginning to emerge in physics, seeking, like atoms and molecules, more degrees of freedom, more room to move. The role of complex numbers in physics is a topic always postponed since it is assumed that their only reason for being is convenience —however, when it comes to dealing with the zeta function and its relationship with physics, there is no mathematician who is not forced to interpret this question one way or another, which physicists generally associate with rotations and amplitudes.

But complex analysis is just the extension of real analysis, and to get to the heart of the matter we must look further back. Riemann’s conditional synthesis speaks to us of something indivisible, but it still relies on the logic of the infinitely divisible; not to solve the famous problem, but simply to be in tune with it, it would have to be understood in terms of the indivisible itself, whose touchstone is the constant differential.

Is there anything beyond information and the computer? Of course there is, the same thing that is waiting for us beyond the Symplegades. A much more vast and undivided reality.

(Added June 10, 2022)

Scot C. Nelson [82] discovered in late 2001 that the logarithmic spirals of plant growth —sunflowers, daisies, pine cones, etc.— «serve as a simple and naturally efficient prime number sieve». Like everything related to the continuous proportion and its associated number series, this has received barely any attention and seems relegated in advance to the ever-growing section of anecdotal coincidences. And yet this is the first basic connection found between prime numbers and these ubiquitous patterns of phyllotaxis, which should have told us something. In light of Nelson’s finding, there appears to be a «central symmetry of prime numbers within three-dimensional objects», and vegetal growth would carry a natural prime-number-generating algorithm in its becoming.

This same passage from the number line to the unfolding of these patterns on surfaces and in three dimensions should be a thread for the geometric intuition of the fundamental theme of arithmetic. It is not the same to try to link arithmetic with modern abstract «geometry» as to link it with a natural geometry. A mechanical analogy comes to mind: the parts repel each other like magnetic dipoles, minimizing the energy between them, and as the plant grows the time delay between the formation of new primordia is reduced. One can think of it in ergontropic and information-entropy terms as well.
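
Purely as an illustration of the pattern being referred to —and not of Nelson's sieve procedure itself— one can lay the integers on the familiar golden-angle spiral of phyllotaxis and mark the primes:

```python
# Sketch: integers placed on a sunflower-type (golden angle) spiral, primes marked.
# This only illustrates the phyllotactic layout, not Nelson's sieve itself.
import math

GOLDEN_ANGLE = 2 * math.pi * (1 - 1 / ((1 + math.sqrt(5)) / 2))  # ~137.5 degrees

def is_prime(n):
    if n < 2:
        return False
    return all(n % d for d in range(2, math.isqrt(n) + 1))

def spiral_points(count):
    """Yield (n, x, y, prime?) for n = 1..count in phyllotactic arrangement."""
    for n in range(1, count + 1):
        r, theta = math.sqrt(n), n * GOLDEN_ANGLE
        yield n, r * math.cos(theta), r * math.sin(theta), is_prime(n)

for n, x, y, p in spiral_points(20):
    print(f"{n:3d}  ({x:6.2f}, {y:6.2f})  {'prime' if p else ''}")
```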

References

[69] K. K. Nambiar, Information-theoretic equivalent of Riemann Hypothesis (2003)

J. R. Higgins, The Riemann Zeta Function and the Sampling Theorem (2009)

Er’el Granot, Derivation of Euler’s Formula and ζ(2k) Using the Nyquist-Shannon Sampling Theorem (2019)

[70] K. K. Nambiar, Electrical equivalent of Riemann Hypothesis (2003)

Guðlaugur Kristinn Óttarsson, A ladder thermoelectric parallelepiped generator (2002)

Danilo Merlini, The Riemann Magneton of the Primes (2004)

M. V. Berry, Riemann zeros in radiation patterns: II. Fourier transforms of zeta (2015)

B. Van der Pol, An electro-mechanical investigation of the Riemann zeta function in the critical strip (1947)

[71] Matthew Watkins, Number Theory and Entropy; Number Theory and Physics Archive

[72] Rod Swenson, M. T. Turvey, Thermodynamic Reasons for Perception-Action Cycles

[73] Luciano Floridi, What is the Philosophy of information?

[74] O. Shanker, Entropy of Riemann zeta zero sequence (2013)
Alec Misra, Entropy and Prime Number Distribution; (a Non-heuristic Approach) (2006)

[75] Piergiulio Tempesta, Group entropies, correlation laws and zeta functions (2011)

[76] Gian Paolo Beretta, What is Quantum Thermodynamics (2007). One can also visit the site http://www.quantumthermodynamics.org/

[77] Miguel Iradier, The little finger strategy

[78] Ian Wright, Notes on a Hegelian interpretation of Riemann’s Zeta function (2019)

[79] Ryusuke Hamazaki, Kazuya Fujimoto, and Masahito Ueda, Operator Noncommutativity and Irreversibility in Quantum Chaos (2018)

[80] Ali H. Chamseddine, Alain Connes and Walter D. van Suijlekom, Entropy and the spectral action (2018)

[81] Harmonic Entropy, Xenharmonic Wiki

The Riemann Zeta Function and Tuning, Xenharmonic Wiki

Paul Erlich, On Harmonic Entropy, with commentary by Joe Monzo

[82] Scot C. Nelson, A Fibonacci Phyllotaxis Prime Number Sieve (2004)
