Archive for the ‘Mathematics’ Category

MATHEMATICAL PROOF OF STABILITY PROVIDED BY EVIL (Explains Why Plutocracy Is Stable, Ubiquitous)

February 20, 2026

Abstract: WHY IS THERE EVIL? We show MATHEMATICALLY why there is evil: in a domineering species such as humanity, SELF-PREDATION, in other words, EVIL, PROVIDES A STABLE ENVIRONMENT. The only hypotheses we make that go beyond standard mathematical ecology are to replace the “prey” with the environment itself, and to introduce a self-predation term, e. The latter ought not to be controversial, as abuse of humans by humans is well-documented (any and all abuse works: unequal society, plutocracy, exploitation of workers, slavery, torture, killing, etc.) Better: we accommodate altruism by changing the sign of e. So to deny the mathematization of human ethology and ecology we make below, one would have to deny both altruism and malevolence!

We modify the classical predator-prey model of two entangled Nonlinear Ordinary Differential Equations into an environment-self-predation model more appropriate to the superpredator status of humans. Humans need the environment, but also self-predate. Self-predation is introduced minimally with a quadratic term whose coefficient e corresponds to EVIL.

A striking conclusion is that such a system is stable only if e > 0… In other words, one needs e, evil, to stabilize the environment. Making e negative, namely replacing evil by altruism, makes the system unsustainable. So not only does humanity suffer from original sin, but humanity could not do without it.

These theorems are mathematically rigorous. Strikingly, some evil is necessary to make civilization stable.

Another, more mathematical, essay will dig deeper into the stability of the system (which can readily be modified to accommodate various genocidal scenarios).

***

Predator-Prey systems have long been mathematically modelled. We present our own, the 

SUPER PREDATOR ENVIRONMENT SYSTEM, SPES.

It has two variables: x, denoting the edible mass of the environment (prey and plants, in calories), and y, denoting the number of humans. Both x and y are functions of time. dx/dt and dy/dt, the time derivatives of x and y, tell how x and y will evolve.

In the absence of human predation, dx/dt grows proportionally to x. That means x would be an exponential in the time t. The presence of humans forces dx/dt downward (x decreases) proportionally to the human population, y… and the decrease is faster the bigger x is, so the decrease is proportional to xy. This gives the first differential equation.

For the second ODE, notice that dy/dt grows proportionally to y but also to x, hence to the product xy. It decreases proportionally to the population; that is, the death rate is proportional to the population.

So far so good, nothing new here, it’s called the Lotka-Volterra system.

***

SELF-PREDATORY DARK SIDE:

Now, thanks to our deviant mind, we introduce and add the EVIL TERM, eyy. It characterizes the superpredator, who is subjected to predation only from the superpredating species [1]. The Evil Term is proportional to yy (why? Say y is large; suppose each individual composing y decides to kill n individuals in the population; that would be a decrease of ny per unit of time dt… As n grows toward y, the decrease scales as yy. QED.)

***

FINAL SYSTEM:

dx/dt = rx – axy 

dy/dt = bxy – dy – eyy

***

The quadratic predator mortality, eyy, can represent: cannibalism, territorial wars, disease spread at high density, resource interference, behavioral crowding costs. In fact, many real predator systems observed in the wild show stronger intraspecific regulation (eyy) than regulation by prey availability.
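The damping effect of the eyy term can be checked numerically. Below is a minimal Python sketch (the parameter values r = a = b = d = 1 and the initial condition are hypothetical, chosen only for illustration), integrating the SPES system with a standard Runge-Kutta scheme:

```python
# Minimal numerical sketch of the SPES system (parameter values are
# hypothetical, chosen only to illustrate the damping effect of e).

def spes_derivs(x, y, r, a, b, d, e):
    """Right-hand sides: dx/dt = rx - axy, dy/dt = bxy - dy - eyy."""
    return (r * x - a * x * y, b * x * y - d * y - e * y * y)

def rk4_step(x, y, r, a, b, d, e, dt):
    """One classical 4th-order Runge-Kutta step."""
    k1 = spes_derivs(x, y, r, a, b, d, e)
    k2 = spes_derivs(x + dt / 2 * k1[0], y + dt / 2 * k1[1], r, a, b, d, e)
    k3 = spes_derivs(x + dt / 2 * k2[0], y + dt / 2 * k2[1], r, a, b, d, e)
    k4 = spes_derivs(x + dt * k3[0], y + dt * k3[1], r, a, b, d, e)
    x += dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
    y += dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return x, y

def simulate(e, steps=20000, dt=0.01):
    r = a = b = d = 1.0            # hypothetical parameters
    x, y = 2.0, 2.0                # start away from equilibrium
    ys = []
    for _ in range(steps):
        x, y = rk4_step(x, y, r, a, b, d, e, dt)
        ys.append(y)
    return ys

damped = simulate(e=0.5)    # evil term on: oscillations die out
neutral = simulate(e=0.0)   # classical L-V: oscillations persist
```

With e = 0.5 the human population y settles onto the equilibrium y = r/a; with e = 0 the boom-and-collapse cycles never damp.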

We find the equilibrium value of x by setting both derivatives to zero: dx/dt = 0 gives y = r/a, and substituting that into dy/dt = 0 gives:

x = (d + er/a)/b.

So the environment gets destroyed only when e < 0. When e = 0 (thoroughly pacified society), one is in the standard L-V system, characterized by huge fluctuations, as illustrated in nature by the boom-and-collapse populations of the Canadian lynx and the snowshoe hare.
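The stability claim can be verified from the Jacobian at the coexistence equilibrium (x* = (d + er/a)/b, y* = r/a): its trace is −er/a and its determinant is abx*y* > 0, so the equilibrium attracts exactly when e > 0. A short sketch (the helper name is mine; any positive parameters may be passed in):

```python
# Linear stability of the SPES coexistence equilibrium.
# At (x*, y*) = ((d + e*r/a)/b, r/a) the Jacobian of
#   (rx - axy, bxy - dy - eyy)
# reduces to [[0, -a*x*], [b*y*, -e*y*]]:
# trace = -e*y*, determinant = a*b*x* * y*.

def spes_eigen_real_parts(r, a, b, d, e):
    """Return the real parts of the two Jacobian eigenvalues."""
    y_eq = r / a
    x_eq = (d + e * y_eq) / b
    trace = -e * y_eq
    det = a * b * x_eq * y_eq
    disc = trace * trace - 4.0 * det
    if disc < 0:                       # complex conjugate pair
        return (trace / 2.0, trace / 2.0)
    root = disc ** 0.5
    return ((trace + root) / 2.0, (trace - root) / 2.0)

# e > 0: both real parts negative (stable spiral);
# e = 0: purely imaginary eigenvalues (neutral L-V cycles);
# e < 0: positive real parts (unstable).
```

For instance, spes_eigen_real_parts(1, 1, 1, 1, 0.5) returns (-0.25, -0.25): a damped spiral.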

It goes without saying that the preceding SPES model applies to all sorts of situations, for example the socioeconomic crisis in Africa: the preceding shows that pure altruism will make the situation worse. This is actually why there are evil regimes there: it’s more stable that way. But it also says that if, say, Europe wants to deal with the Africa problem, pure altruism is contraindicated…

This also explains why pure do-gooder democracy is not stable, and rare, a fact known in antiquity (as pointed out by Aristotle and Polybius, inter alia). It also explains why both Athens and Rome used the Dark Side repeatedly, even while they were democracies, etc. It also explains why plutocracy is ubiquitous and so infuriatingly stable.

The math can be pushed considerably into nonlinear dynamics and more sophisticated models. 

Patrice Ayme 

***

[1] Some may object that Homo Sapiens is not the only species which is its own best enemy. This is irrelevant to the EVIL THEORY (it does not affect the math and conclusions above)… But also, the fact that few species prey on the existing megafauna has to do with the environment being already deeply degraded by Homo Sapiens… or with species of predators having already been exterminated: for example, the giant social saber-toothed cats which preyed on giant megafauna have been eliminated; elephants, hippopotamuses, etc. do not have to fear lions as much…

***

Note: Dynamical Consequences of Quadratic Predator Mortality:

  1. Breaks Hamiltonian structure of classical L-V.

  2. Removes closed orbits.

  3. Produces globally attracting equilibrium (in many parameter regimes).

  4. Can prevent paradoxical predator population explosion.

SEQUENTIAL LOGIC Dissolves The Liar Paradox (Abstract)

November 10, 2025

Doing Away With The Liar Paradox By Rethinking Logic.

There is a succinct and compelling dissolution of the Liar Paradox (LP): “This sentence is false”… One can argue that LP is not a deep mystery of truth, but rather a failure of proper reference. Reference ought to be rooted in the sequential, embodied nature of cognition.

Indeed, just as the most basic neural network theory models how neuronal networks work in the most basic way, one can extend this strategy further by remodeling ALL of logic accordingly. We call this logic SEQUENTIAL LOGIC. The need for sequential logic is blatant, considering the shortcomings of Twentieth Century logic sketched below.

Key Takeaways

  • Referential Indeterminacy: The paradox dissolves because the utterance “This sentence is false” is dynamically incomplete. The demonstrative “This” attempts to refer to a sentence that is still being processed and has not yet stabilized as a complete, definite object of thought. The statement is not a well-formed formula due to this referential failure.
  • Sequential vs. Timeless Logic: The core argument posits that logic in the human brain is a sequential, time-bound, and local process (like physical causality or quantum measurement), not an instantaneous, self-contained totality. The paradox arises only when one incorrectly assumes a “timeless” logic that can loop backward for self-observation after the fact (in complete contradiction with Quantum Measurement Theory).
  • Contrasting Classical Responses: The author suggests that classical solutions (like Tarski’s metalanguage or Russell’s Vicious Circle Principle forbidding self-reference) are sophistry that evade the issue (and pretty much repeat unimaginatively arguments elaborated by famous thinkers of the Thirteenth and Fourteenth Centuries). Those supposed solutions create an infinite hierarchy instead of addressing the fundamental, temporal problem of reference.
  • Totality and Metaphysics: One can link the Liar Paradox (and logical cousins like the set paradox, Gödel’s theorems, and the Halting Problem) to a metaphysical reliance on Totality—the assumption that one can meaningfully speak of “all propositions.” This is an idealization, as embodied systems operate locally and finitely (and a stealth axiom hidden below the famous Axiom of Choice).
  • Conclusion: The Liar Sentence is not a contradiction, but an incomplete/nonsensical instruction within a sequential interpretive system. It is dynamically inconsistent because the sentence as a referential object and the sentence as a processed event cannot coexist at a single moment in time.

In short, the paradox vanishes when we view logic as a procedural, time-ordered, local physical process rather than an absolute, self-contained totality with implied meaning and infinity for a backbone.

 

No infinity there, except in our minds…

Patrice Ayme

P/S: There is a more detailed 3,000-word version…

FUTURE OF MATH: COMMON SENSE LOGIC And LIAR PARADOX RESOLVED By Buridan… 7 Centuries Ago! New Axiomatics Proposed.

March 9, 2025

Abstract: We sketch a redoing of mathematics from scratch, using a debate around the Liar Paradox as leverage to introduce the new scheme. Mathematics and logic beat the brain into the better shape of more knowledgeable neural networks. Learning power comes in part from simplification (aka abstraction): subtler and more fundamental definitions. The new Axiomatics aims at simplifying math (the same method can be used all over!)

Defining logic itself in a more refined manner is essential to getting more subtle. 26 centuries ago, pre-Socratic Greek philosophers discovered self-contradictory logic, the Liar Paradox. Buridan, around 1340 CE, found a thoroughly modern solution space to the Liar Paradox. Buridan plunged the Liar Paradox into a dissolving conceptual bath. Buridan pointed out that context, and subuniverses of false, true, and… indeterminate statements, must be considered…

In the early 20C, Russell and Gödel used the Liar Paradox in their most famous works. Incompleteness of full arithmetic, thus mathematics, à la Gödel, uses Liar Paradox style arguments… (However, considering Buridan’s dissolution of the Paradox, which I embrace, I doubt their work will withstand the scrutiny of eons… Mostly it will become irrelevant… Math will still be incomplete… but for a different reason…)

Buridan’s solution of the Liar Paradox came to the fore, and was readopted, with Intuitionistic Logic, Theory of Truth, and Category Theory.

Conclusion: mathematics and logic are much less certain than they look, and more flexibility in the axiomatic approach is suggested for both pedagogical and purely intellectual reasons… to reset abstracting powers. That will have practical consequences with AI.

***

The Cretan paradox, truly a Liar Paradox, originates from Epimenides (circa 556 BCE, around two centuries before Aristotle). Epimenides, a philosopher from Crete, thus a Cretan, supposedly said:

All Cretans are liars.

If this statement is true, then Epimenides, being a Cretan himself, must also be a liar—meaning his statement is false. But if his statement is false, then not all Cretans are liars, which contradicts the statement itself.

This paradox is a self-referential loop, where a statement denies its own truth. Bertrand Russell found such a loophole in the axiomatization of math proposed by Gottlob Frege, by producing a variant of the Liar Paradox. Russell pointed out that the concept of “the set L of sets which are not elements of themselves” is self-contradictory.

If L is an element of itself, then by definition it should not be.
If L is not an element of itself, then by definition it should be.

This self-referential contradiction shattered Frege’s system. Frege himself, in the appendix of the second volume of Grundgesetze der Arithmetik, admitted defeat, writing:

“A scientist can hardly meet with anything more undesirable than to have the foundations give way just as the work is finished.”

To rescue Set Theory, Russell and Whitehead then invented the Theory of Types in Principia Mathematica. The idea was to organize sets into a hierarchy:

  1. Type 0: Individuals (e.g., numbers).
  2. Type 1: Sets of individuals.
  3. Type 2: Sets of sets of individuals.
  4. Type 3: Sets of sets of sets, and so on…

Under the system imposed by Russell and Whitehead, a set can only contain elements of a lower type, preventing “the set of things x that are such that x is not a member of x” (as Russell put it to Frege).

However, the Theory of Types was overcomplicated: it took hundreds of pages to prove 1 + 1 = 2… Overcomplicated, in my view of optimal thinking, means wrong.

Later, Zermelo and Fraenkel developed ZFC Set Theory, which resolved the paradox in a more practical way by banning unrestricted set formation: the Axiom of Separation (aka Axiom of Specification) allows sets to be defined only from existing sets under well-specified conditions, namely those arising from mysterious “properties”, typically WFFs (Well Formed Formulas).

Russell’s paradox was one of the crucial discoveries that forced mathematicians to rethink the foundations of mathematics, paving the way for modern logic and formal systems.

The Liar Paradox is closely related to Gödel’s incompleteness theorem because Gödel essentially formalized a version of the liar paradox within arithmetic, constructing a mathematical sentence that says,
“This statement is not provable in this system.”

If provable, the system is inconsistent. If unprovable, the system is incomplete. This self-referential twist is what makes both the liar paradox and Gödel’s work so philosophically deep. Here, following Buridan, we blow up the whole thing from below, a massive under-standing. 

Indeed the aim of depth ought to be teleological simplicity. I propose to replace all of these complications (Types, ZFC) with CSL, Common Sense Logic, and network-based axiomatics. A basic idea is that, confronted with a nonsensical proposition (such as the Liar Paradox), one has two basic strategies: first ask for more context; that’s basically the Buridan and CSL approach… And if that does not work, one can modify the proposition following classical methods from Gentzen, transforming a “NO” within into a “YES”, etc. (More another time. Gentzen, aged 35, died of starvation in detention in Prague in 1945.)

***

“My truth is false”… is the essence of the Liar Paradox.

In Common Sense Logic (CSL), given a statement S, there are three possibilities: S is true, S is false, S makes no sense. So if Epimenides comes to me and pontificates: “All Cretans are liars!”, Common Sense Logic dictates to refine the context: “Tell me, Epimenides, you are a Cretan and you just said you are a liar. Why do you expect me to believe you?”

As simple as that. CSL says that if one encountered someone who said solemnly: “My truth is false”, one would consider such a person to be nuts.
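The three-valued verdict of CSL can be sketched as toy code. Everything below is hypothetical scaffolding (the names CSL and evaluate are mine, not a real library); the point is only that a self-referential statement gets the third verdict instead of TRUE or FALSE:

```python
# A toy sketch of CSL's three truth values: true, false, makes no sense.
from enum import Enum

class CSL(Enum):
    TRUE = "true"
    FALSE = "false"
    NONSENSE = "makes no sense"

def evaluate(statement: str, refers_to_itself: bool) -> CSL:
    """Statements whose truth value depends on their own evaluation
    are rejected as nonsense rather than assigned TRUE or FALSE."""
    if refers_to_itself:
        return CSL.NONSENSE
    # Placeholder for a real context-dependent evaluation:
    return CSL.TRUE

print(evaluate("This sentence is false", refers_to_itself=True))
# prints "CSL.NONSENSE"
```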

***

Jean Buridan, a 14th-century philosopher, physicist, logician, and mathematician, a champion of empiricism and heliocentrism, creator of momentum theory and the first two laws of mechanics (including F = ma), also worked on the Liar Paradox. To yours truly’s admiration, Buridan found exactly the solution yours truly had stumbled on independently… seven centuries later…

The Liar Paradox arises when a statement refers to itself in a way that creates a contradiction, such as “This statement is false.” If the statement is true, then it must be false, and if it is false, then it must be true.

Buridan’s Approach to the Liar Paradox

Contextual Evaluation:

Buridan suggested that the truth of statements could depend on the context in which they are evaluated. He proposed that self-referential statements like the Liar could be understood differently depending on the situation.

Not Just Two Truth Values:

Buridan posited that there are more than just true and false values. Buridan explored the idea of “indeterminate” truth values, meaning that some statements may not fit neatly into the binary true/false dichotomy. [A number of logical systems, including Intuitionistic Logic and Category Theory, came, in the 20C, to the same conclusion!]

Semantics and Syntax:

Buridan’s work emphasized the distinction between the syntax of language (the structure of statements… modern “Proof Theory”) and their semantics (the meaning… modern “MODEL THEORY”). Buridan argued that the paradox arises from a misunderstanding of how self-reference affects truth. [From my point of view of Chronological Logic, the statement is applied to itself extemporaneously…]

Pragmatic Considerations:

Buridan considered the pragmatic implications of self-referential statements, suggesting that the context and intent behind a statement could influence its truth value.

***

In other words, Buridan’s solution of the Liar Paradox is thoroughly modern. Why did it take seven centuries to become part of the mainstream, especially as there it was in the 1300s, thanks to Buridan’s enormous fame, including as head of the University of Paris, adviser to four kings and lover of the queen, let alone his insolence to the church? Science moves in mysterious ways.

Or maybe not: Actually the church destroyed Buridan more than a century after his death… Hence his eclipse… 

Much mathematics is poorly understood, because it is poorly taught, as it is poorly understood or made deliberately obtuse by the teachers… Motive? Create an Elite Aura from which the elite profits… Often mathematics is used as a way to block the social ascent of those of lower birth…

***

The preceding observation is psychological, and sociological… But I would dare say that it has real consequences, not just in how math is taught, but also in how math is understood. In other words, instead of focusing on an illusory search for rigor, I would suggest that the foundation of math should focus on the most valuable CONCEPTUAL CONTENT as the new, dynamic axiomatics.

***

I will give examples of this. The simplest is to observe that, locally, a curve is the sum of a point, a line segment, a piece of parabola, a piece of cubic, etc… What I am saying is that this should NOT be JUST a theorem (attributed to the Englishman Taylor when actually the French Bernoulli brothers discovered it much earlier: see the imperialistic politics at work?)… The expansion of a segment of curve into powers should not be just a theorem, because it is this EXPANSION that makes the theory of CALCULUS powerful: expanding in powers should be viewed, in an alternate approach, as an AXIOM!
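The "expansion as axiom" view can be made concrete with a computer algebra system. Here is a minimal sketch using sympy (the choice of sin(x) is arbitrary), producing the point + line segment + piece of parabola + piece of cubic decomposition directly:

```python
# Local power-series expansion of a curve: the point, the line segment,
# the piece of parabola, the piece of cubic... produced on demand.
import sympy as sp

x = sp.symbols("x")
curve = sp.sin(x)                  # any smooth curve will do

expansion = sp.series(curve, x, 0, 4)
print(expansion)                   # x - x**3/6 + O(x**4)
```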

Here is a physical analogy: going back to the metallic atoms which make up an engine, from scratch, to try to “explain” the engine is one thing, but the real interest of the jet engine is something else: it is in the fact that it creates a powerful jet. That’s a node in the neural networks, a choice value… “Choice Value” is what “AXIOM” originally means!

***

Axiomatizing through More Powerful Conceptual Teleology is how we humans can grab with the same brain, or maybe even an inferior brain, a much smarter and much more powerful culture than that of our distant ancestors.

This form of axiomatization enables us to abstract ever more meaningful knowledge. The method must be applied to math itself, replacing the rigid axiomatics with dynamic intuition, in other words looking directly (intuition, from Latin in-tueri, “look at, consider”). The Greek axioma comes from what is worthy.

Common Sense Logic will enable us to go to the essence and neglect the dissidence from said Common Sense… Logic and metamathematics are the foundations of our understanding of the world; they have been extensively changed since Aristotle, and the proposals above are more of the same good old change…

Patrice Ayme

Here below are the Axioms of the SET THEORY used by more than 99% of mathematicians. I object to the infinity axiom, and point out that other axioms, say that of Replacement, suffer from circularity and are implicitly network definitions…

QUANTUM ONTOLOGY 

December 11, 2023

The Founders of Quantum Physics were both mechanics and philosophers, more exactly ontologists. Unsurprisingly, as ontologists, they disagreed with each other [1].

Among the consequences of Quantum Ontology: The traditional proof of the “Ontological” Existence of God is deconstructed by Quantum And Differential Topology… Ontology is the logic of… what is. No less. It’s an obscure word that philosophers love to brandish. But the concept has been at the core of the hardest problems in physics… for centuries… And progress has been made… Unbeknownst to most philosophers. Let’s try to correct that.

My argument against ontology as traditional philosophy views it is this: a necessary part of describing reality, what things are… is Quantum Physics, where local becomes global, and Differential Topology, where local constructs global. However, traditional ontology assumes that bigger is not different from smaller. But a reminder is in order:

The compound concept ontology (‘study of being’) combines onto- (Greek: ὄν, on; gen. ὄντος, ontos, ‘being’ or ‘that which is’) and logia (-λογία, ‘logical discourse’). While the etymology is Greek, the oldest extant record of the word itself is a Neo-Latin form, ontologia, which appeared thanks to the Swiss-based Jacob Lorhard (Lorhardus) in 1606.

Quantum Physics was unknown to the Tadjik Ibn Sina (Avicenna, Xth century), author of the first ontological argument on the existence of God and (parrot and Saint) Anselm (XIth century), who gave a simpler version. 

Avicenna distinguishes between a thing that needs an external cause in order to exist – a contingent thing – and a thing that exists by its intrinsic nature – a necessary existent. Contingency seemed to lead to an infinite regress of cosmological arguments… So Avicenna concluded that some necessary cause (namely God) is needed to end the infinite chain. However this argument assumes that the information space is flat, so that what is global is just what is small, written large. Advances in physics and mathematics show that this is not correct.

One can demolish the existence of indefinitely large numbers using the combination of Quantum Physics with Black Holes Physics. The argument generalizes to any infinity.

Independently, arguing that Quantum contingency must have a finite exponential radius… in direct consequence of differential topology, one must introduce a finite radius for Quantum Entanglement (SQPR)… The second argument seems similar to Avicenna’s desire to kill the infinite chain… Although SQPR was actually derived from the most logical granular analysis of topology we have: non flat differential topology always comes with a finite exponential radius…  

Demolishing the ontological proof of the existence of “God” using physics couldn’t be imagined before a century ago, because Quantum Physics, and its mechanics of existence made of complex waves, was completely unimaginable.

Quantum Physics computes secretly with partial elements of known reality. The adjective “partial” is crucial (they are known as “eigenstates”, a Germano-English noun qualifying the basis vectors of reality considered). 

Many academia-proclaimed “philosophers” don’t know much about Quantum Physics… so they don’t realize that Quantum Ontology is part of the reality which has been scientifically demonstrated to exist.

Cartesian materialism argues that it is possible to find the content of conscious experience moment by moment in the mind. Materialism argues that matter is the fundamental ‘substance‘. It is traditionally a point of view in ontology.

However, Quantum Physics throws out through the mind’s window what “matter” is, what “substance” is, or even what “fundamental processes” are. One can check Feynman’s “Theory Of Fundamental Processes”.

The concepts uncovered by Quantum Physics are so fundamental that they impacted mathematics, and should impact all branches of logic and wisdom. Concepts such as “Renormalization”, where the act of observation changes the… ontology… should be crucial in Twenty-First Century philosophy.

Independently of Quantum Physics, geometry, which had been stuck in a trance caused by Euclid, finally rediscovered non-Euclidean geometry… starting with the development of hyperbolic geometry by the Hungarian genius Bolyai… What does differential geometry say? That on a general surface, distance and parallelism can only be defined LOCALLY.

Special Relativity applies this localization to space-time: what Henri Poincaré called “Local Time”. So it should have been obvious by the Nineteenth Century that Ontology had to be localized too. Then the rise of Quantum Physics should have made it obvious that Ontology had to be QUANTUM, or was not. Physics Nobel Niels Bohr tried to help ontology with his theory of complementarity. However, Quantum Field Theory, if nothing else, made complementarity irrelevant. In QFT, matter is created from apparently nothing, and disappears before being directly observed, but makes its ephemeral existence felt.

The preceding has a drastic impact on the Theory Of Knowledge (TOK). TOK must determine what knowledge is made of, the atoms of knowledge, so to speak. However, both differential geometry and Quantum Physics say that one can say more, and have a more complex logic, by making a more granular analysis!

These advances in understanding in the last two centuries have a direct impact on ontology. What there is must be analyzed differentially (in the mathematical sense) and through Quantum Physics (which uses noncommutative differential topology, no less!)… Thus the old ontological arguments are not correct. Indeed, what do these old arguments do? They analyze the whole… as if it were the same in its totality as it is granularly… precisely what we now know, logically and experimentally, to be impossible!

All the old notions have vanished upon closer inspection: what is reductionism, when reduction brings delocalization? What is materialism when materials are made from exclusion principles and fields which are only virtual? What is mind-body dualism, supposedly when mind and body are different substances, when matter seems to have a mind of its own when dissolved at the level of fundamental processes?

Artificial Consciousness, what truly Quantum Computers are, will help us understand all this.

Ontology is more alive than ever, simply it has become Quantum and Differential.

Patrice Ayme

Quantum Physics all over, held together by delocalization and entanglement.

***

[1] Parmenides, Anaxagoras, Democritus, Plato are famous theoreticians of what existence was… Planck, main promoter of Einstein, disagreed with him on the nature of light, yet the latter got the Nobel precisely for that. De Broglie disagreed with the Copenhagen School, which owed him all. Heisenberg disagreed with Einstein, while pointing out to Einstein that he used Einstein’s own methodology against him… Dirac took everybody by surprise by generalizing what space was, rediscovering pure math Cartan had developed 15 years earlier… And so on… Ontology has become the most practical thing…

What’s An Equation? And Why Are Equations Crucial in Physics?

July 29, 2023

The Copenhagen Interpretation of Quantum mechanics (CIQ) is incomplete, because it uses the Quantum Collapse (QC) unavoidably, but has no description of it. Whatever. 

On the face of it, that shouldn’t be a problem, that’s how science always advances: newly proposed explanations are often incomplete.

A description of the collapse will be full when there is an equation for it. So what is an equation? It is a mathematical description using algebra and the logic (and semiotics) of algebraic geometry and its extension, mathematical analysis and infinitesimal calculus.

Now a typical equation is formalized by: V = W, where W and V are some expressions… each of them a logic of its own.

The equal sign (“=”) was invented by Robert Recorde, a Welsh mathematician, in 1557. He introduced the symbol in his book titled “The Whetstone of Witte,” where he explained its usage for the concept of equality in mathematical equations.

Before the equal sign, mathematicians used phrases like “is equal to” or “equals” to indicate equality. Recorde sought a more efficient and concise way to represent this concept, so he devised the two parallel lines (=) to signify equality between two expressions… As he put it:

“… to avoide the tediouse repetition of these woordes: ‘is equalle to’, I will sette as I doe often in worke use, a paire of paralleles, or Gemowe [twin] lines of one lengthe, thus: =, bicause noe .2. thynges, can be moare equalle.”

Going back to Euclid, or even earlier, Pythagoras, where are the equations? Well, I grabbed the Pythagorean theorem, while testing my 13-year-old daughter’s understanding of (some of the most famous of) its proofs. It turns out that the proof equalizes logics.

It turns out that the proofs involve computing the same thing, an area, in two different ways, and equating the results.

Two logics lead to the same result, in two different ways, and equating the results forces out a hidden axiom (a so-called theorem [1]) 

The proof where one completes the hypotenuse is logically similar: one computes the area of the big square in two different ways, and then one equates the results, getting the relationship between the sides and the hypotenuse.
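The "one area, two computations" structure of that proof can be checked symbolically. A small sketch with sympy: the square of side a + b is decomposed into the tilted square on the hypotenuse c plus four right triangles with legs a and b, and equating the two computations forces the hidden relation out:

```python
import sympy as sp

a, b, c = sp.symbols("a b c", positive=True)

# Way 1: the big square of side (a + b), computed directly.
area_direct = (a + b) ** 2

# Way 2: the same square as the tilted square on the hypotenuse c
# plus four right triangles with legs a and b.
area_pieces = c ** 2 + 4 * (a * b / 2)

# Equating the two computations forces out the hidden axiom:
difference = sp.expand(area_direct - area_pieces)
print(difference)   # a**2 + b**2 - c**2, zero exactly when a^2 + b^2 = c^2
```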

***

In physics the situation is different. The two different logics are forcefully equated; that’s the equation. That forces some results (actually they are theorems, same as in pure math). Then those results are checked against experiments, say the advance of the perihelion of Mercury [1], or the deviation of light by the sun (twice as much in Einstein’s theory as in Newton’s).

***

The force-acceleration law of motion (F = ma) and Poincaré-Einstein’s mass-energy equation (E=mc^2) are iconic equations which are the basis vectors of some dimensions of science (yes, equations as basis vectors in knowledge space…).

F is obtained, and can be measured, in a certain way, whereas the acceleration a is purely dynamic; it’s measured in a completely different way. So it’s not just two logics which are equated, but two completely different physical experimental processes [2]. Physical processes are themselves a particular type of logic, and each process is its own logic.

***

E = mc^2 was first demonstrated by Poincaré in a reduced setting (a bit as Buridan had a reduced setting for F = ma). Poincaré demonstrated that light of energy E had inertial mass m = E/c^2. That was rigorous. Later generalizations evolved: E^2 – (pc)^2 = (mc^2)^2… that too is rigorous, but confusion arose about E = mc^2 standing alone (true in the strict Poincaré context, speculative otherwise, covering up unknown physics…).

So equations in physics always start speculative, lead to new theories in physics, and then join two different experimental approaches (logics)… The same happens in mathematics (where also explicit examples are often generalized to more general contexts… which are then checked, quite a bit as in physics).

People like to talk about the “Multiverse”… In the context of Quantum Theory, it’s a completely idiotic idea… BUT, in the case of information space, the Multiverse is the rule, and equations are what binds it together. Equations ultimately are the foundational ideas expressed in their barest form.

It took around seven centuries to get the equation of gravity in its present form (SQPR says it needs to be modified further, to include the “splitting aka fatigue” of gravitons…)

In the case of Quantum Collapse, we don’t have an equation yet; according to SQPR, we should get one, as the Quantum Interaction (that QI exists is another axiom) is a field (it propagates topologically and at finite speed). But we have an experimental, and theoretical fact: QC is much faster than the speed of light.

I equate, therefore I think…

Patrice Ayme

***

[1] The metalogic of the whole thing is crucial: Newton’s gravitation predicted not as much advance of the perihelion of Mercury as observed: a planet was searched to explain the dragging of Mercury’s ellipse… and not found… Einstein knew this. It turns out that Local Time slows down closer to the Sun, and that explains the effect…

***

[2] The first to point out the equivalence of these two logics, at least in a reduced setting, was Buridan, circa 1350 CE. Anglo-Saxon idolaters call it Newton’s first law.

Is The PRIME NUMBER THEOREM TRUE? ULTRAFINITISM Says No

May 28, 2023

Beyond HoTT: Doing Away First With Too Many Integers!

Abstract: Contrary to what is presumably popular opinion, the foundations of mathematics are a completely open subject. Famous mathematician and philosopher Bertrand Russell showed that Naive Set Theory didn’t work (circa 1900 CE). Russell modified Set Theory through the hierarchy of the Theory of Types… which has been recently reinforced by Homotopy Type Theory (HoTT) and its axiom of univalence, which identifies equivalent structures. A number of other valuable approaches to mathematical foundations were tried and bore fruit, such as Intuitionism and Category Theory. HoTT simplifies proofs and reasoning by making equivalent concepts identical.

Here, in a further simplification, we show that the set of Integers, as usually understood, the infinite set N, cannot exist… due, independently, to the Black Hole Limit and the Sub Quantum Limit. In other words, nothing can go on forever. Platonists will scream that, by definition, abstract objects are not real, and thus can’t be objected to on the grounds of reality (Aristotle was already “embarrassed” by this argument). Can one define something with the property that it can’t exist? No, because our brains only interconnect elements of reality. Moreover… it’s a contradiction. One can’t be and not be; even Shakespeare knew this.

Some Math Would Implode If Right! BLACK HOLES Would Form

Ultrafinitism is defined here as the impossibility of any mathematical object depicted by an infinite set of symbols, in particular the impossibility of using classical real numbers (a la Cauchy or Dedekind). BHL, the Black Hole Limit, prevents this. In particular all infinity axioms, even Potential Infinity, are outlawed. 

Ultrafinitism still allows us to do analysis… with rigorous proofs coming from… non-standard analysis! Numbers such as e, the base of the exponential, let alone the exponential function itself, still exist, because they are defined through finite processes.

Necessity of Ultrafinitism? The theory of abstract objects, invented by Plato, doesn’t hold (the BHL shows that it is a contradiction; for those who don’t want gravity as an argument, one can use an argument from Quantum Entanglement). All there is are connections between objects, along paths and paths of paths. Abstraction has led to the infinity addiction, which poisons mathematics.

Interest of Ultrafinitism? Focus attention on useful mathematics, rather than on the opioid-like infinity drug… which poisons the well of physics.

Cynics may observe that, de facto, mathematicians already practice ultrafinitism. One writes pi = 3.1415… Nobody could write all of pi’s digits: the universe won’t last that long. Yes and no. Many great math problems would be much simpler without infinity.
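This de facto ultrafinitism is easy to exhibit: every pi ever used was produced by a finite process. A sketch (my choice of Machin’s 1706 formula, truncated to finitely many terms; the helper names are mine):

```python
from fractions import Fraction

def arctan_inv(x, terms):
    """Finitely many terms of arctan(1/x) = sum (-1)^k / ((2k+1) x^(2k+1))."""
    return sum(Fraction((-1) ** k, (2 * k + 1) * x ** (2 * k + 1))
               for k in range(terms))

def pi_machin(terms=30):
    """Machin's formula: pi = 16*arctan(1/5) - 4*arctan(1/239), truncated."""
    return 16 * arctan_inv(5, terms) - 4 * arctan_inv(239, terms)

# A finite computation, exact rational arithmetic throughout.
print(float(pi_machin()))  # 3.141592653589793 (full double precision)
```

Thirty terms already saturate a 64-bit float; the “real” pi only ever appears through such finite truncations.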

***

The Proof Of The Prime Number Theorem Can’t Be Written Down:

All the proofs of the infinity of the number of primes depend upon what they want to prove, namely infinity: primes are infinite, and we can prove it… using three infinity axioms in quick succession! So we prove that there is infinity… using infinity three times on the way!

Indeed the original proof, Euclid’s, of the infinitude of primes assumes that, given finitely many numbers p1, p2, p3,… pn, the product (p1)(p2)…(pn) is also a number, and so is (p1)(p2)…(pn) + 1.

Yes, three different infinity axioms are used to prove the infinity of the set of prime numbers:

AXIOM OF INFINITY: There exists at least one set that contains the empty set and is closed under the successor operation (and thus contains the natural numbers).

AXIOM OF SUCCESSION: For every natural number n, there exists a unique natural number n+1.

AXIOM OF INDUCTION: If a property P(n) is true for the natural number 0, and if the implication “If P(k) is true for a natural number k, then P(k+1) is also true” holds for all natural numbers k, then P(n) is true for all natural numbers n.
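For any finite list of primes, Euclid’s construction above is itself a finite process; a sketch (function name mine, trial division used for simplicity):

```python
def euclid_witness(primes):
    """Given primes p1..pn, the number p1*p2*...*pn + 1 is divisible by
    none of them, so any prime factor of it is a new prime."""
    product = 1
    for p in primes:
        product *= p
    candidate = product + 1
    # The candidate leaves remainder 1 when divided by each given prime.
    assert all(candidate % p != 0 for p in primes)
    # Find a prime factor of candidate by trial division (a finite search).
    d = 2
    while d * d <= candidate:
        if candidate % d == 0:
            return d
        d += 1
    return candidate  # candidate itself is prime

print(euclid_witness([2, 3, 5]))        # 2*3*5 + 1 = 31, itself prime
print(euclid_witness([2, 3, 5, 7, 11, 13]))  # 30031 = 59 * 509, yields 59
```

Each run is finite; it is only the claim that this can be repeated forever that invokes the infinity axioms listed above.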

***

BLACK HOLE LIMIT (BHL) imposes Ultrafinitism:

Does that make sense physically? Assigning one elementary particle of some kind for each prime, in a restricted place, where the mathematician is located, and assuming that there is an infinity of prime numbers, one would get a Black Hole, and the whole theory would vanish. Do this math, become a Black Hole! Contradiction![1]

This argument is much more serious than it looks: there is no way to accumulate an infinite number of symbols in a finite region. One should call this idea, this admission of reality, ULTRAFINITISM.

***

Potential infinity occurs when a process can be repeated indefinitely without terminating. As we just saw, accumulating an infinite number of symbols in a locale is not possible. 

Potential infinity is used in a type of foundational mathematics called finitism. The reasoning in the preceding paragraph shows that even finitism is too loose, too infinity-friendly, and one should adopt ultrafinitism… because truly the alternatives mean nothing.

Finitism is what computers use. In finitism, only finitely constructible objects are considered to be legitimate mathematical objects, and potential infinity is used to describe sequences or processes that can be continued indefinitely without reaching a finite limit. Finitism does not use the Axiom Of Infinity. 

Finitism limits the types of mathematical objects that can be studied. I would limit them even more.

The real world is locally finite. So the mathematics we use should also be locally finite. After all, all mathematics is a bunch of neural networks in one locale, called a mathematician, and, thus, is bound to be finite too.

Advantage of ultrafinitism? Focusing on mathematics that really exists, the mathematics that can be computed by computers… Say nonlinear wave theory…

Discarding infinities, outside of Potential Infinity, would then enrich mathematics, just as discarding flat space enriched geometry by introducing curved (and twisted) differential geometry. Finitism would reinforce computational mathematics, soon to be quantum computational math…  

“The basic ideas of mathematics are intrinsically bound to finitude, and to that extent they derive their strength and their justification. The infinite, with its uncanny problems, is always secondary to what is finite.” (Mathematician Edmund Husserl, ex-assistant to famous mathematician Karl Weierstrass, “The Crisis of European Sciences and Transcendental Phenomenology,” p. 261; later Husserl switched and became a famous philosopher.)

***

  • “Abstract” Objects do not exist:

Plato held that teaching mathematics was very important to progress in wisdom. This is correct… And all the more remarkable as Plato held this opinion a century before Archimedes published the “method of exhaustion” (the beginning of infinitesimal calculus, using potential infinity…) Meanwhile our knowledge of math has progressed enormously, providing a gigantic knowledge space full of metaphors and analogies, let alone more precise methods for the most advanced thinkers.

Plato had no notion of neurology, let alone neural networks nor the topological geometry of the brain. According to Plato, the physical world was merely the vision of a shadow of a perfect and eternal realm of “forms”….beyond our physical reality. These forms included mathematical concepts like circles, squares, and triangles… 

Aristotle believed that forms were not separate entities existing in a realm beyond the physical world, but rather were immanent within the objects themselves (so I am repeating Aristotle, the Destroyer of Democracy! Horror!). Aristotle expressed embarrassment towards some of his former associates, including Plato and his followers, “all the more as they are friends”, regarding their views on the theory of Forms (that is, abstractions unconnected to reality).

Weirdly, at first sight, most modern mathematicians subscribe to Plato’s view, which his own student Aristotle viewed, correctly, as ridiculous… 24 centuries ago. So why so weird? Because the modern theories of infinity, which deny the existence of Black Holes, among other things, are themselves weird and ridiculous, while much of modern mathematics orbits around them… It may be simply the result of fashion and laziness. I was around top mathematicians for decades, and, although the lot I was mostly around may have been very peculiar, I did not find their moral standards elevated enough to contribute intellectually.

***

Ultrafinitism Doesn’t Exclude Infinitesimals:

Denying the existence of potential infinity, as ultrafinitism does, does not necessarily shut down NON STANDARD ANALYSIS. 

Archimedes’ axiom (AA) is a mathematical axiom which states that if a positive number, a, is added to itself enough times, it will eventually exceed any other positive number, b. (It appears in Book V of Euclid’s Elements as Definition 4: magnitudes are said to have a ratio to one another which can, when multiplied, exceed one another.)

In other words, if we have two positive numbers a and b, there is always a whole number n such that na > b. In the Method Of Exhaustion, approximating curves by piecewise linear figures, Archimedes used it explicitly, attributing it to the pre-Euclidean Eudoxus of Cnidus (Archimedes also used AA by… denying it, which is what happens when one uses infinitesimals… as he did!)
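The Archimedean property is, in each instance, a finite search for that n; a minimal sketch (helper name mine):

```python
def archimedes_n(a, b):
    """Smallest whole n with n*a > b. Such an n exists for any positive
    a and b precisely because a is not infinitesimal (Archimedean axiom)."""
    assert a > 0 and b > 0
    n = 1
    while n * a <= b:  # terminates: n*a grows past b in finitely many steps
        n += 1
    return n

print(archimedes_n(0.25, 10))  # 41, since 40 * 0.25 = 10 is not > 10
print(archimedes_n(3, 10))     # 4, since 3 * 3 = 9 <= 10 < 12 = 4 * 3
```

An infinitesimal a would make this loop run forever; that is exactly the denial of AA discussed below.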

This Archimedean Axiom is conventionally used to prove various mathematical results, including the properties of real numbers and the existence of irrational numbers. It is also important in calculus and the theory of limits.

Denying AA for a positive number a (so that there are numbers b such that na never exceeds b) means that a is what non-standard analysis calls an infinitesimal: a = dx… to use Leibniz’s notation. Non-standard analysis made infinitesimals rigorous. However top mathematician Connes said infinitesimals have no meaning, because he couldn’t point at them… Yes, indeed, but neither can one point at 10^10^10^100, so the objection is without value, as Connes believes in the latter… (especially considering the ultrafinitism imposed by the Black Hole Limit (BHL)).

Rescuing infinitesimals is important because, on the face of it, the traditional proofs of calculus a la Cauchy using limits don’t hold (because, ultimately, those proofs use an infinity of symbols). So, for rigorous proofs, if one posits, as I do, ultrafinitism, one can use non-standard analysis (cynics may point out that there is then only a finite number of infinitesimals; yes, but so what?). This is an extremely ironic situation… as initially infinitesimals were attacked by the likes of Berkeley as not rigorous… while, in light of ultrafinitism, non-standard analysis provides rigorous proofs…
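For a flavor of finitely represented infinitesimals: dual numbers (quantities a + b·dx with dx² = 0) compute exact derivatives in finitely many symbols. This is a minimal sketch of that idea, not Robinson’s full construction:

```python
class Dual:
    """Numbers a + b*dx with dx*dx = 0: a finite model of infinitesimals."""
    def __init__(self, a, b=0.0):
        self.a, self.b = a, b
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.a + o.a, self.b + o.b)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # (a + b dx)(c + d dx) = ac + (ad + bc) dx, since dx^2 = 0
        return Dual(self.a * o.a, self.a * o.b + self.b * o.a)
    __rmul__ = __mul__

def derivative(f, x):
    """f(x + dx) = f(x) + f'(x) dx: read the derivative off the dx part."""
    return f(Dual(x, 1.0)).b

print(derivative(lambda x: x * x * x, 2.0))  # 12.0 = 3 * 2^2, exactly
```

No limit, no infinite sequence of symbols: the infinitesimal dx is a single stored coefficient, and the derivative falls out algebraically.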

The preceding rejection of infinity is part of an answer to the most fundamental question in mathematics: How do mathematicians know that something they prove is actually true? Certainly, using infinite objects which can’t exist should not be part of the answer.

****

Patrice Ayme

***

[1] This brings up the subject of what is information in its most fundamental essence.

A bit can be stored as the presence or absence of an electrical charge, or as the orientation of a magnetic field on a tiny section of a disk’s surface. In quantum computing, a bit may be represented by the spin state of a single electron or photon.

All have gravitational mass, though. So information is mass?

***

Homotopy Type Theory (HoTT) does not rely on “infinity axioms” in the same way as set theories such as Zermelo-Fraenkel set theory with the axiom of infinity (ZFC).

In traditional set theory, the axiom of infinity ensures the existence of an infinite set, typically represented by the set of natural numbers. This axiom states that there exists at least one set that contains the empty set and is closed under the successor operation.

HoTT takes a different approach to foundations and, similarly to Category Theory, does not explicitly rely on the concept of infinity as an axiom. Instead, HoTT is based on the principles of homotopy theory and type theory. It focuses on studying types and their properties, with emphasis on the ideas of equivalence, equality, identity types, homotopy-theoretic concepts and higher-dimensional structure.

However, it is important to note that HoTT does deal with certain infinite structures: they are just hidden in plain sight. For example, the circle type (S^1) and the type of natural numbers (N) are both infinite in nature.

The spirit of the univalence axiom, that what is equivalent must be identified… was actually used, de facto, by mathematicians since forever. Cardano and Euler used it. But they used it implicitly, and this made mathematics hard to understand for those who naively think math is all and only about rigorous reason. So HoTT makes explicit what was implicit.

By espousing ultrafinitism, I promote a similar honesty. Mathematicians mostly use math, implicitly, in an ultrafinitistic way. I just try to block them from doing useless math, because math about what can’t be is no math.

But the concept of infinity is not axiomatized separately in HoTT; it emerges naturally within the theory based on the principles of higher-dimensional type theory.

HOW MATHEMATICS EMPOWERS Souls With Wiser, More Powerful Abstractions: CONCEPTUAL DIMENSION THEORY

February 8, 2020

MATH IS A LANGUAGE WHOSE WORDS ARE NOT JUST THOUGHTS MADE OF SETS OF OBSERVATIONS, BUT COMPLICATED UNOBVIOUS LOGICAL SYSTEMS, Endowed With High Dimensions:

Abstract: What’s Math? And why does it matter?[1] Mathematics uses words denoting high dimensional concepts (defined subsequently). Those dimensions are the vertices of sophisticated logical systems. Logic itself is physics (nature), as basic as it goes. Thus mathematics is a maximally logically concentrated language which speaks of, and with, various conclusions humanity has drawn from the universe (that’s what “abstract” means: drawn away from!) Hence mathematics’ beauty, even poetry, let alone intelligence, from its enormous logical power.

Warning: Some of this essay is very basic, some of it on the forward edge of human understanding, and will be controversial. Readers should skip over the harder sections.

***

Mathematical concepts are hyper powerful because they are neurologically multidimensional and those dimensions are logically equivalent.

The power of mathematics comes from its power to abstract entire trains of thought, and more. This way is not unique to mathematics. Normal language works the same way. But mathematics is just much more powerful. As I will try to explain, the words of mathematics are much higher dimensional. 

If we say “red” (in any human language), we mean electromagnetic radiation within a more or less well defined wavelength range (which can be measured in fraction of a meter, or multiple of an atom). It doesn’t matter in which human language “red” is said: it’s always the same idea: a range of frequencies.[2]

A prehistoric man may have measured “red” as the wavelength of light emitted by blood, or bauxite, or iron oxide. Not exactly the same connotation, but the same general idea: a range of electromagnetic wavelengths. 

“Red” is a concept. So is a “parabola”: a concept too. But the second is tied into, and is, a much more complicated logic, with many aspects.

A parabola represents some sort of fixed equidistance, between one point and a line. A hyperbola, some sort of fixed difference of the distances to two points. Two different subtle notions about distance. The two concepts are in turn full of corollaries and theorems: other consequences, unexpected at first sight. Ellipses are the set of points the sum of whose distances to two points is fixed. It turns out that this is the trajectory of an object submitted to inertia counterbalanced by a force proportional to the inverse square of the distance to a central point.

However a “parabola” is not just one concept, but many concepts, logics, so-called “theorems”. When you kick a soccer ball (or shoot an arrow, fire a missile or throw a stone) on a planet without atmosphere, it arcs up and comes down again, following a parabola (on a planet with atmosphere, the parabola shrivels a bit into a more complicated curve which can also be computed). A parabola is the set of points equidistant (same-distance) from a fixed line (the directrix) and a point (the focus).

A parabola has this profitable property: any ray parallel to the axis of symmetry gets reflected off the surface straight to the focus. One can see the interest if one wants to concentrate (say) solar power, or conversely, have a focus of heat send back a beam of parallel heat… or parallel light, as in a lamp. If we slice through a cone, parallel to its side, we also get a parabola. The Ancients knew this. Menaechmus, in the 4th century BC, discovered a way to solve the problem of doubling the cube using parabolas (not just with compass and straightedge).

With such useful properties, parabolas are all over mathematics and physics, engineering and technology. A celestial body on a parabolic trajectory probably came from outside the solar system (and certainly so if it’s hyperbolic, the next conic section over…). Hence, when mathematicians, physicists and engineers brandish the word “parabola”, they actually brandish lots of elaborated logic, enough to fill an entire book of senior high school mathematics. We are far here from a simple range of frequencies. So “parabola” is an abbreviation of thoughts.
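The focus-directrix definition above and the familiar arc y = x²/(4f) are one and the same parabola, and that equivalence can be checked numerically; a small sketch, with my own choice of focus and sample points:

```python
import math

def on_parabola_checks(f=2.0):
    """For the curve y = x^2 / (4f): the distance from each point to the
    focus (0, f) equals its distance to the directrix, the line y = -f."""
    for x in [-3.0, -1.0, 0.0, 0.5, 2.0, 7.0]:
        y = x * x / (4 * f)
        to_focus = math.hypot(x - 0.0, y - f)
        to_directrix = y + f  # vertical distance down to the line y = -f
        assert abs(to_focus - to_directrix) < 1e-12, (x, to_focus, to_directrix)
    return True

print(on_parabola_checks())  # True
```

Algebraically, x² = 4fy makes the focus distance collapse to y + f exactly, which is why the numerical check holds to rounding error.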

***

Patrice’s DIMENSIONAL POWER OF CONCEPT THEORY:

The dimension of a mathematical concept shall be equal to the number of different neurological networks its various definitions, non-obviously equivalent, but mathematically equivalent, call upon.

One could object to this definition that it is subjective, that, if we were much more clever, the different definitions of a given mathematical concept would be glaringly obvious, etc. However, we have reached a level of intelligence that is enough to conquer the galaxy (if we don’t self-destruct, a big if, it’s only a question of time). So we have here a particular level of intelligence which is absolutely defined (roughly).

To further dig into the notion of “subjectivity”: the notion of “mathematically equivalent” is different from “logically equivalent”: mathematics is, partly, a social concept. For example, mathematicians did excellent infinitesimal calculus, getting great results using Descartes’ Algebraic Geometry, for two centuries without a rigorous definition of “calculus” (and now we have too many notions!). This is no accident, but caused by the “neural networks” definition of mathematics. When we say that mathematical concepts are made of logical assemblies of neural networks, we are also alluding to the saying that the proof is in the pudding. This was practiced before, but not explicitly said, causing confusion. Something was clearly missing. What is mathematics? I say: neural networks. Before this, the best authorities on the subject had nothing very deep to say about it. An example is Bertrand Russell, an authority in the Foundations of Mathematics (he found a glaring problem in the foundations of Set Theory and replaced it with the Theory of Types… launching an industry of foundations of mathematics).

Bertrand Russell put it memorably, well before neural networks; I long meditated his quote, and it brought me where I am.

As this essay shows, and as I have long held, that quote expresses a thought which, unsurprisingly, turns out to be untrue. Why? Because it excludes the neural network definition of mathematics… which I embrace (as I created it!). It’s unsurprising because, as Russell would have been the first to admit, mathematics works, and thus is, somehow, true. I show how.

Here is Bertrand more fully quoted: “Pure mathematics consists entirely of assertions to the effect that, if such and such a proposition is true of anything, then such and such another proposition is true of that thing. It is essential not to discuss whether the first proposition is really true, and not to mention what the anything is, of which it is supposed to be true. […] Thus mathematics may be defined as the subject in which we never know what we are talking about, nor whether what we are saying is true. People who have been puzzled by the beginnings of mathematics will, I hope, find comfort in this definition, and will probably agree that it is accurate.”

Explanation in a more modern language which Russell, living a century ago, couldn’t have the notion of. Neural networks don’t have to prove they are true, because, as soon as they exist, they are. Mathematics is all about neural networks, proving their equivalences, or building more with them (hence the success of category theory). Hence Russell was wrong: mathematics contains absolute truths, the truths of the neural networks which depict them. 

Bertrand Russell was on the trail which led where yours truly got.

Anyway the point here is to demonstrate, first of all, the role of mathematics in human intelligence, and how it relates to the universe.

That sort of dimensional approach can be extended to other concepts, for example love (sexual, parental, romantic, etc.; love is obviously in some sense very high dimensional… but not in the mathematical sense, because there are no rigorous proofs of the logical equivalence of the various notions of love (said logical equivalences making their own networks)… for the good and simple reason that they are often illusory or false, as they call upon different neurohormonal systems)

Each word is a theory. In normal language, as in mathematics. Neurologically, each word is a network. The concept of elephant is well-known to be made of various attributes, as described by blind men: a tail, tusks, legs like tree trunks, belly like a cave, ear like giant leaves, etc. And it eats trees, doesn’t forget, and can be tamed. So the concept of an elephant is a network.

A mathematical object or concept would often be similar, with various, widely different aspects… but they can be demonstrated to be all equivalent, modulo lots of logic. Math concepts are like the concept of elephant, with various aspects, but logically tied together: where the tail implies the tusks and the trunk, and the ears and the big feet. The number of these neurologically different aspects of one mathematical concept I call the conceptual dimension of that concept.

Let me go on with my little example. “Red” is, literally, a one dimensional concept: a color is more or less red, as the frequency varies along the spectrum. Now the dimension of a function is simply described: a function, or a space, of n arguments, or n coordinates, is n dimensional. So how does the brain work? It has inputs and outputs. Inputs are known as senses. The senses are actually made of dedicated processing organs. For example the “visual area” has 17 or so processing sub-organs. The end result, though, is that “red” is PERCEIVED AS ONE INPUT. So we will call it ONE DIMENSIONAL. For that reason alone? Not quite: electromagnetism literally demonstrates that “red” is indeed a range of frequencies; it’s one dimensional in its fundamental input.

A “parabola” is high dimensional. Why? It is simple: a parabola has several different definitions. “Different” means that they look nothing like each other. They can be proven to be all equivalent, through a lot of mathematics and other keen observations. However, those equivalences are not obvious. Parabolas were known to have wonderful properties… for twenty centuries… before it was discovered that they described the trajectory of a projectile submitted to gravity.

By waging what he called his “War on Mars”, Kepler was able to prove that Mars followed an ellipse. However, it took another 70 years or so before Newton published a more or less finished proof that Kepler’s Three Laws of planetology (including the ellipse) were equivalent to inertia plus the inverse-square-of-the-distance law. This is Newton’s greatest claim to fame (and many astronomers and mathematicians in Paris, from which came the gravitation law, would have liked to prove it… so it was not easy to do). The bottom line is that we have here two completely mathematically equivalent definitions, and one can go from one to the other only through enormously hard work. Another definition of an ellipse, equivalent through more hard work, and known for 24 centuries, is that it’s a particular section of a cone.

So “ellipse”, like parabola, is a concept that is at least three dimensional: it is the equivalence of three completely distinct neural networks.   
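One of those equivalences, between the sum-of-distances definition and the Cartesian equation x²/a² + y²/b² = 1, is the easiest to verify; a numerical sketch (the semi-axes a, b are my arbitrary choices):

```python
import math

def ellipse_focal_sum(a=5.0, b=3.0):
    """For points on x^2/a^2 + y^2/b^2 = 1 (a > b), the sum of distances
    to the two foci (+/-c, 0), with c = sqrt(a^2 - b^2), is constant: 2a."""
    c = math.sqrt(a * a - b * b)
    sums = []
    for k in range(12):  # sample points around the whole ellipse
        t = 2 * math.pi * k / 12
        x, y = a * math.cos(t), b * math.sin(t)
        sums.append(math.hypot(x - c, y) + math.hypot(x + c, y))
    assert all(abs(s - 2 * a) < 1e-9 for s in sums)
    return sums[0]

print(ellipse_focal_sum())  # 10.0, i.e. 2a for a = 5
```

Proving that this same curve is also an inverse-square orbit, or a cone section, is the “enormously hard work” referred to above; the numerical check only touches the easiest leg of the triangle.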

Much mathematics consists in proving that completely different notions and approaches (different neural networks) are equivalent. For example, in differential geometry, the famous Atiyah–Singer index theorem, proved by Michael Atiyah and Isadore Singer (1963), states that for an elliptic differential operator on a compact manifold, the analytical index (related to the dimension of the space of solutions of some operators on the manifold) is equal to the topological index (defined in terms of some topological data/network). That equivalence in turn includes many other theorems, as special cases, and has applications to theoretical physics.

***

Is mathematics the language of the universe? No. The universe doesn’t talk; it just is. Mathematics is the smartest language of Homo Sapiens, talking about the universe in the most abstracted, thus most powerful, fashion!

Traditionally, it is said that Galileo discovered that, without air, a body would follow a parabola (artillery men had long discovered something like that was true). Galileo said: “Philosophy is written in that great book which ever lies before our eyes — I mean the universe — but we cannot understand it if we do not first learn the language and grasp the symbols, in which it is written. This book is written in the mathematical language, and the symbols are triangles, circles and other geometrical figures, without whose help it is impossible to comprehend a single word of it; without which one wanders in vain through a dark labyrinth.”  

And so it goes, all over mathematics. The exponential is an arsenal of theorems. The square root of (-1) even more so. To understand the square root of negative numbers means to understand the complex numbers, the “largest” field (both of the latter words are themselves mathematical concepts, that is, sets of most significant theorems).

The word “red” is already a broad abstraction of a vast field of possibilities. But the exponential, or the complex numbers, or any mathematical concept, can symbolize entire logical systems. Exp and the complex numbers are actually connected by the famous equation: exp(ix) = cos x + i sin x… where i is the square root of minus one. So, in particular, exp(iπ) = -1…
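Euler’s formula can be checked numerically in a couple of lines; a sketch:

```python
import cmath
import math

# exp(i x) = cos x + i sin x, checked at a few sample angles
for x in [0.0, 1.0, math.pi / 3, math.pi]:
    lhs = cmath.exp(1j * x)
    rhs = complex(math.cos(x), math.sin(x))
    assert abs(lhs - rhs) < 1e-12

# And the special case exp(i*pi) = -1, up to rounding error:
print(cmath.exp(1j * math.pi))  # -1 plus a vanishing imaginary part
```

Of course the check is only at machine precision: the identity itself is one of the theorems in the exponential’s arsenal.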

Introducing basic, crucial mathematics to the uncouth multitudes is necessary, as Plato himself proclaimed at the entrance of his Academy… Said multitudes absolutely need a more intuitive grasp of mathematics to become cogent enough about the world to help shepherd our great leaders toward enough sanity to ensure survival of the species. Hence the interest of a nice perspective on parabolas, and on what their different coefficients mean.

Not the easiest method to solve the quadratic equation, of course, as changing variables by taking X = x + b/2 as the new variable is algebraically irresistible and solves the equation in 4 lines or so.
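That substitution (completing the square) can be sketched in a few lines; a minimal solver for the monic case x² + bx + c = 0, names mine:

```python
import math

def solve_quadratic(b, c):
    """Solve x^2 + b x + c = 0 via the substitution X = x + b/2,
    which turns the equation into X^2 = b^2/4 - c."""
    disc = b * b / 4 - c
    if disc < 0:
        return ()  # no real roots
    X = math.sqrt(disc)
    return (-b / 2 - X, -b / 2 + X)  # undo the substitution: x = X - b/2

print(solve_quadratic(-5, 6))  # (2.0, 3.0): x^2 - 5x + 6 = (x-2)(x-3)
```

The linear term vanishes in the X variable, which is the whole trick; the general case ax² + bx + c reduces to this one by dividing through by a.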

Parabolas, and ellipses (both conic sections) were central to Seventeenth Century physics.

However, in the Nineteenth Century, waves rose to prominence: first light as a wave, then Fourier analysis (decomposing periodic motions into sums of cosines and sines), then electromagnetism. It turns out (plenty of theorems) that all these come from the exponential!

Without a thorough grasp of exponentials, phenomena such as the CO2 catastrophe, or pandemics, can only escape the understanding of the commons or of god-struck politicians. Exponentials grow at an instantaneous speed equal to their instantaneous value… exactly like a bacterial colony. Most catastrophes involve exponentials. Exponentials also describe all sorts of decays and, glued together, the most frequent probability distributions.
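That defining property, growth speed equal to current size, can be sketched by finite steps, ultrafinitist style (step count is my arbitrary choice):

```python
import math

def grow(n0=1.0, rate=1.0, steps=100_000, t_end=1.0):
    """Integrate dN/dt = rate * N in small finite steps: at every step,
    the colony's growth is proportional to its current size."""
    dt = t_end / steps
    n = n0
    for _ in range(steps):
        n += rate * n * dt
    return n

# After one unit of time at unit rate, the colony has multiplied by ~e.
print(grow(), math.exp(1.0))  # the finite-step result approaches e = 2.718…
```

Refining the step size drives the finite-step result toward exp(t_end), which is how the exponential shows up in every growth and decay process.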

***

Math beauty, the beauty of neural networks. Neural networks give us power, and we find that beautiful…

HIGH POWER CONCEPTS HAVE HIGH DIMENSION:

All this goes meta. Example: the concept of “Coronavirus” (“Crown Shaped Virus”). Antivirals against some type of Coronaviruses act against others (Remdesivir). So what is logically connected can be collectively treated. This is why broad concepts feed intelligence, thus action power.

By this I mean (rough) equivalences of foundations themselves form high dimensional conceptual objects: Category Theory is, by itself, such an object.

Another, more practical example: Infinitesimal Calculus. Infinitesimal Calculus has many different definitions, more or less equivalent, the earliest dating back to Archimedes, and then another one, which I call the Infinitesimal Geometric Calculus, developed by the Buridan school in the Fourteenth Century (this is the one Newton used). The more recent definitions of infinitesimals (Robinson et al.) are from the Twenty-first Century (2006, Karel Hrbacek). This means the field is still fully active research! More dimensions to be added!

This makes Infinitesimal Calculus, according to my definition, a very high dimensional object. Refined, high dimensional thinking was of course hated by the terroristic, mentally simplistic Roman Catholic Church. Accordingly, Infinitesimals were the subject of political and religious controversies in 17th century Europe, including a ban on infinitesimals issued by clerics in Rome in 1632! (Notice that this was long before the birth of Leibniz or Newton, to whom the creation of calculus is often erroneously attributed by Anglo-German tribalists…)

Mathematics is the language whose words are ready-made sets of powerful thoughts (for example word-concepts such as “parabola”, or the “exponential”, come with an arsenal of thoughts and inner logic).

By learning to speak and think math, we learn a metalanguage, the most powerful language humanity has written, and keeps writing, whose elements belong to, and depict, the world. Mathematics and, even more, logic are the skeleton of physics, and the latter is how the world is made. To have more advanced thoughts on what the world is made of, they are not just the eyes, but the senses one can’t do without.

One could call mathematics the Post-Prehistoric Language. [3]

In any case, mathematics is the surest, inescapable way to more powerful thinking. [4] Even the lousiest pseudo-philosophers nowadays know some mathematics more important than Archimedes himself did (a truly, horrendously offensive thought!). The more advanced thinking they got imprinted with in primary school, much of it mathematical, helps to explain why even the lousiest official thinkers nowadays are smarter than the Ancients.

When communicating mathematics, one communicates with entire, high dimensional logical systems.[5] Thus the language is hyper powerful: it has huge logical bandwidth.

Patrice Ayme

***

***

[1] Plato famously interdicted access to his Academy to all non-mathematicians. The essay above explains why. Top philosophy can’t indulge mental retards too much, outside of the lab where one studies them. Mastery of contemporary math ensures some minimum standard of intellectual capability.

By the way, my neurological network definition of mathematics shows that the Platonic world of math, out there was… all along inside Plato’s head. Or the heads of all mathematicians (including those in kindergarten…) 

***

[2] Range of frequencies is of course the post-Maxwell description/explanation… Now prehistoric man would have shrugged that he knew red when he saw it: in sunsets, blood, bauxite, flowers… That comes down to the same excitement of the brain, in the same way each time, a particular pattern: there is no logic to it.

***

[3] “Postmodernism” means, of course, nothing, because when was “modernism”? When William the Conqueror suggested that the Earth turned around the Sun, before freeing all the slaves of England, while his friend the Abbot Berengar was suggesting that Reason was what was meant by God (to the impotent fury of the Vatican)? That was during the Eleventh Century… Whereas “Prehistory”, defined as what came before the Neolithic (because the Neolithic is entering history, thanks to lots of archeology), is certainly a well-defined notion. Prehistoric men knew concepts such as red, as in bloody sunsets, very well. But they had little notion of parabolas… except of course, in practice, when they threw a projectile at a prey or predator…

***

[4] Learning math doesn’t guarantee wisdom, especially not anti-fascist wisdom: witness Plato. The deplorable “modern” case being Kant. Kant started as an astronomer, a co-discoverer of the concept of galaxy. He should have stuck to that, instead of helping to turn hundreds of millions of Germans (over a few generations) into moralizing murder robots.

Many people are full of hatred, and they don’t even suspect it. Worse: the Zeitgeist, the spirit of the times, is to pretend that there is such a thing as good, moralizing people, bereft of hatred. A contradictio in adjecto.

Philosophically, of course, Kant’s most important characteristic was that of an enslaving pre-Nazi robot, proving mathematics produces plenty of idiot savants. Nietzsche, an excellent philosopher, was no mathematician, but a philologist (a lover of the logos, of the interpretation of the meaning of texts; recently the term hermeneutics is preferred because it sounds more savant).

Descartes, of course, was one of the greatest minds and a very astute psychologist… and used psychology to further math, by forcing math into more useful logic… something I also advocate in my stance relative to infinity! A lot of top scientists were top philosophers, having to invent new philosophy to invent new physics (Maxwell’s identification of electromagnetism and light, Boltzmann’s murky states, and Poincaré’s local space and time being obvious examples). And of course the Foundations of Quantum Physics are a philosophical abyss, questioning time, space, and reality itself into an uncertain, not to say ethereal, medium…

***

[5] The dimension of a logical system is the minimal number of axioms in its axiomatics. Don’t look it up: I invented the notion. It boils down to the usual definition of dimension in a manifold (after subtracting the axioms in common).

What Are Numbers? Math is most abstracted physics!

June 27, 2019

German mathematician Richard Dedekind (1831–1916) published in 1888 a paper entitled Was sind und was sollen die Zahlen? What are numbers and what should they be? 

Here is my answer: forget what you know. 

Numbers are neural networks. Small numbers have small networks; big ones, big networks; so the nature of numbers changes as they get bigger… (according to me, listening, delighted, to the indignant screams of distant mathematicians).

Diagram chasing runs through all of them: not a coincidence. Instead of having “It from Bit”, one has it from action (the arrow in Category Theory, the action potential in neurons, the fundamental process in physics…)

A few immediate applications of this master idea:

  1. Numbers are learned, because neural networks are learned.
  2. Advanced animals, having advanced neural networks, should be capable of having those neural networks we call numbers.
  3. Big numbers are different from small numbers, because big neural networks are different from small ones. Here again is the idea that energy should matter in mathematics (the conventional thinking being just the opposite: energy doesn’t matter).

***

Kronecker also quipped: “God made the natural numbers. Everything else is the work of man.”

Numbers were then defined from Set Theory, invented for the purpose (by Dedekind, Frege, and Cantor). Later Bertrand Russell found a problem with Set Theory: the set of sets which are not elements of themselves brought a contradiction. Russell tried to get out of that with a hyper-complicated theory of types. In modern times, mathematicians prefer to use Category Theory. [1]

I go beast on how to construct numbers. Beasts have brains, and brains have neural networks.

Kronecker thought mathematics is the work of man. But, actually all advanced animals move in a way proving they are capable of differential calculus. Far from being the work of god, differential calculus is the “work” of dog. Without differential calculus, that dog can’t hunt. OK, dog is not conscious of god, or of the calculus it’s using. So what?   

Now for a few easy bits:

*** 

Let’s notice that numbers are definitely the work of the genus Homo: 

Consider the integer 152. 152 is the work of man. Just like “Yes” is the work of the Englishman. 

152 means: 1×100 + 5×10 + 2. But that’s only in base ten. In base 60, the same digits would mean: 1×(60×60) + 5×60 + 2… which converts to 3,902 back in base ten. 

So “152” is not an absolute notion. For that integer to make sense, the base in which it lives has to be expressed (and what the notation means, such as 2 = 1+1…). The Babylonians invented base 60 to handle big numbers in astronomy. We still use base 60 to this day, for angles and time. So “152” is a cultural construction. In several ways. 
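The point that “152” only names a number relative to a chosen base can be checked mechanically. A minimal sketch (the function name is mine):

```python
def digits_to_int(digits, base):
    """Interpret a list of digits (most significant first) in the given base."""
    value = 0
    for d in digits:
        assert 0 <= d < base, "each digit must be smaller than the base"
        value = value * base + d
    return value

print(digits_to_int([1, 5, 2], 10))  # the familiar 152, read in base ten
print(digits_to_int([1, 5, 2], 60))  # the same three digits, read in base 60 -> 3902
```

Same marks on the clay tablet, different numbers: the digits only acquire a value once the cultural convention (the base) is supplied.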

***

So how come Platonists claim that numbers live out there, in a special realm of their own, if so much human explanation and convention has to be provided for just basic numbers? Most mathematicians also believe they are exploring that realm of Plato’s. But actually all they are exploring is the possible connections which can be built within the neural networks inside their brains. So they are exploring physics, a bit like a child on a beach exploring which sand castles she can get away with. A difference with building sand castles is that the possibilities are few and are carefully recorded, becoming the body of that culture and language called “mathematics”. 

An example is the Archimedean axiom. The Greeks knew it well: it’s in Euclid, and it says that, given two positive magnitudes, A and B, there is always an integer n so that: nA > B.
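The axiom is concrete enough to run; a sketch (the function name is mine) that finds the smallest such n by brute force:

```python
def archimedean_n(A, B):
    """Smallest positive integer n with n*A > B (A, B positive magnitudes)."""
    assert A > 0 and B > 0
    n = 1
    while n * A <= B:
        n += 1
    return n

# However tiny A is next to B, some finite multiple of A overtakes B:
print(archimedean_n(0.25, 10.0))  # 41, since 41 * 0.25 = 10.25 > 10
```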

If one denies that axiom, one gets infinitesimals… That was made rigorous through Model Theory in the 1960s (Abraham Robinson’s non-standard analysis), three centuries after Leibniz first introduced infinitesimals, starting a fight with Newton.

No Plato universe of “forms”… or rather, they exist, but live as geometries inside brains…

Even more dramatic are hyperbolic and elliptic geometries: they were discovered at least a century before Euclid. Then they were forgotten, and a stupid debate occurred for 21 centuries about whether the parallel axiom (one parallel to a line, one only, through a point off the line) was independent of the others. Mathematicians, even the brightest, had forgotten that their ancestors had found geometries with many, or no, parallels…

***

Let’s recapitulate: culture is composed of (vague, but good enough) descriptions of neural networks, which can be transmitted. Once contracted, those neural network templates modify brains in similar ways. Those similarly modified brains all behave similarly, mimicking innate characteristics.  

Language enables a transmission of neural geometries, topologies, logics, and categories. Language is primitive in most advanced animals, consisting of grunts, cooing, gestures, etc. But in Homo, language became an advanced mental-cultural duplication system (and some of the mentality passed on is mathematical, but not only). 

True, advanced animals have a sort of pseudo-innate capability to evolve neurobiological mathematical structures: through trial and error, mimicking their relatives, or experimenting with what works, young animals’ brains learn to optimize trajectories: the brains of many predators in pursuit make subsets of themselves into differential calculus machines. 

So if Plato’s “forms” are real forms in (generalized) geometry and topology… what are the latter made of? Good question! Therein comes our old friend, the Quantum Wave… 

Clearly, math is the most abstracted physics.

Patrice Ayme

DOING AWAY WITH INFINITY SOLVES MUCH MATH & PHYSICS

January 11, 2018

Particle physics: fundamental physics is frustrating physicists. “No GUTs, no glory”, intones The Economist, January 11, 2018. Is this partly caused by fundamental flaws in logic? That’s what I have long suggested.

Says The Economist:“Persistence in the face of adversity is a virtue… physicists have been nothing if not persistent. Yet it is an uncomfortable fact that the relentless pursuit of ever bigger and better experiments in their field is driven as much by belief as by evidence. The core of this belief is that Nature’s rules should be mathematically elegant. So far, they have been, so it is not a belief without foundation. But the conviction that the truth must be mathematically elegant can easily lead to a false obverse: that what is mathematically elegant must be true. Hence the unwillingness to give up on GUTs and supersymmetry.”

Mathematical elegance? Define mathematics, define elegance. What is mathematics, already? What may be at fault is the logic, that is, the mathematics, brought to bear in present-day theoretical physics. And I will say even more: all of today’s logic may be at fault (what logic is, is itself the deepest problem in logic…). It’s not just physics which should tremble. The Economist gives a good description of the developing situation, arguably the greatest standstill in physics in four centuries:

“In the dark

GUTs are among several long-established theories that remain stubbornly unsupported by the big, costly experiments testing them. Supersymmetry, which posits that all known fundamental particles have a heavier supersymmetric partner, called a sparticle, is another creature of the seventies that remains in limbo. ADD, a relative newcomer (it is barely 20 years old), proposes the existence of extra dimensions beyond the familiar four: the three of space and the one of time. These other dimensions, if they exist, remain hidden from those searching for them.

Finally, theories that touch on the composition of dark matter (of which supersymmetry is one, but not the only one) have also suffered blows in the past few years. The existence of this mysterious stuff, which is thought to make up almost 85% of the matter in the universe, can be inferred from its gravitational effects on the motion of galaxies. Yet no experiment has glimpsed any of the menagerie of hypothetical particles physicists have speculated might compose it.

Despite the dearth of data, the answers that all these theories offer to some of the most vexing questions in physics are so elegant that they populate postgraduate textbooks. As Peter Woit of Columbia University observes, “Over time, these ideas became institutionalised. People stopped thinking of them as speculative.” That is understandable, for they appear to have great explanatory power.”

A lot of the theories found in theoretical physics “go to infinity”, and a lot of their properties depend upon infinite computations (for example “renormalization”). Also a lot of the problems which, say, “supersymmetry” tries to “solve” have to do with turning around infinite computations which go mad for all to see. For example, a plethora of virtual particles makes Quantum Field Theory predict a vacuum energy which misses reality by a factor of 10^120 (one followed by 120 zeroes…). Thus, curiously, Quantum Field Theory is both the most precise and the most false theory ever devised. Confronted with all this, physicists have tried to do what has NOT worked in the past, sometimes for centuries: looking for the intellectual keys below the same lighted lamp post, and counting the same angels on the same pinhead.

A radical way out presents itself to simplify the situation. It is itself very simple. And it is global, clearing out much of logic, mathematics and physics, of a dreadful madness which has seized those fields: GETTING RID OF INFINITY… at the logical level. Observe that infinity itself is not just a mathematical hypothesis, it is a mathematically impossible hypothesis: infinity is not an object. Infinity has been used as a device (for computations in mathematics). But what if that device is not an object, is not constructible?

Then lots of the problems theoretical physics try to solve, a lot of these “infinities“, simply disappear. 

Colliding Galaxies In the X Ray Spectrum (Spitzer Telescope, NASA). Very very very big is not infinity! We have no time for infinity!

A conventional way to get rid of infinities in physics is to cancel particles with particles: “as a Higgs boson moves through space, it encounters “virtual” versions of Standard Model particles (like photons and electrons) that are constantly popping in and out of existence. According to the Standard Model, these interactions drive the mass of the Higgs up to improbable values. In supersymmetry, however, they are cancelled out by interactions with their sparticle equivalents.” Having a finite cut-off would do the same.

A related logic creates the difficulty with Dark Matter, in my opinion. Here is why. Usual Quantum Mechanics assumes the existence of infinity in its basic formalism. This brings the non-prediction of Dark Matter. Some physicists will scoff: infinity? In Quantum Mechanics? However, the Hilbert spaces which the Quantum Mechanical formalism uses are often infinite in extent. Crucial to the Quantum Mechanics formalism, but still extraneous to it, festers a ubiquitous instantaneous collapse (semantically partly erased as “decoherence” nowadays). “Instantaneous” is the inverse of “infinite” (in perverse infinity logic). If the latter has got to go, so does the former. As it is, Quantum Mechanics depends upon infinity; removing the latter requires us to change the former.

Laplace did exactly this with gravity around 1800 CE. Laplace removed the infinity in gravitation, which had aggravated Isaac Newton, a century earlier. Laplace made gravity into a field theory, with gravitation propagating at finite speed, and thus predicted gravitational waves (relativized by Poincaré in 1905).

Thus, doing away with infinity makes GUTS’ logic faulty, and predicts Dark Matter, and even Dark Energy, in one blow.

If one new hypothesis puts in a new light, and explains, much of physics in one blow, it has got to be right.

Besides, doing away with infinity would clean out a lot of hopelessly all-too-sophisticated mathematics, which shouldn’t even exist, IMHO. By the way, computers don’t use infinity (as I said, infinity can’t be defined, let alone constructed).

Sometimes one has to let go of the past, drastically. Theories of infinity should go the way of those crystal balls theories which were supposed to explain the universe: silly stuff, collective madness.

Patrice Aymé

Notes: What do I mean by infinity not being constructible? There are two approaches to mathematics: 1) counting on one’s digits, out of which comes all of arithmetic. If one counts on one’s digits, one runs out of digits after a while, as any computer knows, and as I have made into a global objection, by observing that, de facto, there is a largest number (contrary to what fake, yet time-honored, 25-century-old “proofs” pretend to demonstrate; basically the “proof” assumes what it pretends to demonstrate, by claiming that, once one has “N”, there is always “N + 1”).
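The “running out of digits” point is familiar from machine arithmetic. A sketch (the 64-bit wraparound is simulated by hand, since Python’s own integers are unbounded):

```python
import sys

# A 64-bit machine word has a largest representable signed integer;
# one step past it wraps around in two's-complement arithmetic:
word_max = 2**63 - 1
wrapped = (word_max + 1) - 2**64
print(word_max, wrapped)       # 9223372036854775807 -9223372036854775808

# Floating point is blunter still: past the largest finite float lies "inf",
# a flag for overflow rather than a constructed number.
print(sys.float_info.max * 2)  # inf
```

Whether one takes this as an engineering limitation or, as argued here, as a hint about the nature of number, is exactly the point in dispute.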

2) Set theory. Set theory is about sets. An example of a “set” could be the set of all atoms in the universe. That may, or may not, be “infinite”. In any case, it is not “constructible”, not even amenable to extended consideration, precisely because it is so considerable (conventional Special Relativity, let alone basic practicality, prevents that; Axiomatic Set Theory, in the wake of Bertrand Russell’s paradox, has tried to turn around infinity with the notion of a proper class…)

In both 1) and 2), the infinite can’t be fully considered, precisely because it doesn’t finish.

Some will scoff that I am going back to Zeno’s paradox, being baffled by what baffled Zeno. But I know Zeno, he is a friend of mine. My own theory explains Zeno’s paradox. And, in any case, so does Cauchy’s theory of limits (which depends upon infinity only superficially; even infinitesimal theory, aka non-standard analysis, from Leibniz + Model Theory, survives my scathing refounding of all of logic, math, and physics).  

By the way, this is all so true that mathematicians have developed still another notion, which makes logic local, de facto, and spurns infinity: namely, Category Theory. Category Theory is very practical, but also an implicit admission that mathematicians don’t need infinity to do mathematics. Category Theory has now become fashionable in some corners of theoretical physics.

3) The famous mathematician Brouwer threw out some of the famous mathematical results he had himself established, on grounds somewhat similar to those evoked above, when he promoted “Intuitionism”. The latter field was started by Émile Borel and Henri Lebesgue (of the Lebesgue integral), two important French analysts viewed as semi-intuitionists. They elaborated a constructive treatment of the continuum (the real line, R), leading to the definition of the Borel hierarchy. For Borel and Lebesgue, considering the set of all sets of real numbers is meaningless, and therefore has to be replaced by a hierarchy of subsets that do have a clear description. My own position is much more radical, and can be described as ultra-finitism: it does away even with so-called “potential infinity” (this is how I get rid of many infinities in physics, which truly are artefacts of mathematical infinity). I expect no sympathy: thousands of mathematicians live off infinity.

4) Let me help those who want to cling to infinity. I would propose two sorts of mathematical problems: 1) those which can be solved when considered in Ultra Finite mathematics (“UF”); 2) those which stay hard, not yet solved, even in UF mathematics.

Free Will Destroys The Holographic Principle

February 12, 2017

Abstract: Many famous physicists promote (themselves and) the “Holographic Universe” (aka the “Holographic Principle”). I show that the Holographic Universe is incompatible with the notion of Free Will.

***

When studying Advanced Calculus, one discovers situations where the information on the boundary of a locale enables one to reconstitute the information inside. From my mathematical philosophy point of view, this phenomenon is a generalization of the Fundamental Theorem of Calculus, which says that the sum of the infinitesimals df over an interval is equal to the net value of the function f on the interval’s boundary.
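In modern notation, this boundary principle is the generalized Stokes theorem, of which the Fundamental Theorem of Calculus is the one-dimensional case:

```latex
\int_M d\omega \;=\; \int_{\partial M} \omega,
\qquad\text{which for } M=[a,b],\ \omega=f \text{ reduces to }\qquad
\int_a^b f'(x)\,dx \;=\; f(b) - f(a).
```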

The Fundamental Theorem of Calculus was discovered by the French lawyer and MP, Fermat, usually known rather for proposing a theorem in Number Theory which took nearly 400 years to be proven! Fermat actually invented calculus, a bigger fish he landed while Leibniz’s and Newton’s parents were in diapers.

As Wikipedia puts it, inserting a bit of francophobic fake news for good measure: “Fermat was the first person known to have evaluated the integral of general power functions. With his method, he was able to reduce this evaluation to the sum of geometric series.[10] The resulting formula was helpful to Newton, and then Leibniz, when they independently developed the fundamental theorem of calculus.” (Independently of each other, but not of Fermat; Fermat published his discovery in 1629. Newton and Leibniz were born in 1642 and 1646…)  

Holography is a fascinating technology.  

Basic Setup To Make A Hologram. Once the Object, The Green Star, Has Fallen Inside A Black Hole, It’s Clearly Impossible To Make A Hologram of the Situation, If Free Will Reigns Inside the Green Star.


The objection is similar to one made in Relativity with light: if one went at the speed of light (supposing one could) and looked into a mirror, the light to be reflected could never catch up with the mirror. Hence, once reaching the speed of light, one could not see oneself in a mirror. Einstein claimed he got this idea when he was 16 years old (cute, but by then others had long figured out the part of Relativity pertaining to that situation…).

My further objection below is going to be a bit more subtle.

***

Here Is The Holographic Principle As Described In Wikipedia:

The holographic principle is a principle of string theories and a supposed property of quantum gravity that states that the description of a volume of space can be thought of as encoded on a lower-dimensional boundary to the region—preferably a light-like boundary like a gravitational horizon. First proposed by Gerard ‘t Hooft, it was given a precise string-theory interpretation by Leonard Susskind[1] who combined his ideas with previous ones of ‘t Hooft and Charles Thorn.[1][2] As pointed out by Raphael Bousso,[3] Thorn observed in 1978 that string theory admits a lower-dimensional description in which gravity emerges from it in what would now be called a holographic way.

In a larger sense, the theory suggests that the entire universe can be seen as two-dimensional information on the cosmological horizon, the event horizon from which information may still be gathered and not lost due to the natural limitations of spacetime supporting a black hole, an observer and a given setting of these specific elements,[clarification needed] such that the three dimensions we observe are an effective description only at macroscopic scales and at low energies. Cosmological holography has not been made mathematically precise, partly because the particle horizon has a non-zero area and grows with time.[4][5]

The holographic principle was inspired by black hole thermodynamics, which conjectures that the maximal entropy in any region scales with the radius squared, and not cubed as might be expected. In the case of a black hole, the insight was that the informational content of all the objects that have fallen into the hole might be entirely contained in surface fluctuations of the event horizon.
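The radius-squared scaling quoted above is the Bekenstein–Hawking entropy, proportional to the horizon area rather than the enclosed volume:

```latex
S_{BH} \;=\; \frac{k_B\,A}{4\,\ell_P^2},
\qquad A = 4\pi R_s^2,
\qquad R_s = \frac{2GM}{c^2},
\qquad \ell_P = \sqrt{\frac{\hbar G}{c^3}}.
```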

***

The Superficiality Principle Rules:

I have long suspected that physicists and mathematicians are taken by the beauty of the simplification of knowing the inside from the outside. It’s a sort of beauty-fashion-model way of looking at the world. It fails miserably with Black Holes.

To figure this out, one needs to know one thing about Black Holes, and another in philosophy of mind.

***

FREE WILL DEMOLITION OF THE HOLOGRAPHIC PRINCIPLE:

My reasoning is simple:

  1. Consider a Black Hole so large that a human being can fall into it without being shredded by tidal effects. A few lines of high school computation show that a Milky Way sized volume with the density of air on Earth is a Black Hole: light falling into it cannot come back. (Newton could have made the computation, and Laplace did it.)
  2. So here we have this Human (call her H), falling in the Milky Way Air Black Hole (MWAB).
  3. Once past the boundary of the Black Hole, Human H cannot be communicated with from the outside of the boundary (at least from known physics).
  4. What the Holographic proponents claim is that they can know what is inside the MWAB.
  5. Suppose that Human H decides to have scrambled eggs for breakfast instead of pancakes. The partisans of the Holographic Universe claim that they had the information already. However they stand outside of the MWAB, the giant Black Hole, and cannot communicate with its interior. Nevertheless, Susskind and company claim they knew it all along.
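The “few lines of high school computation” in step 1 can be sketched as follows (constants rounded; the Milky Way radius is a rough figure):

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
rho_air = 1.2        # density of sea-level air, kg/m^3

# A uniform ball of density rho matches its own Schwarzschild radius
# (R = 2GM/c^2 with M = (4/3)*pi*R^3*rho) once R reaches:
R_crit = math.sqrt(3 * c**2 / (8 * math.pi * G * rho_air))
print(f"{R_crit:.2e} m")         # ~1.2e13 m, roughly 80 astronomical units

# The Milky Way's radius is about 5e20 m, enormously larger:
milky_way_radius = 5e20
assert R_crit < milky_way_radius
```

So a ball of ordinary air a mere 80 AU across is already a black hole; a Milky-Way-sized one is far past that threshold, with utterly gentle tidal forces at its boundary.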

That is obviously grotesque. (Except if you believe Stanford physicists are omniscient, omnipotent gods, violating known laws of physics: that is basically what they claim.)

This is not as ridiculous as the multiverse (the most ridiculous theory ever). But it’s pretty ridiculous too. (Not to say that the questions Free Will leads to in physics are all ridiculous: they are not, especially regarding Quantum Theory!)

By the way, there are other objections against the Holographic Universe having to do with the COSMOLOGICAL Event Horizon (in contradistinction to those generated by Black Holes). Another time…

***

We Are Hypocrites, So We Live From Fake News:

Tellingly, the men promoting the Holographic Universe are Nobel Laureates, or the like. Such men tend to be very ambitious, full of Free Will, ready to say or do anything to dominate (I have met dozens in person). It is revealing that so great is their Free Will that they are ready to contradict what they are all about, to make everybody talk about themselves, and to promote their already colossal glories.

Patrice Ayme’


SEQUENTIAL LOGIC

New logic solving 25 centuries old logic problems such as the Liar Paradox And Incorporating Spirits of Quantum Logic, Local Time, And Local Truth. More General Than PDL ,

Croatian View

From Croatian perspective

NotPoliticallyCorrect

Human Biodiversity, IQ, Evolutionary Psychology, Epigenetics and Evolution

Of Particular Significance

Conversations About Science with Theoretical Physicist Matt Strassler

Rise, Republic, Plutocracy, Degeneracy, Fall And Transmutation Of Rome

Power Exponentiation By A Few Destroyed Greco-Roman Civilization. Are We Next?

SoundEagle 🦅ೋღஜஇ

Where The Eagles Fly . . . . Art Science Poetry Music & Ideas

Artificial Turf At French Bilingual School Berkeley

Artificial Turf At French Bilingual School Berkeley

Patterns of Meaning

Exploring the patterns of meaning that shape our world

West Hunter

Omnes vulnerant, ultima necat

GrrrGraphics on WordPress

www.grrrgraphics.com

Skulls in the Stars

The intersection of physics, optics, history and pulp fiction

Patrice Ayme's Thoughts

Trying To Think Better By All & Any Means. To Be Human Is To Unleash As Much Intelligence As Possible, Instincts & Values Flow, Even Happiness. History and Science Teach Us Not Just Humility, But Power, Smarts, And The Ways We Should Embrace. Naturam Primum Cognoscere Rerum

Learning from Dogs

Dogs are animals of integrity. We have much to learn from them.
