Posts Tagged ‘Brain’

Entrepreneurship Philosopher Ayn Rand Objectively Limited And Not Randy… And Yet What Europe Needs…

March 15, 2026

While the USA, trailed by China, is heading to the stars… one should not make too much fun of Ayn Rand, a sort of chipmunk philosopher for entrepreneurial America… Her simplistic mindset had a huge, positive influence on US GDP, and Europe would be well advised to read the childish Ms Rand, because Europe is affected by a severe case of anti-Randism, namely the coercion of individual business activity, strangled by taxes, regulations, plutocratic renormalization, censorship, and crypto demonicity (the multidecade ugly European collaboration with the Kremlin tyrant being the best example).  

US president Ronald Reagan and Alan Greenspan, long the head of the US central bank, professed to be followers of Rand. Donald Trump has called Rand his favorite writer and said he identifies with Howard Roark, the protagonist of The Fountainhead, an architect who dynamited a housing project he designed because the builders did not precisely follow his blueprints. Nietzsche with dynamite.

Her full name was Alisa Zinov’yevna Rosenbaum (Russian: Алиса Зиновьевна Розенбаум), born in Saint Petersburg; she later adopted the pen name Ayn Rand. Her father’s pharmacy was confiscated by the followers of Lenin. She fled in 1926 and, while in Hollywood, met director Cecil B. DeMille, who provided her with a job as an extra and then as a screenwriter… A providential Atlas did not shrug for Rand… She was very successful as a writer. Her family was refused exit visas and died during the siege of Leningrad.  

Meanwhile, in New York, where she oversaw her Broadway plays, the Austrian School economist Ludwig von Mises came to appreciate her. Despite philosophical differences, Rand strongly endorsed him and the economic journalist Henry Hazlitt, and both expressed admiration for her. Mises called her “the most courageous man in America”, a compliment that particularly pleased her because he said “man” instead of “woman”.

Rand was married to the same actor for 50 years and had a cult-like following (even after her death)…

Ayn Rand’s novel Atlas Shrugged is a monumental work that encapsulates her philosophy of Objectivism. Rand rejects altruism as a moral ideal. Instead, she argues that rational self-interest is the only ethical foundation for human action. However, what motivates most people is looking good, prancing, influencing, captivating, capturing, seducing, cohabitating with, cooking for, sleeping with, and talking the heads off of other people.

Rand meant well, but she was silly, a chicken spreading her wings and looking terrible, as if she were a snake-killing Secretary Bird. 

Key Quote: “I swear by my life and my love of it that I will never live for the sake of another man, nor ask another man to live for mine.” (John Galt’s Oath). 

Stupid. His (mythical) parents lived for his sake. Most parents live for their children’s sake. Most people who succeed in society profited from mentor individuals or mentor organizations. We are all culture creatures, that is, other people’s creatures… Rand herself enjoyed a 50-year marriage to a totally devoted husband.

She didn’t stop there. Rand made strong attacks against the notion of common good: “The common good is an undefined and undefinable concept, a moral blank check for those who attempt to embody it.”

When the common good of a society is regarded as something apart from and superior to the individual desires of its members, “it means that the good of some men takes precedence over the good of others, with those others consigned to the status of sacrificial animals.”

This was written as National SOCIALISM and International Socialism were trying to overpower all of Europe…

Well before Harvard got flooded by French Theory, Rand’s philosophy was updated in 1974 by Harvard philosopher Robert Nozick in his bestselling book Anarchy, State, and Utopia. Nozick argued that individual rights are the only justifiable foundation for a society. Instead of a common good, he wrote, “there are only individual people, different individual people, with their own individual lives.”

That was in contrast with the reality of the USA. In 1945, the USA found itself with 16 million young men armed and trained to kill in the name of democracy. So they were provided with all they wanted: free studies, jobs (women were redirected towards child bearing), housing, freeways, suburbs, cars, washing machines, cheap oil, and a submitted and admiring world at the USA’s fingertips.

What went wrong? The rise of plutocracy. US plutocracy had been somewhat restrained by anti-monopoly acts, but then grew immensely, mostly overseas, first by running the Franco-British blockade of the Kaiser in World War One and then ruining France and Britain through credit during the war… while exploiting the victory of the two European democracies after 1918… Making Lenin chuckle that the capitalists were so greedy they would sell the rope to hang them with… But the Americans would develop the Baku OFFSHORE oil fields and make Germany into a new wild west, complete with adoring Nazis.

US plutocrats kept a light touch inside the USA as long as those dangerously motivated young men were the bulk of the work force, and US steel workers lived as well as European CEOs… Meanwhile US plutocracy expanded overseas, under the cover of its decolonization propaganda. Things changed under Nixon, who made health care a corporate profit center under the guidance of his friend the multibillionaire engineer Kaiser, while going to China to arrange a deal: cheap labor for US plutocrats in exchange for capital from US plutocrats, ensuring a 100% US plutocrat victory… As the plutocrats bought all US media, We The People Of The USA became bleating sheep…

***

Culture extends over millennia and millions of creators. That is, upon altruism. Rand also seemed to ignore the science side of things… The fact is, scientists and thinkers in general are motivated by teaching others, revealing their discoveries (granted Socrates claimed the Sophists were only motivated by money whereas true thinkers like him, Socrates, and his students, were pure altruists). 

Ayn Rand also ignored the fact that new science is often government funded (deep down, research institutions are government funded and have been for 2 millennia…).

***

Following Descartes like a machine, Ayn Rand extols reason, identifying it with the Artificial Intelligence we have now:

Rand elevates RAW reason as the supreme tool for human survival and flourishing:

She called her philosophy “Objectivism”. A sort of hardcore Cartesianism for beancounters.

Key Quote: “Reason is man’s only means of knowledge, his only standard of truth, his only guide to action.”

Actually, E-motion is what moves the mind. Reason rests on neural networks, now known to be ever more complex as they reach down to the nanotube level (the 2025 discovery of a new connectome)… Those networks entangle chemistry (emotion) and signaling through electric or molecular transport, and it’s not clear how the latter is different from emotion…

Human beings are passionate. That’s their fuel. In her personal life, and in the life of her mind, Rand does not seem to have been so randy that she would realize the importance of emotions (although she had a 14-year affair followed by what she perceived as betrayal…).

Emotions move us.

Human beings are very loving and compassionate; they could not exist without love and compassion, and most have experienced them during childhood. I sure did. However, human beings are also potentially cannibalistic, and that’s not the worst. Rand came up short on both counts. She is not important for the history of thought directly. However, she had a very positive impact on US GDP, especially relative to Europe, and that, in turn, has a potentially positive impact on the survival of humanity, hence of thought. 

Patrice Ayme

 

Rand distinguished strongly between entrepreneurship and evil. No doubt the former is great. But ultimately, if too successful, it falls into oligarchy and from there, plutocracy…

LIAR PARADOX DISSOLVED FROM SEQUENTIAL LOGIC OF BRAIN

November 18, 2025

Abstract: Specialists know that the foundations of logic are probably not on a sound basis. We propose a new approach: NEW LOGIC for an old problem. We point out that, in its simplest version, the Liar Paradox (LP) can be seen as an example of failed communication, an incomplete statement.

However, the suggestions made in the 20th century by Russell and Tarski, inter alia, for solving the Liar Paradox did not recognize this and instead pretended to solve LP by introducing infinite hierarchies (which we reject, with the rest of the notion of infinity, as ultimately non-rigorous solutions, because they are non-physical… while we recognize the utility of “types” and “meta” as forms of mental shorthand).

We propose instead a new logic, SEQUENTIAL LOGIC, an alternative to Propositional Logic (also called 0th order logic). Sequential Logic mimics the basic functions of the brain, so it is local, topological, ordered and temporal. It’s also endowed dynamically with truth, falsehood, and incomplete values…

The atomic elements of Sequential Logic are Operations, which are more or less identical to Neural Networks, or strings thereof… Indeed, operations are implemented by Neural Networks and can be identified with them… Those Operations/Neural Networks are the equivalent of the propositions that are the atomic statements of Propositional Logic… The latter are used to make strings in PL. Similarly, strings of Operations/Neural Networks make formulas in SL.

In Propositional Logic, if A is a formula, NON A is also a formula (it’s an axiom). Not so, necessarily, in Sequential Logic. However, if A is activated in SL it can be true, false, or incomplete: that has no equivalent in PL (the proper comparison ought to be with CPL + QT). The other axioms of PL stay pretty much the same in SL (if A → B and B → C, then A → C, etc.). 

Real neural networks in the brain use dopamine bursts in the midbrain to help ascertain truth, and somewhat related, but opposite, dopamine bursts in a different part of the brain, the prefrontal cortex, to ascertain falsehood. The brain processes truth pleasantly, differently from falsehood, which is so alarming it gets dispatched to more advanced parts of the brain.

Similarly, the formulas (strings of neural networks) of Sequential Logic can be true or can be false (and if neither, incomplete).

In Sequential Logic, logical evaluation is temporally extended and contextually constructed — reference and predication occur in ordered neural events, not in atemporal structures. In particular, a sequence like A → NON A → A is possible. Viewed as a contradiction in traditional logic, it is not a contradiction in SL: it corresponds to cyclical activity within the brain, a cyclical activity similar to any such activity in nature.
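The sequence A → NON A → A can be made concrete with a minimal sketch (the class and value names are invented for this illustration, not the essay’s formalism): logical states are time-stamped events, so a formula and its negation can both occur without contradiction, at different times.

```python
# A minimal sketch of Sequential Logic evaluation: activations are
# time-ordered events, so A, NON A, A is a cycle, not a contradiction.
from enum import Enum

class Value(Enum):
    TRUE = "true"
    FALSE = "false"
    INCOMPLETE = "incomplete"

class SequentialTrace:
    """Records activations of formulas as an ordered sequence of events."""
    def __init__(self):
        self.events = []  # list of (time, formula, Value)
        self.clock = 0

    def activate(self, formula, value):
        self.clock += 1
        self.events.append((self.clock, formula, value))

    def current(self, formula):
        """Latest value of a formula, or INCOMPLETE if never activated."""
        for t, f, v in reversed(self.events):
            if f == formula:
                return v
        return Value.INCOMPLETE

trace = SequentialTrace()
trace.activate("A", Value.TRUE)    # A
trace.activate("A", Value.FALSE)   # NON A -- a later event, no clash
trace.activate("A", Value.TRUE)    # A again: a cycle, as in nature

print(trace.current("A"))          # latest event wins: Value.TRUE
print(trace.current("B"))          # never activated: Value.INCOMPLETE
```

The third truth value, INCOMPLETE, is simply "never activated" — which is what distinguishes this sketch from two-valued Propositional Logic.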

***

“The first sentence in this essay is a lie.”
This is the Liar Paradox. Something feels off about saying it, and indeed, the unease it provokes has haunted thinkers for twenty-five centuries — from Eubulides to Buridan,  Russell, Tarski, and Gödel. Here we suggest a new approach: one that takes into account not only formal logic, but also the way logic is physically realized in the brain.

1. The LIAR PARADOX UNRAVELLED:

Take the classic version: “This sentence is false.” Traditionally, “this sentence” is read as referring to the entire utterance. The sentence therefore applies the predicate “false” to itself, generating a loop: if true, then false; if false, then true.

But the wise listener, new to the subject, might demur: In what sense is this sentence a lie? What exactly is meant by “this”? What do we mean by “sentence”? Once those questions are carefully raised, the paradox evaporates. The utterance fails to designate anything definite. “This” is a demonstrative with no stable referent. It does not succeed in pointing to an object — even to itself — that can serve as the argument of “false.” The statement is therefore not a well-formed formula, not because of grammar, but because of referential indeterminacy. The paradox arises only when we pretend that this INCOMPLETENESS is well-formed.

In that sense, the Liar Paradox is not a deep mystery of truth, but a miscommunication. The Liar Paradox attempts to refer before it has anything to refer to.
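The miscommunication reading can be sketched in a few lines of code (the function and event names are invented for this illustration): a demonstrative opens a reference slot, and qualifying it as “false” before the slot is filled yields an incomplete verdict, not a truth-value loop.

```python
# Hedged sketch: an utterance is a sequence of events. "designate"
# fills the reference slot; "qualify" applies a predicate to it.
# The Liar qualifies before anything has been designated.

def evaluate(utterance_parts):
    referent = None                   # nothing designated yet
    for kind, payload in utterance_parts:
        if kind == "designate":       # e.g. pointing at the door
            referent = payload
        elif kind == "qualify":       # e.g. "... is false"
            if referent is None:
                return "INCOMPLETE: qualified before anything was designated"
            return f"{referent!r} is judged {payload}"
    return "INCOMPLETE: nothing concluded"

# The Liar: "This sentence is false" -- the designation never arrives.
liar = [("qualify", "false")]
print(evaluate(liar))   # INCOMPLETE: qualified before anything was designated

# A successful pointing sequence: designate, then qualify.
dog = [("designate", "the door"), ("qualify", "to be opened")]
print(evaluate(dog))    # 'the door' is judged to be opened
```

No loop “if true then false” ever starts, because the argument of “false” was never supplied.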

***

2. Classical Responses: The Ban on Self-Reference:

Tarski recognized this to some extent and formalized a rule: truth cannot be defined within the same language that expresses the statements being judged. A meta-language is needed to speak about truth in the object-language. This is an application of Russell’s Vicious Circle Principle: “Whatever involves all of a collection must not be one of the collection.” Russell invented it to avoid the contradiction of sets that contain themselves — such as the set of all sets that are not members of themselves — a logical cousin of the Liar [1].

***

  3. The PROBLEM OF TOTALITY, ENABLING THE CANTOR DIAGONALIZATION TRICK:

This type of “self-reference exclusion” reappears in Gödel’s incompleteness theorems, Turing’s halting problem, and Tarski’s undefinability theorem. Each involves the same move: consider all formulas or programs; then create one that refers to the totality, and contradiction or incompleteness follows. The structure is always the same: totality → self-reference → impossibility. An essential piece of the logic is the diagonalization of a two-dimensional countably infinite array. One can always diagonalize — as Cantor showed — to create an element outside any purportedly complete countably infinite list of countable elements: the list is countable, but its infinite character always provides mechanisms to build elements outside of it (that’s called Cantor Diagonalization).

***

4. Classical Argument: METAPHYSICS OF INFINITY MADE FINITE:

All these arguments rely on a powerful but metaphysical assumption: that one can meaningfully speak of all propositions, all statements, or all computable programs [2]. But such a totality is not something we can construct; it is an idealization, not a reality. 

The existing laws of physics would make the construction impossible; careful inspection of Gödel’s proof shows that the existence of a number is supposedly demonstrated… but NO process is given to construct that number (cynically, one could say that infinity is used to pretend that this number exists; but then the number cannot be written out, since its existence depends upon something that does not exist, namely the sort of “infinity” used in the proof).

An infinity is not a warehouse of completed objects. In practice, logic and cognition do not operate globally; they operate locally, constructively, context by context — finitely, as a classical computer, or a biological neural network, must. [3]

Our thinking systems — human brains, or any embodied processors — do not evaluate “all possible propositions.” They operate network by network: finite, temporally ordered, and dependent on prior activations. Each logical evaluation is contextual, feeding forward and backward across connections… But always forward in time (as in Quantum Mechanics). The Liar Paradox arises only when one mistakes this dynamic, sequential process for an instantaneous, self-contained totality.

***

5. Sequential Logic Arises From The Nature of Reference

In the brain, logic unfolds in time and place; it’s a fundamental process ordered by time and place. (Similarly, fundamental processes in Quantum Mechanics are ordered in space and time, and it may be more than an analogy [4]: it was discovered in 2025 that the brain’s connectome involves nanotubes between neurons, an indication that the quantum scale has to be considered in the brain. So if the Quantum turns out to be the fundamental mechanics of the brain, it is not surprising that the latter operates like the former.) 

When a sentence begins, its initial segment (“This sentence…”) triggers predictive neural networks seeking a referent. But the referent does not yet exist; we know that a sentence is going to be pointed at. The crux of the Liar Paradox is that there is nothing there. Pointing out how one communicates with a dog may help: 

***

  6. Learning Communications From Dogs:

When my dog accomplishes the first step of what he views as a proposition, he searches my eyes and throws me a special look, which indicates that a “sentence” is going to come, and I must then acknowledge, by looking back at him in a special way, that I am ready to receive the pointing data. “This sentence”, from my dog, accomplishes succinctly the first part of the pointing sequence… 

Then my dog communicates what he wants. The communication is in body language. He turns his head and points at the particular object, say, a door which he wants to see open for him. If I indicate agreement, which he checks by looking back at me, he will wag his tail to indicate his satisfaction for the signing of this deal. I will come and open the door.

The part labelled by “This sentence” animates the general neural networks machinery, it tells that machinery that a sentence, which is another and particular neural network, or strings of neural networks, is going to be pointed at. 

But, in the Liar Paradox, that SUBSEQUENT DESIGNATION, WHICH “THIS SENTENCE” WAS A WARNING FOR, NEVER HAPPENS… There is no door being pointed at.

***

  7. There Is No Argument In The Liar Paradox:

THUS the paradox of the liar arises when WHAT DOES NOT EXIST… from the point of view of Sequential Logic, gets qualified NEVERTHELESS (by “is false”). The classical scheme is completely different: Classical Propositional Logic (CPL) + Robinson Q Arithmetic (the simplest, quaintest arithmetic, without infinite induction) + a Truth Predicate brings us Self-Referential Contradictions…

If I sit at the dinner table and say, out of the blue: “This is false”, a polite diner will ask: “Excuse me, what is false?”

So, in the traditional Liar Paradox, “This sentence is false”, the neural network animation sequence to designate and then qualify something is short-circuited while still under construction (because neural networks communications are constructed). 

When the utterance ends (“…is false”), the processing machinery is implicitly asked, by those who believe there is a paradox, to loop backward and reinterpret the earlier part as a designated sentence in light of the whole, then decide that’s the only sentence around and thus what “This” designated — but the brain’s logic is not timeless. It is sequential; it proceeds in micro-sequences, a progressive evolution, not as a single frozen self-observation after a bit of time travel backwards.

Thus “This sentence is false” cannot stabilize as an object of thought, which is a neural circuit going one way. “This sentence is false” is not paradoxical; it is dynamically inconsistent, because it is dynamically incomplete. There is no moment when the sentence as a complete referential object, made of an ordered succession of networks, and the sentence as a processed linguistic event coexist. The paradox presupposes that an incomplete description is enough of a reference — something cognition, and physics itself, does not allow. 

***

8. From Logic to Physics:

One can make an exact analogy between what neurology does and what Quantum Measurement Theory does: contrary to repute, QMT is very explicit and time-ordered. Physicist Michel Devoret, who conducted experiments describing the SEQUENTIAL nature of fundamental processes in Quantum Physics, received the Nobel Prize in 2025; see note [5].

Seen in this light, the Liar Paradox is not just a problem of language. It points to the deep structure of how meaning — and perhaps even reality — is generated. Logic, in its living, physical form, is a process of singularization: local stabilization within a sea of potentialities. Each act of interpretation collapses a cloud of possible meanings into one coherent local state made of a set of connected networks.

In formal logic, paradoxes appear when we deny this locality and demand total, self-encompassing closure. But such closure contradicts the physical nature of cognition.

Quantum Measurement Theory, QMT, analyzes the most exquisite, precise fundamental processes. Bohr insisted that an experiment could not be separated from the apparatus inside which it happened [5]. Bohr was correct, and Einstein took a while to wrap his mind around it. Looking at it à la Tarski, the apparatus ML is the Meta Language of the experiment L. (Wigner later introduced the “Wigner Friend Paradox”, which is akin to looking at MML, not just ML… In this case ML is the Schrödinger Cat…)

In Quantum Mechanics, time acts as a “one parameter group of transformations” (and not as a sort of space!)… So the Quantum Logic unfolds in a timely manner (time and space are not equivalent dimensions in QMT). Truth, like measurement, or causality in physics, is an act that takes place locally in time and context.

Instead, the Liar Paradox, as initially posited, is analogous to an experiment without an apparatus, not anything being measured inside. Russell’s Vicious Circle Principle is akin to not making an experiment while the first one is not completed, and Tarski’s Meta is akin to adding an apparatus, the context, the Meta Language ML (but then one runs into the “Wigner Friend” hierarchy problem).

There is no “view from nowhere” from which all propositions can be surveyed at once as Propositional Logic pretends there is, causing the LP. 

***

9. Sequential Logic Handles the No-No Paradox Beautifully And Practically: It’s A Dog Collar!

The NO-NO paradox involves two mutually referential statements rather than a single self-referential one. It was originally discussed in medieval logic by 14th century thinkers like Thomas Bradwardine and John Buridan. 

So consider two propositions, Left and Right, labelled L and R, which contradict each other in an apparent vicious circle. Bertrand Russell and his predecessors from the 13th century would say that this contradicts the Vicious Circle Principle. Left L sends to Right R by asserting that R is true. R says that L is false.

Sequential Logic handles this by observing that reading L activates, at the very least, a neural network for reading and another expressing veracity… Suppose veracity shows up as a green color painting L. Then R is read and claims L is false, that is, red… But L was green before R was read. Now L is supposed to be red, that is, false, if R is green and true, as it is. So then L turns red, that is, false. 

Then, since L is false, its claim that R is true fails, so R turns red, that is, false; but then R’s claim that L is false is itself false, so L turns green again, and so on. So we just have a dog collar whose color changes periodically from green (true) to red (false), according to the neurological time of reading one then the other (the objection that physical time can be made as short as possible is not true in neurology, and not even in physics, as Planck Time sets a lower bound). 
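The oscillation can be simulated directly (a hedged sketch; the update schedule — re-reading R, then L, alternately — is one reasonable reading of the sequential process, not the only one). L says “R is true”, R says “L is false”, and updates happen one at a time, in reading order, never simultaneously.

```python
# Sketch of the L/R "dog collar": L asserts "R is true", R asserts
# "L is false". Values are revised one at a time, alternating.
def run_no_no(steps=8):
    L, R = True, True          # first reading paints both green (true)
    history = [("start", L, R)]
    for i in range(steps):
        if i % 2 == 0:
            R = not L          # re-read R: "L is false"
        else:
            L = R              # re-read L: "R is true"
        history.append((f"step {i+1}", L, R))
    return history

color = lambda v: "green" if v else "red"
for label, L, R in run_no_no():
    print(f"{label:7s}  L={color(L):5s}  R={color(R)}")
# The trace cycles with period 4: (T,T) -> (T,F) -> (F,F) -> (F,T) -> (T,T)...
```

The values never settle; they just cycle in time, like any periodic process in nature — which is exactly the Sequential Logic verdict.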

[Other resolutions not using Sequential Logic: Tarski would assign L to an object language and R to a metalanguage, ML. But notice that this is what Sequential Logic does, without a metalanguage! 

Saul Kripke proposed to define a single language with a single truth predicate in which anything goes, but in which sentences not caught up in this construction, neither true (+1) nor false (−1), have an intermediate truth value… In the preceding case, L and R are each half true, half false, and thus have truth value zero.]

***

  10. Conclusion:

The Liar Paradox dissolves once we see that “this sentence” does not designate a static self-reference but a temporal instruction within a sequential interpretive system. The paradox is an illusion born of treating time-bound, embodied logic as if it were timeless and disembodied. Logic in the brain — and perhaps in nature itself — is procedural and contextual, not total. Logic is not absolute, standing outside the material quantum world, like Plato’s “forms” [6]. Just as time orders Quantum Processes, time ought to order logic too; time may be local, but it remains the most basic causality.

Truth emerges from the unfolding of communication, not from standing outside it. The Liar Paradox, then, is not a contradiction in truth itself, but a reminder that truth, like understanding, only exists locally within the flow of interpretation [this is how Category Theory from the 1950s defines truth; 7].

***

  11. Epilogue: SEQUENTIAL LOGIC?

One may wonder why it took 25 centuries to come up with the obvious solution to the Liar Paradox by making logic sequential. I do not doubt that many peasants, exposed to it, found the same solution as yours truly. But they were not in academia, which installed the LP as a genuine problem (reflecting an academia serving God and the monarch; those who disagreed may have shared Archimedes’, Hypatia’s or Boethius’ fates — top thinkers who were killed precisely because of their superior intelligence).

My own approach is to try to understand how logic and truth work, especially in the source of all we know, the human brain (independently of any other consideration).

Sequential and local are the important notions in nature and physics, in contrast to instantaneous and global: I had already encountered that methodological confrontation in the foundations of Quantum Physics, and it is prominent in the invention of Relativity by H. Lorentz and Henri Poincaré, followed by Einstein… An engineer, Poincaré encountered the solution to non-simultaneity in the electromagnetic transmissions of the telegraph, generalizing it to LOCAL TIME theory within Maxwell’s equations. The locality and delay found in local time theory have been experimentally demonstrated to the greatest extent (GPS depends on them). The locality and delay found with local time are generalized in Sequential Logic [8].  

***

The Liar Paradox does not really resolve by evading self-reference through meta-languages, as this creates an infinite hierarchy of “meta”. Instead we proposed to dissolve the Liar Paradox by recognizing that reference itself is temporal, procedural, explicit and embodied. Logic, in the brain and in the world, operates finitely and sequentially — from interaction to interaction, from potential to singularization.

As it is, the Liar Paradox utterance is just incomplete. Neither true, nor false, just literally nonsensical.

The Liar Paradox only looks like a paradox if we freeze logic into a sculpture. Once we let logic move—as it actually does in brains—the puzzle evaporates. “This sentence is false” is not a self-referential jewel of timeless contradiction; it is a failed temporal program. It begins an instruction (“designate something”) and then never supplies the object to complete it. A brain cannot refer backward in time to repair an incomplete reference any more than a quantum experiment can measure an outcome before the apparatus exists.

The Liar Paradox is like a movie whose last frame is shown before the first one finishes rendering. Static logic treats this as a profound contradiction. Sequential logic recognizes it as a simple glitch: the sentence is not false, nor true, but simply not an executable sequence.

When logic is allowed to have a before and an after, the Liar disappears—not solved, but dissolved.

Patrice Ayme

Remark: This sequential, embodied logic can be tied into the broader physical theory of photon and particle singularization and TAU collapse (SQPR) — i.e., a possible analogy between logical and physical “resolution events”… (Another time!)

***

[1] Tarski was honest enough to recognize he was inspired by Russell’s Theory of Types. Russell “solves” the Liar by noticing that it has a hidden quantifier (“there exists” (a sentence, etc.)) when interpreted as a truth value (or lack thereof). So the sentence itself and its interpretation as truth value are of different “types”. In other words, L and ML, language and metalanguage… From my point of view this is lots of sophistry hiding a much simpler truth…

Amusingly, the VCP was invented to deny the possibility of what made Russell famous: Russell demolished Frege with the set S of all sets that are not elements of themselves, a version of the Liar Paradox…

***

[2] Standard Zermelo-Fraenkel Set Theory, especially with AC, the Axiom of Choice (“ZFC”), assumes the Totality Axiom (my label). TA is necessary to express the AC.

***

[3] At first sight, a Quantum Computer, by using Quantum Entanglement, in the classical Copenhagen Interpretation of the Quantum (CIQ), is able to do operations simultaneously at a distance. SQPR denies this… But, in practice, the superluminal speeds involved are so great that the practical difference between CIQ and SQPR will be negligible.

***

[4] Quantum Mechanics treats time as a one-parameter group of transformations (fundamentally differently from space, and that’s a crucial difficulty in fusing naive Relativity Theory and QM). Logic, as it unfolds in the brain, activating a succession of neural networks, is also a one-parameter group of transformations, best identified as time.

***

[5] Quantum Measurement proceeds with a wavefunction W submitted to a field F in space, the whole thing evolving in time. “This sentence” means W without designating F and then reaching a conclusion, the measurement act, “false”.  In the process of reaching the conclusion, before collapse, we have an implicit time evolution. So we see that the QM is in exact analogy with what neurology does and has to do.

*** 

[6] Quantum Collapse (aka “Decoherence”) is part of QM, but violates causality. This is (one of the reasons) why SQPR (Sub Quantum Physical Reality) was proposed: in SQPR, the Quantum Collapse is causal and progressive. Michel Devoret agrees, and got the Physics Nobel in 2025…

The Liar Paradox also violates causality… As explained above. 

***

[7] One can define truth using Category Theory (many call CT “Abstract Nonsense”, but it is a monument of modern math). Truth in CT is very abstract and sounds quite nonsensical… However, CT truth is LOCAL, and very much compatible with the spirit of what is suggested in the present essay (which, being thoroughly descriptive, can be read again…).

***

[8] The proposed Sub Quantum Physical Reality (SQPR) generalizes locality and delay to Quantum Physics… Immediately predicting Dark Matter, etc… It would be most economical logically if it were true…

***

[9] The Revision Theory of Truth (developed by Belnap, Gupta, Herzberger).

  • Focus: While Kripke seeks a stable, single fixed-point definition of truth, Revision Theory focuses on the sequence of evaluation itself.
  • Outcome for L/R: In Revision Theory, the Liar cycle (or my L/R cycle) is defined as pathological because its truth value never stabilizes. It cycles indefinitely between T and F across the revision stages, just as my “dog collar” changes color. This theory affirms the temporal oscillation as the very nature of the paradox, rather than resolving it with a third value… But it considers it pathological, whereas in Sequential Logic it’s just natural, nothing more upsetting than waves coming and going on a shore…
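A Revision-Theory-style revision sequence for the plain Liar can be sketched in a few lines (a hedged illustration, not the formal theory: the stage count and starting hypothesis are arbitrary choices). Each stage revises the value by applying the sentence to the previous stage’s value; the sequence flips forever and never reaches a fixed point.

```python
# Sketch of a revision sequence for the Liar ("this sentence is false"):
# stage n+1 is true iff stage n was false, so the value oscillates.
def revision_sequence(initial=True, stages=6):
    values = [initial]
    for _ in range(stages):
        values.append(not values[-1])   # the Liar's revision rule
    return values

seq = revision_sequence()
print(seq)  # [True, False, True, False, True, False, True]

# No stage n has seq[n] == seq[n+1]: the value never stabilizes --
# "pathological" for Revision Theory, a plain wave for Sequential Logic.
print(any(seq[n] == seq[n + 1] for n in range(len(seq) - 1)))  # False
```

The same oscillation, read as a time-ordered neural process rather than a failed search for a fixed point, is exactly the “dog collar” of the previous section.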

***

Historical Note: The Liar Paradox was extensively addressed in the Thirteenth and Fourteenth Centuries, and, unbeknownst to most, those studies were partly duplicated in the 20th century.

Jean Buridan was a 14th-century logician and top philosopher and physicist who discovered inertia, momentum, and the first and second laws of mechanics, plus heliocentrism and geometric calculus (while making fun of the Church). Buridan solved the Liar Paradox to his satisfaction with his innovation of the principle of virtual implication.

This principle states that every sentence implicitly asserts its own truth in addition to its explicit content (this principle is made explicit in Tarski’s metalanguage (ML)). Buridan says that the explicit claim “A is false” contradicts the virtual implication (“A is true”). This contradiction makes the overall content false, according to him.

Buridan was far from my argument (he anticipated Tarski instead).

In the century preceding Buridan, Scholastic logicians (Quaestionists) were more in the mood of the present approach. The dominant early theories were Cassatio (cancel the argument) and Restrictio (restrict possible formulas). The Cassationists’ solution is the closest to the one exposed in the present essay; for the Cassationists, the LP “says nothing” (nihil dicit). Buridan made fun of them later…

These theories solved the paradox by declaring the Liar Sentence defective, arguing it was not a genuine proposition or that its self-reference was logically forbidden (here we see Russell’s VCP). The present argument is that the LP says nothing, because the designating sequence (a neural network) is launched… but then does not designate anything: the next neural layer is missing (to use Neural Network theory).

Cassationists pointed out semantic defectiveness in the LP. The present essay’s modern neuroscientific argument holds that the failure is even more basic: the sequential neural network for designation is launched but fails to designate an object. It points to a dynamic incompleteness in the context of neurology, or neural networks. So the intuition of some Medieval logicians is grounded in cognitive structure.

***

Full disclosure: 1) I elaborated my own theory before I became cognizant of the Cassationists’ existence. Considering the simplicity of my argument, though, I was not surprised that some Medieval Logicians found it… (In similar historical research I recently found the proof of F = ma for gravitation in Buridan’s work, 300 years before Newton!)

2) The theory above may seem close to Temporal Logic (Jerzy Łoś, Prior, Kamp, etc.). There are at least two dozen types of Temporal Logics. Prior’s tense logic (1950s–60s) represents reasoning in time, using operators like “it will be the case that” (F), “it was the case that” (P). Later, Hans Kamp (1971) and van Benthem (1970s–80s) extended this to model natural language semantics.

Connection to Patrice Ayme’s theory:

  • Temporal logic treats truth values as indexed to moments in time.

     

  • PA’s “logic unfolds in the brain, sequentially and causally” seems reminiscent of that: the evaluation of a statement depends on where and when in the neural process one is, which neural network is actually at work — the “now” of cognitive unfolding in the brain (in the brain this is helped by the fact that biological neural networks operate by separated action potentials, giving a real ticking clock, differently from artificial neural networks, which operate continuously).

     

  • However, Patrice Ayme goes further: for PA, time is not just a parameter of logic; it is the substrate of reference itself. The Liar Paradox misfires because it assumes a timeless self-reference, while cognition is temporally constructed as it jumps from neural network to neural network. → In Prior or Kamp, truth varies over time; in PA’s model, truth emerges through time.
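For concreteness, a toy Python sketch of Prior-style tense operators over a finite, discrete timeline. The timeline, the `raining` facts, and the function names are my own illustrative assumptions, not a standard library:

```python
# Truth of an atomic statement is indexed by moment. F ("it will be the
# case that") and P ("it was the case that") quantify over later and
# earlier moments of the timeline.

timeline = range(5)
raining = {0: False, 1: False, 2: True, 3: False, 4: False}  # hypothetical facts

def F(prop, t):
    """'It will be the case that prop', evaluated at moment t."""
    return any(prop[u] for u in timeline if u > t)

def P(prop, t):
    """'It was the case that prop', evaluated at moment t."""
    return any(prop[u] for u in timeline if u < t)

print(F(raining, 0))  # True: rain lies in the future of moment 0
print(P(raining, 4))  # True: rain lies in the past of moment 4
print(F(raining, 3))  # False: no rain after moment 3
```

In Prior or Kamp, as here, truth varies over a time parameter; in the present model the evaluation process itself is the temporal object.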

There are many other types of logic (van Benthem, dynamic epistemic logic) to check if one wants to make sure that the preceding is really innovative. These comparisons are all very technical, and I may put them in a separate paper (if I find the time).  

Patrice Ayme

BASAL vs APICAL LEARNING! Are We Realizing The Full Human Neurobiological Potential? Or are we UNDERSTIMULATING our NEUROBIOLOGY, thus Causing Deep Pathologies?

April 23, 2025

Abstract: The more we understand the brain, the more complex it gets. It turns out that methods of learning vary along a given dendrite (what provides a neuron with electrical input). The question naturally arises whether we are properly feeding this complexity with the wealth of inputs it deserves. Considering that at least two human species among our ancestors (Neanderthals and Denisovans) have been found with significantly larger brains… The answer is that the dumbing down some expect from AI may have started ever since cities arose and (too much) fascism ordered them… We may be pathological because we deny the full human experience… And tyrants play the whole thing like a violin…

***

It has just been discovered that the “rules” the brain uses to learn differ according to exact locations along appendages of neurons known as dendrites… (Some paragraphs below are adapted from The Conversation.)

The human brain is made up of billions of nerve cells and billions of glial cells, which are also computationally active. The neurons conduct electrical pulses that carry information, much like how classical computers circulate electrical pulses to carry data in binary form.

These electrical pulses pass to other neurons through connections between them called synapses. Some connections are inputs (dendrites), some outputs (axons). Individual neurons have branching extensions known as dendrites that can receive thousands of electrical inputs from other cells. Dendrites process (in a non-elucidated manner) and transmit these inputs to the main body of the neuron, the soma, which then integrates all these signals in mysterious ways to generate, after a while, its own electrical pulses (down the axon). The old theory of the “grandmother neuron”, according to which one particular neuron recognized one’s grandmother, is probably not true, but there is little doubt that the more complex neurons each have quantum computer capabilities.

It is the collective activity of these electrical pulses across specific groups of neurons that form the representations and treatment of different information and experiences within the brain.

Since the 1940s, neuroscientists have held that the brain learns by changing how neurons are connected to one another. As new information and experiences alter how neurons communicate with each other and change their collective activity patterns, some synaptic connections are made stronger while others are made weaker. This process of synaptic plasticity is what produces representations of new information and experiences within your brain.

In order for the brain to produce the correct representations during learning, however, the right synaptic connections must undergo the right changes at the right time. The “rules” that your brain uses to select which synapses to change during learning – what neuroscientists call the credit assignment problem – have remained largely unclear. In the following quote, synapses on APICAL and BASAL dendrites were compared. Let me quote:

Defining the rules

We decided to monitor the activity of individual synaptic connections within the brain during learning to see whether we could identify activity patterns that determine which connections would get stronger or weaker.

To do this, we genetically encoded biosensors in the neurons of mice that would light up in response to synaptic and neural activity. We monitored this activity in real time as the mice learned a task that involved pressing a lever to a certain position after a sound cue in order to receive water.

We were surprised to find that the synapses on a neuron don’t all follow the same rule. For example, scientists have often thought that neurons follow what are called Hebbian rules, where neurons that consistently fire together, wire together. Instead, we saw that synapses on different locations of dendrites of the same neuron followed different rules to determine whether connections got stronger or weaker. Some synapses adhered to the traditional Hebbian rule where neurons that consistently fire together strengthen their connections. Other synapses did something different and completely independent of the neuron’s activity.

Our findings suggest that neurons, by simultaneously using two different sets of rules for learning across different groups of synapses, rather than a single uniform rule, can more precisely tune the different types of inputs they receive to appropriately represent new information in the brain.

In other words, by following different rules in the process of learning, [ALONG the SAME DENDRITE!] neurons can multitask and perform multiple functions in parallel.
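The two-rule picture can be caricatured in a few lines of Python. This is a cartoon of the idea, not the study's actual plasticity rules; all weights, rates and spike patterns are invented for illustration:

```python
# On one model neuron, "basal-like" synapses follow a Hebbian rule (pre AND
# post active together -> stronger, otherwise a slight decay), while
# "apical-like" synapses update from presynaptic input alone, independent
# of the neuron's own firing.

def update(w, pre, post, hebbian, lr=0.1, decay=0.01):
    if hebbian:
        return w + lr if (pre and post) else w - decay
    return w + lr if pre else w  # ignores whether the neuron fired

pre_spikes  = [1, 1, 0, 1, 0, 1, 1, 0, 1, 0]  # hypothetical input activity
post_spikes = [1, 0, 0, 1, 0, 0, 1, 0, 0, 0]  # hypothetical neuron output

w_basal = w_apical = 0.5
for pre, post in zip(pre_spikes, post_spikes):
    w_basal = update(w_basal, pre, post, hebbian=True)
    w_apical = update(w_apical, pre, post, hebbian=False)

print(round(w_basal, 2), round(w_apical, 2))  # 0.73 1.1
```

Same inputs, same neuron, two different learning rules running in parallel on different synapses: the weights diverge.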

Future applications

This discovery provides a clearer understanding of how the connections between neurons change during learning. Given that most brain disorders, including degenerative and psychiatric conditions, involve some form of malfunctioning synapses, this has potentially important implications for human health and society.

For example, depression may develop from an excessive weakening of the synaptic connections within certain areas of the brain that make it harder to experience pleasure. By understanding how synaptic plasticity normally operates, scientists may be able to better understand what goes wrong in depression and then develop therapies to more effectively treat it.

***

Neural network research, as presently implemented in AI, assumed that the nodes were simple. However, it has long been known that neurons are IMMENSELY complex. The interaction structures of neurons do not reduce to a single axon. They have thousands of dendrites, themselves so very complex that they are nonlinear computers, all by themselves.

It is also philosophically certain that each neuron is a QUANTUM COMPUTER. We have zero idea how that could work out… but room temperature Quantum Computers are now known to be feasible, using plain old quantum optics! Consciousness no doubt arises that way. The Quantum.

***

One of the fundamental simplifications in artificial neural networks (ANNs):

When neural networks were first developed, especially in the McCulloch-Pitts model and later in perceptrons, the biological neuron was radically simplified. Each “node” in an ANN typically:

  • Receives a set of scalar inputs,
  • Weighs them,
  • Sums them up,
  • Applies a non-linear activation function,
  • And passes a single scalar output.
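The five steps above amount to the standard artificial-neuron unit. A minimal Python sketch, with arbitrary example weights and inputs:

```python
import math

def ann_node(inputs, weights, bias=0.0):
    """A classic ANN node: weigh, sum, apply a nonlinearity, emit a scalar."""
    s = sum(x * w for x, w in zip(inputs, weights)) + bias  # weigh and sum
    return 1.0 / (1.0 + math.exp(-s))  # non-linear activation (sigmoid)

# Three scalar inputs in, one scalar out:
out = ann_node([1.0, 0.5, -0.2], [0.4, -0.6, 0.9])
print(out)  # a single number between 0 and 1
```

That one scalar is the whole "neuron" as far as a standard ANN is concerned.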

This is a far cry from the immensely intricate behavior of real biological neurons. Here’s how real neurons diverge:

  • Dendritic Trees: Real neurons receive input through thousands of dendrites, and the dendrites themselves are not just passive conduits—they perform nonlinear, local processing. Some researchers even argue that dendrites function as sub-neurons. The surfaces of dendrites are incredibly complex and structured, both morphologically and functionally. One of the most important structures you’ll find there are dendritic spines, the locale of many synapses. Dendritic spines grow or disappear in reaction to learning. Dendritic surfaces are like living circuit boards, constantly rewiring and tuning themselves based on input.
  • Axonal Complexity: While ANNs assume a single axonal output, biological axons branch sometimes to thousands of downstream neurons, sometimes with context-dependent effects (e.g., different neurotransmitters or firing patterns).
  • Temporal Dynamics: Biological neurons operate over time, with a rich interplay of ion channels, refractory periods, bursting, oscillations, and more. Most standard ANNs are purely feedforward and static (though recurrent networks and spiking neural networks attempt to address this).
  • Plasticity: Synaptic strengths in real neurons change in highly context-dependent ways, involving not just Hebbian rules but also neuromodulators (like dopamine, serotonin), glial interactions, and even gene expression changes. To be inspirational, I would say that the topological dynamics of the embedded chemical space has dimension at least 30.
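The "dendrites as sub-neurons" point is sometimes modeled by giving each branch its own nonlinearity before the soma sums the branches (a two-layer caricature of a neuron; all numbers here are illustrative assumptions, not biological measurements):

```python
import math

def branch(inputs, weights):
    """A dendritic branch: local weighted sum passed through its own
    nonlinearity (rectified and squared, a common modeling choice)."""
    s = sum(x * w for x, w in zip(inputs, weights))
    return max(0.0, s) ** 2

def neuron(branch_inputs, branch_weights):
    """The soma sums the branch outputs, then applies its own nonlinearity."""
    soma = sum(branch(i, w) for i, w in zip(branch_inputs, branch_weights))
    return 1.0 / (1.0 + math.exp(-soma))

# Two dendritic branches, each with its own synapses:
out = neuron([[1.0, 0.5], [0.2, 0.8]], [[0.6, 0.4], [0.3, 0.7]])
print(out)
```

Even this crude sketch makes one "neuron" computationally equivalent to a small two-layer network, which is the point: the ANN node above underestimates its biological namesake.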

***

In essence, what we call “neural networks” are inspired more by the conceptual metaphor of neurons reduced to their simplest nontrivial expression… rather than by their actual biological implementation as we know it today.

There’s ongoing research into biologically plausible learning, spiking neural networks (SNNs), and neuromorphic hardware, which try to close this gap. But we’re still far from capturing the true richness of a real neuron.

***

Philosophically, what we are facing is immensely complex machinery.

Morality? The richness of experiences human beings can have has been underestimated. Much depression, “hyperactivity”, perhaps “autism”, anorexia, phobias and the like are probably related to the PAUCITY of the experiences people undergo, because that impoverishment is an imposed handicap. We are meant, by evolution, to be chased by lions and to kill bison. Anything less is not following the owners’ manual!

This, by the way, is one of the reasons for wars: people are just otherwise too bored, and war is a force that gives meaning to the full operative manual of the human brain… It makes us fully alive. This is why tyrants can find cannon fodder: even if it means their demise, human brains are keen to examine life, fully experienced.

So, instead of avoiding wars and great efforts (what Islam calls “jihad”), pain, passion and ravenous pleasure, we should concentrate on indulging in variants stimulating the entire brain, while staying as optimally innocuous as possible. A good example is making another war on Mars… the first one was made, and won, by Kepler, who used that name for his multidecadal effort to figure out what we now know as Kepler’s laws… This time we want to land there and colonize…

***

Realizing the full potential of humanity means excitement guaranteed. And it’s a neurobiological necessity. Any reduction from this full potential is a form of abuse… This is also why tyrants feel fully alive and well, because they live dangerously and abusively, preventing precisely the reciprocal in their terrorized subjects…

Patrice Ayme

 

Apical learning is different from basal learning, the synapses don’t work the same, and dendrites are supercomputers themselves…

Monothought, Monotheism, Monofascism: No Brains If With Just One Value

March 9, 2024

Ethics is both relative (to circumstances) and absolute (given these circumstances). 

Ethically, Fascism orders us to have just one value: the Chief. We saw it in the build-up to the George Bush invasion of Iraq in 2003: as George W Bush put it, essentially, you are for us, or you are against us… W used the majestic plural, referring to himself… No nuance. That’s the entire point.

Another interesting notion is what Vladimir Lenin, the leader of the Bolshevik Party and the first leader of the Soviet Union, used to call a “useful idiot”… Somebody not favorable to a cause but inadvertently advancing it through the force of sheer idiocy. Lenin wanted mostly to advance one value, the “dictatorship of the proletariat”. As he died with a ruined brain, Lenin suffered the excellent supplementary torture of seeing the ex-seminarist and confirmed gangster Stalin grab the dictatorship, and make it his personal monostate.

Socrates, as represented by Plato, was prone to use useful idiots, so that he could preen his feathers.

A contradictor told me, in a comment to the essay linked above, that I must “Choose one (1) value in order to claim *all* of existence.”… Such a statement may reflect a common trait of insanity… The connection with the mentality of dangerous fanatics all over: they want to “claim *all* of existence” and they need to “choose one (1) value” to do that. Call such a person a “useful maniac”, who led me to the following obvious observations:

***

The very reason for the existence of the brain is as a comparator and adjudicator of values to optimize behavior conducive to species survival. This requires that there be more than one value among which to exert judgment… the most basic choice for any rough sketch (ébauche) of a thinking system being: shall I stay (and save energy) or shall I go (to get energy)?

This is precisely why “god” is imposed as the one and only value: “god” makes brains irrelevant by imposing just one value: god. This is precisely why “god” was created: to force people to learn NOT to use their brains, let them atrophy, and thus scrupulously obey the higher ups… since you have no brain left anyway. Thus monotheism is intrinsically mental fascism, and that’s its reason for being.

***

Consequences: Monotheism, like other poor educational systems, does not just brainwash brains, it makes deficient brains. Brains exposed to just one value can’t learn to adjudicate… The circuitry probably does not appear on its own without exercising it while it’s still inchoate.

Paradoxically, as it all ends up in war, the monofascist idiots are then forced to become more intelligent. Thus ISIS (Islamic State) got the idea of using commercial drones to drop explosives on allied Western troops… inaugurating the utilization of such drones on the battlefield.

Theofascism can give the opportunity for the most beautiful art (Blue Mosque, Istanbul).

As mental fascism, with its one value, stunts brains, it is basically impossible to neurologically grow out of it by the power of reason. One can basically imagine such neurologies connecting everything to one particular value (any “value” being some subsystem in the brain). That means that reconditioning will be hard. Journalist Tom Friedman went to a Western reconditioning camp in Syria holding 43,000 civilians, mostly women and children with some Western ancestry, who had had children from many ISIS fighters (one woman, a “battlefield bride”, six fathers…). The children pelted Friedman’s convoy with stones… He caustically noticed it would take some time to recondition them.

Emotions are sourced in the brain. E-motions are what get the brain in motion. So, ultimately, if one needs to make people smarter, if one wants to get brains out of the fascism of monothought, monovalue, one has to act on emotions. Philosophy schools in China have correctly connected wisdom and climbing, the latter being potentially highly emotional, probably for reasons found in: https://patriceayme.wordpress.com/2020/09/17/why-climb-to-study-life-in-full/

Ruth Hurmence Green once quipped: “There was a time when religion ruled the world. It is known as the Dark Ages.” Yes, and some denied that was the case, because they lacked nuance, not knowing when the ages started to darken… So they were wrong. Religion started to rule the world with deified Roman emperors. In the early Fourth century occurred a smooth transition to “Orthodox Catholicism”, a terror religion. Most of the Darkening Ages were concentrated in these darkening centuries… when the most cultivated minds learned to have just one value… The pantheon of Antiquity, with many gods, ensured many values… The god of the Bible could do with just one serpent… ready to bite, in a garden of doom…

Patrice Ayme

Is CREATIVE THINKING Somewhat EVIL? Are NEW THOUGHTS AGGRESSIONS?

February 12, 2024

Modern NEUROLOGY Teaches Us That: THINKING ANEW IS INTRINSICALLY AGGRESSIVE. Consequences.

The foundations of brain science were laid around the 20th century. One can deduce from the rough outline of neurobiology a surprisingly large amount of wisdom which may not have been certain previously. And which explains a lot of evil. In particular, why it’s so hard to think. Why? Because new thinking hurts and is subject to mental inertia. Both are obvious consequences of neurobiology.

Thus considering thinking in light of neurobiology explains why new ideas and new emotions hurt. And why they are so hard to accept. Neurobiology also explains why there is such a thing as mental inertia. Thinking at the very least consists in the activation of neural networks (maybe there is more to it, but we don’t need it here). Thinking needs facts.

“Fact” is a strange word. It curiously comes from Latin “facere”… to create, make. That’s curious, at first sight, because the modern idea of fact is that it’s something which just exists independently of human beings. So how could it be made or created? Did the etymology guess something common sense overlooked? Did etymology guess neurobiology? Well, yes.

Indeed, when considering a “fact”, what could possibly be made or created? At the simplest and most basic, a “fact” would involve electric activity from motor neurons… so we can move that finger or those eyeballs (to point towards a fact). 

At the most sophisticated, if the fact is new, new neural networks are made. Why? Because a NEW fact means a new association of ideas, sensations, emotions… Thus new connection(s)… And new connection(s) in thinking have a direct neurobiological meaning. Connections in neurobiology literally mean new neurological formations enabling electrical or chemical impulses: axon, dendrite, etc… From these physiological facts can be generated (I mean: facere, fabricated) a number of conclusions…

First, thinking new ideas and emoting differently hurts: having new thoughts means new connections, literally erecting new brain structures… Probably having to destroy old ones to make room… That’s all very energy consuming, and thus quite painful. And perhaps extremely so. So no wonder the public out there does not find new thinking attractive, pleasant and hilarious. Better keep on scrolling the same old same old…

Aristotle admitted the aggressive character of new thinking explicitly [1]. He articulated it around categories. Kategoria comes originally from against (kata) the agoria (public). This is an admission, by the initiator of the concept of how thinking works, that this most noble pursuit is an ADVERSARIAL process against the majority. [2]

Thinkers, new thinkers thinking things anew, need to be a funny lot, like Socrates and Aristotle, to compensate and keep a sense of sanity. This is why Socrates made fun of the jury which was considering inflicting the death penalty on him, by proposing free meals for himself and his heirs…

And what of mental inertia? Routine brain activity consists in the activation of neural networks which get reinforced through Hebbian activity (the more synapses/neural networks are used the easier it is to use them… that explains the fascination of endless scrolling on Internet devices).  

Activating those existing networks is less energy consuming than creating new networks at the cost of destroying old ones… So brains keep on churning out the thoughts and emotions they are already endowed with…
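Mental inertia can be caricatured in code: if every use of a pathway strengthens it (Hebbian reinforcement), the routine pathway keeps winning against a novel one. A toy Python sketch, with invented quantities:

```python
# Two competing "thought pathways". The cheaper (stronger) one gets chosen,
# and being chosen reinforces it -- a rich-get-richer loop.

strength = {"old_thought": 1.0, "new_thought": 1.0}
strength["old_thought"] += 0.1  # a slight head start from routine

def think(strength, steps=20, gain=0.2):
    history = []
    for _ in range(steps):
        chosen = max(strength, key=strength.get)  # the strongest pathway wins
        strength[chosen] += gain                  # ...and use reinforces it
        history.append(chosen)
    return history

history = think(strength)
print(history[-1])  # old_thought: the routine pathway keeps winning
```

A tiny initial advantage compounds into total dominance; dislodging the routine requires paying an energy cost the loop never volunteers.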

***

Plato used to explain everything fundamentally with a theory of “forms”. Well, we found the “forms”: they are embodied in brain geometry… in particular neurological networks (the brain also has an elaborate, high dimensional neurological architecture of organs such as the hypothalamus, the pineal gland, the hippocampus, the amygdala, etc… Those react to, and cause further, neurological activity.)

Ideas are fundamentally relationships between concepts which are themselves sets of other relationships… All of this can be modeled by neural networks organized in… categories.

***

A number of issues arise from the preceding, in light of the (remote) possibility for extraterrestrial civilization, and, more pragmatically, the (unavoidable) rise of Artificial Intelligence and Artificial Consciousness. In particular: is thinking intrinsically aggressive?

In short: yes.  Thinking is not just a placid form of mental engagement. It’s intrinsically violent, if well done.

One needs first to understand that, given a body of thought, new thinking will consist in modifying this body of thought. Hence in spending energy. For a life form, or any thought form, spending energy could bring deactivation, so it is adversarial… and is discouraged by the occurrence of pain.

So, if we want to avoid adversity at all cost, we will prevent thinking, and that in turn will bring more adversity. Can’t win…

Looks as if we will have to learn to live with evil. Looks as if humanism will have to review its relationship with war… It also looks as if intellectuals (including yours truly!) are not as innocent as they claim to be…

Patrice Ayme

Aryan dreams… Verdun 1940 on the right…

***

[1] Recognizing aggression for what it is is an act of honesty and mental acuity. Aristotle fled Athens, saying that he wanted to spare the city another crime against philosophy (an allusion to Socrates’ death). So he recognized thinking was an aggressive act. Since he, Aristotle, had committed aggression, he might as well flee. Whereas Socrates was in denial of his own aggressivity, and stood his ground.

***

[2] So basically some Public P (it could be a nation, party, class, institution, religion or civilization) says something Stupid S. Then a thinker A comes with a piece of logic C, the kata-agoria, counter-public, to demolish S and thus the brains of P. The Greek verb’s original sense of “accuse” had weakened to “assert, name” by the time Aristotle applied katēgoria to his ten classes of “expressions that are in no way composite”.

Antagonistic has the same root: anti-agōn, “assembly, mass of people brought together”.

Split Brains And Multi Consciousness

April 13, 2022

Wisdom forges ahead of the self, however full of books the latter may be. New wisdom arises from beyond, and putting the mind out of the culturally expected zone.

Trail running means a potentially fully different world every couple of seconds. It takes one second to go from routine to head first at several meters per second (with a potentially terminal outcome [1]). Exactly what will happen if one quits concentrating. Foot landing is an adventure at any step, or bound, or leap (downhill mountain running is truly a succession of leaps, and a good runner can achieve dangerously high sustained speeds). Not surprisingly, command and control tends to be extremely localized and automatized. Here is an example, today:

***

Suddenly, wiggly is detected. There are no words, no thought, just wiggly generating motion. The quickest part of the brain (and I literally felt it to be in the back of the head, going straight from visual area to cerebellum and legs) orders a general danger emergency jump, with particular lift of the left leg, where wiggly has been perceived. The wiggly input also launches an adrenaline burst. And a visual, directed inspection of wiggly.

Meanwhile the frontal cortex, and one literally feels it’s in the front, retorts with a slower analysis. Wiggly has got to be a root because of its general location, on a piece of asphalt, and it was not actually dynamic, and strong winds have brought innocuous wigglies down. 

Then an arbitration area kicks in, and I feel it’s in between. Arbitration decrees that the quick reaction area probably got it wrong, but that it does not hurt to jump; still, arbitration sends a moderation order to the jump, because emergency jumps are dangerous.

Such is the human brain.

Or more exactly, the human brains. The human brain is made of many. 

Even with half his brain dead, from strokes, bullets and what not, the bloody tyrant Lenin could provide astute opinions about his successors…  

Human brains are made of different pieces, not all equal, doing different things, and then conferring at a higher level called “consciousness” or “thinking”. 

The situation above happened April 12, 2022, but I had encountered an ultra rare snake on cement a few miles away, wiggling away very fast, a few days prior. It was of a sub-species of Garter snake, mostly jet black and scarlet red, related to the colorful one represented. A few weeks prior, on dirt, the scene above repeated, but that time there was a real snake below my left shoe! It nearly got pancaked. (Those snakes are not dangerous.)

Conclusions:

  1. To speak of human consciousness is a simplification: a given brain has many coexistent consciousnesses, and they work at different speeds, in different ways, and are focused on non-intersecting inputs and outputs [2]. The wiggly = jump away is obviously a primitive form of hard wired consciousness (prehistoric men evolved in regions full of extremely dangerous snakes). 
  2. What we call “thinking” is often high level arbitrage. That doesn’t mean that lower level areas and entanglement are not conscious and thinking.
  3. The brain is a sort of democracy, with its own institutions: brain organs entangled through neural networks, and different areas get to vote.
  4. Social organizations should mimic the brain and for the same reason: neural democracy is hardwired. The brain works the world in parallel, not top down. That means democratically, not fascistically. Why? Because this way the brain can do more, and some of it at extreme speed.

Some currents of Buddhism suggest resting the mind by doing nothing; that’s supposed to be meditative. However, rock climbers learn to rest dynamically. I believe in dynamic meditation, and putting the entire brain to work, resting dynamically, not just breathing… The Dionysos approach, embraced by Socrates, getting drunk to reach joy and perspective, is part and parcel of this dynamical meditation (I don’t drink alcohol… no need… Crazy enough already…)

Ah, wiggly was just a sinuous branch thrown by the strong, cold wind. And the frontal cortex was right to suspect that, in spite of the sun, it was no snake weather. 

Patrice Ayme

***

[1] Once on Mount Tamalpais in California, my right shoe caught a thin piece of steel (!) which was sticking out after a (botched) trail repair job by rangers. I crashed over the next ten meters; the trail was straight, so the crash happened on the trail rather than the precipice on the right… Last summer I crashed twice in quick succession on icy rocks at 3,500 meters (!). Bad soles on those shoes I discovered. There again I was lucky not to have fallen off trail… Just got decorated with blood… Those crashes were actually more dangerous than the ones where I broke bones…

What I nearly stepped on a few years ago. (Actually a related subspecies, even rarer, as it is found just around one hill.)

[2] I think many things in many ways, all at the same time. Does that mean that I am many, Mr. Descartes?

NOVELTY SEEKING IS HARD WIRED Even In Primitive Brains. Censors Are Then Against Brains.

February 6, 2022

NOVELTY SEEKING IS ITS OWN REWARD, PHILOSOPHY GUESSED, AND NEUROLOGY NOW SHOWS!

Why do we have brains, we advanced animals? Because brains enable us to predict the world. Brains make us stand under the world, knowing its inner machinery, and see where it will go, before it gets there. That’s how falcons intercept ducks… They anticipate where the duck is going. Cheetahs do the same and estimate carefully which leg to swipe to make their prey tumble. Dogs, of course, make a calculus of variations to find out when they have to leave their run along the beach and start swimming, to reach the ball in the least time…
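The dog's optimization can actually be checked numerically (this follows Tim Pennings' well-known "Do dogs know calculus?" setup; the speeds and distances below are illustrative assumptions):

```python
import math

def total_time(y, x=20.0, z=10.0, r=6.4, s=0.9):
    """Run (x - y) meters along the beach at speed r, then swim the
    straight line sqrt(y^2 + z^2) to the ball at slower speed s."""
    return (x - y) / r + math.sqrt(y * y + z * z) / s

# Brute-force the best water-entry offset y over a fine grid:
best_y = min((i / 100 for i in range(0, 2001)), key=total_time)

# The calculus answer: y* = z*s / sqrt(r^2 - s^2)
exact = 10.0 * 0.9 / math.sqrt(6.4 ** 2 - 0.9 ** 2)
print(round(best_y, 2), round(exact, 2))  # both about 1.42
```

The dog runs most of the way and enters the water late, exactly as the minimization dictates: the numerical optimum matches the closed-form one.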

And thus how do we, brainy animals, figure out the world? By the so-called “scientific method” invented over 370 million years ago, when fishes became amphibians. Stepping onto the land, and figuring out what to do with all that mud and beyond, no doubt required a lot of experimenting. 

Is experimenting amusing? It had better be! Given two genetic variants of the same species, we descend from the amphibian that tended to explore more. But it turns out that NOVELTY SEEKING is so fundamental that being pleased by it is not enough. Novelty seeking turns out to be an automatic circuitry of the brain, at its core, a mandatory pathway, a general dispatch center: the Zona Incerta. In particular, the ZI is more fundamental than the frontal lobes, showing that curiosity preceded, and is more important than, the higher intelligence of the frontal lobes. This ontic order has enormous consequences, starting with civilization, sociology and politics.

***

A study published in Nature Neuroscience on December 13, by Ilya Monosov et al. (Department of Neuroscience at Washington University School of Medicine), shows that the Zona Incerta, a region deep within the brain, is responsible for controlling novelty seeking in animals.

The very fact that this is happening deep within the brain, not within the frontal cortex, shows that, evolutionarily speaking, the trait may be as old as half a billion years, and that curiosity, novelty seeking, is a fundamental characteristic of brainy animals.

Zona incerta neurons predict future novel objects and move our gaze to them. Turning them off disrupts novelty seeking.

An important aspect of the discovery is that the mechanism for novelty-seeking is partly separate from the usual dopamine reward system: there is a brain circuit for seeking out novelty for novelty’s sake.

Philosophically, that’s not surprising. It’s well known that curiosity kills cats. Even cows have it: novelty seeking explains why the grass is always greener on the other side of the fence.

It turns out that monkey brains can react differently to bad news: some wanting to know, others avoiding bad news… Interestingly, two monkeys can react identically to good news, yet one may confront bad news while its colleague avoids it.

https://www.sciencedaily.com/releases/2021/06/210611110807.htm

In the latest study from the same lab, the authors first tested whether novelty seeking was encoded by the dopamine reward system. They first showed that animals predict and actively seek out novelty, for its own sake, similarly to how they seek out primary rewards. They then tested whether dopamine-producing neurons, known to regulate reward seeking, could also regulate novelty seeking. Surprisingly, the researchers found that dopamine neurons were indifferent to predictions of future novelty.

However, not so the adjacent zona incerta (ZI). “The ZI is ideally suited to control novelty seeking behavior,” Monosov explains. The ZI receives input from higher-order visual areas that encode the meaning and novelty of objects, and projects to the superior colliculus, which controls the eye. When animals performed novelty seeking, ZI neurons were active. And when ZI cells’ activity was disrupted, the animals had less motivation to look for novel images.

The finding dissociates the mechanisms of reward-seeking and novelty-seeking when novelty has no extrinsic reward value, illustrating that the motivation to experience novelty can be independent from the motivation to gain reward.

Monosov says one of the interesting aspects of the zona incerta is that it appears to be a relay station for novelty seeking—processing information about the novelty and directly sending that signal to the motor control area that regulates gaze shifts—without additional stops along the way.

As fundamental as it gets: emotions are what come out of motion, and here, in the Zona Incerta of the brain, we have what orders motion itself. So the Zona Incerta is more fundamental than the emotional system.

And it’s an automatic system.

The consequence, philosophically, of the existence and primacy of the Zona Incerta’s novelty-seeking nature? Whatever political or thought system represses curiosity for novel things is not just anti-human, or anti-amphibian, but even anti-brain. In particular, censors of novel ideas, perspectives, etc. are anti-brain.

Curiosity is its own morality, a morality that is half a billion years old!

Patrice Ayme 

P/S 1: And now a word for those who criticize space exploration: so doing, they criticize the fact of having a brain. Landscapes on Mars are novel. The total area of Mars is roughly that of all the continents on Earth, 150 million square kilometers (with 13% or so of those continents submitted to Putin’s dictatorship; one must destroy Putin…).

Photo taken by NASA’s Curiosity rover, Gale Crater, 2015. The image, looking toward the higher regions of Mount Sharp, was taken on September 9, 2015. In the foreground, about 2 miles (3 kilometers) from the rover, is a long ridge teeming with hematite, an iron oxide. Just beyond is an undulating plain rich in clay minerals. And just beyond that are a multitude of rounded buttes, all high in sulfate minerals. The changing mineralogy in these layers of Mount Sharp suggests a changing environment in early Mars, though all involve exposure to water billions of years ago.

P/S 2: The Roman Republic actually had an office of “censor”. Should we then attribute the destruction of the Roman Republic to this anti-brain structure? Good question!

P/S 3: Einstein famously said “The most incomprehensible thing about the world is that it is comprehensible.” Well, sorry to say, he did not understand the first reason why brains evolved: precisely to under-stand the world. Now many people understand this, and thus can bask in the satisfaction that they understood all along why Einstein had a brain, although he himself had not figured it out…

P/S 4: The long and arduous genesis of the concept of “emotion” took at least five centuries in France, and the notion was born greatly out of the observation that motion in political matters was disquieting, risking “riotes” (same meaning as modern English “riots”). So “emotion” was related, in its etymology, its true and original meaning, to out-of-control novelty, disturbance, etc.

BRAIN MODULARITY, NONLOCALITY, CONSCIOUSNESS, QUANTUM

November 22, 2019

MANY BRAINS NEED ONE MIND…

Abstract: Brain modularity makes consciousness mandatory to enable motor neural command. Consciousness thus has to act as one, but nonlocally. The analogy with the Quantum Effect, how the whole gets to the point, is absolute. Thus it is compelling to suggest both physical phenomena are actually one.

***

It is known that the human mind consists of many specialized units designed by the process of natural selection. For example, there are auditory, visual, equilibrium, fear, language systems (Broca area, Wernicke area)… There is for example a system to detect motion (to spot predators, dangers and prey). Balance is processed in the cerebellum, short term memory in the hippocampus, etc.

While these modules often work together seamlessly, they don’t always, resulting in impossibly contradictory beliefs, or, more fundamentally, contrary desires (or watch what happens when patients have Parkinson’s). A little sound in the bush can mean delicious prey, a dangerous snake, or a calmly waiting leopard (the latter happened to me in Africa, for real. Twice.) The possibilities are connected to wildly different e-motions: move to grab, move to flee. Thus several contradictory systems can get pre-activated (the amygdala for fear, hunting systems).

The modular view of the mind evolved, starting in the Nineteenth Century with the discovery of various localizations in the brain (some even overdid it, and confused brain and shape of the skull).

That the brain is made of brains is not a new discovery. But I claim the consequence is mandatory consciousness. That’s new.

A contemporary author makes moralistic conclusions from the observed modularity. Modularity would cause “vacillations between patience and impulsiveness, violations of our supposed moral principles, and overinflated views of ourselves”. 

Modularity suggests to the same author that there is no “I”, no “self”. Instead, he insists that each of us is a contentious and debating “we”: a collection of discrete, interacting systems whose constant exchanges shape our interactions with one another and our experience of the world. This sort of revelation is not new: it’s already found in Freud, following Freud’s French neurology professor Jean-Martin Charcot, and in Nietzsche… And originally in Sade, or even Socrates and his famous “demons”.

Verily, while brain modularity is known to be true, that doesn’t imply there is no “I”. Just the opposite. Come to think of it.

Consciousness exists just to fabricate that “I”, to fabricate an executive agent. The “I” engages the neuromotor system, and/or the hierarchy of modules within. One authority to decide is necessary, so the “I” is necessary.

So what is this consciousness made of, how does it work? Many of the brightest minds have considered the question. I, in turn, question what they questioned, and the little they knew. 

Descartes, contrarily to what Damasio assumed, was no fool, and was more penetrating a mind than Damasio… three centuries earlier. Descartes’ observations on the nature of mathematical reasoning were so deep, I was really surprised (as I thought only yours truly was capable of them, being a mix of the bold, the deep and the obvious).

Descartes, of course, had no idea of Quantum Mechanics. QM was hard to produce: Planck was amazing that way, and then came a flurry of geniuses: Einstein, Bohr, Bose, De Broglie, Heisenberg, Dirac, Pauli… (Among others.)

Francis Crick came up with what he grandiloquently called “the astonishing hypothesis”. It posits that “a person’s mental activities are entirely due to the behavior of nerve cells, glial cells, and the atoms, ions, and molecules that make them up and influence them.” Crick claims that the scientific study of the brain during the 20th century led to the acceptance of consciousness, free will, and the human soul as subjects for scientific investigation. Of course none of this is new, except for the detailed machinery: Descartes proposed the soul was in the pineal gland, and asserted animals (hence, implicitly, humans) were machines…

Meanwhile the notion of machines has now been completely changed into something nonlocal and quirky, thanks to Quantum Mechanics, which has blown up laboratory reality into something… cosmic. Thus Crick and all the others miss one point: the brain is not a classical machine, it’s a Quantum one. How do I know this? In the simplest way: the universe is Quantum, not classical. The Quantum is complex, first of all, because it’s nonlocal. That means reality is entangled at a distance: that’s the entire challenge of the Quantum computer. Recently a baby Quantum computer entangled ten photons: that was viewed as a great success. In a brain, at least trillions of trillions of particles get entangled, each microsecond…

Guess what? To treat all these brain modules as one, to bring them to cooperate, one conductor, consciousness too, has to be nonlocal. 

Right, a sort of classical non locality in the brain is not just imaginable, but a fact: why else all those long connections (axons) throughout the brain? But the brain is involved in zillions of zillions of Quantum processes every microsecond (zillions is a tech term meaning more than any known number; just kidding but not really). 

Some will say QM does not work at room temperature, not at long range at room temperature, etc. But they don’t know anything; they just talk like they know what they are supposed to say. In truth, High Temperature Superconductivity is a fact… and NOT explained. The only thing clear is that long-range, nonlocal Quantum effects are involved (the efficiency is 100%). If, out of a trillion Quantum processes in the brain in one microsecond, one such process delocalizes enough to cover the brain, that’s enough to create a plausible Quantum substrate for Quantum epistemology.

***

Don’t sneer that Quantum effects would be too small, or involve too few particles. A few Quantum particles (Einstein’s “Lichtquanten”) can have a big effect: when a probe passing Pluto at an infernal clip shot photons towards Earth, very few of these were received. Actually, Voyager I, launched decades ago and now out of the heliosphere, is an even better example [1]. We get just one photon from Voyager every few seconds, but that’s enough.

Quantum Mechanics computes by being all over simultaneously. The brain does the same, because being all over the place, in all localities simultaneously, enables contextual computing. Consciousness then tries to put some order, to result in action items.

 

The exact same thing happens in Quantum Mechanics: the fabrication of order in Quantum Mechanics comes from singularization (also known as “collapse of the wave packet”, which happens after “decoherence”, a distinction of no difference…)… Which is equivalent to CONSCIOUSLY firing a particular module in the brain by connecting it to the action neurology (the neuromotor cortex and sub-systems such as the intestine, with its hundreds of millions of neurons…).

***

In conclusion, that the brain is made of modules was already obvious to Descartes, and amply confirmed by 1900 CE. What is new is that we now have a candidate to use as a medium for consciousness: what underlies Quantum Physics itself, with its nonlocal, and non-measurable, nature.

Philosophically, the Brain and the Quantum exist to steer globally according to local conditions [2]. The Quantum is the solution to the same problem the Brain has: how to steer the general, from local conditions.

Suggesting from the preceding that consciousness is a Quantum phenomenon is not foolhardy. There is a precedent. After Maxwell found that electromagnetic waves travel at the speed of light, he suggested identifying the two. The situation here is not as clear, and we don’t have a few equations and one speed. Instead we have the need for brain nonlocality, from brain modularity. Right, it is classically achieved with axons. But it is tempting to suggest the feeling of existence is achieved through the Quantum.

Patrice Ayme

***

***

[1] Passing Jupiter, Voyager I sent photons towards an antenna which received around 700 of them per second. Now it’s roughly 20 billion kilometers away, 40 times further, so the same antenna would receive only 700/(40^2) ≈ one photon every two seconds. We can still get a correct information flow from that. The point is that we don’t need many events to reconstruct a weak signal.
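The footnote’s inverse-square arithmetic can be checked in a few lines (distances here are expressed in units of the Jupiter-era distance, as the footnote assumes):

```python
# Inverse-square law: the photon rate at a fixed antenna falls with the
# square of the probe's distance.
def photon_rate(rate_at_reference: float, distance_ratio: float) -> float:
    return rate_at_reference / distance_ratio ** 2

rate_now = photon_rate(700.0, 40.0)   # ~700 photons/s at Jupiter, now ~40x further
seconds_per_photon = 1.0 / rate_now   # ~2.3 seconds between received photons
```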

***

[2] This is the famous two-slit experiment. What is local (a slit) has a global effect (the global interference pattern). Similarly, a Brain has to take into account what is found locally to establish a general, adaptable model of reality.

What Is A Logic? Just A Piece Of Mind

January 15, 2017

I would propose that a logic is anything which can be modelled with a piece and parcel of brain.

I will show, surprisingly enough, that this is a further step in Cartesian Logic.

At first sight, it may look as if I were answering a riddle, by further mysteries. Indeed, but with mysteries which can be subjected to experimental inquiry (now or tomorrow).

What is a brain? A type of Quantum Computer! And what is Computing, and the Quantum? Well, works in progress. There is something called Quantum Logic, but it does not necessarily define the world, as exactly what Quantum Physics is, is still obscure.

In practice? Logic is what works, a set of rules to go from a set A of statements to a set B of statements.
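That working definition, rules carrying a set A of statements to a set B, can itself be sketched as code. Below is a toy deduction engine (the names and the tuple encoding are my own illustration, not any standard library): a “logic” is the closure of a set of statements under its inference rules, here just modus ponens, with implications encoded as `('->', p, q)` tuples:

```python
def deduce(statements, rules):
    """Close a set of statements under the given inference rules.
    Each rule maps the current set of statements to a set of new statements."""
    known = set(statements)
    changed = True
    while changed:
        changed = False
        for rule in rules:
            for new_statement in rule(known):
                if new_statement not in known:
                    known.add(new_statement)
                    changed = True
    return known

def modus_ponens(known):
    """From p and ('->', p, q), derive q."""
    return {s[2] for s in known
            if isinstance(s, tuple) and s[0] == '->' and s[1] in known}

# From p, p->q, q->r, the closure derives q and then r.
derived = deduce({'p', ('->', 'p', 'q'), ('->', 'q', 'r')}, [modus_ponens])
```

Different rule sets give different “logics”; the closure operation is what they all share.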

In this perspective, Medieval logic did not decline. Instead it transmuted into mathematics.

The teaching of Logic or Dialectics, from a collection of scientific, philosophical and poetic writings, French, 13th century; Bibliothèque Sainte-Geneviève, Paris, France. The 13th century was a time of extreme intellectual activity in Europe, superior to anything else in the world, centered 800 miles around Paris. In particular the heliocentric system was proposed by Buridan, after he overthrew Aristotelian Physics, by inventing and discovering inertia.

An article in Aeon, “The Rise And Fall And Rise Of Logic”,

https://aeon.co/essays/the-rise-and-fall-and-rise-of-logic

reflects on the importance of the history of the notion of logic:

“Reflecting on the history of logic forces us to reflect on what it means to be a reasonable cognitive agent, to think properly. Is it to engage in discussions with others? Is it to think for ourselves? Is it to perform calculations?

In the Critique of Pure Reason (1781), Immanuel Kant stated that no progress in logic had been made since Aristotle. He therefore concludes that the logic of his time had reached the point of completion. There was no more work to be done. Two hundred years later, after the astonishing developments in the 19th and 20th centuries, with the mathematisation of logic at the hands of thinkers such as George Boole, Gottlob Frege, Bertrand Russell, Alfred Tarski and Kurt Gödel, it’s clear that Kant was dead wrong. But he was also wrong in thinking that there had been no progress since Aristotle up to his time. According to A History of Formal Logic (1961) by the distinguished J M Bocheński, the golden periods for logic were the ancient Greek period, the medieval scholastic period, and the mathematical period of the 19th and 20th centuries. (Throughout this piece, the focus is on the logical traditions that emerged against the background of ancient Greek logic. So Indian and Chinese logic are not included, but medieval Arabic logic is.)”

The old racist Prussian, Kant, a fascist, enslaving cog in the imperial machine turned false philosopher was unsurprisingly incorrect.

The author of the referenced article, Catarina Dutilh Novaes, is professor of philosophy and the Rosalind Franklin fellow in the Department of Theoretical Philosophy at the University of Groningen in the Netherlands. Her work focuses on the philosophy of logic and mathematics, and she is broadly interested in philosophy of mind and science. Her latest book is The Cambridge Companion to Medieval Logic (2016).

She attributes the decline of logic, in the post-medieval period known as the Renaissance and the Enlightenment, to the rise of printed books, self-study and the independent thinker. She rolls out Descartes, and his break from formal logic:

Catarina writes: “Another reason logic gradually lost its prominence in the modern period was the abandonment of predominantly dialectical modes of intellectual enquiry. A passage by René Descartes – yes, the fellow who built a whole philosophical system while sitting on his own by the fireplace in a dressing gown – represents this shift in a particularly poignant way.”

Speaking of how the education of a young pupil should proceed, in Principles of Philosophy (1644) René Descartes writes:

After that, he should study logic. I do not mean the logic of the Schools, for this is strictly speaking nothing but a dialectic which teaches ways of expounding to others what one already knows or even of holding forth without judgment about things one does not know. Such logic corrupts good sense rather than increasing it. I mean instead the kind of logic which teaches us to direct our reason with a view to discovering the truths of which we are ignorant.

Catarina adds: “Descartes hits the nail on the head when he claims that the logic of the Schools (scholastic logic) is not really a logic of discovery. Its chief purpose is justification and exposition.”

Instead, Descartes claims, and I claim, that a new sort of logic arose: Medieval Logic transmuted itself into mathematics (Descartes does not say this, but he means it). And mathematics is not really logical in the strictest sense, as it has too many rules to be strictly logical.

Buridan, a great logician who studied well the Liar Paradox (which gave the Incompleteness Theorems) had students such as (bishop) Oresme, who demonstrated what, it turned out, were the first practical theorems in calculus (more than 2 centuries before the formal invention of calculus by Fermat, and Fermat’s discovery of the Fundamental Theorem of Calculus, that integration and differentiation are inverse to each other).

For example, under the influence of Buridan and then Oresme, graphs and later equations themselves were invented. So logic became mathematics. That was blatant by the time Descartes invented Algebraic Geometry. Algebraic Geometry gave ways to deduce, to go from a set A to a set B, using a completely new method never seen before.

In turn, by the Nineteenth Century, mathematical methods contributed to old questions in Logic (the most striking being the use of Cantor Diagonalization to show incompleteness, thanks to the Liar Paradox, a self-referential method).

In this spirit, not only Set Theory, naive or not, but also Category Theory can be viewed as a type of logic. So is, of course, computer science. Logic is whatever enables one to deduce. Thus even poetry is a form of logic.

Logic is everywhere there is mental activity, and it is never complete.

If logic is just pieces of brain, then what? Well, some progress in pure logic can be made just by paying attention to how the brain works. The brain works sequentially, temporally, with local linear logics (axonal and dendritic systems). The brain tends to be deprived of contradictions (but not always, and nothing infuriates people more than to be exposed to their own contradictions and gaps in… logic). Also, all these pieces of brain, these logics, are not just temporally ordered, but finite.

As we try to use logic to look forward, as a bunch of monkeys messing up our space rock, it is important to realize that what logic is, has not been properly defined, let alone circumscribed. Indeed, if, surprise, surprise, logic has not been properly defined, let alone circumscribed, much more is logically possible than people suspect!

Patrice Ayme’

 

Bees Learn From Culture & Experience

October 25, 2016

When “INSTINCT” IN BEES TURNS OUT TO BE LEARNING, JUST AS HUMANS DO: Bees Practice The Experimental Method, Observe Others & Transmit Knowledge To Others!

Bumblebees can experiment and learn to pull a string to get a sugar water reward and then pass that skill on to other bees.

This comforts a long-held opinion of mine. See: https://patriceayme.wordpress.com/2013/10/02/instinct-is-fast-learning/.

There I claimed that:

“Innate Knowledge” is a stupid idea. The truth is the exact opposite: LEARNING IS EVERYWHERE, OUT THERE. Learning is the opposite of innate. This insight has tremendous consequences on our entire prehension of the world.

My reasoning was typical philosophy: well-informed general reasons. Now there is increasing evidence that not only big brained vertebrates, but smaller brained invertebrates learn.

Conclusion: we humans do not differ from other animals, even insects, in kind, but in the amount of capability we enjoy. Thus, if we want to be truly human, we cannot just lie there like cows. If we want to be fully human, we must learn more of what is significant, and learn how to learn it. We cannot just sit on our hands and do as Barack Obama, the do-not-much, not-so-funny clown-in-chief, did: obsess about easy one-liners and sports scores.

***

Intelligence Is A Fact, Instinct Just A Vague Theory:

For years, cognitive scientist Lars Chittka was intimidated by studies of apes, crows, parrots, and other brainy giants. Crows make tools. And they obviously talk to each other (my personal observation in the mountains). From the latest research in Brazil, parrots seem to have advanced language among themselves (which we don’t understand yet, as it is too fast and high-pitched for humans to hear, and there is too much “austerity” around to pay scientists to understand the world as much as they could).

Chittka worked on bees, and almost everyone assumed that the insects acted on so-called instinct, not intelligence. Instinct? Come again.

Hillary Pulling Out Her Reward? As Bumblebees Can Learn To Pull Strings, So Can Plutocrats. Thus We Need To Outlaw Such Pluto Strings

Sophisticated behavior from “instinct” is a rather stupid assumption, because it is a superfluous assumption: who needs instinct to explain an animal’s behavior, when we have simple, old-fashioned intelligence to explain it? Well, speciesists! (Same as: who needs the Big Bang, a theory, when we have Dark Energy, a fact, to explain the expansion of the universe.)

Indeed we know of intelligence (some people, and certainly children, can be observed to have it). We can observe intelligence, and roughly understand how it works (it works by establishing better neurology, that is, neurology which fits facts better).

We can define intelligence; we cannot define instinct. But what is an instinct? We can neither observe “instinct” for sure, as opposed to learning, nor give a plausible mechanism by which “instinct” would generate complex behaviors (DNA does not code for “instinct”).

When carefully analyzed, complex behaviors turn out to be learned. In humans, social motivations, such as the Will to Power, are primary; thus Chittka was motivated by: “…a challenge for me: Could we get our small-brained bees to solve tasks that would impress a bird cognition researcher?”

***

Einstein Bumblebees & Their Superstrings:

Now, it seems his team has succeeded in duplicating, with insects, what many birds and mammals are famous for. It shows that bumblebees can not only learn to pull a string to retrieve a reward, but can also learn this trick from other bees, even though they have no experience with such a task in nature. Christian Rutz, a bird cognition specialist at the University of St Andrews in Scotland, concludes that the study “successfully challenges the notion that ‘big brains’ are necessary for new skills to spread”.

Chittka and his colleagues set up a clear plastic table barely tall enough to lay three flat artificial blue flowers underneath. Each flower contained a well of sugar water in the center and had a string attached that extended beyond the table’s boundaries. The only way the bumblebee could get the sugar water was to pull the flower out from under the table by tugging on the string.

The team put 110 bumblebees, one at a time, next to the table to see what they would do. Some tugged at the strings and gave up, but two actually kept at it until they retrieved the sugar water: two Einstein bees out of 110! In another series of experiments, the researchers trained the bees by first placing the flower next to the bee and then moving it ever farther under the table. More than half of the 40 bees tested learned what to do with the strings. See: “Associative Mechanisms Allow for Social Learning and Cultural Transmission of String Pulling in an Insect”.

Next, the researchers placed untrained bees behind a clear plastic wall so they could see the other bees retrieving the sugar water. More than 60% of the insects that watched knew to pull the string when it was their turn. In another experiment, scientists put bees that knew how to pull the string back into their colony and a majority of the colony’s workers picked up string pulling by watching one trained bee do it when it left the colony in search of food. The bees usually learned this trick after watching the trained bee five times, and sometimes even after one single observation. Even after the trained bee died, string pulling continued to spread among the colony’s younger workers.   

But pulling a string does not quite qualify as tool use, because a tool has to be an independent object that wasn’t attached to the flower in the first place. Yet other invertebrates have shown they can use tools: Digger wasps pick up small stones and use them to pack down their burrow entrances, for example.

***

Bees: New Aplysias For Intelligence & Culture?

Nobel laureate Eric Kandel, following a mentor of his in Paris, worked on the brain of the giant California sea snail, Aplysia californica, with its 26,000 neurons. This enabled progress in the understanding of basic learning and memory mechanisms. However, Aplysias are not into tools and culture. Bees are. Bees have a million neurons, and a billion synapses.

[The bee brain is only ~0.5 mm across, whereas the human brain is ~400 times, i.e. 4 × 10^2 times, larger in linear size; its volume is thus ~(4 × 10^2)^3 ≈ 10^8 times that of the bee brain. Scaled up with the same neuronal density, the human brain should have 10^8 × 10^6 = 10^14 neurons, which is, instead, the number of synapses in the human brain (the actual count is ~10^11 neurons). Thus we see, in passing, that human neurons pack much more power than bee neurons! That has got to be a quantitative difference…]
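The bracketed scaling argument, assuming the round numbers used above (a bee brain of ~10^6 neurons, a human brain ~400 times larger in linear size), runs as follows:

```python
bee_neurons = 10**6             # ~a million neurons in a bee brain
linear_scale = 400              # human brain ~400x larger in linear size
volume_scale = linear_scale**3  # volume grows as the cube: 6.4e7, order 10^8

# Same neuronal density would predict ~10^14 human neurons...
predicted_human_neurons = bee_neurons * volume_scale  # 6.4e13

# ...but the measured figures are ~8.6e10 neurons and ~1e14 synapses,
# so each human neuron must pack far more connectivity than a bee neuron.
actual_human_neurons = 8.6 * 10**10
actual_human_synapses = 10**14
```

The density-scaled prediction overshoots the real neuron count by a factor of several hundred, which is the gap the bracketed remark points at.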

The discovery of bee culture involved almost 300 bees, documenting how string pulling spread from bee to bee in multiple colonies. Cognitive studies of vertebrates like birds and monkeys typically involve smaller tribal units (30, not 300). Thus the bee studies of culture, being more broadly based, show propagation better (at least at this point).

Clearly bees are equipped, psychobiologically, for the meta-behavior known as creative culture: learning from others, while experimenting on one’s own. Thinkers of old used to believe these behaviors were exclusively human: animals were machines (Descartes), and only man used tools (Bergson, who called man “Homo Faber”, Man the Maker).

That insects can learn and experiment, and have culture, was obvious all along, according to my personal observations of wasps’ intelligence: when I threaten a wasp, it gets the message and flies away (I have done the experiment hundreds of times; it does not work with mosquitoes). Reciprocally, if I try to get a wasp out from behind a window, it somewhat cooperates, instead of attacking me. Whereas if I come next to a nest, I will be attacked if my intent is deemed aggressive (reciprocally, if a nest is established in a high-traffic area, the culture of the local wasps is such that they will not attack).

What is the neural basis for these “smarts”? Some say that the insects might not be all that intelligent, but that instead, “these results may mean that culture-like phenomena might actually be based on relatively simple mechanisms.” Hope springs eternal that, somehow, human intelligence is different.

Don’t bet on it. Studying how bees think will help us find how, and why, we think. And the first conclusion is that it matters what we do with our brains. If we want to rise above insects, we cannot mentally behave as if we were insects all day long. Being endowed with human intelligence is not just an honor, but a moral duty. (Learn that, clown in chief!)

Patrice Ayme’




Skulls in the Stars

The intersection of physics, optics, history and pulp fiction

Patrice Ayme's Thoughts

Trying To Think Better By All & Any Means. To Be Human Is To Unleash As Much Intelligence As Possible, Instincts & Values Flow, Even Happiness. History and Science Teach Us Not Just Humility, But Power, Smarts, And The Ways We Should Embrace. Naturam Primum Cognoscere Rerum

Learning from Dogs

Dogs are animals of integrity. We have much to learn from them.