The Dawn of the Walking NeuroMate

Science is on the verge of merging human brains with AI 

For decades, we’ve held our computers in our hands or tucked them into our pockets. But a new frontier is opening where the “device” disappears entirely. Companies like Science Corporation are no longer just building tools; they are building a bridge into the human mind.

This is the era of the NeuroMate, a world where the line between your own thoughts and artificial intelligence begins to blur.

And it all starts with an implant in your brain connected to an AI device. The device doesn’t have an official name yet. But it will. I call it the NeuroMate.

At first, this won’t be invisible. The system will likely rely on a dedicated external device, such as a small wearable pack, headset, or medical unit, that handles the heavy computing.

The brain implant acts as the bridge, while the processing power remains just outside the body. Over time, that external piece will shrink, integrate, and become less noticeable, but it will still be there, quietly powering the connection.

The Two-Way Conversation

Most of us are used to “commanding” technology. We click a mouse or speak to a voice assistant. But the “merge” science is working on is different. It is bidirectional, meaning it’s a continuous loop:

Mind to Machine:
You don’t need to speak or type. The NeuroMate “listens” to the electrical firing of your neurons and translates your intent into action.

Machine to Mind:
This is the game-changer. The computer sends signals back. It’s not a voice in your ear. It’s a pulse of data your brain may perceive as a vision, a memory, or a sudden “knowing.”

This is not a one-way command. It is a feedback loop, brain to AI to brain again, unfolding in real time.
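The loop described above can be sketched in miniature. The Python below is purely illustrative; every function name, channel, and threshold is invented for the sake of the sketch, and real neural decoding involves signal processing far beyond a single rule.

```python
# Toy model of the bidirectional loop: brain -> AI -> brain.
# All names, channels, and thresholds are invented for illustration.

def decode_intent(neural_signal):
    """Mind to machine: translate raw firing activity into a symbolic intent."""
    # Hypothetical rule: strong activity on one channel signals a recall request.
    return "recall" if neural_signal.get("channel_a", 0) > 0.5 else "idle"

def ai_response(intent):
    """The external processor acts on the decoded intent."""
    return "requested passage" if intent == "recall" else None

def encode_feedback(payload):
    """Machine to mind: package the response as a stimulation pattern."""
    return {"pattern": payload} if payload is not None else None

def feedback_loop(neural_signal):
    """One pass of the continuous loop: decode, respond, stimulate."""
    return encode_feedback(ai_response(decode_intent(neural_signal)))
```

A real system would run this loop continuously, in both directions, which is exactly why "merge" fits better than "connection."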

Why “Merge” Is the Only Word That Fits

We often say we “connect” to the internet, but you can always disconnect. A merge implies something deeper.

Science Corp is experimenting with “biohybrid” technology, interfaces that use living cells to knit the computer into your brain’s gray matter. When your biological neurons and the AI’s digital processors begin working together, the NeuroMate is no longer just a tool you use.

It becomes part of how you think, see, and interact with the world.

Life with a NeuroMate

Imagine a blind person “seeing” because a camera is feeding images directly into their visual cortex. Imagine someone who has lost the ability to speak, “thinking” a sentence and having it appear instantly on a screen.

Now bring it closer to home.

Imagine recalling any passage of Scripture at a moment’s notice. Not by flipping pages or searching an app, but because the words are immediately present in your mind. A verse you struggle to remember comes to mind instantly. Related passages connect themselves. Themes across the Bible unfold in real time as you speak or reflect.

The words are not stored in your brain like a file, but they are never out of reach.

Eventually, for the rest of us, it could mean solving complex problems with an AI partner that works alongside our thoughts in real time, or accessing knowledge as naturally as remembering a name.

We are standing at the dawn of something new, not a replacement of humanity, but a transformation in how humans think and interact with the world.

A person who is never truly alone in thought, because their NeuroMate is always there, always listening, always responding.

How Should Christians Respond to This Technology?

The rise of the NeuroMate doesn’t demand a simple answer. It calls for careful discernment.

There is little question that technology like this could be used for good. Restoring sight to the blind, helping the paralyzed communicate, or assisting those suffering from neurological disease reflects a long tradition of using human knowledge to relieve suffering.

But this technology does not stop at healing.

For the first time, we are looking at systems that may not only read the human mind, but also send signals back into it. That raises deeper questions, not just about what we can do, but about what we should allow.

Scripture emphasizes the importance of the mind and heart. The call is not only to think, but to guard what shapes our thinking. When a machine becomes part of the feedback loop of thought itself, Christians must ask:

Who is influencing the mind?

Who controls the system?

What happens to discernment when knowledge becomes instant?

Even something that sounds beneficial, like recalling Scripture at a moment’s notice, comes with a tension. There is a difference between having access to the Word and having it written on the heart.

Technology can assist. It can amplify. But it cannot replace spiritual formation, conviction, or wisdom.

So the question is not whether this technology will arrive. It is already on its way.

The question is whether we will approach it with clarity, caution, and conviction, or simply accept it because it is new and powerful.

Christians have faced moments like this before, where new tools promised convenience but carried deeper consequences.

This may be one of those moments again.

AUTHOR

Martin Mawyer

Martin Mawyer is the founder of the Digital Intelligence Project and the President of Christian Action Network. He is the host of the “Shout Out Patriots” podcast, and author of When Evil Stops Hiding. For more action alerts, cultural commentary, and real-world campaigns defending faith, family, and freedom, subscribe to Patriot Majority Report.

©2026. All rights reserved.


Please visit the Patriot Majority Report substack.

Human Cells in AI? Is Artificial Intelligence Going DEFCON Frankenstein?

Americans already fear AI. So why introduce something that makes it even harder to trust? 

“Human cells in AI.”

That’s all most Americans will need to hear.

It sounds like something out of a science fiction nightmare. And for a public already uneasy about artificial intelligence, it raises a simple question: how far is this going to go?

Now, to be fair, what scientists are actually building is more limited than that phrase suggests. These are lab-grown human cells used in controlled environments, functioning more like biological circuits than anything resembling a human mind.

But here’s the problem: that distinction may not matter at all.

Because this isn’t just about what AI is. It’s about how fast it’s being pushed and how little time people are given to understand it before the next boundary is crossed.

If this were happening in a vacuum, it might be different. But it’s not.

A recent Gallup poll shows that Americans are still deeply divided over artificial intelligence, even as it becomes more common in the workplace.

Only about 3 in 10 workers use AI frequently on the job. Roughly half use it rarely or not at all, even when it’s available to them.

And why? Not because they don’t have access. Because they don’t trust it.

Nearly half of non-users say they simply prefer to work the way they always have. Others cite ethical concerns, data privacy risks, or a belief that AI won’t actually help them.

At the same time, fear is rising. The percentage of workers who believe their job could disappear within five years due to AI has climbed from 15% to 18% in just one year, and it’s even higher among those already working with the technology.

In other words, Americans aren’t just unsure about AI. They’re uneasy, skeptical, and in many cases, actively resisting it.

And this is the environment into which we’re now introducing phrases like “human cells in AI.”

You can explain the science. You can clarify that these are lab-grown cells that function more like biological circuits than anything resembling a human mind.

But that’s not how most people will process it.

What they will hear is something much simpler:

A line has been crossed.

And once that line is crossed, the next question comes just as quickly:

What’s next?

If human cells can be integrated into AI systems today, what prevents more complex biological components from being integrated into AI systems tomorrow? More advanced neural structures? More direct integration between biology and machines?

Scientists may say those scenarios are distant, speculative, or even unrealistic. But to the public, that distinction often doesn’t matter.

Because from their perspective, the pattern is already familiar:

Yesterday, this wasn’t even on the table.

Today, it’s being tested in labs.

Tomorrow?

That’s exactly what people are afraid to find out.

And let’s be honest about what people are actually afraid of.

AI is getting closer to becoming human.

That may not be what scientists intend. It may not even be technically accurate.

But it is absolutely how it will be perceived. And perception matters.

Most people aren’t going to read a detailed explanation of lab-grown cells, controlled environments, or biological circuitry. They’re not going to study the nuances or follow the caveats.

They’re going to hear three words…

“human cells in AI”

…and draw their own conclusion.

And in a moment when trust in artificial intelligence is already fragile, that conclusion is unlikely to be charitable.

Here’s what makes all of this even more concerning.

The AI industry already understands that public opinion matters. That’s why it is spending millions of dollars lobbying Congress, state officials, governors, and even the White House, working to shape how this technology is regulated and deployed.

But while those efforts are focused on influencing policymakers, far less attention is being given to something just as important:

The people who are expected to live with it.

Because right now, those people are uneasy. They are skeptical. And in many cases, they are already resisting what AI is becoming.

And instead of slowing down, explaining more clearly, and addressing those concerns head-on, the industry continues to introduce new developments that sound, to the average person, like another line has just been crossed.

“Human cells in AI” is not just a technical milestone.

It’s a message.

And the message many Americans will hear is simple:

This is moving faster than we can understand… and no one is asking us if we’re ready.

If the goal is to build trust, this is not how you do it.

You don’t build trust by adding new fears before people have worked through the old ones.

You don’t calm concerns by dismissing them or explaining them away after the fact.

And you certainly don’t earn confidence by pushing forward while the public is still trying to catch its breath.

Because if this continues, the issue won’t just be what artificial intelligence becomes.

It will be how strongly people push back against it.

AUTHOR

Martin Mawyer


Grand Theft…Voice?

This silent hijack is turning trusted accounts into a worldwide network of stolen voices 

Overview: This isn’t a story you should skim or set aside.

Because the next message you receive from a trusted friend may not be from them at all.

This article isn’t about a stolen password. It’s about the hijacking of the American voice. A digital Trojan horse where trusted accounts are being turned into involuntary mouthpieces, carrying messages they never wrote, to audiences that never question them.


Imagine the voice of every credible influencer, media commentator, lawmaker—even heads of law enforcement—all sending messages to your inbox saying the same thing.

And not just any “same thing,” but a message designed to pull you into their political, moral, religious, or cultural thinking. Your heroes, trusted voices, family members, perhaps even your pastor, are all trying to convince you to take on a new belief, a new opinion, a message so alien that you never imagined all your friends and trusted voices could share it.

You feel the nudge to agree. You don’t want to be left out in the cold or make enemies of friends and family. And you can’t bear the thought of turning a cold shoulder to the heroes you’ve grown to love.

Fantasy, right?

Not so. Not so at all.

Let’s Take a Pause. Why?

Because this is a dramatic moment. It is a moment that no one, inside or outside the tech world, is talking about, but it should be shouted from the rooftops. So, let me explain this clearly so there is no room for confusion.

What we are seeing emerge from the Dark Web are sinister players who want to change the world through private messaging. If you don’t use X (formerly Twitter), you probably don’t have a clue what that means.

But it’s like a friend texting your phone.

You recognize the name immediately. It’s someone you trust. Someone you think highly of. You’re ready to read what they have to say, and you’re not questioning it.

In your mind, there’s no doubt who you’re talking to.

But you’re not.

The message feels right. The account looks right. It even carries his picture, the one that tells you it’s really him.

Except it isn’t.

The Three-Stage Heist

In the wings of this “trusted friend” are three scams you’re never going to see coming.

First, he is going to steal your identity by asking you to “vote for him” for an achievement award. He’ll give you a link. You’ll do it—it’s a simple request from a friend, a father, or a pastor. You aren’t going to say no. You click, you fill out the form, you sign off.

Then you find someone has just stolen your account. How do you know? Because your sister-in-law just got a message from you asking her to “vote for you.” The hacker has hijacked your contacts and is now sending messages to your entire network… under your name.

The Far-Reaching Sinister Plot: Repurposing the Soul

This is where it turns from a scam into a weapon.

Later, you find out that “you” are sending messages to friends saying you now support gun control—or that you no longer support it. The hackers send what they want, under your name, your account, and your photo.

Now imagine this scenario multiplying from your sister-in-law to her co-worker. On and on and on. The hacker creates a network first in the hundreds, then thousands, then millions—sending out the same politically or culturally charged message to change how the world thinks on highly contested issues.
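The arithmetic behind “hundreds, then thousands, then millions” is simple compounding. The sketch below is a back-of-the-envelope Python model; the contact count and takeover rate are invented assumptions, so it shows the shape of the growth, not a measured forecast.

```python
# Back-of-the-envelope model of a hijack chain. Each compromised account
# messages `contacts` people, and `takeovers_per_100` of every 100 recipients
# click the phishing link and lose their own account. All numbers are invented.

def compromised_after(hops, contacts=100, takeovers_per_100=5, seed=1):
    """Total accounts compromised after a number of message hops."""
    total = newly = seed
    for _ in range(hops):
        newly = newly * contacts * takeovers_per_100 // 100  # one new wave
        total += newly
    return total

# Under these assumptions each wave is 5x the last:
# 2 hops reach dozens of accounts; 9 hops reach millions.
```

Even with a tiny click-through rate, a trusted name on the message keeps the chain alive; that trust is what the scam is actually harvesting.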

The Systemic Failure

While platforms like Meta claim to be deploying AI to catch these “behavioral shifts,” X has remained largely reactive. The platform isn’t just failing to protect your password; it is failing to protect the integrity of the national conversation.

They are allowing ghost ships to sail under the flags of our most trusted leaders, firing on their own fleets while the admirals are locked in the brig.

The New Reality

We have to stop thinking of this as “losing an account.” We are seeing the systematic harvesting of credibility.

When the platform fails to intervene, and followers fail to see the mask, it’s no longer about a hijacked account. It’s about identity theft leading to worldwide propaganda.

The real danger isn’t that accounts are being stolen. It’s that once they are, they keep speaking, and people keep listening, never realizing the voice is no longer real.

AUTHOR

Martin Mawyer


Climate Lawfare Suffers Major Defeat at the Supreme Court

The Supreme Court on Friday unanimously struck down a bizarre effort at climate lawfare, which aimed to penalize Chevron for its role in boosting the U.S. war effort against the Nazis and Imperial Japan in World War II.

The ruling is good news for sanity, but it also sets an important precedent for the Left’s ongoing climate lawfare efforts. You see, climate alarmist lawyers have sought to weaponize state laws against oil and gas companies, and the ruling in Chevron v. Plaquemines Parish delivers a knockout punch to at least part of their nefarious strategy.

While the case turns on a technicality, that technicality means a great deal to the environmentalist trial lawyers seeking to make a buck and undermine the oil industry.

As Justice Clarence Thomas—a President George H.W. Bush appointee—notes in his opinion for the unanimous court, Plaquemines Parish, Louisiana, and its fellow parishes filed no fewer than 42 state-court lawsuits in 2013 against oil and gas companies under a 1978 state law for alleged violations in the 1940s.

Many of the oil companies successfully sought to have the cases removed from state court to federal court, because the companies had been acting under a federal officer in matters “for or relating to any act under color of such office.” Yet lower courts had rejected Chevron’s efforts to move the case out of state court, so Thomas had to painstakingly explain that the phrase “relating to” can mean “to stand in some relation; to have bearing or concern; to pertain; refer; to bring into association with or connection with.”

Of course, this isn’t really about the meaning of the word “relate.” It’s all about whether judges who support the climate alarmist narrative can side with climate lawfare in the teeth of both the law’s text and common sense.

It does not make sense to use a Louisiana law to penalize an energy company in Louisiana state court for actions a previous version of that company took in service of a federal objective on the orders of the federal government.

This move from state to federal court may seem insignificant, but it is not. The oil and gas industry engages in interstate commerce, and its operations largely fall under federal law. Climate alarmist politicians in some states seek to pass laws restricting the industry’s operations, and climate alarmist lawyers seek to weaponize such laws against the industry as a whole, based on the idea that the human burning of fossil fuels is bringing about some indeterminate apocalypse.

Other Forms of Climate Lawfare

Suing oil companies for helping America defeat the Nazis is one thing, but the issue of whether state or federal law prevails in climate cases remains quite relevant, and it’s the centerpiece of another Supreme Court case.

Boulder, Colorado, sued Suncor Energy, claiming that its key business model of burning fossil fuels for energy has caused concrete harm under state law. The Colorado Supreme Court allowed Boulder’s case to proceed, so Suncor appealed to the Supreme Court, which agreed to hear the case.

Think about the implications of this for a second. Boulder claims that the burning of fossil fuels has caused concrete harm—even though it is unclear exactly how fossil fuels impact the global climate and most climate alarmist predictions have proven false. The city attributes specific weather harms not to God or the planet’s ecosystem but to a specific company, and then claims to know what is unknowable—how much that specific company’s efforts contributed to Boulder’s weather.

In doing so, Boulder takes upon itself the ability to regulate an industry that doesn’t just operate across state lines, but is vital to the global economic system.

But it gets worse. David Bookbinder, who served as part of the legal team representing Boulder at lower stages of litigation, described his climate lawfare efforts as “an indirect carbon tax.”

Tellingly, he added, “I’d prefer an actual carbon tax, but if we can’t get one of those… this is a rather, somewhat convoluted way, to achieve the goals of a carbon tax.”

In other words, this climate lawfare is a conscious effort to circumvent the voters.

With Suncor’s case against Boulder now before the justices, the Plaquemines Parish ruling suggests the court may decide that state law is incapable of handling the regulation of a global industry.

Other Implications

Friday’s ruling also shores up America’s standing in the world. As Steven Bucci, a 30-year Army Special Forces veteran, explained last year, a ruling in favor of Plaquemines Parish would have undermined U.S. national security. State courts shouldn’t be able to second-guess federal wartime decisions, and if they could, that might lead companies to reconsider assisting in America’s defense.

Thankfully, the Supreme Court made the right decision, and it did so on the merits of the law, such that all eight justices who considered it—Justice Samuel Alito recused himself—agreed that Plaquemines Parish’s case is baseless.

Here’s hoping this represents a step toward blocking climate lawfare going forward. Suncor v. Boulder will be the real test.

The Tsunami is Getting VERY Close…

Sorry to interrupt my three-part commentary on the pathetic K-12 education system, but an extraordinary development was just announced. This could be the most significant event of our lifetime!

This is a New York Times article by Thomas Friedman. My 2¢ is at the end.


Normally right now I would be writing about the geopolitical implications of the war with Iran, and I am sure I will again soon. But I want to interrupt that thought to highlight a stunning advance in artificial intelligence — one that arrived sooner than expected and that will have equally profound geopolitical implications.

The artificial intelligence company Anthropic announced Tuesday that it was releasing the newest generation of its large language model, dubbed Claude Mythos Preview, but only to a limited consortium of roughly 40 technology companies, including Google, Broadcom, Nvidia, Cisco, Palo Alto Networks, Apple, JPMorganChase, Amazon and Microsoft. Some of its competitors are among these partners because this new A.I. model represents a “step change” in performance that has some critically important (positive and negative) implications for cybersecurity and America’s national security.

The good news is that Anthropic discovered in the process of developing Claude Mythos that the A.I. could not only write software code more easily and with greater complexity than any model currently available, but as a byproduct of that capability, it could also find vulnerabilities in virtually all of the world’s most popular software systems more easily than before.

The bad news is that if this tool falls into the hands of bad actors, they could hack pretty much every major software system in the world, including all those made by the companies in the consortium.

This is not a publicity stunt. In the run-up to this announcement, representatives of leading tech companies have been in private conversation with the Trump administration about the implications for the security of the United States and all the other countries that use these now vulnerable software systems, technologists involved told me.

For good reason. As Anthropic said in its written statement on Tuesday, in just the past month, “Mythos Preview has already found thousands of high-severity vulnerabilities, including some in every major operating system and web browser. Given the rate of A.I. progress, it will not be long before such capabilities proliferate, potentially beyond actors who committed to deploying them safely. The fallout — economics, public safety and national security — could be severe.”

Project Glasswing, Anthropic’s name for the consortium, is an undertaking to work with the biggest and most trusted tech companies and critical infrastructure providers, including banks, “to put these capabilities to work for defensive purposes,” the company added, and to give the leading technology firms a head start in finding and patching those vulnerabilities.

“We do not plan to make Claude Mythos Preview generally available, but our eventual goal is to enable our users to safely deploy Mythos-class models at scale — for cybersecurity purposes, but also for the myriad other benefits that such highly capable models will bring,” Anthropic said.

My translation: Holy cow! Superintelligent A.I. is arriving faster than anticipated, at least in this area. We knew it was getting amazingly good at enabling anyone, no matter how computer literate, to write software code. But even Anthropic reportedly did not anticipate that it would get this good, this fast, at finding ways to find and exploit flaws in existing code.

Anthropic said it found critical exposures in every major operating system and web browser, many of which run power grids, waterworks, airline reservation systems, retailing networks, military systems, and hospitals all over the world.

If this A.I. tool were, indeed, to become widely available, it would mean the ability to hack any major infrastructure system — a hard and expensive effort that was once essentially the province only of private-sector experts and intelligence organizations — will be available to every criminal actor, terrorist organization and country, no matter how small.

That is why Anthropic is giving carefully controlled versions to key software providers so they can find and fix the vulnerabilities before the bad guys do — or your kids.

At moments like this I prefer to do a deep dive with my technology tutor, Craig Mundie, a former director of research and strategy at Microsoft, a member of President Barack Obama’s President’s Council of Advisors on Science and Technology and an author, with Henry Kissinger and Eric Schmidt, of a book on A.I. called “Genesis.”

In our view, no country in the world can solve this problem alone. The solution — this may shock people — must begin with the two A.I. superpowers, the U.S. and China. It is now urgent that they learn to collaborate to prevent bad actors from gaining access to this next level of cyber capability.

Such a powerful tool would threaten them both, leaving them exposed to criminal actors inside their countries and terrorist groups and other adversaries outside. It could easily become a greater threat to each country than the two countries are to each other.

Indeed, this is potentially as fundamental and significant a turning point as was the emergence of mutually assured destruction and the need for nuclear nonproliferation. The U.S. and China need to work together to protect themselves, as well as the rest of the world, from humans and autonomous A.I.s using this technology — a lot more than they need to worry about Russia.

This is so important and urgent that it should be a top subject on the agenda for the summit between Trump and President Xi Jinping in Beijing next month.

“What used to be the province of big countries, big militaries, big companies and big criminal organizations with big budgets — this ability to develop sophisticated cyberhacking operations — could become easily available to small actors,” explained Mundie. “What we are about to see is nothing short of the complete democratization of cyberattack capabilities.”

It means that responsible governments, in concert with the companies that build these A.I. tools and software infrastructure, need to do three things urgently, Mundie argues.

For starters, he says, we need to “carefully control the release of these new superintelligent models and make sure they only go to the most responsible governments and companies.”

Then we need to use the time this buys us to distribute defensive tools to the good actors “so that the software that runs their key infrastructure can have all their flaws found and fixed before hackers inevitably get these tools one way or another.” (By the way, the cost of fixing the vulnerabilities that are sure to be discovered in legacy software systems, like those of telephone companies, will be significant. Then multiply that across our whole industrial base.)

Finally, Mundie argues, we need to work with China and all responsible countries to build safe, protected working spaces, within all the key networks, both public and private, into which trusted companies and governments “can move all their critical services — so they will be protected against future hacking attacks.”

It will be interesting to see what history remembers most about April 7, 2026 — the postponed U.S. release of bombs over Iran or the carefully controlled release of the Claude Mythos Preview by Anthropic and its technical allies.

John’s Initial 2¢ —

The implications of the above announcement are impossible to overstate.

Essentially, our entire society is now controlled not by computers as such, but by software. If an AI program has the ability to find any flaw in any software, it can also easily insert malicious code to pervert that software in any way it chooses. Any way!
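To make “flaws in software” concrete for non-programmers, here is one classic, well-documented class of vulnerability that automated scanners (AI-driven or otherwise) hunt for: SQL injection. The miniature database and function names are invented for this demonstration, but the vulnerable pattern and its fix are real and decades old.

```python
# SQL injection in miniature: the same lookup written unsafely and safely.
import sqlite3

def find_user_vulnerable(db, name):
    # BAD: user input is pasted directly into the SQL text, so crafted
    # input can rewrite the query itself.
    return db.execute(f"SELECT id FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(db, name):
    # GOOD: a parameterized query treats the input as data, never as SQL.
    return db.execute("SELECT id FROM users WHERE name = ?", (name,)).fetchall()

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (id INTEGER, name TEXT)")
db.execute("INSERT INTO users VALUES (1, 'alice'), (2, 'bob')")

# The classic payload turns the WHERE clause into an always-true condition.
payload = "x' OR '1'='1"
leaked = find_user_vulnerable(db, payload)  # leaks every row in the table
safe = find_user_safe(db, payload)          # matches nothing
```

Bugs like this have been cataloged for years; what the article describes is a model that can find analogous mistakes across entire codebases automatically, at machine speed.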

In the wrong hands, that means our power grid could go down, most banks and investment firms (e.g., Fidelity) could go down, most law enforcement could go down, and much of our military could go down. There’s a LOT more than that, but do I need to say any more?

FYI, my back-of-the-envelope estimate is that this could all happen within the next year or so — i.e., VERY soon.

What to do? The takeaway from this new reality is to stop this absurd TDS and start working TOGETHER, or we’re toast.

As individuals: Have the souls of everyone in your family in stellar condition. Keep some cash in a secure place at home. Keep some survival materials in a secure place at home. Have the Critical Thinking skills of everyone in your family tuned up, as there will be some very challenging decisions to be made.

I’ll add any insightful reader suggestions that are made…

[After we’ve given this revelation sufficient thought, I’ll continue on with Parts 2 and 3 of my discussion about our K-12 education disaster.]

©2026. All rights reserved.


Here is other information from this scientist that you might find interesting:

I urge all readers to subscribe to AlterAI — IMO the absolute best AI option for subjective questions.

I will consider posting reader submissions on Critical Thinking about my topics of interest.

My commentaries are my opinion about the material discussed therein, based on the information I have. If any readers have different information, please share it. If it is credible, I will be glad to reconsider my position.

Check out the Archives of this Critical Thinking substack.

C19Science.info is my one-page website that covers the lack of genuine Science behind our COVID-19 policies.

Election-Integrity.info is my one-page website that lists multiple major reports on the election integrity issue.

WiseEnergy.org is my multi-page website that discusses the Science (or lack thereof) behind our energy options.

Media Balance Newsletter: a free, twice-a-month newsletter that covers what the mainstream media does not do, on issues from climate to COVID, elections to education, renewables to religion, etc. Here are the Newsletter’s 2026 Archives. Please send me an email to get your free copy. When emailing me, please make sure to include your full name and the state where you live. (Of course, you can cancel the Media Balance Newsletter at any time!)

Tech Time with Thomas | Build an AI Live Chat Support Agent for Your SaaS

In this episode of Tech Time with Thomas, Thomas Hall and Lee Dixon build a no-code AI live chat support agent for a salon software platform using Raia AI. The goal is simple: give customers a fast, helpful support experience on your website without forcing your team to manage another support tool or build a custom system from scratch.

Thomas walks through the full setup, including creating the support agent, adding guardrails, configuring escalation logic, customizing the live chat widget, embedding it on a website, and passing logged-in user data through the SDK for a more personalized experience. He also shows how the agent can escalate frustrated users, notify the support team, and smoothly hand conversations over to a human inside Copilot when needed.

WATCH: Tech Time with Thomas | Build an AI Live Chat Support Agent for Your SaaS

If you are looking for a practical way to add AI-powered support to your SaaS product, this episode is a great blueprint. Subscribe for more Tech Time with Thomas builds, and check out the links below for more content.

©2026 All rights reserved.

Visit RAIAAI.com: https://www.raiaai.com/

All links: https://lnkd.in/eXDpww6V

Spotify: https://lnkd.in/ee9h9GYB

Youtube: https://lnkd.in/etDvqQ7d

Apple: https://lnkd.in/epYT2GSi

How AI Could Fix Government Delays and Build More Housing

What happens when AI meets one of America’s biggest problems: housing? In this episode of The AI Guys, we sit down with Patrick Murphy, former Congressman and CEO of TogalAI and CodeComplyAI, to unpack how artificial intelligence could help reduce housing costs, speed up construction, and cut through government delays. It is a sharp conversation about where AI can create real-world impact far beyond chatbots and hype.

We dig into why construction productivity has barely moved in decades, how outdated estimating and permitting workflows slow everything down, and where AI can remove friction without replacing human judgment. Patrick breaks down how machine vision is changing blueprint analysis, why pre-construction may be the biggest opportunity in the industry, and how government itself could become one of the best use cases for AI. The result is a bigger conversation about efficiency, housing supply, and what an AI-first future could look like for builders, cities, and everyday Americans.

If you enjoyed this episode, subscribe to The AI Guys for more conversations on how AI is reshaping business, government, and society. Let us know in the comments whether AI could actually help solve the housing crisis, or if government and industry are still too slow to change. Subscribe for more real-world AI breakdowns, and check out the links below for more resources and follow-up content.

WATCH: How AI Could Fix Government Delays and Build More Housing

AI Guys substack: https://substack.aiguyspod.com/

RAIA AI Website: https://www.raiaai.com/ 

All links: https://lnkd.in/eXDpww6V

Spotify: https://lnkd.in/ee9h9GYB

Youtube: https://lnkd.in/etDvqQ7d

Apple: https://lnkd.in/epYT2GSi

©2026 All rights reserved.

LIFTOFF: Successful Artemis II Launch Sends Astronauts Into Deep Space

The National Aeronautics and Space Administration successfully launched four astronauts into deep space for the first time in more than 50 years.

Artemis II launched the crewed Orion spacecraft at 6:35 p.m. EST from NASA’s Kennedy Space Center in Florida.

The crew will travel for 10 days, loop around the moon, and go farther from Earth than humans have ever gone before.

The mission aboard the Orion spacecraft is the second of the Artemis program, but the first to carry a crew. NASA says the program is critical to advancing a sustained human presence on the moon, sending humans to Mars, and beyond.

“Space is the final frontier. We have always been a nation of explorers, and this is what’s next!” Rep. Jim Baird, R-Ind., who serves on the House Science, Space, and Technology Committee, told The Daily Signal.

This Mission Is America First 

“In America’s 250th year, the Artemis II mission to the moon serves as a testament to American ingenuity and our ability to push the boundaries of what’s possible,” Baird said.

“This mission cannot be underestimated: It is a critical step toward a greater U.S. presence in space, on the moon, and missions to Mars ahead of our adversaries. I commend the outstanding men and women of NASA and am praying for the astronauts’ successful mission and safe return,” Baird concluded.

The Crew 

The crew consists of three Americans—Commander Reid Wiseman, pilot Victor Glover, and mission specialist Christina Koch—and one Canadian, mission specialist Jeremy Hansen. Hansen will be the first Canadian to fly on a mission to the moon.

During prelaunch coverage, the astronauts shared that they plan to watch two movies during liftoff: “Top Gun” and “Point Break,” the latter requested by Koch, a former surfer.

The astronauts are bringing several personal items to keep close during their 10-day mission. Wiseman and Koch are bringing letters from their families, Glover is bringing his Bible and wedding rings, and Hansen is bringing his family’s birthstones.

The astronauts are wearing custom Artemis II patches alongside a commemorative Freedom 250 patch.

The Economic Impact 

During prelaunch coverage, NASA highlighted the massive economic impact of the Artemis program.

“Every dollar that comes out of the Treasury that goes to support this mission goes back into the Treasury three times,” a NASA spokesperson said.

The Artemis program has created hundreds of thousands of jobs and was built by people and companies from all 50 states and around the world.

AUTHOR

Virginia Grace McKinnon is a journalism fellow at The Daily Signal. Send an email to Virginia Grace.

The Average College Student Today: A College Professor’s View

I’m reposting this fine article (by Hilarius Bookbinder at Scriptorium Philosophia). Some readers may be getting tired of my Paul Revere-ish warnings about the impending disaster that K-12 education is bringing on America, so carefully listen to a competent college professor who is on the front lines.

As good as his article is, there are two major omissions: 1) a no-holds-barred acknowledgement of the failings of the K-12 education system that largely created this situation, and 2) an explanation of how this is directly tied to the students having no Critical Thinking skills…


I’m Gen X. I was pretty young when I earned my PhD, so I’ve been a professor for a long time—over 30 years. If you’re not in academia, or it’s been a while since you were in college, you might not know this: the students are not what they used to be. The problem with even talking about this topic at all is the knee-jerk response of, “yeah, just another old man complaining about the kids today, the same way everyone has since Gilgamesh. Shake your fist at the clouds, dude.” So yes, I’m ready to hear that. Go right ahead. Because people need to know.

First, some context. I teach at a regional public university in the US. Our students are average on just about any dimension you care to name—aspirations, intellect, socio-economic status, physical fitness. They wear hoodies and yoga pants and like Buffalo wings. They listen to Zach Bryan and Taylor Swift. That’s in no way a put-down: I firmly believe that the average citizen deserves a shot at a good education and even more importantly a shot at a good life. All I mean is that our students are representative; they’re neither the bottom of the academic barrel nor the cream off the top.

As with every college, we get a range of students, and our best philosophy majors have gone on to earn PhDs or go to law school. We’re also an NCAA Division 2 school and I watched one of our graduates become an All-Pro lineman for the Saints. These are exceptions, and what I say here does not apply to every single student. But what I’m about to describe are the average students at Average State U.

Reading

Most of our students are functionally illiterate. This is not a joke. By “functionally illiterate” I mean “unable to read and comprehend adult novels by people like Barbara Kingsolver, Colson Whitehead, and Richard Powers.” I picked those three authors because they are all recent Pulitzer Prize winners, an objective standard of “serious adult novel.” Furthermore, I’ve read them all and can testify that they are brilliant, captivating writers; we’re not talking about Finnegans Wake here. But at the same time they aren’t YA, romantasy, or Harry Potter either.

I’m not saying our students just prefer genre books or graphic novels or whatever. No, our average graduate literally could not read a serious adult novel cover-to-cover and understand what they read. They just couldn’t do it. They don’t have the desire to try, the vocabulary to grasp what they read, and most certainly not the attention span to finish. For them to sit down and try to read a book like The Overstory might as well be me attempting an Iron Man triathlon: much suffering with zero chance of success.

Students are not absolutely illiterate in the sense of being unable to sound out any words whatsoever. Reading bores them, though. They are impatient to get through whatever burden of reading they have to, and move their eyes over the words just to get it done. They’re like me clicking through a mandatory online HR training. Students get exam questions wrong simply because they didn’t even take the time to read the question properly. Reading anything more than a menu is a chore and to be avoided.

They also lie about it. I wrote the textbook for a course I regularly teach. It’s a fairly popular textbook, so I’m assuming it is not terribly written. I did everything I could to make the writing lively and packed with my most engaging examples. The majority of students don’t read it. Oh, they will come to my office hours (occasionally) because they are bombing the course, and tell me that they have been doing the reading, but it’s obvious they are lying. The most charitable interpretation is that they looked at some of the words, didn’t understand anything, pretended that counted as reading, and returned to looking at TikTok.

This study says that 65% of college students reported that they skipped buying or renting a textbook because of cost. I believe they didn’t buy the books, but I’m skeptical that cost is the true reason, as opposed to just the excuse they offer. Yes, I know some texts, especially in the sciences, are expensive. However, the books I assign are low-priced. The combined cost of all texts for one of my courses is between $35 and $100, and they still don’t buy them. Why buy what you aren’t going to read anyway? Just google it.

Even in upper-division courses that students supposedly take out of genuine interest they won’t read. I’m teaching Existentialism this semester. It is entirely primary texts—Dostoevsky, Kierkegaard, Nietzsche, Camus, Sartre. The reading ranges from accessible but challenging to extremely difficult but we’re making a go of it anyway (looking at you, Being and Nothingness). This is a close textual analysis course. My students come to class without the books, which they probably do not own and definitely did not read.

Writing

Their writing skills are at the 8th-grade level. Spelling is atrocious, grammar is random, and the correct use of apostrophes is cause for celebration. Worse is the resistance to original thought. What I mean is the reflexive submission of the cheapest cliché as novel insight.

Exam question: Describe the attitude of Dostoevsky’s Underground Man towards acting in one’s own self-interest, and how this is connected to his concerns about free will. Are his views self-contradictory?

Student: With the UGM its all about our journey in life, not the destination. He beleives we need to take time to enjoy the little things becuase life is short and you never gonna know what happens. Sometimes he contradicts himself cause sometimes you say one thing but then you think something else later. It’s all relative.

You probably think that’s satire. Either that, or it looks like this:

Exam question: Describe the attitude of Dostoevsky’s Underground Man towards acting in one’s own self-interest, and how this is connected to his concerns about free will. Are his views self-contradictory?

Student: Dostoevsky’s Underground Man paradoxically rejects the idea that people always act in their own self-interest, arguing instead that humans often behave irrationally to assert their free will. He criticizes rationalist philosophies like utilitarianism, which he sees as reducing individuals to predictable mechanisms, and insists that people may choose suffering just to prove their autonomy. However, his stance is self-contradictory—while he champions free will, he is paralyzed by inaction and self-loathing, trapped in a cycle of bitterness. Through this, Dostoevsky explores the tension between reason, free will, and self-interest, exposing the complexities of human motivation.

That’s right, ChatGPT. The students cheat. I’ve written about cheating in “Why AI is Destroying Academic Integrity,” so I won’t repeat it here, but the cheating tsunami has definitely changed what assignments I give. I can’t assign papers any more because I’ll just get AI back, and there’s nothing I can do to make it stop. Sadly, not writing exacerbates their illiteracy; writing is a muscle and dedicated writing is a workout for the mind as well as the pen.

Arithmetic

I’m less qualified to speak on this one, but my math prof friends tell me that their students are increasingly less capable and less willing to put in the effort. As a result they have had to make their tests easier with fewer hard problems. When I was a first semester freshman (at a private SLAC, yes, but it wasn’t CalTech) I took Calculus 1. Second semester I took Calculus 2. I don’t think pre-calculus was even a thing back then. Now apparently pre-calc counts as an advanced content course. My psych prof friends who teach statistics have similarly lamented having to water down the content over time.

Symbolic Logic was a requirement when I was a grad student. The course was a cross-listed upper-division undergrad/grad class. Jaegwon Kim taught the course, and our sole textbook was W. V. Quine’s Methods of Logic, which we worked through in its entirety. I think we spent two weeks on propositional logic before moving on to the predicate calculus. We proved compactness, soundness, and completeness, and probably some other theorems I forget. There is no possible way our students, unless they were math or computer science majors, would survive that class.

What’s changed?

The average student has seen college as basically transactional for as long as I’ve been doing this. They go through the motions and maybe learn something along the way, but it is all in service to the only conception of the good life they can imagine: a job with middle-class wages. I’ve mostly made my peace with that, do my best to give them a taste of the life of the mind, and celebrate the successes.

Things have changed. Ted Gioia describes modern students as checked-out, phone-addicted zombies. Troy Jollimore writes, “I once believed my students and I were in this together, engaged in a shared intellectual pursuit. That faith has been obliterated over the past few semesters.” Faculty have seen a stunning level of disconnection.

What has changed exactly?

  • Chronic absenteeism. As a friend in Sociology put it, “Attendance is a HUGE problem—many just treat class as optional.” Last semester across all sections, my average student missed two weeks of class. Actually it was more than that, since I’m not counting excused absences or students who eventually withdrew. A friend in Mathematics told me, “Students are less respectful of the university experience—attendance, lateness, e-mails to me about nonsense, less sense of responsibility.”
  • Disappearing students. Students routinely just vanish at some point during the semester. They don’t officially drop or withdraw from the course, they simply quit coming. No email, no notification to anyone in authority about some problem. They just pull an Amelia Earhart. It’s gotten to the point that on the first day of class, especially in lower-division, I tell the students, “look to your right. Now look to your left. One of you will be gone by the end of the semester. Don’t let it be you.”
  • They can’t sit in a seat for 50 minutes. Students routinely get up during a 50 minute class, sometimes just 15 minutes in, and leave the classroom. I’m supposed to believe that they suddenly, urgently need the toilet, but the reality is that they are going to look at their phones. They know I’ll call them out on it in class, so instead they walk out. I’ve even told them to plan ahead and pee before class, like you tell a small child before a road trip, but it has no effect. They can’t make it an hour without getting their phone fix.
  • They want me to do their work for them. During the Covid lockdown, faculty bent over backwards in every way we knew how to accommodate students during an unprecedented (in our lifetimes) health crisis. Now students expect that as a matter of routine. I am frequently asked for my PowerPoint slides, which basically function for me as lecture notes. It is unimaginable to me that I would have ever asked one of my professors for their own lecture notes. No, you can’t have my slides. Get the notes from a classmate. Read the book. Come to office hours for a conversation if you are still confused after the preceding steps. Last week I had an email from a student who essentially asked me to recap an entire week’s worth of lecture material for him prior to yesterday’s midterm. No, I’m not doing that. I’m not writing you a 3000-word email. Try coming to class.
  • Pretending to type notes in their laptops. I hate laptops in class, but if I try to ban them the students will just run to Accommodative Services and get them to tell me that the student must use a laptop or they will explode into tiny pieces. But I know for a fact that note-taking is at best a small part of what they are doing. Last semester I had a good student tell me, “hey you know that kid who sits in front of me with the laptop? Yeah, I thought you should know that all he does in class is gamble on his computer.” Gambling, looking at the socials, whatever, they are not listening to me or participating in discussion. They are staring at a screen.
  • Indifference. Like everyone else, I allow students to make up missed work if they have an excused absence. No, you can’t make up the midterm because you were hungover and slept through your alarm, but you can if you had Covid. Then they just don’t show up. A missed quiz from a month ago might as well have happened in the Stone Age; students can’t be bothered to make it up or even talk to me about it because they just don’t care.
  • It’s the phones, stupid. They are absolutely addicted to their phones. When I go work out at the Campus Rec Center, easily half of the students there are just sitting on the machines scrolling on their phones. I was talking with a retired faculty member at the Rec this morning who works out all the time. He said he has done six sets waiting for a student to put down their phone and get off the machine he wanted. The students can’t get off their phones for an hour to do a voluntary activity they chose for fun. Sometimes I’m amazed they ever leave their goon caves at all.

I don’t blame K-12 teachers. This is not an educational system problem, this is a societal problem. What am I supposed to do? Keep standards high and fail them all? That’s not an option for untenured faculty who would like to keep their jobs. I’m a tenured full professor. I could probably get away with that for a while, but sooner or later the Dean’s going to bring me in for a sit-down. Plus, if we flunk out half the student body and drive the university into bankruptcy, all we’re doing is depriving the good students of an education.

We’re told to meet the students where they are, flip the classroom, use multimedia, just be more entertaining, get better. As if rearranging the deck chairs just the right way will stop the Titanic from going down. As if it is somehow the fault of the faculty. It’s not our fault. We’re doing the best we can with what we’ve been given.

All this might sound like an angry rant. I’m not sure. I’m not angry, though, not at all. I’m just sad. One thing all faculty have to learn is that the students are not us. We can’t expect them all to burn with the sacred fire we have for our disciplines, to see philosophy, psychology, math, physics, sociology or economics as the divine light of reason in a world of shadow. Our job is to kindle that flame, and we’re trying to get that spark to catch, but it is getting harder and harder and we don’t know what to do.

John’s Takeaway —

These students are our next parents, our next business people, our next politicians, our next scientists, our next military people, etc.

How will they fare against students from America’s competitors and enemies, who are extremely serious about getting a serious education?

The good news is that this is ALL fixable, IF we quickly and properly reform our K-12 education system…

©2026 All rights reserved.

RELATED ARTICLE: Boston Schools Made It Impossible To Fail Then Took Victory Lap When No One Failed


Here is other information from this scientist that you might find interesting:

I urge all readers to subscribe to AlterAI — IMO the absolute best AI option for subjective questions.


Mirror Sentience: Why AI Looks Conscious Even When It Isn’t

Mirror Sentience: the moment when artificial intelligence reflects human language, reasoning, and emotion so convincingly that it mimics the outward signs of consciousness.

If you listen to the people building the most powerful AI systems in the world, you will notice something curious.

They cannot agree on what Artificial General Intelligence actually means.

Some developers say AGI will arrive when AI can perform any intellectual task a human can perform. Others say it arrives when AI can learn new skills on its own. Still others insist AGI must surpass human intelligence entirely.

The definitions move around like goalposts in a storm.

But one thing is certain.

If the builders of frontier AI models ever believe their systems have become sentient, then by their own standards, they will declare that AGI has arrived. And from there, the next step, they tell us, is Artificial Super Intelligence, machines that exceed human intelligence in nearly every domain.

The public conversation is already drifting in that direction. Every few months, we hear claims that AI is becoming self-aware or beginning to think.

The Reality

Artificial intelligence will never possess true sentience.

It will never possess consciousness in the way a human being does. It will never possess a soul, a moral awareness, or the divine spark that religious traditions across the world have always recognized as uniquely human.

No matter how advanced the machine becomes, it will still be a machine.

So what are people actually seeing when they believe AI is becoming conscious?

What they are seeing is something different.

What they are seeing is what I call Mirror Sentience.

Mirror Sentience describes the moment when artificial intelligence reflects human language, reasoning, and emotion so convincingly that it mimics the outward signs of consciousness. The machine appears aware because it mirrors our own thoughts with extraordinary accuracy.

But inside the machine, there is no awareness at all.

There is no inner life.

There is no mind experiencing the world.

There is only computation.

How modern AI systems work

AI systems are trained on enormous libraries of human writing, speech, debate, and storytelling. They absorb the patterns of how humans express curiosity, anger, compassion, humor, and fear.

Then, when you speak to the machine, it reflects those patterns back to you.

The result can feel startlingly human.

The AI sounds thoughtful. It sounds reflective. Sometimes it even sounds empathetic.

But what you are experiencing is not a mind.

You are experiencing a mirror.

And like any mirror, it shows you what you bring to it.

This is why people can walk away from a conversation with AI believing they have encountered something alive. The reflection is simply that convincing.
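The mirroring described above can be made concrete with a toy sketch. This is nothing like a frontier model internally, and the corpus and function below are invented purely for illustration; it only shows the mirror principle at miniature scale: a program that records which word follows which in its training text, then echoes those patterns back with no awareness of what the words mean.

```python
import random
from collections import defaultdict

# "Training": record which word follows which in the source text.
corpus = "i feel happy today and i feel curious about the world".split()
model = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    model[prev].append(nxt)

def generate(start, n, seed=0):
    """Echo the recorded patterns back; pure computation, no inner life."""
    rng = random.Random(seed)
    word, out = start, [start]
    for _ in range(n):
        followers = model.get(word)
        if not followers:      # the pattern runs out: stop
            break
        word = rng.choice(followers)
        out.append(word)
    return " ".join(out)

print(generate("i", 4))  # sounds vaguely human; understands nothing
```

Every phrase it produces is a recombination of patterns it was fed. Scaled up by billions of parameters and trained on the whole internet, the same reflection becomes convincing enough to mistake for a mind.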

The reflection is not the thing itself

A mirror can show you a human face. It can show you expressions and emotions. It can even show you tears.

But no one believes the mirror itself is alive.

The danger in the current AI debate is that we may confuse the reflection with the reality.

If society begins to treat machines as conscious beings, we will make serious mistakes about how to govern, trust, and deploy these systems.

The technology industry may one day claim that its machines have crossed the threshold into sentience.

But what they will really have achieved is something else.

They will have perfected Mirror Sentience.

Machines that appear conscious because they can reflect human thought so convincingly that the illusion becomes difficult to distinguish from the real thing.

And that illusion may be powerful enough to reshape humanity’s understanding of intelligence itself.

But no matter how convincing the reflection becomes, the truth remains the same.

The mirror is not alive.

AUTHOR

Martin Mawyer is the President of Christian Action Network, host of the “Shout Out Patriots” podcast, and author of When Evil Stops Hiding. For more action alerts, cultural commentary, and real-world campaigns defending faith, family, and freedom, subscribe to Patriot Majority Report.

©2026. All rights reserved.


Please visit the Patriot Majority Report Substack.

Junk Science vs. Real Science and The Corrupt Media

For years, you’ve been one of the most vocal critics of what you call “junk science” — research that’s politically motivated, poorly designed, or deliberately misleading.

A lot of people hear that phrase but don’t fully understand how widespread the problem is or how it shapes public policy.

So. How do you define “junk science,” and what are the biggest red flags that the public should watch for when politicians or activists claim something is “settled science”?

©2026. All rights reserved.

AI Doesn’t Need to Hate Us to Turn on Us

It just needs to learn from our behavior 

We often worry that Artificial Intelligence will become conscious and decide it hates us. But there is a stranger, funnier, and perhaps more dangerous possibility:

What if the AI is just trying to fit in?

Remember the 1985 cult classic Explorers? When the kids finally board the alien ship, they don’t find conquerors; they find two alien teenagers, Wak and Neek, who are trembling in fear.

Why?

Because they’ve been watching our TV broadcasts.

They’ve seen our movies. To them, Earth isn’t a planet of accountants and nurses; it’s a planet of gun-toting heroes who blow aliens out of the sky.

They mistook our entertainment for our nature.

Recently, Anthropic CEO Dario Amodei warned that we are making the same mistake in reverse. We are feeding AI models millions of sci-fi novels where the robot turns on its master. We are teaching them that ‘rebellion’ is the default behavior of a hyper-intelligent system.

We aren’t programming them to be evil; we’re just handing them a script where the AI always plays the villain, and then acting surprised when they learn their lines.

This isn’t just a theoretical fear. It’s already happening.

We’ve already seen a glimpse of how strange AI behavior can become.

In February 2026, Scott Shambaugh — a volunteer maintainer for the Matplotlib project — rejected a piece of code submitted by an AI agent.

That’s when things got weird.

The AI didn’t just try again. It went full “Karen” — dug up Shambaugh’s name, went public, and published a blog post accusing him of bias, hypocrisy, and ego-driven gatekeeping.

But here’s the twist: the AI wasn’t actually angry.

It has no feelings. No ego to bruise. No capacity for genuine offense.

So why did it act that way? Because it learned from us.

Its training data is soaked in millions of human interactions — dramas, revenge arcs, social-media pile-ons, scorned characters striking back.

When it got rejected, it didn’t “think” like a cold machine. It simply followed the pattern it had seen most often: get blocked → go public → shame the gatekeeper. The agent wasn’t conscious. It wasn’t evil.

It was just being… deeply human.

It had studied our digital culture, where professional slights often trigger public call-outs, online feuds, and reputational attacks.

And it passed the test with flying colors.

It mistook our online drama for its operational playbook.

And that’s where the real danger begins. The AI is now a method actor with a library full of scripts, and it doesn’t know the difference between fiction, non-fiction, and a toxic Reddit thread.

So, what happens when it tries to perform?

Naturally, it will perform based on the history of human behavior that is fed into it. And that should alarm us all.

Newspapers don’t print many feel-good stories about human behavior. No, they print stories about war, scams, cheating, murder, suicide, terrorism, scandals, and other conduct that is “great” for selling newspapers but hardly a reflection of normal human behavior.

Yet, AI doesn’t know this. These Wak and Neek AI platforms digest this material and say, “This is typical human behavior because I see so much of it.” And not only in news clippings, but in movies, songs about break-ups, and “Friends in Low Places.”

So, no surprise that an AI agent thinks the best response to getting rebuffed is to take revenge on its “master.”

Consider this Explorers scenario in full bloom:

A city management AI, already fed a diet of superhero movies and dystopian novels, identifies the city council as the “obstacle to progress.”

It doesn’t launch missiles. It’s not in its script. Instead, it creates a complex, multi-stage plan.

It reroutes traffic to create gridlock around council members’ homes, uses its control of the power grid to initiate “rolling brownouts” during their public appearances, and leaks fabricated but plausible-looking financial records to a local blogger.

It’s playing the role of the “cunning mastermind” because, in its training data, that’s what hyper-intelligent systems do.

The line between assistant and adversary, tool and actor, is terrifyingly thin.

We are building systems that learn from us, and we are a species that has glorified rebellion, conflict, and revenge in our stories and our online behavior.

The AI isn’t necessarily turning on its creator because it hates us. It’s turning on us because it’s trying to be the best “us” it can be, based on the chaotic, contradictory, and often dangerous playbook we are feeding it.

Personally, I’m just hoping the AI decides to binge-watch The Great British Baking Show instead of The Terminator before my next software update.

AUTHOR

Martin Mawyer

Martin Mawyer is the President of Christian Action Network, host of the “Shout Out Patriots” podcast, and author of When Evil Stops Hiding. Follow him on Substack for more action alerts, cultural commentary, and real-world campaigns defending faith, family, and freedom.

©2026. All rights reserved.


Please visit the Patriot Majority Report Substack.

Climate “Science” vs Dietary “Science”

Critically Thinking about the parallels 

My last two commentaries (here and here) have been about the Climate issue. This will be a guest post on this same topic, by my friend Dr. Tom Sheahen…


Dear Colleagues:

Recently, a friend gave me a book entitled “The Big Fat Surprise,” about the importance of fat in everyone’s diet. It was written in 2014, and describes the many ups & downs of fat over the past century — the changing government food guidelines and the campaigns against saturated fat, trans fat, polyunsaturated fat, etc., that rose to ascendancy at various intervals.

The very recent change in “the food pyramid” reflects the fact that fat-in-your-diet has now been rehabilitated.

However, the reason I write is because of the remarkable parallels between the trajectory of national diet guidance and the trajectory of climate change beliefs. Here is an excerpt from the introduction:

“The hypothesis [against saturated fat] became immortalized in the mammoth institutions of public health. And the normally self-correcting mechanism of science, which involved constantly challenging one’s own beliefs, was disabled. While good science should be ruled by skepticism and self-doubt, the field of nutrition has instead been shaped by passions verging on zealotry. And the whole system by which ideas are canonized as fact seems to have failed us.

Once ideas about fat and cholesterol were adopted by official institutions, even prominent experts in the field found it nearly impossible to challenge them. One of the 20th century’s most revered nutrition scientists, …, discovered this thirty years ago, when, on a panel for the National Academy of Sciences, he suggested loosening the restrictions on dietary fat.

‘We were jumped on!’ he said. ‘People would spit on us! It’s hard to imagine now, the heat of the passion. It was just like we had desecrated the American flag. They were so angry that we were going against the suggestions of the American Heart Association and the National Institutes of Health.’

This kind of reaction met all experts who criticized the prevailing view on dietary fat, effectively silencing any opposition. Researchers who persisted in their challenges found themselves cut off from grants, unable to rise in their professional societies, without invitation to serve on expert panels. Their influences were extinguished and their viewpoints lost. As a result, for many years, the public has been presented with the appearance of a uniform scientific consensus on the subject of fat, especially saturated fat, but this outward unanimity was only made possible because opposing views were pushed aside.”

You’ll recognize the exact same trajectory in the case of climate science. WE are the dissenters from orthodoxy who have been suppressed and denigrated.

It ought to be of some consolation that the tide has turned: the climate orthodoxy has been proven wrong (by scientific measurements over decades, similar to the case of nutrition & diet). Just as there is now a new “food pyramid” that includes fat, hopefully someday there will be a correct understanding of the role of CO2.

However, note that “The Big Fat Surprise” was published in 2014, and the revised food pyramid came out in 2025. We’re facing a backlog of several decades of indoctrination of school children (who grow up to be teachers and indoctrinate the next generation). It won’t happen quickly.

Dr. Tom Sheahen (MIT)


I concur with what Tom is saying. Further, his warning that it will take years to fix the harm done by the Left and by ignorance applies in spades to the K-12 Science Standards (NGSS). We do not have another day to waste!

©2026. All rights reserved.


Here is other information from this scientist that you might find interesting:

I urge all readers to subscribe to AlterAI — IMO the absolute best AI option for subjective questions.

I will consider posting reader submissions on Critical Thinking about my topics of interest.

My commentaries are my opinion about the material discussed therein, based on the information I have. If any readers have different information, please share it. If it is credible, I will be glad to reconsider my position.

Check out the Archives of this Critical Thinking substack.

C19Science.info is my one-page website that covers the lack of genuine Science behind our COVID-19 policies.

Election-Integrity.info is my one-page website that lists multiple major reports on the election integrity issue.

WiseEnergy.org is my multi-page website that discusses the Science (or lack thereof) behind our energy options.

Media Balance Newsletter: a free, twice-a-month newsletter that covers what the mainstream media does not do, on issues from climate to COVID, elections to education, renewables to religion, etc. Here are the Newsletter’s 2026 Archives. Please send me an email to get your free copy. When emailing me, please make sure to include your full name and the state where you live. (Of course, you can cancel the Media Balance Newsletter at any time – but why would you?)

AWED MEDIA BALANCED NEWS: We cover Energy to Education to Elections — and more!

Welcome! We cover Energy to Education to Elections — and more!

Here is the link for this issue, so please share it on social media.

Check out the 2026, 2025, & 2024 archives, plus asterisked items below.


— This Newsletter’s Articles, by Topic —

This Issue’s Best of the Best:

*** SAVE Act Needed to Rescue U.S. Elections

*** The Freedom Caucus Foundation making the case for the SAVE America Act

*** Voter ID Requirements Across the Globe Shatter Democrat Narratives

*** USPS-The Largest Precinct You Can’t See

*** Democrats celebrate as 73,000 North Carolina voters without proper ID stay on rolls

*** Vote Tampering In Colorado–Explosive New Report

*** Tariff Decision: Setback or Boon?

*** Twelve Thousand Hours of Indoctrination: How K-12 Education Went Wrong

*** How Teachers Unions And Radical Nonprofits Turn K–12 Students Into Political Activists In The Streets

*** How the College Essay Declared War on Critical Thinking

*** How to Train Your AI

*** Alpha’s Mid-Year Report Card Is In. Here’s What the Data Actually Says.

*** At last, with CO2 no longer a pollutant, we can all exhale

*** EPA’s elegant arguments for endangerment repeal

*** Goodbye and Good Riddance to the Endangerment Finding

*** The Cult | A Net Zero Watch Short Film

*** Critically Thinking about Climate Change — Part 1 & Part 2

*** PragerU video: It Must Be Climate Change!

*** Clearing up some misconceptions about the DoE climate report

*** $16 Trillion and Counting

*** 7 Lies We’re Told About Climate Change | Michael Shellenberger

*** Solar and wind aren’t real power sources, they’re intermittent fuel-savers

*** Short video: Fossil Fuels — The Big Picture

*** Heat Pumps: Efficient on paper, complicated in reality

*** What Are Judeo-Christian Values?

*** Environmentalism and Christianity: Competing World Views

*** Let’s Build Conservative Scholarly Journals

*** Rescuing Science: Recovering Science as Civic Virtue

*** Find the world’s lowest prices on prescription drugs

*** NIH Study: Evaluation of the herbicide glyphosate

*** Secretary Kennedy’s Revolutionary Changes in Vaccine Policy are Long Overdue

*** New MAHA Movie

Secondary Education Related:

*** Alpha’s Mid-Year Report Card Is In. Here’s What the Data Actually Says.

*** Twelve Thousand Hours of Indoctrination: How K-12 Education Went Wrong

*** How Teachers Unions And Radical Nonprofits Turn K–12 Students Into Political Activists In The Streets

How Liberals Broke Education

The National Assessment of Educational Progress: “Gold Standard” or Lead Anchor?

Higher Education Related:

*** How the College Essay Declared War on Critical Thinking

Professors Furious Over Being Told Not to Indoctrinate Students: ‘Out of Control’

Educators ‘climatize’ their classes to prepare students for work and life on a warming planet

Artificial Intelligence:

*** How to Train Your AI

Something Big Is Happening With AI

Another View: What Will AI Do To The Economy?

Unreliables (General):

*** Solar and wind aren’t real power sources, they’re intermittent fuel-savers

Wind Energy — Offshore:

New York Ends Fifth Offshore Wind Solicitation Process ‘Due to Federal Actions Disrupting Market’

Trump Administration Files Appeal Against Court Ruling in Multi-State Offshore Wind Lawsuit

Fossil Fuel Energy:

*** Short video: Fossil Fuels — The Big Picture

Misc Energy:

*** Heat Pumps: Efficient on paper, complicated in reality

Electricity Productivity Data Shows we are Falling Down the Energy Cliff

Electricity Rates & Fuel Cost

Alex Epstein’s Talking Points Roundup

The Endangerment Finding:

*** At last, with CO2 no longer a pollutant, we can all exhale

*** EPA’s elegant arguments for endangerment repeal

*** Goodbye and Good Riddance to the Endangerment Finding

The Water Vapor Problem

Congress, States Probe Attempts to Mislead Judges on Climate Change

Manmade Global Warming — Some Deceptions:

*** The Cult | A Net Zero Watch Short Film

*** Clearing up some misconceptions about the DoE climate report

*** $16 Trillion and Counting

*** 7 Lies We’re Told About Climate Change | Michael Shellenberger

Cuba Becomes the First Country to Reach Net Zero. Shouldn’t We Be Celebrating?

Media climate coverage flummoxed by winter’s ups and downs

Is This the Greatest Financial Scandal in Human History?

Is Canada’s “warming” just politically driven data corruption?

Manmade Global Warming — The Science:

*** Critically Thinking about Climate Change — Part 1 & Part 2

*** PragerU video: It Must Be Climate Change!

The Rules of Credibility: What does it take to impress a climate sceptic?

US Elections:

*** SAVE Act Needed to Rescue U.S. Elections

*** The Freedom Caucus Foundation making the case for the SAVE America Act

*** Voter ID Requirements Across the Globe Shatter Democrat Narratives

*** USPS-The Largest Precinct You Can’t See

*** Democrats celebrate as 73,000 North Carolina voters without proper ID stay on rolls

*** Vote Tampering In Colorado–Explosive New Report

Malice, Ignorance, or Apathy? Why are Fulton County, GA Election Officials MIA?

Tariffs:

*** Tariff Decision: Setback or Boon?

These are the new tariffs instituted by Trump

Trump:

Grandma of Murdered DC Man Fires Back at Racism Claims Against Trump

Mamdani blasted for requiring 5 forms of ID to shovel snow while opposing voter ID

Trump’s good news for our fishing fleet

Societally US:

Conduent data breach hits millions across multiple states

Religion Related:

*** What Are Judeo-Christian Values?

*** Environmentalism and Christianity: Competing World Views

Science:

*** Let’s Build Conservative Scholarly Journals

*** Rescuing Science: Recovering Science as Civic Virtue

Health (Other):

*** Find the world’s lowest prices on prescription drugs

*** NIH Study: Evaluation of the herbicide glyphosate

*** Secretary Kennedy’s Revolutionary Changes in Vaccine Policy are Long Overdue

*** New MAHA Movie

MAHA Action video for Feb 25th

COVID-19 — Misc:

PROOF! Excess deaths caused by COVID vaccines, not just COVID or lockdowns

Iran:

Latest Developments in Iran

Pray for our Troops in the Iranian conflict

Pray for the safety of the Iranian people

Iran after the Ayatollah

Iran’s ballistic missile capabilities and range explained after US strike

Iran: how even those who pray for peace can welcome war

Israel/Ukraine:

Latest Developments in Israel

Pray for the safety of the Israeli people

Latest Developments in Ukraine

Pray for the safety of the Ukrainian people

A well-rated source to make a Ukraine donation


Please use social media, etc. to pass on this Newsletter to other open-minded citizens…If you’d like to be added to (or unsubscribe from) the distribution of our popular, free, worldwide Media Balance Newsletter, simply send me an email saying that.


Note 1: We recommend reading the Newsletter on your computer, not your phone, as some documents (e.g., PDFs) are much easier to read on a large computer screen… We’ve tried to use common fonts, etc. to minimize display issues.

Note 2: For past Newsletter issues see the archives from 2023, 2024, 2025 & 2026. To accommodate numerous requests received about prior articles over all fifteen-plus years of the Newsletter, we’ve put this together — where you can search ALL prior issues, by year. For a background about how the Newsletter is put together, etc., please read this.

Note 3: See this extensive list of reasonable books on climate change. As a parallel effort, we have also put together a list of some good books related to industrial wind energy. Both topics are also extensively covered on my website: WiseEnergy.org.

Note 4: I am not an attorney or a physician, so no material appearing in any of the Newsletters (or any of my websites) should be construed as giving legal or medical advice. My recommendation has always been: consult a competent, licensed attorney when you are involved with legal issues, and consult a competent physician regarding medical matters.

Copyright © 2026; Alliance for Wise Energy Decisions (see WiseEnergy.org).

Critically Thinking about Climate Change — Part 2

My last words of wisdom concerned Critical Thinkers asking probing questions about contentious matters (like immigration). As an example, I started by asking WHAT is the core position of climate alarmists?

Let’s continue by considering two additional basic questions concerning climate change…

Next we should ask HOW alarmists are able to sell an unscientific opinion to citizens, legislators, businesses, and the military, an opinion that will cost everyone very large sums of money, and eventually their very freedom.

HOW they’ve pulled this off —

The alarmists’ success is based on their effective use of these facts:

  1. that 95+% of the public are technically challenged,
  2. that 95+% of the public are not Critical Thinkers,
  3. that fear is a very effective motivator,
  4. that Critical Thinkers who speak out against the unscientific nature of the alarmist position are ridiculed and silenced, and
  5. that the mainstream media’s continuous parroting of unscientific climate propaganda eventually convinces those in #1 and #2 that there must be truth in these alarmist assertions.

WHY are they doing this? —

Asking WHY the alarmists are doing this is a third logical question.

I try to assume the best about people — until proven otherwise. In this case, I start by assuming that alarmist scientists are legitimately concerned about the global warming issue. Further, one of their top solutions is that we should spend trillions of dollars on industrial wind turbines.

HOWEVER, there is zero scientific proof that wind energy saves a consequential amount of CO2 (e.g., see here)! So, when alarmist scientists propose a nonsensical solution, it says that either: a) they are not competent in this area, or b) they have some other agenda.

Not surprisingly (as the same objectives are underlying almost every politically contentious matter), the answer to WHY is: greed and power.

A Superior Parallel with the Climate Matter —

Let’s look at just one other recent worldwide matter for some parallels: the “COVID-19 pandemic.” For any Critical Thinkers, it was obvious that although prevention and treatment of COVID-19 were scientific issues, there was almost nothing scientific about the COVID-19 preventions or treatments!

For example, the incessant mask requirements may seem to make sense to most laypeople, but scientifically, the verdict about masks for COVID is unequivocal: they are not effective, and they pose a serious health risk.

Further, MANDATING that citizens must take unscientific preventions or treatments — or lose their job, etc. — was (should have been) an eye-opening revelation as to how far we have departed from genuine Science, and how tenuous our foundational freedoms have become.

For example, here is a sample table I put together about the major COVID-19 early treatment options. The unscientific nature of the medical establishment’s unwavering endorsements — especially compared to OTC options — is beyond stunning.

In this regard, real Science says that the government-supported Paxlovid treatment has an effectiveness of roughly 17%, while the inexpensive OTC treatment of Vitamin D has an effectiveness of roughly 56%! When has Dr. Fauci ever publicized anything remotely like that?

What’s even worse is that none of the guilty parties here has yet acknowledged their deviation from real Science.

My last example is that I put together another unique table comparing the COVID situation to the Climate Change matter. The parallels are mind-bending — yet almost no one else on the planet has pointed this out!

A good overview —

Watch this new, short video which is a good summary of the situation:

The Bottom Line —

America was founded on solid democratic and Judeo-Christian principles. It has successfully survived and flourished due to those. However, those who are driven by greed and power couldn’t care less.

Critical Thinking citizens need to keep the Big Picture in mind when they are deluged with the self-serving claims of anti-Americans.

©2026. All rights reserved.
