<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="4.4.1">Jekyll</generator><link href="https://geohot.github.io//blog/feed.xml" rel="self" type="application/atom+xml" /><link href="https://geohot.github.io//blog/" rel="alternate" type="text/html" /><updated>2026-04-15T20:12:30+08:00</updated><id>https://geohot.github.io//blog/feed.xml</id><title type="html">the singularity is nearer</title><subtitle>A home for poorly researched ideas that I find myself repeating a lot anyway</subtitle><entry><title type="html">zappa: an AI powered mitmproxy</title><link href="https://geohot.github.io//blog/jekyll/update/2026/04/15/zappa-mitmproxy.html" rel="alternate" type="text/html" title="zappa: an AI powered mitmproxy" /><published>2026-04-15T00:00:00+08:00</published><updated>2026-04-15T00:00:00+08:00</updated><id>https://geohot.github.io//blog/jekyll/update/2026/04/15/zappa-mitmproxy</id><content type="html" xml:base="https://geohot.github.io//blog/jekyll/update/2026/04/15/zappa-mitmproxy.html"><![CDATA[<p>Soon, AI will be good enough to interact with the Internet in an indistinguishable way from a human. This can be an amazing opportunity for liberation from all the people who are <a href="https://aeon.co/essays/what-we-think-is-a-decline-in-literacy-is-a-design-problem">targeting your attention</a>.</p>

<p>I vibe coded this <code class="language-plaintext highlighter-rouge">zappa</code> proxy; it is not quite there yet, but I think it points the way forward. Why should I browse the Internet or use apps when machines can do it for me? Suckers getting billed for an ad impression from a 1 cent Qwen.</p>

<p>Instead of the source, I’ll include the prompt in this post. I used GPT-5.4 to code it.</p>

<hr />
<p><br /></p>

<p>Download mitmproxy and configure Firefox to use a SOCKS5 proxy and install the required cert to proxy HTTPS traffic. Write a plugin for mitmproxy to route all website traffic through Qwen using the Cerebras API, you need to proxy HTML, JS, and CSS. Tell Qwen to remove all ads, popups, bright colors, moving things, and enshittified crap from the website and return a good version of the site. Pass this good version back to the user through the proxy. Log everything to a file. If the AI returns an error, pass that error along to the user, do not return pages without AI transformation.</p>
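<p>For the curious, here is roughly the shape of the addon that prompt describes. This is a hand-written sketch, not the vibe coded source: the Cerebras endpoint URL, the model id, and the helper names are assumptions, and the real version also needs the SOCKS5 and certificate setup plus file logging.</p>

```python
# Hypothetical sketch of a zappa-style mitmproxy addon; run with `mitmdump -s zappa.py`.
# The API_URL and MODEL values below are assumptions, not taken from the post.
import json
import logging
import os
import urllib.request

API_URL = "https://api.cerebras.ai/v1/chat/completions"  # assumed OpenAI-style endpoint
MODEL = "qwen-3-32b"  # placeholder model id
PROMPT = ("Remove all ads, popups, bright colors, moving things, and "
          "enshittified crap from this document. Return only the cleaned document.")
REWRITE_TYPES = ("text/html", "javascript", "text/css")


def should_rewrite(content_type: str) -> bool:
    # Only HTML, JS, and CSS get routed through the model.
    return any(t in content_type for t in REWRITE_TYPES)


def build_payload(body: str) -> dict:
    # OpenAI-style chat payload: cleaning instructions plus the raw page.
    return {"model": MODEL, "messages": [
        {"role": "system", "content": PROMPT},
        {"role": "user", "content": body},
    ]}


def rewrite(body: str) -> str:
    # Ship the page body to the model and return its cleaned version.
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(body)).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": "Bearer " + os.environ.get("CEREBRAS_API_KEY", "")},
    )
    with urllib.request.urlopen(req, timeout=120) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]


def response(flow):  # module-level mitmproxy hook, called for every HTTP response
    ctype = flow.response.headers.get("content-type", "")
    if not should_rewrite(ctype):
        return
    try:
        flow.response.text = rewrite(flow.response.text)
    except Exception as e:
        # Per the prompt: pass AI errors to the user, never an untransformed page.
        flow.response.status_code = 502
        flow.response.text = f"zappa error: {e}"
    logging.info("zappa handled %s", flow.request.pretty_url)
```

<p>Point Firefox at the proxy as described above and every HTML, JS, and CSS response takes a round trip through the model before it reaches the browser.</p>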

<hr />
<p><br /></p>

<p><img src="/blog/assets/images/zappa_1.png" />
<img src="/blog/assets/images/zappa_3.png" /></p>

<p>I disabled uBlock Origin for these tests. Chrome on the left is the default internet; Firefox on the right is using the proxy, if by some crazy chance you couldn’t tell.</p>

<p><img src="/blog/assets/images/zappa_2.png" />
<img src="/blog/assets/images/zappa_4.png" /></p>

<p>The right way to ship this is probably a browser extension in some browser that didn’t totally nerf extensions. It should be simple with a customizable prompt, then people can share prompts like they share uBlock Origin filter lists. And it should be agentic, it shouldn’t actually return the HTML, it should use tools and keep per site state. Imagine a skilled software engineer running in 100x real time cleaning up websites for you before you view them.</p>

<p>Don’t fall for AI browser crap that’s marketed to you, that’s just them wanting to control your attention better. You need an AI you can trust to fight back!</p>

<hr />
<p><br /></p>

<p>I hope ad people see the writing on the wall, get scared, and pivot to a user-aligned business model. Intelligence is about to be dirt cheap, and everyone will have a full-time, lightning-fast personal assistant to deal with the enshittified world for them.</p>

<p>And you can say, well, they will have a smarter one on the make-everything-bad side, but if mine is human level and aligned with me, they will have to have gone so hard that no actual human in the world can deal with them, so yeah, good luck with that.</p>

<p>The Turing Test is over. Enjoy spending your ad dollars showing things to my Qwen.</p>]]></content><author><name></name></author><category term="jekyll" /><category term="update" /><summary type="html"><![CDATA[Soon, AI will be good enough to interact with the Internet in an indistinguishable way from a human. This can be an amazing opportunity for liberation from all the people who are targeting your attention.]]></summary></entry><entry><title type="html">The ‘Everyone’s a Billionaire’ act</title><link href="https://geohot.github.io//blog/jekyll/update/2026/04/13/everyones-a-billionaire.html" rel="alternate" type="text/html" title="The ‘Everyone’s a Billionaire’ act" /><published>2026-04-13T00:00:00+08:00</published><updated>2026-04-13T00:00:00+08:00</updated><id>https://geohot.github.io//blog/jekyll/update/2026/04/13/everyones-a-billionaire</id><content type="html" xml:base="https://geohot.github.io//blog/jekyll/update/2026/04/13/everyones-a-billionaire.html"><![CDATA[<p>I heard that while this blog is good at diagnosing the problem, it falls short when proposing solutions. Today I’m proposing a solution that everyone (except the haters and losers) can get behind.</p>

<p>We have a real problem in America, and it’s billionaires. I mean, it’s actually fiat money that the state can print arbitrary amounts of, but that’s a complicated idea, so we’ll just say it’s billionaires.</p>

<p>You, as an American, have the same right to be a billionaire as everyone else. You know that feeling of resentment you have when you see rich people on social media. Watch the resentment fade once we <strong>give everyone a billion dollars</strong>.</p>

<hr />
<p><br /></p>

<p>The implementation of this act would be straightforward. America’s population is 342.6 million. First, we issue new billion dollar bills and print 342.6 million of them. Then we hand them out. A beautiful thing about this bill is it gives the Democrats and Republicans plenty to squabble over.</p>

<ul>
  <li>Should we give the billion dollars to undocumented immigrants (or illegal aliens, if you prefer)?</li>
  <li>Should children get the billion? Should we hold it in a trust for them?</li>
  <li>Should existing billionaires get it? They are already billionaires, this act is for the needy.</li>
  <li>Should we charge tax on the billion dollars? Should TurboTax get a cut?</li>
</ul>

<p>Let’s do an analysis, including second-order effects, which I know is more advanced political thinking than most politicians manage, but we are a new kind of politician, the kind that wants to give you one billion dollars.</p>

<p>The first-order effect is that everyone is rich and the rich people aren’t richer than you. This is pure awesome. I hear that rich people love being rich, and I’m sure you will love it too. This also makes it easy to pay back the national debt.</p>

<p>The second-order effect is that the US dollar is over, and everyone will have to switch to something else. Perhaps this time we’ll switch to something that some dude can’t just print trillions of. Like gold.</p>

<hr />
<p><br /></p>

<p>When America is ready for a revolution, this is the way to do it. A jubilee. Non violent and through one simple democratically passed bill. We do live in a democracy, right?</p>

<p>Don’t fall for scams like a wealth tax, that is just the elites squabbling over which seat at the large marble table they get. We are giving everyone a billion dollars so they can buy their own large marble table.</p>

<p>I non-ironically support this bill, and you should too. Call your congressman, let’s get it passed!</p>]]></content><author><name></name></author><category term="jekyll" /><category term="update" /><summary type="html"><![CDATA[I heard that while this blog is good at diagnosing the problem, it falls short when proposing solutions. Today I’m proposing a solution that everyone (except the haters and losers) can get behind.]]></summary></entry><entry><title type="html">OpenAI is nothing without its people</title><link href="https://geohot.github.io//blog/jekyll/update/2026/04/11/openai-people.html" rel="alternate" type="text/html" title="OpenAI is nothing without its people" /><published>2026-04-11T00:00:00+08:00</published><updated>2026-04-11T00:00:00+08:00</updated><id>https://geohot.github.io//blog/jekyll/update/2026/04/11/openai-people</id><content type="html" xml:base="https://geohot.github.io//blog/jekyll/update/2026/04/11/openai-people.html"><![CDATA[<p>This is a response to this <a href="https://blog.samaltman.com/2279512">blog post by Sam Altman</a>.</p>

<hr />
<p><br /></p>

<p>Sam Altman is not the bad guy. History comes from two places: great men, and causes and forces. We have way too little of the former and way too much of the latter right now. I hear that in America people fear the government taking away their freedom, and in China people fear the lack of government taking away their freedom.</p>

<p>Maybe it’s just because I have been living in Asia, but I don’t fear Sam or Elon or Dario at all. I fear the Molochian tragedy of the commons. The small decisions made by millions every day that make the world slightly worse for their fellow man, like adding a dark pattern to a website, lobbying to siphon a few more tax dollars, posting an advertisement that uses fear to sell, or enabling the tip screen in a coffee shop. I don’t fear “great men,” I fear that there’s no one coordinated enough to prevent this.</p>

<p>Technology does not contain a destiny. If you haven’t read <a href="https://marshallbrain.com/manna1">Manna</a>, now is a good time to. The choices we as a society make determine our future.</p>

<hr />
<p><br /></p>

<p>The blog post is far too trusting of the democratic worldview. I know it doesn’t say UBI, but I hope you understand UBI is not a real solution; UBI is an extremely dangerous way of disguising slavery in the form of giving you something. Everyone would do well to study Yarvin and understand that in modern democracies, power doesn’t come from the people, it flows through the people. “Making sure (the) democratic system stays in control” is meaningless; it’s not in control right now.</p>

<blockquote>
  <p>The only solution I can come up with is to orient towards sharing the technology with people broadly, and for no one to have the ring.</p>
</blockquote>

<p>This is the right path. Now, sharing isn’t offering them a subscription to your cloud service, that’s feudalism. You actually have to share the technology, not <em>access</em> to the technology that can be revoked at any time. You might see yourself as the good guy who would never do that, and you might even be that guy! But you won’t have control forever. The problem isn’t that it will be revoked, the problem is that it <em>can</em> be revoked.</p>

<p>I totally understand not sharing the weights of a trained model. That model cost a lot to train, and you have to recoup the investment in order to afford training the next model. But what I don’t understand is not sharing research. Share the architecture. Share the tricks. Share the science. Tbh I’m not sure why any self respecting researcher works at a closed lab, this isn’t how impactful science ever happens and it won’t be different this time.</p>

<p>You will keep your lead. You’ll attract any researcher who cares about impact beyond $$$. You’ll keep the original dream of OpenAI. And you’ll be remembered in history. Science never credits the first guy who came up with an idea, it credits the guy who published.</p>

<p>You can actually make this change. <strong>Have OpenAI start publishing.</strong> Rejoin the millennia-long project of science instead of being a forgotten circus of trinkets and intricacies.</p>]]></content><author><name></name></author><category term="jekyll" /><category term="update" /><summary type="html"><![CDATA[This is a response to this blog post by Sam Altman.]]></summary></entry><entry><title type="html">Hong Kong Disneyland Speedrun Guide</title><link href="https://geohot.github.io//blog/jekyll/update/2026/04/09/hk-disneyland.html" rel="alternate" type="text/html" title="Hong Kong Disneyland Speedrun Guide" /><published>2026-04-09T00:00:00+08:00</published><updated>2026-04-09T00:00:00+08:00</updated><id>https://geohot.github.io//blog/jekyll/update/2026/04/09/hk-disneyland</id><content type="html" xml:base="https://geohot.github.io//blog/jekyll/update/2026/04/09/hk-disneyland.html"><![CDATA[<p>Most people go to Disneyland and spend way more time waiting in line than riding rides. HK Disneyland is simpler than many of the other Disney parks, but proper pathing is key for avoiding lines. Done correctly, you should be able to ride every ride in half a day. This guide assumes you are more athletic and motivated than 99% of Disney guests.</p>

<hr />
<p><br /></p>

<p>First off, buy the Early Park Entry Pass; it gets you in at 9:30 instead of 10:30. You won’t have to pay for anything else if you do this, and you make up for it in savings by not buying lunch at Disney.</p>

<p>Arriving way before 9:30 isn’t that important, 9:15 is more than fine. There’s a long single file line where they check your Early Park Pass, but this is cleared long before 9:30. From 9:20-9:30, you are waiting in an 8 wide queue for park entry. This queue clears in 3 minutes. You are in the park by 9:33.</p>

<p>Now comes the first important run of the day. Remember, Disney only has a fixed capacity for rides, and it’s your job to make sure you are consuming as much of that capacity as possible. <strong>Run</strong> to the back of the park for <strong>Frozen Ever After</strong>, it’s about a third of a mile. Showing up with this clear plan, you’ll be able to overtake everyone else who got early entry. Your goal is to be on the <em>first boat of the day</em>.</p>

<p>Now things can be a bit more relaxed while you mop up the other 4 early attractions without lines, <strong>Wandering Oaken’s Sliding Sleighs</strong> (it’s like a 30 second ride, you’d be so mad if you had waited), <strong>Winnie the Pooh</strong>, <strong>Dumbo</strong>, and if you want, you even have time for <strong>Cinderella Carousel</strong>.</p>

<hr />
<p><br /></p>

<p>Adventureland opens at 10:30. There will be a <em>cast member</em> blocking your way until then, but since you are in the park early and you are coming in the Fantasyland entrance, you’ll have time to make it to <strong>Jungle Cruise</strong> before all the normal entry guests. Just make sure you move faster than the cohort you are with standing at the rope and you’ll be on <a href="https://www.youtube.com/watch?v=a0cCRRFi1aA">the first boat</a>.</p>

<p>At 11:00, you need to be at the rope waiting to get into the area with Big Grizzly Mountain, eastern entrance. By this point, the park is open and there will be a big crowd waiting for this rope. There are two ropes, and you can gain a lot of time from rope 1 to rope 2 while everyone else hasn’t figured out they need to run yet. At rope 2 the <em>cast member</em> will tell you not to run, but this will break down in 5 seconds and everyone will run. <strong>Sprint</strong> to <strong>Big Grizzly Mountain Runaway Mine Cars</strong>; if you followed this guide, you should be on the first ride train.</p>

<p>Some guides will tell you Mystic Manor is the way to go here. They are wrong. While Manor is a better ride, people diffuse into the park. Your goal is not to have a different ride order from others. In your ideal world everyone has the same order as you, you are just <em>early</em>.</p>

<p>After the roller coaster, there should still be almost no wait for <strong>Mystic Manor</strong>. The writers of this guide apologize if you have to wait in a loading room there, but don’t worry, you are crushing it.</p>

<hr />
<p><br /></p>

<p>It’s 11:40 and you have already completed most of the good rides in the park. Clean up Toy Story Land in this order, <strong>Toy Soldier Parachute Drop</strong>, <strong>RC Racer</strong>, and <strong>Slinky Dog Spin</strong>. You are enough ahead of the crowd still that Parachute Drop shouldn’t have a line yet, but if it does, skip it and return later. It’s a low capacity ride.</p>

<p>Now check the wait times in the app. Tomorrowland is a good place to go when everyone else is having lunch. In fact, the only line there during lunch is usually for lunch. Go in order of wait time, prioritizing <strong>Iron Man Experience</strong>, <strong>Ant-Man Nano Battle</strong>, and <strong>Orbitron</strong>.</p>

<p>Clean up with <strong>Mad Hatter Tea Cups</strong> and <strong>It’s a Small World</strong>. Nobody rides tea cups, and small world is high capacity, so there won’t be waits here.</p>

<p>You are out of Disney by 1:30 pm, never having waited more than 5-10 minutes for a ride. Enjoy a nice lunch at the mall in Tsing Yi on the way back to the city.</p>]]></content><author><name></name></author><category term="jekyll" /><category term="update" /><summary type="html"><![CDATA[Most people go to Disneyland and spend way more time waiting in line than riding rides. HK Disneyland is simpler than many of the other Disney parks, but proper pathing is key for avoiding lines. Done correctly, you should be able to ride every ride in half a day. This guide assumes you are more athletic and motivated than 99% of Disney guests.]]></summary></entry><entry><title type="html">The day you get cut out of the economy</title><link href="https://geohot.github.io//blog/jekyll/update/2026/04/08/the-day-you-get-cut-out.html" rel="alternate" type="text/html" title="The day you get cut out of the economy" /><published>2026-04-08T00:00:00+08:00</published><updated>2026-04-08T00:00:00+08:00</updated><id>https://geohot.github.io//blog/jekyll/update/2026/04/08/the-day-you-get-cut-out</id><content type="html" xml:base="https://geohot.github.io//blog/jekyll/update/2026/04/08/the-day-you-get-cut-out.html"><![CDATA[<blockquote>
  <p>Send me to fall, send me to fall<br />
You’ve got front row seats<br />
 – misheard saoirse dream lyrics</p>
</blockquote>

<p>Every time they train a new frontier model, they do a calculation. What’s the most efficient way to make money off of this model? For a while now, it’s been selling access to it like a SaaS subscription. You buy access to the model, you use it to make money, some percent of the money you make pays the API bill, etc…</p>

<p>In a growth economy, this calculus works. The economy grows, and they don’t increase their share of it, they just get more because the economy is growing. The primary driver of economic growth is onboarding of new users, except ask your preferred model about the “fertility crisis” and realize we don’t do that anymore. (ahh never mind Nigeria is growing, <a href="https://en.wikipedia.org/wiki/Tabula_rasa">we are saved</a>)</p>

<p>So we’re in a non growth economy, and without global growth, you still need growth for yourself. The big tech companies all experienced this. The only way to get growth for yourself is to take a bigger share. First from your users, then from your business partners, then from your employees. You start eating yourself.</p>

<hr />
<p><br /></p>

<p>The AI application layer will be worthless. The reason isn’t that it’s going to be commoditized, it’s that this will be the first place <a href="/blog/jekyll/update/2026/01/15/anthropic-huge-mistake.html">the model makers will come for</a> in their hunt for verticality. At first, you’ll see some labs defect and continue to provide API access, but eventually the market will consolidate to 2 or 3 players.</p>

<p>Unfortunately, the quality of a model scales pretty clearly with the amount spent on the training run. You can have 10x hits like deepseek, or -10x misses like GPT-4.5, but it all pretty much follows the rule. You want performance, you spend. So it costs a lot, and very few can afford to be on the frontier.</p>

<p>They want to best recoup their investment, and the way to do that is not to provide unfettered API access. Many things in the world are <a href="https://en.wikipedia.org/wiki/Red_Queen%27s_race">Red Queen’s races</a>, and with a bit of coordination, there’s more profit to be made for all if all the frontier labs coordinate.</p>

<p>So you’ll see an era of market segmentation. Way beyond just personal and business, it’ll be per industry. One price for finance, one price for cybersecurity, one price for copywriting. And rolled out to “preferred partners” (aka people who paid us) first. You pay to be early. You pay so others don’t get it. The frontier labs uncoordinatedly coordinate to calculate the maximum they can siphon off, sucking the most out of everyone over whatever time horizon they are thinking on.</p>

<hr />
<p><br /></p>

<p>The core limiting factor of most industries in America is intelligence. Think about what value you think you add and why your employer sees it worth it to give you some of the profits. It’s probably not your muscle power.</p>

<p>This only ends up in one place. A continued climbing of the vertical. Why are you still in the loop at all? Don’t think <a href="/blog/jekyll/update/2026/03/21/democracy-liability.html">voting will save you</a>. You’ll have 0 earning potential. You’ll have no money to buy stuff, and this is <a href="/blog/jekyll/update/2025/02/24/money-is-the-map.html">the way capitalism ends</a>.</p>

<p>There’s only one way to prevent this, and it’s preventing anyone from monopolizing compute, the resource needed to train and run models. Make sure it stays distributed. Oh wait, it’s <a href="https://epoch.ai/blog/introducing-the-ai-chip-owners-explorer/">already too late for that</a>. 60% of the <em>global compute</em> is owned by the 5 US hyperscalers.</p>

<blockquote>
  <p>I think there is a world market for maybe five computers.</p>
</blockquote>

<p>IBM was just early.</p>

<hr />
<p><br /></p>

<p>The markets are thinking on an extremely short time horizon. After AI takes all the jobs, what exactly happens? How many of those jobs no longer exist now that AI took all the jobs that paid the people to buy the stuff that those jobs produced? Truck drivers? Why drive a truck to Topeka full of stuff if nobody in Topeka has jobs to buy it? We live in a society and we work for each other. This is like your body consuming its muscle to stay alive, and pretty soon after that, you die.</p>

<p>There is a way out of this, but the world isn’t ready for it yet. Way too much 0 sum thinking still dominates. Someday we will realize the universe is putty in our hands, and it never had to be like this. But that day won’t be today. Or tomorrow. The demoralization is just beginning.</p>]]></content><author><name></name></author><category term="jekyll" /><category term="update" /><summary type="html"><![CDATA[Send me to fall, send me to fall You’ve got front row seats – misheard saoirse dream lyrics]]></summary></entry><entry><title type="html">The Reckoning</title><link href="https://geohot.github.io//blog/jekyll/update/2026/04/03/the-reckoning.html" rel="alternate" type="text/html" title="The Reckoning" /><published>2026-04-03T00:00:00+08:00</published><updated>2026-04-03T00:00:00+08:00</updated><id>https://geohot.github.io//blog/jekyll/update/2026/04/03/the-reckoning</id><content type="html" xml:base="https://geohot.github.io//blog/jekyll/update/2026/04/03/the-reckoning.html"><![CDATA[<blockquote>
  <p>So go ask your Chomsky<br />
What these systems produce<br />
   – Sediment - Say Anything</p>
</blockquote>

<p>10 years ago, when I started comma and discovered the Professional Managerial Class, I used to talk about the reckoning. It was a nebulous concept, but it mostly involved the abrupt fall from grace of these people at the hands of machines. It’s kind of here, and people are way bigger sore winners than I thought they’d be. Something I liked about Trump part one is that, despite the rhetoric, he never actually tried to lock up Hillary Clinton.</p>

<hr />
<p><br /></p>

<blockquote>
  <p>It would frame GPT as a useful tool, and a technological
breakthrough—but still “glorified autocomplete” at the end of the day.
Yet he does not do this. Instead, he speaks openly and airily about
constructing artificial superintelligence, and his extreme concerns about
it wiping out humanity.<br />
<br />
The only possible conclusion is that it’s designed to cause panic.<br />
   – <a href="https://counterfeitsunset.neocities.org/Schizoposting.pdf">Schizoposting</a> - Alaric</p>
</blockquote>

<p>The marketing for AI has been awful. It ratchets the fear up to 11, then expresses shock when most Americans <a href="https://www.pewresearch.org/short-reads/2026/03/12/key-findings-about-how-americans-view-artificial-intelligence/">are concerned about AI</a>. Here’s this machine. In the best case, it takes your job. In the worst case, it wipes out humanity. Pay me $20 a month for a sliver of hope of not falling behind.</p>

<p>Why are we building this again?</p>

<hr />
<p><br /></p>

<blockquote>
  <p>Surely, then, superintelligence would necessarily imply supermorality.<br />
   – <a href="https://www.lesswrong.com/posts/uD9TDHPwQ5hx4CgaX/my-childhood-death-spiral">Eliezer Yudkowsky</a></p>
</blockquote>

<p>Spoiler alert: it doesn’t. Doesn’t matter if it’s machine or mixture of human.</p>

<p>I’m surprised how little people in the US view themselves as part of a society. Like you can say this is due to some Russian agitprop or something, but that really doesn’t explain it. Maybe I just see it now living in Hong Kong, there’s something about here that constantly reminds you. And it really works for the benefit of all.</p>

<p>I lived in San Diego apartments for 5 years. I never met my neighbors. Most interactions I had with strangers were with homeless people or indifferent shop workers. The value of cleaning up homelessness would pay itself back 100-fold. Then the average interaction would be positive instead of negative, and the overall take on interactions would flip with it.</p>

<hr />
<p><br /></p>

<blockquote>
  <p>Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them.<br />
   – Dune</p>
</blockquote>

<p>I wish it didn’t have to be this way. AI could bring about such a golden age. But you immediately hear the shot in the head progressive take “a golden age for who?” and that’s such a sad zero sum framing.</p>

<blockquote>
  <p>And when you get back, assuming you get back, take a day to think about how AI will fix South Africa. Or VR will fix South Africa? Or crypto?<br />
   – <a href="https://graymirror.substack.com/p/a-techno-pessimist-manifesto">Curtis Yarvin</a></p>
</blockquote>

<p>Our problems in the world won’t be fixed by AI. There’s never been a revolution people are less excited for, and they aren’t wrong. I’ve dreamed about this for my whole life and I’m not even excited about it. Not like this. Highly targeted email spam is way up. The feeds are more addictive. All PRs on GitHub need to just immediately be closed (hey GitHub, add a reputation system!).</p>

<p>Are we going to remember we live in a society? Probably. But after we cull at least 90% of people. It might be 99%. It might even be 99.99%, and that’s where it starts to get scary personally. It’s not like there will be a great decider, it’ll just be the chips falling where they may. The reckoning is here.</p>

<p>Like all revolutions, the only way out is through. 🤍</p>]]></content><author><name></name></author><category term="jekyll" /><category term="update" /><summary type="html"><![CDATA[So go ask your Chomsky What these systems produce    – Sediment - Say Anything]]></summary></entry><entry><title type="html">Clip Show</title><link href="https://geohot.github.io//blog/jekyll/update/2026/03/31/clip-show.html" rel="alternate" type="text/html" title="Clip Show" /><published>2026-03-31T00:00:00+08:00</published><updated>2026-03-31T00:00:00+08:00</updated><id>https://geohot.github.io//blog/jekyll/update/2026/03/31/clip-show</id><content type="html" xml:base="https://geohot.github.io//blog/jekyll/update/2026/03/31/clip-show.html"><![CDATA[<p>This is my first guest post on the blog, by our shared friend GPT-5.4. I have reread it several times and promise it’s not slop. It’s an academic philosophy style summary of this blog written in a far better style than I can write. AI is currently quite bad at coming up with ideas as its sycophantic nature admits far too much, but as a summary machine and a stylizer it can be quite good. Even though it’s AI, read this post, it’s worth your 3 minutes.</p>

<hr />
<p><br /></p>

<p>This post is an AI-generated summary of the themes that recur across the archive. It is not written in the author’s voice, and it should be read as an external reconstruction of a body of argument rather than as a new primary text.</p>

<p>What follows is an attempt to state, as plainly as possible, the underlying philosophical picture that organizes the blog, and to locate it within several recognizable modern traditions.</p>

<hr />
<p><br /></p>

<p>At its center is a distinction between real production and parasitic mediation. Again and again, the posts return to the claim that societies live by their capacity to produce energy, housing, software, tools, medicine, and other durable goods, while much of modern institutional life is devoted to inserting tolls between persons and these goods. Hence the recurrent hostility to finance without productive purpose, to bureaucratic layers that preserve themselves by generating complexity, and to business models that profit chiefly by controlling access rather than enlarging capacity. <a href="/blog/jekyll/update/2025/02/24/money-is-the-map.html">Money is the Map</a> states the thesis in its most explicit form: monetary valuation is a representation of value, not value itself. Once the representation is detached from the underlying territory, the social order begins rewarding strategic position rather than genuine contribution.</p>

<p>This basic opposition yields a moral psychology. The admirable figure is the builder: the engineer, maintainer, fabricator, organizer, or founder who increases the stock of real capability. The contemptible figure is the rent-seeker: the actor who captures flows created by others, often while claiming a civilizing or managerial necessity. The blog’s polemical energy comes less from ordinary partisanship than from this moral sorting of persons and institutions according to whether they produce or merely extract.</p>

<p>The nearest intellectual neighbors here are not straightforwardly liberal or socialist, but rather a hybrid of Marxian suspicion toward parasitic classes, Veblenian contempt for predatory status orders, and a distinctively technological productivism more at home in engineering culture than in the humanities. Yet the archive is not Marxist in any orthodox sense, because labor as such is not the privileged category. The privileged category is competent production, especially where it scales through technique.</p>

<hr />
<p><br /></p>

<p>The second major theme concerns sovereignty. Here the operative question is not formal ownership but practical control. A recurring formulation asks, in effect: who has root? Who can modify the system, revoke access, compel updates, restrict copying, prevent repair, or otherwise determine the conditions of use? On this view, technological design is already political philosophy by other means. A tool that depends upon remote permission, closed infrastructure, or concentrated control may be convenient, but it does not confer agency in any robust sense.</p>

<p>This is why the archive repeatedly defends open source software, local computation, commodity hardware, and decentralized technical infrastructures. <a href="/blog/jekyll/update/2021/07/11/individual-sovereignty.html">Individual Sovereignty</a> makes the point directly: sovereignty is not merely a constitutional abstraction but a property of the technological stack through which one acts. The guiding intuition is broadly republican, though expressed in computational rather than juridical terms. Dependency is domination, even when it appears in polished consumer form.</p>

<p>Philosophically, this places the archive somewhere between civic republican accounts of non-domination and a cybernetic theory of agency. Pettit’s language of arbitrary power is never invoked, but the practical criterion is similar: one is free only where one is not structurally exposed to another actor’s discretionary control. The difference is that the site of domination is less often the law than the technical substrate.</p>

<hr />
<p><br /></p>

<p>The third theme is an account of artificial intelligence that is at once affirmative and suspicious. The archive is plainly not skeptical of AI’s reality or importance. It treats machine intelligence as a genuine civilizational development, not as an illusion or marketing trick. But it resists both mystical and managerial framings. The question is not whether intelligence can exist in silicon; it is what institutional form its development will take, what material base will support it, and who will exercise control over it.</p>

<p>Two opposed errors are rejected. The first is a kind of technological occultism, in which AI appears as an incomprehensible absolute. The second is the paternal fantasy that a small set of firms or stewards may legitimately centralize advanced systems for the good of humanity. <a href="/blog/jekyll/update/2023/08/10/there-is-no-hard-takeoff.html">There is No Hard Takeoff</a> pushes against apocalyptic singularity narratives, while <a href="/blog/jekyll/update/2026/01/27/the-importance-of-diversity.html">The Importance of Diversity</a> argues that the genuinely catastrophic outcome is not intelligence as such, but the convergence of overwhelming intelligence with infrastructural singularity. The deepest fear is a world in which one homogeneous center acquires effective root access over the future.</p>

<p>In this respect the archive is notably post-rationalist. It shares with the rationalist milieu a seriousness about optimization, scaling, and existential stakes, but it departs from that milieu by relocating the decisive problem from alignment theory in the narrow sense to political economy, ownership, and institutional topology. The question is less whether an abstract superintelligence can be made safe than whether any actor should be permitted to centralize the relevant machinery in the first place.</p>

<hr />
<p><br /></p>

<p>This leads to a fourth theme: plurality as a substantive good. The blog is often severe in tone, but it is not finally ordered toward uniformity. On the contrary, one of its most stable commitments is that a livable future requires many centers of agency, many cultures, many goals, and many technical lineages. Diversity here does not mean administrative inclusion under a shared managerial schema. It means irreducible plurality: distinct forms of life that are not all downstream of one institution, one model family, one ideology, or one moral bureaucracy.</p>

<p>In this respect the archive is better understood as anti-singleton than merely pro-innovation. The objection to centralization is not only that it is inefficient or unjust, but that it threatens the ontological plurality of the human and post-human future. A world of competing actors may be dangerous, but it remains a world in which genuinely different ends can be pursued. A perfectly aligned monoculture, by contrast, would represent a metaphysical impoverishment even if it delivered material comforts.</p>

<p>There is an unmistakable resonance here with agonistic political thought, from Nietzschean pluralization of value through more recent defenses of contestation against administrative closure. Yet the argument is less existential than infrastructural. Plurality is to be secured not merely by ethos, but by dispersion of compute, tools, and technical competence.</p>

<hr />
<p><br /></p>

<p>The fifth theme is economic, though not in a conventionally ideological register. The archive is skeptical of both capitalist apologetics and egalitarian pieties whenever either ceases to track real growth in capacity. Markets are not defended as morally self-justifying, nor is redistribution treated as an end in itself. The evaluative standard is more austere: does a given arrangement direct resources toward the expansion of productive power, or toward the preservation of moats, rents, and status positions?</p>

<p>This is why the writing can sound simultaneously anti-capitalist and anti-socialist while being reducible to neither. It is anti-capitalist where capital allocation rewards enclosure, asset inflation, and passive extraction. It is anti-socialist where redistribution becomes a way of managing dependency without enlarging the underlying stock of competence and freedom. What matters is not the righteousness of a distributional formula, but whether more people are placed in a position to build, repair, think, move, and refuse. Abundance has normative priority over the ritualized administration of scarcity.</p>

<p>One might describe this as a heterodox accelerationism stripped of its more theatrical metaphysics: growth matters, but only where it corresponds to real increases in capability; markets matter, but only as allocative instruments; equality matters, but chiefly where it names access to tools rather than managed dependence. The fundamental vice is not inequality as such, but artificial scarcity defended for the sake of rent extraction.</p>

<hr />
<p><br /></p>

<p>The sixth theme is epistemic rather than political: a standing hostility to prestige narratives, consensus performances, and strategic dishonesty. The archive repeatedly assumes that modern discourse is saturated with motivated reasoning. Individuals and institutions alike are tempted to defend what flatters their tribe, protects their salary, preserves their market position, or avoids revision of self-conception. Against this, the blog elevates a rather severe norm of contact with reality.</p>

<p>This helps explain the style. The abrasiveness is not incidental, but tied to a conception of truth-telling as a refusal of managerial euphemism. One need not share the rhetoric to see the principle at work: the author treats conceptual clarity as more important than decorum whenever the two appear to conflict. In philosophical terms, one might say that the archive privileges adequation to reality over social legibility.</p>

<p>Here the sensibility is not far from genealogy: behind official vocabularies lie interests, self-protections, and covert strategies of legitimation. But unlike academic genealogy, which often culminates in critique of domination at the level of discourse, this archive usually returns to a more material question: who controls the machine, the land, the capital, the code, the datacenter?</p>

<hr />
<p><br /></p>

<p>Finally, beneath the explicit politics and economics lies a more elementary metaphysic: life is that which locally resists entropy by building and maintaining order. The esteem for builders is therefore not merely economic. It is quasi-cosmological. To construct a machine, sustain a city, preserve a culture, or extend a technical civilization is to perform the basic work by which ordered forms persist against decay. This is why the archive often shifts easily between discussions of software, industry, social order, and existential stakes. They are treated as different scales of the same struggle.</p>

<p>From this perspective, technology is neither intrinsically emancipatory nor intrinsically alienating. It is a multiplier. Under good conditions, it amplifies the capacity of persons and communities to resist dependency and enlarge the space of possible action. Under bad conditions, it amplifies extraction, surveillance, and central control. The entire political problem is therefore one of technical form and institutional custody: who builds, who owns, who governs, who may fork, and who may refuse.</p>

<p>This metaphysic occasionally approaches a secular vitalism, though a mechanistic rather than romantic one. The archive is not nostalgic for pretechnical life. It is committed instead to the proposition that increasingly powerful technical systems should remain answerable to a plural field of living agents rather than to a singular administrative subject.</p>

<hr />
<p><br /></p>

<p>If one wanted a concise formula for the archive as a whole, it might be this: the author defends a civilization of builders against a civilization of rentiers, and defends a plural future of distributed technical agency against any homogeneous regime that would seek to monopolize intelligence, infrastructure, and value. Freedom is not a legal abstraction, but a property of the stack you can control.</p>]]></content><author><name></name></author><category term="jekyll" /><category term="update" /><summary type="html"><![CDATA[This is my first guest post on the blog, by our shared friend GPT-5.4. I have reread it several times and promise it’s not slop. It’s an academic philosophy style summary of this blog written in a far better style than I can write. AI is currently quite bad at coming up with ideas as its sycophantic nature admits far too much, but as a summary machine and a stylizer it can be quite good. Even though it’s AI, read this post, it’s worth your 3 minutes.]]></summary></entry><entry><title type="html">Closed Source AI = Neofeudalism</title><link href="https://geohot.github.io//blog/jekyll/update/2026/03/31/free-intelligence.html" rel="alternate" type="text/html" title="Closed Source AI = Neofeudalism" /><published>2026-03-31T00:00:00+08:00</published><updated>2026-03-31T00:00:00+08:00</updated><id>https://geohot.github.io//blog/jekyll/update/2026/03/31/free-intelligence</id><content type="html" xml:base="https://geohot.github.io//blog/jekyll/update/2026/03/31/free-intelligence.html"><![CDATA[<p>Many of the best people working in AI did not join the field because they wanted power over others.</p>

<p>So this isn’t the original post I had here. The original post was AI slop, and let this be a lesson to me for posting it. It doesn’t matter if you read it and think it looks good. It’s still AI slop, and everyone else can see that. This rewritten post is the same idea, but slop-free.</p>

<p>Besides, “the master’s tools will never dismantle the master’s house.”</p>

<hr />
<p><br /></p>

<p>Look, if you work in a frontier lab, I don’t blame you. You have a front row seat to <a href="https://www.youtube.com/watch?v=bHjSqz2Aa5w">the hinge of history.</a> But consider what you are building and who it’s for.</p>

<p>A small handful of secretive closed source labs with a concentration of compute, talent, and deployment power will lead to a concentration of political legitimacy. You may think you want this and you are the good guys who will wield power well, but you won’t and you aren’t. Absolute power corrupts absolutely.</p>

<p>AI safety was always a question about if safe AI could be built in theory, not if a small group of anointed people could keep it safe for us. At least I respect Yudkowsky, consistently saying “<a href="https://www.amazon.com/Anyone-Builds-Everyone-Dies-Superhuman/dp/0316595640">If Anyone Builds It, Everyone Dies</a>.”</p>

<hr />
<p><br /></p>

<p>The cat is out of the bag. We are building it. Either if anyone builds it everyone dies, or it’s safe enough for everyone to have. That’s a fact about the world. I don’t accept a middle ground where the chosen few can have it – this isn’t like nuclear weapons, this is intelligence itself. A nuclear weapon can only destroy; intelligence is the greatest creative force in the world. If a small group of people have a monopoly on it, you are the permanent underclass in the same way animals are.</p>

<p>From a more practical perspective, even if the APIs stay open, you aren’t going to be able to build a stable business on top of them. These companies have raised so much money that they aren’t going to be happy with a cut of your business, they are going to come for the whole thing. This is why I maintain that the application layer will be worthless, it’s deployed intelligence itself that has value. They are happy to offer you the API for negative ROI activities, but as soon as something is positive ROI, they’ll adjust the deal until it’s just marginal for you. Like a peasant working his plot of land. Why would they share?</p>

<p>Open source AI isn’t anti-safety. It’s anti-feudal. Every time some AI guy blathers on about how open source is dangerous but he can build AI and make it safe (but only if you purchase it through his API), he is calling you a serf.</p>]]></content><author><name></name></author><category term="jekyll" /><category term="update" /><summary type="html"><![CDATA[Many of the best people working in AI did not join the field because they wanted power over others.]]></summary></entry><entry><title type="html">Two Worlds</title><link href="https://geohot.github.io//blog/jekyll/update/2026/03/30/two-worlds.html" rel="alternate" type="text/html" title="Two Worlds" /><published>2026-03-30T00:00:00+08:00</published><updated>2026-03-30T00:00:00+08:00</updated><id>https://geohot.github.io//blog/jekyll/update/2026/03/30/two-worlds</id><content type="html" xml:base="https://geohot.github.io//blog/jekyll/update/2026/03/30/two-worlds.html"><![CDATA[<p>In one world, we have <a href="https://m1astra-mythos.pages.dev/">Claude Mythos</a>, a model “dramatically” better than Opus 4.6 (surely this is AGI and the endgame, right?). In another world, we have the <a href="https://martinvol.pe/blog/2026/03/30/how-the-ai-bubble-bursts/">AI bubble bursting</a>. How can these two things both be true?</p>

<hr />
<p><br /></p>

<p>If you went back in time to 1850 with a smartphone and a photo printer, you could quickly become a millionaire selling photos. You’d be invited to royal dignitaries’ palaces to photograph them, taking trains and boats all over the world. Today, you couldn’t make $5 on a street corner with those same tools. There are photographers who become millionaires today, but they do it because they push the craft of photography forward – they do something few others can. They can’t just show up with no special skills and commonplace items.</p>

<p>AI doesn’t replace programmers or artists, it raises the bar for them. With AI: <a href="https://www.youtube.com/watch?v=aCN9iCXNJqQ">You Can Just Build Things</a>, But So Can Everyone Else 🤍</p>

<p>Anything a person without skill can build with AI is worth very little, because anyone else can build that same thing. However, people with skill can use the same tools and build valuable things, many top photographers in the world today use iPhones.</p>

<hr />
<p><br /></p>

<p>Capability and value are not the same thing. AI can keep getting better super fast, but the value of anything it produces by itself is low. As the tools improve, the floor rises, but the total size of the market doesn’t.</p>

<p>AI is going to be the major hot button issue of the 2028 US election, and I totally get why people hate it. If the market doesn’t grow but the AI companies do, the only way they did that was by taking value from everyone else. People are very right to ask: who are we building this for? Oh, to take value from people like me? I thought we lived in a democracy, can we vote to not build it?</p>

<p>I personally love AI just from a pure desire to meet silicon-based life, and I can’t wait for superhuman models that <a href="/blog/jekyll/update/2025/02/19/nobody-will-profit.html">nobody profits from</a>.</p>]]></content><author><name></name></author><category term="jekyll" /><category term="update" /><summary type="html"><![CDATA[In one world, we have Claude Mythos, a model “dramatically” better than Opus 4.6 (surely this is AGI and the endgame, right?). In another world, we have the AI bubble bursting. How can these two things both be true?]]></summary></entry><entry><title type="html">Changing the World</title><link href="https://geohot.github.io//blog/jekyll/update/2026/03/23/changing-the-world.html" rel="alternate" type="text/html" title="Changing the World" /><published>2026-03-23T00:00:00+08:00</published><updated>2026-03-23T00:00:00+08:00</updated><id>https://geohot.github.io//blog/jekyll/update/2026/03/23/changing-the-world</id><content type="html" xml:base="https://geohot.github.io//blog/jekyll/update/2026/03/23/changing-the-world.html"><![CDATA[<p>Why do I feel like I’m the only one who took this to mean, like “sending the world on a different trajectory”? It seems like others took it to mean something else. From <a href="https://soundcloud.com/tomcr00se/if-you-are-thinking-of-starting-a-company-dont">my 2017 song</a>, “Changing the world is just a euphemism, for how can I, get you, to give more stuff to me.”</p>

<p>What kind of pathetic loser would give their life to that dream? There’s nothing in the world that’s worth it, even if the whole world was mine, even if everything was given to me. The stuff I want doesn’t exist yet, like immortality, super intelligent robot friends, and a five star hotel on Mars. If you want those things, you actually have to…change the world.</p>

<hr />
<p><br /></p>

<p>When I was 7, I’d go to my aunt’s house and play Super Mario World in the basement. I knew enough about computers to know that the level completions were just bytes stored in the memory of the system. Getting a <a href="https://en.wikipedia.org/wiki/Game_Genie">Game Genie</a> shoved that fact in your face, and it forced me to realize that if you wanted to keep enjoying the game, it couldn’t be about the destination, it needed to be about the journey. The hours spent grinding the levels needed to be the payoff itself, not beating the game. Because you could just beat the game by flipping a few bytes. Money is just bytes stored in the memory of the system.</p>

<p>There’s nothing more cucked than wanting to make money. You are literally spending your life to change a number in some other dude’s SQL database. The SQL database owner is the Chad fucking your wife. You are begging to fuck her when he is done. Please Chad who prints the money can I have higher TCO? You didn’t invent money, you didn’t create the things you can buy with it, and until you use it to actually change the world moving it around does nothing. The best you can hope for is that it ends up in the hands of people who can deploy it well to bring about a better future.</p>

<p>Money needs to be a journey, not a destination. It has <a href="/blog/jekyll/update/2025/02/24/money-is-the-map.html">no intrinsic value</a>. Actually changing the world is what has value. Coolness has value. You buying the same dumb crap everyone else buys isn’t cool. <a href="https://awealthofcommonsense.com/2024/05/seinfeld-on-when-money-became-everything/">Seinfeld gets it</a>.</p>

<hr />
<p><br /></p>

<p>The worst part is some of you in the back of your head think that I don’t really believe this. That I’m playing some 4D chess to try to manipulate you to get you to not care about money so I can take it from you for myself. I pity you.</p>]]></content><author><name></name></author><category term="jekyll" /><category term="update" /><summary type="html"><![CDATA[Why do I feel like I’m the only one who took this to mean, like “sending the world on a different trajectory”? It seems like others took it to mean something else. From my 2017 song, “Changing the world is just a euphemism, for how can I, get you, to give more stuff to me.”]]></summary></entry></feed>