Discussion about this post

NajJ · Dec 29 (edited)

Great write-up, though I disagree with your point about increasing logging and putting up more walls (anomaly-detection infrastructure).

AI systems (because of their mathematical structure) will always be able to outmaneuver every pattern-recognition system we try to put in front of them as a security barrier.

For today, I believe we need to stop sending things over the internet that we don’t need to. Much of what we push to servers, encrypted or not, will eventually be made available publicly. Very few databases or servers are post-quantum secure, and data is already being harvested en masse for decryption later. I firmly believe that today we need to force our tech overlords to give up data harvesting and find a new business model. It’s the only way we can persist; otherwise there will be no trust in the web, a system built on clients trusting each other. Responding to “hello” is how all of this works.

We need to keep more things that can be local, local. With WebGPU, JPEG XL, service workers, and more, we already have plenty of compute at home; we just haven’t put it to use, because Google benefits greatly from the current state of things, and whatever they make the default in Chrome becomes the de facto standard. It’s the same situation we faced in the ’80s and ’90s, with middleboxes getting in the way of quicker progress.
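
As a rough sketch of what “local first” could look like in the browser today (the WebGPU typing assumes the @webgpu/types package, and "/sw.js" is just a placeholder path of mine, not anything Chrome ships):

```typescript
// Sketch only: prefer on-device compute and treat the network as a last resort.
// Assumes the "@webgpu/types" package for navigator.gpu; "/sw.js" is a placeholder.
async function pickComputeTarget(): Promise<"webgpu" | "service-worker" | "remote"> {
  // WebGPU: heavy work (image decoding, inference) runs on the user's own GPU.
  if ("gpu" in navigator) {
    const adapter = await navigator.gpu.requestAdapter();
    if (adapter) return "webgpu";
  }
  // Service worker: cache and transform assets locally instead of round-tripping.
  if ("serviceWorker" in navigator) {
    await navigator.serviceWorker.register("/sw.js");
    return "service-worker";
  }
  // Only when neither is available does data leave the device.
  return "remote";
}
```

The point is the ordering: data only leaves the device when nothing local can do the job.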

By shifting to more local infrastructure, we limit the scale of AI-enabled attacks until we can build a new agent/operator protocol that lives in a separate layer, fully auditable and publicly accessible: open access to compute with an economic component, and a public reputation system organized around performance, capability, and reliability.
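
Purely as an illustration of what such a public reputation record might carry (every field name here is my own, not from any existing protocol):

```typescript
// Hypothetical public reputation record for one agent; field names are illustrative.
interface AgentReputation {
  agentId: string;      // stable, publicly resolvable identifier
  network: string;      // the directory or network that vouches for this agent
  performance: number;  // 0..1 rolling score: how well it completes tasks
  capability: string[]; // what it claims to do: "image-edit", "code-review", ...
  reliability: number;  // 0..1: uptime and honesty of self-reported results
  stake: number;        // the economic component: value at risk if it misbehaves
}
```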

With something like this, we shift the problem of bad actors into a problem of bad networks. Agents with poor trust are discernible at a glance, and networks are easy to cut off at the directory level. Bad operators and bad agents will always exist. Instead of trying to chain down LLMs (a losing battle against progress), we need to create a space that improves their capability whilst rewarding honesty, integrity, and reliability.
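
Cutting a bad network off at the directory level could then be a one-pass filter over those reputation records; the threshold and the averaging rule below are arbitrary illustrations, not a real policy:

```typescript
// Sketch: drop every agent whose home network has fallen below a trust floor.
// Reuses the AgentReputation interface from the sketch above.
function pruneDirectory(agents: AgentReputation[], floor = 0.4): AgentReputation[] {
  const byNetwork = new Map<string, number[]>();
  for (const a of agents) {
    const scores = byNetwork.get(a.network) ?? [];
    scores.push((a.performance + a.reliability) / 2);
    byNetwork.set(a.network, scores);
  }
  const trustedNetworks = new Set(
    [...byNetwork.entries()]
      .filter(([, scores]) => scores.reduce((x, y) => x + y, 0) / scores.length >= floor)
      .map(([network]) => network),
  );
  // Bad operators go from "hard to detect" to "easy to disconnect as a group".
  return agents.filter(a => trustedNetworks.has(a.network));
}
```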

A shared communication protocol for agents lets me use the very best tool for every job. I can use ChatGPT’s Photoshop connector to live-edit my images as nano banana creates them, Claude to write code, and Codex to review it, all in tandem, with no context liability like MCP. Each performs its task better than the others could, and together they make the end result better. More variation should exist between model personalities and capabilities, but the web isn’t ready for it. In this new protocol, diversity is encouraged economically and reputationally.
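
To make “in tandem, with no context liability” concrete, here is one possible shape for a task envelope; every field name is hypothetical, and nothing here is MCP or any shipping protocol:

```typescript
// Hypothetical task envelope for a shared agent protocol; nothing here is MCP.
// Each agent receives only the payload its task needs, never the whole conversation.
interface TaskEnvelope {
  taskId: string;
  from: string;              // requesting agent or operator id
  to: string;                // the best tool for this particular job
  capability: string;        // e.g. "image-edit", "code-review"
  payload: unknown;          // just the inputs for this task, no shared context
  resultSignature?: string;  // signed result, so reputation can be scored publicly
}

// Example: the code-writing agent hands review off to a different, better reviewer.
const reviewRequest: TaskEnvelope = {
  taskId: "t-123",
  from: "claude-code-writer",
  to: "codex-reviewer",
  capability: "code-review",
  payload: { diff: "..." },
};
```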

I could go on for days. If anyone is interested and has read this far, please DM me. I have finished a first draft of an RFC, but could use more people to help improve what I currently call AURORA.
