Inspiration
Anyone can generate malicious, illegal, or heavily biased content with an open-source model and then publicly claim it was produced by a company's proprietary enterprise model. That can lead to PR nightmares, regulatory scrutiny, and frivolous lawsuits. AI companies need a legal and reputational shield.
What it does
Origraph is a tool that processes an LLM's output and embeds a traceable watermark into the text. It gives AI companies solid proof of whether they are the real creator of reported content.
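One way the watermark idea can be sketched (this is a minimal illustration, not the actual Origraph implementation; the function names and the bit-per-word encoding are our hypothetical choices) is to interleave zero-width Unicode characters, which render invisibly, into the text as a bit pattern:

```python
# Hypothetical sketch: encode watermark bits as zero-width characters.
ZW0 = "\u200b"  # zero-width space       -> bit 0
ZW1 = "\u200c"  # zero-width non-joiner  -> bit 1

def embed_watermark(text: str, bits: str) -> str:
    """Append one zero-width character per word, cycling through the bits."""
    words = text.split(" ")
    marked = []
    for i, word in enumerate(words):
        marker = ZW0 if bits[i % len(bits)] == "0" else ZW1
        marked.append(word + marker)
    return " ".join(marked)

def extract_watermark(text: str, n_bits: int) -> str:
    """Recover the first n_bits of the embedded pattern."""
    found = [c for c in text if c in (ZW0, ZW1)]
    return "".join("0" if c == ZW0 else "1" for c in found[:n_bits])
```

The watermarked string looks identical to the original on screen, but the hidden bits survive copy-paste and can be recovered programmatically.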
How we built it
Origraph embeds specific patterns of zero-width, invisible characters into the LLM's output, hashes the watermarked text, and anchors the hash value on a blockchain. Because a cryptographic hash is irreversible and collision-resistant, AI companies can check whether reported content matches any hash value on the chain, and no one can forge malicious, illegal, or heavily biased output that matches an anchored hash!
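The hash-and-check step above can be sketched in a few lines (a simplified illustration under our own assumptions; the real anchoring happens against a chain, not an in-memory set, and the function names are hypothetical):

```python
import hashlib

def anchor_hash(text: str) -> str:
    """SHA-256 digest of the watermarked output; this is what gets anchored."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

def is_genuine(reported_text: str, anchored_hashes: set[str]) -> bool:
    """Reported content is genuine only if its hash was anchored earlier."""
    return anchor_hash(reported_text) in anchored_hashes

# At generation time, the company anchors hashes of its real outputs:
anchored = {anchor_hash("model output A"), anchor_hash("model output B")}
```

Even a one-character edit to a genuine output produces a completely different digest, so a forged "quote" from the model cannot match any anchored hash.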
Challenges we ran into
The current chain is simulated locally (an SQLite hash chain) rather than live QDay anchoring, since we didn't have enough time to set up a real chain and weren't ready to pay gas fees. We also don't yet have a direct QDay RPC/contract deployment flow like the referenced repo.
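A local SQLite hash chain of the kind described can be sketched like this (our own minimal version, not the project's exact schema; each block stores the previous block's hash, so any tampering with a stored record breaks verification downstream):

```python
import hashlib
import sqlite3

GENESIS = "0" * 64  # placeholder hash before the first block

def init_chain(conn: sqlite3.Connection) -> None:
    conn.execute(
        "CREATE TABLE IF NOT EXISTS chain ("
        "id INTEGER PRIMARY KEY, content_hash TEXT, "
        "prev_hash TEXT, block_hash TEXT)"
    )

def append_block(conn: sqlite3.Connection, content_hash: str) -> str:
    """Link a new content hash to the tip of the chain."""
    row = conn.execute(
        "SELECT block_hash FROM chain ORDER BY id DESC LIMIT 1"
    ).fetchone()
    prev = row[0] if row else GENESIS
    block_hash = hashlib.sha256((prev + content_hash).encode()).hexdigest()
    conn.execute(
        "INSERT INTO chain (content_hash, prev_hash, block_hash) VALUES (?, ?, ?)",
        (content_hash, prev, block_hash),
    )
    return block_hash

def verify_chain(conn: sqlite3.Connection) -> bool:
    """Recompute every link; any edited row invalidates the chain."""
    prev = GENESIS
    for content_hash, prev_hash, block_hash in conn.execute(
        "SELECT content_hash, prev_hash, block_hash FROM chain ORDER BY id"
    ):
        expected = hashlib.sha256((prev + content_hash).encode()).hexdigest()
        if prev_hash != prev or block_hash != expected:
            return False
        prev = block_hash
    return True
```

This gives the tamper-evidence property locally; swapping SQLite for a real chain's append-only storage is what the QDay anchoring step would add.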
Accomplishments that we're proud of
We brainstormed an innovative solution to frivolous lawsuits, the nightmare of AI companies, making good use of irreversible hash functions and tamper-evident blockchain technology.
What we learned
Encryption/Blockchain
What's next for Origraph
We want to explore more use cases for our product, deploy it on a real blockchain, and see it adopted by AI companies!