The Supreme Court settled something the AI industry has been circling for years: AI-generated works without meaningful human creative contribution have no copyright protection. The Thaler v. Perlmutter ruling is being covered as an AI authorship story, but for identity and likeness, the implications go further. When AI systems generate content using a real person’s face or voice, the output may lack an owner, but the identity does not. This reframes the problem. The question is no longer who owns the content, but whether the underlying identity was used with authorization. As generative systems scale, the ability to verify and authorize identity before generation is becoming critical. Without it, the gap between creation speed and enforcement will only widen. More: https://lnkd.in/edFb2aWW #LotiAI #AIPolicy #Copyright #GenerativeAI #DigitalRights #LikenessRights #NIL
Loti
Technology, Information and Internet
Seattle, WA 7,069 followers
In an AI-driven world, Loti ensures your identity remains in your control.
About us
Loti AI protects major celebrities, public figures, and corporate IP from online threats, specializing in deepfake and impersonation detection. Founded in 2022, we offer likeness protection, content location and removal, and contract enforcement across the entire public internet, including social media platforms and adult sites. Loti AI leads the way to a future where people effortlessly control their digital selves. Our platform enhances human uniqueness, empowering individuals to choose what they share or keep private. We're building a world where personal autonomy and technological advancement are in harmony, fostering creativity and benefiting society. We believe all individuals have the right to protect their digital identities and will soon be introducing creator and consumer protection products. For more information, please visit lotiai.com
- Website: https://www.lotiai.com
- Industry: Technology, Information and Internet
- Company size: 51-200 employees
- Headquarters: Seattle, WA
- Type: Privately Held
Locations
- Primary: Seattle, WA, US
- 8 The Green, STE R, Dover, Delaware 19901, US
Updates
-
Identity is becoming one of the most contested assets in the global economy. At the Semafor World Economy Summit in Washington, Loti AI CEO Luke Arrigoni joined global leaders, policymakers, and executives for a week of conversations shaping how AI is understood at the highest levels. What stood out was a clear shift in focus. The conversations around AI are no longer about capabilities; they are about control: who owns identity, who authorizes its use, and how those decisions get enforced at scale. As generative systems make it trivial to produce content, the constraints have shifted from technical to economic and legal. Consent, attribution, and identity are becoming the foundation that determines how value moves through these systems. As Luke shared after the summit: “Across conversations, the same tension kept coming up. We’ve made it incredibly easy to generate content, but we haven’t built equally robust systems to determine who can benefit from it.” The infrastructure being built today around consent, attribution, and identity protection will determine who benefits from the generative AI economy and who gets left unprotected by it. More on the summit: https://lnkd.in/g5WRfcb5 #LotiAI #SemaforWorldEconomy #GenerativeAI #DigitalRights #LikenessProtection #NIL #AI
-
The EU just banned AI nudifiers while the US is still asking courts to figure out if deepfakes violate anything at all. The EU Council's negotiating position adds a flat prohibition on non-consensual intimate AI content to the AI Act. They're also giving platforms more time to comply with high-risk AI rules, which means they're tightening protections where harm is documented while acknowledging that building infrastructure takes time. Compare that to the US approach: eliminate state-level protections, offer no federal replacement, and let litigation sort it out over the next decade. Platforms building consent and clearance systems now are not just preparing for EU compliance. They are building toward the standard the rest of the world is converging on. Full piece: https://lnkd.in/d6AnBi93 #LotiAI #AIPolicy #GenerativeAI #DigitalRights #LikenessRights #EUAIAct #ContentCompliance
-
The White House released its national AI framework, and buried inside it is a position that will shape the rights and likeness landscape for years. The administration does not believe training AI models on copyrighted material violates copyright law, and is leaving that fight to the courts. At the same time, it is pushing Congress to preempt the state-level AI protections that currently exist in the absence of federal standards. Federal law has not moved fast enough. Just last month Washington passed two AI bills on chatbot safety and content provenance, and New York passed a GenAI warning bill. Preempting state laws shifts responsibility to the federal level. The question is whether a single national policy on artificial intelligence will be stronger, or simply more uniform, than the dozens of state-level NIL and deepfake laws already on the books. Full piece: https://lnkd.in/dXFEwY2m #LotiAI #AIPolicy #GenerativeAI #DigitalRights #LikenessRights #NIL #ContentCompliance
-
From modern-day love stories to reality-shattering, AI-powered romance scams. This is the newest fear unlocked in contemporary dating. Last year alone, Americans lost more than $3 billion to romance scams, up from $250 million in 2023. Nearly half of online daters report being targeted, and 74% of those became victims. AI-generated pictures, voice clones, and real-time deepfake video calls have made the old identity verification methods obsolete. The gold standard for spotting a catfish used to be a live video call. Real-time face-swapping has made that unreliable too. When a convincing identity can be generated on demand, the platforms hosting those interactions carry a growing responsibility to verify who is actually on the other side. Platforms can no longer rely on users to detect what’s real; they need systems designed to verify identity before the interaction ever begins, not after. Full piece: https://lnkd.in/gfnfM3MV #LotiAI #Deepfakes #GenerativeAI #DigitalRights #IdentityVerification #AIPolicy #OnlineSafety
-
310 coordinated investment scam campaigns. 26,000 ad sightings. 15+ languages. One consistent entry point: a stolen celebrity identity. A recent report from Bitdefender shows how scammers are using fake celebrity narratives across Meta platforms to funnel victims into investment fraud. The tactic is simple: borrow trust from a real person through fabricated interviews, staged scandals, and “exclusive” financial revelations, then redirect victims toward a fake opportunity. Last year, Meta removed over 159 million scam ads, with 92% taken down before anyone reported them. Enforcement is scaling, but it enters the process after distribution. By the time content is taken down, the likeness has already been used, trust has already been transferred, and the damage is already in motion. Without controls around how identity is used at the point of creation, these operations remain a highly effective and scalable entry point for fraud. Full piece: https://lnkd.in/gVVswayT #LotiAI #Deepfakes #GenerativeAI #DigitalRights #LikenessProtection #AIPolicy #OnlineFraud
-
A woman amassed hundreds of thousands of followers on Instagram, TikTok, and Facebook promoting wellness supplements. She does not exist. "Melanskia" is one of several AI-generated personas deployed by wellness brand Modern Antidote to sell products across social platforms, with no disclosure that any of them are fake. The brand owner's take: "Every piece of the business is being AI-ified." This is not a fringe case. Researchers note that people consistently overestimate their ability to identify AI-generated faces, making synthetic personas an increasingly effective vehicle for reaching audiences who believe they are engaging with a real person. A convincing synthetic persona built around a fabricated identity exploits the same trust that real creators spend years earning. This is the new reality: influence without identity. When trust can be manufactured at scale, platforms and brands need infrastructure that makes origin, authenticity, and intent visible, not optional. More: https://lnkd.in/gqWxK5NY #LotiAI #GenerativeAI #DigitalRights #LikenessProtection #CreatorEconomy #AIPolicy #Deepfakes
-
“Today, it’s not just content that can be scaled. It’s identity.” In the latest episode of Changing Frequencies, Loti AI CEO Luke Arrigoni joins host Josh Miely to unpack what generative AI is doing to identity, ownership, and control, and what it means for the creator economy. For years, the conversation around AI focused on what it could create. Now, it is about who it can become. Name, image, and likeness was once a conversation reserved for athletes. It is now relevant to every creator with an audience. Concepts like consent, compensation, and control sound straightforward until you try to build systems around them, and the gap between what AI can do and what policy, platforms, and business models can support continues to widen. As Luke puts it, “The core way to sell and defend your brand is through defense, being able to ensure that when your rules aren’t respected and something goes out, it gets taken down. That is what our core product does.” As AI expands what is technically possible, creators are being pushed to define what is personally permissible, and how much of their identity they are willing to hand over in the process. Full episode presented by NAB Show: https://lnkd.in/eYi7cRjA #LotiAI #CreatorEconomy #GenerativeAI #NIL #DigitalRights #LikenessProtection #AI
Changing Frequencies: Episode 4 | The Creator Economy at Risk
-
With midterm elections approaching and concerns around AI-generated impersonation growing, platforms are starting to respond. One recent example is YouTube’s expansion of its likeness detection tool to politicians, journalists, and public officials. These verified users can now flag content featuring their facial likeness and request removal. The tool is designed to identify and manage unauthorized uses of a person’s likeness once content is already online. That shift toward proactive detection is meaningful, but how effective are these systems in practice? 1. Detection does not equal removal. YouTube has made clear that removal requests are evaluated on a case-by-case basis, meaning flagged content can stay up. 2. Access is still limited as the tool remains invite-only, leaving the vast majority of people outside its scope entirely. 3. The gap between facial detection and voice cloning remains wide, leaving voice impersonation largely unaddressed. Taken together, these limitations point to a broader gap. Detection helps surface the problem, but it does not prevent it. Addressing these gaps requires systems that verify consent and clear rights before content is generated, rather than relying on systems that respond after the fact. Until that layer exists, platforms will continue to chase the problem instead of getting ahead of it. More: https://lnkd.in/gTvVXPsi #LotiAI #Deepfakes #GenerativeAI #DigitalRights #AIPolicy #LikenessProtection #Misinformation
-
During the Milan Winter Olympics, U.S. athletes competed for medals. At the same time, their faces and voices were being used without their consent across the internet. Users on 4chan generated and shared sexualized imagery of figure skaters Alysa Liu, Amber Glenn, and Isabeau Levito, as well as skiers Mikaela Shiffrin and Eileen Gu. The White House posted an AI-generated video falsely depicting hockey player Brady Tkachuk mocking Canadians after Team USA's gold medal win, viewed tens of millions of times despite an AI disclaimer. What makes this harder to contain is not just the technology. As Graphika senior analyst Cristina López G. notes in a new CyberScoop piece by Derek B. Johnson, the real problem is that the people generating this content do not connect the harm to the actual person. In their minds, because it is not real, no one is getting hurt. That disconnect is a cultural problem, but the distribution is a technical one. Open-source AI models with no safety guardrails can be downloaded, fine-tuned for nudification, and shared across communities. Users are no longer trading outputs; they are trading the ability to generate infinite outputs, targeted at specific people. Detection and takedown approaches were not designed for this volume or speed of abuse. With the potential for harm now compounding, platforms need to make unauthorized generation and distribution preventable in the first place. Full piece: https://lnkd.in/eZDqxJ2h #LotiAI #Deepfakes #GenerativeAI #DigitalRights #LikenessRights #NIL #AIPolicy