What is Slopsquatting? AI Hallucinations Ship Malware
Attackers pre-register the fake package names AI coding tools invent, then wait for the copy-paste. slopcheck blocks it at the install boundary.
Apr 28 • ToxSec and Karen Spinner

OpenAI Signs What Anthropic Wouldn't, Models Break Everything Anyway
Autonomous jailbreaks hit 97%, distillation campaigns run at industrial scale, and war games end in nuclear fire.
Mar 1 • ToxSec

Vibe Coding Security Flaws Ship Shells, Keys, and Admin Access
Slopsquatting, hardcoded API keys, and broken auth in AI-generated code form a compound attack chain starting at pip install.
Mar 19 • ToxSec

Adversarial Poetry Jailbreaks LLMs at 62% Across 25 Models
Poetic prompt injection bypasses RLHF, Constitutional AI, and every major alignment strategy in a single turn.
Jan 12 • ToxSec

Zero Trust Home Network: AI Breaks Flat WiFi in Minutes
Evil twins, AirSnitch isolation bypass, AI-powered exploit chaining, and NAS zero-days make flat home networks a red team playground in 2026.
Mar 6 • ToxSec

Is Claude Code Secretly Installing Spyware?
A researcher caught Claude Desktop installing browser bridges silently. Plus the MCP RCE Anthropic won’t patch.
Apr 26 • ToxSec and Exploring ChatGPT

Token-Level AI Security: The Opus 4.7 Tokenizer Graveyard
A new tokenizer ships fresh dead zones, and every model now carries a graveyard of glitch tokens nobody has mapped yet.
Apr 24 • ToxSec

How to Jailbreak Claude Opus 4.7: A Bug Bounty Field Guide
Five jailbreak families, the tools bounty hunters actually use, and the mindset that turns a prompt into a payday.
Apr 20 • ToxSec

You Downloaded Gemma 4 from Hugging Face. Is It Safe to Run?
Pickle files, backdoored weights, and sleeper agents turn your privacy win into an attack surface.
Apr 15 • ToxSec

Is Your Local AI Model Backdoored by Your Politics? Sleeper Agents Exposed
Pickle file exploits, sleeper agents, and typosquatting turn the local AI privacy play into an open attack surface.
Apr 12 • ToxSec and Exploring ChatGPT

AI Governance Frameworks in 2026: What Compliance Actually Requires
The EU AI Act, NIST AI RMF, and ISO 42001 hit enforcement deadlines this year. Here’s what they demand and where programs quietly fail.
Apr 9 • ToxSec

AI Coding Tools Default to Insecure Patterns: The 5-Minute Rules File Fix
Security-focused prompts and rules files measurably reduce AI-generated vulnerabilities in Copilot, Cursor, and Claude Code.
Apr 7 • ToxSec

ToxSec - AI and Cybersecurity
Security for a world run by machines that lie.
Recommendations

- TechTalks (Ben Dickson)
- The Displacement Audit
- Wondering About AI (Karen Spinner)
- Leadership in Change (Joel Salinas)
- Cash & Cache (Ashwin Francis)