OWASP LA Virtual Event: Fulfilling Your LLM Deployment Dreams
As organizations race to integrate Large Language Models into core business processes, they face a difficult trade-off: unlock major productivity gains, or expose themselves to data leakage, shadow AI, and architectural risk.
This month, Aaron Ansari joins OWASP LA to break down what secure, enterprise-grade LLM deployment actually looks like.
Aaron served 13 years as Co-Chapter Chair of OWASP Central Ohio and brings over two decades of application security experience from roles at BMW Group, Trend Micro, and more. He has taught programming and secure coding in Python for 18+ years at Franklin University and contributed to early JavaScript security projects alongside Kevin Wall.
Talk Abstract: Fulfilling Your LLM Deployment Dreams
This session moves beyond basic chat interfaces and into the technical foundations of a secure generative AI architecture.
Key topics include:
• The Risk Landscape: Prompt injection (OWASP LLM01), insecure output handling, training data poisoning, and emerging LLM threats.
• Architectural Defenses: Using Retrieval-Augmented Generation (RAG) to preserve accuracy while avoiding the risks of fine-tuning on sensitive PII.
• Data Governance: Implementing fine-grained access control and role-based accounting within vector databases.
• Operational Security: A layered security model, from hyperparameter tuning to rate limiting and semantic caching.
You will leave with a practical framework for deploying AI systems that are innovative, compliant, and resilient.
If you are building with LLMs, evaluating generative AI internally, or responsible for AI governance and security, this session is directly relevant to you.
Register here: https://luma.com/owaspla
See you virtually.