For regular content, subscribe to Trelis on YouTube or Substack (newsletter).
AI Tools: Self-Serve GitHub Repositories
Get access to Trelis’ repos, including support through GitHub issues.
Fine-tuning Services
Trelis offers done-for-you custom model fine-tuning services, with a focus on voice models (TTS, STT).
Trelis Multi-Repo Bundle – Get access to all seven GitHub Repos
- Access to all seven Trelis ADVANCED repos: Voice, Vision, Fine-tuning (LLMs), Inference, Evals, Time-Series, and Robotics.
- Support via GitHub Issues and Trelis’ private Discord.
Large Language Model – Training + Fine-tuning
- Dataset filtering and preparation techniques.
- Synthetic data generation methods.
- Fine-tuning methods including LoRA, full fine-tuning, DPO, and ORPO (see the LoRA sketch below).
- Support for open-source models (Llama, Qwen, etc.) and OpenAI models.
- Single and multi-GPU training (including DDP and FSDP).
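For a flavour of what the LoRA workflow looks like, here is a minimal sketch of attaching LoRA adapters to a causal language model, assuming Hugging Face transformers and peft; the checkpoint name, rank and target modules are illustrative placeholders rather than the repo's exact configuration.

```python
# Minimal LoRA sketch (assumptions: Hugging Face transformers + peft; the
# checkpoint and hyperparameters are illustrative, not the repo's settings).
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model_id = "meta-llama/Llama-3.2-1B"  # assumed example checkpoint
model = AutoModelForCausalLM.from_pretrained(model_id)

lora_config = LoraConfig(
    r=16,                                  # adapter rank
    lora_alpha=32,                         # scaling factor
    target_modules=["q_proj", "v_proj"],   # attach adapters to attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()         # only the small LoRA weights train
# ...then train with transformers.Trainer (or TRL) on your prepared dataset.
```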
Audio Model Fine-tuning + Inference (Transcription, Voice Cloning and Speech-to-Speech)
- Dataset preparation for transcription, voice cloning or multi-modal audio + text models
- Fine-tune multi-modal Audio + Text Models (Qwen2-Audio)
- Speech-to-Text Transcription (Fine-tuning and Serving Whisper Turbo; see the sketch below)
- Text-to-Speech / Voice Cloning (Fine-tuning StyleTTS2)
- Run Speech-to-Speech models locally or in the cloud
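To illustrate the transcription side, here is a minimal sketch using the transformers ASR pipeline with the openai/whisper-large-v3-turbo checkpoint; the audio filename is hypothetical, and the repo's fine-tuning and serving setup goes well beyond this.

```python
# Minimal transcription sketch (assumptions: transformers' ASR pipeline and the
# openai/whisper-large-v3-turbo checkpoint; the audio file is hypothetical).
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="openai/whisper-large-v3-turbo",
    chunk_length_s=30,                 # split long recordings into 30 s chunks
)
result = asr("meeting_recording.wav")  # hypothetical local audio file
print(result["text"])
```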
Robotics
- Data Preparation and Training Guides
- ACT (Action Chunking with Transformers)
- GR00T-N1
Vision + Diffusion Model Fine-tuning + Inference
- Dataset preparation for multi-modal image + text models or diffusion models.
- Fine-tuning diffusion models for image generation with FLUX Schnell or FLUX Dev (see the sketch below).
- Fine-tuning vision models for custom image or text datasets (Qwen VL, Pixtral, Florence 2, LLaVA).
- Fine-tune vision models for bounding box detection.
- Deploying a server / API endpoint for Qwen VL.
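For context on the diffusion side, below is a minimal FLUX Schnell inference sketch with the diffusers library; it assumes the black-forest-labs/FLUX.1-schnell checkpoint and a CUDA GPU, and shows plain generation rather than the fine-tuning covered in the repo.

```python
# Minimal FLUX Schnell inference sketch (assumptions: diffusers' FluxPipeline,
# the FLUX.1-schnell checkpoint, and a CUDA GPU with enough memory).
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16
).to("cuda")

image = pipe(
    "a watercolor painting of a lighthouse at dawn",
    num_inference_steps=4,   # Schnell is distilled for few-step generation
    guidance_scale=0.0,      # recommended setting for the Schnell variant
).images[0]
image.save("lighthouse.png")
```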
Language Model Performance Evaluation (Evals)
- Creating evaluation datasets
- Setting up an LLM as a judge (see the sketch below)
- Measuring the performance of your AI system (i.e. running “evals”)
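To give a sense of LLM-as-a-judge, here is a minimal sketch using the openai Python client with a gpt-4o-mini judge that returns a JSON score; the prompt and scoring scheme are illustrative assumptions, not the repo's evaluation harness.

```python
# Minimal LLM-as-a-judge sketch (assumptions: the openai Python client, a
# gpt-4o-mini judge, and a simple 1-5 JSON scoring scheme).
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def judge(question: str, answer: str) -> dict:
    """Ask the judge model to score an answer for quality."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        response_format={"type": "json_object"},
        messages=[
            {"role": "system", "content": (
                "You are an evaluation judge. Score the answer to the question "
                "from 1 (poor) to 5 (excellent). Reply as JSON: "
                '{"score": <int>, "rationale": "<one sentence>"}'
            )},
            {"role": "user", "content": f"Question: {question}\nAnswer: {answer}"},
        ],
    )
    return json.loads(response.choices[0].message.content)

print(judge("What is the capital of France?", "Paris."))
```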
Time Series Forecasting
- Forecasting with transformers (e.g. weather, power demand, prices) – see the sketch below
- Forecasting and Evaluation Scripts
- Training and Fine-tuning Scripts
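As a rough illustration of transformer-based forecasting, the sketch below trains a tiny PyTorch encoder to predict the next value of a synthetic sine wave from a 24-step window; the architecture and data are placeholder choices, not the repo's scripts.

```python
# Minimal forecasting sketch (assumptions: plain PyTorch, a univariate series,
# and a tiny TransformerEncoder predicting one step ahead from a 24-step window).
import torch
import torch.nn as nn

class TinyForecaster(nn.Module):
    def __init__(self, d_model=32, nhead=4, num_layers=2, window=24):
        super().__init__()
        self.input_proj = nn.Linear(1, d_model)                 # scalar -> embedding
        self.pos = nn.Parameter(torch.zeros(window, d_model))   # learned positions
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.head = nn.Linear(d_model, 1)                       # embedding -> next value

    def forward(self, x):                                       # x: (batch, window, 1)
        h = self.encoder(self.input_proj(x) + self.pos)
        return self.head(h[:, -1])                              # one-step-ahead forecast

# Toy training loop on a synthetic sine wave.
series = torch.sin(torch.linspace(0, 50, 1000))
windows = series.unfold(0, 25, 1)                               # 24 inputs + 1 target
x, y = windows[:, :24].unsqueeze(-1), windows[:, 24:]
model = TinyForecaster()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    opt.step()
```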
Language Model – Inference
- API setup guides for RunPod, Vast.ai or your own laptop, using SGLang, Nvidia NIM, vLLM, TGI or llama.cpp.
- Serverless Inference Guide.
- Inference speed-up techniques: Context caching, speculative decoding, quantization, output predictions.
- Function calling and structured data extraction (e.g. JSON) methods.
- Sensitive data redaction with Presidio (see the sketch below).
- Security precautions (against jailbreaking and prompt injection).
- Build an AI assistant (RAG pipeline) including vector search, BM25, and citation verification.
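For the redaction step, here is a minimal sketch using Microsoft Presidio's default analyzer and anonymizer; the example text is made up, and the repo wires redaction into a full inference pipeline.

```python
# Minimal redaction sketch (assumptions: Microsoft Presidio's default
# recognizers; the example text is made up).
from presidio_analyzer import AnalyzerEngine
from presidio_anonymizer import AnonymizerEngine

text = "Contact Jane Doe at jane.doe@example.com or +1 212 555 0100."

analyzer = AnalyzerEngine()
findings = analyzer.analyze(text=text, language="en")            # detect PII spans

anonymizer = AnonymizerEngine()
redacted = anonymizer.anonymize(text=text, analyzer_results=findings)
print(redacted.text)  # PII replaced with placeholders such as <PERSON>
```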
Support / Questions
- YouTube Videos: Post a comment on YouTube. I respond to most comments (same for comments on Substack or X).
- Questions about GitHub Repos/Scripts: Post a question on the corresponding product page (“Learn More” links above).
- GitHub Repo Support (after purchase): Create an issue in the relevant GitHub repo.
- Other topics/questions: You can purchase lifetime access to the Trelis Discord here. Access is also included when you buy the GitHub Repo Bundle.
©️ Trelis LTD 2025