
Mook

I'm an MLOps / Infrastructure engineer focused on LLM inference and GPU serving.

Currently finishing my M.S. in CS at USC. Built a two-level K8s dispatcher for a heterogeneous 700-GPU cluster: session-sticky routing, gRPC state sync, and sub-GPU partitioning with HAMi + Kueue. Also worked on CUDA kernel optimization and distributed training pipelines.
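The session-sticky routing mentioned above can be sketched with a consistent hash ring: requests carrying the same session ID always land on the same replica (preserving KV-cache locality), and remapping is minimal when nodes join or leave. This is an illustrative toy, not the actual dispatcher; the `StickyRouter` class and node names are hypothetical.

```python
import bisect
import hashlib

class StickyRouter:
    """Toy session-sticky router using a consistent hash ring.

    Each physical node is placed on the ring at `vnodes` pseudo-random
    points; a session ID hashes to a point on the ring and is routed to
    the next node clockwise. The same session ID always resolves to the
    same node while membership is unchanged.
    """

    def __init__(self, nodes, vnodes=64):
        self._ring = []  # sorted list of (hash, node)
        for node in nodes:
            for i in range(vnodes):
                self._ring.append((self._hash(f"{node}#{i}"), node))
        self._ring.sort()
        self._keys = [h for h, _ in self._ring]

    @staticmethod
    def _hash(key: str) -> int:
        # 64-bit slice of SHA-256 keeps ring positions stable across runs.
        return int.from_bytes(hashlib.sha256(key.encode()).digest()[:8], "big")

    def route(self, session_id: str) -> str:
        # Find the first ring point at or after the session's hash,
        # wrapping around to the start of the ring if necessary.
        idx = bisect.bisect(self._keys, self._hash(session_id)) % len(self._ring)
        return self._ring[idx][1]

router = StickyRouter(["gpu-node-a", "gpu-node-b", "gpu-node-c"])
assert router.route("sess-42") == router.route("sess-42")  # sticky
```

A real dispatcher would layer health checks, load-aware overrides, and state sync (e.g. over gRPC) on top of this mapping, but the stickiness guarantee comes from the ring itself.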

Interested in the systems side of AI: how inference actually runs at scale, where the bottlenecks are, and how to push utilization without blowing up memory. Lately spending time in vLLM and SGLang internals, and looking to contribute more.


Stack · Python · C++ · CUDA · PyTorch · SGLang · Kubernetes · AWS · vLLM · JAX · GCP

GitHub · changmoo@usc.edu

Pinned

  1. vllm-project/vllm

     A high-throughput and memory-efficient inference and serving engine for LLMs

     Python · 79.5k stars · 16.6k forks

  2. sgl-project/sglang

     SGLang is a high-performance serving framework for large language models and multimodal models.

     Python · 27.6k stars · 5.8k forks

  3. hao-ai-lab/FastVideo

     A unified inference and post-training framework for accelerated video generation.

     Python · 3.5k stars · 329 forks