Senior Data Scientist at Wolters Kluwer, Kaggle Competitions Expert, Author at JarvisLabs, MSc in Computer Science
Currently focused on deep learning, specifically training Transformer models (encoders, LLMs) and understanding different architectures.
Global
- CIBMTR - Equity in post-HCT Survival Predictions - 22nd / 3,325 (Top 1%) (Silver Medal) (Experiments) (Kaggle) [MAR 2025]
- Stanford RNA 3D Folding - 77th / 1,516 (Top 6%) (Bronze Medal) (Kaggle) [SEP 2025]
- Learning Agency Lab - Automated Essay Scoring 2.0 - 1212th / 2,706 (Top 45%) (Experiments) (Kaggle) [APR 2024]
- HMS - Harmful Brain Activity Classification - 1231st / 1,206 (Top 45%) (Kaggle) [JAN 2024]
- UBC Ovarian Cancer Subtype Classification and Outlier Detection (UBC-OCEAN) - 1118th / 1,326 (Top 85%) (Kaggle) [OCT 2023]
- NeurIPS 2024 - Predict New Medicines with BELKA - 231st / 1,336 (Top 12%) (Kaggle) [APR 2024]
- NeurIPS 2023 - Machine Unlearning - 164th / 1,336 (Top 14%) (Kaggle) [SEP 2023]
- Quantization-Aware Training and Post-Training Quantization using Unsloth and torchao: (Link)
- Speculative Decoding in vLLM: Complete Guide to Faster LLM Inference : (Link)
- Draft model, n-gram, suffix, MLP speculators, and EAGLE techniques
- The Complete Guide to LLM Quantization with vLLM: Benchmarks & Best Practices : (Link)
- AWQ, GPTQ, Marlin-AWQ, GGUF, BnB.
- vLLM Optimization Techniques: 5 Practical Methods to Improve Performance : (Link)
- Prefix caching, KV-cache quantization, CPU offloading, disaggregated prefill/decode, zero-reload sleep mode.
- Scaling LLM Inference: Data, Pipeline & Tensor Parallelism in vLLM : (Link)
- Tensor Parallelism, Data Parallelism, Pipeline Parallelism
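Several of the techniques covered in the vLLM posts above can be combined in a single launch command. This is a sketch only: the model name and GPU count are placeholders, while the flags themselves are standard vLLM CLI options.

```shell
# Sketch only: model name and --tensor-parallel-size value are placeholders.
# --tensor-parallel-size   shard weights across GPUs (tensor parallelism)
# --enable-prefix-caching  reuse KV-cache blocks for shared prompt prefixes
# --kv-cache-dtype fp8     quantize the KV cache to FP8
vllm serve meta-llama/Llama-3.1-8B-Instruct \
  --tensor-parallel-size 2 \
  --enable-prefix-caching \
  --kv-cache-dtype fp8
```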
My Blogs (NeuroBits)
- Understanding Model Memory Calculations (Link)
- RNA Transformer: an encoder modeling RNA sequences to predict 3D structure (Link)
- Optimizing PyTorch Model Training: Balancing Speed and Memory Efficiency (Link)
- RhoFold+: A Revolutionary Framework for RNA 3D Structure Prediction (Link)
- AI Agent Frameworks Quick View (Link)
- SHADE-Arena: Evaluating Sabotage and Monitoring in AI Agents (Link)
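To give a flavor of the model-memory-calculation topic in the blog list above, here is a back-of-the-envelope sketch. The accounting (2-byte weights and gradients plus fp32 Adam states) is the common mixed-precision convention, not figures taken from the post itself, and it ignores activations.

```python
# Rough estimate of GPU memory needed to train a model with Adam in
# mixed precision. Activation memory (batch/sequence dependent) is excluded.

def training_memory_gb(n_params: float, bytes_per_param: int = 2) -> float:
    """Approximate memory for weights + gradients + Adam optimizer states."""
    weights = n_params * bytes_per_param      # fp16/bf16 weights
    grads = n_params * bytes_per_param        # fp16/bf16 gradients
    # Adam typically keeps fp32 master weights plus two fp32 moment buffers
    optimizer = n_params * 4 * 3
    return (weights + grads + optimizer) / 1e9

# A 7B-parameter model comes to roughly 112 GB before activations:
print(training_memory_gb(7e9))  # → 112.0
```

This is why full fine-tuning of even "small" LLMs needs multiple GPUs or offloading, and why quantization and parameter-efficient methods matter.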
- Standing Ovation Award - January 2025 by Wolters Kluwer
- Wolters Kluwer - Code Games 2023 - First Runner-Up
- Experimentation pipeline for image classification; used to perform 50+ experiments in the Kaggle ISIC competition (Link)
- Experimentation pipeline for text classification; used to perform 50+ experiments in the Kaggle Automated Essay Scoring competition (Link)
- Developed a VLM (vision-language model) from scratch in PyTorch (Link)

