Hi there, I'm Jaydev! 👋

Senior Data Scientist at Wolters Kluwer, Kaggle Competitions Expert, Author at JarvisLabs, MSc in Computer Science

Currently spending most of my time on deep learning, specifically training Transformer models (encoders, LLMs) and understanding different architectures.


Competitions

Global

  • CIBMTR - Equity in post-HCT Survival Predictions - 22nd / 3,325 (Top 1%) (Silver Medal) (Experiments) (Kaggle) [MAR 2025]
  • Stanford RNA 3D Folding - 77th / 1,516 (Top 6%) (Bronze Medal) (Kaggle) [SEP 2025]
  • Learning Agency Lab - Automated Essay Scoring 2.0 - 1212th / 2,706 (Top 45%) (Experiments) (Kaggle) [APR 2024]
  • HMS - Harmful Brain Activity Classification - 1231st / 1206 (Top 45%) (Kaggle) [JAN 2024]
  • UBC Ovarian Cancer Subtype Classification and Outlier Detection (UBC-OCEAN) - 1118th / 1,326 (Top 85%) (Kaggle) [OCT 2023]
  • NeurIPS 2024 - Predict New Medicines with BELKA - 231st / 1,336 (Top 12%) (Kaggle) [APR 2024]
  • NeurIPS 2023 - Machine Unlearning - 164th / 1,336 (Top 14%) (Kaggle) [SEP 2023]

LLM Inference

  • Quantization-Aware Training and post-training quantization using Unsloth and torchAO: (Link)
  • Speculative Decoding in vLLM: Complete Guide to Faster LLM Inference: (Link)
    • Draft model, n-gram, suffix, MLP speculators, and EAGLE techniques
  • The Complete Guide to LLM Quantization with vLLM: Benchmarks & Best Practices: (Link)
    • AWQ, GPTQ, Marlin-AWQ, GGUF, BnB
  • vLLM Optimization Techniques: 5 Practical Methods to Improve Performance: (Link)
    • Prefix caching, KV-cache quantization, CPU offloading, disaggregated prefill/decode, zero-reload sleep mode
  • Scaling LLM Inference: Data, Pipeline & Tensor Parallelism in vLLM: (Link)
    • Tensor parallelism, data parallelism, pipeline parallelism
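At the heart of the quantization guides above is a simple mapping: float weights are scaled to low-bit integers and scaled back at compute time. A minimal, illustrative sketch of symmetric per-tensor int8 quantization in plain Python (real libraries such as torchAO and vLLM's AWQ/GPTQ backends add per-channel scales, calibration, and fused kernels):

```python
def quantize_int8(weights):
    """Symmetric per-tensor int8 quantization: w_q = round(w / scale)."""
    scale = max(abs(w) for w in weights) / 127   # map max magnitude to 127
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate floats: w ~ w_q * scale."""
    return [x * scale for x in q]

weights = [0.5, -1.27, 0.01, 1.0]
q, scale = quantize_int8(weights)          # q == [50, -127, 1, 100]
recovered = dequantize(q, scale)
max_err = max(abs(w - r) for w, r in zip(weights, recovered))
print(max_err <= scale / 2)                # round-trip error bounded by scale/2
```

The error bound of half a quantization step is what the benchmarked formats (AWQ, GPTQ, GGUF, BnB) trade off in different ways against speed and memory.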

My Blogs (NeuroBits)

  • Understanding Model Memory Calculations (Link)
  • RNA Transformer: Encoder modeling RNA sequences to predict 3D structure (Link)
  • Optimizing PyTorch Model Training: Balancing Speed and Memory Efficiency (Link)
  • RhoFold+: A Revolutionary Framework for RNA 3D Structure Prediction (Link)
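The model-memory post above covers a back-of-the-envelope calculation that is easy to sketch: training memory is dominated by weights, gradients, and optimizer state. A rough estimator, assuming mixed-precision training with Adam (fp16 weights and gradients, plus an fp32 master copy and two fp32 moment buffers, i.e. about 16 bytes per parameter, activations excluded):

```python
def training_memory_gb(n_params,
                       weight_bytes=2,    # fp16 weights
                       grad_bytes=2,      # fp16 gradients
                       optim_bytes=12):   # fp32 master copy + Adam m and v
    """Estimate steady-state training memory in GB, ignoring activations."""
    total_bytes = n_params * (weight_bytes + grad_bytes + optim_bytes)
    return total_bytes / 1e9

# A 7B-parameter model needs roughly 112 GB before activations --
# which is why optimizer sharding and offloading matter.
print(training_memory_gb(7e9))  # 112.0
```

Inference is far cheaper under the same assumptions: only the weights are needed, so a 7B model in fp16 is about 14 GB plus the KV cache.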

LLM Reasoning

  • LLM Reasoning: the Emergent Capability (Link)
  • GRPO (Group Relative Policy Optimization) (Link)
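GRPO's core idea is to replace a learned value baseline with a group baseline: sample several completions per prompt, score them, and normalize each reward against the group's mean and standard deviation. A minimal sketch of that advantage computation (the reward values are made up for illustration):

```python
from statistics import mean, pstdev

def grpo_advantages(rewards, eps=1e-8):
    """Group-relative advantages: (r - group mean) / group std."""
    mu = mean(rewards)
    sigma = pstdev(rewards)  # population std over the sampled group
    return [(r - mu) / (sigma + eps) for r in rewards]

# Four sampled completions for one prompt, scored by a reward model.
rewards = [1.0, 0.0, 0.5, 0.5]
adv = grpo_advantages(rewards)
print([round(a, 2) for a in adv])  # above-mean completions get positive advantage
```

These per-completion advantages then weight the policy-gradient update in place of a critic's value estimate, which is what makes the method cheaper than PPO.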

AI Agents

  • AI Agent Frameworks Quick View (Link)
  • SHADE-Arena: Evaluating Sabotage and Monitoring in AI Agents (Link)

Achievements

  • Standing Ovation Award - January 2025 by Wolters Kluwer
  • Wolters Kluwer Code Games 2023 - First runner-up

Major developments

  • Experimentation pipeline for image classification, used to run 50+ experiments in the Kaggle ISIC competition (Link)
  • Experimentation pipeline for text classification, used to run 50+ experiments in the Kaggle Automated Essay Scoring competition (Link)
  • Developed a VLM from scratch in PyTorch (Link)
