<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>Shiva Bhattacharjee - Full Stack Developer &amp; Researcher</title>
    <description>Hello there! I am Shiva, a full-stack developer and researcher. I love to build products that make people&apos;s lives easier. Explore my projects, research publications, experience, and journey in software development and AI research.</description>
    <link>https://shiva.codes</link>
    <language>en-us</language>
    <lastBuildDate>Thu, 09 Apr 2026 08:24:18 GMT</lastBuildDate>
    <atom:link href="https://shiva.codes/rss.xml" rel="self" type="application/rss+xml"/>
    <category>Technology</category>
    <category>Software Development</category>
    <category>Web Development</category>
    <category>Research</category>
    <category>Artificial Intelligence</category>
    <generator>Next.js Portfolio RSS Generator</generator>
    
    <item>
      <title>Welcome to My Portfolio</title>
      <description><![CDATA[
        <p>Hello there! I'm Shiva Bhattacharjee, a full-stack developer passionate about creating amazing products that make people's lives easier.</p>
        <br/>
        <p>I specialize in:</p>
        <ul>
          <li>Software Engineering</li>
          <li>Web Development</li>
          <li>Graphic Design</li>
          <li>Problem Solving</li>
          <li>Creative Thinking</li>
        </ul>
        <br/>
        <p>Currently working as Co-Founder and Software Engineer at Navdyut AI Tech and Research Labs Pvt. Ltd., where I lead AI research and develop innovative solutions.</p>
      ]]></description>
      <link>https://shiva.codes</link>
      <guid>https://shiva.codes#about</guid>
      <pubDate>Thu, 09 Apr 2026 08:24:18 GMT</pubDate>
      <category>About</category>
    </item>
    
    <item>
      <title>Image Sonification - Research Project</title>
      <description><![CDATA[
        <p>Converts images to audio and vice versa by mapping pixel colour and position to audio frequencies.</p>
        <br/>
        <p><strong>Tech Stack:</strong> React, TypeScript</p>
        <p><strong>Status:</strong> Active</p>
        <p><strong>Project:</strong> <a href="https://sonification.shiva.codes">https://sonification.shiva.codes</a></p>
      ]]></description>
      <link>https://sonification.shiva.codes</link>
      <guid>https://sonification.shiva.codes</guid>
      <pubDate>Thu, 09 Apr 2026 08:24:18 GMT</pubDate>
      <category>Project</category>
    </item>
    
    <item>
      <title>Software Developer Intern at GITCS</title>
      <description><![CDATA[
        <p><strong>Duration:</strong> Feb 2024 - Sept 2024</p>
        <br/>
        <p><strong>Responsibilities:</strong></p>
        <p>Developed websites and systems for the company's clients and maintained existing websites and systems.</p>
        <br/>
        <p><strong>Technologies Used:</strong> ReactJS, NextJS, Framer Motion, ThreeJS</p>
      ]]></description>
      <link>https://shiva.codes/experience</link>
      <guid>https://shiva.codes/experience#gitcs</guid>
      <pubDate>Thu, 09 Apr 2026 08:24:18 GMT</pubDate>
      <category>Experience</category>
    </item>
    <item>
      <title>Software Engineer at TTIPL</title>
      <description><![CDATA[
        <p><strong>Duration:</strong> Oct 2024 - Jan 2025</p>
        <br/>
        <p><strong>Responsibilities:</strong></p>
        <p>Built internal ERP modules for billing, vendor management, and project tracking used by construction operations. Developed semantic project document search using RAG pipelines with OpenAI embeddings and vector databases for construction drawings, BOQs, and reports. Optimized API performance and database queries, improving internal tool response times and reliability.</p>
        <br/>
        <p><strong>Technologies Used:</strong> ReactJS, NextJS, Tailwindcss, Prisma, Supabase, OpenAI, Vector DB, RAG</p>
      ]]></description>
      <link>https://shiva.codes/experience</link>
      <guid>https://shiva.codes/experience#ttipl</guid>
      <pubDate>Thu, 09 Apr 2026 08:24:18 GMT</pubDate>
      <category>Experience</category>
    </item>
    <item>
      <title>Member of Technical Staff at Navdyut AI</title>
      <description><![CDATA[
        <p><strong>Duration:</strong> Jan 2025 - July 2025</p>
        <br/>
        <p><strong>Responsibilities:</strong></p>
        <p>Built an Assamese chatbot on a 22B Mistral model with RAG pipelines for translation and government applications. Scaled the system to 500+ users and contributed to deployments for public sector use. Project work was featured in regional newspapers.</p>
        <br/>
        <p><strong>Technologies Used:</strong> Mistral, RAG, Langchain, LlamaIndex, Pinecone, NextJS, Tailwindcss, Supabase</p>
      ]]></description>
      <link>https://shiva.codes/experience</link>
      <guid>https://shiva.codes/experience#navdyut-ai</guid>
      <pubDate>Thu, 09 Apr 2026 08:24:18 GMT</pubDate>
      <category>Experience</category>
    </item>
    <item>
      <title>Applied AI &amp; Full Stack Engineer at Bez</title>
      <description><![CDATA[
        <p><strong>Duration:</strong> July 2025 - Present</p>
        <br/>
        <p><strong>Responsibilities:</strong></p>
        <p>Reduced jewelry design turnaround from days to minutes by building AI agent workflows with the Vercel AI SDK, with observability via Langfuse. Built an interactive jewelry design canvas using React and XYFlow, enabling credit-gated editing and real-time agent-driven design iteration. Developed a Redis queue pipeline generating 70+ jewelry design variations in 5 minutes per batch. Built a custom memory system with rolling per-user context to retain design preferences and reduce duplicate generations. Improved reliability and performance across microservices deployed with Docker, Firebase, and GCP.</p>
        <br/>
        <p><strong>Technologies Used:</strong> Vercel AI SDK, React, XYFlow, Redis, Firebase, GCP, Docker, Langfuse, NextJS</p>
      ]]></description>
      <link>https://shiva.codes/experience</link>
      <guid>https://shiva.codes/experience#bez</guid>
      <pubDate>Thu, 09 Apr 2026 08:24:18 GMT</pubDate>
      <category>Experience</category>
    </item>
    
    <item>
      <title>PolySpeech-HS: Multilingual Non-Autoregressive Text-to-Speech Synthesis with Hidden-State Adapters</title>
      <description><![CDATA[
        <p><strong>Category:</strong> Speech Synthesis &amp; Multilingual AI</p>
        <br/>
        <p><strong>Abstract:</strong></p>
        <p>A non-autoregressive multilingual text-to-speech (TTS) synthesis framework designed to address the linguistic diversity and real-time deployment challenges of Indian languages. By deploying a unified encoder-decoder architecture paired with lightweight hidden-state adapters, PolySpeech-HS enables efficient cross-lingual generalization while preserving language-specific prosodic nuances. Achieved state-of-the-art performance with a MOS of 4.30, an MCD of 4.7 dB, and an RTF of 0.13 across six Indian languages.</p>
        <br/>
        <p><strong>Journal:</strong> IEEE Transactions on Audio, Speech and Language Processing</p>
        <p><strong>Year:</strong> 2025</p>
        <p><strong>Collaboration:</strong> Vellore Institute of Technology</p>
        <p><strong>Status:</strong> Under Review</p>
        <br/>
        <p><strong>Technologies/Methods:</strong> TTS, Non-Autoregressive, Hidden-State Adapters, Multilingual AI, Indian Languages, AMO-HSA</p>
      ]]></description>
      <link>https://shiva.codes/research</link>
      <guid>https://shiva.codes/research#polyspeech-hs-multilingual-non-autoregressive-text-to-speech-synthesis-with-hidden-state-adapters</guid>
      <pubDate>Thu, 09 Apr 2026 08:24:18 GMT</pubDate>
      <category>Research</category>
    </item>
    <item>
      <title>A Novel Data-Centric Transformer Fine-Tuning: A Modular Framework for Rapid Domain Adaptation and Deployment</title>
      <description><![CDATA[
        <p><strong>Category:</strong> Large Language Models &amp; Domain Adaptation</p>
        <br/>
        <p><strong>Abstract:</strong></p>
        <p>A data-centric, hardware-light workflow for fine-tuning transformers that sidesteps costly LLM APIs. Automatically scrapes high-signal web content and converts it into Q&A pairs to fine-tune a GPT-2-Medium model (355M parameters) in ~7 minutes on a single RTX-3060. Achieves 67.3% accuracy (+34% over base model) with 1.4s median latency and zero inference cost.</p>
        <br/>
        <p><strong>Journal:</strong> IEEE Transactions on Computational Social Systems</p>
        <p><strong>Year:</strong> 2025</p>
        <p><strong>Collaboration:</strong> Vellore Institute of Technology</p>
        <p><strong>Status:</strong> Under Review</p>
        <br/>
        <p><strong>Technologies/Methods:</strong> GPT-2, LoRA, 8-bit Adam, Domain Adaptation, Next.js, Q&A Generation, Fine-tuning</p>
      ]]></description>
      <link>https://shiva.codes/research</link>
      <guid>https://shiva.codes/research#a-novel-data-centric-transformer-fine-tuning-a-modular-framework-for-rapid-domain-adaptation-and-deployment</guid>
      <pubDate>Thu, 09 Apr 2026 08:24:18 GMT</pubDate>
      <category>Research</category>
    </item>
    <item>
      <title>Fine-Tuning Mistral 22B: The First Large Language Model for Assamese Language Tasks</title>
      <description><![CDATA[
        <p><strong>Category:</strong> Low-Resource Language Processing</p>
        <br/>
        <p><strong>Abstract:</strong></p>
        <p>The first fine-tuned Large Language Model specifically engineered for Assamese, a low-resource Indo-Aryan language spoken by approximately 15 million people. Introduces the AssamText-750K dataset and a custom Unicode mapping system built exclusively for Assamese, making it the first Assamese LLM backed by language-specific Unicode infrastructure. Achieves a 20% average improvement across text generation fluency, sentiment analysis accuracy, and Assamese-to-English translation quality.</p>
        <br/>
        <p><strong>Journal:</strong> IEEE Transactions on Neural Networks and Learning Systems</p>
        <p><strong>Year:</strong> 2025</p>
        <p><strong>Collaboration:</strong> Vellore Institute of Technology</p>
        <p><strong>Status:</strong> Under Review</p>
        <br/>
        <p><strong>Technologies/Methods:</strong> Mistral 22B, LoRA, Unicode Mapping, Assamese NLP, Low-Resource Languages, AssamText-750K</p>
      ]]></description>
      <link>https://shiva.codes/research</link>
      <guid>https://shiva.codes/research#fine-tuning-mistral-22b-the-first-large-language-model-for-assamese-language-tasks</guid>
      <pubDate>Thu, 09 Apr 2026 08:24:18 GMT</pubDate>
      <category>Research</category>
    </item>
    <item>
      <title>Contact and Connect</title>
      <description><![CDATA[
        <p>Interested in collaborating or have questions? I'd love to hear from you!</p>
        <br/>
        <p>You can reach out to me through my portfolio website for any opportunities, collaborations, or just to say hello.</p>
        <br/>
        <p>Let's build something amazing together! 🚀</p>
      ]]></description>
      <link>https://shiva.codes#contact</link>
      <guid>https://shiva.codes#contact</guid>
      <pubDate>Thu, 09 Apr 2026 08:24:18 GMT</pubDate>
      <category>Contact</category>
    </item>
  </channel>
</rss>