Ankita Mungalpara



I’m a researcher and data scientist with hands-on experience in computer vision, deep learning, and generative AI. My current work focuses on advancing agentic AI and multimodal large language models (LLMs). I hold a Master of Science in Data Science from the University of Massachusetts Dartmouth.

With over three years of industry experience in data science and machine learning, I’ve built scalable, real-world AI solutions—from working as an ML Engineer at Tiger Analytics to developing a conversational LLM agent during my Summer 2024 internship at Johnson & Johnson Innovative Medicine. I'm passionate about solving complex problems, pushing the boundaries of AI research, and transforming emerging ideas into impactful, real-world solutions.

GitHub · Medium · Substack · LinkedIn

All Projects


This is a collection of my hands-on projects in agentic AI, generative AI, LLMOps, computer vision, and MLOps. Each project dives into a key concept or framework in AI and ML, often supported by a practical implementation or tutorial.

  1. Multimodal Video RAG Agent
  2. Model Context Protocol (MCP)
  3. Managing Memory, Context, and State in an AI Agent
  4. Document Intelligence: Modern Approaches to Extracting Structured Information
  5. Advanced RAG
  6. Building CLIP From Scratch
  7. Fine-Tune Mistral-7B Model with LoRA: Sentiment Classification
  8. Fine-Tune Mistral-7B Model with QLoRA: Financial Q&A
  9. LLM From Scratch
  10. Generative AI with LangChain
  11. NLP with Hugging Face
  12. Kubernetes: Ingress & FastAPI Model Deployment
  13. YOLOv5 Custom Object Detection
  14. Docker Compose and PyTorch Lightning
  15. Hyperparameter Tuning and Experiment Tracking
  16. Deployment with Gradio
  17. Deployment with LitServe

Certification


Publication


Recent Blogs



Understanding MCP (Model Context Protocol)

Published · May 10, 2025

Agentic AI · Model Context Protocol (MCP) · Generative AI

In today's fast-paced AI era, one of the most difficult tasks for developers is seamlessly connecting large language models (LLMs) to the data sources and tools required...

Read more →
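
To give a flavor of what the MCP post covers, here is a minimal sketch of an MCP tool server built with the official `mcp` Python SDK's FastMCP helper. The server name and the `word_count` tool are illustrative placeholders, not code from the post.

```python
# Minimal MCP tool server sketch using the official `mcp` Python SDK.
# The server name and the example tool are illustrative placeholders.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-tools")  # hypothetical server name

@mcp.tool()
def word_count(text: str) -> int:
    """Count the words in a piece of text."""
    return len(text.split())

if __name__ == "__main__":
    # Serves the tool over stdio so an MCP-capable LLM client can call it.
    mcp.run()
```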

How I Fine-Tuned Mistral-7B Model with LoRA (Low-Rank Adaptation)

Published · May 10, 2025

SFT · PEFT · Post-Training · LLMs · Generative AI

Large Language Models (LLMs) are initially trained on vast, diverse text corpora scraped from the internet. This pre-training phase teaches them statistical...

Read more →
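
As a companion to the LoRA post above, here is a minimal sketch of attaching LoRA adapters to Mistral-7B with Hugging Face's `peft` library. The rank, alpha, and target modules shown are illustrative defaults, not the exact settings used in the post.

```python
# Sketch: wrap a causal LM with LoRA adapters via Hugging Face peft.
# Hyperparameters below are illustrative, not the values from the blog post.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")

config = LoraConfig(
    r=8,                                  # rank of the low-rank update matrices
    lora_alpha=16,                        # scaling factor applied to the update
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```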

Building CLIP (Contrastive Language–Image Pre-training) From Scratch

Published · May 10, 2025

CLIP · Training · Multi-Head Attention · Positional Embedding

Contrastive Language-Image Pre-training (CLIP) was developed by OpenAI and first introduced in the paper “Learning Transferable Visual Models From Natural...

Read more →
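
To accompany the CLIP post, here is a minimal PyTorch sketch of the symmetric contrastive loss at the heart of CLIP, assuming the image and text encoders have already produced one embedding per example in a paired batch.

```python
# Sketch of CLIP's symmetric contrastive loss over a batch of paired embeddings.
# `img_emb` and `txt_emb` are assumed outputs of the two encoders.
import torch
import torch.nn.functional as F

def clip_loss(img_emb: torch.Tensor, txt_emb: torch.Tensor,
              temperature: float = 0.07) -> torch.Tensor:
    # L2-normalize so the dot product is a cosine similarity.
    img_emb = F.normalize(img_emb, dim=-1)
    txt_emb = F.normalize(txt_emb, dim=-1)

    # (batch, batch) similarity matrix; matched pairs lie on the diagonal.
    logits = img_emb @ txt_emb.t() / temperature
    targets = torch.arange(len(img_emb), device=img_emb.device)

    # Cross-entropy in both directions: image-to-text and text-to-image.
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2

# Example with random embeddings of dimension 512:
loss = clip_loss(torch.randn(8, 512), torch.randn(8, 512))
```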

Feel free to connect or explore more on my GitHub or LinkedIn.