Medical Chatbot using Gamma LLMV2 and Comparison Using BERT Models
Keywords:
LLMV2

Abstract
The study introduces a sophisticated medical chatbot that uses vector-based retrieval and Meta's LLaMA 2 model to provide accurate, context-aware recommendations on symptoms, drugs, and diets. The system incorporates Pinecone for vector embeddings, LangChain for interactions, and Python for the application logic. "The GALE Encyclopedia of Medicine," a 637-page medical dataset, is split into text segments for effective semantic search. Flask provides the web infrastructure, and Streamlit powers the chatbot's interactive front end. For each user query, the system creates embeddings, performs a vector search, and uses LLaMA 2 to generate the answer. While handling edge cases and ensuring data quality pose challenges, evaluation emphasizes accuracy, timeliness, and engagement. Planned enhancements include real-time updates and more sophisticated fine-tuning. In terms of language interpretation, generation, and reasoning, Gamma LLM v2 performs better than proprietary models when compared against other systems; it supports fine-tuning on bespoke datasets and lessens the need for external APIs. It outperforms RoBERTa (0.77), MedBERT (0.95), and BERT (0.86) in medical question-answering tasks and is available in 7B, 13B, and 70B parameter variants.
Index Terms— Gamma LLM-V2, Chatbot, BioBERT, MedBERT, LangChain.
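The retrieval pipeline the abstract describes (chunk the source text, embed the chunks, then answer queries via nearest-neighbor search) can be sketched as follows. This is a minimal illustration only: a toy hashing embedder and an in-memory list stand in for the real embedding model and the Pinecone index used in the paper, and all function names here are hypothetical.

```python
import math
from collections import Counter

def chunk_text(text, size=40, overlap=10):
    """Split a document into overlapping word chunks for semantic search."""
    words = text.split()
    step = size - overlap
    return [" ".join(words[i:i + size])
            for i in range(0, max(len(words) - overlap, 1), step)]

def embed(text, dim=256):
    """Toy bag-of-words hashing embedding (stand-in for a real model)."""
    vec = [0.0] * dim
    for word, count in Counter(text.lower().split()).items():
        vec[hash(word) % dim] += count
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def retrieve(query, chunks, k=2):
    """Return the k chunks most similar to the query (cosine similarity)."""
    q = embed(query)
    scored = [(sum(a * b for a, b in zip(q, embed(c))), c) for c in chunks]
    return [c for _, c in sorted(scored, reverse=True)[:k]]
```

In the actual system, the top-k retrieved segments would be passed to LLaMA 2 as context for answer generation; here, `retrieve("what are the symptoms of fever", chunks)` simply returns the most relevant text segments.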
License
Copyright (c) 2026 Authors

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
