---
title: MedCodeMCP
emoji: 💬
colorFrom: yellow
colorTo: purple
sdk: gradio
sdk_version: 5.33.0
app_file: app.py
pinned: false
license: apache-2.0
short_description: An MCP tool for symptom-to-ICD diagnosis mapping.
tags:
- mcp-server-track
- medical-ai
- diagnostic-assistant
- icd10
- speech-recognition
- multimodal
- agentic-ai
- llamaindex-integration
- gradio-app
- sambanova-cloud
- fast-inference
- custom-components
- healthcare
- reusability
- ui-polish
---
A voice-enabled medical assistant that takes patient audio complaints, engages in follow-up questions, and returns structured ICD-10 diagnosis suggestions via an MCP endpoint.
# Features
- **Automatic Speech Recognition (ASR)**: Transcribes real-time patient audio using [Gradio](https://www.gradio.app/guides/real-time-speech-recognition).
- **Interactive Q&A Agent**: The LLM dynamically asks clarifying questions based on candidate ICD codes until it can propose a diagnosis with high confidence.
- **Multi-backend LLM**: Switches between OpenAI GPT, Mistral (via HF), or a local transformers model using environment flags.
- **ICD-10 Mapping**: Uses LlamaIndex for vector retrieval of probable ICD-10 codes with confidence scores.
- **MCP-Server Ready**: Exposes a `/mcp` REST endpoint for seamless agent integration.
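To illustrate the mapping idea, here is a toy keyword-overlap scorer. It is a stand-in for the app's actual LlamaIndex vector retrieval; the codes, descriptions, and function name below are illustrative only:

```python
# Toy illustration of symptom-to-ICD ranking. The real app uses
# LlamaIndex vector retrieval over the full ICD-10-CM dataset; this
# stand-in ranks a few hard-coded codes by keyword overlap.

TOY_ICD_INDEX = {
    "J45.909": "unspecified asthma wheezing shortness of breath cough",
    "I10": "essential primary hypertension high blood pressure",
    "M54.50": "low back pain lumbago",
}

def rank_codes(complaint: str, top_k: int = 2):
    """Return (code, score) pairs ordered by keyword overlap with the complaint."""
    words = set(complaint.lower().split())
    scored = []
    for code, desc in TOY_ICD_INDEX.items():
        overlap = len(words & set(desc.split()))
        score = overlap / max(len(words), 1)  # crude confidence in [0, 1]
        scored.append((code, score))
    scored.sort(key=lambda cs: cs[1], reverse=True)
    return scored[:top_k]

print(rank_codes("shortness of breath and wheezing"))
```

A real embedding-based retriever replaces the word overlap with cosine similarity between dense vectors, but the ranked `(code, confidence)` output shape is the same.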
# Getting Started
My submission for this hackathon is posted at the following Hugging Face Space:
https://huggingface.co/spaces/Agents-MCP-Hackathon/MedCodeMCP
Note that I am also backing up my code to my personal space in case the hackathon space is ever removed:
https://huggingface.co/spaces/gpaasch/Grahams_Gradio_Agents_MCP_Hackathon_2025_Submission
## Clone & Install
```bash
git clone https://huggingface.co/spaces/Agents-MCP-Hackathon/MedCodeMCP
cd MedCodeMCP
python3 -m venv .venv && source .venv/bin/activate
pip install -r requirements.txt
```
## Environment Variables
| Name | Description | Default |
| -------------------------- | --------------------------------------------------------- | ---------------------- |
| `OPENAI_API_KEY` | OpenAI API key for GPT calls | *required* |
| `HUGGINGFACEHUB_API_TOKEN` | HF token for Mistral/inference models | *required for Mistral* |
| `USE_LOCAL_GPU` | Set to `1` to use a local transformers model (no credits) | `0` |
| `LOCAL_MODEL` | Path or HF ID of local model (e.g. `distilgpt2`) | `gpt2` |
| `USE_MISTRAL` | Set to `1` to use Mistral via HF instead of OpenAI | `0` |
| `MISTRAL_MODEL` | HF ID for Mistral model (`mistral-small/medium/large`) | `mistral-large` |
| `MISTRAL_TEMPERATURE` | Sampling temperature for Mistral | `0.7` |
| `MISTRAL_MAX_INPUT` | Max tokens for input prompt | `4096` |
| `MISTRAL_NUM_OUTPUT` | Max tokens to generate | `512` |
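A minimal sketch of how these flags could select a backend (illustrative; the actual selection logic lives in `utils/llama_index_utils.py` and may differ):

```python
import os

def pick_backend() -> str:
    """Choose an LLM backend from the environment flags in the table above."""
    if os.getenv("USE_LOCAL_GPU", "0") == "1":
        # Local transformers model: no API credits consumed.
        return f"local:{os.getenv('LOCAL_MODEL', 'gpt2')}"
    if os.getenv("USE_MISTRAL", "0") == "1":
        return f"mistral:{os.getenv('MISTRAL_MODEL', 'mistral-large')}"
    # Default backend when no flag is set.
    return "openai:gpt"

os.environ["USE_MISTRAL"] = "1"
print(pick_backend())
```

Note that `USE_LOCAL_GPU` takes precedence here, so unset it before enabling another backend.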
## Launch Locally
```bash
# Default (OpenAI)
python app.py
# Mistral backend
export USE_MISTRAL=1
export HUGGINGFACEHUB_API_TOKEN="hf_..."
python app.py
# Local GPU (no credits)
export USE_LOCAL_GPU=1
export LOCAL_MODEL="./models/distilgpt2"
python app.py
```
Open [http://localhost:7860](http://localhost:7860) to interact with the app.
## MCP API Usage
```bash
curl -X POST http://localhost:7860/mcp \
-H "Content-Type: application/json" \
-d '{"tool":"transcribe_and_respond","input":{"audio":"<base64_audio>","history":[]}}'
```
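The same call can be made from Python. The payload shape below mirrors the curl example; the helper name is mine, and the actual send is commented out since it requires the app running locally:

```python
import base64
import json

def build_mcp_payload(audio_bytes: bytes, history: list) -> dict:
    """Build the JSON body used by the curl example above."""
    return {
        "tool": "transcribe_and_respond",
        "input": {
            # The endpoint expects base64-encoded audio, per the curl example.
            "audio": base64.b64encode(audio_bytes).decode("ascii"),
            "history": history,
        },
    }

payload = build_mcp_payload(b"\x00\x01fake-wav-bytes", [])
print(json.dumps(payload)[:80])

# To actually send it (requires the app running on localhost:7860):
# import urllib.request
# req = urllib.request.Request(
#     "http://localhost:7860/mcp",
#     data=json.dumps(payload).encode(),
#     headers={"Content-Type": "application/json"},
# )
# print(urllib.request.urlopen(req).read())
```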
## Project Structure
```
├── app.py # HF entrypoint
├── src/app.py # Core Gradio & agent logic
├── utils/llama_index_utils.py # LLM predictor & indexing utils
├── data/icd10cm_tabular_2025/ # ICD-10 dataset
├── requirements.txt # Dependencies
└── README.md # This file
```
# Hackathon Timeline
Here are the key dates for the Gradio Agents & MCP Hackathon:
* **May 20 – 26, 2025**: Pre-Hackathon announcements period.
* **June 2 – 10, 2025**: Official hackathon window (sign-ups remain open).
* **June 3, 2025 — 9 AM PDT / 4 PM UTC**: Live kickoff YouTube event.
* **June 4 – 5, 2025**: Gradio Office Hours with MCP Support, MistralAI, LlamaIndex, Custom Components team, and Sambanova.
* **June 10, 2025 — 11:59 PM UTC**: Final submission deadline.
* **June 11 – 16, 2025**: Judging period.
* **June 17, 2025**: Winners announced.
# Key Players
## Sponsors
* **Modal Labs**: \$250 GPU/CPU credits to every participant ([modal.com](https://modal.com)).
* **Hugging Face**: \$25 API credits to every participant ([huggingface.co](https://huggingface.co)).
* **Nebius**: \$25 API credits to first 3,300 participants ([nebius.com](https://nebius.com)).
* **Anthropic**: \$25 API credits to first 1,000 participants ([anthropic.com](https://www.anthropic.com)).
* **OpenAI**: \$25 API credits to first 1,000 participants ([openai.com](https://openai.com)).
* **Hyperbolic Labs**: \$15 API credits to first 1,000 participants ([hyperbolic.xyz](https://hyperbolic.xyz)).
* **MistralAI**: \$25 API credits to first 500 participants ([mistral.ai](https://mistral.ai)).
* **Sambanova.AI**: \$25 API credits to first 250 participants ([sambanova.ai](https://sambanova.ai)). ([huggingface.co](https://huggingface.co/Agents-MCP-Hackathon))
## Panel of Judges
Judging will be conducted by representatives from sponsor partners and the Hugging Face community team, including Modal Labs, MistralAI, LlamaIndex, Sambanova.AI, and Hugging Face. To be judged, your project must live in the [hackathon organization](https://huggingface.co/Agents-MCP-Hackathon) on Hugging Face, not just in a personal space: click **Join organization**, then **New** to create a Space under the organization.
## Office Hours Hosts
* **Abubakar Abid** (MCP Support) — [@abidlabs](https://huggingface.co/abidlabs)
* **MistralAI Office Hours** — [Watch on YouTube](https://www.youtube.com/watch?v=TkyeUckXc-0)
* **LlamaIndex Office Hours** — [Watch on YouTube](https://www.youtube.com/watch?v=Ac1sh8MTQ2w)
* **Custom Components Office Hours** — [Watch on YouTube](https://www.youtube.com/watch?v=DHskahJ2e-c)
* **Sambanova Office Hours** — [Watch on YouTube](https://www.youtube.com/watch?v=h82Z7qcjgnU)
## Primary Organizers
* **Yuvraj Sharma (Yuvi)** (@yvrjsharma) — Machine Learning Engineer & Developer Advocate, Gradio Team at Hugging Face
* **Abubakar Abid** (@abidlabs) — Developer Advocate & MCP Support Lead at Hugging Face
* **Gradio Team at Hugging Face** — Core organizing team providing platform infrastructure, logistics, and community coordination
# Resources
* **Hackathon Org & Registration**: [Agents-MCP-Hackathon](https://huggingface.co/Agents-MCP-Hackathon)
* **Discord**: [discord.gg/agents-mcp-hackathon](https://discord.gg/agents-mcp-hackathon)
* **Slides from Kickoff**: [PDF](https://huggingface.co/spaces/Agents-MCP-Hackathon/README/blob/main/Gradio%20x%20Agents%20x%20MCP%20Hackathon.pdf)
* **Code of Conduct**: [Contributor Covenant](https://huggingface.co/code-of-conduct)
* **Submission Guidelines**: See “Submission Guidelines” on the hackathon page
* **MCP Guide**: [How to Build an MCP Server](https://huggingface.co/blog/gradio-mcp)
* **Gradio Docs**: [https://www.gradio.app/docs](https://www.gradio.app/docs)
* **LlamaIndex Docs**: [https://llamaindex.ai/docs](https://llamaindex.ai/docs)
* **Mistral Model Hub**: [https://huggingface.co/mistral-ai/mistral-small](https://huggingface.co/mistral-ai/mistral-small)
## Free Credits!
**Modal Labs Compute Credits** (\$250 per participant)
Monitor your GPU/CPU credit usage by logging into your Modal account and navigating to **Dashboard → Billing**:
[https://modal.com/dashboard](https://modal.com/dashboard) ([huggingface.co][1], [modal.com][2])
**Hugging Face API Credits** (\$25 per participant)
View your remaining credits and invoices on the Hugging Face billing dashboard:
[https://huggingface.co/settings/billing](https://huggingface.co/settings/billing) ([huggingface.co][1], [huggingface.co][3])
**Nebius AI Cloud Credits** (\$25 to first 3,300 participants)
Check your Nebius “Grants and promocodes” balance and detailed billing reports at:
[https://nebius.com/services/billing](https://nebius.com/services/billing) ([huggingface.co][1], [nebius.com][4])
**Anthropic Claude API Credits** (\$25 to first 1,000 participants)
Track your Claude usage and remaining credits in the Anthropic Console under **Settings → Billing**:
[https://console.anthropic.com/settings/billing](https://console.anthropic.com/settings/billing) ([huggingface.co][1])
**OpenAI API Credits** (\$25 to first 1,000 participants)
Monitor your API calls, token usage, and spend on the OpenAI Usage dashboard:
[https://platform.openai.com/account/usage](https://platform.openai.com/account/usage) ([huggingface.co][1], [platform.openai.com][5])
**Hyperbolic Labs API Credits** (\$15 to first 1,000 participants)
After logging in at the Hyperbolic AI Dashboard, go to **Settings → Billing** to view your credit balance and transaction history:
[https://app.hyperbolic.xyz](https://app.hyperbolic.xyz) ([huggingface.co][1], [docs.hyperbolic.xyz][6], [hyperbolic.xyz][7])
**Mistral AI API Credits** (\$25 to first 500 participants)
Sign in at the Mistral Console and navigate to **Workspace → Billing** to activate and monitor your credits:
[https://console.mistral.ai](https://console.mistral.ai) ([huggingface.co][1], [docs.mistral.ai][8])
**SambaNova AI Cloud Credits** (\$25 to first 250 participants)
Log in to SambaNova Cloud and check your **Billing & Usage** in the plans section:
[https://cloud.sambanova.ai/plans/billing](https://cloud.sambanova.ai/plans/billing) ([huggingface.co][1], [cloud.sambanova.ai][9])
[1]: https://huggingface.co/Agents-MCP-Hackathon "Agents-MCP-Hackathon (Agents-MCP-Hackathon)"
[2]: https://modal.com/ "Modal: High-performance AI infrastructure"
[3]: https://huggingface.co/docs/hub/billing "Billing - Hugging Face"
[4]: https://nebius.com/services/billing "Billing - Nebius"
[5]: https://platform.openai.com/account/usage "Account Usage - OpenAI Platform"
[6]: https://docs.hyperbolic.xyz/docs/getting-started "Hyperbolic API"
[7]: https://hyperbolic.xyz/blog/how-to-set-up-your-account-on-hyperbolic "How to Set Up Your Account on Hyperbolic"
[8]: https://docs.mistral.ai/getting-started/quickstart/ "Quickstart | Mistral AI Large Language Models"
[9]: https://cloud.sambanova.ai/plans/billing "Billing - SambaNova Cloud"
# About the Author
**Graham Paasch** is an AI realist passionate about the coming AI revolution.
* LinkedIn: [https://www.linkedin.com/in/grahampaasch/](https://www.linkedin.com/in/grahampaasch/)
* YouTube: [https://www.youtube.com/channel/UCg3oUjrSYcqsL9rGk1g\_lPQ](https://www.youtube.com/channel/UCg3oUjrSYcqsL9rGk1g_lPQ)
Graham is currently looking for work. Inspired by Leopold Aschenbrenner’s “AI Situational Awareness” ([https://situational-awareness.ai/](https://situational-awareness.ai/)), he believes AI will become a multi-trillion-dollar industry over the next decade; what we’re seeing now is the equivalent of ARPANET in the early days of the internet. He’s committed to aligning his work with this vision to stay at the forefront of the AI revolution.