I track everything. Blood markers quarterly. Sleep nightly. Glucose continuously. Heart rate variability. Body composition. Supplements. Workouts. I have more health data than my doctor knows what to do with.
Yet every week, I'd waste 20 minutes taking screenshots - Oura sleep scores, Whoop recovery metrics, CGM glucose spikes, blood test PDFs - just to upload them into ChatGPT and ask "What could be the reason for lower strength?" or "Give me ideas for workouts for the week." There had to be a better way.
Three weeks ago, I found out why this mattered. I threw out my back doing a warm-up squat. Not a max lift, not showing off - just rep 5 of a light set. As I lay on the floor, unable to move, I realized my body had been sending warning signals for weeks that I'd been missing despite all that tracking.
The next morning, still in bed and in pain, I asked Claude to analyze six months of my health data. But this time was different - no screenshots needed. Claude pulled everything directly: blood tests, sleep patterns, glucose trends, supplements, the works. What it found in 5 minutes would have taken me hours to piece together manually. More importantly, it found connections I wouldn’t have found.
My ferritin levels had been low for months, and my deep sleep was fragmenting - subtle changes that explained why a routine warm-up became an injury.
This is the story of how I built that AI health coach using MCP - and how you can build one too: a comprehensive, data-driven health coach that gives advice based on up-to-date biometric, diagnostic, and lifestyle data, and lets you interactively go deeper into different topics.
For the past year, I'd been using ChatGPT as my health advisor with a manual workflow that was... let's call it "functional but frustrating." Every consultation required uploading screenshots from Oura (sleep), Whoop (recovery), Apple Health (activity), Freestyle Libre (glucose), and PDF exports of blood tests. The AI gave good advice when it had the context, but getting that context into ChatGPT was cumbersome.
I had two options for building something like this for myself:
Build an AI-based intelligent coach inside my tracker app. To do this, I would have to bring Oura, Whoop, Apple Health, and CGM data into my tracker, similar to how I'm bringing in weight data from Withings. It's totally doable and makes the tracker a powerful system of record. It's also a big undertaking, since it needs not only all these integrations but also a full chat capability like ChatGPT has. That's not just LLM API calls; there's context management, and short- and long-term memory.
Use an existing AI chat tool (like Claude Desktop or ChatGPT), bring all these data sources into that tool, and do the reasoning over the integrated data there. This would be a much easier way to build a simple tool for myself, and MCP would be the way to do it. ChatGPT doesn't have enough connectivity options today, but Claude does.
I chose the second option. Partly because it was a great way for me to learn MCP. In this post I talk about my learnings, the implementation, and how it all comes together.
MCP (Model Context Protocol) is an open protocol that lets AI models interact with external data sources and services through standardized interfaces. The architecture consists of two primary components, servers and clients, which can be deployed in different models:
MCP Servers: These expose APIs and data sources to AI models. They handle authentication, data formatting, and define capabilities that models can access. Servers package data sources or tools into standardized interfaces that AI models can understand and utilize.
MCP Clients: These are integrated with AI systems like Claude and interpret the capabilities exposed by servers. The client side allows the model to discover available services, request specific information, and take actions when needed.
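Under the hood, the client and server speak JSON-RPC. As a rough sketch (the `tools/list` and `tools/call` method names come from the MCP spec; the tool name and schema here are my own placeholders), capability discovery and a tool invocation look something like this:

```json
// Client asks the server what it can do
{ "jsonrpc": "2.0", "id": 1, "method": "tools/list" }

// Server responds with its tool catalog (abbreviated)
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "tools": [
      {
        "name": "get_blood_tests",
        "description": "Fetch recent blood test results",
        "inputSchema": {
          "type": "object",
          "properties": { "since": { "type": "string" } }
        }
      }
    ]
  }
}

// Client invokes a tool on the model's behalf
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": { "name": "get_blood_tests", "arguments": { "since": "2025-01-01" } }
}
```

This discovery step is what lets Claude figure out at connect time which tools a server offers, without anything being hardcoded on the client side.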
Local Deployment: In this model, MCP servers run directly on the user's device. This approach offers significant privacy advantages since sensitive data (like health metrics) never leaves the device. For me, local deployment became the fallback once remote deployment with OAuth 2.1 ran into trouble.
Remote Deployment: Here, MCP servers run in the cloud and are accessed over the internet. While this allows for more complex processing and integration with web services, it can present authentication challenges - I hit exactly this with OAuth 2.1 when attempting remote deployment.
The choice between local and remote depends on factors like privacy requirements, complexity of data processing, and authentication needs. For my health coaching setup, I ultimately used a hybrid approach: custom-built local MCP servers for some data sources (like CGM) combined with pre-existing remote MCP servers for others (Oura and Whoop).
While both MCP and traditional APIs enable software systems to communicate, they differ in several key ways:
Standardized Integration: MCP provides a uniform protocol for AI models to interact with various data sources, whereas traditional APIs often require custom integration code for each service.
AI-Native Design: MCP is specifically designed for AI consumption, with structured data formats that models can easily process and reason about. Traditional APIs are typically designed for developer consumption.
Capability Discovery: MCP allows AI models to dynamically discover available capabilities, while traditional APIs require developers to hardcode knowledge of endpoints and parameters.
Context Preservation: MCP maintains context across interactions, allowing for more coherent conversations about the data, unlike traditional APIs which often handle isolated requests.
Deployment Flexibility: MCP offers both local and remote deployment models, providing options for privacy-sensitive applications that traditional APIs may not address.
With this, let’s get into details of what I built.
The first thing I built was an MCP server for my own health tracker app, Wellavy. To do that, I first had to build REST APIs, then I built a local MCP server that wrapped those APIs.
The local MCP server was surprisingly straightforward to build. I used the official MCP SDK with TypeScript and created a simple wrapper around my Wellavy REST APIs. The whole thing took about 2 hours with Claude Code doing most of the heavy lifting.
The server exposes 14 tools to Claude Desktop - things like get_blood_tests for pulling my lab results, get_weight for weight tracking data, and get_intervention_list for seeing all my medications and supplements. Initially I had over 20 tools, but after using it for a week, I realized that was overkill. Consolidating down to 14 made the experience much cleaner.
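I haven't walked through the server code here, but the shape of each tool is simple: a thin wrapper that calls a Wellavy REST endpoint and returns the JSON as MCP text content. Here's a hedged sketch - the endpoint path, response shape, and function names are illustrative placeholders, not the real Wellavy API, and with the official MCP SDK you'd register the handler on the server rather than call it directly:

```typescript
// Illustrative sketch only. With the official MCP SDK you'd register this
// handler when defining the tool; it's shown as a plain function here so
// the logic is clear and testable without the SDK or a network.

type ToolResult = { content: { type: "text"; text: string }[] };

// Wrap an arbitrary API payload in the text-content shape MCP tools return.
function toToolResult(payload: unknown): ToolResult {
  return { content: [{ type: "text", text: JSON.stringify(payload, null, 2) }] };
}

// One of the 14 tools: fetch lab results from the REST API and hand the
// JSON straight to the model. `fetchJson` is injected (in the real server
// it would be a fetch() call with the API key header) so this stays testable.
async function getBloodTests(
  fetchJson: (path: string) => Promise<unknown>,
  since?: string
): Promise<ToolResult> {
  const path = since
    ? `/blood-tests?since=${encodeURIComponent(since)}`
    : "/blood-tests";
  return toToolResult(await fetchJson(path));
}
```

All 14 tools follow this pattern; the only real work is deciding what to expose and how to describe each tool so Claude picks the right one.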
To get it working with Claude Desktop, I added this to my config file:
// ~/.claude_desktop_config.json
{
  "mcpServers": {
    "wellavy": {
      "command": "node",
      "args": ["/path/to/wellavy-mcp/dist/index.js"],
      "env": {
        "WELLAVY_API_KEY": "wlv_sk_xxx",
        "WELLAVY_BASE_URL": "https://wellavy.com/api/v1"
      }
    }
  }
}
That's it. Claude Desktop restarts, connects to my local MCP server, and suddenly has access to all my health data. No screenshots needed.
The beauty of this approach is privacy - my health data never leaves my machine except when explicitly fetching from my own API. The MCP server just acts as a translator between Claude and Wellavy.
After getting the local version working, I wanted to build a remote MCP that my wife (and eventually other users) could use without needing to run code locally. Just add a URL to Claude Desktop and go.
The challenge was authentication. Wellavy uses Google OAuth through Supabase, but MCP's authentication story in August 2025 is... let's say "evolving." After hours of debugging with no error messages showing up anywhere - Claude Desktop just silently failed to connect, and I filed a bug with Anthropic - I pivoted to a simpler approach.
Based on my research, to get OAuth working I'd need to use something like Cloudflare for the MCP endpoint and build a token management system. That was too much work for what I was trying to accomplish.
Instead, I built an HTTP-based MCP server that accepts API keys as query parameters. Not the OAuth flow I wanted, but it works:
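The key handling on the server side is just query-string parsing. A minimal sketch - the `wlv_sk_` prefix check and the `Authorization: Bearer` header fallback are my assumptions about a sensible scheme, not necessarily what the real server does:

```typescript
// Pull the API key out of an incoming MCP request. Accepts ?api_key=... in
// the URL and falls back to an Authorization: Bearer header. The "wlv_sk_"
// prefix check is an illustrative sanity check, not real validation.
function extractApiKey(requestUrl: string, authHeader?: string): string | null {
  // Relative request paths need a base URL to parse; the host is a dummy.
  const url = new URL(requestUrl, "https://mcp.example.com");
  const fromQuery = url.searchParams.get("api_key");
  const fromHeader = authHeader?.startsWith("Bearer ")
    ? authHeader.slice("Bearer ".length)
    : null;
  const key = fromQuery ?? fromHeader;
  return key && key.startsWith("wlv_sk_") ? key : null;
}
```

One caveat worth knowing: keys in query strings tend to show up in server and proxy logs, which is part of the trade-off with this approach versus proper OAuth.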
// Claude Desktop config for remote MCP
{
  "mcpServers": {
    "wellavy-remote": {
      "url": "https://mcp.wellavy.co/mcp?api_key=wlv_sk_xxx"
    }
  }
}
The server runs on Railway and exposes the same 14 tools as the local version. Users just need their API key - no Node.js, no terminal, no npm install. They generate an API key in Wellavy, add the config to Claude Desktop, and they're connected.
The trade-off is that data now flows through Railway's servers, though it's still encrypted in transit and the MCP server doesn't store anything. For my use case, where convenience matters more than absolute privacy, it's the right balance.
After building the Wellavy MCP server, I wanted MCP servers for Oura, Whoop, and Freestyle Libre CGMs. I used these open source MCP servers:
Oura: oura-mcp by elizabethtrykin
Whoop: whoop-mcp by Christopher Vidic (ctvidic)
Freestyle Libre: I vibe-coded my own using Claude Code and open sourced it here. Don't judge me on the code quality!
These all work similarly - they connect to the respective APIs and expose the data as MCP tools. The setup for each is basically the same: add the server config to Claude Desktop, provide your API credentials, and you're connected.
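For reference, a combined Claude Desktop config with all four sources looks roughly like this. Treat the commands, package names, and env variable names for the Oura, Whoop, and LibreLink entries as placeholders - check each project's README for the exact values before copying:

```json
{
  "mcpServers": {
    "wellavy-remote": {
      "url": "https://mcp.wellavy.co/mcp?api_key=wlv_sk_xxx"
    },
    "oura": {
      "command": "npx",
      "args": ["-y", "oura-mcp"],
      "env": { "OURA_API_TOKEN": "your-token" }
    },
    "whoop": {
      "command": "npx",
      "args": ["-y", "whoop-mcp"],
      "env": { "WHOOP_CLIENT_ID": "xxx", "WHOOP_CLIENT_SECRET": "xxx" }
    },
    "librelink": {
      "command": "node",
      "args": ["/path/to/librelink-mcp/dist/index.js"],
      "env": { "LIBRELINK_EMAIL": "you@example.com", "LIBRELINK_PASSWORD": "xxx" }
    }
  }
}
```

Once all the entries are in place, a single restart of Claude Desktop picks them up, and every query can draw on all four sources at once.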
A few weeks ago on a Sunday morning, I was working out at home. My workout of the day was back squats, back exercises, farmer carries, and a bunch of accessory movements. My goal is not to lift the heaviest; I want to be functionally capable, move well, stay strong, and, most importantly, avoid injury. I generally start with a good amount of warm-up movements, and my first few sets of any workout gradually increase in intensity.
This Sunday was no different, at least per my plan. But things didn't go to plan. I was in my third set of squats, still at a relatively low weight, just warming up. While coming up from the fifth rep of that set, I felt something snap in my lower back. I've had lower back issues for several years now, going back to an injury in my early 20s, so I'm quite careful about not pushing too hard and moving within my limits - every time a flare-up happens, it takes me out for at least a couple of weeks, and I don't want that. But this time something went wrong, and I don't know what. I left the bar and immediately lay down right there to let my body calm down and get a feel for what was going on. I couldn't tell in the moment; there wasn't any immediate pain, but I didn't feel great. I knew something was off, so I stopped the workout and did some light stretches after a bit. A little while later, I lay down to take it easy. When I tried to get up from bed, I could barely move. I was in excruciating pain, way worse than anything I had experienced in years. It was very concerning, and I got quite worried. I put on cold packs and took painkillers. The way it felt in the moment, I knew this was going to take me out for multiple weeks at least. The next morning was even worse: a lot of pain, a lot of stiffness, and I couldn't get out of bed.
It's puzzling that this happened during a warm-up set. I wasn't pushing it, it wasn't a lot of weight, and it is a movement that I do fairly regularly. I was quite surprised, but reflecting back, something was not right, and I think my body was telling me that I wasn't quite feeling as energetic and strong for several weeks. It was subtle. I didn't quite know what was going on, but something was not 100%.
Only recently had I gotten this whole Claude + MCP health coach setup working, so I thought I'd take it for a spin. Here's what I asked it:

"I have been feeling low energy and a little weakness from a strength perspective for a couple of months. Most recently, I injured my back just doing a warm-up set of squats. This has never happened before. Can you analyze my biomarkers (blood, sleep, CGM) from the beginning of the year, plus any meds/supplements, and help me figure out what might be going on? Also recommend any potential changes I could consider to solve the issue."

Claude took my query and used the MCP servers for Wellavy and Freestyle Libre to pull all the relevant data. Here's a screenshot of Claude Desktop in action:
It crunched through several months of my blood markers, all my supplements and medications, lifestyle interventions, sleep data, CGM stats, and my weight and body composition. It also pulled information about my health goals and family history, then reasoned through all of it. The whole process took 3-5 minutes.
The stuff it found was spot on. Here's a screenshot:
And then it went on to prioritize my risk factors and reason about what might have caused the injury, also giving me the top things to focus on:
I took all of this and sent it to my doctors. They looked through it and said the analysis was spot-on and that they generally agreed with the recommended plan. It made the consult far more productive for both sides, because the doctors could review much more data and analysis and quickly help me think through what I needed to do.
MCP transformed how I interact with my health data. Instead of manually uploading screenshots from multiple apps into ChatGPT, I now have Claude Desktop connected directly to all my health sources - blood tests, sleep data, CGM readings, supplements, and more. One query pulls everything together for comprehensive analysis.
The real test came when I injured my back during a routine workout. Claude analyzed months of biomarkers, identified concerning trends (dropping testosterone, rising glucose variability, poor sleep), and provided actionable recommendations that my doctors validated as "spot-on." What would have taken hours of manual data gathering and analysis happened in minutes.
The technical implementation took some digging into the details. Yes, there were authentication headaches, and I had to make compromises (API keys instead of OAuth), but the end result works.
If you're drowning in health data from multiple sources and want AI-powered insights without the manual overhead, MCP is worth exploring. It's not perfect - the ecosystem is young and rough around the edges - but for those willing to tinker, it's a game-changer for personal health optimization.
I open sourced part of my code (LibreLink MCP), and used other people's open source projects (Oura MCP, Whoop MCP). If you build something similar or improve on what I've done, I'd love to hear about it.



