My Experience Building AuriCare at Boston GrandHack 2026 (MIT)

I had a great time participating in Boston GrandHack 2026 in the “Trauma and Rehabilitation” track. The event was organized by MIT Hacking Medicine and took place at the MIT Media Lab from Friday, March 13 to Sunday, March 15, 2026. We had 48 hours to pitch ideas, form teams of 4 to 6 people on the spot, and build solutions and demos, before finally giving a 3-minute pitch in front of the judges.

For the winning teams and the global participation coverage, please visit https://hackingmedicine.mit.edu/events/grandhack-2026-recap.

We had to select one of three tracks: (i) Trauma and Rehabilitation, (ii) Virtual Diagnostic Interfaces, and (iii) Portable Devices. For more details, please visit https://hackingmedicine.mit.edu/events/grandhack-2026.

Our Team: Building AuriCare, a “Holistic Pain Management” Solution

We presented a solution called “AuriCare: AI-powered Pain Management”, targeting patients in chronic pain. We had an orthopedic doctor on our team who guided us on how clinicians view pain scores in Epic. The problem we tackled was the subjective nature of self-reported pain scores and how pain thresholds vary across patients.

I am grateful to my team members for selecting the clinical domain of pain management, as it strongly aligns with my current research at Stanford Medicine on perioperative pain management. To learn more about my research portfolio, please visit https://roysoumya.github.io/

We chose Parkinson’s disease as a case study, since these patients often lack fine motor skills. I chose Parkinson’s Disease (PD) because I had previously worked with the clinical and genomic data of PD patients during my time at L3S Research Center, Germany. There, I worked for close to two years, in collaboration with Hannover Medical School, Germany, to develop an interpretable, robust decision-tree-based patient subtyping method and to perform external validation on a different cohort. If you are interested, please read through this recently published Frontiers in AI journal paper [Link to slides, link to paper].

Since we had a hardware engineer on the team, we designed an over-the-ear wearable that acts as a continuous monitoring device; patients interact with it daily or weekly to record their lived experience of pain between clinic visits.

I designed the holistic AI calibration engine, which extracts salient information from patients’ daily lived experience and calibrates their self-reported pain scores against that context. We showed a demo integrated with the Epic dashboard, along with a 3-D Interactive Pain Map, an explorer tool for navigating the unstructured lived-experience data.
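
To make the calibration idea concrete, here is a minimal sketch of the kind of adjustment such an engine performs. Everything in it (the feature names, the fixed weights, the baseline anchor) is an illustrative assumption for this post, not AuriCare’s actual implementation:

    # Minimal, hypothetical sketch of context-based pain-score calibration.
    # Feature names and weights are illustrative assumptions, not AuriCare's.
    from dataclasses import dataclass

    @dataclass
    class PainContext:
        """Context features extracted (e.g., by an NLP pipeline) from recordings."""
        sleep_disruption: float    # 0-1: how much pain disturbed sleep
        activity_avoidance: float  # 0-1: activities skipped because of pain
        medication_relief: float   # 0-1: reported relief from current medication

    def calibrate_pain_score(self_reported: float, ctx: PainContext,
                             patient_baseline: float = 5.0) -> float:
        """Return a context-calibrated score on the same 0-10 scale."""
        # Evidence of functional impact, rescaled to the 0-10 pain scale.
        functional_signal = 10 * (0.5 * ctx.sleep_disruption
                                  + 0.5 * ctx.activity_avoidance)
        # Blend the patient's report with that evidence and their baseline;
        # in practice these weights would be learned per patient over time.
        calibrated = (0.6 * self_reported
                      + 0.3 * functional_signal
                      + 0.1 * patient_baseline)
        # Reported medication relief argues for a lower effective score.
        calibrated -= 2.0 * ctx.medication_relief
        return max(0.0, min(10.0, calibrated))

    print(calibrate_pain_score(8.0, PainContext(0.7, 0.6, 0.2)))  # ≈ 6.85

The point of the sketch is the shape of the computation: the self-reported number is blended with functional-impact evidence extracted from the recordings, rather than replaced by it.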

The demo can be accessed at https://sgamboa1.github.io/auricare-demo/.

It was a tremendous learning experience for me.

Motivation – Opening Keynote and InterSystems Innovation Pathway

Judging Criteria

Impact: Real Problem, Potential for Widespread Impact, Solution addresses identified problem

Innovation: Convincing rationale for why solution will work, Addresses challenges specific to stakeholders, Considers user experience, interface and service design

Business Model: Demonstrates a plan to work in the field, Sustainable business model

Presentation: Presentation effectiveness, Team (diversity of backgrounds, technical expertise, etc.)

Boston GrandHack 2026 Schedule

Problem Pitching (45 seconds) on Day 1

Since the participants are selected individually and are not pre-assigned to teams, the problem-pitching exercise is used to share the ideas (problems, not solutions) that you are interested in and passionate about working on. You are given less than a minute to pitch your ideas. Towards the end, some participants, including me, pitched themselves by saying things like “I am a CS researcher by training, an expert in medical NLP, and have worked with several types of medical data”.

This short format allowed close to 75 pitches within one hour, which was truly a new experience for me.

Since this setup was new to many of us, the organizers shared some general tips and strategies for a good pitch. I have noted some of them below, grouped under Problem Definition and Tailoring for Stakeholders.

Problem Definition – Identify the 5 W’s

What

What is the actual problem and its impact?

What will happen if the problem is not fixed?

What will happen if the problem is fixed?

What are the current solutions to the problem and the shortcomings?

Who

Who does the problem affect?

Where

Where does the problem occur?

When

When does the problem occur?

When does the problem need to be fixed?

Why

Why is it important to fix the problem?

Walking through the 5 W’s for AuriCare

To make this concrete, here is how we walked through the 5 W’s for AuriCare:

  • What: Self-reported pain scores are subjective and vary widely across patients, which makes them an unreliable clinical signal. Today, clinicians have to either trust the number at face value or spend extra time during short visits trying to interpret it. If unaddressed, chronic-pain patients (especially Parkinson’s patients) continue to be under-treated or over-treated. If addressed well, clinicians get a calibrated, context-aware signal they can actually act on.
  • Who: Chronic-pain patients (with Parkinson’s disease as our beachhead population), their clinicians, and the patient’s circle of trust (caregivers, family members) who often help report symptoms.
  • Where: The problem mostly occurs at home, between clinical visits — exactly where current EHR systems have the least visibility.
  • When: It occurs continuously, but becomes most visible during clinic visits when patients are asked to summarise weeks of pain in a single number. It needs to be addressed before pain trajectories drift far enough to require ER visits or hospital admissions.
  • Why: Better calibration of pain reports translates directly into better dosing recommendations, fewer avoidable escalations, and a meaningful improvement in patients’ quality of life.

Tailoring for Stakeholders: Identify the 5 Ps

Separately brainstorm for each stakeholder

Patients

Providers

Payors

Pharma Industry

Patient’s Circle of Trust

Workshops (optional) – Running in Parallel to the Hackathon

  1. Learn to use the InterSystems IRIS data platform
  2. Prototyping Safety and Information: For teams interested in building real-world demos and using hardware, a MakerSpace was provided. Many teams made use of it to demonstrate a rough prototype of their proposed idea.
  3. How to Pitch: Conducted by MIT Hacking Medicine’s Faculty Director and seasoned entrepreneur Zen Chu

Mentorship

From Day 2 onwards, several mentors (senior, experienced professionals in the healthcare space) interacted with the different teams. Over the course of the hackathon, we got to interact with around 4 to 5 mentors. They were very friendly and accommodating, and took the time to listen to our ideas and pain points before sharing their experience and feedback. Their guidance ranged broadly from problem ideation and impact (at the start) to business model development and adoption into clinical workflows (towards the end of the hackathon).

As a postdoc at Stanford Medicine and a CS researcher by training, having spent most of my career in academia, I really enjoyed learning about the challenges of deploying medical AI tools and adopting them into clinical workflows, and about the thought processes and needs of the different stakeholders. A few themes came up repeatedly across the mentor conversations and are worth highlighting:

Pick a target population and focus on it. A common piece of feedback was that “chronic pain” is far too broad as a starting market. Choosing Parkinson’s disease as our target population, where the lack of fine motor skills makes hands-off voice capture genuinely valuable and where wearing-off cycles give the calibration engine a clear signal to model, made the pitch significantly sharper.

Start from the clinical workflow, not the model. Several mentors pushed us to articulate exactly where in the clinician’s day the AuriCare PainScore would appear, who would look at it first, and what action it would trigger. This is a very different framing from a typical academic ML paper, where the metric is often the end-point. In a clinical setting, a high-AUC model that sits outside the workflow simply does not get used.

Trust and explainability are not optional. Because we were proposing to calibrate a number that clinicians have been reading for decades, mentors flagged early that we needed a way for the clinician to see why a calibrated score differs from the self-reported one. This directly shaped our decision to keep the patient’s own words visible alongside the AuriCare PainScore, and to build the 3-D Interactive Pain Map as an explainability layer rather than a separate feature.
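
As a concrete illustration of that decision, here is a minimal sketch of the clinician-facing record such an explainability layer could expose. The class and field names are hypothetical, not AuriCare’s actual schema:

    # Hypothetical record pairing a calibrated score with its evidence.
    from dataclasses import dataclass, field

    @dataclass
    class CalibratedPainRecord:
        """One clinician-facing entry: the calibrated score never travels
        without the patient's words that produced it."""
        self_reported_score: float   # the number the patient gave (0-10)
        calibrated_score: float      # context-adjusted score (0-10)
        # Verbatim excerpts from the patient's recordings that drove the
        # adjustment, shown alongside the score in the dashboard.
        supporting_quotes: list[str] = field(default_factory=list)

    record = CalibratedPainRecord(
        self_reported_score=4.0,
        calibrated_score=6.5,
        supporting_quotes=[
            "Couldn't button my shirt this morning until the meds kicked in.",
            "Skipped my walk three days in a row because of the hip.",
        ],
    )

Keeping the quotes attached to the score is what lets a clinician audit, at a glance, why the calibrated number differs from the self-reported one.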

Focus less on the data acquisition device and more on how to process the data into actionable intelligence. The mentors also urged us to think about scaling the product, where we will inevitably need to integrate various data sources.

Business Model Development

This is the aspect where I learned the most, from my team members as well as the mentors. I am providing the relevant slides for “AuriCare”.

A few specific things I took away from this part of the hackathon:

  • Map every stakeholder before you pick a buyer. We mapped AuriCare’s stakeholder ecosystem as: patients (primary user), physicians (clinical buyer), hospital systems (enterprise buyer), insurance & CMS (payer), Epic / EHR vendors (infrastructure), and pharma & clinical research (research partner). This made it much clearer that even though patients are the primary user, the actual purchasing decision sits elsewhere.
  • A B2B2C model fit our solution better than direct-to-consumer. Insurer pays → provider deploys → patient uses. Framed this way, the value proposition becomes concrete on each side: insurers see fewer ER visits and fewer admissions through better outpatient pain management, providers get a calibrated signal that fits into their existing Epic workflow, and patients get a hands-off device that listens to their lived experience.
  • Size the market at an honest intersection. Rather than claiming the entire chronic-pain market, we positioned AuriCare at the intersection of AI Remote Patient Monitoring and Chronic Pain Monitoring, with Parkinson’s as the wedge. This gave us a defensible addressable market while making clear that Parkinson’s was a starting point, not a ceiling.
  • Pricing should reflect the workflow, not just the device. We split pricing into an initial unit price (covering the device and an initial monitoring window for clinical assessment) and an ongoing subscription (covering continuous monitoring, AI scoring, and Epic chart updates). This mirrors how insurers and health systems already think about remote-monitoring services and avoids the trap of pricing the hardware in isolation.

The biggest mindset shift for me was moving from “is the model good?” to “does the whole solution, including who pays for it and how it gets adopted, hang together?”. That question simply does not come up in most academic ML work, and the hackathon forced us to confront it within 48 hours.

Mock Presentation on Day 2 Evening

We had to give a mock presentation to members of the MIT Hacking Medicine team, who played the role of the judges. We would be given 3 minutes for the final presentation and were advised to stick to 1 minute each for Problem, Solution, and Business Model. They gave very detailed feedback, on both the technical and the presentation/logistical aspects of the pitch.

I really liked the collaborative, rapid-feedback nature of this segment.

Final Pitch Submission and Presentation on Day 3

We had to finalise and submit the pitch deck by Day 3 noon. The final presentation was scheduled after lunch.

Closing Thoughts

If you are a researcher, clinician, engineer, or designer thinking about whether to do something like Boston GrandHack, here are a few things I would share based on this experience:

  • The 48-hour constraint is the feature, not the bug. It forces you to make decisions you would normally postpone — picking a beachhead population, committing to one stakeholder, cutting features to fit a 3-minute pitch. That compression is where most of the learning happens.
  • Team composition matters more than any single skill. Having an orthopedic doctor, a hardware engineer, and an AI/ML researcher on the same team meant we could move from a clinical pain point to a working wearable demo and a calibrated AI score without long handoffs. If you are coming in as an individual, lean into the problem-pitching session on Day 1 — that is where teams form around complementary skills.
  • Mentorship is a renewable resource — use it. The mentors are there specifically to be challenged with your idea. The teams that improved the most over the weekend were the ones that actively sought out 4–5 mentor conversations and updated their pitch after each one, rather than treating mentorship as optional.
  • Coming from academia, the most valuable lens you can adopt is “who pays, who uses, and who decides”. It is genuinely different from “what’s the best metric on this benchmark”, and once you see the problem through that lens, it is hard to unsee it.

For me, AuriCare itself was almost secondary to the broader experience of working through a real-world medical AI problem end-to-end — from problem definition through prototype, business model, and pitch — in a single weekend, alongside a multidisciplinary team and a roomful of mentors who genuinely wanted us to succeed. I would highly recommend it to anyone working in or around medical AI.

