BodyMap VR: Medical Imaging Feature
Helping medical students understand anatomy spatially through immersive imaging
Roles
Stakeholder interviews, information architecture, wireframing, and low- to high-fidelity prototyping in collaboration with the developer
I designed a key feature for BodyMap, a VR anatomy platform helping students connect textbook knowledge to real-life anatomy. By integrating radiographic images with interactive 3D models, we enhanced spatial learning.
The feature successfully launched in early 2023 and contributed to broader adoption across educational institutions.
Students struggled to connect 2D images with real anatomy
Medical students had difficulty translating textbook-style images into practical anatomical understanding. This disconnect hurt their confidence and slowed down the learning process.
Traditional tools weren’t closing the gap—slides and videos are static, while cadavers are expensive and limited in availability.
We saw an opportunity to leverage VR
VR is an immersive medium uniquely suited to spatial learning. By combining radiographic imaging with interactive 3D models, we aimed to help students visualize anatomy in context: rotate it, explore it, and build genuine spatial intuition.
What we achieved
BodyMap’s medical imaging feature is now live, helping students better understand spatial anatomy across institutions. It has expanded access to immersive learning and strengthened BodyMap’s adoption in the education space.
Uncovering problems through stakeholder interviews
To validate the problem and guide our direction, we ran a rapid competitor audit and spoke with medical professionals and students.
We found that learners rarely study anatomy by isolated organs. Instead, they think regionally: using cross-sections and body areas to understand how structures relate.
When and how do users prefer to view medical images alongside a 3D model?
Medical Expert
“I usually compare the anatomy book with the cadaver by sections to understand the relative positions of the organs.”
Anatomy Student
“I rely on illustrations in anatomy books to understand regional anatomy, like the carpal tunnel, when studying cadavers.”
Users think in regions, not isolated parts
Our research showed that learners understand anatomy by region—like the digestive system or carpal tunnel—not by standalone organs. To support this, I restructured how users access medical images within the VR experience.
We then explored how best to integrate this feature into the existing interface:
From floating cards to a focused, image-first MVP
I created the user flow, information architecture, and wireframes in Figma to guide the experience.
I designed a Floating Card layout to present anatomical details (definitions, images, clinical relevance) with expandable dropdowns that reduce visual clutter and let users dive deeper only when needed.
But during VR testing, we realized that too much text disrupted the immersive experience. So we refined the MVP to focus on medical image viewing, leading to clearer interaction and better spatial usability in 3D space.
Organizing Medical Images for Spatial Learning
To improve navigation and spatial understanding, I organized medical images by anatomical region and modality, dividing them into two categories:
Cadaver images: realistic tissue references
Radiographic images: CT, MRI, and ultrasound scans for cross-sectional views
To support orientation in VR, I introduced a cube-view indicator that visualizes each image’s plane in 3D space. I also ensured that every image was clearly labeled by region and type in the library menu.
This gave learners confidence as they explored anatomy—allowing them to study by area, switch between image types, and build stronger spatial and clinical intuition.
Prototyping & Final Design
After finalizing the layout, our developer and I prototyped it in Unreal Engine. I then refined the UI and conducted in-headset testing to fine-tune spatial usability.
Designing for VR for the first time taught me how fundamentally different immersive interfaces are from traditional 2D design. I had to think about 360° environments, which open up more possibilities, but also add complexity.
One key challenge was the lack of established UI guidelines for VR. We had to build our own, testing directly in Unreal and scaling elements carefully in Figma to ensure they worked in 3D space. I also realized that accessibility in VR is still underexplored, something I plan to prioritize in future projects.
Another takeaway was the importance of bridging communication styles across disciplines. Using visual tools like annotated flows and diagrams helped align developers and stakeholders and made collaboration much smoother.