BodyMap VR: Medical Imaging Feature

Helping medical students understand anatomy spatially through immersive imaging

VR Design

UX/UI Design

BodyMap VR interface showing medical imaging feature

Role

Information architecture, wireframing, low- to high-fidelity prototyping, in-headset usability testing

Timeline

8 weeks (Oct - Dec 2022)

Team

2 Designers, 1 VR Developer

Tools

Figma, Unreal Engine

Overview

As a product designer at MAI, I designed a key feature for BodyMap, a VR anatomy platform helping students connect textbook knowledge to real spatial anatomy. By integrating radiographic images with interactive 3D models, we enhanced spatial learning.

The feature launched in early 2023, contributing to broader adoption across educational institutions.

Browsing anatomy by region and 3D model side by side in VR

The Problem

Medical students struggled to translate 2D images into spatial anatomical understanding

Medical students face a persistent cognitive gap: textbook illustrations are flat, but the human body is spatial. Without tools that bridge that gap, students memorize structures without ever gaining practical, clinical understanding.

The Solution

VR is uniquely suited for spatial learning

By combining radiographic imaging with interactive 3D models, students can visualize anatomy in context, rotate it, explore it, and build genuine spatial intuition.


The Outcome

We achieved

BodyMap’s medical imaging feature is now live, strengthening BodyMap's adoption across educational institutions and expanding access to immersive anatomy learning.


10%+

increase in reseller interest after feature release

15%

increase in user engagement during VR study sessions

5+ schools

adopted the tablet version, including high schools in Taiwan

User Research

One interview question drove the design direction

We ran a rapid audit of existing anatomy tools and spoke with both medical professionals and students to validate the problem space.


When and how do users prefer to view medical images alongside a 3D model?

Medical Expert

“I usually compare the anatomy book with the cadaver by sections to understand the relative positions of the organs.”

Anatomy Student

 “I rely on illustrations in anatomy books to understand regional anatomy, like the carpal tunnel, when studying cadavers.”


Key finding: Learners don't study anatomy by isolated organs. They think regionally, by body area and cross-section.


This insight completely reframed how we thought about information architecture.

Research goals:

  1. Understand how students currently study anatomy with imaging

  2. Identify when in the study process images are most useful

  3. Determine what context (labels, orientation cues, modality type) they need alongside images

Design Approach

Users think in regions, not parts

Our research showed that learners understand anatomy by region—like the digestive system or carpal tunnel—not by standalone organs. To support this, I restructured how users access medical images within the VR experience.

We then explored how best to integrate this feature into the existing interface:

Flashcards menu ❌

Focused too narrowly on individual structures, making it harder to grasp spatial relationships between organs and tissues.

Library Menu ✅

Organized by anatomical region and modality, allowing users to compare cadaver and radiographic images in spatial context.

Comparing flashcard vs. library menu integration for medical image access in VR


Design Solution

From floating cards to a focused, image-first MVP

I created the user flow, information architecture, and wireframes in Figma to guide the experience.

I designed a Floating Card layout to present anatomical details—definitions, images, clinical relevance—with expandable dropdowns to reduce visual clutter and let users dive deeper only when needed.

But during VR testing, we realized that too much text disrupted the immersive experience. So we refined the MVP to focus on medical image viewing, leading to clearer interaction and better spatial usability in 3D space.

Organizing Medical Images for Spatial Learning

To improve navigation and spatial understanding, I organized medical images by anatomical region and modality, dividing them into two categories:

  1. Cadaver images: realistic tissue references

  2. Radiographic images: CT, MRI, and ultrasound scans for cross-sectional views


To support orientation in VR, I introduced a cube-view indicator that visualizes each image’s plane in 3D space. I also ensured that every image was clearly labeled by region and type in the library menu.


This gave learners confidence as they explored anatomy—allowing them to study by area, switch between image types, and build stronger spatial and clinical intuition.

Wireframe illustrating grouped medical images for spatial learning


Prototyping


After finalizing the layout, our developer and I prototyped it in Unreal Engine. I then refined the UI and conducted in-headset testing to fine-tune spatial usability.


Final BodyMap VR interface showing library menu and main medical imaging panel


Reflections

Designing for VR taught me how fundamentally different immersive interfaces are from traditional 2D design. I had to think about 360° environments, which open up more possibilities, but also add complexity.

One key challenge was the absence of established UI guidelines for VR. We built our own, testing directly in Unreal Engine and scaling elements in Figma to confirm they worked in 3D space. I also realized that accessibility in VR is still underexplored, something I plan to prioritize in future projects.

Another takeaway was the importance of bridging communication styles across disciplines. Using visual tools like annotated flows and diagrams helped align developers and stakeholders and made collaboration much smoother.


Other Features I Designed

©2026 Pinyun Wang

Developed with Coffee & Love
