Graduate Client Project · Army Research Laboratory Context · Low-Fidelity Prototype Complete
MUMOSA Crisis Response VR Study
This case study documents the finished low-fidelity stage of our graduate project built around the Army Research Laboratory's MUMOSA dashboard. My contribution centered on the VR paper prototype: a headset-view concept for revisiting hazardous scenes, surfacing AI summaries in place, and letting investigators jump back to source evidence. My teammates focused more on the flatscreen dashboard redesign, and the next digital step follows the same split: an Axure flatscreen prototype built together, and a Unity VR prototype on my side.
Research-Grounded VR Review Layer For Crisis Reconstruction
The published MUMOSA work combines question answering, textual evidence, visual evidence, schema graphs, and simulation views into one crisis-analysis interface. Our coursework prototype extends that idea into a more usable learning and investigation flow, and my lane was the VR review layer for revisiting dangerous scenes after the event.
- Finished low-fidelity prototype, not just early concept notes.
- My contribution centers on the VR paper prototype and evidence-grounding interactions.
- The team also developed flatscreen dashboard directions for filters, schema, and note-taking.
Role
Literature-review author and VR paper-prototype owner for the team's MUMOSA redesign
Team
Three-person graduate team: Georgi Tsvetanski, Kelly Ehrlich, and Kamilah S.
Prototype Stage
Finished low-fidelity prototype covering both the VR lane and the flatscreen dashboard direction
Focus
This case study centers on my VR lane while still crediting the flatscreen dashboard work developed with my teammates
What MUMOSA Is
MUMOSA is a multimodal situational-awareness dashboard. The core idea is to stop treating crisis evidence as separate silos and instead connect reports, images, extracted events, schema graphs, and 3D or simulation views inside one interface that can support investigation, training, and, later, potentially real-time response.
What makes it interesting to me is that it sits directly in the space I care about most: human factors, high-stakes information flow, and spatial interfaces that help users understand a scene rather than only read about it.
What I Owned
I am not presenting this as if I built the entire MUMOSA platform myself. My contribution was the literature review and the VR side of the prototype: framing when immersive review is valuable, sketching the reconstructed scene, mapping interactions to controllers, and showing how evidence would stay grounded instead of turning into a disconnected tech demo.
The dashboard redesign work shown later came from the shared team process and is included here as context. Going into the digital phase, we are keeping that split clear: the flatscreen prototype moves into Axure with my colleagues, while I carry the VR lane forward in Unity.
Research Findings That Shaped The Direction
Cognitive Load Comes First
My literature review kept returning to the same problem: responders and investigators are already overloaded. The interface has to reduce fragmentation, not add another noisy control room.
Trust Needs Grounding
The strongest heuristic in the MUMOSA paper is still the right one for our coursework too: every AI summary needs a visible path back to the source evidence.
Resolve Phase Is The Best Fit
The most believable use case stayed the same through prototyping: post-crisis reconstruction and training, where schema graphs, documents, and 3D review become genuinely useful.
Finished Low-Fidelity VR Paper Prototype
The finished low-fidelity prototype translates the research into something concrete: a paper headset window, controller annotations, a sketched reconstruction of the crisis site, and evidence notes that appear in-scene when the investigator asks to verify a claim.
Instead of claiming a full VR build, I focused on the interaction questions that actually matter first. Can users orient themselves in the scene? Can AI summaries be checked against evidence? Does the interface support a resolve-phase workflow without burying the person in more complexity?
Scene-First Orientation
I started with a panoramic sketch of the site so investigators can understand place and hazard layout before chasing UI chrome.
Evidence In Context
The sticky-note overlays simulate AI summaries, timestamps, and next actions anchored directly to the place being inspected.
Low-Tech, Testable Controls
Annotated paper controllers let us test teleportation, source reveal, LiDAR measurement, and zoom/select behavior before building software.
Interaction Model
The controls are intentionally modest. I wanted the paper prototype to prove the interaction logic before promising any technical implementation. The model stays focused on navigation, evidence verification, and selective deep inspection; a rough sketch of that mapping follows the list below.
- Teleport between scene zones instead of forcing the user through menu-heavy navigation.
- Use a visible "show source" action so AI summaries can always lead back to evidence.
- Reserve LiDAR measurement, zoom, and alternate view controls for deeper inspection moments.
- Keep VR as a review surface paired with the web dashboard, not a replacement for the broader system.
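To make that model concrete before any Unity work begins, here is a minimal sketch of how the annotated paper controls could be expressed as data. Everything in it (the InteractionMode and ControllerBinding names, the specific inputs) is a hypothetical naming choice for the digital phase, not part of MUMOSA or an existing Unity scene.

```typescript
// Illustrative sketch only: these names are placeholders for the planned
// Unity prototype, not an existing MUMOSA or engine API.
type InteractionMode = "navigate" | "verify" | "inspect";

interface ControllerBinding {
  input: "trigger" | "grip" | "thumbstick" | "primaryButton";
  action: "teleport" | "showSource" | "measureLidar" | "zoomSelect";
  mode: InteractionMode;
}

// The paper prototype maps each annotated control to one of three modes,
// so deep-inspection tools never crowd the default navigation loop.
const bindings: ControllerBinding[] = [
  { input: "thumbstick",    action: "teleport",     mode: "navigate" },
  { input: "trigger",       action: "showSource",   mode: "verify" },
  { input: "grip",          action: "measureLidar", mode: "inspect" },
  { input: "primaryButton", action: "zoomSelect",   mode: "inspect" },
];
```

Splitting the bindings by mode mirrors the cognitive-load finding above: navigation and evidence verification stay on the default controls, and LiDAR measurement or zoom only appear when the investigator deliberately opts into deeper inspection.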
Evidence Grounding In The Scene
These notes are the core of the concept. They show how the interface could present AI-generated findings without asking users to trust floating summaries blindly. Each card has a timestamped claim, a quick interpretation, and an action prompt that leads back to source evidence.
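As a rough illustration of what each card carries, the sketch below lists the fields implied by the paper notes. The field names and shape are assumptions for the upcoming digital prototype, not an existing MUMOSA schema.

```typescript
// Hypothetical shape of one in-scene evidence card; every field name here
// is an assumption drawn from the paper prototype, not a real data model.
interface EvidenceCard {
  claim: string;          // AI-generated finding shown on the card
  timestamp: string;      // when the underlying observation was recorded
  interpretation: string; // short human-readable reading of the claim
  actionPrompt: string;   // e.g. "Show source report"
  sourceRefs: string[];   // links or IDs back to the documents and images behind the claim
  anchor: { x: number; y: number; z: number }; // where the card sits in the reconstructed scene
}
```

The sourceRefs field is the important part: it is what keeps the "show source" action honest, because every claim has to name the evidence it came from.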
Team Dashboard Direction
Even though my emphasis is the VR prototype, the finished low-fidelity work was broader than that. The shared FigJam board tracked flatscreen improvements alongside the VR lane, and those dashboard ideas matter because VR only makes sense as one mode in a larger system.
Role-Based Views
The flatscreen direction explored filtered entry points for responders, investigators, and shared cross-role analysis so people see the right level of detail first.
Timeline And Calendar Wayfinding
The team pushed incident and date selection toward clearer timeline and calendar structures so users can reconstruct the event sequence faster.
Clearer Schema Evidence
Dashboard revisions focused on a better legend, stronger source linkage, and easier-to-read node details instead of the original dense graph.
Saved Notes For Investigators
My teammates also explored note-taking and saved annotations so investigators can preserve findings during deeper review.
Design Brief Translation
Reviewing the FigJam design brief helped tighten the page narrative. The board makes it clear that the project is not only about adding VR, but about restructuring the whole experience around cognition, role, and investigation flow.
Role-Adaptive Information
The FigJam design brief reframed the dashboard around dynamic filtering and role-based views so investigators, responders, and coordinators can enter the same incident from different cognitive starting points.
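As a hypothetical illustration of that idea, role-based entry points could be described as plain configuration data rather than separate screens. The role names come from the brief; the panel names and detail levels below are assumptions for illustration, not decisions the team has made.

```typescript
// Illustrative only: panels and detail levels are assumed, not specified
// anywhere in the MUMOSA paper or the team's brief.
type Role = "responder" | "investigator" | "coordinator";

interface RoleView {
  role: Role;
  defaultPanels: string[];                      // panels shown on entry for this role
  detailLevel: "overview" | "evidence" | "full";
}

const roleViews: RoleView[] = [
  { role: "responder",    defaultPanels: ["map", "timeline"],                detailLevel: "overview" },
  { role: "investigator", defaultPanels: ["timeline", "schema", "evidence"], detailLevel: "evidence" },
  { role: "coordinator",  defaultPanels: ["map", "timeline", "schema"],      detailLevel: "full" },
];
```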
Overview To Detail To Overview
One of the clearest patterns in the brief is hierarchical exploration: start broad, drill into specific evidence, then move back out to re-establish context.
Training Is Not Secondary
The board treats simulation-based learning, pattern recognition, and decision rehearsal as core outcomes, not side benefits layered on after the fact.
Planned Usability Test
The FigJam board also includes a paper-prototype usability script. That matters because the low-fidelity phase is not just a sketch dump; it has a concrete evaluation plan for orientation, timeline understanding, and deeper evidence review.
Task 1: Initial Orientation
Participants first explore the dashboard freely, then explain what they would do first to understand the event. This tests whether overview information and entry points are discoverable.
Task 2: Timeline Reconstruction
The script asks users to find when the event happened and reconstruct the sequence leading up to it, focusing on timeline discoverability and information hierarchy.
Task 3: Evidence And Deeper Investigation
Participants are asked to locate supporting evidence and describe how they would inspect the scene more closely, which directly probes whether VR mode and deeper analysis tools feel legible.
Course Deliverables And Project Status
Completed
Literature Review
I finished the research document grounding the redesign in situational awareness, cognitive load, and multimodal crisis-response heuristics.
Completed
Low-Fidelity Team Prototype
The current prototype package includes dashboard wireframes, the VR paper prototype, and a usability-testing script for an incident-analysis scenario.
Next
Testing And Electronic Prototype
The next milestone is a digital prototype split across two lanes: an Axure flatscreen prototype built with my teammates and a separate Unity VR prototype that I will develop for the immersive side.
Research Grounding
This is the authored research document behind my part of the project. It covers user groups, heuristics, multimodal crisis-response design, and the reasoning that eventually shaped the resolve-phase and VR framing.
The client paper still matters as context, but it is not my portfolio artifact. What belongs in this case study is the bridge from research into the finished low-fidelity prototype.
Earlier VR Direction Note
Reference Documents
These links are here for context and coursework documentation. The client paper is supporting reference, not presented as my authored portfolio work.
Outcome So Far
What exists now is not only the research but the finished low-fidelity prototype itself. The project already communicates a credible division of labor: my VR paper prototype explores spatial evidence review, while the rest of the team pushes the dashboard toward clearer filtering, schema, and note-taking. The next step is to preserve that split in the digital prototype phase: Axure for the shared flatscreen workflow, Unity for the VR experience I am building separately.