We have been working on the challenging problem of 3D reconstruction of room-sized dynamic scenes (i.e., scenes containing moving humans and objects) at a quality and fidelity significantly higher than is currently possible. Visualizing these 3D reconstructions on virtual and augmented reality displays can provide an immersive experience for activities that currently require live physical presence. We are currently working on 3D capture of prostate biopsies and trauma surgical procedures, which can provide an immersive learning experience of medical surgeries.

Publications

MMVR  Cha, Y. W., Dou, M., Chabra, R., Menozzi, F., Wallen, E., & Fuchs, H. (2015). Immersive Learning Experiences for Surgical Procedures. Studies in Health Technology and Informatics, 220, 55-62.
Abstract: This paper introduces a computer-based system that is designed to record a surgical procedure with multiple depth cameras and reconstruct in three dimensions the dynamic geometry of the actions and events that occur during the procedure. The resulting 3D-plus-time data takes the form of dynamic, textured geometry and can be immersively examined at a later time; equipped with a Virtual Reality headset such as the Oculus Rift DK2, a user can walk around the reconstruction of the procedure room while controlling playback of the recorded surgical procedure with simple VCR-like controls (play, pause, rewind, fast forward). The reconstruction can be annotated in space and time to provide users with more information about the scene. We expect such a system to be useful in applications such as training of medical students and nurses.
[Paper] [BibTeX]
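
The VCR-like playback described in the abstract above can be illustrated with a short sketch. The following Python snippet is a minimal illustration, not the authors' implementation: the Frame and PlaybackController names, the frame file layout, and the playback speeds are all assumptions. It shows how play, pause, rewind, and fast forward can drive a time-indexed sequence of per-frame reconstructed meshes.

# Minimal sketch (illustrative, not the paper's system): a VCR-style playback
# controller over a recorded sequence of per-frame reconstructed meshes.
# Frame file names and the Frame type are assumptions for this example.

import time
from dataclasses import dataclass
from typing import List


@dataclass
class Frame:
    timestamp: float          # seconds since the start of the recording
    mesh_path: str            # path to the textured geometry for this frame


class PlaybackController:
    """Play / pause / rewind / fast-forward over a time-indexed frame list."""

    def __init__(self, frames: List[Frame]):
        self.frames = sorted(frames, key=lambda f: f.timestamp)
        self.cursor = 0.0     # current playback time in seconds
        self.speed = 0.0      # 0 = paused, 1 = normal play; negative rewinds

    def play(self):         self.speed = 1.0
    def pause(self):        self.speed = 0.0
    def rewind(self):       self.speed = -2.0
    def fast_forward(self): self.speed = 2.0

    def step(self, dt: float) -> Frame:
        """Advance the cursor by wall-clock dt and return the frame to display."""
        duration = self.frames[-1].timestamp
        self.cursor = min(max(self.cursor + self.speed * dt, 0.0), duration)
        # Pick the latest frame whose timestamp does not exceed the cursor.
        current = self.frames[0]
        for f in self.frames:
            if f.timestamp <= self.cursor:
                current = f
            else:
                break
        return current


if __name__ == "__main__":
    frames = [Frame(t * 0.1, f"frame_{t:04d}.ply") for t in range(100)]
    vcr = PlaybackController(frames)
    vcr.play()
    for _ in range(5):
        time.sleep(0.05)
        print(vcr.step(0.05).mesh_path)

In a real viewer the returned mesh would be rendered to the headset each display frame; here the selected mesh path is simply printed.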
Dou2015  Dou, M., Taylor, J., Fuchs, H., Fitzgibbon, A., & Izadi, S. (2015). 3D Scanning Deformable Objects with a Single RGBD Sensor. In Proceedings of CVPR 2015, Boston, Massachusetts, June 8-10, 2015.
Abstract: We present a 3D scanning system for deformable objects that uses only a single Kinect sensor. Our work allows a considerable amount of nonrigid deformation during scanning and achieves high-quality results without heavily constraining user or camera motion. We do not rely on any prior shape knowledge, enabling general object scanning with freeform deformations. To deal with the drift problem when nonrigidly aligning the input sequence, we automatically detect loop closures, distribute the alignment error over the loop, and finally use a bundle adjustment algorithm to optimize simultaneously for the latent 3D shape and the nonrigid deformation parameters. We demonstrate high-quality scanning results on some challenging sequences, comparing with state-of-the-art nonrigid techniques as well as ground-truth data.
[Paper] [YouTube] [BibTeX]
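
The loop-closure step mentioned in the abstract, distributing the alignment error over the loop, can be sketched in a simplified form. The Python snippet below is an illustrative heuristic on 2D rigid poses, not the paper's bundle adjustment over latent shape and nonrigid deformation parameters; the function name distribute_loop_error and the linear weighting are assumptions. It spreads the drift accumulated along a pose chain evenly once a loop closure determines where the last frame should be.

# Minimal sketch (a simplified heuristic, not the paper's bundle adjustment):
# once a loop closure is detected, spread the accumulated drift evenly over
# the chain of estimated poses. Poses are 2D rigid (theta, tx, ty) for
# brevity; the paper optimizes full nonrigid deformation parameters.

from typing import List, Tuple

Pose = Tuple[float, float, float]   # (theta, tx, ty)


def distribute_loop_error(poses: List[Pose], target: Pose) -> List[Pose]:
    """Linearly spread the gap between the last pose and `target` (the pose
    the loop closure says the last frame should have) across all poses."""
    n = len(poses) - 1
    theta_err = target[0] - poses[-1][0]
    tx_err = target[1] - poses[-1][1]
    ty_err = target[2] - poses[-1][2]
    corrected = []
    for i, (theta, tx, ty) in enumerate(poses):
        w = i / n               # 0 at the anchor frame, 1 at the loop end
        corrected.append((theta + w * theta_err,
                          tx + w * tx_err,
                          ty + w * ty_err))
    return corrected


if __name__ == "__main__":
    # A drifting trajectory that the loop closure says should return to (0, 0, 0).
    drifting = [(0.0, 0.0, 0.0), (0.1, 1.0, 0.2), (0.2, 2.1, 0.5), (0.35, 0.3, 0.4)]
    for p in distribute_loop_error(drifting, target=(0.0, 0.0, 0.0)):
        print(tuple(round(v, 3) for v in p))

The full method goes further, jointly refining the latent shape and per-frame deformations with bundle adjustment; this sketch only conveys the idea of amortizing the closure residual over the loop.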