Invited Speakers
Richard Newcombe (Vice President, Research Science, Meta Reality Labs Research)
Anjul Patney (NVIDIA)
Rana Hanocka (University of Chicago)
Laura Leal-Taixé (NVIDIA, TUM)
Margarita Grinvald (Meta)
08:00-08:30 am | Intro + Accepted papers spotlight | |
08:30-08:40 am | Rakesh Ranjan | Opening remarks |
08:40-09:15 am | Richard Newcombe | [Keynote] Problems still to be solved on the path to the next computing platform -- extending reality with always-on contextual MR, AI and Social Teleportation |
09:15-09:45 am | Anjul Patney | Pixels at Speed of Light: Lessons in Deploying CV in Graphics & Games. Abstract: This talk will cover the complexities of turning cutting-edge AI advances into efficient and robust production technologies. I will share key lessons learned from shipping high-impact graphics technologies, focusing on the critical "last-mile" research that bridges academic breakthroughs and real-world applications. With most pixels now AI-generated, we continue to face unique challenges in making novel algorithms interactive and practical for real-time graphics. I will discuss strategies for aggressive prototyping, the vital role of interactive demos, and the necessity of tight research-implementation integration. Finally, I will examine common pitfalls and outline critical open problems in CV4MR, including speed-of-light AI implementations, low-latency remote rendering, and controllable real-time diffusion models. |
09:45-10:15 am | Margarita Grinvald | Differentiable Passthrough - A learning-based approach to reduce perceptual artifacts. Abstract: Passthrough technology is a fundamental building block for mixed reality (MR), enabling the seamless integration of physical and virtual environments in a headset. Despite its apparent simplicity, the Passthrough algorithm involves complex challenges, particularly the disocclusion artifacts that arise when synthesizing novel views from the user's eye perspective. This talk outlines the tradespace of a Passthrough setup without explicit disocclusion inpainting and introduces Differentiable Passthrough: a machine-learning approach that leverages differentiable rendering to learn the optimal balance between perceptual artifacts, ultimately aiming to enhance the end-user experience. (See the reprojection sketch after the schedule.) |
10:15-10:50 am | Poster Spotlight + Break | |
10:50-11:20 am | Rana Hanocka | Data-Driven Neural Mesh Editing – without 3D Data. Abstract: Much of the current success of deep learning has been driven by massive amounts of curated data, whether annotated or unannotated. Compared to image datasets, developing large-scale 3D datasets is either prohibitively expensive or impractical. In this talk, I will present several works that harness the power of data-driven deep learning for tasks in shape editing and processing, without any 3D datasets. I will discuss works that learn to synthesize and analyze 3D geometry using large image datasets. |
11:20-11:30 am | Best Poster Award + Town Hall | |
11:30 am-12:00 pm | Laura Leal-Taixé | Towards a Foundation Model for 4D (Lidar). Abstract: Understanding dynamic scenes from video is one of the key problems in computer vision. Access to 3D sensors such as Lidar should make the problem easier, yet we do not have the plethora of foundation models that exist for images. How can we perform basic tasks such as panoptic segmentation in Lidar space? In this talk, I will explore the use of 2D foundation models to construct our very own 4D (Lidar) foundation model that can segment objects in Lidar space given a prompt, track those objects, and learn to reconstruct or complete them. (See the 2D-to-Lidar lifting sketch after the schedule.) |
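To make the reprojection idea behind the Differentiable Passthrough talk concrete, here is a minimal sketch of a differentiable eye-view warping loss in PyTorch. This is our illustration, not the speaker's actual method: the function names, the availability of an eye-view depth map and a reference eye image, and the camera-to-eye baseline are all assumptions, and disocclusions are handled only by `grid_sample`'s border padding, a crude stand-in for the inpainting-free trade-offs the abstract discusses.

```python
# Hypothetical sketch (not the talk's method): warp a headset-camera image
# into the user's eye view by differentiable backward warping, so perceptual
# trade-offs can be tuned end to end by gradient descent.
import torch
import torch.nn.functional as F

def backproject(depth, K_inv):
    """Lift an eye-view depth map (B,1,H,W) to 3D points (B,3,H,W)."""
    B, _, H, W = depth.shape
    ys, xs = torch.meshgrid(
        torch.arange(H, dtype=depth.dtype, device=depth.device),
        torch.arange(W, dtype=depth.dtype, device=depth.device),
        indexing="ij",
    )
    pix = torch.stack([xs, ys, torch.ones_like(xs)], dim=0)   # (3,H,W)
    rays = (K_inv @ pix.reshape(3, -1)).reshape(1, 3, H, W)   # unit-plane rays
    return rays * depth                                       # scale by depth

def warp_cam_to_eye(cam_img, eye_depth, K_cam, K_eye_inv, T_cam_from_eye):
    """For every eye pixel, sample the camera image at its reprojection."""
    B, _, H, W = cam_img.shape
    pts_eye = backproject(eye_depth, K_eye_inv).reshape(B, 3, -1)
    pts_cam = T_cam_from_eye[:3, :3] @ pts_eye + T_cam_from_eye[:3, 3:4]
    uv = K_cam @ pts_cam
    uv = (uv[:, :2] / uv[:, 2:3].clamp(min=1e-6)).reshape(B, 2, H, W)
    # grid_sample expects sampling coordinates normalized to [-1, 1].
    grid = torch.stack([2 * uv[:, 0] / (W - 1) - 1,
                        2 * uv[:, 1] / (H - 1) - 1], dim=-1)
    # Border padding crudely stands in for disocclusion handling: eye pixels
    # the camera never observed receive stretched edge content instead.
    return F.grid_sample(cam_img, grid, padding_mode="border",
                         align_corners=True)

# Toy usage: gradients flow through the warp into the learnable depth.
B, H, W = 1, 64, 64
cam_img = torch.rand(B, 3, H, W)
eye_ref = torch.rand(B, 3, H, W)
eye_depth = torch.full((B, 1, H, W), 2.0, requires_grad=True)
K = torch.tensor([[50.0, 0.0, W / 2], [0.0, 50.0, H / 2], [0.0, 0.0, 1.0]])
T = torch.eye(4)
T[0, 3] = 0.03  # assumed ~3 cm camera-to-eye baseline
warped = warp_cam_to_eye(cam_img, eye_depth, K, torch.inverse(K), T)
F.l1_loss(warped, eye_ref).backward()
```

Because every step is differentiable, a photometric or perceptual loss measured in the eye view can be backpropagated into whatever produces the depth (or other warp parameters), which is the kind of learned artifact trade-off the abstract alludes to.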
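In the same spirit, the basic 2D-to-3D transfer that a Lidar foundation model could bootstrap from can be sketched in a few lines of NumPy. This is an assumed, simplified pipeline rather than the speaker's: the instance masks are taken to come from some promptable 2D segmenter (e.g., SAM), calibration is a placeholder, and occlusion between the camera and the Lidar is ignored.

```python
# Hypothetical sketch (not the speaker's pipeline): transfer 2D instance
# masks onto a Lidar sweep by projecting each point into the image.
import numpy as np

def lift_masks_to_lidar(points, masks, K, T_cam_from_lidar):
    """Label each Lidar point with the instance id of the pixel it hits.

    points: (N, 3) xyz in the Lidar frame.
    masks:  (H, W) integer instance ids from a 2D segmenter (0 = background).
    K:      (3, 3) camera intrinsics.
    T_cam_from_lidar: (4, 4) rigid transform from Lidar to camera frame.
    Returns an (N,) id array; -1 marks points behind the camera or outside
    the image, which remain unlabeled.
    """
    H, W = masks.shape
    # Lidar frame -> camera frame (row-vector convention).
    p_cam = points @ T_cam_from_lidar[:3, :3].T + T_cam_from_lidar[:3, 3]
    labels = np.full(len(points), -1, dtype=np.int64)
    in_front = p_cam[:, 2] > 1e-3                     # drop points behind camera
    uvw = p_cam[in_front] @ K.T
    u = np.round(uvw[:, 0] / uvw[:, 2]).astype(int)   # perspective divide
    v = np.round(uvw[:, 1] / uvw[:, 2]).astype(int)
    inside = (u >= 0) & (u < W) & (v >= 0) & (v < H)
    hit = np.flatnonzero(in_front)[inside]
    labels[hit] = masks[v[inside], u[inside]]
    return labels

# Toy usage with random data and an identity Lidar-to-camera transform.
points = np.random.randn(1000, 3) * 5 + np.array([0.0, 0.0, 10.0])
masks = np.random.randint(0, 4, size=(480, 640))
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
labels = lift_masks_to_lidar(points, masks, K, np.eye(4))
```

Per-point ids like these are the natural starting point for the tracking and completion the abstract mentions: associate ids across sweeps to track, and aggregate all points sharing an id to reconstruct or complete an object.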
Important Dates
Note: as explained in the announcement below, accepted papers will not appear in the CVPR 2025 workshop proceedings; they will instead be highlighted on the workshop website.
Announcement
Only two weeks before our workshop's submission deadline, the CVPR Workshop Chairs unexpectedly communicated to us a March 31 deadline for proceedings submission, leaving only 48 hours for the review process. Despite our efforts, we were unable to get this deadline extended. Rather than compromise on review quality, we have decided not to submit accepted papers to the CVPR Workshop proceedings. Our priority is to provide authors with high-quality feedback, select the best papers, and highlight them on the workshop website. Other workshop organizers we know are taking the same approach. Thank you for your understanding.
Topics of Interest
The CV4MR 2025 workshop will highlight frontiers of innovation in turning wearable computers, sensors, and displays into augmentations of human capability for productivity, life improvement, or recreation. Since this topic is inherently interdisciplinary, we encourage authors to submit works in AI, Computer Vision, Image Processing, or Computational Photography that they believe can advance this field.
Authors are strongly encouraged to motivate the Mixed Reality applications of their work in their submissions.
Here is a non-exhaustive list of topics we encourage submissions on:
Best Workshop Paper Award
We are pleased to announce a CV4MR Best Workshop Paper Award (with a Meta Quest 3S prize sponsored by Meta), to be selected from the accepted papers.
Submission Guidelines
Reviewing for CV4MR 2025
Reviewers are the backbone of the integrity of our workshop. If you are interested in joining the reviewer pool, please email cv4mr@googlegroups.com with the subject line “Reviewer Pool Participation”, a short description of your background, and your resume attached.