Workshop Updates
October 13th: Video recordings of the workshop keynote talk and presentations are now available on YouTube.
October 13th: Free online access to the workshop proceedings is available for download here. Note that the full proceedings are only accessible directly from this website, and will be available for 4 weeks.
October 4th: The full program of the workshop is now online. The workshop will be held at the Vancouver Convention Centre, East Building Level 1, Meeting Room 15.
August 17th: Notifications of acceptance have been sent to the primary authors. Camera-ready submission for accepted papers is now open on CMT under "Create Camera Ready Submission". The deadline for camera-ready submission is August 22nd, 2023.
August 8th: Due to the large number of submissions we received this year, extra time is needed for the reviewing process. The notification of acceptance has been delayed to August 16th, 2023.
July 13th: In response to multiple requests, the submission deadline has been extended to July 21st, 2023.
July 12th: This year we will have the honor of hosting Prof. Arman Rahmim from The University of British Columbia for his keynote speech on "Towards Digital Twins for Precision Medicine"!
May 9th: The submission portal is now open.
May 6th: The workshop is listed in the MICCAI satellite event program. MMMI 2023 will be held on the afternoon of October 8th as a half-day event.
We are offering multiple Best Paper Awards and Student Paper Awards, thanks to the support from our sponsors! Because of this, the submission deadline has been extended to August 7th.
Scope
The International Workshop on Multiscale Multimodal Medical Imaging (MMMI) aims to tackle the important challenge of acquiring and analyzing medical images at multiple scales and/or from multiple modalities, which is increasingly applied in research studies and clinical practice. MMMI offers an opportunity to present: 1) techniques involving multi-modal image acquisition and reconstruction, or imaging at multiple scales; 2) novel methodologies and insights for multiscale multimodal medical image analysis, including image fusion, multimodal augmentation, and joint inference; and 3) empirical studies involving the application of multiscale multimodal imaging for clinical use.
Objective
Facing the growing amount of data available from multiscale multimodal medical imaging facilities and the variety of new image analysis methods developed so far, this MICCAI workshop aims to advance the state of the art in multiscale multimodal medical imaging, including algorithm development, methodology implementation, and experimental studies. The workshop also aims to facilitate more communication and interaction between researchers in the fields of medical image analysis and machine learning, especially those with expertise in data fusion, multi-fidelity methods, and multi-source learning.
Topics
Topics of submissions to the workshop include, but are not limited to:
Image segmentation techniques based on multiscale multimodal images
Novel techniques in multiscale multimodal image acquisition and reconstruction
Registration methods across multiscale multimodal images
Fusion of images from multiple resolutions and novel visualization methods
Spatial-temporal analysis using multiple modalities
Fusion of image sources with different fidelities: e.g., co-analysis of EEG and fMRI
Multiscale multimodal disease diagnosis/prognosis using supervised or unsupervised methods
Atlas-based methods on multiple imaging modalities
Cross-modality image generative methods: e.g., generation of synthetic CT/MR images
Novel radiomics methods based on multiscale multimodal imaging
Shape analysis on images from multiple sources and/or multiple resolutions
Graph methods in medical image analysis
Benchmark studies for multiscale multimodal image analysis: e.g., using electrophysiological signals to validate fMRI data
Multi-view machine learning for cancer diagnosis and prognosis
Integrated radiology, pathology, and genomics analysis via learning algorithms
New image biomarker identification through multiscale multimodal data
Integrated learning using both image and non-image data
History of MMMI
MMMI 2019 (https://mmmi2019.github.io/) recorded 80 attendees and received 18 eight-page submissions, with 13 accepted and presented. The theme of MMMI 2019 was emerging techniques for imaging and analyzing multi-modal multi-scale data. The 2nd MMMI workshop was merged with MLCDS 2021 (http://mcbr-cds.org/), recorded 58 attendees, and received 16 eight-page submissions, with 10 of them accepted and presented. The theme of MLCDS 2021 was the role and prospects of multi-modal multi-scale imaging in clinical practice. The 3rd MMMI workshop recorded 64 attendees and received 18 eight-page submissions, with 12 of them accepted and presented. The theme of MMMI 2022 was novel methodology development for multi-modal fusion. As multi-modal, multi-scale medical imaging is a fast-growing field, we are continuing MMMI to provide a platform for presenting and discussing novel research from both the radiology and computer science communities.
Workshop Schedule
October 8th, 13:30 - 18:00 (Vancouver time)
13:30-13:40 Welcome message and updates from the workshop organizing team
13:40-14:30 Keynote Talk: Prof. Xiaoxiao Li: "Federated Learning on Multi-source and Multi-modal Medical Data"
Xiaoxiao Li has been an Assistant Professor in the Department of Electrical and Computer Engineering at the University of British Columbia (UBC) since August 2021. In addition, Dr. Li holds positions as a Faculty Member at the Vector Institute and an Adjunct Assistant Professor at Yale University. Before joining UBC, Dr. Li was a Postdoctoral Research Fellow at Princeton University. Dr. Li obtained her Ph.D. degree from Yale University in 2020. Dr. Li's research focuses on developing theoretical and practical solutions for enhancing the trustworthiness of AI systems in healthcare. Specifically, her recent research has been dedicated to advancing federated learning techniques and their applications in the medical field. Dr. Li's work has been recognized with numerous publications in top-tier machine learning conferences and journals, including NeurIPS, ICML, ICLR, MICCAI, IPMI, ECCV, TMI, Medical Image Analysis, and Nature Methods. Her contributions have been further acknowledged with several best paper awards at prestigious international conferences.
14:30-15:30 Long oral session, Part I
BreastRegNet: A Deep Learning Framework for Registration of Breast Faxitron and Histopathology Images (20m)
Negar Golestani (Stanford University) Gregory Bean (Stanford University) Mirabela Rusu (Stanford University)
Identifying Shared Neuroanatomic Architecture between Cognitive Traits through Multiscale Morphometric Correlation Analysis (20m)
Zixuan Wen (University of Pennsylvania) Jingxuan Bao (University of Pennsylvania) Shannon Risacher (Indiana University) Andrew Saykin (Indiana University) Paul Thompson (Imaging Genetics Center) Christos Davatzikos (University of Pennsylvania) Yize Zhao (Yale University) Li Shen (University of Pennsylvania)
Modality Cycles with Masked Conditional Diffusion for Unsupervised Anomaly Segmentation in MRI (20m)
Ziyun Liang (University of Oxford) Harry Anthony (University of Oxford) Felix Wagner (University of Oxford) Konstantinos Kamnitsas (University of Oxford)
15:30-16:00 Coffee break
16:00-17:00 Long oral session, Part II
MAD: Modality Agnostic Distance Measure for Image Registration (20m)
Vasiliki Sideri-Lampretsa (Technische Universität München) Veronika Zimmer (Technical University Munich) Huaqi Qiu (Imperial College London) Georgios Kaissis (Technische Universität München) Daniel Rueckert (Technische Universität München)
Osteoarthritis Diagnosis Integrating Whole Joint Radiomics and Clinical Features for Robust Learning Models using Biological Privileged Information (20m) recording
Najla Al Turkestani (King Abdulaziz University) Lucia Cevidanes (University of Michigan) Jonas Bianchi (University of Michigan) Winston Zhang (University of Michigan) Marcela Gurgel (University of Michigan) Baptiste Baquero (University of Michigan) Reza Soroushmehr (University of Michigan)
Anatomy-Aware Lymph Node Detection in Non-Contrast and Contrast-Enhanced Chest CT using Implicit Station Stratification (20m)
Ke Yan (Alibaba DAMO Academy) Dakai Jin (Alibaba USA Inc.) Dazhou Guo (Alibaba DAMO Academy USA) Minfeng Xu (Alibaba) Na Shen (Zhongshan Hospital of Fudan University) Xian-Sheng Hua (Damo Academy, Alibaba Group) Xianghua Ye (Zhejiang University) Le Lu (Alibaba Group)
17:00-18:00 Short oral session
M^2Fusion: Bayesian-based Multimodal Multi-level Fusion on Colorectal Cancer Microsatellite Instability Prediction (5m)
Quan Liu (Vanderbilt University) Jiawen Yao (DAMO Academy, Alibaba Group) Lisha Yao (Guangdong) Xin Chen (Guangzhou First People's Hospital) Jingren Zhou (Alibaba Group) Le Lu (Alibaba Group) Ling Zhang (Alibaba USA Inc.) Zaiyi Liu (Department of Radiology, Guangdong General Hospital, Guangdong Academy of Medical Science) Yuankai Huo (Vanderbilt University)
Query Re-Training for Modality-Agnostic Incomplete Multi-modal Brain Tumor Segmentation (5m)
Delin Chen (Wuhan University) YanSheng Qiu (Wuhan University) Zheng Wang (Wuhan University)
Multimodal Context-Aware Detection of Glioma Biomarkers using MRI and WSI (5m)
Tomé Albuquerque (INESC TEC) Benedikt Wiestler (TUM) Maria Vasconcelos (Fraunhofer Portugal AICOS) Jaime Cardoso (INESC Porto, Universidade do Porto) Peter Schüffler (Technical University of Munich)
Synthesising brain iron maps from quantitative magnetic resonance images using interpretable generative adversarial networks (5m)
Lindsay Munroe (King's College London) Maria Deprez (King's College London)
Noisy-Consistent Pseudo Labeling Model for Semi-supervised Skin Lesion Classification (5m) recording
Sen Li (Yizhun Medical AI) Qian Li (China Aerospace Science and Industry Group 731 Hospital)
Hessian-based Similarity Metric for Multimodal Medical Image Registration (5m)
Mohammadreza Eskandari (McGill University) Houssem-Eddine Gueziri (McGill University) Louis Collins (McGill)
Hybrid Multimodality Fusion with Cross-Domain Knowledge Transfer to Forecast Progression Trajectories in Cognitive Decline (5m)
Minhui Yu (The University of North Carolina at Chapel Hill) Yunbi Liu (School of Science and Engineering, The Chinese University of Hong Kong, Shenzhen) Shijun Qiu (The First Affiliated Hospital of Guangzhou University of Chinese Medicine) Ling Yue (Department of Geriatric Psychiatry, Shanghai Mental Health Center, Shanghai Jiao Tong University School of Medicine) Mingxia Liu (University of North Carolina at Chapel Hill)
Leveraging Contrastive Learning with SimSiam for the Classification of Primary and Secondary Liver Cancers (5m)
Ramtin Mojtahedi (Queen's University) Mohammad Hamghalam (Queen's University) William Jarnagin (Memorial Sloan Kettering Cancer Center) Richard Do (Memorial Sloan Kettering Cancer Centre) Amber Simpson (Queen's University)
MuST: Multimodal Spatiotemporal Graph-Transformer for Hospital Readmission Prediction (5m) recording
Yan Miao (The University of Hong Kong) Lequan Yu (The University of Hong Kong)
Groupwise Image Registration with Atlas of Multiple Resolutions Refined at Test Phase (5m)
Ziyi HE (Hong Kong University of Science and Technology) Tony C. W. Mok (DAMO Academy, Alibaba Group) Albert C. S. Chung (HKUST)
Presenters are encouraged, although not required, to bring and present the poster associated with their accepted submission. The poster hall is at the Ground Level, Exhibition B-C, where coffee breaks and lunches will be served. Poster boards will be labeled with the MMMI acronym. Posters should be in portrait format. The maximum poster size for MICCAI 2023 is A0 (i.e., 841 x 1189 mm or 33.1 x 46.8 in, width x height), portrait format.