Search results

  • Journal article
    Zhou T, Li M, Ruan S, Luo T, Jiang B, Zhu J, Ma P, Yang D, Yang G et al., 2026,

    A reliable framework for brain tumor segmentation via multi-modal fusion and uncertainty modeling

    Information Fusion, Vol: 129, ISSN: 1566-2535

    Accurate brain tumor segmentation from MRI scans is critical for effective diagnosis and treatment planning. Recent advances in deep learning have significantly improved brain tumor segmentation performance. However, these models still face challenges in clinical adoption due to their inherent uncertainties and potential for errors. In this paper, we propose a novel MR brain tumor segmentation approach that integrates multi-modal data fusion and uncertainty quantification to improve the accuracy and reliability of brain tumor segmentation. Recognizing that each MR modality contributes unique insights into the tumor’s characteristics, we introduce modality-aware guidance by explicitly categorizing the modalities into “teacher” (FLAIR and T1c) and “student” (T2 and T1) groups. Since the teacher modalities are the most informative for identifying brain tumors, we propose a multi-modal teacher-student fusion strategy that leverages the teacher modalities to guide the student modalities in both spatial and channel feature representation. To address prediction reliability, we employ Monte Carlo dropout during training to generate multiple uncertainty estimates. Additionally, we develop a novel uncertainty-aware loss function that optimizes segmentation accuracy while quantifying the uncertainty in predictions. Experimental results on three BraTS datasets demonstrate the effectiveness of the proposed components and superior performance compared to state-of-the-art methods, highlighting the approach’s potential for clinical application.
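The Monte Carlo dropout idea mentioned in this abstract is a general technique: dropout is kept active at prediction time and several stochastic forward passes are averaged, with the spread across passes serving as an uncertainty estimate. The toy two-layer network below is purely illustrative and is not the authors' model:

```python
import numpy as np

def mc_dropout_predict(x, w1, w2, p=0.5, n_samples=20, rng=None):
    """Monte Carlo dropout on a toy two-layer network: run several
    stochastic forward passes and return the mean prediction and the
    per-output variance as an uncertainty estimate."""
    rng = np.random.default_rng(rng)
    preds = []
    for _ in range(n_samples):
        h = np.maximum(x @ w1, 0.0)          # ReLU hidden layer
        mask = rng.random(h.shape) >= p      # dropout stays ON at test time
        h = h * mask / (1.0 - p)             # inverted-dropout scaling
        preds.append(h @ w2)
    preds = np.stack(preds)
    return preds.mean(axis=0), preds.var(axis=0)
```

Outputs where the variance is high are exactly the ones a clinician would want flagged for review, which is the clinical-reliability motivation the abstract gives.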

  • Journal article
    Cheng CW, Huang J, Zhang Y, Yang G, Schönlieb CB, Aviles-Rivero AI et al., 2026,

    Mamba neural operator: Who wins? Transformers vs. state-space models for PDEs

    Journal of Computational Physics, Vol: 548, ISSN: 0021-9991

    Partial differential equations (PDEs) are widely used to model complex physical systems, but solving them efficiently remains a significant challenge. Recently, Transformers have emerged as the preferred architecture for PDEs due to their ability to capture intricate dependencies. However, they struggle with representing continuous dynamics and long-range interactions. To overcome these limitations, we introduce the Mamba Neural Operator (MNO), a novel framework that enhances neural operator-based techniques for solving PDEs. MNO establishes a formal theoretical connection between structured state-space models (SSMs) and neural operators, offering a unified structure that can adapt to diverse architectures, including Transformer-based models. By leveraging the structured design of SSMs, MNO captures long-range dependencies and continuous dynamics more effectively than traditional Transformers. Through extensive analysis, we show that MNO significantly boosts the expressive power and accuracy of neural operators, making it not just a complement but a superior framework for PDE-related tasks, bridging the gap between efficient representation and accurate solution approximation.

  • Journal article
    Jing P, Lee K, Zhang Z, Zhou H, Yuan Z, Gao Z, Zhu L, Papanastasiou G, Fang Y, Yang G et al., 2026,

    Reason like a radiologist: Chain-of-thought and reinforcement learning for verifiable report

    Medical Image Analysis, Vol: 109, ISSN: 1361-8415
  • Journal article
    Ma X, Tao Y, Zhang Z, Zhang Y, Wang X, Zhang S, Ji Z, Zhang Y, Chen Q, Yang G et al., 2026,

    Test-time generative augmentation for medical image segmentation

    Medical Image Analysis, Vol: 109, ISSN: 1361-8415
  • Journal article
    Zhang S, Nan Y, Fang Y, Wang S, Liu Y, Papanastasiou G, Gao Z, Li S, Walsh S, Yang G et al., 2026,

    Dynamical multi-order responses and global semantic-infused adversarial learning: A robust airway segmentation method

    Medical Image Analysis, Vol: 108, ISSN: 1361-8415
  • Journal article
    Ma J, Jiang M, Fang X, Chen J, Wang Y, Yang G et al., 2026,

    Hybrid aggregation strategy with double inverted residual blocks for lightweight salient object detection

    Neural Networks, Vol: 194, ISSN: 0893-6080
  • Journal article
    Jameel A, Smith J, Akgun S, Bain P, Nandi D, Jones B, Quest R, Gedroyc W, Yousif N et al., 2026,

    Creation and clinical utility of a 3D atlas-based model for visualising brain nuclei targeted by MR-guided focused ultrasound thalamotomy for tremor.

    Biomed Phys Eng Express, Vol: 12

    Magnetic resonance guided focused ultrasound (MRgFUS) thalamotomy is an established treatment for tremor. MRgFUS utilises ultrasound to non-invasively thermally ablate or 'lesion' tremorgenic tissue. The success of treatment is contingent on accurate lesioning, as assessed by tremor improvement and minimisation of adverse effects. However, coordinate planning and post-procedure lesion visualisation are difficult because the key targets cannot be seen on standard clinical imaging. Thus, a computational tool is needed to aid target visualisation. A 3D atlas-based model was created using the Schaltenbrand-Wahren atlas. Key nuclei were manually delineated, interpolated and smoothed in 3D Slicer to create the model. Evaluation of targeting approaches across a seven-year period and patient-specific analyses of tremor treatments were performed. The anatomical positions of MRgFUS lesions in the model were compared against varying clinical outcomes. The model provides an anatomical visualisation of how the change in targeting approach led to improved tremor suppression and a reduction in adverse effects for patients. This study demonstrates the successful development of a 3D atlas-based computational model of the brain target nuclei in MRgFUS thalamotomy and its clinical utility for tremor treatment analysis.

  • Conference paper
    Zhang H, Huang J, Wu Y, Dai C, Wang F, Zhang Z, Yang G et al., 2026,

    Lightweight Hypercomplex MRI Reconstruction: A Generalized Kronecker-Parameterized Approach

    Pages: 95-105, ISSN: 0302-9743

    Magnetic Resonance Imaging (MRI) is crucial for clinical diagnostics but is hindered by prolonged scan times. Current deep learning models enhance MRI reconstruction but are often memory-intensive and unsuitable for resource-limited systems. This paper introduces a lightweight MRI reconstruction model leveraging Kronecker-Parameterized Hypercomplex Neural Networks to achieve high performance with reduced parameters. By integrating Kronecker-based modules, including Kronecker MLP, Kronecker Window Attention, and Kronecker Convolution, the proposed model efficiently extracts spatial features while preserving representational power. We introduce Kronecker U-Net and Kronecker SwinMR, which maintain high reconstruction quality with approximately 50% fewer parameters compared to existing models. Experimental evaluation on the FastMRI dataset demonstrates competitive PSNR, SSIM, and LPIPS metrics, even at high acceleration factors (8× and 16×), with no significant performance drop. Additionally, Kronecker variants exhibit superior generalization and reduced overfitting on limited datasets, facilitating efficient MRI reconstruction on hardware-constrained systems. This approach sets a new benchmark for parameter-efficient medical imaging models. Code is available at: https://github.com/Whethe/HyperKron-MRI-Recon.
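The parameter savings behind Kronecker parameterization come from the standard identity (A ⊗ B) vec(X) = vec(B X Aᵀ): a large dense weight W = kron(A, B) is stored as two small factors and applied without ever materialising W. The sketch below illustrates that general idea only; the paper's actual modules (Kronecker MLP, Window Attention, Convolution) are more elaborate:

```python
import numpy as np

def kron_linear(x, a, b):
    """Apply a linear layer whose weight is W = kron(A, B) without
    forming W. For A of shape (m1, n1) and B of shape (m2, n2),
    (A ⊗ B) @ x equals vec(B @ X @ A.T), where X is x reshaped to
    (n2, n1) in column-major order and vec stacks columns."""
    n1, n2 = a.shape[1], b.shape[1]
    X = x.reshape(n1, n2).T        # column-major reshape to (n2, n1)
    Y = b @ X @ a.T                # (m2, m1), costs far less than W @ x
    return Y.T.reshape(-1)         # column-major vec back to (m1*m2,)
```

With A of shape (3, 4) and B of shape (2, 5), the factored form stores 3·4 + 2·5 = 22 numbers where the dense W = kron(A, B) would store 6·20 = 120, which is the kind of reduction the abstract's "approximately 50% fewer parameters" figure builds on at layer scale.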

  • Conference paper
    Hasan MK, Yang G, Yap CH, 2026,

    Motion-Enhanced Cardiac Anatomy Segmentation via an Insertable Temporal Attention Module

    Pages: 143-153, ISSN: 0302-9743

    Cardiac anatomy segmentation is useful for clinical assessment of cardiac morphology to inform diagnosis and intervention. Deep learning (DL), especially with motion information, has improved segmentation accuracy. However, existing techniques for motion enhancement are not yet optimal, and they have high computational costs due to increased dimensionality or reduced robustness due to suboptimal approaches that use non-DL motion registration, non-attention models, or single-headed attention. They further have limited adaptability and are inconvenient for incorporation into existing networks where motion awareness is desired. Here, we propose a novel, computationally efficient Temporal Attention Module (TAM) that offers robust motion enhancement, modeled as a small, multi-headed, cross-temporal attention module. TAM’s uniqueness is that it is a lightweight, plug-and-play module that can be inserted into a broad range of segmentation networks (CNN-based, Transformer-based, or hybrid) for motion enhancement without requiring substantial changes in the network’s backbone. This feature enables high adaptability and ease of integration for enhancing both existing and future networks. Extensive experiments on multiple 2D and 3D cardiac ultrasound and MRI datasets confirm that TAM consistently improves segmentation across a range of networks while maintaining computational efficiency and improving on currently reported performance. The evidence demonstrates that it is a robust, generalizable solution for motion-awareness enhancement that is scalable (such as from 2D to 3D). The code is available at https://github.com/kamruleee51/TAM.
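The multi-headed cross-temporal attention the abstract describes can be sketched in a minimal form: features of the frame being segmented act as queries, and features pooled across the other time frames supply keys and values, with a residual connection so the module can be dropped into an existing backbone. This is a generic sketch under that description, not the TAM implementation from the linked repository:

```python
import numpy as np

def temporal_cross_attention(target, neighbours, wq, wk, wv, n_heads=4):
    """One cross-temporal attention step: target frame features
    (shape (N, d)) attend to features of T other frames
    (shape (T, N, d)) via multi-headed scaled dot-product attention,
    returned with a residual connection."""
    N, d = target.shape
    dh = d // n_heads
    q = (target @ wq).reshape(N, n_heads, dh)
    kv = neighbours.reshape(-1, d)                   # flatten time into tokens
    k = (kv @ wk).reshape(-1, n_heads, dh)
    v = (kv @ wv).reshape(-1, n_heads, dh)
    out = np.empty_like(q)
    for h in range(n_heads):
        scores = q[:, h] @ k[:, h].T / np.sqrt(dh)   # (N, T*N)
        scores -= scores.max(axis=-1, keepdims=True) # numerically stable softmax
        attn = np.exp(scores)
        attn /= attn.sum(axis=-1, keepdims=True)
        out[:, h] = attn @ v[:, h]
    return target + out.reshape(N, d)                # residual: motion-enhanced features
```

Because the output has the same shape as the input and reduces to the identity when the value projection is zero, a module of this form can be inserted between existing layers without restructuring the backbone, which is the plug-and-play property the abstract emphasises.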

  • Journal article
    Markus JE, Cristinacce PLH, Punwani S, O'Connor JPB, Mills R, Lopez MY, Grech-Sollars M, Fasano F, Waterton JC, Thrippleton MJ, Hall MG, Francis ST, Statton B, Murphy K, So P-W, Hyare H et al., 2025,

    Steps on the Path to Clinical Translation-A British and Irish Chapter ISMRM Workshop Survey of the UK MRI Community

    Magnetic Resonance in Medicine, ISSN: 0740-3194

This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.


Contact


For enquiries about the MRI Physics Collective, please contact:

Mary Finnegan
Senior MR Physicist at the Imperial College Healthcare NHS Trust

Pete Lally
Assistant Professor in Magnetic Resonance (MR) Physics at Imperial College

Jan Sedlacik
MR Physicist at the Robert Steiner MR Unit, Hammersmith Hospital Campus