Alejandro Frangi
University of Manchester, UK
In Silico Regulatory Science for the Digital Era
Abstract
Novel medical technologies are being introduced at unprecedented rates, demanding scientific evidence of their safety and efficacy at matching pace to ensure patient safety and benefit. After success in in-vitro and in-vivo studies, products proceed to clinical trials assessing their use in humans. Predicting low-frequency side effects is difficult because they may not become apparent until many patients have adopted the treatment. When medical devices fail at these later stages, the financial losses can be catastrophic. Testing on many people is costly, lengthy, and sometimes infeasible (e.g., for paediatric patients, rare diseases, and underrepresented or hard-to-reach ethnic groups).
Computational Medicine underpins in-silico trials (IST), i.e., computer-based trials of medical products performed on populations of digital twins (also known as virtual patients). Computer models and simulations are used to conceive, develop, and assess devices with the intended clinical outcome explicitly optimised from the outset (a priori) instead of tested on humans (a posteriori). This includes testing for potential risks to patients (side effects) and exhaustively exploring medical device failure modes before the device enters human clinical trials. In-silico evidence is still consolidating but is poised to transform how health and life sciences R&D and regulation are conducted. The UK can take a leadership position in in-silico trials, which would cement its position as a global leader in health and life sciences, help drive the UK economy, and provide UK citizens with early access to innovative health products.
In this talk, I will introduce the attendees to this world of new possibilities and summarise the progress this new paradigm has made across academia, industry, regulators, and policymakers. A recent landscape report is a helpful companion to this talk: Frangi AF, et al. Unlocking the Power of Computational Modelling and Simulation Across the Product Lifecycle in Life Sciences: A UK Landscape Report. InSilicoUK Pro-Innovation Regulations Network, 2023. doi:10.5281/zenodo.8325274.
Speaker’s Bio
Professor Alejandro F Frangi FREng FIEEE FSPIE FMICCAI is the Bicentennial Turing Chair in Computational Medicine at the University of Manchester, Manchester, UK, with joint appointments in the Schools of Computer Science and Health Sciences. He is Director of the Christabel Pankhurst Institute for health technologies research and innovation. He is also the Royal Academy of Engineering Chair in Emerging Technologies, with a focus on Precision Computational Medicine for in-silico trials of medical devices. He is an Alan Turing Institute Fellow. He was recently awarded an Advanced Grant by the European Research Council (ERC) under the Computer Science and Informatics (PE6) panel. He also leads the InSilicoUK Pro-Innovation Regulations Network.
Professor Frangi’s primary research interests lie at the crossroads of medical image analysis and modelling, emphasising machine learning (phenomenological models) and computational physiology (mechanistic models). He is particularly interested in statistical methods applied to population imaging and in-silico clinical trials. His highly interdisciplinary work has been translated to the cardiovascular, musculoskeletal, and neuroscience domains.
Alessandro Foi
Tampere University, Finland
Noise in Imaging: Focus on Correlation and Nonlinearity
Abstract
Understanding and characterizing noise is a foundational part of the design and analysis of an imaging system, and it is also essential for the development of the corresponding image processing modules. In this talk we consider broad classes of heteroskedastic image observations, focusing on noise correlation, noise anisotropy, and the nonlinear effects that can arise when capturing at low signal-to-noise ratio or when maximizing the coverage of a narrow dynamic range. We demonstrate unexpected and often counter-intuitive phenomena which, unless suitably modeled and accounted for, can significantly disrupt noise analysis and other operations in an image processing pipeline. Instances of these phenomena are shown across imaging and image processing systems used in biomedical, defense, security, and consumer applications, including X-ray tomography, infrared thermography, confocal fluorescence microscopy, and on-demand video streaming.
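As a concrete illustration of the heteroskedastic, nonlinear observations discussed above, here is a minimal NumPy sketch of a clipped Poisson-Gaussian model, a standard model in this line of work; the parameter names and values are illustrative assumptions, not figures from the talk:

```python
import numpy as np

def clipped_poisson_gaussian(y, alpha=0.01, sigma=0.02, rng=None):
    """Heteroskedastic observation of an ideal image y in [0, 1]:
    signal-dependent Poisson (shot) noise with gain alpha, additive Gaussian
    read-out noise with standard deviation sigma, and clipping to the
    sensor's [0, 1] range. Before clipping, the noise variance follows the
    affine law var(z | y) = alpha * y + sigma**2."""
    rng = np.random.default_rng(rng)
    shot = alpha * rng.poisson(y / alpha)        # variance alpha * y (signal-dependent)
    read = sigma * rng.standard_normal(y.shape)  # variance sigma**2 (signal-independent)
    return np.clip(shot + read, 0.0, 1.0)        # narrow dynamic range -> nonlinearity

# Clipping is the nonlinearity: near the ends of the dynamic range it biases
# both the observed mean and the observed variance, so an affine variance fit
# (or a variance-stabilizing transform) calibrated on unclipped data becomes
# inaccurate there unless the clipping is explicitly modeled.
y = np.full((256, 256), 0.98)           # bright, nearly saturated patch
z = clipped_poisson_gaussian(y, rng=0)
print(z.mean(), z.std())                # both biased low vs. 0.98 and (alpha*y + sigma**2)**0.5
```

Spatial correlation arises in a similar spirit once such noise passes through demosaicking, resampling, or compression, which is why a raw-domain model alone does not describe the output of the full pipeline.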
Speaker’s Bio
Alessandro Foi is Professor of Signal Processing at Tampere University (TAU), Finland. He leads the Signal and Image Restoration group and is the director of the TAU Imaging Research Platform.
He received the M.Sc. degree in Mathematics from the Università degli Studi di Milano, Italy, in 2001, the Ph.D. degree in Mathematics from the Politecnico di Milano in 2005, and the D.Sc.Tech. degree in Signal Processing from Tampere University of Technology, Finland, in 2007.
His research interests include mathematical and statistical methods for signal processing, functional and harmonic analysis, and computational modeling of the human visual system. His work focuses on spatially adaptive (anisotropic, nonlocal) algorithms for the restoration and enhancement of digital images, on noise modeling for imaging devices, and on the optimal design of statistical transformations for the stabilization, normalization, and analysis of random data.
He is the Editor-in-Chief of the IEEE Transactions on Image Processing.
He previously served as a Senior Area Editor for the IEEE Transactions on Computational Imaging and as an Associate Editor for the IEEE Transactions on Image Processing, the SIAM Journal on Imaging Sciences, and the IEEE Transactions on Computational Imaging.
Mohamed Deriche
Ajman University, UAE
Evaluating Multimedia Content Quality in the Age of Generative AI
Abstract
In an era marked by rapid technological advancement, the automatic evaluation of multimedia content, spanning audio, images, and videos, has become integral to machine learning and computer vision-based multimedia systems. Despite the high correlation between current objective multimedia quality metrics and subjective scores, several challenges persist: disparities in metric performance across datasets and distortion types, the handling of multiple simultaneous distortions, run-time and memory constraints, and the need for application-specific metrics.
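The agreement between an objective metric and subjective scores is conventionally quantified by Pearson's linear correlation (PLCC) and Spearman's rank-order correlation (SROCC) over a set of rated stimuli. A minimal sketch, with made-up scores purely for illustration:

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

# Hypothetical data for six distorted images: an objective metric's outputs
# (here imagined as PSNR values in dB) and the corresponding mean opinion
# scores (MOS) collected from human raters.
metric_scores = np.array([42.1, 35.7, 28.3, 31.9, 24.6, 38.4])
mos = np.array([4.5, 3.8, 2.1, 2.9, 1.7, 4.1])

plcc, _ = pearsonr(metric_scores, mos)    # linear agreement (prediction accuracy)
srocc, _ = spearmanr(metric_scores, mos)  # rank agreement (prediction monotonicity)
print(f"PLCC = {plcc:.3f}, SROCC = {srocc:.3f}")
```

A metric that scores well by these measures on one dataset can still rank stimuli poorly on another distortion type, which is exactly the cross-dataset disparity noted above.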
This presentation addresses the imperative of ensuring and enhancing multimedia content quality. We explore the confluence of multimedia content evaluation, advancements in AI/ML, and the transformative capabilities of generative AI. The discussion delves into methodologies and frameworks employed to assess the quality of multimedia content across diverse platforms and formats.
We will begin by highlighting the contemporary challenges in evaluating multimedia content, considering factors such as visual aesthetics, perceptual quality, and user engagement. Emphasis will be placed on the dynamic nature of multimedia, where traditional evaluation methods may fall short in capturing the nuances of evolving content types.
The talk then shifts focus to the promising role of generative AI tools in overcoming these challenges. We provide an overview of the latest advancements in generative models, such as Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs), elucidating their applicability in both generating and evaluating high-quality multimedia content.
The presentation concludes with a forward-looking perspective, addressing ethical considerations, potential pitfalls, and future directions in the symbiotic relationship between multimedia content evaluation and generative AI. The overarching goal is to furnish a roadmap for unlocking the potential of generative AI in the evaluation and enhancement of multimedia content, offering valuable insights for industry professionals, researchers, and enthusiasts.
Speaker’s Bio
Mohamed Deriche received his B.Sc. degree in electrical engineering from the National Polytechnic School, Algeria, and his Ph.D. degree in signal processing from the University of Minnesota in 1994. He worked at Queensland University of Technology, Australia, before joining King Fahd University of Petroleum and Minerals (KFUPM) in Dhahran, Saudi Arabia, where he led the signal processing group. He has published more than 300 papers in multimedia signal and image processing. In 2021, he joined Ajman University to promote the AIRC and the new Master's in AI within the College of Engineering and IT. He has delivered numerous invited talks and chaired several conferences, including GlobalSIP-MPSP, IEEE Gulf (GCC), Image Processing Tools and Applications, and TENCON (the IEEE Region 10 conference). He has supervised more than 50 M.Sc. and Ph.D. students and is a recipient of the IEEE Third Millennium Medal. He also received the Shauman Best Researcher Award, and both the Excellence in Research and Excellence in Teaching Awards while at KFUPM and at Ajman University. His research interests cover signal and image processing, spanning theory and models as well as applications in multimedia, biomedical, seismic, and language processing.
Slava Voloshynovskiy
University of Geneva, Switzerland
Vision Through the Information-Theoretic AI Lens: Variational and Contrastive Techniques in Explainable AI
Abstract
In this keynote, we delve into the transformative impact of the information-theoretic framework on enhancing explainability in the field of computer vision, with a special focus on its application to autoencoders, self-supervised learning systems, and generative models. Central to our exploration is the decomposition of mutual information into variational and contrastive components, a methodology that not only deepens our understanding of AI systems but also facilitates the development of more transparent and interpretable models in vision applications. This approach is particularly valuable for analysing and designing existing autoencoder architectures, modern self-supervised learning, and generative models, aligning them more closely with the principles of explainable AI.

The presentation will highlight the significance of designing advanced, explainable sampling schemes, especially in specialized areas such as astronomy and medical imaging. These schemes play a vital role in enhancing the accuracy and effectiveness of applications in these fields while keeping their operational mechanisms clear and comprehensible. Furthermore, the talk will cover the strategy of conducting data analysis directly in the measurement domain, i.e., the uv-space of radio astronomy or the k-space of magnetic resonance imaging, both instances of the Fourier domain. This approach not only optimizes tasks like classification, source estimation, anomaly detection, and disease identification but also adds a layer of transparency to these processes. Marrying complex theoretical constructs with their practical applications will contribute to the advancement of explainable, reliable, and effective technology across diverse fields, ranging from astronomy to medical imaging.
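For concreteness, one standard way to make such a decomposition precise (a sketch drawn from the mutual-information estimation literature, not necessarily the exact formulation used in the talk) is to bound the mutual information I(X;Z) between data X and representation Z from below in two complementary ways: variationally, through a decoder q(x|z), and contrastively, through a critic f scored over K paired samples:

```latex
% Variational (decoder-based) lower bound (Barber--Agakov):
I(X;Z) = H(X) - H(X \mid Z) \;\geq\; H(X) + \mathbb{E}_{p(x,z)}\big[\log q(x \mid z)\big]

% Contrastive (critic-based) lower bound (InfoNCE), capped at \log K:
I(X;Z) \;\geq\; I_{\mathrm{NCE}} = \mathbb{E}\left[ \frac{1}{K} \sum_{i=1}^{K}
  \log \frac{e^{f(x_i, z_i)}}{\frac{1}{K} \sum_{j=1}^{K} e^{f(x_i, z_j)}} \right],
\qquad I_{\mathrm{NCE}} \leq \log K
```

Maximizing the first bound recovers reconstruction-style (autoencoder) objectives, while maximizing the second recovers contrastive self-supervised objectives, which is one way such a decomposition ties together the model families mentioned above.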
Speaker’s Bio
Slava Voloshynovskiy (IEEE SM’11), a Professor at the University of Geneva’s Department of Computer Science, leads the Stochastic Information Processing group. He earned his Ph.D. in Electrical Engineering from State University Lvivska Polytechnika, Ukraine, after completing his Radio Engineer degree at Lviv Polytechnic Institute. His research focuses on image processing, multimedia security, privacy, and machine learning. Voloshynovskiy has published over 350 journal and conference papers and holds twelve patents. He served as an Associate and Senior Editor for IEEE journals and was an active member of various IEEE committees. Earlier in his career, he was a visiting scholar at the University of Illinois at Urbana-Champaign. In addition to his academic role, he has experience as a consultant in multimedia security and co-founded several companies specializing in copyright and brand protection. He was also the recipient of the Swiss National Science Foundation Professorship Grant in 2003.