Current Standardization Activities on Dynamic Mesh Compression
- Sebastian Schwarz (Nokia Technologies)
- Marius Preda (Institut Mines Telecom)
Advances in 3D capture, modeling, and rendering have made 3D content ubiquitous across platforms and devices. Nowadays, it is possible to capture a baby’s first steps on one continent and allow the grandparents on another to see, and perhaps interact with, the child in a fully immersive experience. Achieving such realism, however, demands ever more sophisticated models, and a significant amount of data is linked to their creation and consumption. 3D meshes are widely used to represent such immersive content. A mesh is composed of polygons that describe the boundary surface of a volumetric object; optionally, vertex attributes, such as colors and normals, can be associated with the mesh vertices. A dynamic mesh sequence can require a large amount of data, since a significant amount of information changes over time. Efficient compression technologies are therefore required to store and transmit such content.
To this end, ISO/IEC JTC1 SC29, also known as MPEG, is actively working on a new mesh compression standard that directly handles dynamic meshes with time-varying connectivity and time-varying attributes. The main difference from traditional dynamic mesh representation formats, such as those used in games, is that not only the geometry of the object is dynamic but also all other components (connectivity and attributes). Such content is more realistic but requires more data. To address these challenges, five companies submitted solutions for dynamic mesh coding in response to MPEG’s Call for Proposals (CfP) in March 2022.
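To make the representation concrete, a dynamic mesh can be sketched as a time-ordered sequence of frames, each holding vertex positions, connectivity (faces), and optional per-vertex attributes. The following is a minimal, hypothetical illustration only, not any MPEG-standardized format; all names are ours:

```python
# Illustrative sketch (not an MPEG specification): a dynamic mesh in which
# geometry, connectivity, and attributes may all vary from frame to frame.
from dataclasses import dataclass, field

@dataclass
class MeshFrame:
    # Vertex positions: list of (x, y, z) tuples.
    vertices: list
    # Connectivity: each face is a tuple of vertex indices.
    faces: list
    # Optional per-vertex attributes, e.g. colors or normals.
    attributes: dict = field(default_factory=dict)

# A dynamic mesh is a time-ordered sequence of frames; note that the
# vertex count and the connectivity may differ between frames.
frame0 = MeshFrame(
    vertices=[(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)],
    faces=[(0, 1, 2)],
    attributes={"color": [(255, 0, 0), (0, 255, 0), (0, 0, 255)]},
)
frame1 = MeshFrame(
    vertices=[(0.0, 0.0, 0.1), (1.0, 0.0, 0.1),
              (0.0, 1.0, 0.1), (1.0, 1.0, 0.1)],
    faces=[(0, 1, 2), (1, 3, 2)],  # connectivity changed over time
)
sequence = [frame0, frame1]
print(len(sequence[1].faces))  # → 2
```

Because every component above can change per frame, a naive encoding grows linearly with the sequence length, which is the redundancy that dedicated dynamic mesh compression aims to remove.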
In this proposed special session, we open the floor for scientific discussion with each proponent in public, outside the confined space of international standardization. We will bring together the key players in MPEG dynamic mesh compression standardization to present their proposals. The session will be completed by an analysis paper on the objective and subjective evaluation of the CfP results, as well as an overview of the latest developments in MPEG mesh coding standardization.
Machine Learning for Immersive Content Processing
- Hadi Amirpour (University of Klagenfurt)
- Christine Guillemot (INRIA Rennes)
- Christian Timmerer (University of Klagenfurt)
Remote communication has become increasingly important, particularly since the COVID-19 crisis. However, delivering a more realistic visual experience requires more than the traditional two-dimensional (2D) interfaces we know today. Immersive media such as 360-degree video, light fields, point clouds, ultra-high definition, and high dynamic range can fill this gap. These modalities, however, face several challenges from capture to display.
Learning-based solutions show great promise and significant performance gains over traditional solutions to these challenges. In this special session, we will focus on research aimed at extending and improving learning-based architectures for immersive imaging technologies.
Biometric Recognition Explainability
- Paulo Lobato Correia (Universidade de Lisboa/Instituto de Telecomunicações)
- Chiara Galdi (EURECOM)
Biometric recognition has become a key technology in our society. It is used in multiple applications and has become part of the decision-making process in many fields, with a potential impact on, for instance, the labor market, education, online advertising systems, social media, taxation, and the justice system. While biometric recognition solutions based on machine learning are becoming popular and achieve human-like performance most of the time, in a few cases they do not, leading to unexpected system behavior on some individual examples or classes of images. This has contributed to undermining society’s trust in and acceptance of these systems and has indirectly slowed the development of these technologies, as trustworthiness is a prerequisite for artificial intelligence uptake.
This special session focuses on biometric recognition technologies based on artificial intelligence (AI), and in particular on:
- the analysis of the influencing factors relevant to the final decision, as an essential step toward understanding and improving the underlying processes involved;
- performance assessment metrics and protocols for biometric explainability.
This special session aims to collect scientific contributions that will help improve the trust in and transparency of biometric systems, with important benefits for society as a whole.
Challenges in Point Cloud Technology
- António Pinheiro (Universidade da Beira Interior/Instituto de Telecomunicações)
Plenoptic technology is considered the next frontier in multimedia. Providing a richer representation of the 3D world, plenoptic models are foreseen in a wide range of applications, notably virtual, augmented, and mixed reality, computer graphics, gaming, 3D printing, construction, manufacturing, robotics, automation, medical applications, cultural heritage, and geographical information systems, among others. Three main domains are typically considered: light fields, point clouds, and digital holography. However, as a 3D representation of information, plenoptic data usually involves huge amounts of data, requiring efficient coding for reliable management, transmission, and storage.
This special session intends to revisit the new challenges in the coding and quality evaluation of point clouds. New compression models that take advantage of the particularities of this format are required, and machine-learning-based codecs may eventually provide optimized solutions. However, the lack of suitable models for the visualization of 3D data requires extensive studies for the evaluation of compression solutions. Although great effort has recently been devoted to researching models that address these new challenges, there is still a need for models that do so reliably and efficiently.