Publications

PAIP 2019: Liver cancer segmentation challenge

Yoo Jung Kim · Hyungjoon Jang · Kyoungbun Lee · Seongkeun Park · Sung-Gyu Min · Choyeon Hong · Jeong Hwan Park · Kanggeun Lee · Jisoo Kim · Wonjae Hong · Hyun Jung · Yanling Liu · Haran Rajkumar · Mahendra Khened · Ganapathy Krishnamurthi · Sen Yang · Xiyue Wang · Chang Hee Han · Jin Tae Kwak · Jianqiang Ma · Zhe Tang · Bahram Marami · Jack Zeineh · Zixu Zhao · Pheng-Ann Heng · Rüdiger Schmitz · Frederic Madesta · Thomas Rösch · René Werner · Jie Tian · Élodie Puybareau · Matteo Bovio · Xiufeng Zhang · Yifeng Zhu · Se Young Chun · Won-Ki Jeong · Peom Park · Jinwook Choi

Liver cancer
Tumor burden
Digital pathology
Challenge
Segmentation

Pathology Artificial Intelligence Platform (PAIP) is a free research platform in support of pathological artificial intelligence (AI). The main goal of the platform is to construct a high-quality pathology learning data set that will allow greater accessibility. The PAIP Liver Cancer Segmentation Challenge, organized in conjunction with the Medical Image Computing and Computer Assisted Intervention Society (MICCAI 2019), is the first image analysis challenge to apply PAIP datasets. The goal of the challenge was to evaluate new and existing algorithms for automated detection of liver cancer in whole-slide images (WSIs). Additionally, this year's PAIP attempted to address potential future problems of AI applicability in clinical settings. Participants were asked to use analytical data and statistical metrics to evaluate the performance of automated algorithms in two different tasks: Task 1 involved liver cancer segmentation, and Task 2 involved viable tumor burden estimation. Performance on the two tasks was strongly correlated: teams that performed well on Task 1 also performed well on Task 2. After evaluation, we summarized the top 11 teams' algorithms. We then discussed the pathological implications of the images that were easily predicted for cancer segmentation and those that were challenging for viable tumor burden estimation. Of the 231 participants in the PAIP challenge, 28 teams submitted a total of 64 results. The submitted algorithms automatically segmented liver cancer in WSIs with scores of up to 0.78. The PAIP challenge was created in an effort to combat the lack of research addressing liver cancer with digital pathology. It remains unclear how the AI algorithms created during the challenge will affect clinical diagnoses. However, the dataset and evaluation metrics provided have the potential to aid the development and benchmarking of cancer diagnosis and segmentation.
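Task 2's quantity, the viable tumor burden, is the ratio of viable tumor area to whole tumor area in a slide. As an illustrative sketch (not the challenge's official evaluation code, with toy binary masks standing in for WSI annotations):

```python
def tumor_burden(viable_mask, whole_mask):
    """Viable tumor burden = viable tumor pixels / whole tumor pixels.

    Masks are 2D lists of 0/1 with identical shape (toy stand-ins for
    WSI annotation masks); returns a ratio in [0, 1].
    """
    viable = sum(v for row in viable_mask for v in row)
    whole = sum(v for row in whole_mask for v in row)
    return viable / whole if whole else 0.0

# toy 4x4 example: 8 whole-tumor pixels, 4 of them viable
whole = [[1, 1, 1, 1],
         [1, 1, 1, 1],
         [0, 0, 0, 0],
         [0, 0, 0, 0]]
viable = [[1, 1, 0, 0],
          [1, 1, 0, 0],
          [0, 0, 0, 0],
          [0, 0, 0, 0]]
print(tumor_burden(viable, whole))  # 0.5
```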

A global benchmark of algorithms for segmenting the left atrium from late gadolinium-enhanced cardiac magnetic resonance imaging

Zhaohan Xiong · Qing Xia · Zhiqiang Hu · Ning Huang · Cheng Bian · Yefeng Zheng · Sulaiman Vesal · Nishant Ravikumar · Andreas Maier · Xin Yang · Pheng-Ann Heng · Dong Ni · Caizi Li · Qianqian Tong · Weixin Si · Élodie Puybareau · Younes Khoudli · Thierry Géraud · Chen Chen · Wenjia Bai · Daniel Rueckert · Lingchao Xu · Xiahai Zhuang · Xinzhe Luo · Shuman Jia · Maxime Sermesant · Yashu Liu · Kuanquan Wang · Davide Borra · Alessandro Masci · Cristiana Corsi · Coen Vente · Mitko Veta · Rashed Karim · Chandrakanth Jayachandran Preetha · Sandy Engelhardt · Menyun Qiao · Yuanyuan Wang · Qian Tao · Marta Núñez-García · Oscar Camara · Nicolò Savioli · Pablo Lamata · Jichao Zhao

Left atrium
Convolutional neural networks
Late gadolinium-enhanced magnetic resonance imaging
Image segmentation

Segmentation of medical images, particularly late gadolinium-enhanced magnetic resonance imaging (LGE-MRI) used for visualizing diseased atrial structures, is a crucial first step for ablation treatment of atrial fibrillation. However, direct segmentation of LGE-MRIs is challenging due to the varying intensities caused by contrast agents. Since most clinical studies have relied on manual, labor-intensive approaches, automatic methods are of high interest, particularly optimized machine learning approaches. To address this, we organized the 2018 Left Atrium Segmentation Challenge using 154 3D LGE-MRIs, currently the world's largest atrial LGE-MRI dataset, and associated labels of the left atrium segmented by three medical experts, ultimately attracting the participation of 27 international teams. In this paper, extensive analysis of the submitted algorithms using technical and biological metrics was performed through subgroup analysis and hyper-parameter analysis, offering an overall picture of the major design choices of convolutional neural networks (CNNs) and practical considerations for achieving state-of-the-art left atrium segmentation. Results show that the top method achieved a Dice score of 93.2% and a mean surface-to-surface distance of 0.7 mm, significantly outperforming the prior state of the art. In particular, our analysis demonstrated that two sequentially applied CNNs, in which a first CNN performs automatic region-of-interest localization and a second CNN performs refined regional segmentation, achieved superior results compared to traditional methods and machine learning approaches containing a single CNN. This large-scale benchmarking study makes a significant step towards much-improved segmentation methods for atrial LGE-MRIs, and will serve as an important benchmark for evaluating and comparing future work in the field.
Furthermore, the findings from this study can potentially be extended to other imaging datasets and modalities, having an impact on the wider medical imaging community.
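The two headline metrics above, the Dice score and the mean surface-to-surface distance, follow directly from their definitions. The toy implementation below (2D coordinate sets standing in for 3D LGE-MRI voxel masks, with 4-neighbour boundaries as a simplifying assumption) is illustrative only, not the challenge's evaluation code:

```python
import math

def dice(a, b):
    """Dice overlap 2|A∩B|/(|A|+|B|) between two sets of voxel coordinates."""
    if not a and not b:
        return 1.0
    return 2 * len(a & b) / (len(a) + len(b))

def surface(points):
    """Boundary voxels: members with at least one 4-neighbour outside the set."""
    return {p for p in points
            if any((p[0] + dx, p[1] + dy) not in points
                   for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)))}

def mean_surface_distance(a, b):
    """Symmetric mean of nearest-surface distances between two masks."""
    sa, sb = surface(a), surface(b)
    d_ab = [min(math.dist(p, q) for q in sb) for p in sa]
    d_ba = [min(math.dist(p, q) for q in sa) for p in sb]
    return (sum(d_ab) + sum(d_ba)) / (len(d_ab) + len(d_ba))

# identical masks: Dice 1.0 and zero surface distance
square = {(x, y) for x in range(4) for y in range(4)}
print(dice(square, square), mean_surface_distance(square, square))
```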

FOANet: A focus of attention network with application to myocardium segmentation

Zhou Zhao · Nicolas Boutry · Élodie Puybareau · Thierry Géraud

In myocardium segmentation of cardiac magnetic resonance images, ambiguities often appear near the boundaries of the target domains due to tissue similarities. To address this issue, we propose a new architecture, called FOANet, which can be decomposed into three main steps: a localization step, a Gaussian-based contrast enhancement step, and a segmentation step. This architecture is supplied with a hybrid loss function that guides FOANet to learn the transformation relationship between the input image and the corresponding label in a three-level hierarchy (pixel-, patch- and map-level), which helps improve segmentation and boundary recovery. We demonstrate the efficiency of our approach on two public datasets in terms of both region and boundary segmentation.
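FOANet's exact hybrid loss is not reproduced here; the sketch below only illustrates the idea of supervising a prediction at three levels of the hierarchy, with binary cross-entropy at the pixel level and placeholder L1 terms on patch-level and map-level mean activations (the patch size and weights are hypothetical, not the paper's):

```python
import math

def bce(p, y, eps=1e-7):
    """Binary cross-entropy between prediction p and target y, both in [0, 1]."""
    p = min(max(p, eps), 1 - eps)
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

def hybrid_loss(pred, label, patch=2, weights=(1.0, 1.0, 1.0)):
    """Illustrative three-level loss (placeholder terms, not FOANet's exact loss):
    pixel-level BCE + patch-level L1 on patch means + map-level L1 on global means.
    pred and label are 2D lists with dimensions divisible by `patch`."""
    h, w = len(pred), len(pred[0])
    # pixel level: average BCE over all positions
    pixel = sum(bce(pred[i][j], label[i][j])
                for i in range(h) for j in range(w)) / (h * w)
    # patch level: L1 distance between mean activations of corresponding patches
    def patch_means(img):
        return [sum(img[i + di][j + dj]
                    for di in range(patch) for dj in range(patch)) / patch ** 2
                for i in range(0, h, patch) for j in range(0, w, patch)]
    pm, lm = patch_means(pred), patch_means(label)
    patch_term = sum(abs(p - y) for p, y in zip(pm, lm)) / len(pm)
    # map level: L1 distance between global mean activations
    def mean(img):
        return sum(sum(row) for row in img) / (h * w)
    map_term = abs(mean(pred) - mean(label))
    return weights[0] * pixel + weights[1] * patch_term + weights[2] * map_term

label = [[1, 1],
         [0, 0]]
print(hybrid_loss(label, label))  # close to 0 for a perfect prediction
```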

Do not treat boundaries and regions differently: An example on heart left atrial segmentation

Zhou Zhao · Nicolas Boutry · Élodie Puybareau · Thierry Géraud

Atrial fibrillation is the most common heart rhythm disease. Due to a lack of understanding of the underlying atrial structures, current treatments are still not satisfying. Recently, with the popularity of deep learning, many segmentation methods based on fully convolutional networks have been proposed to analyze atrial structures, especially from late gadolinium-enhanced magnetic resonance imaging. However, two problems still occur: 1) segmentation results include the atrial-like background; 2) boundaries are very hard to segment. Most segmentation approaches design a specific network that mainly focuses on the regions, to the detriment of the boundaries. Therefore, this paper proposes an attention fully convolutional network framework based on the ResNet-101 architecture, which focuses on boundaries as much as on regions. The additional attention module makes the network pay more attention to regions and thus reduces the impact of the misleading similarity of neighboring tissues. We also use a hybrid loss composed of a region loss and a boundary loss to treat boundaries and regions at the same time. We demonstrate the efficiency of the proposed approach on the MICCAI 2018 Atrial Segmentation Challenge public dataset.

Stacked and parallel U-nets with multi-output for myocardial pathology segmentation

Zhou Zhao · Nicolas Boutry · Élodie Puybareau

In the field of medical imaging, different image modalities carry different information, helping practitioners make diagnoses, perform follow-up, etc. To better analyze images, combining multi-modality information has become a trend. This paper proposes a cascaded U-Net framework and uses three different modalities (the late gadolinium enhancement (LGE) CMR sequence, the balanced Steady-State Free Precession (bSSFP) cine sequence, and T2-weighted CMR) to segment the myocardium, scar, and edema in the context of the MICCAI 2020 myocardial pathology segmentation combining multi-sequence CMR Challenge dataset (MyoPS 2020). We evaluate the proposed method with 5-fold cross-validation on the MyoPS 2020 dataset.

Practical “paritizing” of Emerson–Lei automata

Florian Renkin · Alexandre Duret-Lutz · Adrien Pommellet

We introduce a new algorithm that takes a Transition-based Emerson–Lei Automaton (TELA), that is, an $\omega$-automaton whose acceptance condition is an arbitrary Boolean formula on sets of transitions to be seen infinitely or finitely often, and converts it into a Transition-based Parity Automaton (TPA). To reduce the size of the output TPA, the algorithm combines and optimizes two procedures based on a latest appearance record principle, and introduces a partial degeneralization. Our motivation is to use this algorithm to improve our LTL synthesis tool, where producing deterministic parity automata is an intermediate step.
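A TELA acceptance condition is a Boolean formula over atoms Inf(i) ("acceptance mark i is seen infinitely often") and Fin(i) ("mark i is seen only finitely often"), so whether a run is accepting can be decided from the set of marks it visits infinitely often. The evaluator below uses a hypothetical nested-tuple encoding chosen for illustration, not any tool's actual representation:

```python
def accepts(formula, inf_marks):
    """Evaluate an Emerson-Lei acceptance formula against the set of
    acceptance marks seen infinitely often along a run.

    Formulas are nested tuples: ('Inf', i), ('Fin', i),
    ('and', f, g), ('or', f, g), plus the constants True and False.
    """
    if formula is True or formula is False:
        return formula
    op = formula[0]
    if op == 'Inf':
        return formula[1] in inf_marks
    if op == 'Fin':
        return formula[1] not in inf_marks
    if op == 'and':
        return accepts(formula[1], inf_marks) and accepts(formula[2], inf_marks)
    if op == 'or':
        return accepts(formula[1], inf_marks) or accepts(formula[2], inf_marks)
    raise ValueError(f'unknown operator: {op}')

# a Rabin-like pair: Fin(0) & Inf(1)
rabin = ('and', ('Fin', 0), ('Inf', 1))
print(accepts(rabin, {1}))      # True: mark 1 recurs, mark 0 does not
print(accepts(rabin, {0, 1}))   # False: mark 0 also recurs
```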

A two-stage temporal-like fully convolutional network framework for left ventricle segmentation and quantification on MR images

Zhou Zhao · Nicolas Boutry · Élodie Puybareau · Thierry Géraud

Automatic segmentation of the left ventricle (LV) of a living human heart in a magnetic resonance (MR) image (2D+t) makes it possible to measure clinically significant indices such as the regional wall thicknesses (RWT), cavity dimensions, cavity and myocardium areas, and cardiac phase. Here, we propose a novel framework made of a sequence of two fully convolutional networks (FCN). The first is a modified temporal-like VGG16 (the "localization network") used to roughly localize the LV (filled-in) epicardium position in each MR volume. The second FCN is also a modified temporal-like VGG16, but devoted to segmenting the LV myocardium and cavity (the "segmentation network"). We evaluate the proposed method with 5-fold cross-validation on the MICCAI 2019 LV Full Quantification Challenge dataset. For the network used to localize the epicardium, we obtain an average Dice index of 0.8953 on the validation set. For the segmentation network, we obtain an average Dice index of 0.8664 on the validation set (with data augmentation). The mean absolute errors (MAE) of the average cavity and myocardium areas, dimensions, and RWT are 114.77 mm^2, 0.9220 mm, and 0.9185 mm, respectively. The computation time of the pipeline is less than 2 s for an entire 3D volume. The error rate of phase classification is 7.6364%, which indicates that the proposed approach shows promising performance in estimating all these parameters.

Topological properties of the first non-local digitally well-composed interpolation on $n$-D cubical grids

Nicolas Boutry · Laurent Najman · Thierry Géraud

In discrete topology, we like digitally well-composed (shortly DWC) interpolations because they remove pinches in cubical images. Usual well-composed interpolations are local and sometimes self-dual (they treat dark and bright components in the image in the same way). In our case, we are particularly interested in $n$-D self-dual DWC interpolations to obtain a purely self-dual tree of shapes. However, it has been proved that we cannot have an $n$-D interpolation that is at the same time local, self-dual, and well-composed. By removing the locality constraint, we have obtained an $n$-D interpolation with many properties in practice: it is self-dual, DWC, and in-between (this last property means that it preserves the contours). Since we have not published the proofs of these results before, we first provide the proofs of the last two properties here (DWCness and in-betweenness) and a sketch of the proof of self-duality (the complete proof of self-duality requires more material and will come later). Some theoretical and practical results are given.

Equivalence between digital well-composedness and well-composedness in the sense of Alexandrov on $n$-D cubical grids

Nicolas Boutry · Laurent Najman · Thierry Géraud

Among the different flavors of well-composedness on cubical grids, two of them, called respectively Digital Well-Composedness (DWCness) and Well-Composedness in the sense of Alexandrov (AWCness), are known to be equivalent in 2D and in 3D. The former means that a cubical set does not contain critical configurations, while the latter means that the boundary of a cubical set is made of a disjoint union of discrete surfaces. In this paper, we prove that this equivalence holds in $n$-D, which is of interest because today's images are not only 2D or 3D but also 4D and beyond. The main benefit of this proof is that the topological properties available for AWC sets, mainly their separation properties, are also true for DWC sets, and the properties of DWC sets are also true for AWC sets: a locally computable Euler number, equivalent connectivities from a local or global point of view, and so on. This result is also true for gray-level images thanks to cross-section topology, which means that the sets of shapes of DWC gray-level images form a tree like those of AWC gray-level images.
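In 2D, digital well-composedness amounts to forbidding the two diagonal "critical configurations" in every 2x2 block of a binary image (the two foreground squares would meet only at a corner, creating a pinch). A minimal checker illustrating the criterion:

```python
def is_digitally_well_composed_2d(img):
    """True iff no 2x2 block of the binary image is one of the two diagonal
    critical configurations [[1,0],[0,1]] or [[0,1],[1,0]] (2D DWC criterion)."""
    h, w = len(img), len(img[0])
    for i in range(h - 1):
        for j in range(w - 1):
            block = (img[i][j], img[i][j + 1],
                     img[i + 1][j], img[i + 1][j + 1])
            if block in ((1, 0, 0, 1), (0, 1, 1, 0)):
                return False
    return True

pinch = [[1, 0],
         [0, 1]]   # two squares meeting only at a corner: a "pinch"
solid = [[1, 1],
         [0, 1]]
print(is_digitally_well_composed_2d(pinch))  # False
print(is_digitally_well_composed_2d(solid))  # True
```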

3BI-ECC: A decentralized identity framework based on blockchain technology and elliptic curve cryptography

Daniel Maldonado-Ruiz · Jenny Torres · Nour El Madhoun

blockchain
elliptic curve cryptography
self-generated certificates
self-generated identity
cybersecurity

Most authentication protocols assume the existence of a Trusted Third Party (TTP) in the form of a Certificate Authority or an authentication server. The main objective of this research is to present an autonomous solution where users can store their credentials without depending on TTPs. For this, the use of an autonomous network is imperative, where users can use their uniqueness in order to identify themselves. We propose the framework "Three Blockchains Identity Management with Elliptic Curve Cryptography (3BI-ECC)". Our proposed framework is a decentralized identity management system where users' identities are self-generated.
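The 3BI-ECC protocol itself is not reproduced here; as a toy illustration of self-generated ECC credentials, the sketch below derives a public key by double-and-add scalar multiplication on a small textbook curve and hashes it into an identifier. The curve (y^2 = x^3 + 2x + 2 over F_17, base point (5, 1) of order 19) and the identifier scheme are illustrative choices only; real deployments use standardized curves and audited libraries.

```python
import hashlib

# Toy curve y^2 = x^3 + 2x + 2 over F_17, base point G of order 19.
# For illustration only -- real systems use curves like secp256k1 or Ed25519.
P, A, B = 17, 2, 2
G = (5, 1)

def point_add(p, q):
    """Add two points on the toy curve (None is the point at infinity)."""
    if p is None:
        return q
    if q is None:
        return p
    (x1, y1), (x2, y2) = p, q
    if x1 == x2 and (y1 + y2) % P == 0:
        return None                      # p + (-p) = infinity
    if p == q:
        lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, P) % P   # tangent slope
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, P) % P          # chord slope
    x3 = (lam * lam - x1 - x2) % P
    return (x3, (lam * (x1 - x3) - y1) % P)

def scalar_mult(k, point):
    """Double-and-add: compute k * point."""
    result, addend = None, point
    while k:
        if k & 1:
            result = point_add(result, addend)
        addend = point_add(addend, addend)
        k >>= 1
    return result

def self_generated_identity(private_key):
    """Derive a public key from a private scalar and hash it into a
    self-generated identifier (a hypothetical scheme for illustration)."""
    pub = scalar_mult(private_key, G)
    ident = hashlib.sha256(repr(pub).encode()).hexdigest()[:16]
    return pub, ident

pub, ident = self_generated_identity(7)
print(pub, ident)
```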