Publications

Improving parallel state-space exploration using genetic algorithms

Étienne Renault

The verification of temporal properties against a given system may require the exploration of its full state space. In explicit model checking, this exploration uses a Depth-First Search (DFS) and can be achieved with multiple randomized threads to increase performance. Nonetheless, the topology of the state space and the exploration order can cap the speedup beyond a certain number of threads. This paper proposes a new technique that aims to tackle this limitation by using genetic algorithms to generate artificial initial states. Threads are then launched from these states and thus explore different parts of the state space. Our prototype implementation runs 10% faster than state-of-the-art algorithms. These results demonstrate that this novel approach is worth considering as a way to overcome existing limitations.
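
As a rough illustration of the idea, here is a minimal Python sketch of a genetic algorithm evolving candidate seed states for parallel DFS workers. The bit-vector encoding and the Hamming-distance fitness are assumptions made for the example, not the paper's actual encoding or fitness function.

```python
import random

STATE_LEN = 32   # hypothetical: candidate states encoded as bit-vectors

def fitness(state, init_state):
    # Hypothetical fitness: Hamming distance from the real initial
    # state, to favour seeds landing in distant parts of the space.
    return sum(a != b for a, b in zip(state, init_state))

def crossover(a, b):
    cut = random.randrange(1, STATE_LEN)
    return a[:cut] + b[cut:]

def mutate(state, rate=0.05):
    return [bit ^ 1 if random.random() < rate else bit for bit in state]

def evolve(init_state, pop_size=64, generations=50, n_threads=8):
    pop = [[random.randint(0, 1) for _ in range(STATE_LEN)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda s: fitness(s, init_state), reverse=True)
        parents = pop[:pop_size // 2]
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    pop.sort(key=lambda s: fitness(s, init_state), reverse=True)
    return pop[:n_threads]   # one artificial initial state per thread

seeds = evolve([0] * STATE_LEN)   # each seed would start one DFS worker
```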

The challenge of cerebral magnetic resonance imaging in neonates: A new method using mathematical morphology for the segmentation of structures including diffuse excessive high signal intensities

Yongchao Xu · Baptiste Morel · Sonia Dahdouh · Élodie Puybareau · Alessio Virzì · Hélène Urien · Thierry Géraud · Catherine Adamsbaum · Isabelle Bloch

Preterm birth is a multifactorial condition associated with increased morbidity and mortality. Diffuse excessive high signal intensity (DEHSI) has recently been described on T2-weighted MR sequences in this population and is thought to be associated with neuropathologies. To date, no robust and reproducible method to assess the presence of white matter hyperintensities has been developed, perhaps explaining the current controversy over their prognostic value. The aim of this paper is to propose a new semi-automated framework to detect DEHSI on neonatal brain MR images, which have a particular pattern due to the physiological lack of complete myelination of the white matter. A novel method for semi-automatic segmentation of neonatal brain structures and DEHSI, based on mathematical morphology and on max-tree representations of the images, is thus described. It is a mandatory first step to identify and clinically assess homogeneous cohorts of neonates for DEHSI and/or the volume of any other segmented structure. Implemented in a user-friendly interface, the method makes it straightforward to select relevant markers of the structures to be segmented and, if needed, to apply manual corrections. This method responds to the increasing need to provide medical experts with semi-automatic tools for image analysis, and overcomes the limitations of visual analysis alone, which is prone to subjectivity and variability. Experimental results demonstrate that the method is accurate, with excellent reproducibility and very few manual corrections needed. Although the method was initially intended for images acquired at 1.5T, which corresponds to usual clinical practice, preliminary results on images acquired at 3T suggest that the proposed approach can be generalized.
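
For readers unfamiliar with the underlying representation, here is a minimal sketch of max-tree-based attribute filtering using scikit-image. The paper uses its own framework, and the synthetic image, connectivity, and area threshold below are illustrative assumptions only.

```python
import numpy as np
from skimage.morphology import max_tree, area_opening

image = np.zeros((64, 64), dtype=np.uint8)
image[10:14, 10:14] = 120   # small bright structure (16 pixels)
image[30:54, 30:54] = 100   # large bright structure (576 pixels)

# The max-tree encodes the inclusion of upper threshold sets:
# parent[i] holds the flat index of the parent pixel of pixel i.
parent, traverser = max_tree(image, connectivity=1)

# An area opening prunes bright components smaller than the threshold,
# the kind of criterion a user-selected marker could drive.
filtered = area_opening(image, area_threshold=64)
assert filtered[11, 11] == 0 and filtered[40, 40] == 100
```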

Reactive synthesis from LTL specification with Spot

Thibaud Michaud · Maximilien Colange

We present ltlsynt, a new tool for reactive synthesis from LTL specifications. It relies on the efficiency of Spot to translate the input LTL specification into a deterministic parity automaton. The latter yields a parity game, which we solve with Zielonka's recursive algorithm. The approach taken in ltlsynt was widely believed to be impractical, due to the double-exponential size of the parity game and to the open status of the exact complexity of solving parity games. ltlsynt ranked second in its track of the 2017 edition of the SYNTCOMP competition. This demonstrates the practical applicability of the parity game approach when backed by efficient manipulations of ω-automata, such as the ones provided by Spot. We present our approach and report on our experimental evaluation of ltlsynt to better understand its strengths and weaknesses.
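
Zielonka's recursive algorithm itself is short; below is a minimal Python sketch on an explicit parity game (ltlsynt works on automata built by Spot, so the dictionary encoding here is purely illustrative). A game maps each vertex to its owner, its priority, and its successors; player p wins a play if the maximal priority seen infinitely often has parity p.

```python
def attractor(game, player, target):
    """Vertices from which `player` can force the play into `target`."""
    attr = set(target)
    changed = True
    while changed:
        changed = False
        for v, (owner, _, succs) in game.items():
            if v in attr:
                continue
            if (owner == player and any(s in attr for s in succs)) or \
               (owner != player and all(s in attr for s in succs)):
                attr.add(v)
                changed = True
    return attr

def restrict(game, removed):
    return {v: (o, pr, [s for s in succs if s not in removed])
            for v, (o, pr, succs) in game.items() if v not in removed}

def zielonka(game):
    """Return (W0, W1), the winning regions of players 0 and 1."""
    if not game:
        return set(), set()
    d = max(prio for _, prio, _ in game.values())
    p = d % 2                      # player favoured by the top priority
    top = {v for v, (_, prio, _) in game.items() if prio == d}
    a = attractor(game, p, top)
    w = list(zielonka(restrict(game, a)))
    if not w[1 - p]:
        w[p] = set(game)           # player p wins everywhere
        w[1 - p] = set()
    else:
        b = attractor(game, 1 - p, w[1 - p])
        w2 = list(zielonka(restrict(game, b)))
        w = [set(), set()]
        w[p] = w2[p]
        w[1 - p] = w2[1 - p] | b
    return w[0], w[1]

# Toy game: on the only cycle u -> v -> u the top priority is 2
# (even), so player 0 wins from everywhere.
game = {'u': (0, 2, ['v']), 'v': (1, 1, ['u'])}
print(zielonka(game))   # ({'u', 'v'}, set())
```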

A formally-proved algorithm to compute the correct average of decimal floating-point numbers

Sylvie Boldo · Florian Faissole · Vincent Tourneur

Some modern processors include decimal floating-point units, with a conforming implementation of the IEEE-754 2008 standard. Unfortunately, many algorithms from the computer arithmetic literature are no longer correct when computations are done in radix 10. This is in particular the case for the computation of the average of two floating-point numbers. Several radix-2 algorithms are available, including one that provides the correct rounding, but none of them holds in radix 10. This paper presents a new radix-10 algorithm that computes the correctly-rounded average. To guarantee a higher level of confidence, we also provide a Coq formal proof of our theorems, which takes gradual underflow into account. Note that our formal proof was generalized to ensure this algorithm is correct when computations are done in any even radix.
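
To see why the naive decimal average is not correctly rounded, here is a small illustration using Python's decimal module. It demonstrates the problem, not the paper's algorithm: with 3 significant digits, rounding the sum before halving can be off by one ulp, and, unlike radix 2, even the division by two can round.

```python
from decimal import Decimal, getcontext

getcontext().prec = 3                 # work with 3 significant digits
x, y = Decimal('100'), Decimal('3.11')

naive = (x + y) / 2                   # sum rounds to 103, halves to 51.5

getcontext().prec = 30                # emulate an exact computation...
exact = (x + y) / 2                   # ...the true average is 51.555
getcontext().prec = 3
correct = +exact                      # unary + re-rounds: 51.6

print(naive, correct)                 # 51.5 51.6 -> off by one ulp

# Unlike radix 2, halving alone can also round in radix 10:
print(Decimal('999') / 2)             # 500, since 499.5 needs 4 digits
```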

A linear-complexity algorithm for computing the tree of shapes

Edwin Carlinet · Sébastien Crozet · Thierry Géraud

The tree of shapes (ToS) is a hierarchical morphological representation of an image that encodes the inclusion of its level lines. It is invariant to certain changes in the image, which makes it an ideal tool for developing pattern recognition applications. In this article, we propose a method to turn its construction into a Max-tree computation. The latter has been widely studied in recent years, and efficient algorithms (including some parallel ones) already exist. We also propose an optimization that speeds up its computation in the classical case of 2D images. The result is a simple and efficient algorithm that runs in time linear in the number of pixels, has a low memory footprint, and outperforms state-of-the-art algorithms.
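
The Max-tree computation that the construction is reduced to is classically done by sorting pixels and merging with union-find. Below is a minimal pure-Python sketch of such a Berger-style construction (simplified: 4-connectivity, no canonicalization); it is background material, not the optimized algorithm of the paper.

```python
import numpy as np

def max_tree_parents(image):
    """Non-canonical max-tree of a 2D image: the parent array encodes
    the inclusion of the upper level sets."""
    h, w = image.shape
    flat = image.ravel()
    order = np.argsort(flat, kind='stable')[::-1]   # brightest first
    parent = np.full(flat.size, -1)
    root = np.full(flat.size, -1)                   # union-find forest
    visited = np.zeros(flat.size, dtype=bool)

    def find(x):                                    # with path halving
        while root[x] != x:
            root[x] = root[root[x]]
            x = root[x]
        return x

    for p in order:
        parent[p] = root[p] = p
        visited[p] = True
        y, x = divmod(int(p), w)
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and visited[ny * w + nx]:
                r = find(ny * w + nx)               # root of that component
                if r != p:
                    parent[r] = root[r] = p         # attach it below p
    return parent.reshape(image.shape)

img = np.array([[0, 2, 2],
                [0, 3, 2],
                [0, 0, 1]], dtype=np.uint8)
print(max_tree_parents(img))
```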

White matter hyperintensity segmentation in a few seconds using a convolutional neural network and transfer learning

Élodie Puybareau · Yongchao Xu · Joseph Chazalon · Isabelle Bloch · Thierry Géraud

In this article, we propose a fast, automatic method for segmenting white matter hyperintensities (WMH) in 3D brain MR images, using a fully convolutional network (FCN) and transfer learning. The FCN is the Visual Geometry Group (VGG) neural network, pre-trained on ImageNet for natural image classification and fine-tuned on the training data of the MICCAI WMH challenge. For each slice of the volume to segment, we consider three images: the T1 acquisition, the FLAIR acquisition, and the result of a morphological operator applied to the FLAIR, the top-hat, which highlights small structures of high intensity. These three 2D images are stacked to form a 2D three-channel image, interpreted as a color image, which is then passed to the FCN to obtain the 2D segmentation of the corresponding slice. All slices are processed this way to form the 3D output segmentation. With this technique, segmenting the WMH of a 3D brain volume takes about 10 seconds, pre-processing included. Our technique ranked 6th out of 20 participants in the MICCAI WMH challenge.
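
Here is a minimal sketch of the slice-wise input construction described above, assuming aligned T1 and FLAIR volumes; the function name and the structuring-element size are assumptions, not the authors' exact settings.

```python
import numpy as np
from skimage.morphology import white_tophat, disk

def make_fcn_inputs(t1_vol, flair_vol):
    """Turn two aligned 3D volumes into per-slice 3-channel images:
    (T1, FLAIR, top-hat of FLAIR), read as a color image by the FCN."""
    for t1_slice, flair_slice in zip(t1_vol, flair_vol):
        tophat = white_tophat(flair_slice, disk(3))  # small bright spots
        yield np.dstack([t1_slice, flair_slice, tophat])

# Each yielded (H, W, 3) image is segmented independently; stacking
# the 2D outputs rebuilds the 3D segmentation.
```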

Augmented songbook: An augmented reality educational application for raising music awareness

Marçal Rusiñol · Joseph Chazalon · Katerine Diaz-Chito

This paper presents the development of an Augmented Reality mobile application which aims at raising young children's awareness of abstract concepts of music, such as musical notation or rhythm. Recent studies in Augmented Reality for education suggest that such technologies have multiple benefits for students, including younger ones. As document image acquisition and processing gains maturity on mobile platforms, we explore how to build a markerless, real-time application that augments physical documents with didactic animations and interactive content. Given a standard image processing pipeline, we compare the performance of different local descriptors at two key stages of the process. Results suggest alternatives to the SIFT local descriptor, in terms of both result quality and computational efficiency, for document model identification as well as perspective transform estimation. All experiments are performed on an original public dataset we introduce here.
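
As an illustration of the two stages being benchmarked, here is a minimal OpenCV sketch using ORB as one possible SIFT alternative; the match-count threshold is a made-up identification criterion, not the paper's.

```python
import cv2
import numpy as np

def identify_and_register(model_img, frame):
    """Match a camera frame against one songbook page model and, if it
    is recognised, estimate the perspective transform to it."""
    orb = cv2.ORB_create()
    kp1, des1 = orb.detectAndCompute(model_img, None)
    kp2, des2 = orb.detectAndCompute(frame, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    if len(matches) < 10:        # made-up identification criterion
        return None              # frame does not show this page
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H                     # warp animations onto the page with H
```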

Parallel computation of component trees on distributed memory machines

Markus Götz · Gabriele Cavallaro · Thierry Géraud · Matthias Book · Morris Riedel

Component trees are region-based representations that encode the inclusion relationship of the threshold sets of an image. These representations are among the most promising strategies for the analysis and interpretation of spatial information in complex scenes, as they allow the simple and efficient implementation of connected filters. This work proposes a new efficient hybrid algorithm for the parallel computation of two particular component trees, the max- and min-tree, in shared and distributed memory environments. For the node-local computation, a modified version of Salembier's flooding-based algorithm is employed. A novel tuple-based merging scheme allows the acquired partial images to be merged into a globally correct view. Using the proposed approach, a speed-up of up to 44.88 was achieved with 128 processing cores on eight-bit gray-scale images. This is more than a five-fold increase over the state-of-the-art shared-memory algorithm, while requiring only one-thirty-second of the memory.
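
For orientation, here is a minimal sketch of the node-local stage only, computing one max-tree per tile in parallel with scikit-image and a process pool. The tuple-based merge of the partial trees, which is the paper's actual contribution, is deliberately omitted.

```python
from concurrent.futures import ProcessPoolExecutor
import numpy as np
from skimage.morphology import max_tree

def local_tree(tile):
    # Node-local stage: any sequential max-tree algorithm works here;
    # the paper uses a modified flooding algorithm of Salembier.
    parent, traverser = max_tree(tile, connectivity=1)
    return parent

def node_local_stage(image, n_workers=4):
    # Split into horizontal tiles, one per worker.
    # (Call this under `if __name__ == "__main__":` on Windows/macOS.)
    tiles = np.array_split(image, n_workers, axis=0)
    with ProcessPoolExecutor(max_workers=n_workers) as pool:
        return list(pool.map(local_tree, tiles))
```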

Parallel model checking algorithms for linear-time temporal logic

Jiří Barnat · Vincent Bloemen · Alexandre Duret-Lutz · Alfons Laarman · Laure Petrucci · Jaco van de Pol · Étienne Renault

Model checking is a fully automated, formal method for demonstrating the absence of bugs in reactive systems. Here, bugs are violations of properties expressed in Linear-time Temporal Logic (LTL). A fundamental challenge to its application is the exponential explosion in the number of system states. The current chapter discusses the use of parallelism to overcome this challenge. We revisit the textbook automata-theoretic approach, which reduces the model checking problem to the graph problem of finding cycles. We discuss several parallel algorithms that attack this problem in various ways, each with different characteristics: depth-first search (DFS) based algorithms rely on heuristics for good parallelization, but exhibit a low complexity and good on-the-fly behavior; breadth-first search (BFS) based approaches, on the other hand, offer good parallel scalability and support distributed parallelism. In addition, we present various simpler model checking tasks, which still solve a large and important subset of the LTL model checking problem, and show how these can be exploited to yield more efficient algorithms. In particular, we provide simplified DFS-based search algorithms and show that the BFS-based algorithms exhibit optimal runtimes in certain cases.
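
As background for the DFS-based algorithms mentioned above, here is a minimal Python sketch of the classic sequential nested DFS for Büchi emptiness checking, the textbook cycle-detection procedure on which those parallel variants build.

```python
def nested_dfs(succ, init, accepting):
    """Return True iff some accepting state is reachable from `init`
    and lies on a cycle (i.e. the Büchi automaton is non-empty)."""
    blue, red = set(), set()

    def red_dfs(s, seed):
        for t in succ(s):
            if t == seed:
                return True              # closed a cycle through seed
            if t not in red:
                red.add(t)
                if red_dfs(t, seed):
                    return True
        return False

    def blue_dfs(s):
        blue.add(s)
        for t in succ(s):
            if t not in blue and blue_dfs(t):
                return True
        # in post-order, start the inner search from accepting states
        return s in accepting and red_dfs(s, s)

    return blue_dfs(init)

graph = {'a': ['b'], 'b': ['c'], 'c': ['b']}       # accepting cycle b-c-b
print(nested_dfs(lambda s: graph[s], 'a', {'b'}))  # True
```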