Publications

Modular and efficient divide-and-conquer SAT solver on top of the Painless framework

Ludovic Le Frioux · Souheib Baarir · Julien Sopena · Fabrice Kordon

Over the last decade, parallel SATisfiability solving has been widely studied from both theoretical and practical aspects. There are two main approaches. The first, divide-and-conquer (D&C), splits the search space, each solver being in charge of a particular subspace. The second, portfolio, launches multiple solvers in parallel, and the first to find a solution ends the computation. However, although D&C-based approaches seem to be the natural way to work in parallel, portfolio ones experimentally provide better performance. An explanation resides in the difficulty of using the native formulation of the SAT problem (i.e., the CNF form) to compute an a priori good partitioning of the search space (i.e., one in which all parallel solvers process their subspaces in comparable computational time). To avoid this, dynamic load balancing of the search subspaces is implemented. Unfortunately, it is difficult to compare load balancing strategies, since state-of-the-art SAT solvers that deal appropriately with these aspects are hardly adaptable to strategies other than the ones they have been designed for. This paper aims at overcoming this problem by proposing an implementation and evaluation of different types of divide-and-conquer inspired by the literature. These rely on the Painless framework, which provides concurrent facilities to elaborate such parallel SAT solvers. A comparison of the various strategies is then discussed.
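
As a concrete illustration of the divide-and-conquer approach described above, here is a minimal Python sketch (purely illustrative; Painless itself is a C++ framework and its API differs): the search space is split on one variable into two cubes, each cube is handed to an independent sequential solver, and the first satisfying assignment ends the computation. The toy DPLL solver and the single-variable split are deliberate simplifications.

```python
# Conceptual divide-and-conquer SAT sketch (illustrative only, not Painless).
from concurrent.futures import ThreadPoolExecutor

def simplify(clauses, assignment):
    """Drop satisfied clauses and falsified literals; None if a clause becomes empty."""
    out = []
    for clause in clauses:
        if any(lit in assignment for lit in clause):
            continue                                   # clause already satisfied
        reduced = [lit for lit in clause if -lit not in assignment]
        if not reduced:
            return None                                # clause falsified
        out.append(reduced)
    return out

def dpll(clauses, assignment):
    """Tiny DPLL on a CNF given as a list of integer-literal lists."""
    clauses = simplify(clauses, assignment)
    if clauses is None:
        return None
    if not clauses:
        return assignment                              # all clauses satisfied
    var = abs(clauses[0][0])                           # naive branching heuristic
    for literal in (var, -var):
        model = dpll(clauses, assignment | {literal})
        if model is not None:
            return model
    return None

def solve_divide_and_conquer(clauses, split_var):
    """Split the search space on `split_var` and solve both cubes in parallel."""
    cubes = [frozenset({split_var}), frozenset({-split_var})]
    with ThreadPoolExecutor(max_workers=2) as pool:
        for model in pool.map(lambda cube: dpll(clauses, cube), cubes):
            if model is not None:                      # first solution wins
                return model
    return None

if __name__ == "__main__":
    cnf = [[1, 2], [-1, 3], [-2, -3]]
    print(solve_divide_and_conquer(cnf, split_var=1))
```

A real D&C solver would of course cancel the remaining workers as soon as one reports SAT, and would rebalance the subspaces dynamically; those load balancing strategies are precisely what the paper compares.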

Finite automata theory based optimization of conditional variable binding

Jim Newton · Didier Verna

We present an efficient and highly optimized implementation of destructuring-case in Common Lisp. This macro selects, among several given destructuring lambda lists, the most appropriate one based on the structure and types of the data at run time, and thereafter dispatches to the corresponding code branch. We examine an optimization technique based on finite automata theory, applied to conditional variable binding and execution, and to type-based pattern matching on Common Lisp sequences. A risk of inefficiency associated with a naive implementation of destructuring-case is that the candidate expression being examined may be traversed multiple times: once for each clause whose format fails to match, and finally once for the successful match. We have implemented destructuring-case in such a way as to avoid multiple traversals of the candidate expression. This article explains how this optimization has been implemented.

Motion compensation in digital holography for retinal imaging

Julie Rivet · Guillaume Tochon · Serge Meimon · Michel Paques · Michael Atlan · Thierry Géraud

The measurement of medical images can be hindered by blur and distortions caused by physiological motion. Retinal imaging, in particular, is greatly affected by sharp movements of the eye. Stabilization methods have been developed and applied to state-of-the-art retinal imaging modalities; here we intend to adapt them to coherent light detection schemes. In this paper, we demonstrate experimentally cross-correlation-based lateral and axial motion compensation in laser Doppler imaging and optical coherence tomography by digital holography. Our methods improve lateral and axial image resolution in those innovative instruments and allow a better visualization during motion.
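
The lateral registration step can be pictured with a short, self-contained sketch (assumptions: numpy, integer pixel shifts, and periodic borders handled with np.roll; the paper's actual implementation, and its axial counterpart, are more involved):

```python
# Illustrative cross-correlation registration of a frame sequence (not the paper's code).
import numpy as np

def estimate_shift(reference, moving):
    """Integer (dy, dx) shift that best aligns `moving` onto `reference`."""
    cross_corr = np.fft.ifft2(np.fft.fft2(reference) * np.conj(np.fft.fft2(moving)))
    peak = np.array(np.unravel_index(np.argmax(np.abs(cross_corr)), cross_corr.shape))
    shape = np.array(reference.shape)
    wrap = peak > shape // 2
    peak[wrap] -= shape[wrap]                 # convert wrapped indices to signed shifts
    return tuple(int(p) for p in peak)

def compensate(frames):
    """Register every frame of a sequence onto the first one."""
    reference = frames[0]
    registered = [reference]
    for frame in frames[1:]:
        dy, dx = estimate_shift(reference, frame)
        registered.append(np.roll(frame, shift=(dy, dx), axis=(0, 1)))
    return registered
```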

Implementing Baker’s SUBTYPEP decision procedure

Léo Valais · Jim Newton · Didier Verna

We present here our partial implementation of Baker’s decision procedure for SUBTYPEP. In his article “A Decision Procedure for Common Lisp’s SUBTYPEP Predicate”, he claims to provide implementation guidelines for a SUBTYPEP that is more accurate than, and as efficient as, the average implementation. However, he did not provide any serious implementation, and his description is sometimes obscure. In this paper we present our implementation of part of his procedure, supporting only primitive types, CLOS classes, member, range, and logical type specifiers. We explain in our own words our understanding of his procedure, with much more detail and more examples than in Baker’s article. We thereby clarify many parts of his description and fill in some of its gaps or omissions. We also argue for and against some of his choices and present our alternative solutions. We further provide some proofs that might be missing in his article, and some early efficiency results. We have not released any code yet, but we plan to open-source it as soon as it is presentable.

Parallelizing Quickref

Didier Verna

Quickref is a global documentation project for Common Lisp software. It builds a website containing reference manuals for Quicklisp libraries. Each library is first compiled, loaded, and introspected. From the collected information, a Texinfo file is generated, which is then processed into an HTML one. Because of the large number of libraries in Quicklisp, doing this sequentially may require several hours of processing. We report on our experiments in parallelizing Quickref. Experimental data on the morphology of Quicklisp libraries has been collected. Based on this data, we are able to propose a number of parallelization schemes that reduce the total processing time by a factor of 3.8 to 4.5, depending on the exact situation.
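
Since each library is processed independently (compile, load, introspect, generate Texinfo, convert to HTML), the natural parallelization unit is the library. The sketch below is purely conceptual and in Python for brevity; Quickref itself is written in Common Lisp, and the function names are illustrative stand-ins:

```python
# Conceptual per-library pipeline run in parallel worker processes (not Quickref's code).
import time
from concurrent.futures import ProcessPoolExecutor

def build_manual(library):
    # Stand-in for: compile/load/introspect the library, emit a Texinfo file,
    # then convert that file to HTML.
    time.sleep(0.1)                           # simulated per-library work
    return f"{library}.html"

def build_all(libraries, workers=4):
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(build_manual, libraries))

if __name__ == "__main__":
    print(build_all([f"lib-{i}" for i in range(8)]))
```

The achievable speed-up is bounded by the slowest libraries and by shared resources, which is why the collected data on the morphology of Quicklisp libraries matters when choosing a parallelization scheme.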

One more step towards well-composedness of cell complexes over $n$-D pictures

Nicolas Boutry · Rocio Gonzalez-Diaz · Maria-Jose Jimenez

An $n$-D pure regular cell complex $K$ is weakly well-composed (wWC) if, for each vertex $v$ of $K$, the set of $n$-cells incident to $v$ is face-connected. In previous work we proved that if an $n$-D picture $I$ is digitally well-composed (DWC), then the cubical complex $Q(I)$ associated to $I$ is wWC. If $I$ is not DWC, we proposed a combinatorial algorithm to locally repair $Q(I)$, obtaining an $n$-D pure simplicial complex $P_S(I)$, homotopy equivalent to $Q(I)$, that is always wWC. In this paper we give a combinatorial procedure to compute a simplicial complex $P_S(\bar{I})$ which decomposes the complement space of $|P_S(I)|$, and prove that $P_S(\bar{I})$ is also wWC. This paper represents one more step toward our ultimate goal: to prove that the $n$-D repaired complex is continuously well-composed (CWC), that is, that the boundary of its continuous analog is an $(n-1)$-manifold.

High-throughput automated detection of axial malformations in Medaka embryo

Diane Genest · Élodie Puybareau · Marc Léonard · Jean Cousty · Noémie De Crozé · Hugues Talbot

Fish embryo models are widely used as screening tools to assess the efficacy and/or toxicity of chemicals. This assessment involves the analysis of embryo morphological abnormalities. In this article, we propose a multi-scale pipeline for the automated classification of fish embryos (Medaka: Oryzias latipes) based on the presence or absence of spine malformations. The proposed pipeline relies on the acquisition of 2D fish embryo images, on feature extraction based on mathematical morphology operators, and on machine learning classification. After image acquisition, segmentation tools are used to detect the embryo before analysing several morphological features. An approach based on machine learning is then applied to these features to automatically classify embryos according to the presence of axial malformations. We built and validated our learning model on 1459 images with a 10-fold cross-validation, by comparison with the gold standard of 3D observations performed under a microscope by a trained operator. Our pipeline yields correct classification in 85% of the cases included in the database. This percentage is similar to the success rate of a trained human operator working on 2D images. The key benefit of our approach is the low computational cost of our image analysis pipeline, which guarantees high-throughput analysis.
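
The classification stage can be pictured with a short scikit-learn sketch. Everything below is illustrative: the feature vectors are random placeholders for the morphological descriptors, and the choice of classifier is an assumption rather than the one necessarily used in the paper; only the 10-fold cross-validation setup mirrors the evaluation described above.

```python
# Illustrative 10-fold cross-validation on (placeholder) morphological features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
features = rng.normal(size=(1459, 12))     # stand-in for per-embryo descriptors
labels = rng.integers(0, 2, size=1459)     # 0 = normal, 1 = axial malformation

clf = RandomForestClassifier(n_estimators=200, random_state=0)  # illustrative choice
scores = cross_val_score(clf, features, labels, cv=10)
print(f"mean accuracy over 10 folds: {scores.mean():.2f}")
```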

Left atrial segmentation in a few seconds using fully convolutional network and transfer learning

Élodie Puybareau · Zhou Zhao · Younes Khoudli · Edwin Carlinet · Yongchao Xu · Jérôme Lacotte · Thierry Géraud

In this paper, we propose a fast automatic method that segments the left atrial cavity from 3D GE-MRIs without any manual assistance, using a fully convolutional network (FCN) and transfer learning. This FCN is the base network of VGG-16, pre-trained on ImageNet for natural image classification, and fine-tuned with the training dataset of the MICCAI 2018 Atrial Segmentation Challenge. It relies on the "pseudo-3D" method published at ICIP 2017, which allows for segmenting objects from 2D color images that carry 3D information from MRI volumes. For each $n^{\text{th}}$ slice of the volume to segment, we consider three images, corresponding to the $(n-1)^{\text{th}}$, $n^{\text{th}}$, and $(n+1)^{\text{th}}$ slices of the original volume. These three gray-level 2D images are assembled to form a 2D RGB color image (one image per channel). This image is the input of the FCN, which outputs a 2D segmentation of the $n^{\text{th}}$ slice. We process all slices, then stack the results to form the 3D output segmentation. With such a technique, the segmentation of the left atrial cavity of a 3D volume takes only a few seconds. We obtain a Dice score of 0.92, both on the training set in our experiments before the challenge and on the test set of the challenge.
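
The "pseudo-3D" encoding of the input can be written down in a few lines. This sketch assumes numpy and clamps slice indices at the volume borders, a detail the abstract does not specify:

```python
# Build the 3-channel "pseudo-3D" images fed to the FCN (border handling is an assumption).
import numpy as np

def pseudo_3d_slices(volume):
    """volume: (depth, height, width) array -> list of (height, width, 3) images."""
    depth = volume.shape[0]
    images = []
    for n in range(depth):
        below = volume[max(n - 1, 0)]            # slice n-1, clamped at the border
        above = volume[min(n + 1, depth - 1)]    # slice n+1, clamped at the border
        images.append(np.stack([below, volume[n], above], axis=-1))
    return images
```

Each resulting image is segmented independently by the FCN, and the 2D masks are stacked back to form the 3D segmentation.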

Deep neural networks for aberrations compensation in digital holographic imaging of the retina

Julie Rivet · Guillaume Tochon · Serge Meimon · Michel Pâques · Thierry Géraud · Michael Atlan

In computational imaging by digital holography, the lateral resolution of retinal images is limited to about 20 microns by the aberrations of the eye. To overcome this limitation, the aberrations have to be canceled. Digital aberration compensation can be performed by post-processing of full-field digital holograms. Aberration compensation has been demonstrated from wavefront measurement, by reconstruction of digital holograms in subapertures, and by measurement of a guide-star hologram. Yet, these wavefront measurement methods have limited accuracy in practice. For holographic tomography of the human retina, image reconstruction has been demonstrated by iterative digital aberration compensation, through minimization of the local entropy of speckle-averaged tomographic volumes. However, image-based aberration compensation is time-consuming, which prevents real-time image rendering. We are investigating a new digital aberration compensation scheme based on a deep neural network to circumvent the limitations of these aberration correction methods. To train the network, 28,000 anonymized eye fundus images from patients of the Quinze-Vingts hospital in Paris have been collected, and synthetic interferograms have been reconstructed digitally by simulating the propagation of eye fundus images recorded with standard cameras. With a U-Net architecture, we demonstrate defocus correction of these complex-valued synthetic interferograms. Other aberration orders will be corrected with the same method, to improve lateral resolution up to the diffraction limit in digital holographic imaging of the retina.
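
As a rough idea of how synthetic defocused data can be generated, the sketch below applies a quadratic phase in the Fourier plane of a complex field. This is a simplified scalar-diffraction model with an arbitrary strength parameter, not the paper's simulation or training code:

```python
# Simplified defocus simulation: quadratic phase applied in the Fourier domain.
import numpy as np

def defocus(field, amount):
    """Apply a defocus-like aberration of strength `amount` to a 2D complex field."""
    ny, nx = field.shape
    fy = np.fft.fftfreq(ny)[:, None]
    fx = np.fft.fftfreq(nx)[None, :]
    pupil_phase = np.exp(1j * amount * (fx ** 2 + fy ** 2))   # quadratic (defocus) term
    return np.fft.ifft2(np.fft.fft2(field) * pupil_phase)
```

Pairs of aberrated and aberration-free fields generated this way can serve as training inputs and targets for a U-Net-style network.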

A theoretical and numerical analysis of the worst-case size of reduced ordered binary decision diagrams

Jim Newton · Didier Verna

Binary Decision Diagrams (BDDs), and in particular ROBDDs (Reduced Ordered BDDs), are a common data structure for manipulating Boolean expressions, used in integrated circuit design, type inferencers, model checkers, and many other applications. Although the ROBDD is a lightweight data structure to implement, its behavior in terms of memory allocation may not be obvious to the program architect. We explore experimentally, numerically, and theoretically the typical and worst-case ROBDD sizes, in terms of number of nodes and residual compression ratios, as compared to unreduced BDDs. While our theoretical results are not surprising, as they are in keeping with previously known results, we believe our method contributes to the current body of research through its experimental and statistical treatment of ROBDD sizes. In addition, we provide an algorithm to calculate the worst-case size. Finally, we present an algorithm for constructing a worst-case ROBDD of a given number of variables. Our approach may be useful to projects deciding whether the ROBDD is the appropriate data structure to use, and in building worst-case examples to test their code.
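
For intuition about the worst case, the classical counting bound can be evaluated directly: the number of nodes labelled by the $i$-th variable is limited both by the number of distinct variable prefixes ($2^{i-1}$) and by the number of Boolean functions of the remaining variables that essentially depend on their first variable. The sketch below computes this standard bound; the paper's own algorithms for computing and constructing worst-case ROBDDs may differ in detail.

```python
# Classical worst-case ROBDD size bound over n variables (terminal nodes included).
def worst_case_robdd_size(n):
    total = 2                                     # the two terminal nodes 0 and 1
    for i in range(1, n + 1):
        by_paths = 2 ** (i - 1)                   # distinct assignments to x1..x_{i-1}
        by_functions = 2 ** (2 ** (n - i + 1)) - 2 ** (2 ** (n - i))
        total += min(by_paths, by_functions)      # nodes labelled x_i
    return total

if __name__ == "__main__":
    for n in range(1, 8):
        print(n, worst_case_robdd_size(n))
```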