Publications

Community and LBD-based clause sharing policy for parallel SAT solving

Vincent Vallade · Ludovic Le Frioux · Souheib Baarir · Julien Sopena · Vijay Ganesh · Fabrice Kordon

Modern parallel SAT solvers rely heavily on effective clause-sharing policies for their performance. The core problem addressed by these policies can be succinctly stated as identifying high-quality learnt clauses, i.e., clauses whose sharing between the worker nodes of a parallel solver improves performance. The term "high-quality clause" is typically defined in terms of metrics that solver designers have identified over years of empirical study. Well-known metrics for identifying high-quality clauses for sharing include clause length, literal block distance (LBD), and clause usage in propagation. In this paper, we propose a new metric aimed at identifying high-quality learnt clauses, and a concomitant clause-sharing policy, based on a combination of LBD and the community structure of Boolean formulas. Community structure has been proposed as a possible explanation for the extraordinary performance of SAT solvers on industrial instances, which makes it a natural candidate for a quality metric. More precisely, our metric identifies clauses that have both low LBD and low community number as high-quality for applications such as verification and testing, where the community number of a clause C is the number of distinct communities of the formula spanned by the variables of C. We perform an extensive empirical analysis of our metric and clause-sharing policy, and show that our method significantly outperforms state-of-the-art techniques on the benchmarks from the parallel track of the last four SAT competitions.
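As an illustration (not the authors' code), the community-number metric and the resulting sharing predicate admit a compact sketch. The variable-to-community map is assumed to come from a community detection pass (e.g., Louvain) over the formula's variable-incidence graph, and the thresholds below are placeholders, not the values tuned in the paper:

```python
def community_number(clause, var_community):
    """Number of distinct communities spanned by the clause's variables.
    A clause is a list of non-zero integer literals (DIMACS style);
    var_community maps each variable to its community id."""
    return len({var_community[abs(lit)] for lit in clause})

def should_share(clause, lbd, var_community, max_lbd=4, max_comm=2):
    """Share a learnt clause only if both its LBD and its community
    number are low; thresholds are illustrative placeholders."""
    return lbd <= max_lbd and community_number(clause, var_community) <= max_comm
```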

Learning endmember dynamics in multitemporal hyperspectral data using a state-space model formulation

Lucas Drumetz · Mauro Dalla Mura · Guillaume Tochon · Ronan Fablet

Hyperspectral image unmixing is an inverse problem that aims at recovering the spectral signatures of pure materials of interest (called endmembers) and estimating their proportions (called abundances) in every pixel of the image. However, despite a tremendous application potential and the advent of new satellite sensors with high temporal resolution, multitemporal hyperspectral unmixing remains a relatively underexplored research avenue compared to standard image unmixing. In this paper, we propose a new framework for multitemporal unmixing and endmember extraction based on a state-space model. We present a proof of concept on simulated data showing how this representation can be used either to inform multitemporal unmixing with external prior knowledge, or, conversely, to learn the dynamics of the quantities involved from data, using neural network architectures adapted to the identification of dynamical systems.
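Concretely, a generic state-space formulation of this kind (notation ours; the paper's exact model may differ) couples an endmember transition equation with a linear-mixing observation equation:

```latex
\begin{aligned}
\mathbf{E}_{t+1} &= f_\theta(\mathbf{E}_t) + \boldsymbol{\eta}_t
  && \text{(endmember dynamics)} \\
\mathbf{y}_{p,t} &= \mathbf{E}_t\,\mathbf{a}_{p,t} + \boldsymbol{\varepsilon}_{p,t}
  && \text{(observation: linear mixing)}
\end{aligned}
```

where E_t gathers the endmember spectra at time t, a_{p,t} are the abundances of pixel p, and the transition function f_θ is either fixed from prior knowledge or learned from data with a neural network.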

Performance comparison of several folding strategies

Jim Newton

In this article we examine the computation order and the resulting performance of three different conceptual implementations of the fold function, through a set of performance-based experiments. In particular, we contrast the fold-left implementation with two other implementations we refer to as pair-wise-fold and tree-like-fold. We explore two application areas: ratio arithmetic and Binary Decision Diagram construction. We demonstrate several cases where the performance of certain algorithms differs greatly depending on the approach taken. In particular, iterative computations in which the accumulated object grows in size are good candidates for the tree-like fold, as illustrated in the sketch below.
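A minimal sketch of the three strategies, assuming the combining function f is associative with identity element z (Python is used for illustration only; the article's experiments are not in Python):

```python
def fold_left(f, z, xs):
    """Classic left fold: f(...f(f(z, x0), x1)..., xn); n sequential combines."""
    acc = z
    for x in xs:
        acc = f(acc, x)
    return acc

def pair_wise_fold(f, z, xs):
    """Repeatedly combine adjacent pairs until one value remains."""
    if not xs:
        return z
    while len(xs) > 1:
        xs = [f(xs[i], xs[i + 1]) if i + 1 < len(xs) else xs[i]
              for i in range(0, len(xs), 2)]
    return f(z, xs[0])

def tree_like_fold(f, z, xs):
    """Divide-and-conquer fold; keeps the operands of each combine
    balanced in size, which helps when combining grows the result
    (e.g., exact rationals or BDDs)."""
    if not xs:
        return z
    if len(xs) == 1:
        return f(z, xs[0])
    mid = len(xs) // 2
    return f(tree_like_fold(f, z, xs[:mid]),
             tree_like_fold(f, z, xs[mid:]))
```

For example, summing [Fraction(1, n) for n in range(1, 2000)] with operator.add and Fraction(0) yields the same result under all three folds, but the tree-like fold keeps intermediate numerators and denominators smaller for longer, which is the effect the article measures.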

A new minimum barrier distance for multivariate images with applications to salient object detection, shortest path finding, and segmentation

Minh Ôn Vũ Ngọc

Hierarchical image representations are widely used in image processing to model the content of an image as a multi-scale structure. A well-known hierarchical representation is the tree of shapes (ToS), which encodes the inclusion relationship between connected components obtained at different threshold levels. This kind of tree is self-dual, invariant to contrast changes, and popular in the computer vision community. In our work, we use this representation to compute a new distance belonging to the domain of mathematical morphology. Distance transforms and the saliency maps they induce are widely used in image processing, computer vision, and pattern recognition. One of the most commonly used distance transforms is the geodesic one. Unfortunately, this distance does not always achieve satisfying results on noisy or blurred images. Recently, a new pseudo-distance, called the minimum barrier distance (MBD), which is more robust to pixel fluctuations, has been introduced. Some years later, Géraud et al. proposed an accurate and fast-to-compute approximation of this distance: the Dahu pseudo-distance. Since this distance was initially developed for grayscale images, we propose here an extension of this transform to multivariate images, which we call the vectorial Dahu pseudo-distance. This new distance is easily and efficiently computed thanks to the multivariate tree of shapes (MToS). In this thesis, we propose an efficient way to compute this distance and the saliency map it induces, and we investigate its behavior in the presence of noise and blur, showing that it is robust to pixel fluctuations. To validate this new distance, we provide benchmarks demonstrating that the vectorial Dahu pseudo-distance is more robust than, and competitive with, other MB-based distances. This distance is promising for salient object detection, shortest-path finding, and object segmentation. Moreover, we apply it to document detection in videos. Our method is a region-based approach relying on the visual saliency deduced from the Dahu pseudo-distance. We show that its performance is competitive with state-of-the-art methods on the ICDAR Smartdoc 2015 Competition dataset.
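For intuition, the grayscale minimum barrier distance admits a Dijkstra-like greedy approximation, where the cost of a path is the difference between the maximum and minimum gray values along it. The sketch below is that generic approximation, not the tree-of-shapes-based Dahu computation developed in the thesis:

```python
import heapq
import numpy as np

def mbd_map(image, seeds):
    """Approximate minimum-barrier distance from seed pixels on a 2D
    grayscale image, propagating the running (min, max) of the best
    path found so far (greedy approximation, not exact MBD)."""
    h, w = image.shape
    lo = np.full((h, w), np.inf)    # running minimum along the best path
    hi = np.full((h, w), -np.inf)   # running maximum along the best path
    dist = np.full((h, w), np.inf)  # current barrier estimate
    heap = []
    for y, x in seeds:
        lo[y, x] = hi[y, x] = image[y, x]
        dist[y, x] = 0.0
        heapq.heappush(heap, (0.0, y, x))
    while heap:
        d, y, x = heapq.heappop(heap)
        if d > dist[y, x]:
            continue  # stale queue entry
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                nlo = min(lo[y, x], image[ny, nx])
                nhi = max(hi[y, x], image[ny, nx])
                nd = nhi - nlo  # barrier of the extended path
                if nd < dist[ny, nx]:
                    dist[ny, nx], lo[ny, nx], hi[ny, nx] = nd, nlo, nhi
                    heapq.heappush(heap, (nd, ny, nx))
    return dist
```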

State-of-the-art speaker recognition with neural network embeddings in NIST SRE18 and speakers in the wild evaluations

Jesús Villalba · Nanxin Chen · David Snyder · Daniel Garcia-Romero · Alan McCree · Gregory Sell · Jonas Borgstrom · Leibny Paola García-Perera · Fred Richardson · Réda Dehak · Pedro A. Torres-Carrasquillo · Najim Dehak

Connected filters on generalized shape-spaces

Lê Duy Huỳnh · Nicolas Boutry · Thierry Géraud

Classical hierarchical image representations and connected filters work on sets of connected components (CCs). These approaches can fail to describe the relations between disjoint objects or between partitions of images. In practice, an object may consist of several connected components (due to occlusions, for example), so taking the relationships between these components into account is key to detecting the whole object. In mathematical morphology, second-generation connectivity (SGC) and tree-based shape-spaces study these relations between the connected components of an image; however, both have limitations. For this reason, we propose in this paper an extension of the usual shape-space paradigm into what we call a Generalized Shape-Space (GSS). This new paradigm makes it possible to hierarchically analyze any graph of connected components and to filter it with connected operators.

Standardized assessment of automatic segmentation of white matter hyperintensities: Results of the WMH segmentation challenge

H. J. Kuijf · J. M. Biesbroek · J. Bresser · R. Heinen · S. Andermatt · M. Bento · M. Berseth · M. Belyaev · M. J. Cardoso · A. Casamitjana · D. L. Collins · M. Dadar · A. Georgiou · M. Ghafoorian · D. Jin · A. Khademi · J. Knight · H. Li · X. Lladó · M. Luna · Q. Mahmood · R. McKinley · A. Mehrtash · S. Ourselin · B. Park · H. Park · S. H. Park · S. Pezold · Élodie Puybareau · L. Rittner · C. H. Sudre · S. Valverde · V. Vilaplana · R. Wiest · Yongchao Xu · Z. Xu · G. Zeng · J. Zhang · G. Zheng · C. Chen · W. Flier · F. Barkhof · M. A. Viergever · G. J. Biessels


Quantification of cerebral white matter hyperintensities (WMH) of presumed vascular origin is of key importance in many neurological research studies. Currently, measurements are often still obtained from manual segmentations on brain MR images, which is a laborious procedure. Automatic WMH segmentation methods exist, but a standardized comparison of the performance of such methods is lacking. We organized a scientific challenge, in which developers could evaluate their method on a standardized multi-center/-scanner image dataset, giving an objective comparison: the WMH Segmentation Challenge (https://wmh.isi.uu.nl/). Sixty T1+FLAIR images from three MR scanners were released with manual WMH segmentations for training. A test set of 110 images from five MR scanners was used for evaluation. Segmentation methods had to be containerized and submitted to the challenge organizers. Five evaluation metrics were used to rank the methods: (1) Dice similarity coefficient, (2) modified Hausdorff distance (95th percentile), (3) absolute log-transformed volume difference, (4) sensitivity for detecting individual lesions, and (5) F1-score for individual lesions. Additionally, methods were ranked on their inter-scanner robustness. Twenty participants submitted their method for evaluation. This paper provides a detailed analysis of the results. In brief, there is a cluster of four methods that rank significantly better than the other methods, with one clear winner. The inter-scanner robustness ranking shows that not all methods generalize to unseen scanners. The challenge remains open for future submissions and provides a public platform for method evaluation.
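For reference, two of the five metrics are straightforward to sketch on binary masks (a simplified reading of the challenge's definitions, not the organizers' evaluation code; scipy is assumed):

```python
import numpy as np
from scipy import ndimage

def dice(seg, gt):
    """(1) Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(seg, gt).sum()
    denom = seg.sum() + gt.sum()
    return 2.0 * inter / denom if denom else 1.0

def lesion_sensitivity(seg, gt):
    """(4) Fraction of ground-truth lesions (connected components)
    overlapped by the segmentation (simplified lesion-wise recall)."""
    labels, n = ndimage.label(gt)
    if n == 0:
        return 1.0
    hit = sum(1 for i in range(1, n + 1) if seg[labels == i].any())
    return hit / n
```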

Combining parallel emptiness checks with partial order reductions

Denis Poitrenaud · Étienne Renault

In explicit-state model checking of concurrent systems, multi-core emptiness checks and partial-order reductions (POR) are two major techniques for handling large state spaces. The former exploits multi-core architectures, while the latter can exponentially decrease the size of the state space to explore. For checking LTL properties, Bloemen and van de Pol [2] showed that the best performance is currently obtained with their multi-core SCC-based emptiness check. However, combining the latest SCC-based algorithm with POR is not trivial, since a condition on cycles, the proviso, must be enforced on an algorithm that processes cycles collaboratively. In this paper, we suggest a pessimistic approach to tackle this problem for liveness properties. For safety properties, we propose an algorithm that benefits from the information computed by the SCC-based algorithm. We also present new parallel provisos, for both safety and liveness properties, that rely on other multi-core emptiness checks. We observe that all the presented algorithms maintain good reductions and scalability.
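To see what the proviso amounts to in the classic sequential setting (the paper's parallel provisos are more involved), here is a sketch of the standard stack proviso: a reduced successor set is rejected whenever it would close a cycle made entirely of reduced states.

```python
def por_successors(state, dfs_stack, ample, full):
    """Classic stack proviso (sequential DFS sketch, not the paper's
    parallel algorithms): use the reduced (ample) successor set unless
    one of its successors closes a cycle on the DFS stack, in which
    case fall back to full expansion to avoid the ignoring problem.
    `ample` and `full` are hypothetical callbacks returning the reduced
    and complete successor sets of a state."""
    reduced = ample(state)
    if any(s in dfs_stack for s in reduced):
        return full(state)
    return reduced
```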

Braids of partitions for the hierarchical representation and segmentation of multimodal images

Guillaume Tochon · Mauro Dalla Mura · Miguel Angel Veganzones · Thierry Géraud · Jocelyn Chanussot

Hierarchical data representations are powerful tools for analyzing images and have found numerous applications in image processing. When it comes to multimodal images, however, the fusion of multiple hierarchies remains an open question. Recently, the concept of braids of partitions has been proposed as a theoretical tool and a possible solution to this issue. In this paper, we demonstrate the relevance of the braid structure for the hierarchical representation of multimodal images. We first propose a fully operational procedure to build a braid of partitions from two hierarchical representations. We then derive a framework for multimodal image segmentation, relying on an energy minimization scheme conducted on the braid structure. The proposed approach is investigated on several multimodal image scenarios, and the obtained results confirm its ability to efficiently exploit the multimodal information to produce more accurate segmentation outputs.