Publications

Towards better heuristics for solving bounded model checking problems

Anissa Kheireddine · Étienne Renault · Souheib Baarir

This paper presents a new way to improve the performance of SAT-based bounded model checking by exploiting relevant information identified through the characteristics of the original problem. This led us to design a new method for building effective heuristics based on the structure of the underlying problem. The proposed methodology is generic and can be applied to any SAT problem. The paper compares the state-of-the-art approach with two new heuristics, structure-based and linear-programming-based, and shows promising results.
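
For readers curious what a structure-aware branching heuristic might look like in practice, here is a minimal, purely illustrative Python sketch: a VSIDS-style scorer whose conflict bumps are scaled by a per-variable structural weight supplied by the BMC encoder. The class name, the weighting scheme, and the example weights are our assumptions, not the heuristics evaluated in the paper.

```python
# Hypothetical sketch of a structure-aware branching heuristic for a
# CDCL SAT solver, in the spirit described in the abstract. The class
# name and the weighting scheme are illustrative assumptions, not the
# authors' actual method.

class StructureAwareVSIDS:
    """VSIDS-like scoring where each variable carries a structural
    weight derived from the original BMC encoding (e.g. variables
    encoding the transition relation vs. auxiliary CNF variables)."""

    def __init__(self, structural_weight, decay=0.95):
        # structural_weight: dict mapping variable -> weight derived
        # from the problem structure (assumed given by the encoder).
        self.weight = structural_weight
        self.score = {v: 0.0 for v in structural_weight}
        self.decay = decay

    def bump(self, conflict_vars):
        # Bump variables involved in a conflict, scaled by their
        # structural weight, then decay all scores.
        for v in conflict_vars:
            self.score[v] += self.weight.get(v, 1.0)
        for v in self.score:
            self.score[v] *= self.decay

    def pick_branching_variable(self, unassigned):
        # Choose the unassigned variable with the highest score.
        return max(unassigned, key=lambda v: self.score[v])


# Toy usage: variables 1-3 come from the transition relation and get
# a higher structural weight than the auxiliary variables 4-6.
heur = StructureAwareVSIDS({1: 2.0, 2: 2.0, 3: 2.0, 4: 1.0, 5: 1.0, 6: 1.0})
heur.bump([2, 5])
print(heur.pick_branching_variable({1, 2, 4, 5}))  # -> 2
```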

VizNN: Visual data augmentation with convolutional neural networks for cybersecurity investigation

Amélie Raymond · Baptiste Brument · Pierre Parrend

One of the key challenges for Security Operating Centers (SOCs) is providing the security analyst with rich information to ease the investigation phase in the face of a cyberattack. This requires combining supervision with detection capabilities. Supervision enables security analysts to gain an overview of the security state of the information system under protection. Detection uses advanced algorithms to extract suspicious events from the huge amount of traces produced by the system. To couple efficient supervision with high-performance detection, visualisation-based analysis is an appealing approach, which moreover provides an elegant solution for data augmentation and thus improved detection performance. We propose VizNN, a convolutional neural network for analysing trace features through their graphical representation. VizNN makes it possible to gain a visual overview of the traces of interest, while convolutional neural networks provide scalability. The proposed scheme is evaluated against two reference classifiers for attack detection, XGBoost and Random Forests.
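
As a rough illustration of the core idea (not the paper's actual architecture), the sketch below renders a trace's feature vector as a small single-channel image and classifies it with a tiny CNN in PyTorch; the 8x8 layout and the network shape are assumptions made for brevity.

```python
# Minimal sketch of the VizNN idea as we read it: render each trace's
# feature vector as a small image and classify it with a CNN. The 8x8
# layout and the architecture are our assumptions for illustration.
import torch
import torch.nn as nn

def features_to_image(features: torch.Tensor) -> torch.Tensor:
    # Pad/truncate a 1-D feature vector to 64 values and reshape it
    # into a 1-channel 8x8 "picture" of the trace.
    flat = torch.zeros(64)
    n = min(64, features.numel())
    flat[:n] = features[:n]
    return flat.view(1, 8, 8)

class TinyTraceCNN(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                        # 8x8 -> 4x4
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * 4 * 4, n_classes),       # benign vs. attack
        )

    def forward(self, x):
        return self.net(x)

# Toy usage: one trace with 10 numeric features.
img = features_to_image(torch.randn(10)).unsqueeze(0)  # (1, 1, 8, 8)
logits = TinyTraceCNN()(img)
print(logits.shape)  # torch.Size([1, 2])
```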

Revisiting the COCO panoptic metric to enable visual and qualitative analysis of historical map instance segmentation

Joseph Chazalon · Edwin Carlinet

Segmentation is an important task. It is so important that there exist tens of metrics trying to score and rank segmentation systems. It is so important that each topic has its own metric, on the grounds that its problem is too specific. Is it? What are the fundamental differences between the ZoneMap metric used for page segmentation, the COCO Panoptic metric used in computer vision, and the metrics used to rank hierarchical segmentations? In this paper, while assessing segmentation accuracy for historical maps, we explain, compare and demystify some of the most widely used segmentation evaluation protocols. In particular, we focus on an alternative view of the COCO Panoptic metric as a classification evaluation; we show its soundness and propose extensions with more “shape-oriented” metrics. Beyond a quantitative metric, this paper also aims at providing qualitative measures through precision-recall maps that enable visualizing the successes and failures of a segmentation method.
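
The COCO Panoptic Quality (PQ) metric discussed here has a compact definition: predicted and ground-truth segments are matched when their IoU exceeds 0.5 (which provably makes the matching unique), and PQ is the sum of matched IoUs divided by TP + FP/2 + FN/2. The minimal NumPy sketch below implements that definition; it omits the void-region handling of the official implementation.

```python
# Sketch of the COCO Panoptic Quality (PQ) computation: segments match
# when IoU > 0.5, and PQ = sum of matched IoUs / (TP + FP/2 + FN/2).
import numpy as np

def panoptic_quality(pred: np.ndarray, gt: np.ndarray) -> float:
    # pred and gt are integer label maps; 0 is "void"/unlabelled.
    pred_ids = [i for i in np.unique(pred) if i != 0]
    gt_ids = [i for i in np.unique(gt) if i != 0]
    matched_iou, matched_pred, matched_gt = [], set(), set()
    for g in gt_ids:
        g_mask = gt == g
        for p in pred_ids:
            p_mask = pred == p
            inter = np.logical_and(g_mask, p_mask).sum()
            union = np.logical_or(g_mask, p_mask).sum()
            if union and inter / union > 0.5:   # unique match guaranteed
                matched_iou.append(inter / union)
                matched_pred.add(p)
                matched_gt.add(g)
    tp = len(matched_iou)
    fp = len(pred_ids) - len(matched_pred)
    fn = len(gt_ids) - len(matched_gt)
    denom = tp + 0.5 * fp + 0.5 * fn
    return sum(matched_iou) / denom if denom else 0.0

# Toy 4x4 example: one perfectly recovered segment, one missed.
gt = np.array([[1, 1, 0, 2], [1, 1, 0, 2], [0, 0, 0, 0], [0, 0, 0, 0]])
pred = np.array([[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
print(round(panoptic_quality(pred, gt), 3))  # 1 TP (IoU=1), 1 FN -> 0.667
```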

ICDAR 2021 competition on historical map segmentation

Joseph Chazalon · Edwin Carlinet · Yizi Chen · Julien Perret · Bertrand Duménieu · Clément Mallet · Thierry Géraud · Vincent Nguyen · Nam Nguyen · Josef Baloun · Ladislav Lenc · Pavel Král

This paper presents the final results of the ICDAR 2021 Competition on Historical Map Segmentation (MapSeg), encouraging research on a series of historical atlases of Paris, France, drawn at 1/5000 scale between 1894 and 1937. The competition featured three tasks, awarded separately. Task 1 consists in detecting building blocks and was won by the L3IRIS team using a DenseNet-121 network trained in a weakly supervised fashion. This task is evaluated on 3 large images containing hundreds of shapes to detect. Task 2 consists in segmenting map content from the larger map sheet, and was won by the UWB team using a U-Net-like FCN combined with a binarization method to increase detection edge accuracy. Task 3 consists in locating intersection points of geo-referencing lines, and was also won by the UWB team who used a dedicated pipeline combining binarization, line detection with Hough transform, candidate filtering, and template matching for intersection refinement. Tasks 2 and 3 are evaluated on 95 map sheets with complex content. Dataset, evaluation tools and results are available under permissive licensing at https://icdar21-mapseg.github.io/.
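
To make the Task 3 pipeline concrete, here is a hedged OpenCV sketch of its first stages: binarization, probabilistic Hough line detection, and pairwise line intersection. All thresholds and parameters are placeholders rather than the UWB team's actual values, and the final template-matching refinement is only indicated in a comment.

```python
# Sketch of the kind of pipeline that won Task 3 (binarization, Hough
# line detection, intersection candidates), using OpenCV. Parameters
# are placeholders, not the UWB team's actual values.
import itertools
import cv2
import numpy as np

def find_line_intersections(gray: np.ndarray) -> list:
    # 1) Binarize the map sheet (dark ink on light paper).
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    # 2) Detect long straight segments with the probabilistic Hough transform.
    segments = cv2.HoughLinesP(binary, rho=1, theta=np.pi / 180,
                               threshold=200, minLineLength=300, maxLineGap=5)
    if segments is None:
        return []
    # 3) Intersect every pair of (infinite) supporting lines.
    points = []
    for (x1, y1, x2, y2), (x3, y3, x4, y4) in itertools.combinations(
            (s[0] for s in segments), 2):
        d = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
        if abs(d) < 1e-9:
            continue  # parallel lines never intersect
        t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / d
        points.append((x1 + t * (x2 - x1), y1 + t * (y2 - y1)))
    # A real pipeline would now filter candidates and refine each point
    # with template matching (cv2.matchTemplate) around a cross pattern.
    return points
```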

Vectorization of historical maps using deep edge filtering and closed shape extraction

Yizi Chen · Edwin Carlinet · Joseph Chazalon · Clément Mallet · Bertrand Duménieu · Julien Perret

Maps have been a unique source of knowledge for centuries. Such historical documents provide invaluable information for analyzing the complex spatial transformations of landscapes over important time frames. This is particularly true for urban areas that encompass multiple interleaved research domains (social sciences, economy, etc.). The large amount and significant diversity of map sources call for automatic image processing techniques to extract the relevant objects in a vectorial shape. The complexity of maps (text, noise, digitization artifacts, etc.) has hindered the development of versatile and efficient raster-to-vector approaches for decades. We propose a learnable, reproducible, and reusable solution for the automatic transformation of raster maps into vector objects (building blocks, streets, rivers), built upon the complementary strengths of mathematical morphology and convolutional neural networks through efficient edge filtering. Furthermore, we modify ConnNet and combine it with the deep edge filtering architecture to exploit pixel connectivity information, yielding an end-to-end system that requires no post-processing. In this paper, we focus on a comprehensive benchmark of various architectures on multiple datasets, coupled with a novel vectorization step. Our experimental results on a new public dataset using the COCO Panoptic metric are very encouraging, as confirmed by a qualitative analysis of the success and failure cases of our approach. Code, dataset, results and extra illustrations are freely available at https://github.com/soduco/ICDAR-2021-Vectorization.
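
The closed-shape-extraction step can be illustrated with a deliberately simplified sketch: once a network has produced an edge probability map, closed regions fall out as connected components of the non-edge pixels. The actual pipeline relies on mathematical morphology (e.g. watershed) rather than this toy labelling, so treat the code below as an assumption-laden illustration.

```python
# Simplified illustration of closed shape extraction from a deep edge
# probability map: threshold the edges, then label the connected
# components of what remains. The paper's pipeline is more elaborate.
import numpy as np
from scipy import ndimage

def extract_closed_shapes(edge_prob: np.ndarray, threshold=0.5):
    # Pixels above the threshold are edges; everything else belongs
    # to the interior of some shape (or the background).
    interior = edge_prob < threshold
    labels, n_shapes = ndimage.label(interior)
    return labels, n_shapes

# Toy 5x5 edge map: a closed square of edges encloses one region.
edges = np.zeros((5, 5))
edges[1, 1:4] = edges[3, 1:4] = edges[1:4, 1] = edges[1:4, 3] = 1.0
labels, n = extract_closed_shapes(edges)
print(n)  # 2: the enclosed cell and the outside background
```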

Learning Sentinel-2 spectral dynamics for long-run predictions using residual neural networks

Joaquim Estopinan · Guillaume Tochon · Lucas Drumetz

Making the most of multispectral image time-series is a promising but still relatively under-explored research direction because of the complexity of jointly analyzing spatial, spectral and temporal information. Capturing and characterizing temporal dynamics is one of the most important and challenging issues. Our new method paves the way to capturing real data dynamics and should eventually benefit applications like unmixing or classification. Dealing with time-series dynamics classically requires knowledge of a dynamical model and an observation model. The former may be incorrect or computationally hard to handle, which motivates data-driven strategies that aim at learning dynamics directly from data. In this paper, we adapt neural network architectures to learn the periodic dynamics of both simulated and real multispectral time-series. We emphasize the necessity of choosing the right state variable to capture periodic dynamics, and show that our models can reproduce the average seasonal dynamics of vegetation using only one year of training data.
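
The underlying modelling idea, that a residual network learns the one-step map x_{t+1} = x_t + f(x_t) and long-run predictions are obtained by iterating it, can be sketched in a few lines of PyTorch. The architecture below (a two-layer residual MLP over spectral bands) is our illustrative assumption, not the paper's exact model.

```python
# Sketch of residual-network dynamics learning: the network models
# x_{t+1} = x_t + f(x_t), and long-run prediction iterates the map.
import torch
import torch.nn as nn

class ResidualDynamics(nn.Module):
    def __init__(self, n_bands: int, hidden: int = 64):
        super().__init__()
        # f maps a spectral state (one value per band) to its increment.
        self.f = nn.Sequential(
            nn.Linear(n_bands, hidden), nn.Tanh(),
            nn.Linear(hidden, n_bands),
        )

    def forward(self, x):
        return x + self.f(x)  # one time step

    def rollout(self, x0, n_steps: int):
        # Long-run prediction: iterate the learned one-step map.
        states = [x0]
        for _ in range(n_steps):
            states.append(self(states[-1]))
        return torch.stack(states, dim=1)

# Toy usage: predict 12 future steps for a 10-band spectral state.
model = ResidualDynamics(n_bands=10)
trajectory = model.rollout(torch.randn(1, 10), n_steps=12)
print(trajectory.shape)  # torch.Size([1, 13, 10])
```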

VerSe: A vertebrae labelling and segmentation benchmark for multi-detector CT images

Anjany Sekuboyina · Malek E. Husseini · Amirhossein Bayat · Maximilian Löffler · Hans Liebl · Hongwei Li · Giles Tetteh · Jan Kukačka · Christian Payer · Darko Stern · Martin Urschler · Maodong Chen · Dalong Cheng · Nikolas Lessmann · Yujin Hu · Tianfu Wang · Dong Yang · Daguang Xu · Felix Ambellan · Tamaz Amiranashvili · Moritz Ehlke · Hans Lamecker · Sebastian Lehnert · Marilia Lirio · Nicolás Pérez de Olaguer · Heiko Ramm · Manish Sahu · Alexander Tack · Stefan Zachow · Tao Jiang · Xinjun Ma · Christoph Angerman · Xin Wang · Kevin Brown · Matthias Wolf · Alexandre Kirszenberg · Élodie Puybareau · Di Chen · Yiwei Bai · Brandon H. Rapazzo · Timyoas Yeah · Amber Zhang · Shangliang Xu · Feng Houa · Zhiqiang He · Chan Zeng · Zheng Xiangshang · Xu Liming · Tucker J. Netherton · Raymond P. Mumme · Laurence E. Court · Zixun Huang · Chenhang He · Li-Wen Wang · Sai Ho Ling · Lê Duy Huỳnh · Nicolas Boutry · Roman Jakubicek · Jiri Chmelik · Supriti Mulay · Mohanasankar Sivaprakasam · Johannes C. Paetzold · Suprosanna Shit · Ivan Ezhov · Benedikt Wiestler · Ben Glocker · Alexander Valentinitsch · Markus Rempfler · Björn H. Menze · Jan S. Kirschke

Vertebral labelling and segmentation are two fundamental tasks in an automated spine processing pipeline. Reliable and accurate processing of spine images is expected to benefit clinical decision support systems for diagnosis, surgery planning, and population-based analysis of spine and bone health. However, designing automated algorithms for spine processing is challenging, predominantly due to considerable variations in anatomy and acquisition protocols, and due to a severe shortage of publicly available data. Addressing these limitations, the Large Scale Vertebrae Segmentation Challenge (VerSe) was organised in conjunction with the International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) in 2019 and 2020, with a call for algorithms tackling the labelling and segmentation of vertebrae. Two datasets containing a total of 374 multi-detector CT scans from 355 patients were prepared, and 4505 vertebrae were individually annotated at voxel level by a human-machine hybrid algorithm (https://osf.io/nqjyw/, https://osf.io/t98fz/). A total of 25 algorithms were benchmarked on these datasets. In this work, we present the results of this evaluation and further investigate performance variation at the vertebra level, the scan level, and across different fields of view. We also evaluate the generalisability of the approaches to an implicit domain shift in the data by evaluating the top-performing algorithms of one challenge iteration on data from the other iteration. The principal takeaway from VerSe is that the performance of an algorithm in labelling and segmenting a spine scan hinges on its ability to correctly identify vertebrae in cases of rare anatomical variations. The VerSe content and code can be accessed at: https://github.com/anjany/verse.
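
For context, challenges of this kind are typically scored with an identification rate for labelling (a vertebra counts as identified when the predicted centroid with the same label lies within a fixed tolerance of the ground truth) and the Dice coefficient for segmentation. The sketch below implements both in simplified form; the 20 mm tolerance is a commonly used value, but the exact VerSe protocol is specified in the paper.

```python
# Simplified illustration of the two standard measures: Dice for
# segmentation quality and identification rate for labelling.
import numpy as np

def dice(pred_mask: np.ndarray, gt_mask: np.ndarray) -> float:
    inter = np.logical_and(pred_mask, gt_mask).sum()
    total = pred_mask.sum() + gt_mask.sum()
    return 2.0 * inter / total if total else 1.0

def id_rate(pred_centroids: dict, gt_centroids: dict, tol_mm=20.0) -> float:
    # A vertebra counts as identified if the predicted centroid of the
    # same label lies within tol_mm of the ground-truth centroid.
    hits = sum(
        1 for label, gt_c in gt_centroids.items()
        if label in pred_centroids
        and np.linalg.norm(np.asarray(pred_centroids[label])
                           - np.asarray(gt_c)) <= tol_mm
    )
    return hits / len(gt_centroids) if gt_centroids else 1.0

# Toy usage on a 3-voxel "scan" and two vertebrae.
print(dice(np.array([1, 1, 0], bool), np.array([1, 0, 0], bool)))  # 0.667
print(id_rate({"L1": (0, 0, 0)}, {"L1": (5, 5, 5), "L2": (0, 0, 40)}))  # 0.5
```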

A blockchain-based certificate revocation management and status verification system

Elloh Adja · Badis Hammi · Ahmed Serhrouchni · Sherali Zeadally

Keywords: Authentication · Blockchain · Bloom filter · Certificate · Revocation · Decentralization · PKI · Security · X509

Revocation management is one of the main tasks of a Public Key Infrastructure (PKI), and it is critical to the security of any PKI. With the growth in the number and size of networks, and the adoption of novel paradigms such as the Internet of Things and their usage of the web, current revocation mechanisms are vulnerable to single points of failure as network loads increase. To address this challenge, we take advantage of the blockchain's power and resiliency to propose an efficient decentralized certificate revocation management and status verification system. We use the extension field of the X509 certificate structure to introduce a field that describes which distribution point the certificate will belong to if revoked. Each distribution point is represented by a Bloom filter filled with revoked certificates. Bloom filters and revocation information are stored in a public blockchain. We developed a real implementation of our proposed mechanism in Python on the Namecoin blockchain. Then, we conducted an extensive evaluation of our scheme using performance metrics such as execution time and data consumption to demonstrate that it meets the required needs with high efficiency and low cost. Moreover, we compare the performance of our approach with two of the most widely used revocation techniques, the Online Certificate Status Protocol (OCSP) and Certificate Revocation Lists (CRLs). The results show that our proposed approach outperforms these current schemes.
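
The central data structure is easy to sketch: each distribution point is a Bloom filter over revoked certificate identifiers, so a verifier can test membership locally with no false negatives and only a tunable false-positive rate. The filter size, hash count, and serial format below are illustrative assumptions, not the paper's tuned parameters.

```python
# Sketch of a Bloom-filter-based revocation check: the filter for a
# certificate's distribution point is fetched (from the blockchain in
# the paper's scheme) and the serial number is tested locally.
import hashlib

class BloomFilter:
    def __init__(self, n_bits: int = 1024, n_hashes: int = 4):
        self.n_bits, self.n_hashes = n_bits, n_hashes
        self.bits = bytearray(n_bits // 8)

    def _positions(self, item: str):
        # Derive k bit positions from k salted SHA-256 hashes.
        for i in range(self.n_hashes):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.n_bits

    def add(self, item: str):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, item: str) -> bool:
        # No false negatives: a revoked certificate is always flagged.
        # False positives are possible and must be resolved by a
        # second, exact lookup in the on-chain revocation record.
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(item))

# Toy usage with made-up serial numbers.
revoked = BloomFilter()
revoked.add("0xDEADBEEF")
print("0xDEADBEEF" in revoked)  # True  (revoked, or a false positive)
print("0x0BADF00D" in revoked)  # False (certainly not revoked)
```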

Deep learning for detection and segmentation of artefact and disease instances in gastrointestinal endoscopy

Sharib Ali · Mariia Dmitrieva · Noha Ghatwary · Sophia Bano · Gorkem Polat · Alptekin Temizel · Adrian Krenzer · Amar Hekalo · Yun Bo Guo · Bogdan Matuszewski · Mourad Gridach · Irina Voiculescu · Vishnusai Yoganand · Arnav Chavan · Aryan Raj · Nhan T. Nguyen · Dat Q. Tran · Lê Duy Huỳnh · Nicolas Boutry · Shahadate Rezvy · Haijian Chen · Yoon Ho Choi · Anand Subramanian · Velmurugan Balasubramanian · Xiaohong W. Gao · Hongyu Hu · Yusheng Liao · Danail Stoyanov · Christian Daul · Stefano Realdon · Renato Cannizzaro · Dominique Lamarque · Terry Tran-Nguyen · Adam Bailey · Barbara Braden · James East · Jens Rittscher

The Endoscopy Computer Vision Challenge (EndoCV) is a crowd-sourcing initiative to address pressing problems in developing reliable computer-aided detection and diagnosis systems for endoscopy, and to suggest a pathway for the clinical translation of technologies. Whilst endoscopy is a widely used diagnostic and treatment tool for hollow organs, endoscopists face several core challenges, mainly: 1) the presence of multi-class artefacts that hinder visual interpretation, and 2) difficulty in identifying subtle precancerous precursors and cancer abnormalities. Artefacts often affect the robustness of deep learning methods applied to the gastrointestinal tract organs, as they can be confused with tissue of interest. The EndoCV2020 challenges are designed to address research questions in these remits. In this paper, we present a summary of the methods developed by the top 17 teams and provide an objective comparison of state-of-the-art methods and methods designed by the participants for two sub-challenges: i) artefact detection and segmentation (EAD2020), and ii) disease detection and segmentation (EDD2020). Multi-center, multi-organ, multi-class, and multi-modal clinical endoscopy datasets were compiled for both the EAD2020 and EDD2020 sub-challenges. The out-of-sample generalization ability of the detection algorithms was also evaluated. Whilst most teams focused on accuracy improvements, only a few methods hold credibility for clinical usability. The best-performing teams provided solutions to tackle class imbalance and variability in size, origin, modality and occurrence by exploring data augmentation, data fusion, and optimal class thresholding techniques.
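
As an illustration of the "optimal class thresholding" technique mentioned above (a generic recipe, not any specific team's code), the sketch below picks, per class, the detection-score threshold that maximizes F1 on a validation set; rare classes typically end up with lower thresholds than common ones.

```python
# Generic per-class threshold tuning: for one class, pick the
# detection-score threshold that maximizes F1 on a validation set.
import numpy as np

def best_threshold(scores: np.ndarray, labels: np.ndarray) -> float:
    # scores: detector confidences for this class; labels: 1 if the
    # detection is a true positive (IoU matching assumed already done).
    best_t, best_f1 = 0.5, -1.0
    for t in np.linspace(0.05, 0.95, 19):
        pred = scores >= t
        tp = np.sum(pred & (labels == 1))
        fp = np.sum(pred & (labels == 0))
        fn = np.sum(~pred & (labels == 1))
        f1 = 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 0.0
        if f1 > best_f1:
            best_t, best_f1 = t, f1
    return best_t

# Toy usage: the lowest threshold rejecting both false positives wins.
scores = np.array([0.9, 0.8, 0.4, 0.3, 0.2])
labels = np.array([1, 1, 1, 0, 0])
print(round(best_threshold(scores, labels), 2))  # 0.35
```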