Publications

Max-tree computation on GPUs

Nicolas Blin · Edwin Carlinet · Florian Lemaitre · Lionel Lacassagne · Thierry Géraud

In Mathematical Morphology, the max-tree is a region-based representation that encodes the inclusion relationship of the threshold sets of an image. This tree has proven useful in numerous image processing applications. Over the last decade, efforts to reduce its construction time have mixed algorithmic optimizations with parallel and distributed computing. Nevertheless, no algorithm yet takes advantage of the computing power of massively parallel architectures. In this work, we propose the first GPU algorithm to compute the max-tree. The proposed approach leads to significant speed-ups and is up to one order of magnitude faster than the current state-of-the-art parallel CPU algorithms. This work paves the way for integrating the max-tree into GPU image processing pipelines and real-time image processing based on Mathematical Morphology. It is also a foundation for porting other image representations from Mathematical Morphology to GPUs.
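The classic sequential baseline that such parallel algorithms are measured against can be sketched in a few lines with union-find: process pixels in decreasing gray order and merge each pixel with its already-processed neighbours. This is only an illustrative CPU sketch (the function name, 4-connectivity choice, and omission of the usual canonicalization pass are assumptions, not the paper's code):

```python
import numpy as np

def max_tree(image):
    """Union-find max-tree sketch: visit pixels from brightest to darkest,
    attaching the roots of already-visited neighbouring components to the
    current pixel. Returns a flat parent array; the root points to itself."""
    h, w = image.shape
    flat = image.ravel()
    order = np.argsort(flat, kind="stable")[::-1]   # decreasing intensity
    parent = np.full(flat.size, -1, dtype=np.int64)
    zpar = np.full(flat.size, -1, dtype=np.int64)   # union-find forest

    def find(x):
        # Path-halving root search in the union-find forest.
        while zpar[x] != x:
            zpar[x] = zpar[zpar[x]]
            x = zpar[x]
        return x

    for p in order:
        parent[p] = p
        zpar[p] = p
        y, x = divmod(int(p), w)
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):  # 4-connectivity
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                n = ny * w + nx
                if zpar[n] != -1:          # neighbour already processed
                    r = find(n)
                    if r != p:
                        parent[r] = p      # attach its component under p
                        zpar[r] = p
    return parent
```

Parents always point toward lower (or equal) gray levels, so the single root of a connected image is a global minimum; a production implementation would add a canonicalization pass so that each tree node has a unique canonical pixel.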

New security protocols for offline point-of-sale machines

Nour El Madhoun · Emmanuel Bertin · Mohamad Badra · Guy Pujolle

EMV protocol
EMV vulnerabilities
NFC
offline
payment
security

EMV (Europay MasterCard Visa) is the protocol implemented to secure the communication between a client’s payment device and a Point-of-Sale machine during a contact or NFC (Near Field Communication) purchase transaction. In several studies, researchers have analyzed the operation of this protocol in order to verify its security: unfortunately, they have identified two security vulnerabilities that lead to multiple attacks and dangerous risks threatening both clients and merchants. In this paper, we propose new security solutions that aim to overcome these two dangerous EMV vulnerabilities. Our solutions address the case of Point-of-Sale machines that do not have access to the banking network and therefore operate in "offline" connectivity mode. We verify the correctness of our proposals using the Scyther security verification tool.

Local intensity order transformation for robust curvilinear object segmentation

Tianyi Shi · Nicolas Boutry · Yongchao Xu · Thierry Géraud

Segmentation of curvilinear structures is important in many applications, such as retinal blood vessel segmentation for early detection of vessel diseases and pavement crack segmentation for road condition evaluation and maintenance. Currently, deep learning-based methods have achieved impressive performance on these tasks. Yet, most of them mainly focus on finding powerful deep architectures while ignoring the inherent curvilinear structure feature (e.g., the curvilinear structure is darker than its context) that would yield a more robust representation. Consequently, performance usually drops sharply in cross-dataset evaluation, which poses great challenges in practice. In this paper, we aim to improve generalizability by introducing a novel local intensity order transformation (LIOT). Specifically, we transform a gray-scale image into a contrast-invariant four-channel image based on the intensity order between each pixel and its nearby pixels along the four (horizontal and vertical) directions. This results in a representation that preserves the inherent characteristics of the curvilinear structure while being robust to contrast changes. Cross-dataset evaluation on three retinal blood vessel segmentation datasets demonstrates that LIOT improves the generalizability of some state-of-the-art methods. Additionally, cross-dataset evaluation between retinal blood vessel segmentation and pavement crack segmentation shows that LIOT preserves the inherent characteristics of curvilinear structures across large appearance gaps. An implementation of the proposed method is available at https://github.com/TY-Shi/LIOT.
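From the description above, such a transformation is straightforward to sketch: each channel packs, into one binary code per pixel, the outcomes of comparing the pixel against its neighbours along one direction. This sketch assumes 8 comparison pixels per direction and zero padding at the image border, so details may differ from the reference implementation linked above:

```python
import numpy as np

def liot(gray, k=8):
    """Sketch of a local intensity order transformation: each of the four
    output channels encodes, as a k-bit number, whether the centre pixel
    is strictly greater than each of its k neighbours along one axis
    direction (left, right, up, down). Pixels outside the image read as 0."""
    h, w = gray.shape
    g = np.pad(gray.astype(np.int32), k)        # zero padding at the border
    centre = g[k:k + h, k:k + w]
    out = np.zeros((4, h, w), dtype=np.uint8)
    dirs = [(0, -1), (0, 1), (-1, 0), (1, 0)]   # left, right, up, down
    for c, (dy, dx) in enumerate(dirs):
        code = np.zeros((h, w), dtype=np.uint8)
        for i in range(1, k + 1):
            # Neighbour i steps away along the current direction.
            nb = g[k + dy * i : k + dy * i + h, k + dx * i : k + dx * i + w]
            code |= (centre > nb).astype(np.uint8) << (i - 1)
        out[c] = code
    return out
```

Because only the *order* of intensities is used, any strictly increasing contrast change leaves the output unchanged, which is exactly the invariance the abstract claims.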

ETAP: Experimental typesetting algorithms platform

Didier Verna

We present the early development stages of ETAP, a platform for experimenting with typesetting algorithms. The purpose of this platform is twofold: while its primary objective is to provide building blocks for quickly and easily designing and testing new algorithms (or variations on existing ones), it can also be used as an interactive, real-time demonstrator for many features of digital typography, such as kerning, hyphenation, or ligaturing.
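As a taste of the kind of building block such a platform hosts, here is a minimal greedy first-fit line breaker (purely illustrative, not ETAP code; real typesetting works on glue and penalties rather than character counts):

```python
def first_fit_lines(words, width):
    """Greedy first-fit line breaking: append each word to the current
    line if it fits (counting one inter-word space), otherwise start a
    new line. An overlong word gets a line of its own."""
    lines, current = [], ""
    for word in words:
        candidate = word if not current else current + " " + word
        if len(candidate) <= width or not current:
            current = candidate
        else:
            lines.append(current)
            current = word
    if current:
        lines.append(current)
    return lines
```

Global optimizers such as Knuth-Plass instead minimize badness over the whole paragraph; comparing the two on the same text is precisely the kind of experiment a platform like ETAP is meant to make easy.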

How to boost close-range remote sensing courses using a serious game: Uncover in a fun way the complexity and transversality of multi-domain field acquisitions

Loïca Avanthey · Laurent Beaudoin

Close-range remote sensing, and more particularly its acquisition part, which is linked to field robotics, is at the crossroads of many scientific and engineering fields. Thus, it takes time for students to acquire the solid foundations needed before practicing on real systems. We are therefore interested in a means that allows students without prerequisites to quickly grasp the fundamentals of this interdisciplinary field. For this, we adapted a haggle game to the close-range remote sensing theme. In this article, we explain the mechanics that serve our educational purposes. We have used it, so far, for four academic years with hundreds of students. The experience was assessed through quality surveys and quizzes used to compute success indicators. The results show that the serious game is well appreciated by the students. It allows them to better structure information and acquire a good global vision of multi-domain acquisition and data processing in close-range remote sensing. The students are also more involved in the rest of the lessons, all of which helps facilitate their learning of the theoretical parts. We were thus able to shorten the time before moving on to real practice, replacing three lesson sessions with one serious game session while improving mastery of fundamental skills. The designed serious game can be useful for close-range remote sensing teachers looking for an effective starting lesson. In addition, teachers from other technical fields can draw inspiration from the creation mechanisms described in this article to create their own adapted versions. Such a serious game is also a good asset for selecting promising students in a recruitment context.

GenIDA, an international participatory database to improve knowledge of the natural history and comorbidities of genetic forms of neurodevelopmental disorders

Jean-Louis Mandel · Pauline Burger · Axelle Strehle · Florent Colin · Timothée Mazzucotelli · Nicole Collot · S. Baer · Benjamin Durand · Amélie Piton · Romain Coutelle · Elise Schaefer · Pierre Parrend · L. Faivre · K. Jobard Garou · D. Geneviève · V. Ruault · D. Martin · R. Caumes · T. Smol · J. Ghoumid · F. Ropert Conquer · J. Kummeling · C. Ockeloen · Tjitske Kleefstra · David Koolen

Participatory database
genetics
neurodevelopmental disorders

Continuous well-composedness implies digital well-composedness in $n$-D

Nicolas Boutry · Rocio Gonzalez-Diaz · Laurent Najman · Thierry Géraud

In this paper, we prove that when an $n$-D cubical set is continuously well-composed (CWC), that is, when the boundary of its continuous analog is a topological $(n-1)$-manifold, then it is digitally well-composed (DWC), meaning that it does not contain any critical configuration. We prove this result using local homology. This paper is the sequel to a previous paper where we proved that DWCness does not imply CWCness in 4D.
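For readers unfamiliar with the vocabulary, the 2-D case gives the flavor of the definition (an illustrative reminder, not the paper's full $n$-D statement):

```latex
% In 2-D, a binary image $X$ is digitally well-composed (DWC) iff no
% $2 \times 2$ block of pixel values matches a critical configuration:
\[
\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}
\quad \text{or} \quad
\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}.
\]
% At such a block the boundary of the continuous analog pinches at the
% block's central point and fails to be a $1$-manifold there.
```

In higher dimensions, critical configurations generalize these diagonal patterns to pairs of antagonist points in blocks of higher dimension.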

One-class ant-miner: Selection of majority class rules for binary rule-based classification

Naser Ghannad · Roland Guio · Pierre Parrend

AntMiner
Evolutionary algorithm
Rule-based classifier
Ant colony classification
Imbalanced dataset
Binary classification

In recent years, high-performance models based on deep learning have been introduced; however, these models lack the interpretability to complement their high efficiency. Rule-based classifiers can be used to obtain explainable artificial intelligence. Rule-based classifiers use a labeled dataset to extract rules that express the relationships between inputs and expected outputs. Although many evolutionary and non-evolutionary algorithms have been developed to solve this problem, we hypothesize that rule-based evolutionary algorithms such as the AntMiner family can provide good approximate solutions to problems that cannot be addressed efficiently using other techniques. This study proposes a novel supervised rule-based classifier for binary classification tasks and evaluates the extent to which algorithms in the AntMiner family can address this problem. First, we describe different versions of AntMiner. We then introduce the one-class AntMiner (OCAntMiner) algorithm, which can work with different imbalance ratios. Next, we evaluate these algorithms on specific synthetic datasets using the AUPRC, AUROC, and MCC evaluation metrics and rank them accordingly. The results demonstrate that OCAntMiner outperforms the other versions of AntMiner on the specified metrics.
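To make the one-class setting concrete, here is a hypothetical sketch of a rule list that describes only the majority class, plus the MCC metric mentioned above (the rule representation and all names are illustrative, not the paper's):

```python
import math

def matches(rule, sample):
    """A rule is a list of (attribute, value) conditions; it fires
    when the sample satisfies every condition."""
    return all(sample.get(attr) == val for attr, val in rule)

def classify(rules, sample):
    """One-class rule list: rules cover only the majority class, so any
    sample no rule fires on falls through to the minority class."""
    for rule in rules:
        if matches(rule, sample):
            return "majority"
    return "minority"

def mcc(tp, tn, fp, fn):
    """Matthews correlation coefficient, robust under class imbalance:
    +1 = perfect, 0 = random, -1 = total disagreement."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return 0.0 if denom == 0 else (tp * tn - fp * fn) / denom
```

Learning only majority-class rules sidesteps the scarcity of minority examples, which is why metrics such as MCC and AUPRC, rather than accuracy, are the right yardsticks on imbalanced data.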

Hate speech and toxic comment detection using transformers

Pierre Guillaume · Corentin Duchêne · Réda Dehak

Hate speech and toxic comment detection on social media has proven to be an essential issue for content moderation. This paper presents a comparison between different Transformer models for hate speech detection, such as HateBERT, a BERT-based model, RoBERTa, and BERTweet, a RoBERTa-based model. These Transformer models are tested on the Jibes&Delight 2021 Reddit dataset using the same training and testing conditions. Multiple approaches are detailed in this paper, covering feature extraction and data augmentation. The paper concludes that our RoBERTa st4-aug model trained with data augmentation outperforms the plain RoBERTa and HateBERT models.
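As an illustration of the kind of token-level data augmentation used in text classification, here is a generic random-deletion sketch (the paper's own st4-aug scheme is not detailed here, so this is only an assumed example of the technique, not the method the paper evaluates):

```python
import random

def random_deletion(tokens, p=0.1, rng=None):
    """Drop each token independently with probability p, a common text
    augmentation that forces the classifier not to over-rely on any
    single word. Always keeps at least one token."""
    rng = rng if rng is not None else random.Random(0)
    kept = [t for t in tokens if rng.random() > p]
    return kept if kept else [rng.choice(tokens)]
```

Augmented copies of each training comment are added alongside the originals, effectively enlarging the dataset the Transformer is fine-tuned on.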

Current trends in blockchain implementations on the paradigm of public key infrastructure: A survey

Daniel Maldonado-Ruiz · Jenny Torres · Nour El Madhoun · Mohamad Badra

BKI
Blockchain
Decentralised
Identity Management
PKI
Smart-Contracts

Since the emergence of the Bitcoin cryptocurrency, blockchain technology has become the new Internet tool with which researchers claim to be able to solve any existing online problem. From immutable log ledger applications to authorisation system applications, the current technological consensus implies that most Internet problems could be effectively solved by deploying some form of blockchain environment. Regardless of this ’consensus’, there are decentralised Internet-based applications for which blockchain technology can actually solve several problems and improve functionality. The development of these new blockchain-based solutions is grouped into a new paradigm called Blockchain 3.0, whose concepts go far beyond the well-known cryptocurrencies. In this paper, we study the current trends in the application of blockchain to the paradigm of Public Key Infrastructures (PKI). In particular, we focus on how these current trends can guide the exploration of a fully decentralised identity system with blockchain as part of the core technology.