Dictionnaire des sciences du jeu
Publications
An end-to-end approach for the detection of phishing attacks
Badis Hammi · Tristan Billot · Danyil Bazain · Nicolas Binand · Maxime Jaen · Chems Mitta · Nour El Madhoun
The main approaches used to counteract phishing attacks rely on crowd-sourced blacklists. However, blacklists come with several drawbacks. In this paper, we present a comprehensive approach for the detection of phishing attacks. Our approach uses our own detection engine, which relies on Graph Neural Networks to leverage the hyperlink structure of the websites under analysis. Additionally, we offer a turnkey implementation to end users in the form of a Mozilla Firefox plugin.
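The abstract's core idea, a GNN leveraging a site's hyperlink structure, rests on message passing over the link graph. The following is a minimal illustrative sketch of one mean-aggregation round in plain Python, not the authors' engine; node names and features are hypothetical.

```python
# Illustrative sketch (not the paper's detection engine): one message-passing
# round over a site's hyperlink graph, the core operation a GNN uses to
# leverage link structure. Node names and feature values are hypothetical.

def message_passing_round(features, edges):
    """Blend each node's features with the mean of its successors' features."""
    agg = {n: list(f) for n, f in features.items()}
    for node in features:
        neigh = [features[m] for (s, m) in edges if s == node]
        if not neigh:
            continue  # no outgoing links: features stay unchanged
        for i in range(len(agg[node])):
            mean = sum(v[i] for v in neigh) / len(neigh)
            agg[node][i] = 0.5 * (features[node][i] + mean)
    return agg

# Toy hyperlink graph: a landing page links to a login form and an external host.
feats = {"landing": [1.0, 0.0], "login": [0.0, 1.0], "external": [0.0, 0.0]}
edges = [("landing", "login"), ("landing", "external")]
updated = message_passing_round(feats, edges)
```

After enough rounds, each page's representation encodes its link neighborhood, which a classifier head can then score for phishing likelihood.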
Automatic vectorization of historical maps: A benchmark
Yizi Chen · Joseph Chazalon · Edwin Carlinet · Minh Ôn Vũ Ngọc · Clément Mallet · Julien Perret
Shape vectorization is a key stage of the digitization of large-scale historical maps, especially city maps that exhibit complex and valuable details. Having access to digitized buildings, building blocks, street networks and other geographic content opens numerous new approaches for historical studies such as change tracking, morphological analysis and density estimation. In the context of the digitization of Paris atlases created in the 19th and early 20th centuries, we have designed a supervised pipeline that reliably extracts closed shapes from historical maps. This pipeline is based on a supervised edge filtering stage using deep filters, and a closed shape extraction stage using a watershed transform. However, it likely relies on multiple suboptimal methodological choices that hamper vectorization performance in terms of accuracy and completeness. This paper comprehensively and objectively investigates which solutions are the most adequate among the numerous possibilities. The following contributions are subsequently introduced: (i) we propose an improved training protocol for map digitization; (ii) we introduce a joint optimization of the edge detection and shape extraction stages; (iii) we compare the performance of state-of-the-art deep edge filters with topology-preserving loss functions, including vision transformers; (iv) we evaluate the end-to-end deep learnable watershed against the Meyer watershed. We subsequently design the critical path for a fully automatic extraction of key elements of historical maps. All the data, code and benchmark results are freely available at https://github.com/soduco/Benchmark_historical_map_vectorization.
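The pipeline's second stage turns a filtered edge map into closed shapes. As a much-simplified stand-in for that step (the paper uses a watershed transform; this flood-fill labeling only illustrates the idea of recovering regions enclosed by edges), consider:

```python
# Simplified stand-in for the closed-shape extraction stage: label the
# regions of a binarized edge map by flood fill. The paper itself uses a
# watershed transform; this only illustrates the region-recovery idea.
from collections import deque

def extract_regions(edge_map):
    """Label 4-connected regions of non-edge pixels (1 = edge, 0 = free)."""
    h, w = len(edge_map), len(edge_map[0])
    labels = [[0] * w for _ in range(h)]
    next_label = 0
    for y in range(h):
        for x in range(w):
            if edge_map[y][x] == 0 and labels[y][x] == 0:
                next_label += 1
                queue = deque([(y, x)])
                labels[y][x] = next_label
                while queue:
                    cy, cx = queue.popleft()
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w \
                           and edge_map[ny][nx] == 0 and labels[ny][nx] == 0:
                            labels[ny][nx] = next_label
                            queue.append((ny, nx))
    return next_label, labels

# A 5x5 map with one vertical edge line splitting the canvas into two regions.
edge_map = [[0, 0, 1, 0, 0] for _ in range(5)]
n_regions, region_labels = extract_regions(edge_map)
```

A watershed improves on this by flooding from markers over a gradient, so weak or slightly broken edges still yield well-separated shapes.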
Unsupervised discovery of interpretable visual concepts
Caroline Mazini-Rodrigues · Nicolas Boutry · Laurent Najman
Providing interpretability of deep-learning models to non-experts, while fundamental for responsible real-world usage, is challenging. Attribution maps from xAI techniques, such as Integrated Gradients, are a typical example of a visualization technique that contains a high level of information but is difficult to interpret. In this paper, we propose two methods, Maximum Activation Groups Extraction (MAGE) and Multiscale Interpretable Visualization (Ms-IV), to explain the model’s decision and enhance global interpretability. MAGE finds, for a given CNN, combinations of features which globally form a semantic meaning, which we call concepts. We group these similar feature patterns by clustering them into concepts, which we visualize through Ms-IV. This latter method is inspired by Occlusion and Sensitivity analysis (incorporating causality) and uses a novel metric, called Class-aware Order Correlation (CAOC), to globally evaluate the most important image regions according to the model’s decision space. We compare our approach to xAI methods such as LIME and Integrated Gradients. Experimental results show that Ms-IV achieves higher localization and faithfulness values. Finally, a qualitative evaluation of combined MAGE and Ms-IV demonstrates that, based on the visualizations, humans are able to agree with the decisions attached to clusters’ concepts and to detect, among a given set of networks, the existence of bias.
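Ms-IV builds on occlusion-style analysis: mask a region, re-run the model, and treat the output drop as that region's importance. A toy sketch of that underlying mechanism (the model, patches, and scoring here are hypothetical; CAOC itself ranks regions over the whole decision space rather than a single output):

```python
# Toy occlusion-sensitivity sketch in the spirit of Occlusion analysis that
# Ms-IV builds on. The "model" and patch coordinates are hypothetical stand-ins.

def occlusion_importance(image, model, patches):
    """Score each patch by the drop in model output when it is zeroed out."""
    base = model(image)
    scores = {}
    for (y0, x0, y1, x1) in patches:
        occluded = [row[:] for row in image]  # copy, then mask the patch
        for y in range(y0, y1):
            for x in range(x0, x1):
                occluded[y][x] = 0.0
        scores[(y0, x0, y1, x1)] = base - model(occluded)
    return scores

# Hypothetical "model": sums the top-left quadrant, so only that region matters.
model = lambda img: sum(img[y][x] for y in range(2) for x in range(2))
image = [[1.0] * 4 for _ in range(4)]
scores = occlusion_importance(image, model, [(0, 0, 2, 2), (2, 2, 4, 4)])
```

A large positive score marks a region the model depends on; regions the model ignores score near zero.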
Where is VALDO? VAscular Lesions Detection and segmentatiOn challenge at MICCAI 2021
Carole H. Sudre · Kimberlin Van Wijnen · Florian Dubost · Hieab Adams · David Atkinson · Frederik Barkhof · Mahlet A. Birhanu · Esther E. Bron · Robin Camarasa · Nish Chaturvedi · Yuan Chen · Zihao Chen · Shuai Chen · Qi Dou · Tavia Evans · Ivan Ezhov · Haojun Gao · Marta Girones Sanguesa · Juan Domingo Gispert · Beatriz Gomez Anson · Alun D. Hughes · M. Arfan Ikram · Silvia Ingala · H. Rolf Jaeger · Florian Kofler · Hugo J. Kuijf · Denis Kutnar · Minho Lee · Bo Li · Luigi Lorenzini · Bjoern Menze · Jose Luis Molinuevo · Yiwei Pan · Puybareau · Rafael Rehwald · Ruisheng Su · Pengcheng Shi · Lorna Smith · Therese Tillin · Guillaume Tochon · Hélène Urien · Bas H. M. Velden · Isabelle F. Velpen · Benedikt Wiestler · Frank J. Wolters · Pinar Yilmaz · Marius Groot · Meike W. Vernooij · Marleen Bruijne
Imaging markers of cerebral small vessel disease provide valuable information on brain health, but their manual assessment is time-consuming and hampered by substantial intra- and interrater variability. Automated rating may benefit biomedical research, as well as clinical assessment, but the diagnostic reliability of existing algorithms is unknown. Here, we present the results of the VAscular Lesions DetectiOn and Segmentation (Where is VALDO?) challenge that was run as a satellite event at the international conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) 2021. This challenge aimed to promote the development of methods for automated detection and segmentation of small and sparse imaging markers of cerebral small vessel disease, namely enlarged perivascular spaces (EPVS) (Task 1), cerebral microbleeds (Task 2) and lacunes of presumed vascular origin (Task 3), while leveraging weak and noisy labels. Overall, 12 teams participated in the challenge, proposing solutions for one or more tasks (4 for Task 1-EPVS, 9 for Task 2-Microbleeds and 6 for Task 3-Lacunes). Multi-cohort data was used in both training and evaluation. Results showed a large variability in performance both across teams and across tasks, with promising results notably for Task 1-EPVS and Task 2-Microbleeds, while results for Task 3-Lacunes are not yet practically useful. The challenge also highlighted a performance inconsistency across cases that may deter use at an individual level, while still proving useful at a population level.
SAT-based learning of computation tree logic
Adrien Pommellet · Daniel Stan · Simon Scatton
The CTL learning problem consists in finding, for a given sample of positive and negative Kripke structures, a distinguishing CTL formula that is verified by the former but not by the latter. Further constraints may bound the size and shape of the desired formula, or even ask for its minimality in terms of syntactic size. This synthesis problem is motivated by explanation generation for dissimilar models, e.g. comparing a faulty implementation with the original protocol. We devise a SAT-based encoding for a fixed-size CTL formula, then provide an incremental approach that guarantees minimality. We further report on a prototype implementation whose contribution is twofold: first, it allows us to assess the efficiency of various output fragments and optimizations; second, we can experimentally evaluate this tool by randomly mutating Kripke structures or syntactically introducing errors in higher-level models, then learning distinguishing CTL formulas.
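The learning target can be stated concretely: a formula distinguishes the sample if every positive Kripke structure satisfies it at its initial state and no negative one does. The sketch below illustrates only that acceptance condition with a single CTL operator (EX) on toy structures; it is not the paper's SAT encoding, and the structure encoding is an assumption of this sketch.

```python
# Toy illustration of the CTL learning objective (not the SAT encoding):
# a formula distinguishes the sample iff all positive structures satisfy
# it at their initial state and no negative structure does.

def sat_ex(kripke, prop):
    """States satisfying EX prop: some successor is labeled with prop."""
    return {s for s, succs in kripke["trans"].items()
            if any(prop in kripke["label"][t] for t in succs)}

def distinguishes(formula_sat, positives, negatives):
    return (all(k["init"] in formula_sat(k) for k in positives)
            and not any(k["init"] in formula_sat(k) for k in negatives))

# Two 2-state structures differing in the label reachable in one step.
good = {"init": 0, "trans": {0: [1], 1: [1]}, "label": {0: set(), 1: {"ok"}}}
bad  = {"init": 0, "trans": {0: [1], 1: [1]}, "label": {0: set(), 1: {"err"}}}
result = distinguishes(lambda k: sat_ex(k, "ok"), [good], [bad])
```

The SAT-based approach searches the space of such formulas by encoding a fixed-size syntax tree and these satisfaction constraints as a propositional formula, growing the size bound until a solution exists.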
Concurrent stochastic lossy channel games
Daniel Stan · Muhammad Najib · Anthony Widjaja Lin · Parosh Aziz Abdulla
Closure and decision properties for higher-dimensional automata
Amazigh Amrane · Hugo Bazille · Uli Fahrenberg · Krzysztof Ziemiański
Ce que nous savons sur (les) sciences du jeu : Analyse bibliométrique et lexicométrique des articles de la revue (octobre 2013 - mai 2022)
Aymeric Brody · Fanny Barnabé · Nicolas Bourgeois · Vincent Berry · Vinciane Zabban
On its website, Sciences du jeu describes itself as an "international and interdisciplinary journal whose mission is to develop and promote French-speaking researc[h] on play", "to foster dialogue between social sciences and set off debates on this particular subject". Created in 2013 following a study day in tribute to the work of Jacques Henriot, its scientific program clearly follows in his footsteps. Indeed, Sciences du jeu defines itself not only as "open to all approaches or methods", but also to "every aspect of play" (including, but not exclusively, video games) and to "researc[h] from various fields related to play in a broad sense (objects, structures, situations, experiences, attitudes)". Ten years after the publication of the first issue of the journal, we may well ask to what extent the articles published to date reflect the approach to play originally promoted by Henriot. What about the references to this author and to the concepts he developed in his work? More generally, what are the bibliographical references most frequently used by the journal’s authors? What do they tell us about their conception of play and how they approach it? On which disciplinary approaches and methods are their analyses most often based? What types of games, themes and/or fields are most frequently studied? What are the gray areas and less visible fields? Finally, who are the authors of these papers (in terms of gender and status), where do they come from (in terms of affiliation and disciplinary roots), and how does this influence their perspective on play? To answer these questions, this paper draws on a bibliometric, lexicometric and sociological analysis based on a corpus comprising all the articles published in the first seventeen issues of the journal.
A CP-based automatic tool for instantiating truncated differential characteristics
François Delobel · Patrick Derbez · Arthur Gontier · Loïc Rouquette · Christine Solnon
An important criterion to assert the security of a cryptographic primitive is its resistance against differential cryptanalysis. For word-oriented primitives, a common technique to determine the number of rounds required to ensure immunity against differential distinguishers is to consider truncated differential characteristics and to count the number of active S-boxes. Doing so provides an upper bound on the probability of the best differential characteristic at a reduced computational cost. However, in order to design very efficient primitives, it might be necessary to evaluate the probability more accurately. This is usually done in a second step, during which one tries to instantiate truncated differential characteristics with actual values and computes the corresponding probabilities. This step is usually carried out with ad-hoc algorithms, or with CP or MILP models for generic solvers. In this paper, we present a generic approach for automatically generating these models that handles all word-oriented ciphers. Furthermore, the running times to solve these models are very competitive with those of all previous dedicated approaches.
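The first step the abstract refines, bounding the best characteristic's probability via active S-box counts, is simple arithmetic: each active S-box contributes at most the S-box's best differential probability to the product. A back-of-the-envelope sketch, with hypothetical cipher parameters:

```python
# Sketch of the coarse bound the paper's instantiation step refines:
# counting active S-boxes along a truncated characteristic upper-bounds
# the probability of the best differential characteristic. The per-round
# counts and S-box probability below are hypothetical, AES-like values.
from math import log2

def probability_upper_bound(active_sboxes_per_round, max_sbox_prob):
    """Each active S-box contributes at most max_sbox_prob to the product."""
    total_active = sum(active_sboxes_per_round)
    return total_active, max_sbox_prob ** total_active

# e.g. 4 rounds with best S-box differential probability 2^-6.
active, bound = probability_upper_bound([1, 5, 9, 10], 2 ** -6)
weight = -log2(bound)  # the bound expressed as 2^-weight
```

Instantiating the truncated characteristic with actual word values, the paper's focus, replaces this worst-case product with the true probability of a concrete characteristic, which can be substantially lower.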