For the past eight years I have been working in various fields of AI and Computer Science research, with a particular interest in cognitive and social phenomena. I am currently working at the Informatics Institute of the University of Amsterdam on the conception of a computer-aided, policy-based environment for socio-technical infrastructures (e.g. data-sharing infrastructures in healthcare, finance, or logistics), fostering the idea of normware as a third level of conception of computational systems, alongside software and hardware (see our position paper on the role of normware in trustworthy and explainable AI). I also assist in the coordination of a recently formed group of five PhD students.
Between 2016 and 2018, at Télécom ParisTech and at Paris Dauphine University, I explored alternative solutions to the interface problem (human vs. artificial cognition). I investigated the symbol grounding problem, or semantic gap, from an image interpretation perspective, exploring potential applications of conceptual spaces, mathematical morphology, and simplicity theory, working together with Isabelle Bloch, Jean-Louis Dessalles, and Jamal Atif.
My most recent publications are about:
- a critique of the "Policy Recommendations for Trustworthy AI" by the AI HLEG appointed by the European Commission;
- a logic programming-based theory of power, action and causation to be used for automated normative reasoning and normative design;
- the relationship between logic conditionals and supervenience (identified as a necessary requirement for compression), supporting an alternative explanation of the Wason selection task;
- how normware (not software, not hardware) might be a key notion to reach trustworthy and explainable AI;
- the computation of contrast on conceptual spaces, showing that graded membership functions (for properties/relations such as "hot", "sweet", or "left-of", as well as for conceptual categories) can be seen as a derived product of contrastive functions;
- the distinction between perceptual and conceptual similarity, based on the notion of contrast, suggesting a novel explanation of the issues observed with human similarity judgments concerning symmetry, triangle inequality, minimality, and the diagnosticity effect;
- the identification of relevant causes in a given scenario, including an empirical study highlighting issues with Bayesian measures;
- the quantification of notions related to the attribution of moral/legal responsibility (e.g. negligence, foreseeability, risk) based on the computation of Kolmogorov-like complexities (building upon simplicity theory).
I received my PhD in 2016 from the University of Amsterdam for research conducted at the Leibniz Center for Law, the university's Artificial Intelligence & Law department. My research project aimed to support a partial realignment of representations of law (norms) with representations of action (business processes or case stories). I explored agent-based modelling to support the conception of operational tools for administrative organizations and design tools for policy-makers. The resulting thesis is Aligning Law and Action: a conceptual and computational inquiry.
Before these experiences, I took an M.Sc. in Electronic Engineering at the Politecnico di Torino (Italy) and at ENSIMAG - INPG (France), following a double-degree program. Before returning to academia, I worked as a business analyst for McKinsey & Co., then as a freelance developer of innovative prototype applications, and as a professional musician.
I serve as PC member for IJCAI (since 2020), AAAI (AAAI-19 Outstanding PC Award), AAMAS, JURIX (since 2019), ICAIL (since 2017) and ICAART (since 2014).