Members of the LUCID group are set to give a tutorial on
Using intervals to capture and handle uncertainty
at the IEEE World Congress on Computational Intelligence (WCCI), July 19-24, 2020, Glasgow, UK
Uncertainty is pervasive across data and data sources, from sensors in engineering applications to human preference and expertise in areas as diverse as marketing to cyber security. Appropriate handling of such uncertainties depends upon three main stages: capture, modelling, and analysis of/reasoning with results.
In recent years, interest has surged in using data types that are fundamentally uncertain – in particular intervals (rather than exact numbers). This has prompted novel research into multiple facets of handling uncertainty using interval values. This includes capturing the uncertainty at source, and modelling it using intervals or higher-level models such as fuzzy sets. A variety of approaches to analysing such data have been introduced, from interval arithmetic and statistics on intervals, to similarity and distance measures applied to both ‘raw’ interval-valued datasets and fuzzy set models of the original data.
Going forward, it is expected that the use of intervals within machine learning and AI techniques will continue to grow, providing an intuitive means of capturing, accounting for, and communicating uncertainty in data.
This tutorial is designed to give researchers a practical introduction to the use of intervals for handling uncertainty in data. The tutorial will discuss relevant types and sources of uncertainty before proceeding to review and demonstrate practical approaches and tools that enable the capture, modelling and analysis of interval-valued data. This session will provide participants with an end-to-end overview and in-depth starting point for leveraging intervals within their own research.
The tutorial is structured into four main components:
1. Capturing intervals from people
The first part of the tutorial will discuss the challenges behind capturing intervals in practice, before providing some practical solutions. This will include the underlying rationale, the nature of different types of intervals – and why these matter. As a use-case, we will discuss the elicitation of intervals within the quantitative social sciences, as part of a recently introduced interval-valued questionnaire approach using a freely available software platform: DECSYS.
2. Handling and analysing interval-valued data
The second part of the tutorial will review key techniques for handling ‘raw’ interval-valued data, including interval arithmetic and the computation of summary statistics – along with associated challenges (e.g. the dependency problem).
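To make the dependency problem concrete, here is a minimal sketch of closed-interval arithmetic in Python. The class and its methods are illustrative, not part of any particular interval library: standard interval subtraction treats its two operands as independent, so subtracting an interval from itself does not yield [0, 0].

```python
# Minimal closed-interval arithmetic, illustrating the dependency problem.

class Interval:
    """A closed interval [lo, hi] with lo <= hi."""

    def __init__(self, lo, hi):
        assert lo <= hi, "lower bound must not exceed upper bound"
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        # [a, b] + [c, d] = [a + c, b + d]
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        # [a, b] - [c, d] = [a - d, b - c]; operands are treated as
        # independent, even when they are the same variable.
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

x = Interval(1, 3)
print(x - x)  # [-2, 2], not [0, 0]: the dependency problem
```

The final line shows the issue the tutorial refers to: because `x - x` ignores the dependency between the two occurrences of `x`, the result over-estimates the true range, which is exactly [0, 0].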
3. Modelling intervals using fuzzy sets
Beyond handling interval-valued data directly, a variety of approaches have been developed to model multi-source interval-valued data using fuzzy sets. We will discuss and demonstrate key algorithms, focussing in particular on the Interval Agreement Approach (IAA), which is designed to model interval-valued datasets while minimising modelling assumptions (e.g. outlier removal).
4. Case studies
In the final part of the tutorial, we will discuss a set of recent studies. These serve as real-world examples – demonstrating the efficacy of intervals through research and in applications that range from cyber security to engineering and psychology.
The tutorial will run for three hours in total, with approximately 40 minutes per section and a 20-minute break.
There are no prerequisites for this tutorial, although familiarity with fuzzy sets will be an advantage.
Prof. Christian Wagner, Prof. Vladik Kreinovich, Dr Josie McCulloch, Dr Zack Ellerby
The organisers have a track record of organising and chairing special sessions at previous IEEE conferences, annually, since 2009.
A paper authored by three LUCID members (Zack Ellerby, Josie McCulloch and Christian Wagner), together with a student from Horizon CDT (Melanie Wilson), was recently presented at the 14th International Conference on Critical Information Infrastructures Security (Linköping, Sweden). This paper won the 2019 ‘Young CRITIS Award’, given to the best paper presented at the conference by a researcher under the age of 35.
The paper, titled ‘Exploring how Component Factors and their Uncertainty Affect Judgements of Risk in Cyber-Security’ is available to view here: http://arxiv.org/abs/1910.00703
The LUCID group has returned from a successful trip to the International Conference on Fuzzy Systems (FUZZ-IEEE 2019) in New Orleans, Louisiana. The group published multiple papers at the conference, all of which are listed below:
Fuzzy Integral Driven Ensemble Classification using A Priori Fuzzy Measures
Utkarsh Agrawal, Christian Wagner, Jonathan M. Garibaldi and Daniele Soria
On the Concept of Meaningfulness in Constrained Type-2 Fuzzy Sets
Pasquale D'Alterio, Jonathan Garibaldi and Robert John
DECSYS - Discrete and Ellipse-based response Capture SYStem
Zack Ellerby, Josie McCulloch, John Young and Christian Wagner
A Preliminary Approach for the Exploitation of Citizen Science Data for Fast and Robust Fuzzy k-Nearest Neighbour Classification
Manuel Jimenez, Mercedes Torres Torres, Robert John and Isaac Triguero
Measuring Similarity Between Discontinuous Intervals - Challenges and Solutions
Shaily Kabir, Christian Wagner, Timothy C. Havens and Derek T. Anderson
On Comparing and Selecting Approaches to Model Interval-Valued Data as Fuzzy Sets
Josie McCulloch, Zack Ellerby and Christian Wagner
Measuring Inter-group Agreement on zSlice Based General Type-2 Fuzzy Sets
Javier Navarro and Christian Wagner
Leveraging IT2 Input Fuzzy Sets in Non-Singleton Fuzzy Logic Systems to Dynamically Adapt to Varying Uncertainty Levels
Direnc Pekaslan, Christian Wagner and Jonathan M. Garibaldi
A Measure of Structural Complexity of Hierarchical Fuzzy Systems Adapted from Software Engineering
Tajul Rosli Razak, Jonathan M. Garibaldi and Christian Wagner
A Novel Weighted Combination Method for Feature Selection using Fuzzy Sets
Zixiao Shen, Xin Chen and Jonathan M. Garibaldi
Fuzzy Hot Spot Identification for Big Data: An Initial Approach
Rebecca Tickle, Isaac Triguero, Grazziela P. Figueredo, Ender Ozcan, Mohammad Mesgarpour and Robert I. John
The paper "On the Relationship between Similarity Measures and Thresholds of Statistical Significance in the Context of Comparing Fuzzy Sets" (by Josie McCulloch, Zack Ellerby and Christian Wagner) has been accepted for publication and is available now here: https://doi.org/10.1109/TFUZZ.2019.2922161
Comparing fuzzy sets by computing their similarity is common, with a large set of measures of similarity available. However, while commonplace in the computational intelligence community, the application and results of similarity measures are less common in the wider scientific context, where statistical approaches are the standard for comparing distributions. This is challenging, as it means that developments around similarity measures arising from the fuzzy community are inaccessible to the wider scientific community; and that the fuzzy community fails to take advantage of a strong statistical understanding which may be applicable to comparing (fuzzy membership) functions. In this paper, we commence a body of work on systematically relating the outputs of similarity measures to the notion of statistically significant difference; that is, how (dis)similar do two fuzzy sets need to be for them to be statistically different? We explain that in this context it is useful to initially focus on dissimilarity, rather than similarity, as the former aligns directly with the widely used concept of statistical difference. We propose two methods of applying statistical tests to the outputs of fuzzy dissimilarity measures to determine significant difference. We show how the proposed work provides deeper insight into the behaviour and possible interpretation of degrees of dissimilarity and, consequently, similarity, and how the interpretation differs in respect to context (e.g., the complexity of the fuzzy sets).
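To illustrate the kind of measure the paper relates to statistical testing, here is a hedged sketch of the widely used Jaccard similarity between two fuzzy sets sampled on a shared discrete domain, and the corresponding dissimilarity (1 minus similarity). The function name and example membership values are illustrative; this is not the paper's specific method.

```python
# Jaccard similarity of two fuzzy sets given as membership-value lists
# over the same discretised domain.

def jaccard_similarity(mu_a, mu_b):
    """sum(min) / sum(max) over paired membership values."""
    num = sum(min(a, b) for a, b in zip(mu_a, mu_b))
    den = sum(max(a, b) for a, b in zip(mu_a, mu_b))
    return num / den if den else 1.0  # two empty sets are identical

# Hypothetical membership values on a 5-point discretised domain.
mu_a = [0.0, 0.5, 1.0, 0.5, 0.0]
mu_b = [0.0, 0.25, 0.75, 1.0, 0.25]

similarity = jaccard_similarity(mu_a, mu_b)
dissimilarity = 1 - similarity
```

In the framing of the paper, it is the dissimilarity value that one would then relate to a threshold of statistically significant difference.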
The paper "Similarity between interval-valued fuzzy sets taking into account the width of the intervals and admissible orders" (by H. Bustince, C. Marco-Detchart, J. Fernandez, C. Wagner, J.M. Garibaldi, Z. Takác) has been accepted for publication to Fuzzy Sets and Systems: https://doi.org/10.1016/j.fss.2019.04.002
For abstract and highlights, see: christianwagner.weebly.com/
Earlier this year, I was invited to contribute to the panel on 'AI Governance: Role of the legislators, tech companies and standard bodies' at CPDP 2019 in Brussels, Belgium. Big thanks to Mark Cole, Andra Giurgiu and the University of Luxembourg for organising and hosting an exciting and timely panel (and for inviting me, even though I know nothing about governance :) ). Also, thank you to the CPDP organisers - it was a great, really stimulating and extremely well organised conference!
A video of the panel is now available here: https://www.youtube.com/watch?v=3ZJg-2D2QIA, with brief details on what to expect below.
All the best,
Panel organised by the Interdisciplinary Centre for Security, Reliability and Trust, University of Luxembourg. Chair: Mark Cole; Co-organiser: Andra Giurgiu, University of Luxembourg (LU)
Moderator: Erik Valgaeren, Stibbe (BE)
Speakers: Alain Herrmann, National Commission for Data Protection (LU); Christian Wagner, University of Nottingham (UK); Jan Schallaböck, iRights/ISO (DE); Janna Lingenfelder, IBM/ ISO (DE)
AI calls for a “coordinated action plan” as recently stated by the European Commission. With its societal and ethical implications, it is a matter of general impact across sectors, going beyond security and trustworthiness or the creation of a regulatory framework. Hence this panel intends to address the topic of AI governance, whether such governance is needed and if so, how to ensure its consistency. It will also discuss whether existing structures and bodies are adequate to deal with such governance, or, if we perhaps need to think about creating new structures and mandate them with this task. Where do we stand and where are we heading in terms of how we are collectively dealing with the soon to be almost ubiquitous phenomenon of AI?
• Do we need AI governance? If so, who should be in charge of it?
• Is there a need to ensure consistency of such governance?
• What are the risks? Do we know them and are we in the right position to address them?
• Are existing structures/bodies sufficient to address these issues or do we perhaps need to create new ones?
LUCID members met in Nottingham city centre to celebrate Elissa Madi passing her viva. Throughout her PhD research she focused on Type-2 Fuzzy TOPSIS and worked on improving multi-criteria decision making models. All of us in LUCID wish Elissa the very best in the next stage of her life!
Congratulations to LUCID member Elissa Madi who passed her viva on Friday subject to minor corrections!
Elissa was primarily supervised by Prof Jon Garibaldi. Her thesis is entitled ‘An Improved Uncertainty in Multi-Criteria Decision Making Model Based on Type-2 Fuzzy TOPSIS’.
Cyber Security Threat Data Analyst - KTP Associate (fixed term)
Closing Date: Friday, 5th October 2018
Based primarily at J.P. Morgan, Canary Wharf, London
This is an exciting opportunity for an ambitious individual to advance their career through a Knowledge Transfer Partnership (KTP). You will be working with JP Morgan and the School of Computer Science at the University of Nottingham to develop and embed a novel methodology that delivers improved forward assessment of the likelihood of cyber security threats, drawing on a variety of uncertain data.
You will be employed by the University of Nottingham (School of Computer Science) but will be based primarily at JP Morgan, Canary Wharf, London.
This post will be offered on a fixed-term contract for a period of 36 months.
See here for more details on the role and to apply.