Ten fully funded PhD scholarships are available at the School of Computer Science. For details, have a look here: https://www.nottingham.ac.uk/jobs/currentvacancies/ref/SCI1979. The deadline for applications is 15th March 2021.
If you are interested in applying for a scholarship to join LUCID as a PhD student, please contact Prof. Christian Wagner in the first instance. You may also want to chat to some of the existing students and/or researchers in the group and look at our publications.
The paper "ADONiS—Adaptive Online Nonsingleton Fuzzy Logic Systems" (by Direnc Pekaslan, Christian Wagner and Jonathan M. Garibaldi) has been published in IEEE Transactions on Fuzzy Systems.
Real-world environments are subject to different sources of uncertainty, which can be impactful at different levels and/or over different durations, inevitably causing uncertainty levels to vary over time. Due to the heterogeneity and diversity of real-world conditions, measurement devices (e.g. sensors) cannot provide absolutely true values, only approximations, which are in turn processed as inputs to systems. The given inputs are thus exposed to a variety of effects (e.g. quadcopters subjected to varying wind gusts), making input uncertainty a principal source of uncertainty and an inseparable component of decision-making systems.
Non-Singleton Fuzzy Logic Systems have the potential to capture and handle input uncertainty within the design of input fuzzy sets. In this paper, we propose a complete ADaptive, Online Non-Singleton (ADONiS) framework which incorporates online uncertainty detection and associated parameterization of the Non-Singleton input fuzzy sets, thus providing an improved capacity to adapt to variations in the level of input-affecting noise, common in real-world applications.
The proposed approach avoids both the need for a priori knowledge of the uncertainty levels experienced at runtime and the need for offline training while providing the means for systems to continuously adapt to changing levels of uncertainty. Specifically, in the proposed approach, input fuzzy set parameters are continuously adapted based on information gained from an uncertainty level estimation process which iteratively estimates uncertainty levels over a sequence of recent observations.
The proposed ADONiS framework for combining online determination of uncertainty levels with associated adaptation of input fuzzy sets provides an efficient and effective solution which elegantly models input uncertainty ‘where it arises’, without requiring changes in any other part (e.g. antecedents, rules or consequents) of the FLS. In doing so, ADONiS limits tuning to the fuzzification stage and leaves the rules ‘untouched’ (whether generated from experts’ insights or in a data-driven way), thus providing a fundamental requirement for good interpretability, provided the rules and sets were well understood initially.
Time series forecasting provides an ideal test bed for systematically evaluating techniques designed to deal with input uncertainty, as it offers the potential to accurately control the levels of uncertainty/noise affecting system inputs at any given time. In this paper, we therefore focus on applying the proposed ADONiS framework to the prediction of two common chaotic time series (Mackey-Glass, Lorenz) as an initial application area, enabling efficient evaluation and demonstration.
An animated illustration of ADONiS’s adaptive behaviour in response to variation in the levels of uncertainty affecting a system’s inputs can be seen below.
At each time step, inputs are associated with a given non-singleton FS, for which the parameters are determined directly by the levels of uncertainty detected within the preceding time frame. Employing an uncertainty detection technique to construct input FSs provides the capacity for adapting to changes in the levels of uncertainty affecting a system (e.g. with respect to varying environmental circumstances).
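The per-time-step adaptation described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the first-difference noise estimator, the Gaussian shape of the input fuzzy set, and all parameter choices are simplifying assumptions.

```python
import math

def estimate_noise_std(window):
    """Estimate the noise level over a recent window of observations.

    Uses the standard deviation of first differences as a simple detrending
    heuristic (the paper's own estimator may differ), divided by sqrt(2)
    because differencing doubles the noise variance.
    """
    diffs = [b - a for a, b in zip(window, window[1:])]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / n
    return math.sqrt(var / 2)

def gaussian_input_fs(x, window):
    """Return a non-singleton Gaussian input fuzzy set centred on the
    current observation x, whose width tracks the uncertainty estimated
    from the recent window, so the input set adapts at every time step."""
    sigma = max(estimate_noise_std(window), 1e-6)  # avoid singleton collapse
    return lambda u: math.exp(-0.5 * ((u - x) / sigma) ** 2)
```

In use, the returned function would be intersected with the rule antecedents during fuzzification; under low noise the input set narrows towards a singleton, and under high noise it widens accordingly.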
Acknowledging the fact that in the real world, sources of (varying levels of) uncertainty are pervasive, a variety of different training/testing scenarios were explored to systematically evaluate the proposed framework. The results from the comparison of the proposed Adaptive and Non-adaptive techniques suggest that the proposed ADONiS approach of dynamically changing input FSs provides significant advantages, particularly in environments that include high variation in noise levels, which are common in real-world applications. For more details, please see the paper: https://doi.org/10.1109/TFUZZ.2019.2933787
The IEEE Transactions on AI is now live and open for submissions at https://mc.manuscriptcentral.com/tai-ieee.
"The IEEE Transactions on Artificial Intelligence (TAI) is a multidisciplinary journal publishing papers on theories and methodologies of Artificial Intelligence. Applications of Artificial Intelligence are also considered.
Topics covered by IEEE TAI include, but are not limited to, Agent-based Systems, Augmented Intelligence, Autonomic Computing, Constraint Systems, Explainable AI, Knowledge-Based Systems, Learning Theories, Planning, Reasoning, Search, Natural Language Processing, and Applications. Technical papers addressing contemporary topics in AI such as Ethics and Social Implications are welcomed."
For more details, see the Call for Papers here and the journal website here.
Today marked the funeral of Professor Robert (Bob) John, a great friend, scholar, and colleague. Bob was a member and supporter of LUCID from its very beginning and supported everyone across the group, sharing his expertise on fuzzy sets, in particular type-2 fuzzy sets and fuzzy logic. Bob, you will be missed.
Christian & LUCID, 5th March 2020
Members of the LUCID group are set to give a tutorial on
Using intervals to capture and handle uncertainty
at the World Congress on Computational Intelligence (WCCI), July 19-24, 2020, Glasgow, UK
Uncertainty is pervasive across data and data sources, from sensors in engineering applications to human preference and expertise in areas as diverse as marketing and cyber security. Appropriate handling of such uncertainties depends upon three main stages: capture, modelling, and analysis of/reasoning with results.
In recent years, interest has surged in using data types that are fundamentally uncertain – in particular intervals (rather than exact numbers). This has promoted novel research into multiple facets of handling uncertainty using interval values. This includes capturing the uncertainty at source, and modelling it using intervals or higher-level models such as fuzzy sets. A variety of approaches to analysing said data have been introduced, from interval arithmetic and statistics on intervals, to similarity and distance measures applied to both ‘raw’ interval-valued datasets and fuzzy set models of the original data.
Going forward, it is expected that the use of intervals within machine learning and AI techniques will continue to grow, providing an intuitive means of capturing, accounting for, and communicating uncertainty in data.
This tutorial is designed to give researchers a practical introduction to the use of intervals for handling uncertainty in data. The tutorial will discuss relevant types and sources of uncertainty before proceeding to review and demonstrate practical approaches and tools that enable the capture, modelling and analysis of interval-valued data. This session will provide participants with an end-to-end overview and in-depth starting point for leveraging intervals within their own research.
The tutorial is structured into four main components:
1. Capturing intervals from people
The first part of the tutorial will discuss the challenges behind capturing intervals in practice, before providing some practical solutions. This will include the underlying rationale, the nature and different types of intervals – and why these matter. As a use-case, we will discuss the elicitation of intervals within the quantitative social sciences, as part of a recently introduced interval-valued questionnaire approach using a freely available software platform: DECSYS.
2. Handling and analysing interval-valued data
The second part of the tutorial will review key techniques for handling ‘raw’ interval-valued data, including interval arithmetic and the computation of summary statistics – along with associated challenges (e.g. the dependency problem).
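As a small illustration of both interval arithmetic and the dependency problem mentioned above, consider a minimal interval type. This is a sketch under simple assumptions (closed intervals, addition and subtraction only), not tied to any particular library:

```python
class Interval:
    """A closed interval [lo, hi] with basic interval arithmetic."""
    def __init__(self, lo, hi):
        assert lo <= hi
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        # Bounds add pairwise: the sum of the smallest possible values
        # gives the new lower bound, and likewise for the upper bound.
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        # Subtraction pairs each bound with the opposite bound of the other.
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

x = Interval(1, 2)
# The dependency problem: interval arithmetic treats both occurrences of x
# as independent, so x - x yields [-1, 1] rather than the exact [0, 0].
print(x - x)
```

The overestimation shown by `x - x` is exactly the dependency problem: repeated occurrences of the same uncertain quantity are treated as independent, which widens the result beyond its true range.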
3. Modelling intervals using fuzzy sets
Beyond handling interval-valued data directly, a variety of approaches have been developed to model multi-source interval-valued data using fuzzy sets. We will discuss and demonstrate key algorithms, focussing in particular on the Interval Agreement Approach (IAA), which is designed to model interval-valued datasets while minimising modelling assumptions (e.g. outlier removal).
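The core idea of the IAA in its simplest form can be sketched as follows. This is a simplified sketch of the crisp-interval case only (the full approach also handles fuzzy intervals and higher-order models); the data values are purely illustrative.

```python
def iaa_membership(intervals, x):
    """Simplified Interval Agreement Approach (crisp intervals):
    the membership of x is the fraction of source intervals containing x,
    so outlying intervals simply receive low local agreement rather than
    having to be discarded from the dataset."""
    if not intervals:
        return 0.0
    return sum(lo <= x <= hi for lo, hi in intervals) / len(intervals)

# Hypothetical interval-valued responses from three sources.
responses = [(2.0, 5.0), (3.0, 6.0), (4.0, 9.0)]
print(iaa_membership(responses, 4.5))  # all three intervals agree here
```

The resulting fuzzy set is piecewise constant, with membership levels determined entirely by how many sources agree at each point, which is what allows the approach to minimise modelling assumptions.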
4. Case studies
In the final part of the tutorial, we will discuss a set of recent studies. These serve as real-world examples – demonstrating the efficacy of intervals through research and in applications that range from cyber security to engineering and psychology.
The tutorial will run for three hours in total, with approximately 40 minutes per section and a 20-minute break.
There are no prerequisites for this tutorial, although familiarity with fuzzy sets will be an advantage.
Prof. Christian Wagner, Prof. Vladik Kreinovich, Dr Josie McCulloch, Dr Zack Ellerby
The organisers have a track record of organising and chairing special sessions at previous IEEE conferences, annually, since 2009.
A paper authored by three LUCID members (Zack Ellerby, Josie McCulloch and Christian Wagner), together with a student from Horizon CDT (Melanie Wilson), was recently presented at the 14th International Conference on Critical Information Infrastructures Security (Linköping, Sweden). This paper won the ‘Young CRITIS Award’ 2019, given to the best paper at the conference presented by an author under the age of 35.
The paper, titled ‘Exploring how Component Factors and their Uncertainty Affect Judgements of Risk in Cyber-Security’, is available to view here: http://arxiv.org/abs/1910.00703
The LUCID group have returned from a successful trip to the International Conference on Fuzzy Systems (FUZZ-IEEE 2019) in New Orleans, Louisiana. The group published multiple papers, all of which are listed below:
Fuzzy Integral Driven Ensemble Classification using A Priori Fuzzy Measures
Utkarsh Agrawal, Christian Wagner, Jonathan M. Garibaldi and Daniele Soria
On the Concept of Meaningfulness in Constrained Type-2 Fuzzy Sets
Pasquale D'Alterio, Jonathan Garibaldi and Robert John
DECSYS - Discrete and Ellipse-based response Capture SYStem
Zack Ellerby, Josie McCulloch, John Young and Christian Wagner
A Preliminary Approach for the Exploitation of Citizen Science Data for Fast and Robust Fuzzy k-Nearest Neighbour Classification
Manuel Jimenez, Mercedes Torres Torres, Robert John and Isaac Triguero
Measuring Similarity Between Discontinuous Intervals - Challenges and Solutions
Shaily Kabir, Christian Wagner, Timothy C. Havens and Derek T. Anderson
On Comparing and Selecting Approaches to Model Interval-Valued Data as Fuzzy Sets
Josie McCulloch, Zack Ellerby and Christian Wagner
Measuring Inter-group Agreement on zSlice Based General Type-2 Fuzzy Sets
Javier Navarro and Christian Wagner
Leveraging IT2 Input Fuzzy Sets in Non-Singleton Fuzzy Logic Systems to Dynamically Adapt to Varying Uncertainty Levels
Direnc Pekaslan, Christian Wagner and Jonathan M. Garibaldi
A Measure of Structural Complexity of Hierarchical Fuzzy Systems Adapted from Software Engineering
Tajul Rosli Razak, Jonathan M. Garibaldi and Christian Wagner
A Novel Weighted Combination Method for Feature Selection using Fuzzy Sets
Zixiao Shen, Xin Chen and Jonathan M. Garibaldi
Fuzzy Hot Spot Identification for Big Data: An Initial Approach
Rebecca Tickle, Isaac Triguero, Grazziela P. Figueredo, Ender Ozcan, Mohammad Mesgarpour and Robert I. John
The paper "On the Relationship between Similarity Measures and Thresholds of Statistical Significance in the Context of Comparing Fuzzy Sets" (by Josie McCulloch, Zack Ellerby and Christian Wagner) has been accepted for publication and is available now here: https://doi.org/10.1109/TFUZZ.2019.2922161
Comparing fuzzy sets by computing their similarity is common, with a large set of measures of similarity available. However, while commonplace in the computational intelligence community, the application and results of similarity measures are less common in the wider scientific context, where statistical approaches are the standard for comparing distributions. This is challenging, as it means that developments around similarity measures arising from the fuzzy community are inaccessible to the wider scientific community; and that the fuzzy community fails to take advantage of a strong statistical understanding which may be applicable to comparing (fuzzy membership) functions. In this paper, we commence a body of work on systematically relating the outputs of similarity measures to the notion of statistically significant difference; that is, how (dis)similar do two fuzzy sets need to be for them to be statistically different? We explain that in this context it is useful to initially focus on dis-similarity, rather than similarity, as the former aligns directly with the widely used concept of statistical difference. We propose two methods of applying statistical tests to the outputs of fuzzy dissimilarity measures to determine significant difference. We show how the proposed work provides deeper insight into the behaviour and possible interpretation of degrees of dis-similarity and, consequently, similarity, and how the interpretation differs in respect to context (e.g., the complexity of the fuzzy sets).
The paper "Similarity between interval-valued fuzzy sets taking into account the width of the intervals and admissible orders" (by H. Bustince, C. Marco-Detchart, J. Fernandez, C. Wagner, J.M. Garibaldi, Z. Takác) has been accepted for publication to Fuzzy Sets and Systems: https://doi.org/10.1016/j.fss.2019.04.002
For abstract and highlights, see: christianwagner.weebly.com/
Earlier this year, I was invited to contribute to the panel on 'AI Governance: Role of the legislators, tech companies and standard bodies' at CPDP 2019 in Brussels, Belgium. Big thanks to Mark Cole, Andra Giurgiu and the University of Luxembourg for organising and hosting an exciting and timely panel (and for inviting me, even though I know nothing about governance :) ). Also, thank you to the CPDP organisers - it was a great, really stimulating and extremely well organised conference!
A video of the panel is now available here: https://www.youtube.com/watch?v=3ZJg-2D2QIA, with brief details on what to expect below.
All the best,
Panel organised by the Interdisciplinary Centre for Security, Reliability and Trust, University of Luxembourg. Chair: Mark Cole; Co-organiser: Andra Giurgiu, University of Luxembourg (LU)
Moderator: Erik Valgaeren, Stibbe (BE)
Speakers: Alain Herrmann, National Commission for Data Protection (LU); Christian Wagner, University of Nottingham (UK); Jan Schallaböck, iRights/ISO (DE); Janna Lingenfelder, IBM/ ISO (DE)
AI calls for a “coordinated action plan”, as recently stated by the European Commission. With its societal and ethical implications, it is a matter of general impact across sectors, going beyond security and trustworthiness or the creation of a regulatory framework. Hence this panel intends to address the topic of AI governance, whether such governance is needed and, if so, how to ensure its consistency. It will also discuss whether existing structures and bodies are adequate to deal with such governance, or if we perhaps need to think about creating new structures and mandate them with this task. Where do we stand and where are we heading in terms of how we are collectively dealing with the soon to be almost ubiquitous phenomenon of AI?
• Do we need AI governance? If so, who should be in charge of it?
• Is there a need to ensure consistency of such governance?
• What are the risks? Do we know them and are we in the right position to address them?
• Are existing structures/bodies sufficient to address these issues or do we perhaps need to create new ones?