New paper in Neural Networks


The paper "SEGAL time series classification — Stable explanations using a generative model and an adaptive weighting method for LIME" (by Han Meng, Christian Wagner and Isaac Triguero) has been accepted for publication and is available now here:


AI systems are increasingly used in critical areas like healthcare, where understanding how decisions are made is essential. However, many AI models are black boxes: while generally effective, the way they arrive at their outputs is not easily understandable by humans. LIME is a popular and well-known method that helps users understand a model's behaviour by providing comparatively simple explanations for inputs which are similar to what the user is interested in, so-called ‘neighbours’. This study addresses a key issue with LIME: its explanations are not always consistent. Imagine getting a different explanation each time you ask why an AI made a certain decision: it would be confusing and would cast doubt on the utility of the explanation. We discovered that this inconsistency arises in part because, at times, LIME selects ‘neighbours’ which are actually not neighbours at all, resulting in very different reasoning and explanations. To address this, the main contribution of this work is a generative model which minimizes the risk of selecting ‘false neighbours’. Results show that this improvement makes LIME's explanations significantly more consistent. Through this work, we aim to increase awareness of how unrealistic samples can affect the reliability of AI explanations and to advance explainable AI systems in general, making them better suited for deployment in critical applications.
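To make the ‘neighbours’ idea concrete, here is a minimal sketch of the standard LIME recipe that the paper builds on: perturb the input of interest, weight the perturbed samples (the ‘neighbours’) by their proximity, and fit a weighted linear surrogate whose coefficients serve as the explanation. This is plain vanilla LIME, not the paper's generative sampling method, and the function names are illustrative.

```python
import numpy as np

def lime_weights(f, x, n_samples=500, sigma=0.5, seed=0):
    """Minimal LIME-style local surrogate: perturb x, weight the
    perturbed 'neighbours' by proximity, and fit a weighted linear
    model to the black box's outputs."""
    rng = np.random.default_rng(seed)
    Z = x + rng.normal(scale=sigma, size=(n_samples, x.size))  # 'neighbours'
    y = np.array([f(z) for z in Z])
    d = np.linalg.norm(Z - x, axis=1)
    w = np.exp(-(d ** 2) / (2 * sigma ** 2))       # proximity kernel
    A = np.hstack([Z, np.ones((n_samples, 1))])    # add intercept column
    sw = np.sqrt(w)[:, None]
    coef, *_ = np.linalg.lstsq(A * sw, y * sw.ravel(), rcond=None)
    return coef[:-1]                               # per-feature weights

# A simple black box: the first feature matters twice as much as the second.
f = lambda z: 2.0 * z[0] + 1.0 * z[1]
w = lime_weights(f, np.array([0.5, 0.5]))
```

For this noiseless linear black box the surrogate recovers the true weights; the instability the paper targets appears with non-linear models, where poorly chosen neighbours pull the local fit in different directions.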

LUCID Christmas social event


This Friday we have an amazing social event to celebrate the end of the semester and the upcoming Christmas! Thank you all for your participation. Special congratulations to Chao for winning the first edition of the LUCID Texas Hold'em Poker Championship!

We are recruiting!


Interested in joining LUCID? We are recruiting for multiple full academic posts, including options for early-career and established researchers. For more information, see our Work@LUCID pages, or the advert directly here.

LUCID Meetings back 'in-person' & JuzzyPy


The weekly LUCID catchup is back to form, with a full house in person last week. Great to see everyone, and special thanks to Sameer for giving us a sneak peek at his upcoming SSCI paper on JuzzyPy, the native Python library for type-1, interval type-2 and general type-2 fuzzy systems; see the Software page on the LUCID website.


Winner of the CMRxMotion Challenge in MICCAI 2022


Our PhD student Ruizhe Li and his supervisor Xin Chen in the IMA/LUCID group have won first place in an international grand challenge held at MICCAI 2022. MICCAI is the top international technical-oriented conference in the field of medical imaging. This challenge (CMRxMotion) tackles the task of automatic quality assessment in cardiac magnetic resonance imaging and attracted 87 international teams. Only 16 teams submitted final valid algorithms for testing, and we won by outperforming the second-best team by a large margin (75% classification accuracy vs 70%)! The technical details can be found in our paper, entitled “Motion-related Artefacts Classification Using Patch-based Ensemble and Transfer Learning in Cardiac MRI”, MICCAI STACOM workshop 2022. Very well done Ruizhe!

Full PhD Studentships for 2022 entry!


Fully funded #phdposition now available for 2022 entry.

If you are excited about #research, understanding and handling uncertainty and vagueness to develop the #AI-driven, #human-centric #decisionsupport of tomorrow, from cyber-security to consumer products and supply-chain management; and working in a supportive, #computerscience-centric research group with a strong multi-disciplinary outlook spanning social #science, mathematics and psychology, then get in touch!

Informal enquiries:

Details on the positions:

New Paper in IEEE Transactions on Fuzzy Systems


The paper "ADONiS—Adaptive Online Nonsingleton Fuzzy Logic Systems" (by Direnc Pekaslan, Christian Wagner and Jonathan M. Garibaldi) has been published in IEEE Transactions on Fuzzy Systems.

Real-world environments are subject to different sources of uncertainty, which can be impactful at different levels and/or for different durations, inevitably causing uncertainty levels to vary over time. Due to the heterogeneity and diversity of real-world conditions, measurement devices (e.g. sensors) may not be able to provide the absolute true value, but rather approximations, which are in turn processed as inputs to systems. Inputs are thus exposed to different effects (e.g. quadcopters subjected to varying wind gusts), making input uncertainty a principal source of uncertainty and an inseparable component of decision-making systems.

Non-Singleton Fuzzy Logic Systems have the potential to capture and handle input uncertainty within the design of input fuzzy sets. In this paper, we propose a complete ADaptive, Online Non-Singleton (ADONiS) framework which incorporates online uncertainty detection and associated parameterization of the Non-Singleton input fuzzy sets, thus providing an improved capacity to adapt to variations in the level of input-affecting noise, common in real-world applications.

The proposed approach avoids both the need for a priori knowledge of the uncertainty levels experienced at runtime and the need for offline training while providing the means for systems to continuously adapt to changing levels of uncertainty. Specifically, in the proposed approach, input fuzzy set parameters are continuously adapted based on information gained from an uncertainty level estimation process which iteratively estimates uncertainty levels over a sequence of recent observations.
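As a rough illustration of this idea (a generic sketch, not the paper's exact estimator), the code below keeps a sliding window of recent observations, estimates the current noise level from the detrended residuals, and uses that estimate as the spread of a Gaussian non-singleton input set. All names and parameter choices are illustrative.

```python
import numpy as np

def estimate_sigma(window):
    """Estimate the input noise level from a window of recent observations
    by subtracting a 3-point moving average and taking the residual std."""
    w = np.asarray(window, dtype=float)
    trend = np.convolve(w, np.ones(3) / 3, mode="valid")
    return (w[1:-1] - trend).std()

class AdaptiveNonSingletonInput:
    """Gaussian non-singleton input fuzzy set whose spread tracks the
    estimated noise level over a sliding window of observations."""

    def __init__(self, window_size=20):
        self.window = []
        self.window_size = window_size
        self.sigma = 1e-6  # near-singleton until noise is observed

    def observe(self, x):
        self.window.append(x)
        if len(self.window) > self.window_size:
            self.window.pop(0)
        if len(self.window) >= 3:
            self.sigma = max(estimate_sigma(self.window), 1e-6)

    def membership(self, y, x):
        """Membership of y in the input set centred on the observation x."""
        return float(np.exp(-((y - x) ** 2) / (2 * self.sigma ** 2)))
```

On a noise-free signal the input set stays close to a singleton; as input noise grows, the set widens automatically, which is the adaptive behaviour ADONiS formalises.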

The proposed ADONiS framework, combining online determination of uncertainty levels with associated adaptation of input fuzzy sets, provides an efficient and effective solution which elegantly models input uncertainty ‘where it arises’ without requiring changes in any other part (e.g. antecedents, rules or consequents) of the FLS. In doing so, ADONiS limits tuning to the fuzzification stage and leaves the rules ‘untouched’ (these can be generated based on expert insights or in a data-driven way), thus providing a fundamental requirement for good interpretability, provided the rules and sets were well understood initially.

As time series forecasting provides an ideal test bed for the systematic evaluation of techniques designed to deal with input uncertainty (offering the potential to accurately control the levels of uncertainty/noise affecting system inputs at any given time), in this paper we focus on applying the proposed ADONiS framework to the prediction of two common chaotic time series (Mackey-Glass, Lorenz) as an initial application area, enabling efficient evaluation and demonstration.

An animated illustration of the ADONiS adaptive behaviour to variation in the levels of uncertainty affecting a system’s inputs can be seen here.

At each time step, inputs are associated with a given non-singleton FS, for which the parameters are determined directly by the levels of uncertainty detected within the preceding time frame. Employing an uncertainty detection technique to construct input FSs provides the capacity for adapting to changes in the levels of uncertainty affecting a system (e.g. in respect to varying environmental circumstances).

Acknowledging the fact that, in the real world, sources of (varying levels of) uncertainty are pervasive, a variety of different training/testing scenarios were explored to systematically evaluate the proposed framework. The results from the comparison of the proposed adaptive and non-adaptive techniques suggest that the ADONiS approach of dynamically changing input FSs provides significant advantages, particularly in environments with high variation in noise levels, which are common in real-world applications. For more details, please see the paper: DOI 10.1109/TFUZZ.2019.2933787.

IEEE Transactions on AI now live


The IEEE Transactions on AI is now live and open for submissions at


"The IEEE Transactions on Artificial Intelligence (TAI) is a multidisciplinary journal publishing papers on theories and methodologies of Artificial Intelligence. Applications of Artificial Intelligence are also considered.

Topics covered by IEEE TAI include, but are not limited to, Agent-based Systems, Augmented Intelligence, Autonomic Computing, Constraint Systems, Explainable AI, Knowledge-Based Systems, Learning Theories, Planning, Reasoning, Search, Natural Language Processing, and Applications. Technical papers addressing contemporary topics in AI such as Ethics and Social Implications are welcomed."

For more details, see the Call for Papers here and the journal website here.

Professor Robert John


Today marked the funeral of Professor Robert (Bob) John, a great friend, scholar, and colleague. Bob was a member and supporter of LUCID from its very beginning and supported everyone across the group, sharing his expertise on fuzzy sets, in particular type-2 fuzzy sets and fuzzy logic. Bob, you will be missed.

Christian & LUCID, 5th March 2020

LUCID set to present tutorial at WCCI 2020


Members of the LUCID group are set to give a tutorial on

Using intervals to capture and handle uncertainty

at the World Congress on Computational Intelligence (WCCI), July 19-24, 2020, Glasgow, UK


Uncertainty is pervasive across data and data sources, from sensors in engineering applications to human preference and expertise in areas as diverse as marketing to cyber security. Appropriate handling of such uncertainties depends upon three main stages: capture, modelling, and analysis of/reasoning with results.

In recent years, interest has surged in using data types that are fundamentally uncertain – in particular intervals (rather than exact numbers). This has promoted novel research into multiple facets of handling uncertainty using interval-values. This includes capturing the uncertainty at source, and modelling it using intervals or higher-level models such as fuzzy sets. A variety of approaches to analysing said data have been introduced, from interval arithmetic and statistics on intervals, to similarity and distance measures applied to both ‘raw’ interval-valued datasets and fuzzy set models of the original data.

Going forward, it is expected that the use of intervals within machine learning and AI techniques will continue to grow, providing an intuitive means of capturing, accounting for, and communicating uncertainty in data.

This tutorial is designed to give researchers a practical introduction to the use of intervals for handling uncertainty in data. The tutorial will discuss relevant types and sources of uncertainty before proceeding to review and demonstrate practical approaches and tools that enable the capture, modelling and analysis of interval-valued data. This session will provide participants with an end-to-end overview and in-depth starting point for leveraging intervals within their own research.

The tutorial is structured into four main components:

  1.  Capturing intervals from people

The first part of the tutorial will discuss the challenges behind capturing intervals in practice, before providing some practical solutions. This will include the underlying rationale, the nature and different types of intervals – and why these matter. As a use-case, we will discuss the elicitation of intervals within the quantitative social sciences, as part of a recently introduced interval-valued questionnaire approach using a freely available software platform: DECSYS.

  2. Handling and analysing interval-valued data

The second part of the tutorial will review key techniques for handling ‘raw’ interval-valued data, including interval arithmetic and the computation of summary statistics – along with associated challenges (e.g. the dependency problem).
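As a tiny illustration of the dependency problem mentioned above: naive interval arithmetic treats every operand as independent, so subtracting an interval from itself does not collapse to zero. An illustrative sketch:

```python
# Interval arithmetic on (lo, hi) pairs: each operation takes the
# worst case over the endpoints of its operands.
def add(a, b):
    return (a[0] + b[0], a[1] + b[1])

def sub(a, b):
    return (a[0] - b[1], a[1] - b[0])

x = (1.0, 3.0)
# The dependency problem: sub() cannot know both operands are the
# *same* quantity, so x - x widens to (-2.0, 2.0) instead of 0.
result = sub(x, x)
```

Rewriting expressions so each uncertain quantity appears only once is the usual mitigation, which is why handling ‘raw’ interval data needs care beyond plain endpoint arithmetic.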

  3. Modelling intervals using fuzzy sets

Beyond handling interval-valued data directly, a variety of approaches have been developed to model multi-source interval-valued data using fuzzy sets. We will discuss and demonstrate key algorithms, focussing in particular on the Interval Agreement Approach (IAA), which is designed to model interval-valued datasets while minimising modelling assumptions (e.g. outlier removal).
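The core intuition behind the IAA can be sketched in a few lines: in its simplest (type-1) form, the membership of a value is the proportion of the source intervals that contain it, so no interval needs to be discarded as an outlier. A simplified sketch, not the authors' reference implementation:

```python
def iaa_membership(intervals, y):
    """Type-1 Interval Agreement Approach, simplified: membership of y
    is the proportion of the source intervals that contain y."""
    return sum(lo <= y <= hi for lo, hi in intervals) / len(intervals)

# Three sources each provide an interval estimate of the same quantity.
data = [(2.0, 6.0), (3.0, 7.0), (5.0, 9.0)]
m = iaa_membership(data, 4.0)   # 4 lies in two of the three intervals
```

The resulting membership function directly encodes agreement between sources: values covered by every interval get membership 1, values covered by none get 0.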

  4. Case studies

In the final part of the tutorial, we will discuss a set of recent studies. These serve as real-world examples – demonstrating the efficacy of intervals through research and in applications that range from cyber security to engineering and psychology.


The overall time of the tutorial will be three hours, with approximately 40 minutes per section, and a 20 minute break.


There are no pre-requisites for this tutorial, although a familiarity with fuzzy sets will be an advantage.


Prof. Christian Wagner, Prof. Vladik Kreinovich, Dr Josie McCulloch, Dr Zack Ellerby

The organisers have a track record of organising and chairing special sessions at previous IEEE conferences, annually, since 2009.

LUCID wins Young CRITIS Award


A paper authored by three LUCID members (Zack Ellerby, Josie McCulloch and Christian Wagner), together with a student from Horizon CDT (Melanie Wilson), was recently presented at the 14th International Conference on Critical Information Infrastructures Security (Linköping, Sweden). This paper won the ‘Young CRITIS Award’ 2019, presented for the best paper at the conference authored by a researcher under the age of 35.

The paper, titled ‘Exploring how Component Factors and their Uncertainty Affect Judgements of Risk in Cyber-Security’ is available to view here:



The LUCID group have returned from a successful trip to the International Conference on Fuzzy Systems (FUZZ-IEEE 2019) in New Orleans, Louisiana. Multiple papers from the group were presented, all of which are listed below:

Fuzzy Integral Driven Ensemble Classification using A Priori Fuzzy Measures

Utkarsh Agrawal, Christian Wagner, Jonathan M. Garibaldi and Daniele Soria

On the Concept of Meaningfulness in Constrained Type-2 Fuzzy Sets

Pasquale D'Alterio, Jonathan Garibaldi and Robert John

DECSYS - Discrete and Ellipse-based response Capture SYStem

Zack Ellerby, Josie McCulloch, John Young and Christian Wagner

A Preliminary Approach for the Exploitation of Citizen Science Data for Fast and Robust Fuzzy k-Nearest Neighbour Classification

Manuel Jimenez, Mercedes Torres Torres, Robert John and Isaac Triguero

Measuring Similarity Between Discontinuous Intervals - Challenges and Solutions

Shaily Kabir, Christian Wagner, Timothy C. Havens and Derek T. Anderson

On Comparing and Selecting Approaches to Model Interval-Valued Data as Fuzzy Sets

Josie McCulloch, Zack Ellerby and Christian Wagner

Measuring Inter-group Agreement on zSlice Based General Type-2 Fuzzy Sets

Javier Navarro and Christian Wagner

Leveraging IT2 Input Fuzzy Sets in Non-Singleton Fuzzy Logic Systems to Dynamically Adapt to Varying Uncertainty Levels

Direnc Pekaslan, Christian Wagner and Jonathan M. Garibaldi

A Measure of Structural Complexity of Hierarchical Fuzzy Systems Adapted from Software Engineering

Tajul Rosli Razak, Jonathan M. Garibaldi and Christian Wagner

A Novel Weighted Combination Method for Feature Selection using Fuzzy Sets

Zixiao Shen, Xin Chen, Jonathan M. Garibaldi

Fuzzy Hot Spot Identification for Big Data: An Initial Approach

Rebecca Tickle, Isaac Triguero, Grazziela P. Figueredo, Ender Ozcan, Mohammad Mesgarpour and Robert I. John

New paper in Transactions on Fuzzy Systems


The paper "On the Relationship between Similarity Measures and Thresholds of Statistical Significance in the Context of Comparing Fuzzy Sets" (by Josie McCulloch, Zack Ellerby and Christian Wagner) has been accepted for publication and is available now here:


Comparing fuzzy sets by computing their similarity is common, with a large set of measures of similarity available. However, while commonplace in the computational intelligence community, the application and results of similarity measures are less common in the wider scientific context, where statistical approaches are the standard for comparing distributions. This is challenging, as it means that developments around similarity measures arising from the fuzzy community are inaccessible to the wider scientific community; and that the fuzzy community fails to take advantage of a strong statistical understanding which may be applicable to comparing (fuzzy membership) functions. In this paper, we commence a body of work on systematically relating the outputs of similarity measures to the notion of statistically significant difference; that is, how (dis)similar do two fuzzy sets need to be for them to be statistically different? We explain that in this context it is useful to initially focus on dis-similarity, rather than similarity, as the former aligns directly with the widely used concept of statistical difference. We propose two methods of applying statistical tests to the outputs of fuzzy dissimilarity measures to determine significant difference. We show how the proposed work provides deeper insight into the behaviour and possible interpretation of degrees of dis-similarity and, consequently, similarity, and how the interpretation differs in respect to context (e.g., the complexity of the fuzzy sets).

New paper in Fuzzy Sets and Systems


The paper "Similarity between interval-valued fuzzy sets taking into account the width of the intervals and admissible orders" (by H. Bustince, C. Marco-Detchart, J. Fernandez, C. Wagner, J.M. Garibaldi, Z. Takác) has been accepted for publication to Fuzzy Sets and Systems:

For abstract and highlights, see: 

CPDP 2019 - Panel on AI Governance


Earlier this year, I was invited to contribute to the panel on 'AI Governance: Role of the legislators, tech companies and standard bodies' at CPDP 2019 in Brussels, Belgium. Big thanks to Mark Cole, Andra Giurgiu and the University of Luxembourg for organising and hosting an exciting and timely panel (and for inviting me, even though I know nothing about governance :) ). Also, thank you to the CPDP organisers - it was a great, really stimulating and extremely well organised conference! 

A video of the panel is now available here, with brief details on what to expect below.

​All the best,


Panel organised by Interdisciplinary Centre for Security, Reliability and Trust, University of Luxembourg Chair Mark Cole, Co-organiser Andra Giurgiu, University of Luxembourg (LU), 

Moderator: Erik Valgaeren, Stibbe (BE)

Speakers: Alain Herrmann, National Commission for Data Protection (LU); Christian Wagner, University of Nottingham (UK); Jan Schallaböck, iRights/ISO (DE); Janna Lingenfelder, IBM/ ISO (DE)

AI calls for a “coordinated action plan” as recently stated by the European Commission. With its societal and ethical implications, it is a matter of general impact across sectors, going beyond security and trustworthiness or the creation of a regulatory framework. Hence this panel intends to address the topic of AI governance, whether such governance is needed and if so, how to ensure its consistency. It will also discuss whether existing structures and bodies are adequate to deal with such governance, or, if we perhaps need to think about creating new structures and mandate them with this task. Where do we stand and where are we heading in terms of how we are collectively dealing with the soon to be almost ubiquitous phenomenon of AI?

• Do we need AI governance? If so, who should be in charge of it?

• Is there a need to ensure consistency of such governance?

• What are the risks? Do we know them and are we in the right position to address them?

• Are existing structures/bodies sufficient to address these issues or do we perhaps need to

create news ones?

LUCID Christmas Dinner


LUCID members meet for Christmas dinner.

LUCID social night to celebrate Elissa Madi's PhD


LUCID members met in Nottingham city centre to celebrate our member Elissa Madi passing her viva. Throughout her PhD research she focused on Type-2 Fuzzy TOPSIS and worked on improving multi-criteria decision making models. All of us in LUCID wish Elissa the very best in the next stage of her life!

Congratulations to Elissa Madi


Congratulations to LUCID member Elissa Madi who passed her viva on Friday subject to minor corrections!

Elissa was primarily supervised by Prof Jon Garibaldi.  Her thesis is entitled ‘An Improved Uncertainty in Multi-Criteria Decision Making Model Based on Type-2 Fuzzy TOPSIS’.

Cyber Security Job with LUCID


Cyber Security Threat Data Analyst - KTP Associate (fixed term)

Closing Date: Friday, 5th October 2018

Based primarily at J.P. Morgan, Canary Wharf, London

This is an exciting opportunity for an ambitious individual to advance their career through a Knowledge Transfer Partnership (KTP). You will be working with JP Morgan and the School of Computer Science at the University of Nottingham to develop and embed a novel methodology to deliver improved forward assessment of the likelihood of cyber security threats in respect to a variety of uncertain data.

You will be employed by the University of Nottingham (School of Computer Science) but will be based primarily at JP Morgan, Canary Wharf, London.

This post will be offered on a fixed-term contract for a period of 36 months.

See here for more details on the role and to apply. 

SyFSeL is a free open-source library that automatically generates synthetic fuzzy sets. It is intended for use in empirically testing methods developed for fuzzy sets. SyFSeL generates as many sets as desired, with a specified membership function type (normal, bi-modal or multi-modal) and fuzzy set type (type-1 or type-2), to enable users to emulate real data. Fuzzy sets are stored in CSV format, so users can easily import the generated sets into their own fuzzy systems software, and SyFSeL can also create graphical plots of the generated sets.

The library is available through the software page of the LUCID website and is available here.

For more information on the library and how to use it, see the related paper here. 
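For readers curious what ‘generating a synthetic fuzzy set and storing it as CSV’ might look like in practice, here is a generic sketch of the idea (this is not SyFSeL's actual API; all names and parameters are illustrative):

```python
import csv
import math
import random

def synth_gaussian_t1(n_points=101, lo=0.0, hi=10.0, seed=None):
    """Generate one synthetic type-1 fuzzy set as (x, membership) pairs,
    using a Gaussian membership function with a randomly drawn centre
    and spread. A generic sketch only, not SyFSeL's actual API."""
    rng = random.Random(seed)
    c = rng.uniform(lo, hi)       # random centre of the membership function
    s = rng.uniform(0.5, 2.0)     # random spread
    xs = [lo + i * (hi - lo) / (n_points - 1) for i in range(n_points)]
    return [(x, math.exp(-((x - c) ** 2) / (2 * s ** 2))) for x in xs]

# Persist as CSV so other fuzzy systems software can import the set.
fs = synth_gaussian_t1(seed=42)
with open("fuzzy_set.csv", "w", newline="") as fh:
    csv.writer(fh).writerows(fs)
```

The same pattern extends to bi-modal sets (sum of two Gaussians, clipped at 1) and to type-2 sets (a pair of lower and upper membership functions per point).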

LUCID is proud to announce that its PGR student member Shaily Kabir has been awarded the 2018 IEEE Computational Intelligence Society Graduate Research Grant. This grant is given to deserving PhD students with meritorious projects to support their research; under the grant, Shaily will spend the coming summer break working with Dr. Timothy C. Havens at Michigan Technological University (USA).

Congratulations Shaily! 

Based on a collaboration with NTU Singapore, a new paper on leveraging the more faithful tracking of input uncertainty in the context of Quadcopter Unmanned Aerial Vehicle (UAV) control has been accepted for publication in the IEEE/ASME Transactions on Mechatronics.

​An early access copy is available via the DOI here: 

​"Input Uncertainty Sensitivity Enhanced Non-Singleton Fuzzy Logic Controllers for Long-Term Navigation of Quadrotor UAVs":   

The UK Parliamentary Office of Science and Technology has published a POSTnote on Communicating Risk, including input on uncertainty by LUCID. For a summary of the note, read on; the summary, key points, and full report are available on the POST website here. "People's responses to risk are shaped by the way that such risks are communicated. Communicating risks effectively can defuse concerns, mitigate disaster situations and build trust with public institutions and organisations. This POSTnote defines the often misunderstood concepts of risk, uncertainty and hazard and describes the key stakeholders communicating it. It examines the factors that shape how people perceive and respond to such risks and summarises evidence on effective risk communication strategies."

As part of ongoing collaboration across the UK Cyber Security sector, the LUCID project on 'Leveraging the Multi-Stakeholder Nature of Cyber Security', led by Christian Wagner, is collaborating with the Research Institute in Science of Cyber Security (RISCS). See here for an interview with Christian on key aspects of the project, including the importance of capturing uncertainty during data collection from security experts.

As part of a new EPSRC funded research project investigating “Leveraging the Multi-Stakeholder Nature of Cyber Security” (EP/P011918/1) on human centred cyber security, working with the NCSC and Carnegie Mellon University (USA), we are exploring novel approaches of capturing and modelling data on the vulnerability of computer systems from a variety of sources, specifically human experts, with the aim of developing new ways of alerting stakeholders to specific areas of cyber security risk in their systems.

To support this project, we are excited to offer two positions for post-doctoral research fellows in cyber security which provide exceptional opportunities to the successful applicants, including working with leading academic and institutional partners in cyber security; being based at one of the leading universities in the UK; benefitting from fully funded residencies at partner institutions including Carnegie Mellon University to support collaboration; and competitive remuneration. The two positions have different foci as follows:

·         Research Associate/Fellow in Human-Centric Cyber Security

·         Research Associate/Fellow in Data-Driven Cyber Security 

Fuzzycreator is a toolkit for automatic generation and analysis of fuzzy sets from data. It facilitates the creation of both conventional and non-conventional (non-normal and non-convex) type-1, interval type-2 and general type-2 (zSlices-based) fuzzy sets from data. These fuzzy sets may then be analysed and compared through a series of tools and measures (included in the toolkit), such as evaluating their similarity and distance.

It is now available through the LUCID website at

Detailed documentation is available within the toolkit and a high-level overview will be available soon. 

Tutorials at Fuzz-IEEE 2017:  

The paper by Saeed Alqahtani and Bob John has just been accepted for presentation at SSCI 2016. 

Abstract—Use of the Internet is increasing day by day, and Internet traffic is growing exponentially. Service providers, such as web, email and cloud service providers, have to deal with millions of users per second, and thus the level of threats to their growing networks is also very high. Dealing with this number of users is a big challenge, but detecting and preventing such threats is even more challenging and vital. This is because those threats might cause severe losses to the service providers, in terms of privacy leakage or unavailability of services to users. To address this issue, several Intrusion Detection Systems (IDS) have been developed that differ in their detection capabilities, performance and accuracy. In this study, we use SNORT and SURICATA, two well-known IDS used worldwide. The aim of this paper is to analytically compare the functionality, operation and capability of these two IDS in detecting intrusions and different kinds of cyber-attacks within the MyCloud network. Furthermore, this study also proposes a fuzzy-logic engine based on these two IDSs, in order to enhance their performance in terms of increased accuracy, specificity and sensitivity, and reduced false alarms. Several experiments in this comparative study were conducted using the ISCX dataset; the results show that the fuzzy-logic-based IDS outperforms either IDS alone, with FL-SnortIDS outperforming FL-SuricataIDS.

​You can download here 

The paper "Measuring Agreement on Linguistic Expressions in Medical Treatment Scenarios" by Javier Navarro, Christian Wagner, Uwe Aickelin, Lynsey Green and Robert Ashford has been accepted to the 2016 IEEE Symposium Series on Computational Intelligence (IEEE SSCI 2016), which will be held in Athens, Greece in December 2016. This paper comes from a study conducted in collaboration with the East Midlands Sarcoma Service, Nottingham University Hospitals.

Abstract of the paper is included below. A full version of the paper will be made available after final amendments.

Abstract: Quality of life assessment represents a key process of deciding treatment success and viability. As such, patients' perceptions of their functional status and well-being are important inputs for impairment assessment. Given that patient completed questionnaires are often used to assess patient status and determine future treatment options, it is important to know the level of agreement of the words used by patients and different groups of medical professionals. In this paper, we propose a measure called the Agreement Ratio which provides a ratio of overall agreement when modelling words through Fuzzy Sets (FSs). The measure has been specifically designed for assessing this agreement in fuzzy sets which are generated from data such as patient responses. The measure relies on using the Jaccard Similarity Measure for comparing the different levels of agreement in the FSs generated. Synthetic examples are provided in order to show how to calculate the measure for given Fuzzy Sets. An application to real-world data is provided as well as a discussion about the results and the potential of the proposed measure.
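For reference, the Jaccard similarity that the proposed Agreement Ratio relies on is standard for discretised fuzzy sets: the sum of the pointwise minima of the two membership functions over the sum of the pointwise maxima. A minimal sketch:

```python
def jaccard(mu_a, mu_b):
    """Jaccard similarity between two fuzzy sets discretised over the
    same domain: sum of pointwise minima over sum of pointwise maxima."""
    num = sum(min(a, b) for a, b in zip(mu_a, mu_b))
    den = sum(max(a, b) for a, b in zip(mu_a, mu_b))
    return num / den if den else 1.0

# Two triangular fuzzy sets on a five-point domain, one shifted right.
a = [0.0, 0.5, 1.0, 0.5, 0.0]
b = [0.0, 0.0, 0.5, 1.0, 0.5]
s = jaccard(a, b)   # partial overlap gives a similarity strictly below 1
```

Identical sets score 1 and disjoint sets score 0, which is what makes the measure usable as a graded indicator of agreement between word models generated from different respondent groups.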

The paper "Improving Security Requirement Adequacy" by Hanan Hibshi, Travis D. Breaux and Christian Wagner has been accepted to the 2016 IEEE Symposium Series on Computational Intelligence (IEEE SSCI 2016), which will be held in Athens, Greece in December 2016. The paper has resulted from a recent collaboration between Carnegie Mellon and Nottingham Universities, with Hanan visiting Nottingham in early 2016.

Full title and abstract are included below. A full version of the paper will be available soon.

Title: Improving Security Requirement Adequacy - An Interval Type 2 Fuzzy Logic Security Assessment System

Abstract: Organizations rely on security experts to improve the security of their systems. These professionals use background knowledge and experience to align known threats and vulnerabilities before selecting mitigation options. The substantial depth of expertise in any one area (e.g., databases, networks, operating systems) precludes the possibility that an expert would have complete knowledge about all threats and vulnerabilities. To begin addressing this problem of fragmented knowledge, we investigate the challenge of developing a security requirements rule base that mimics multi-human expert reasoning to enable new decision-support systems.  In this paper, we show how to collect relevant information from cyber security experts to enable the generation of: (1) interval type-2 fuzzy sets that capture intra- and inter-expert uncertainty around vulnerability levels; and (2) fuzzy logic rules driving the decision-making process within the requirements analysis. The proposed method relies on comparative ratings of security requirements in the context of concrete vignettes, providing a novel, interdisciplinary approach to knowledge generation for fuzzy logic systems. The paper presents an initial evaluation of the proposed approach through 52 scenarios with 13 experts to compare their assessments to those of the fuzzy logic decision support system. The results show that the system provides reliable assessments to the security analysts, in particular, generating more conservative assessments in 19% of the test scenarios compared to the experts’ ratings.