The role of the tutorials is to provide a platform for more intensive scientific exchange amongst researchers interested in a particular topic and to serve as a meeting point for the community. Tutorials complement the depth-oriented technical sessions by providing participants with broad overviews of emerging fields. A tutorial can be scheduled for 1.5 or 3 hours.
Tutorial proposals are accepted until:
September 13, 2017
If you wish to propose a new Tutorial, please fill out and submit this Expression of Interest.
Interpretable Fuzzy Systems
University of Bari "A. Moro"
Corrado Mencar is currently Assistant Professor at the Department of Informatics, University of Bari, Italy. He received his M.Sc. degree in Informatics in 2000 and his Ph.D. degree in Informatics in 2005, both from the University of Bari. In 2001 he worked as a software analyst and designer for several Italian software firms. In 2005 he joined the University of Bari as Assistant Professor, actively conducting research in computational intelligence and related fields. His current research interests include fuzzy logic and fuzzy systems, granular computing, computational web intelligence, intelligent data analysis and bioinformatics. The most important achievements of his research concern the interpretability of fuzzy systems, i.e. techniques for endowing fuzzy systems with empirically acquired knowledge that can be communicated to users in a way that is easy to read and understand. He introduced the concept of semantic co-intension in modeling interpretable fuzzy systems, which promoted a distinction between the semantic and structural aspects of interpretability. In the course of his research activities, he has participated in several research projects and published more than 80 peer-reviewed international publications. He is Associate Editor for international journals and a featured reviewer for ACM Computing Reviews. He has taught several undergraduate and graduate courses on topics related to his research as well as on programming fundamentals, computer architectures and operating systems. He has supervised a number of Ph.D. students actively involved in research on interpretable fuzzy systems and intelligent data analysis.
The key factor in the success of fuzzy logic lies in its ability to model and process perceptions instead of measurements. In most cases, such perceptions are expressed in natural language. Thus, fuzzy logic acts as a mathematical underpinning for modeling and processing perceptions described in natural language. As a consequence, fuzzy systems are endowed with the capability of combining complex behavior with a simple description in terms of linguistic rules. In many cases, fuzzy systems have been compiled manually, with human knowledge purposely injected into fuzzy rules in order to model the desired behavior. However, the great success of fuzzy logic led to the development of many algorithms aimed at acquiring knowledge from data, which can be expressed in terms of fuzzy rules. This enabled the automatic design of fuzzy systems through data-driven design techniques, which is common practice nowadays. Nevertheless, while fuzzy sets can in general be used to model perceptions, some of them do not admit a straightforward interpretation in natural language. As a consequence, the adoption of accuracy-driven algorithms for acquiring knowledge from data often results in unintelligible models. In those cases, the fundamental advantage of fuzzy logic is lost and the derived models are comparable to other measurement-based models (such as neural networks) in terms of knowledge interpretability. In a nutshell, interpretability is not granted by the adoption of fuzzy logic, which is a necessary yet not sufficient requirement for modeling and processing perceptions. Moreover, interpretability is a quality that is not easy to define and quantify. Several open and challenging questions arise when considering interpretability in fuzzy modeling: What is interpretability? Why is interpretability worth considering? How can interpretability be ensured? How can interpretability be assessed (quantified)? How can interpretable fuzzy models be designed? And so on.
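To make the idea of linguistic rules concrete, the following is a minimal, illustrative sketch (not taken from the tutorial itself) of a Mamdani-style fuzzy rule base with triangular membership functions. All names here (the temperature input, the fan-speed output, and the three rules) are hypothetical examples; the point is that each rule reads as a natural-language statement such as "IF temperature is hot THEN fan speed is high".

```python
def tri(x, a, b, c):
    """Triangular membership function with feet at a and c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Linguistic terms over the input variable "temperature" (degrees Celsius)
temperature_terms = {
    "cold": lambda x: tri(x, -5, 5, 15),
    "warm": lambda x: tri(x, 10, 20, 30),
    "hot":  lambda x: tri(x, 25, 35, 45),
}

# Linguistic terms over the output variable "fan_speed" (percent)
fan_terms = {
    "low":  lambda y: tri(y, 0, 20, 40),
    "mid":  lambda y: tri(y, 30, 50, 70),
    "high": lambda y: tri(y, 60, 80, 100),
}

# Linguistic rules: IF temperature is <term> THEN fan_speed is <term>
rules = [("cold", "low"), ("warm", "mid"), ("hot", "high")]

def infer(x):
    """Mamdani inference: min implication, max aggregation, and centroid
    defuzzification over a discretized output domain (0..100 percent)."""
    ys = range(0, 101)
    aggregated = []
    for y in ys:
        degree = 0.0
        for in_term, out_term in rules:
            firing = temperature_terms[in_term](x)  # rule activation level
            degree = max(degree, min(firing, fan_terms[out_term](y)))
        aggregated.append(degree)
    total = sum(aggregated)
    return sum(y * m for y, m in zip(ys, aggregated)) / total if total else 0.0

print(round(infer(35.0)))  # a hot reading yields a high fan speed: 80
```

Because each rule is stated over named linguistic terms rather than raw parameters, the resulting model can be read and checked by a human, which is exactly the interpretability property at stake when such rules are instead learned automatically from data.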
The objective of this tutorial is to address these questions and to discuss the latest research results on modeling interpretable fuzzy systems. With a clear understanding of what interpretability is and how interpretable fuzzy models can be designed and assessed, it is possible to approach the development of intelligent systems that are not only capable of automatically acquiring knowledge from data, but can also communicate and interact with human users for human-centered information processing and collaborative intelligence.
Fuzzy modeling; interpretability; fuzzy rule-based systems.