IJCCI is a joint conference composed of three concurrent conferences: ECTA, FCTA and NCTA. These three conferences are always co-located and held in parallel. Keynote lectures are plenary sessions and can be attended by all IJCCI participants.
KEYNOTE SPEAKERS LIST
Qiangfu Zhao, University of Aizu, Japan
Title: Awareness Computing: What, Why, and How?
Witold Pedrycz, University of Alberta / Polish Academy of Sciences, Canada / Poland
Title: Concepts and Design of Granular Fuzzy Neural Networks: New Constructs of Computational Intelligence
Didier Dubois, Institut de Recherche en Informatique de Toulouse, France
Title: Ontic vs. Epistemic Fuzzy Sets in Modeling and Data Processing Tasks
Marco A. Montes de Oca, University of Delaware, U.S.A.
Title: Incremental Social Learning in Swarm Intelligence Algorithms for Optimization
Plamen Angelov, Lancaster University, U.K.
Title: Autonomous Learning Machines: Generating Rules from Data Streams
Michel Verleysen, Université Catholique de Louvain, Belgium
Title: Feature Selection for High-Dimensional Data Analysis
University of Aizu
Qiangfu Zhao received the B.S. degree in Computer Science from Shandong University (China) in 1982, the M.Eng. degree in Information Engineering from Toyohashi University of Technology (Japan) in 1985, and the D.Eng. degree in Electronic Engineering from Tohoku University (Japan) in 1988. He was an associate professor at Beijing Institute of Technology from 1991 to 1993, at Tohoku University (Japan) from 1993 to 1995, and at the University of Aizu (Japan) from 1995 to 1999, and has been a tenured full professor at the University of Aizu since 1999. He is the head of the System Intelligence Laboratory; director of the Information Systems and Technology Center; associate editor of IEEE Transactions on SMC-B; associate editor of the International Journal of Machine Learning and Cybernetics; and associate editor of the International Journal of Swarm Intelligence and Evolutionary Computation. He is the founding co-chair of the Technical Committee on Awareness Computing in the IEEE Systems, Man, and Cybernetics Society and of the Task Force on Aware Computing in the IEEE Computational Intelligence Society. He has organized or co-organized several workshops, symposia, and conferences, including the 19th Symposium on Intelligent Systems (FAN2009), the 2009 International Workshop on Aware Computing (IWAC2009), the 2010 International Symposium on Aware Computing (ISAC2010), and the 2011 IEEE International Conference on Awareness Science and Technology (iCAST2011). He has published more than 130 refereed journal and international conference papers related to optimal linear system design, neuro-computing, evolutionary computing, awareness computing, and machine learning.
Since the 1950s, artificial intelligence (AI) has been a dream of many researchers. In the 1980s, Japan launched the 5th generation computer project, aiming to create an intelligent computing machine, but the project failed.
Noting that the logic approach alone is not sufficient for automatic acquisition of knowledge, soft computing (mainly comprising neuro-computing, fuzzy logic, and evolutionary computation) became another mainstream approach to realizing AI in the early 1990s. After another two decades, we still do not have any system that is as intelligent as a human in terms of overall performance, although in some specialized fields AI systems can be superior to human experts. The question is: what are we expecting from AI? If we are expecting another kind of intelligent being that is as clever as, or cleverer than, us, we may need centuries or even longer to reach the goal. In fact, we human beings, as living beings, should try our best to prevent our race from being destroyed, and should not create a rival that may one day destroy us. What we should expect is to create intelligent things (not beings) that can help us to dominate and to enjoy this world. These kinds of intelligent things should NOT be able to think like us, create like us, or behave like us.
Instead of trying to create "intelligence", we may try to create "awareness". According to Wikipedia, awareness is the ability to perceive, to feel, or to be conscious of events, objects, or patterns, and it does not necessarily imply understanding. According to Merriam-Webster, awareness implies vigilance in observing or alertness in drawing inferences from what one experiences. Roughly, we may define awareness as a capability to detect important events without (fully) understanding the events themselves. Intuitively, creating awareness should be easier than creating intelligence. However, this does not mean that awareness is less important. An aware system may not be clever enough to provide understandable knowledge about an event in a logical way; it can nevertheless tell us something we are not aware of (due to our carelessness, the limited computing power of our brains, the limited information we receive, etc.). An aware system can provide important materials or information for us to make a decision, help us find the most urgent task to carry out to achieve a goal, and so on. So far, awareness computing has been studied in many different areas.
Context awareness, intention awareness, situation awareness, and energy awareness are just a few examples. Most studies have focused on the engineering aspect of awareness. The question is: can we create a general aware system that can be aware of "anything" important for us to make a decision, or "anything" important for us to enjoy our lives? In this talk, I would like to study awareness from a more scientific point of view, propose an awareness model that may be generally useful for solving engineering problems, and explain the model with several case studies. Of course, this approach is by no means the only approach for solving the problem. It is just the first step towards our goal: to create awareness. Of course, creating awareness is not the final goal for many AI researchers.
University of Alberta / Polish Academy of Sciences
Canada / Poland
Witold Pedrycz is a Professor and Canada Research Chair (CRC - Computational Intelligence) in the Department of Electrical and Computer Engineering, University of Alberta, Edmonton, Canada. He is also with the Systems Research Institute of the Polish Academy of Sciences, Warsaw, Poland. In 2009 Dr. Pedrycz was elected a foreign member of the Polish Academy of Sciences. His main research directions involve Computational Intelligence, fuzzy modeling and Granular Computing, knowledge discovery and data mining, fuzzy control, pattern recognition, knowledge-based neural networks, relational computing, and Software Engineering. He has published numerous papers in these areas. He is also the author of 14 research monographs covering various aspects of Computational Intelligence and Software Engineering. Witold Pedrycz has been a member of numerous program committees of IEEE conferences in the area of fuzzy sets and neurocomputing.
Dr. Pedrycz is intensively involved in editorial activities. He is Editor-in-Chief of Information Sciences and Editor-in-Chief of IEEE Transactions on Systems, Man, and Cybernetics - Part A. He currently serves as an Associate Editor of IEEE Transactions on Fuzzy Systems and is a member of the editorial boards of a number of other international journals. In 2007 he received the prestigious Norbert Wiener Award from the IEEE Systems, Man, and Cybernetics Council. He is a recipient of the IEEE Canada Computer Engineering Medal 2008. In 2009 he received the Cajastur Prize for Soft Computing from the European Centre for Soft Computing for "pioneering and multifaceted contributions to Granular Computing".
Neural networks and fuzzy neural networks, fundamental constructs of Computational Intelligence, are sources of knowledge. In real-world scenarios, such models become engaged in various pursuits of knowledge management: collaboration, consensus building, knowledge transfer, and the like. A concept that is crucial to knowledge management is that of information granularity.
Information granules offer a great deal of flexibility and become necessary to realize mechanisms of knowledge management. The result of knowledge management is a granular neural network or a granular fuzzy neural network, which becomes a more general model emerging at a higher level of abstraction. Numeric connections are generalized to their granular counterparts, say intervals, fuzzy sets, or rough sets, to name a few commonly available alternatives.
The diversity of formal treatments of information granules implies a significant variety of realizations of granular neurocomputing, coming in the form of interval fuzzy neural networks, fuzzy-fuzzy (fuzzy²) neural networks, probabilistic neural networks, and rough neural networks.
We discuss main categories of mechanisms of knowledge management including knowledge transfer and two modes of collaboration (hierarchical and non-hierarchical). In all design pursuits of granular fuzzy models, information granularity (and ensuing information granules) is regarded as an important design asset whose proper allocation results in the optimal granular architecture. Both a single-criterion and multiobjective formulations of the optimization of granular neural networks are presented.
A number of essential protocols of granularity allocation are studied along with the detailed optimization schemes. Given the underlying nature of the optimization problem itself, we confine ourselves to the technology of Evolutionary and Swarm Computing.
Institut de Recherche en Informatique de Toulouse
Didier Dubois is a Research Advisor at IRIT, the Computer Science Department of Paul Sabatier University in Toulouse, France, and belongs to the French National Centre for Scientific Research (CNRS). His topics of interest range from Artificial Intelligence to Operations Research and Decision Sciences, with emphasis on the modelling, representation, and processing of imprecise and uncertain information in reasoning, decision, and risk analysis.
He is the co-author, with Henri Prade, of two monographs on fuzzy sets and possibility theory, and of 15 edited volumes on uncertain reasoning, fuzzy sets, and decision analysis. Also with Henri Prade, he coordinated the HANDBOOK of FUZZY SETS series published by Kluwer (7 volumes, 1998-2000), including the book Fundamentals of Fuzzy Sets. He has contributed about 200 technical journal papers on uncertainty theories and applications. He is a Co-Editor-in-Chief of the journal Fuzzy Sets and Systems and a member of the Editorial Boards of several technical journals dealing with uncertain reasoning. He is a former president of the International Fuzzy Systems Association (IFSA, 1995-1997) and an IFSA and ECCAI fellow. He received the 2002 Pioneer Award of the IEEE Neural Network Society.
As has long been acknowledged, sets may have a conjunctive or a disjunctive reading. In the conjunctive reading, a fuzzy set represents an object of interest for which a gradual (rather than Boolean) description makes sense. In contrast, disjunctive fuzzy sets refer to the use of sets as a representation of incomplete knowledge.
They do not model objects or quantities but partial information about an underlying object or a precise quantity. In this case the fuzzy set captures uncertainty, and its membership function is a possibility distribution.
This is because a fuzzy set is a set of possible values, while its fuzziness only brings shades of uncertainty. We call such fuzzy sets epistemic, since they represent states of incomplete knowledge. Epistemic uncertainty is the realm of possibility theory, with applications such as computing with fuzzy intervals, approximate reasoning, or imprecise regression and kriging.
Distinguishing between ontic and epistemic fuzzy sets is important in information-processing tasks because failing to make this distinction risks misunderstanding basic notions and tools, such as the distance between fuzzy sets, the variance of a fuzzy random variable, and interval-valued or type-2 fuzzy sets, as well as notions of fuzzy regression, fuzzy models, fuzzy equations, etc. We discuss several examples where the ontic and epistemic points of view yield different approaches to, and results for, these concepts.
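The "computing with fuzzy intervals" mentioned in the abstract can be made concrete with α-cut arithmetic. The sketch below is purely illustrative: the triangular shapes, function names, and α grid are my own assumptions, not material from the talk.

```python
# Hedged sketch: an epistemic fuzzy interval as a possibility distribution,
# manipulated through its alpha-cuts. Triangular shapes and names are
# illustrative assumptions, not from the talk.
def alpha_cut(tri, alpha):
    """Alpha-cut of a triangular fuzzy number (a, m, b): an ordinary interval."""
    a, m, b = tri
    return (a + alpha * (m - a), b - alpha * (b - m))

def add_fuzzy(x, y, alphas=(0.0, 0.5, 1.0)):
    """Sum of two fuzzy intervals via interval addition at each alpha level
    (the extension principle specializes to this for addition)."""
    cuts = {}
    for alpha in alphas:
        (xl, xu), (yl, yu) = alpha_cut(x, alpha), alpha_cut(y, alpha)
        cuts[alpha] = (xl + yl, xu + yu)
    return cuts

about_2 = (1.0, 2.0, 3.0)   # "approximately 2": possibility 1 at 2, 0 outside [1, 3]
about_5 = (4.0, 5.0, 7.0)   # "approximately 5"
print(add_fuzzy(about_2, about_5))   # core at 7, support [5, 10]
```

Read epistemically, each α-cut is a nested set of possible values of the unknown sum; the membership function plays the role of a possibility distribution rather than describing a gradual object.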
Marco A. Montes de Oca
University of Delaware
Marco A. Montes de Oca is interested in the theory and practice of swarm intelligence. He has published more than 20 articles in journals and conferences dealing with the three main areas of application of swarm intelligence, namely data mining, optimization, and swarm robotics. Marco A. Montes de Oca was born in Mexico City, Mexico, in 1979. He earned a B.S. in Computer Systems Engineering from the Instituto Politecnico Nacional, Mexico City, Mexico, in 2001, an M.S. in Intelligent Systems from the Instituto Tecnologico y de Estudios Superiores de Monterrey, Monterrey, N.L., Mexico, in 2005, and a Ph.D. in Engineering Sciences from the Universite Libre de Bruxelles, Brussels, Belgium, in 2011. He is currently a post-doctoral researcher at the Dept. of Mathematical Sciences at the University of Delaware, Newark, DE, USA.
Swarm intelligence is the collective-level, problem-solving behavior of groups of relatively simple agents. Local interactions among agents, either direct or indirect through the environment, are fundamental for the emergence of swarm intelligence; however, there is a class of interactions, referred to as interference, that actually blocks or hinders the agents' goal-seeking behavior.
Traditional approaches deal with interference by complexifying the behavior and/or the characteristics of the agents that comprise a swarm intelligence system, limiting its scalability and increasing the difficulty of the design task. A framework, called incremental social learning (ISL), has been proposed to tackle the interference problem in swarm intelligence systems. Through the use of ISL, interference can be reduced without changing the original design of the system's constituent agents. The observable effect of ISL on a swarm intelligence system is an improvement of the system's performance. In this talk, the ISL framework will be described in detail. Furthermore, three instantiations of the framework, which demonstrate the framework's effectiveness, will also be presented. The swarm intelligence systems used as case studies are the particle swarm optimization algorithm, ant colony optimization algorithm for continuous domains, and the artificial bee colony optimization algorithm.
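The core ISL idea, starting with a small population and adding agents that learn socially from the best agent found so far, can be sketched on a toy optimizer. The growth schedule, the social-initialization rule, and the local search below are illustrative assumptions, not Montes de Oca's exact algorithms.

```python
import random

# Hedged sketch of incremental social learning (ISL) on a toy optimizer.
def sphere(x):
    return sum(v * v for v in x)

def random_agent(dim, lo, hi):
    return [random.uniform(lo, hi) for _ in range(dim)]

def social_agent(model, dim, lo, hi):
    # A newcomer learns socially: it is pulled toward the best agent so far
    # instead of starting from a purely random position.
    x = random_agent(dim, lo, hi)
    u = random.random()
    return [xi + u * (mi - xi) for xi, mi in zip(x, model)]

def isl_search(dim=5, lo=-5.0, hi=5.0, steps=200, add_every=20):
    agents = [random_agent(dim, lo, hi)]   # the population starts small...
    best = min(agents, key=sphere)
    for t in range(1, steps + 1):
        if t % add_every == 0:             # ...and grows incrementally
            agents.append(social_agent(best, dim, lo, hi))
        # Toy "individual learning": keep a local random perturbation if better.
        for i, a in enumerate(agents):
            cand = [v + random.gauss(0.0, 0.1) for v in a]
            if sphere(cand) < sphere(a):
                agents[i] = cand
        best = min(agents + [best], key=sphere)
    return best
```

Because the population is small early on, interference between agents is limited, and newcomers avoid relearning from scratch by copying part of the accumulated experience, which is the intuition behind the framework.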
Dr. Plamen Angelov is a Reader in Computational Intelligence and coordinator of Intelligent Systems Research at Infolab21, Lancaster University, UK. He is a Senior Member of the IEEE and Chair of two Technical Committees (TCs): the TC on Standards of the Computational Intelligence Society and the TC on Evolving Intelligent Systems of the Systems, Man and Cybernetics Society. He is also a member of the UK Autonomous Systems National TC, of the Autonomous Systems Study Group, NorthWest Science Council, UK, and of the Autonomous Systems Network of the Society of British Aerospace Companies. He is a very active academic and researcher who has authored or co-authored over 150 peer-reviewed publications in leading journals (50+) and peer-reviewed conference proceedings, a patent, a research monograph, and a number of edited books, and has an active research portfolio in the area of computational intelligence and autonomous system modelling, identification, and machine learning. He has internationally recognised pioneering results in on-line and evolving methodologies and algorithms for knowledge extraction in the form of human-intelligible fuzzy rule-based systems and autonomous machine learning. Dr. Angelov leads projects funded by EPSRC, ASHRAE-USA, EC FP6 and FP7, The Royal Society, the Nuffield Foundation, DTI/DBIS, the MoD, and industry (BAE Systems, 4S Information Systems, Sagem/SAFRAN, United Aircraft Corporation and Concern Avionica, NLR, etc.). His research contributes to the competitiveness of industry and defence and to quality of life through projects such as the ASTRAEA project, a £32M (phase I) and £30M (phase II) programme, in which Dr. Angelov led projects on Collision Avoidance (£150K, 2006/08) and Adaptive Routeing (£75K, 2006/08). The work on this project was recognised by 'The Engineer' Innovation and Technology 2008 Award in two categories: i) Aerospace and Defence and ii) The Special Award.
Other examples of research with a direct impact on the competitiveness of UK industry and on quality of life are the BAE Systems-funded project on Sense and Avoid (principal investigator, £66K, 2006/07); the BAE-funded project on UAS Passive Sense, Detect and Avoid Algorithm Development (£24K consultancy, part of ASTRAEA-II, 2009); the BAE Systems-funded project on UAV Safety Support (co-investigator, £44K, 2008); the EC-funded project on safety (and maintenance) improvement through automated flight data analysis (€1.3M, co-investigator); the Ministry of Defence-funded projects 'Multi-source Intelligence: STAKE: Real-time Spatio-Temporal Analysis and Knowledge Extraction through Evolving Clustering' (£30K, principal investigator, 2011) and 'Assisted Carriage: Intelligent Leader-follower Algorithms for Ground Platforms' (£42K, 2009), which developed an unmanned ground-based vehicle prototype taken further by Boeing-UK in a demonstrator programme in 2009-11; the so-called 'innovation vouchers' of the North-West Development Agency-UK and Autonomous Vehicles International Ltd. (£10K, 2010, principal investigator); and the MBDA-led project on algorithms for automatic feature extraction and object classification from aerial images (£56K, 2010), funded by the French and British defence ministries. Dr. Angelov is also the founding Editor-in-Chief of Springer's journal Evolving Systems and serves as an Associate Editor of several other international journals. He also chairs annual conferences organised by the IEEE, acts as a Visiting Professor (2005, Brazil; 2007, Germany; 2010, Spain), and regularly gives invited and plenary talks at leading companies (Ford; The Dow Chemical, USA; QinetiQ; BAE Systems; Thales; etc.) and universities (Michigan, USA; Delft, the Netherlands; Leuven, Belgium; Linz, Austria; Campinas, Brazil; Wolfenbuettel, Germany; etc.). More information can be found at www.lancs.ac.uk/staff/angelov.
The problem of extracting knowledge and generating interpretable/tractable (fuzzy) rules from data has been around for some time. Nowadays, data stream in from the Internet, social and sensor networks, portable devices with mobile communication, and low-cost, high-volume data stores. The challenge is to adapt existing methodologies and/or develop innovative ones to address the problem of autonomous generation of models by learning, creating in this way a new generation of potentially small and portable smart gadgets with attractive properties: ALMAs (autonomous learning machines) that can also communicate/transmit the knowledge extracted from data streams, collaborate, adapt to the environment, and become an important part of the future networked society. In this talk, of course, we cannot solve or address all the problems that ALMA relates to, but we will try to lay out the principles and basic steps for creating an ALMA and will compare this with the traditional approach. We will cover the topics of recursive density estimation (RDE), evolving data clouds, and evolving classifiers, predictors, and controllers. It will be demonstrated how RDE can be used for outlier detection and also for forming new data clouds, which in turn can serve as granules of the data space responsible for local, simpler models in a multi-model approach. The recently introduced approach for evolving clouds will be compared to evolving and traditional clustering approaches, and its applicability to model structure synthesis problems will be demonstrated. Examples of application problems of classification and prediction will be illustrated. The proposed ALMA can be used as the basis of software algorithms and agents and of hardware devices in various problems in industry, defence, security, space exploration, robotics, human behaviour analysis, assisted living, etc.
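The recursive density estimation mentioned above can be sketched in a few lines. The class below follows one commonly cited Cauchy-kernel form with recursively updated mean and scatter; the exact expressions and bookkeeping in Angelov's published algorithms may differ, so treat this as an illustrative approximation.

```python
# Hedged sketch of recursive density estimation (RDE) with a Cauchy-type kernel.
class RDE:
    def __init__(self, dim):
        self.k = 0
        self.mean = [0.0] * dim      # recursive mean of the samples
        self.scatter = 0.0           # recursive mean of squared norms

    def update(self, z):
        """Absorb sample z and return its density w.r.t. all data seen so far."""
        self.k += 1
        self.mean = [m + (zi - m) / self.k for m, zi in zip(self.mean, z)]
        self.scatter += (sum(zi * zi for zi in z) - self.scatter) / self.k
        var = self.scatter - sum(m * m for m in self.mean)   # total data scatter
        dist2 = sum((zi - m) ** 2 for zi, m in zip(z, self.mean))
        return 1.0 / (1.0 + dist2 + max(var, 0.0))
```

A sample with low density lies far from everything seen so far, which is how RDE supports both outlier detection and the spawning of new data clouds in the talk's outline.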
Université Catholique de Louvain
Michel Verleysen received the M.S. and Ph.D. degrees in electrical engineering from the Université catholique de Louvain (Belgium) in 1987 and 1992, respectively. He was an invited professor at the Swiss E.P.F.L. (Ecole Polytechnique Fédérale de Lausanne, Switzerland) in 1992, at the Université d'Evry Val d'Essonne (France) in 2001, and at the Université Paris I Panthéon-Sorbonne from 2002 to 2011. He is now a Full Professor at the Université catholique de Louvain and Honorary Research Director of the Belgian F.N.R.S. (National Fund for Scientific Research). He is editor-in-chief of the Neural Processing Letters journal (published by Springer), chairman of the annual ESANN conference (European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning), a past associate editor of the IEEE Transactions on Neural Networks journal, and a member of the editorial boards and program committees of several journals and conferences on neural networks and learning. He was chairman of the IEEE Computational Intelligence Society Benelux chapter (2008-2010) and a member of the executive board of the European Neural Networks Society (2005-2010). He is author or co-author of more than 250 scientific papers in international journals and books or communications to conferences with reviewing committees. He is the co-author of a scientific popularization book on artificial neural networks in the series "Que Sais-Je?", in French, and of the "Nonlinear Dimensionality Reduction" book published by Springer in 2007. His research interests include machine learning, feature selection, artificial neural networks, self-organization, time-series forecasting, nonlinear statistics, adaptive signal processing, and high-dimensional data analysis.
Machine learning is used nowadays to build models for classification and regression tasks, among others. The learning principle consists in designing models based on the information contained in the dataset, with as few a priori restrictions as possible on the class of models of interest.
While many paradigms exist and are widely used in the context of machine learning, most of them suffer from the "curse of dimensionality". The curse of dimensionality means that some strange phenomena appear when data are represented in a high-dimensional space. These phenomena are most often counter-intuitive: the conventional geometrical interpretation of data analysis in 2- or 3-dimensional spaces cannot be extended to much higher dimensions.
Among the problems related to the curse of dimensionality, feature redundancy and the concentration of the norm are probably those with the largest impact on data analysis tools. Feature redundancy means that models lose the identifiability property (for example, they oscillate between equivalent solutions), become difficult to interpret, etc.; although redundancy is an advantage from the point of view of the information content of the data, it makes learning the model more difficult. The concentration of the norm is a more specific, unfortunate property of high-dimensional vectors: as the dimension of the space increases, norms and distances concentrate, making discrimination between data points more difficult.
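The concentration effect is easy to reproduce numerically. The sketch below, with an illustrative function name and a uniform toy dataset of my own choosing, measures the relative gap between the farthest and nearest neighbour of a query point: it shrinks as the dimension grows.

```python
import math
import random

# Sketch of distance concentration: the relative contrast between the
# farthest and nearest neighbour shrinks as the dimension increases.
def relative_contrast(dim, n_points=200, seed=0):
    rng = random.Random(seed)
    query = [rng.random() for _ in range(dim)]
    dists = []
    for _ in range(n_points):
        p = [rng.random() for _ in range(dim)]
        dists.append(math.dist(query, p))   # Euclidean distance (Python 3.8+)
    dmin, dmax = min(dists), max(dists)
    return (dmax - dmin) / dmin             # small when distances concentrate

for d in (2, 10, 100, 1000):
    print(d, round(relative_contrast(d), 3))
```

In low dimension the nearest and farthest points differ by a large factor; in dimension 1000 all pairwise distances are nearly equal, which is exactly why nearest-neighbour-style discrimination degrades.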
Feature selection is a key challenge in machine learning that helps fight the curse of dimensionality. Feature selection allows us to reduce the number of features effectively used by the models, either beforehand (filter approaches) or during learning (wrapper and embedded approaches). This talk will present state-of-the-art approaches to feature selection, with a particular emphasis on information-theoretic filter approaches. It will also be shown that information-theoretic filter approaches are particularly well suited to accommodating non-standard data (structured data, data with missing values, infinite-dimensional data, etc.), opening the way to new research and application areas in data analysis.
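A minimal information-theoretic filter of the kind emphasized above can be sketched for discrete features: score each feature by its mutual information with the class label and keep the top-ranked ones. The toy dataset and names below are illustrative assumptions, not examples from the talk.

```python
import math
from collections import Counter

# Hedged sketch of an information-theoretic filter: rank discrete features
# by mutual information I(X; Y) with the label, then keep the top-k.
def mutual_information(xs, ys):
    n = len(xs)
    pxy = Counter(zip(xs, ys))
    px, py = Counter(xs), Counter(ys)
    mi = 0.0
    for (x, y), c in pxy.items():
        p_xy = c / n
        # p_xy * log2(p_xy / (p_x * p_y)), with counts folded in
        mi += p_xy * math.log2(p_xy * n * n / (px[x] * py[y]))
    return mi

def filter_select(features, labels, k):
    # features: dict name -> list of discrete values, one per sample
    scores = {name: mutual_information(col, labels) for name, col in features.items()}
    return sorted(scores, key=scores.get, reverse=True)[:k]

labels = [0, 0, 1, 1, 0, 1, 0, 1]
features = {
    "copy_of_label": [0, 0, 1, 1, 0, 1, 0, 1],  # fully informative feature
    "noise":         [1, 0, 1, 0, 0, 1, 1, 0],  # unrelated to the label
}
print(filter_select(features, labels, k=1))  # -> ['copy_of_label']
```

Because the filter scores features independently of any learning model, it is cheap and, as the talk notes, extends naturally to non-standard data once a suitable mutual-information estimator is available.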