
Keynote Lectures

IJCCI is a joint conference composed of three concurrent conferences: ECTA, FCTA and NCTA. These three conferences are always co-located and held in parallel. Keynote lectures are plenary sessions and can be attended by all IJCCI participants.

Why don’t airplanes flap their wings? Or: How much neurobiology do we need in future computers?
Karlheinz Meier, Universität Heidelberg, Germany

Automatic Algorithm Configuration: Methods, Applications, and Perspectives
Thomas Stützle, Université Libre de Bruxelles, Belgium

Fuzzy approaches to data mining
Bernadette Bouchon-Meunier, University Pierre et Marie Curie-Paris 6, France

STEALTH: Modeling Coevolutionary Dynamics of Tax Evasion and Auditing
Una-May O'Reilly, MIT Computer Science and Artificial Intelligence Laboratory, United States

 

Why don’t airplanes flap their wings? Or: How much neurobiology do we need in future computers?

Karlheinz Meier
Universität Heidelberg
Germany
 

Brief Bio
Karlheinz Meier received his PhD in physics in 1984 from Hamburg University in Germany. He has more than 25 years of experience in experimental particle physics, with contributions to four major experiments at particle colliders at DESY in Hamburg and CERN in Geneva. For the ATLAS experiment at the Large Hadron Collider (LHC), he led a 15-year effort to design, build, and operate an electronics data processing system providing on-the-fly data reduction by three orders of magnitude, enabling, among other achievements, the discovery of the Higgs boson. Following scientific staff positions at DESY and CERN, he was appointed full professor of physics at Heidelberg University in 1992. In Heidelberg he co-founded the Kirchhoff-Institute for Physics and a laboratory for the development of microelectronic circuits for science experiments. In particle physics he took a leading international role in shaping the future of the field as president of the European Committee for Future Accelerators (ECFA). Around 2005 he gradually shifted his scientific interests towards large-scale electronic implementations of brain-inspired computer architectures. His group pioneered several innovations in the field, such as the conception of a description language for neural circuits (PyNN), time-compressed mixed-signal neuromorphic computing systems, and wafer-scale integration for their implementation. He led two major European initiatives, FACETS and BrainScaleS, both of which demonstrated the rewarding interdisciplinary collaboration of neuroscience and information science. In 2009 he was one of the initiators of the European Human Brain Project (HBP), which was approved in 2013. In the HBP he leads the subproject on neuromorphic computing, with the goal of establishing brain-inspired computing paradigms as tools for neuroscience and as generic methods for inference from large data volumes.


Abstract
Neuromorphic architectures are considered an attractive approach to implementing cognitive computing in hardware. Like the brain, they are expected to detect spatio-temporal structure in complex data at a very low cost in energy and training time. Recent implementations of deep learning with convolutional neural networks on traditional architectures have shown remarkable results, leading to a debate on whether more biological detail, such as spike communication, would be useful or just a burden. In the keynote I will discuss this open question and argue for a systematic approach towards neuromorphic architectures with an optimized degree of biological realism.
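
To make the notion of spike communication concrete, here is a minimal, purely illustrative sketch (not taken from the speaker's work) of a leaky integrate-and-fire neuron, the simplest spiking model, integrated with the Euler method in Python; all parameter values are assumptions chosen only to produce spikes.

    # Minimal leaky integrate-and-fire (LIF) neuron, Euler integration.
    # All constants are illustrative assumptions, not values from the talk.
    dt = 0.1          # time step (ms)
    tau = 20.0        # membrane time constant (ms)
    v_rest = -65.0    # resting potential (mV)
    v_thresh = -50.0  # spike threshold (mV)
    v_reset = -70.0   # reset potential after a spike (mV)
    r_m = 10.0        # membrane resistance (MOhm)
    i_ext = 1.8       # constant input current (nA)

    v = v_rest
    spike_times = []
    for step in range(int(500 / dt)):          # simulate 500 ms
        # dv/dt = (-(v - v_rest) + R*I) / tau
        v += dt * (-(v - v_rest) + r_m * i_ext) / tau
        if v >= v_thresh:                      # threshold crossing -> emit spike
            spike_times.append(step * dt)
            v = v_reset                        # reset membrane potential

    print(f"{len(spike_times)} spikes; first at {spike_times[0]:.1f} ms")

The point of the sketch is the event-based output: downstream units would receive only the discrete spike times, not the continuous membrane trace, which is what makes spike communication cheap but also raises the question of how much such detail helps.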



 

 

Automatic Algorithm Configuration: Methods, Applications, and Perspectives

Thomas Stützle
Université Libre de Bruxelles
Belgium
 

Brief Bio
Thomas Stützle is a senior research associate of the Belgian F.R.S.-FNRS working at the IRIDIA laboratory of Université libre de Bruxelles (ULB), Belgium, and a fellow of the IEEE. He received the Diplom (German equivalent of the M.S. degree) in business engineering from the Universität Karlsruhe (TH), Karlsruhe, Germany, in 1994, and his PhD and habilitation in computer science from the Computer Science Department of Technische Universität Darmstadt, Germany, in 1998 and 2004, respectively. He is the co-author of two books, "Stochastic Local Search: Foundations and Applications" and "Ant Colony Optimization", and he has published extensively in the wider area of metaheuristics, including 20 edited proceedings or books, 8 journal special issues, and more than 200 journal articles, conference articles, and book chapters, many of which are highly cited. He is an associate editor of Computational Intelligence, Swarm Intelligence, International Transactions in Operational Research, and Applied Mathematics and Computation, and is on the editorial board of seven other journals, including Evolutionary Computation and the Journal of Artificial Intelligence Research. His main research interests are in metaheuristics, swarm intelligence, methodologies for engineering stochastic local search algorithms, multi-objective optimization, and automatic algorithm configuration.


Abstract
The design of algorithms for computationally hard problems is time-consuming and difficult for a number of reasons, such as the complexity of these problems, the large number of degrees of freedom in algorithm design and in the setting of numerical parameters, and the difficulty of algorithm analysis due to heuristic biases and stochasticity. In recent years, automatic algorithm configuration methods have been developed to search large and diverse parameter spaces effectively; these methods have been shown to identify superior algorithm designs and to find performance-improving parameter settings.
In this talk, I will highlight the advantages of addressing algorithm design and configuration by algorithmic techniques, describe the main existing automatic algorithm configuration techniques, and discuss various successful applications of automatic algorithm configuration. These applications include the automatic generation of hybrid stochastic local search algorithms, the design of multi-objective optimisers, and the automatic improvement of algorithm anytime behaviour. I will conclude by arguing that automatic algorithm configuration has the potential to transform the way algorithms for difficult problems are designed and developed in the future.
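
As a rough, hypothetical illustration of the basic loop behind such methods (a deliberately naive random-search configurator, not any of the published tools), the Python sketch below searches a small mixed numerical/categorical parameter space and evaluates candidate configurations on a set of training instances; the solver stub and parameter names are invented for the example.

    import random

    # Toy automatic algorithm configurator: random search over a parameter
    # space, evaluating each candidate configuration on training instances.
    # The "solver" below is a stand-in stub, not a real target algorithm.
    PARAM_SPACE = {
        "restart_prob": (0.0, 0.5),                   # numerical parameter
        "tabu_tenure": (5, 50),                       # integer parameter
        "neighbourhood": ["swap", "insert", "2opt"],  # categorical choice
    }

    def sample_config(rng):
        return {
            "restart_prob": rng.uniform(*PARAM_SPACE["restart_prob"]),
            "tabu_tenure": rng.randint(*PARAM_SPACE["tabu_tenure"]),
            "neighbourhood": rng.choice(PARAM_SPACE["neighbourhood"]),
        }

    def run_solver(config, instance, rng):
        # Stand-in for running the target algorithm on one instance and
        # returning a cost (lower is better). A real configurator would
        # call an external solver here.
        cost = instance + 10 * abs(config["restart_prob"] - 0.2)
        cost += 0.1 * abs(config["tabu_tenure"] - 25)
        cost += {"swap": 3, "insert": 1, "2opt": 0}[config["neighbourhood"]]
        return cost + rng.gauss(0, 0.5)   # stochasticity of the target algorithm

    def configure(instances, budget=200, seed=1):
        rng = random.Random(seed)
        best_cfg, best_cost = None, float("inf")
        for _ in range(budget):
            cfg = sample_config(rng)
            # Average cost over the training set (real tools use racing
            # to discard poor configurations early instead).
            cost = sum(run_solver(cfg, i, rng) for i in instances) / len(instances)
            if cost < best_cost:
                best_cfg, best_cost = cfg, cost
        return best_cfg, best_cost

    cfg, cost = configure(instances=[20, 35, 50])
    print(cfg, round(cost, 2))

Actual configurators improve on this loop with racing, adaptive sampling, or model-based search, but the interface is the same: a parameter space, a target algorithm treated as a black box, and a set of training instances.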



 

 

Fuzzy approaches to data mining

Bernadette Bouchon-Meunier
University Pierre et Marie Curie-Paris 6
France
 

Brief Bio
Bernadette Bouchon-Meunier is a director of research emeritus at the National Centre for Scientific Research, the former head of the department of Databases and Machine Learning in the Computer Science Laboratory of the University Pierre et Marie Curie-Paris 6 (LIP6). She is the Editor-in-Chief of the International Journal of Uncertainty, Fuzziness and Knowledge-based Systems, the (co)-editor of 25 books, and the (co)-author of five. She has (co-)authored more than 400 papers on approximate and similarity-based reasoning, as well as the application of fuzzy logic and machine learning techniques to decision-making, data mining, risk forecasting, information retrieval, user modelling, sensorial and emotional information processing.
Co-executive director of the IPMU International Conference held every other year since 1986, she also served as the FUZZ-IEEE 2010 and FUZZ-IEEE 2013 Program Chair, the IEEE Symposium Series on Computational Intelligence (SSCI 2011) General Chair and the FUZZ-IEEE 2012 Conference Chair, as well as the Honorary chair of IEEE SSCI 2013 and IEEE CIVEMSA 2013.
She is currently the IEEE Computational Intelligence Society Vice-President for Conferences, the IEEE France Section Vice-President for Chapters and the IEEE France Section Computational Intelligence chapter chair. She is an IEEE fellow and an International Fuzzy Systems Association fellow. She received the IEEE Computational Intelligence Society Meritorious Service Award in 2012.


Abstract
Fuzzy logic provides interesting tools for data mining, mainly because of its ability to represent imperfect information, for instance by means of imprecise categories, measures of resemblance, or aggregation methods, as well as fuzzy machine learning methods. This ability is of crucial importance when data are complex, large, heterogeneous, imprecise, vague, uncertain, or incomplete.
We focus our study on fuzzy similarities and their use in many steps of the data mining process, such as clustering, the construction of prototypes, fuzzy querying, and fuzzy learning.
Finally, we illustrate the discussion with examples from real-world problems.
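
As a small, assumed textbook example of such a fuzzy similarity (not necessarily the measures used in the lecture), the Python sketch below computes a Jaccard-style similarity between two fuzzy sets given by membership degrees, with min as intersection and max as union.

    # Jaccard-style similarity between two fuzzy sets over the same universe,
    # each represented by a vector of membership degrees in [0, 1]:
    # sim(A, B) = |A ∩ B| / |A ∪ B|, with min as intersection, max as union.
    def fuzzy_jaccard(a, b):
        inter = sum(min(x, y) for x, y in zip(a, b))  # cardinality of A ∩ B
        union = sum(max(x, y) for x, y in zip(a, b))  # cardinality of A ∪ B
        return inter / union if union else 1.0        # two empty sets match

    # Membership degrees of two imprecise categories, e.g. "warm" and "hot",
    # over five temperature readings (values are illustrative).
    warm = [0.2, 0.7, 1.0, 0.6, 0.1]
    hot  = [0.0, 0.3, 0.8, 1.0, 0.9]
    print(f"similarity(warm, hot) = {fuzzy_jaccard(warm, hot):.2f}")

Such a measure can then drive clustering or prototype construction directly, since it compares imprecise categories rather than crisp values.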



 

 

STEALTH: Modeling Coevolutionary Dynamics of Tax Evasion and Auditing

Una-May O'Reilly
MIT Computer Science and Artificial Intelligence Laboratory
United States
 

Brief Bio
Una-May O'Reilly is founder and co-leader of the AnyScale Learning For All (ALFA) group at the Massachusetts Institute of Technology Computer Science and Artificial Intelligence Laboratory. ALFA focuses on scalable machine learning, evolutionary algorithms, and frameworks for large-scale knowledge mining, prediction, and analytics. The group has projects in clinical medicine knowledge discovery, wind energy, and MOOC technology. She received the EvoStar Award for Outstanding Achievements in Evolutionary Computation in Europe in 2013. She is a Junior Fellow (elected before age 40) of the International Society for Genetic and Evolutionary Computation, now ACM SIGEVO, and currently serves as Vice-Chair of ACM SIGEVO. She served as chair of GECCO, the largest international evolutionary computation conference, in 2005. She has served on the GECCO business committee, co-led the 2006 and 2009 Genetic Programming: Theory to Practice Workshops, and co-chaired EuroGP, the largest conference devoted to genetic programming. In 2013 she inaugurated the Women in Evolutionary Computation group at GECCO. She is the area editor for Data Analytics and Knowledge Discovery for Genetic Programming and Evolvable Machines (Kluwer), an editor for Evolutionary Computation (MIT Press), and an action editor for the Journal of Machine Learning Research.


More information at http://people.csail.mit.edu/unamay/


Abstract
STEALTH is an AI system that detects tax law non-compliance by modeling the co-evolution of tax evasion schemes and their discovery through abstracted audits. Tax evasion accounts for billions in lost revenue each year. When the government pursues a tax evasion scheme and changes the tax law or audit procedures, evasion schemes evolve into new, again undetectable forms. The arms race between tax evasion schemes and tax authority actions presents a significant challenge to guiding and planning enforcement efforts.
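
For readers unfamiliar with competitive coevolution, the following Python sketch is a much-simplified, hypothetical illustration of the two-population dynamic described above, not the actual STEALTH system: schemes are scored by how many audits fail to detect them, audits by how many schemes they catch, and each population serves as the other's fitness function.

    import random

    # Toy competitive coevolution in the spirit of scheme-vs-audit dynamics.
    # A "scheme" and an "audit" are both bit strings; an audit detects a
    # scheme when they match in enough positions. Purely illustrative.
    N_BITS, POP, GENS = 12, 20, 30
    rng = random.Random(0)

    def detects(audit, scheme):
        matches = sum(a == s for a, s in zip(audit, scheme))
        return matches >= 9  # detection threshold (an arbitrary assumption)

    def evolve(pop, fitness, mut=0.1):
        # Truncation selection plus bit-flip mutation.
        parents = sorted(pop, key=fitness, reverse=True)[:POP // 2]
        children = [[b if rng.random() > mut else 1 - b for b in p]
                    for p in parents * 2]
        return children[:POP]

    schemes = [[rng.randint(0, 1) for _ in range(N_BITS)] for _ in range(POP)]
    audits  = [[rng.randint(0, 1) for _ in range(N_BITS)] for _ in range(POP)]

    for gen in range(GENS):
        # A scheme is fit when few audits detect it; an audit is fit when
        # it detects many schemes: each population is the other's fitness.
        schemes = evolve(schemes, lambda s: -sum(detects(a, s) for a in audits))
        audits  = evolve(audits,  lambda a:  sum(detects(a, s) for s in schemes))

    undetected = sum(not any(detects(a, s) for a in audits) for s in schemes)
    print(f"after {GENS} generations, {undetected}/{POP} schemes evade all audits")

The arms race is visible in how the two calls to evolve pull in opposite directions: each generation of audits reshapes the fitness landscape the schemes adapt to, and vice versa.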


Acknowledgement: Work done with Jacob Rosen and Erik Hemberg of ALFA (http://groups.csail.mit.edu/ALFA), and with Geoff Warner and Sanith Wijesinghe of the MITRE Corporation (http://www.mitre.org/).


