Projects / Programmes source: ARIS

Mr-BEC: Modern approaches for Benchmarking in Evolutionary Computation

Research activity

Code Science Field Subfield
2.07.00  Engineering sciences and technologies  Computer science and informatics   

Code Science Field
P170  Natural sciences and mathematics  Computer science, numerical analysis, systems, control 

Code Science Field
1.02  Natural Sciences  Computer and information sciences 
Keywords
Performance assessment approaches, statistics, evolutionary computation, information theory, random matrix theory, benchmarking
Evaluation (rules)
source: COBISS
Researchers (1)
no. Code Name and surname Research area Role Period No. of publications
1.  50854  PhD Tome Eftimov  Computer science and informatics  Head  2019 - 2021  233 
Organisations (1)
no. Code Research organisation City Registration number No. of publications
1.  0106  Jožef Stefan Institute  Ljubljana  5051606000  91,094 
Abstract
Evolutionary computation (EC) is a subfield of Computational Intelligence. Its main research focus is the development of algorithms for global optimization inspired by biological evolution. These algorithms are efficient at finding good solutions to NP-hard problems whose solutions cannot be computed in analytical or semi-analytical form, or by using deterministic algorithms. Many real-world scenarios involve optimization problems, for example when minimizing risk and cost, or maximizing reliability and efficiency. Additionally, in combination with machine learning algorithms, these methods represent powerful techniques for solving many prediction problems in industry. Benchmarking in EC is a crucial task and is used to evaluate the performance of an algorithm against other algorithms. Benchmarking theory involves three main questions: i) which problems to choose for benchmarking, ii) how to design the experiment, and iii) how to evaluate performance. The focus of the proposed project will be on approaches used for performance evaluation, once the problems have been chosen and the experiments designed. Existing approaches for assessing the performance of algorithms are based on a statistical comparison of the algorithms’ results. Although it is crucial for research that state-of-the-art performance assessment approaches are grounded in statistical significance, there is still a large gap between theory and real-world scenarios, because a statistically significant difference is sometimes not significant in a practical sense. Another issue is selecting a metric that can describe different aspects of the performance. It can also happen that an algorithm’s good performance is the result of a correlation between the selected performance metric and the methodology of the algorithm rather than genuinely unbiased performance.
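As a hypothetical illustration of the gap between statistical and practical significance described above, one simple way to fold a practitioner-chosen tolerance into a comparison is to treat results that differ by less than a threshold `epsilon` as ties before ranking. The function name, the threshold value, and the tie-chaining behavior below are illustrative assumptions, not the project's actual method:

```python
# Minimal sketch (not the project's method): fractional ranking of one
# problem's results, one value per algorithm, where values closer than
# `epsilon` are merged into a tie group and share the average rank.
# Note: ties chain transitively, a known simplification of this sketch.

def rank_with_practical_ties(results, epsilon=1e-3):
    order = sorted(range(len(results)), key=lambda i: results[i])
    ranks = [0.0] * len(results)
    start = 0
    while start < len(order):
        end = start
        # grow the tie group while consecutive sorted values are within epsilon
        while (end + 1 < len(order)
               and results[order[end + 1]] - results[order[end]] < epsilon):
            end += 1
        avg_rank = (start + end) / 2 + 1  # average of 1-based positions
        for pos in range(start, end + 1):
            ranks[order[pos]] = avg_rank
        start = end + 1
    return ranks

# Three algorithms' best fitness on one problem; the first two differ by
# less than epsilon, so they are "practically equal" and share rank 1.5.
print(rank_with_practical_ties([0.1000, 0.1004, 0.5], epsilon=0.001))
# -> [1.5, 1.5, 3.0]
```

A statistical test applied to such ranks would then only report differences that also exceed the practical tolerance.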
Lastly, the performance metric usually transforms the optimization result into one-dimensional data that is then analyzed, without considering the information that exists in the high-dimensional space, which could give additional insight into the algorithm’s performance. All these questions should be considered together in order to provide a real in-depth understanding of an algorithm’s performance beyond “simple” statistics, which will improve the applicability of the algorithms. The main objective of the proposed project is to invent, develop, implement, and evaluate a framework for benchmarking in evolutionary computation, consisting of methodologies that will bring about an in-depth understanding of an algorithm’s behavior, especially focusing on identifying practical significance, obtaining knowledge about performance using information from the space distribution (high-dimensional data), and drawing more general benchmarking conclusions using a set of performance metrics. The methodologies will be based on a synergy between statistics, information theory, and random matrix theory. The proposed methodologies will be based on ranking schemes that transform raw data into input data for analysis with an appropriate statistical test. Common to all ranking schemes is that they will be based on comparing distributions, in order to handle data that describe different performance aspects. The development of the proposed methodology and its implementation is motivated by the continuous growth of industrial optimization problems, which leads to a requirement for a better understanding of the nature of, and methodologies behind, the algorithms. We expect our proposed methodologies to have the greatest impact on modern approaches for benchmarking in evolutionary computation, leading to an in-depth understanding of an algorithm’s performance.
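A minimal sketch of the "comparing distributions" idea above: the two-sample Kolmogorov-Smirnov statistic measures how far apart two algorithms' empirical result distributions are, using the full samples rather than a single summary value. This is a standard statistic shown only as an example of a distribution-based comparison; it is not claimed to be the test chosen by the project:

```python
# Sketch: maximum vertical distance between two empirical CDFs,
# i.e. the two-sample Kolmogorov-Smirnov statistic, in pure Python.
import bisect

def ks_statistic(sample_a, sample_b):
    a, b = sorted(sample_a), sorted(sample_b)
    d = 0.0
    for x in a + b:
        # empirical CDF value of each sample at x
        cdf_a = bisect.bisect_right(a, x) / len(a)
        cdf_b = bisect.bisect_right(b, x) / len(b)
        d = max(d, abs(cdf_a - cdf_b))
    return d

# Identical samples are indistinguishable; disjoint samples differ maximally.
print(ks_statistic([1, 2, 3], [1, 2, 3]))     # -> 0.0
print(ks_statistic([1, 2, 3], [10, 20, 30]))  # -> 1.0
```

A ranking scheme built on such a distance can rank algorithms by how their whole result distributions compare, instead of collapsing each run set to one number first.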
We will also show the general applicability of the proposed methodology through identifying cases from research domains other than EC, such as machine learning, natural language processing, and signal processing.
Significance for science
We expect our proposed methodology to have the greatest impact on modern approaches for benchmarking in evolutionary computation, leading to an in-depth understanding of an algorithm’s performance. We expect that this novel methodology will be further combined with meta-learning to transfer the knowledge gained from benchmarking to real-world scenarios. In the area of benchmarking theory, this will also be a step towards helping researchers apply a proper study analysis, since traditional approaches require knowledge of the conditions that the data must meet in order for a given test to be applied. This step is often omitted, and researchers simply apply a statistical test, in most cases borrowed from a similar, published study, which can be inappropriate for their data. This kind of misunderstanding is all too common in the research community and can be observed even in many high-ranking journal papers. It can be assumed that this is sometimes done on purpose to mislead the reader into believing that the authors’ results are better than they actually are. We will show the general applicability of the proposed methodology by identifying cases from research domains other than EC, such as machine learning, natural language processing, and signal processing. We expect this research proposal to result in several (2-4) publications in high-ranking journals (IEEE Transactions on Evolutionary Computation, Information Sciences, Information Fusion) and participation at several (3-6) international conferences, where papers and tutorials will be presented and workshops will be organized (the Genetic and Evolutionary Computation Conference (GECCO), the IEEE Congress on Evolutionary Computation (CEC), and the IEEE Symposium Series on Computational Intelligence (IEEE SSCI)). This could also serve as a vehicle for possible Horizon 2020 project proposals and/or proposals to other science funding agencies.
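The point above about checking data conditions before choosing a test can be sketched with a toy helper. The sample-size and skewness thresholds below are common rules of thumb assumed here purely for illustration; they are not values or procedures from this project:

```python
# Illustrative sketch: inspect sample size and skewness before suggesting
# a parametric or non-parametric comparison. Thresholds are rules of thumb.
import statistics

def suggest_test(sample, min_n=30, max_abs_skew=1.0):
    n = len(sample)
    mean = statistics.fmean(sample)
    sd = statistics.stdev(sample)
    # (biased) sample skewness estimate
    skew = sum((x - mean) ** 3 for x in sample) / (n * sd ** 3)
    if n >= min_n and abs(skew) <= max_abs_skew:
        return "parametric (e.g. paired t-test)"
    return "non-parametric (e.g. Wilcoxon signed-rank)"

# A small, heavily skewed sample fails both rules of thumb.
print(suggest_test([0.01, 0.02, 0.02, 0.03, 5.0]))
# -> non-parametric (e.g. Wilcoxon signed-rank)
```

A real study analysis would of course use proper assumption tests rather than thresholds, but the sketch shows the omitted step: the data, not a previously published study, should drive the choice of test.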
Significance for the country
We expect our proposed methodology to have the greatest impact on modern approaches for benchmarking in evolutionary computation, leading to an in-depth understanding of an algorithm’s performance. We expect that this novel methodology will be further combined with meta-learning to transfer the knowledge gained from benchmarking to real-world scenarios. In the area of benchmarking theory, this will also be a step towards helping researchers apply a proper study analysis, since traditional approaches require knowledge of the conditions that the data must meet in order for a given test to be applied. This step is often omitted, and researchers simply apply a statistical test, in most cases borrowed from a similar, published study, which can be inappropriate for their data. This kind of misunderstanding is all too common in the research community and can be observed even in many high-ranking journal papers. It can be assumed that this is sometimes done on purpose to mislead the reader into believing that the authors’ results are better than they actually are. We will show the general applicability of the proposed methodology by identifying cases from research domains other than EC, such as machine learning, natural language processing, and signal processing. We expect this research proposal to result in several (2-4) publications in high-ranking journals (IEEE Transactions on Evolutionary Computation, Information Sciences, Information Fusion) and participation at several (3-6) international conferences, where papers and tutorials will be presented and workshops will be organized (the Genetic and Evolutionary Computation Conference (GECCO), the IEEE Congress on Evolutionary Computation (CEC), and the IEEE Symposium Series on Computational Intelligence (IEEE SSCI)). This could also serve as a vehicle for possible Horizon 2020 project proposals and/or proposals to other science funding agencies.
Most important scientific results (interim report)
Most important socioeconomically and culturally relevant results