Miroslav Šnorek

Miroslav Šnorek was born in the South Bohemian town of Písek, Czech Republic, in 1947. He studied Technical Cybernetics at the Czech Technical University in Prague, where he obtained his degrees in 1970. He now works as an Associate Professor at the Department of Computer Science and Engineering of the Faculty of Electrical Engineering of the same university (CTU).

Publications

  • Drchal, J., Kapraľ, O., Koutník, J., Šnorek, M.: Combining Multiple Inputs in HyperNEAT Mobile Agent Controller. vol. 2, p. 775-783, Springer, Berlin, 2009. ISSN 0302-9743. BibTex, PDF

    In this paper we present neuro-evolution of neural network controllers for mobile agents in a simulated environment. The controller is obtained through evolution of hypercube-encoded weights of recurrent neural networks (HyperNEAT). The simulated agent’s goal is to find a target in the shortest time. The generated neural network processes three different inputs – surface quality, obstacles and distance to the target. The behavior that emerged in the agents features the ability to drive on roads, avoid obstacles and search for the target efficiently.
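
To illustrate the hypercube encoding mentioned above, here is a minimal sketch (not the paper's implementation) of how a HyperNEAT-style substrate obtains its connection weights: an evolved CPPN is queried with the coordinates of every pair of neurons, and weak connections are pruned. The `cppn` function below is a hypothetical stand-in for an evolved network.

```python
import math

def cppn(x1, y1, x2, y2):
    # Toy stand-in for an evolved CPPN: any smooth function of the
    # coordinates of the two connected neurons yields one weight.
    return math.sin(2.0 * x1 * y2) * math.exp(-((x2 - x1) ** 2 + (y2 - y1) ** 2))

def substrate_weights(inputs, outputs, threshold=0.2):
    # Query the CPPN for every input/output neuron pair; small weights
    # are pruned, which is how HyperNEAT expresses missing connections.
    weights = {}
    for i, (x1, y1) in enumerate(inputs):
        for j, (x2, y2) in enumerate(outputs):
            w = cppn(x1, y1, x2, y2)
            if abs(w) > threshold:
                weights[(i, j)] = w
    return weights

# Hypothetical layout: three input rows (surface, obstacles, target
# distance) of five sensors each, and two output neurons.
inputs = [(x / 4.0, y / 2.0) for y in range(3) for x in range(5)]
outputs = [(float(x), 1.0) for x in range(2)]
w = substrate_weights(inputs, outputs)
```

Because the CPPN is a function of geometry rather than a table of weights, the same evolved individual can be re-queried for a denser sensor grid, which is the scalability property the abstract refers to.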

  • Aleš Pilný, Wolfgang Oertel, Pavel Kordík, Miroslav Šnorek: Correlation-based Feature Ranking in Combination with Embedded Feature Selection. 2009. BibTex, PDF

    Most Feature Ranking and Feature Selection approaches can be used for categorical data only. Some of them rely on statistical measures of the data, some are tailored to a specific data mining algorithm (the wrapper approach). In this paper we present new methods for feature ranking and selection obtained as a combination of the above-mentioned approaches. The data mining algorithm (GAME) is designed for numerical data, but it can be applied to categorical data as well. It incorporates feature selection mechanisms, and the new methods proposed in this paper derive a feature ranking from the final data mining model. The rank of each feature selected by the model is computed by processing correlations of outputs between neighboring neurons of the model in different ways. We used four different methods based on fuzzy logic, certainty factors and simple calculus. The performance of these four feature ranking methods was tested on artificial data sets, on the well-known Ionosphere data set and on the well-known Housing data set with continuous variables. The results indicated that the method based on the simple calculus approach was significantly worse than the other three methods. These methods produce rankings consistent with recently published studies.

  • Drchal, J. and Koutník, J. and Šnorek, M.: HyperNEAT Controlled Robots Learn How to Drive on Roads in Simulated Environment. In: 2009 IEEE Congress on Evolutionary Computation, p. 6, Research Publishing Services, Singapore, 2009. ISBN 978-1-4244-2959-2 BibTex, PDF

    In this paper we describe simulation of autonomous robots controlled by recurrent neural networks, which are evolved through indirect encoding using the HyperNEAT algorithm. The robots utilize a 180-degree-wide sensor array. Thanks to the scalability of the neural network generated by HyperNEAT, the sensor array can have various resolutions. This would allow a camera to be used as the input of a neural network controller in a real robot. The robots were simulated in a software simulation environment. In the experiments the robots were trained to drive with maximum average speed. Such a fitness forces them to learn how to drive on roads and avoid collisions. The evolved neural networks show excellent scalability. Scaling of the sensory input reduces the performance of the robots, which can be regained by re-training the robot with the different sensory input resolution.

  • Zdeněk Buk, Jan Koutník, Miroslav Šnorek: NEAT in HyperNEAT Substituted with Genetic Programming. vol. 5495, p. 243-252, Springer, Kuopio, Finland, 2009. BibTex, PDF

    In this paper we present an application of genetic programming (GP) [1] to the evolution of an indirect encoding of neural network weights. We compare the original HyperNEAT algorithm with our implementation, in which we replaced the underlying NEAT with genetic programming. The algorithm was named HyperGP. The evolved neural networks were used as controllers of autonomous mobile agents (robots) in simulation. The agents were trained to drive with maximum average speed. This forces them to learn how to drive on roads and avoid collisions. Genetic programming, lacking NEAT's complexification property, shows better exploration ability and tends to generate more complex solutions in fewer generations. On the other hand, basic genetic programming generates quite complex weight-generating functions. Both approaches generate neural controllers with similar abilities.

  • : Behaviour of FeRaNGA Method for Feature Ranking During Learning Process Using Inductive Modelling. In: Proceedings of the 2nd International Conference on Inductive Modelling, Kiev: Ukr. INTEI, 2008. BibTex, PDF

    Nowadays, Feature Ranking (FR) is a commonly used method for obtaining information about large data sets of various dimensionality. This knowledge can be used in the next step of data processing, improving both the accuracy and the speed of experiments. Our approach is based on Artificial Neural Networks (ANN) instead of classical statistical methods. We obtain the knowledge as a by-product of the Niching Genetic Algorithm (NGA) used for the creation of a feedforward hybrid neural network called GAME. In this paper we present the behaviour of FeRaNGA (a Feature Ranking method using a Niching Genetic Algorithm) during the learning process, particularly in every layer of the generated GAME network. We want to determine how important the NGA configuration and processing procedure are for the FR results, because the behaviour of a GA is nondeterministic and the results of FeRaNGA were therefore also indeterminate. This method ranks features depending on the percentage of processing elements that survived the selection process. A processing element transforms its parent input features to an output. The selection process is realized by means of the NGA, where units connected to the least significant features starve and fade from the population. To obtain the best results and to find an optimal configuration, the behaviour of the FeRaNGA algorithm is tested with various NGA parameters and numbers of ensemble GAME models on well-known artificial data sets.
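
The ranking mechanism described above can be sketched in a few lines. This is a simplified illustration, not the GAME implementation: each unit is reduced to the set of features it connects to, and a feature's rank is the fraction of its connected units that survived selection.

```python
from collections import Counter

def feranga_rank(units, survivors):
    # units: list of sets of feature indices each unit connects to.
    # survivors: indices of units that survived niching-GA selection.
    # A feature's rank is the share of its units that survived.
    total = Counter()
    alive = Counter()
    for idx, feats in enumerate(units):
        for f in feats:
            total[f] += 1
            if idx in survivors:
                alive[f] += 1
    return {f: alive[f] / total[f] for f in total}

units = [{0, 1}, {0, 2}, {1, 2}, {0}]
ranks = feranga_rank(units, survivors={0, 1, 3})
# feature 0 appears in units 0, 1 and 3, which all survived -> rank 1.0
```

Units connected to uninformative features starve during selection, so those features accumulate low survival shares and end up at the bottom of the ranking.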

  • Ales Pilny, Pavel Kordik, Miroslav Snorek: Feature Ranking Derived from Data Mining Process. In: Artificial Neural Networks - ICANN 2008, 18th International Conference Proceedings, Heidelberg: Springer, http://portal.acm.org/citation.cfm?id=1429510, 2008. BibTex, PDF

    Most common feature ranking methods are based on the statistical approach. This paper compares several statistical methods with a new method for feature ranking derived from the data mining process. This method ranks features depending on the percentage of child units that survived the selection process. A child unit is a processing element transforming the parent input features to the output. After training, units are interconnected in the feedforward hybrid neural network called GAME. The selection process is realized by means of a niching genetic algorithm, where units connected to the least significant features starve and fade from the population. Parameters of the new feature ranking algorithm are investigated and a comparison among different methods is presented on well-known real-world and artificial data sets.

  • Koutnik, J., Snorek, M.: Temporal Hebbian Self-Organizing Map for Sequences. In: 16th International Conference on Artificial Neural Networks Proceedings (ICANN 2006), Part I, p. 632--641, Springer Berlin / Heidelberg, 2008. ISBN 978-3-540-87535-2 BibTex
  • Drchal J., Šnorek M.: Diversity visualization in evolutionary algorithms. In: Proceedings of 41st Spring International Conference MOSIS 07, Modelling and Simulation of Systems, p. 77--84, Ostrava: MARQ, 2007. ISBN 978-80-86840-30-7 BibTex, PDF

    Evolutionary Algorithms (EAs) are well-known nature-inspired optimization methods. Diversity is an essential aspect of each EA. It describes the variability of organisms in the population. A lack of diversity is a common problem - diversity should be preserved in order to evade local extremes (premature convergence). Niching algorithms are modifications of classical EAs. Niching is based on dividing the population into separate subpopulations - it spreads the organisms effectively all over the search space and hence makes the overall population diverse. Using niching methods also requires setting their parameters, which can be very difficult. This paper presents a novel way of visualizing diversity based on physical system simulation. This visualization is helpful when designing and tuning niching algorithms, but it also has other uses. The visualization will be presented on NEAT - the evolutionary algorithm which optimizes both the topology and the parameters of neural networks.
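
As a rough illustration of the physical-system idea (a sketch of one possible approach, not the paper's visualizer): treat each organism as a particle in the plane and attach springs whose rest lengths equal the pairwise genotype distances, so a diverse population spreads out and a converged one collapses into a cluster.

```python
import random

def layout(dists, steps=200, lr=0.05):
    # dists[i][j] is the genotype distance between organisms i and j.
    # Springs pull each particle pair toward that target distance.
    n = len(dists)
    pos = [[random.random(), random.random()] for _ in range(n)]
    for _ in range(steps):
        for i in range(n):
            for j in range(n):
                if i == j:
                    continue
                dx = pos[j][0] - pos[i][0]
                dy = pos[j][1] - pos[i][1]
                d = max((dx * dx + dy * dy) ** 0.5, 1e-9)
                f = lr * (d - dists[i][j]) / d  # spring force toward rest length
                pos[i][0] += f * dx
                pos[i][1] += f * dy
    return pos
```

Plotting the returned positions over generations shows diversity shrinking as species converge, without requiring any parameter of the niching method itself.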

  • Jan Koutnik, Miroslav Šnorek: New Trends in Simulation of Neural Networks. In: Proceedings of 6th EUROSIM Congress on Modelling and Simulation, Ljubljana, 2007. ISBN 3-901608-32-X BibTex

    In this paper, current simulation techniques and simulation systems for artificial neural networks are compared. We focus on neural network simulators that allow a user to easily design new neural networks. Several simulation strategies that can be exploited by modern neural network simulators are described. We consider synchronous simulation the most effective for parallel systems such as artificial neural networks. Examples of general simulation systems that can be used for the simulation of neural networks are mentioned. Current neural network simulators commonly depend on the type of neural network simulated and cannot easily be extended to simulate a different network or one with a brand new architecture and function. Universal simulation tools seem to be suitable for network design but do not support connectionism natively. The missing language constructions and tools for native support of connecting objects in the simulation led us to design a new simulation tool, SiMoNNe - Simulator of Modular Neural Networks, which allows easy design and simulation of neural networks using a high-level programming language. The language itself is object-oriented with weak type control. It supports native connection of simulated neurons, layers, modules and networks, matrix calculations, easy control of simulation parameters using expressions, re-usability of results as source code and more. The language is interactive and allows connection of a GUI to the SiMoNNe core.

  • Pavel Kordík, Oleg Kovářík, Miroslav Šnorek: Optimization of Models: Looking for the Best Strategy. In: Proceedings of 6th EUROSIM Congress on Modelling and Simulation, Ljubljana, 2007. ISBN 3-901608-32-X BibTex, PDF

    When the parameters of a model are being adjusted, the model is learning to mimic the behaviour of a real-world system. Optimization methods are responsible for this parameter adjustment. The problem is that each real-world system is different and its model should be of different complexity. It is almost impossible to decide in advance which optimization method will perform best (optimally adjust the parameters of the model). In this paper we compare the performance of several methods for nonlinear parameter optimization. Gradient-based methods such as Quasi-Newton or Conjugate Gradient are compared to several nature-inspired methods. We designed an evolutionary algorithm selecting the best optimization methods for models of various complexity. Our experiments proved that the evolution of optimization methods for particular problems is a very promising approach.
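
As a toy illustration of the comparison (a hedged sketch, not the paper's benchmark), the snippet below pits a gradient-based method against a simple nature-inspired random search on a one-parameter model error surface; both `gradient_descent` and `random_search` are minimal stand-ins for the method families named above.

```python
import random

def gradient_descent(grad, x0, lr=0.1, steps=100):
    # Gradient-based adjustment of a single model parameter.
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)
    return x

def random_search(f, x0, sigma=0.5, steps=100):
    # Nature-inspired baseline: keep a perturbation only if it improves fitness.
    x, best = x0, f(x0)
    for _ in range(steps):
        cand = x + random.gauss(0.0, sigma)
        if f(cand) < best:
            x, best = cand, f(cand)
    return x

# Toy "model error" surface with its optimum at x = 3.
f = lambda x: (x - 3.0) ** 2
g = lambda x: 2.0 * (x - 3.0)
x_gd = gradient_descent(g, x0=0.0)
x_rs = random_search(f, x0=0.0)
```

On a smooth surface like this the gradient method wins easily; the point of the paper is that on rugged, model-dependent surfaces neither family dominates, which motivates evolving the choice of optimizer per model.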

  • Kordik P., Saidl J., Snorek M.: Evolutionary Search for Interesting Behavior of Neural Network Ensembles. In: 2006 IEEE Congress on Evolutionary Computation, p. 235-238, Los Alamitos: IEEE Computer Society, 2006. ISBN 0-7803-9489-5 BibTex, PDF
  • Drchal J., Šnorek M., Kordík P.: Maintaining Diversity in Population of Evolved Models. In: Proceedings of 40th Spring International Conference MOSIS 06, Modelling and Simulation of Systems, Ostrava: MARQ, 2006. ISBN 80-86840-21-2 BibTex, PDF

    This paper deals with the creation of models by means of evolutionary algorithms, particularly with maintaining the diversity of the population using niching methods. Niching algorithms are known for their ability to search for multiple optima simultaneously. This is done by splitting the population of models into separate species. Species protect promising but not yet fully developed models. Searching for multiple optima at the same time helps to avoid premature convergence and therefore deals effectively with local optima. The efficiency of two different niching methods is compared on NEAT applied to the neuro-evolution of models.
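
The niching idea can be made concrete with classic fitness sharing (one common niching scheme, shown here only as an illustrative sketch rather than the specific methods the paper compares): an organism's raw fitness is divided by a niche count that grows with the number of nearby organisms, so crowded regions of the search space are penalized and isolated species survive.

```python
def shared_fitness(population, fitness, radius=1.0):
    # Classic fitness sharing: divide raw fitness by the niche count,
    # i.e. the summed similarity to all organisms within `radius`.
    shared = []
    for x in population:
        niche = sum(max(0.0, 1.0 - abs(x - y) / radius) for y in population)
        shared.append(fitness(x) / niche)
    return shared

pop = [0.0, 0.05, 0.1, 5.0]   # three crowded organisms and one isolated
raw = lambda x: 1.0 + x       # toy raw fitness
sf = shared_fitness(pop, raw)
# the isolated organism at 5.0 keeps its full raw fitness (niche count 1)
```

This is exactly the protective effect the abstract describes: a promising but undeveloped model in its own niche is not outcompeted by a crowd of similar, locally optimal models.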

  • Buk, Z., Šnorek, M.: Processing the Time Context Based Data Using Soft Computing Methods in Mathematica. In: 5th International Conference APLIMAT, Bratislava: Slovak University of Technology, February 2006. ISBN 80-967305-4-1 BibTex
  • Koutnik J., Snorek M.: Self-Organizing Neural Networks for Signal Recognition. In: 16th International Conference on Artificial Neural Networks Proceedings (ICANN 2006), Part I, p. 406-414, Springer Berlin / Heidelberg, 2006. ISBN 978-3-540-38625-4 BibTex, poster

    In this paper we introduce a self-organizing neural network that is capable of recognizing temporal signals. Conventional self-organizing neural networks such as the recurrent variant of the Self-Organizing Map provide clustering of input sequences in space and time, but with such a network the identification of the sequence itself requires a supervised recognition process. In our network, called TICALM, recognition is expressed by the speed of convergence of the network while processing either a learned or an unknown signal. The capabilities of TICALM are shown in a handwriting recognition experiment.

  • Koutnik J., Snorek M.: Neural Network Generating Hidden Markov Chain. In: Adaptive and Natural Computing Algorithms - Proceedings of the International Conference in Coimbra, p. 518-521, Wien: Springer, 2005. BibTex

    In this paper we introduce a technique by which a neural network can generate a Hidden Markov Chain. We use a neural network called the Temporal Information Categorizing and Learning Map. The network is an enhanced version of the standard Categorizing and Learning Module (CALM). Our modifications include Euclidean metrics instead of the weighted sum formerly used for categorization of the input space. Construction of the Hidden Markov Chain is achieved by turning fixed-weight internal synapses into associative learning synapses. Results obtained from testing on simple artificial data promise applicability in a real problem domain. We present a visualization technique for the obtained Hidden Markov Chain and a method by which the results can be validated. Experiments are ongoing.

  • Buk, Z., Šnorek, M., Skrbek, M.: Simulating the Finite State Machines using Fully Recurrent Neural Network. In: Proceedings of XXVII-th International Autumn Colloquium ASIS 2005, p. 199-204, Ostrava: MARQ, September 2005. ISBN 80-86840-16-6 BibTex

    In contrast to classic feed-forward neural networks, recurrent neural networks have the ability to process the time context of input data. Because of this memory feature, recurrent neural networks seem to be a powerful instrument for processing time series, natural language, etc. In this paper we present the ability of a fully recurrent neural network to simulate finite state machines, together with a technique for extracting an automaton description from such a network.
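
As a minimal sketch of the idea (a hand-wired toy, not the paper's trained fully recurrent network): a single threshold neuron with a recurrent weight can realize the two-state automaton "has a 1 been seen yet", and enumerating its reachable states recovers the transition table, which is the essence of automaton extraction.

```python
def step(z):
    # Hard threshold activation.
    return 1 if z > 0 else 0

def rnn_fsm(bits):
    # One recurrent threshold neuron implementing the latch automaton:
    # next_state = step(1.0 * state + 1.0 * input - 0.5)
    s = 0
    for x in bits:
        s = step(1.0 * s + 1.0 * x - 0.5)
    return s

def extract_automaton():
    # Extraction: enumerate the (finitely many) binary states and inputs,
    # and record each transition the network realizes.
    table = {}
    for s in (0, 1):
        for x in (0, 1):
            table[(s, x)] = step(1.0 * s + 1.0 * x - 0.5)
    return table
```

With real-valued hidden states the extraction step instead clusters the visited activation vectors into a finite set of states before building the transition table; the toy above skips that because its state is already discrete.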

  • Koutnik J., Snorek M.: Efficient Simulation of Modular Neural Networks. In: Proceedings of the 5th EUROSIM Congress Modelling and Simulation, Vienna: EUROSIM-FRANCOSIM-ARGESIM, 2004. ISBN 3-901608-28-1 BibTex, PDF

    In this paper we describe a new language for the efficient simulation of modular neural networks called SiMoNNe. After an unsuccessful search for a suitable simulation environment, we designed a simulator driven by a high-level programming language which allows easy and fast creation, simulation and testing of various neural network architectures. Not only modular neural networks but also well-known conventional neural network paradigms can be simulated by SiMoNNe.

  • Koutnik J., Snorek M.: Single Categorizing and Learning Module for Temporal Sequences. In: Proceedings of the International Joint Conference on Neural Networks, p. 2977-2982, Piscataway: IEEE, 2004. ISBN 0-7803-8360-5 BibTex, PDF

    Modifications of an existing neural network called the Categorizing and Learning Module (CALM) that allow learning of temporal sequences are introduced in this paper. We embedded an associative learning mechanism which allows the network to look into the past when classifying present stimuli. We built in Euclidean metrics instead of the weighted sum found in the original learning rule. This improvement allows better discrimination when learning low-dimensional patterns in temporal sequences. Results were obtained from testing the enhanced module on simple artificial data. These experiments promise applicability of the enhanced module in a real problem domain.
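
The difference between the two categorization rules can be sketched as follows (an illustrative toy, not the CALM implementation): for low-dimensional patterns the dot-product winner and the nearest-prototype winner can disagree, because the dot product favors large-magnitude prototypes regardless of actual proximity.

```python
import math

def weighted_sum_category(x, prototypes):
    # Original rule: the winner maximizes the dot product with the input.
    return max(range(len(prototypes)),
               key=lambda i: sum(w * v for w, v in zip(prototypes[i], x)))

def euclidean_category(x, prototypes):
    # Modified rule: the winner is the nearest prototype in Euclidean distance.
    return min(range(len(prototypes)),
               key=lambda i: math.dist(prototypes[i], x))

protos = [(0.1, 0.1), (0.9, 0.9)]
x = (0.2, 0.0)
# The dot-product rule picks the large-magnitude prototype (category 1),
# while the Euclidean rule picks the genuinely closer one (category 0).
```

This is the discrimination improvement the abstract claims: with Euclidean metrics, nearby low-dimensional patterns map to the same category even when their magnitudes are small.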

  • Koutnik J., Snorek M.: Enhancement of Categorizing and Learning Module (CALM) - Embedded Detection of Signal Change. In: IJCNN 2003 Conference Proceedings, p. 3233-3237, Piscataway: IEEE, 2003. ISBN 0-7308-7899-7 BibTex, PDF
  • Koutnik J., Brunner J., Snorek M.: The GOLOKO Neural Network for Vision - Analysis of Behavior. In: Proceedings of the International Conference on Computer Vision and Graphics, p. 437-442, Gliwice: Silesian Technical University, 2002. ISBN 83-9176-831-7 BibTex, PDF
  • Skrbek, M., Snorek, M.: SHIFT-ADD Neural Architecture. In: Proceedings of ICECS'99, p. 411-414, Cyprus, 1999. ISBN 0-7803-5682-9 BibTex