Monday, October 31, 2011

Leonid Perlovsky

I'm now in something like a platonic love affair with Leonid Perlovsky. It is full of novelty, full of surprises, of disgust but also of the happiness that comes from discovering new areas.

Let me introduce my story. From the beginning of my PhD studies I read a lot. After half a year I had read enough to know something, but not enough to be good at anything. Michal asked me to read and try to understand L. Perlovsky and his theory, which he thought could be very interesting for his own work. He plays with growing neural networks and cognitive architectures. My task was to look deeper into the Neural Modeling Fields theory (the theory introduced by Leonid Perlovsky) and try to implement it.

After reading a lot of papers and one book, I started to understand at least a little of what he is talking about. Or I thought I understood. It was a totally new field for me, because I had never spent time with clustering, classification or other data mining methods before. So I was something like a blind man listening to descriptions of what the world around him looks like. My view of his theory was a little strange, because I didn't know the relevant theories on which he builds his new approach. But let's say that after a few months I was able to work with it, and I even tried to implement the method in Matlab.

So far I haven't been very successful, but I'm making progress. What is it all about, and why am I now a little bit sceptical?

The idea of the theory is that the bottom-up and top-down approaches meet. Both the data and the models carry some fuzziness, and as the two approach each other the match becomes more and more crisp. We start with a very fuzzy Gaussian model, which becomes crisper as the fuzziness is gradually reduced (he calls this dynamic fuzzy logic). The main idea is that whenever the error starts to increase, we add a new model and watch whether the error decreases again. Once we find the best number of models, we adjust the parameters to approximate the data. So from a very fuzzy Gaussian model with a low number of parameters we arrive at a set of crisp, parametrized models. With this approach Perlovsky hopes to defeat the curse of dimensionality and the computational complexity that are so problematic in clustering methods.
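To make this concrete for myself, here is a minimal 1-D sketch of how I currently read the procedure. It is my own toy reconstruction in Python (not Perlovsky's code, and not my Matlab attempt); the function names, the shared-width annealing schedule and the likelihood-based stopping rule are all my assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_fuzzy_means(x, k, iters=60, sigma_start=5.0, sigma_floor=0.3):
    """EM-like mean updates for k Gaussian models with a shared width
    that starts very fuzzy and is annealed down each iteration."""
    means = rng.choice(x, size=k, replace=False).astype(float)
    sigma = sigma_start
    for _ in range(iters):
        d = x[:, None] - means[None, :]
        resp = np.exp(-0.5 * (d / sigma) ** 2)           # fuzzy memberships
        resp /= resp.sum(axis=1, keepdims=True) + 1e-12
        means = (resp * x[:, None]).sum(axis=0) / (resp.sum(axis=0) + 1e-12)
        sigma = max(sigma * 0.9, sigma_floor)            # fuzzy -> crisp
    # equal-weight mixture log-likelihood at the final (crisp) width
    p = np.exp(-0.5 * ((x[:, None] - means[None, :]) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    return means, np.log(p.mean(axis=1) + 1e-12).mean()

def grow_models(x, max_models=6, tol=1e-3):
    """Add one model at a time; stop when the fit stops improving."""
    best_means, best_ll = fit_fuzzy_means(x, 1)
    for k in range(2, max_models + 1):
        means, ll = fit_fuzzy_means(x, k)
        if ll <= best_ll + tol:                          # no improvement: keep k-1 models
            break
        best_means, best_ll = means, ll
    return best_means

# two well-separated clusters; the sketch should settle on roughly two means
x = np.concatenate([rng.normal(-4, 1, 300), rng.normal(4, 1, 300)])
print(np.sort(grow_models(x)))
```

The shrinking sigma is the "fuzzy to crisp" part; growing the number of models one at a time and stopping when the likelihood plateaus is my reading of the "watch the error" part.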

What was very strange and instructive for me was reading a book about computational methods of feature selection and, afterwards, a dissertation thesis about Gaussian mixture models. What a surprise. This huge theory, introduced for NASA, is just a Gaussian mixture model with fuzziness and a nice biological and philosophical background. So nothing exotic. But why doesn't he mention this? To confuse me as a novice in the area? Knowing it would have made everything much easier, because GMMs are implemented and very well described. I could have worked directly on improvements to the method instead of starting from the beginning and hunting for the relevant information in a big book full of philosophical and metaphysical thoughts.
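For comparison, the same job can be done with an off-the-shelf GMM implementation. A short sketch with scikit-learn's GaussianMixture (a library choice of mine, not anything from Perlovsky's papers; BIC stands in for the "add models until the error stops decreasing" rule):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-4, 1, 300), rng.normal(4, 1, 300)]).reshape(-1, 1)

# Fit standard EM-based GMMs with 1..5 components and keep the one with
# the lowest BIC -- a stand-in for the error-watching loop sketched above.
best = min((GaussianMixture(n_components=k, random_state=0).fit(x) for k in range(1, 6)),
           key=lambda m: m.bic(x))
print(best.n_components, np.sort(best.means_.ravel()))
```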

So that is one more piece of advice to myself before I start on anything else.
Now that I know all this, I hope the work will go much faster. I must admit I'm quite fed up with searching and trying; I want some reward in the form of results.
