The Development of the Field of Neural Information Processing
Beginning with the theoretical foundations of cybernetics and information theory by Wiener and Shannon, the field of theoretical neuroscience started to develop in the direction of neural information processing. In particular, the processes and mechanisms involved in the spatio-temporal integration of activity in the dendrites were studied in increasingly detailed single-neuron models. On the other hand, much larger networks of more simplified model neurons could be simulated due to the quickly increasing available computer power. These investigations have led to the establishment of the discipline of computational neuroscience, as evidenced by several books, conferences, and journals. The question of the neural code has been debated heatedly since the late 1960s (Perkel and Bullock, 1968). The main issue was the interpretation and use of neural spike responses in terms of single spike timing or spike frequency evaluation.
Much of this research was driven by the perhaps naive question of why the brain uses spikes for communication and maybe also for computation. Many sensory and motor functions have been implemented by networks of spiking neurons, and there are large-scale hardware realizations of such networks.
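As a concrete illustration of the spiking-neuron building block that such networks and hardware systems use, here is a minimal leaky integrate-and-fire simulation. The model and all parameter values are generic textbook choices, not taken from any particular system mentioned in the text:

```python
# Minimal leaky integrate-and-fire (LIF) neuron: the membrane potential
# integrates input current with a leak toward rest; when it crosses
# threshold, a spike is recorded and the potential is reset.
# All parameter values below are illustrative assumptions.

def simulate_lif(input_current, dt=1e-4, tau=0.02, v_rest=-0.07,
                 v_thresh=-0.05, v_reset=-0.07, r_m=1e8):
    """Euler-integrate dV/dt = (v_rest - V + R*I) / tau.
    Returns the list of spike times in seconds."""
    v = v_rest
    spikes = []
    for step, i_in in enumerate(input_current):
        v += (v_rest - v + r_m * i_in) * (dt / tau)
        if v >= v_thresh:
            spikes.append(step * dt)
            v = v_reset
    return spikes

# A constant 0.3 nA drive for 100 ms yields a regular spike train.
spike_times = simulate_lif([0.3e-9] * 1000)
```

With these values the steady-state potential (-0.04 V) lies above threshold, so the neuron fires periodically; the interspike interval is set by the membrane time constant `tau`.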
For hardware realizations of associative memories, spiking activity may be useful because it fits well with the required sparseness of activity patterns (Palm). However, no convincing general theoretical argument for a principled computational advantage of spikes over continuous activity values has been put forward. On the technical side, this is paralleled by a quite common use of ideas from information theory in neurocomputing and learning theory, in particular the use and optimization of the logarithm of a-posteriori probabilities or the Kullback-Leibler information distance for the derivation of learning rules (citations in Palm).
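The link between binary (spike-like) activity and sparseness can be made concrete with a Willshaw-style binary associative memory of the kind studied in Palm's line of work. The sketch below uses illustrative sizes and a simple threshold retrieval rule; it is a generic textbook variant, not a reproduction of any specific published model:

```python
# Willshaw-style binary associative memory: patterns are stored by
# OR-ing their outer products into a binary weight matrix. Sparse
# patterns (few active units per pattern) keep the matrix from
# saturating, which is the sparseness argument referred to above.

import numpy as np

n = 100          # number of units
k = 5            # active units per pattern (sparse: k << n)
rng = np.random.default_rng(0)

def random_sparse(n, k, rng):
    p = np.zeros(n, dtype=np.uint8)
    p[rng.choice(n, size=k, replace=False)] = 1
    return p

patterns = [random_sparse(n, k, rng) for _ in range(10)]

# Auto-associative storage: W[i, j] = 1 iff units i and j were
# ever co-active in some stored pattern.
W = np.zeros((n, n), dtype=np.uint8)
for p in patterns:
    W |= np.outer(p, p)

def recall(cue, W):
    # A unit becomes active if it receives input from ALL active
    # cue units (maximum-threshold retrieval).
    s = W @ cue
    return (s >= cue.sum()).astype(np.uint8)

# A degraded cue (one active unit deleted) still activates every
# unit of the original pattern.
cue = patterns[0].copy()
cue[np.flatnonzero(cue)[0]] = 0
restored = recall(cue, W)
```

Note the capacity intuition: with dense patterns the weight matrix would quickly fill with ones and retrieval would fail, whereas sparse patterns allow many patterns to be superimposed.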
New Challenges

The further we move away from the periphery into central information processing and true human cognitive abilities, the sparser the insight or inspiration we can find in current computational neuroscience becomes. There are some computational ideas concerning mirror neurons and language processing. This situation is reminiscent of the development of the field of artificial intelligence during the last 40 years.
After very broad and general claims, the field of artificial intelligence achieved initial successes in solving various particular problems, but the realization of general intelligence proved much harder. This development, too, points in the direction of more integrated behavioral approaches and perhaps even the use of neural or brain-like structures and processes in the realization of complex cognitive tasks, possibly involving symbolic information processing (neuro-symbolic integration). Of course, the realization of serious cognitive abilities, or of artificial intelligence, with brain-like neural networks is a hard task, since it requires an understanding and design of networks at the system level, and complete cognitive tasks typically involve a substantial part of the whole brain, in particular of the cerebral cortex (Palm et al.).
However, this kind of modeling and understanding is definitely needed, even in medicine, when we want to model, for example, the use and effect of drugs in the treatment of central neurological, psychiatric, or psychological disorders. We will be able to improve medical treatments substantially when we know in more detail the effects of applying a drug, neurotransmitter, or neuromodulator at a particular location in the brain, maybe even at particular neurons or particular types of synapses. On the experimental side, the new field of cognitive neuroscience has emerged.
Neuroscientists who had previously refrained from addressing concepts like consciousness began discussing its localization in the brain, based on the new technique of fMRI. This led to a revival of the brain localization of higher cognitive functions in thousands of experimental studies, and of philosophical debates about consciousness.
If we want to study human cognitive abilities like language understanding, we can at best do it in animals that are evolutionarily close to us. Neurophysiology in humans is possible with non-invasive methods like EEG and fMRI, but fMRI does not provide the spatial and temporal resolution to study in detail how a computation is performed; it only allows us to narrow down where it is performed. Among other things, these experiments do not tend to substantiate localist claims, since it is not at all obvious where to localize consciousness, working memory, language understanding, and most components of cognitive architectures in the brain.
This does not contradict the possibility of modularity in brain organization (Fodor), but it still remains unclear what these modules might be (beyond sensory modalities, for example) and how they relate to the particular modules often postulated in mainstream cognitive psychology. Based on these developments leading to the present state of affairs, now should be the time to further the theoretical understanding of complex cognitive abilities, including computationally demanding tasks as in artificial intelligence, and psychologically and socially important faculties like introspection, empathy, consciousness, and free will.
The development of such theories should be guided or constrained by our accumulated knowledge from neuroscience, psychology, and computer science. In order to foster the advancement of computational neuroscience in this direction, it may be useful, but it is certainly not sufficient, to organize the collection and distribution of more complete and better experimental neuroscientific data in order to model these data, as in the HBP1 or the BAM project (Alivisatos et al.).
In addition, it is necessary to develop synthetic ideas of how certain cognitive abilities involving image or language understanding, planning, and non-factual reasoning could be realized adequately in brain-like neural networks. Kahneman has distinguished two kinds of processes that are involved in decision making: slow and fast. Psychological models of these processes have been proposed.
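One widely used formal account of such decision processes, chosen here purely for illustration and not claimed to be Kahneman's own model, is the drift-diffusion model: noisy evidence accumulates toward one of two bounds, and the bound height trades speed against accuracy, loosely mirroring the fast/slow distinction. All parameter values are illustrative assumptions:

```python
# Drift-diffusion sketch of two-alternative decision making: evidence
# accumulates with a constant drift plus Gaussian noise until it hits
# an upper or lower bound. A low bound gives fast but error-prone
# choices; a high bound gives slow but more reliable ones.

import random

def decide(drift=0.1, bound=1.0, noise=0.5, dt=0.01, rng=None):
    """Return (choice, decision_time). choice is +1 if the upper
    bound (the 'correct' alternative, given positive drift) is hit."""
    rng = rng or random.Random()
    x, t = 0.0, 0.0
    while abs(x) < bound:
        x += drift * dt + noise * rng.gauss(0.0, 1.0) * dt ** 0.5
        t += dt
    return (1 if x > 0 else -1), t

rng = random.Random(42)
fast = [decide(bound=0.5, rng=rng) for _ in range(500)]
slow = [decide(bound=2.0, rng=rng) for _ in range(500)]

fast_acc = sum(c == 1 for c, _ in fast) / len(fast)
slow_acc = sum(c == 1 for c, _ in slow) / len(slow)
fast_rt = sum(t for _, t in fast) / len(fast)
slow_rt = sum(t for _, t in slow) / len(slow)
# The high-bound condition is slower but more accurate.
```

How such an accumulation process might be realized in neural circuitry is exactly the kind of open question the text raises.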
If we take cognition seriously, and not just as a fancy label, we will open a new emerging field of interdisciplinary research between computer science, neuroscience, and cognitive psychology. Criteria for a good neurocomputational cognitive model can be combined from criteria already demanded by neuroscientists, computer scientists, and psychologists; some that immediately come to mind are listed below.
Certainly any good cognitive model should address several of these criteria. The basic demand is of course that the model really solves a cognitive task. For this we need a behavioral description of the task, an outline of the solution, and a computer program or simulation of it that can be tested on a variety of problem instances. This program should be realized in, or demonstrably convertible into, a neural network architecture. Based on this we can produce a list of criteria:

- Scalability
- Efficiency in real time with realistic size
- Neural plausibility
- Introspective plausibility
- Reusability (the model should be usable for several related problems)
- Evolutionary plausibility (how could it have evolved?)
- Learnability (how could it be learned?)
Perhaps, in this new kind of large-scale or system-level computational modeling, some of the recent developments in the application-oriented branch of neural information processing need to be reunited with the neuroscience-oriented branch. After all, during evolution the development of intriguing cognitive abilities in the human brain has been pushed forward by the need to solve various complex tasks in the real world by reorganizing the same basic neural machinery. So, in order to understand the concerted cooperation of several cortical areas and subcortical structures in the solution of complex cognitive tasks, it may in fact be useful to consider the more sophisticated network architectures and learning schemes that have recently been put forward to solve complex practical problems in various fields of application.
Conflict of Interest Statement

The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Front Comput Neurosci. Published online Jan. Received Feb 24; accepted Jul 1. The use, distribution and reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
Abstract

Research in neural information processing has been successful in the past, providing useful approaches both to practical problems in computer science and to computational models in neuroscience.

Keywords: computational neuroscience, artificial intelligence, large scale computational modeling, computational cognition, cognitive neurocomputing.
Footnotes

1 humanbrainproject.