## NABIL H. FARHAT

## Research and Teaching Perspective

Mathematicians and engineers find that the behavior of a dynamical system is best described in terms of an abstract space called the state-space. Mathematicians and nonlinear systems theorists know that a high-dimensional nonlinear dynamical system, such as the brain, can exhibit in its state-space all three types of attractors: point, periodic, and chaotic. Most artificial neural network models in use today "compute" solely with static (point) attractors and ignore dynamic (periodic and chaotic) attractors. The goals of our research include:

- Understanding how the cortex might use diverse attractors in its operation, and in particular elucidating, through modeling and simulations, the roles of coherence (periodicity, synchronicity, and phase-locking), bifurcation (sudden change in qualitative behavior caused by extrinsic and intrinsic influences), and chaos (irregularity) in neuronal group activity.
- Identifying salient features of cortical organization (i.e., of the morphology and physiology of the cortex) that could be abstracted and incorporated in artificial neural networks to enhance their performance, especially for carrying out functions beyond the reach of present neural net and connectionist models. These studies have so far led to intriguing hypotheses on the nature of the neuronal code for higher-level brain processing, i.e., the way the basic functional units in the cortex interpret extrinsic sensory data when combined with the intrinsic feedback received from other units, and the way the cortex integrates all this into motor function and behavior. This insight has enabled us, for example, to design biomorphic* (biology-like) artificial networks that respond to the presentation of an image in a manner that is independent of displacement, rotation, change in size, or intensity of the image. Such networks are said to produce distortion-invariant feature vectors. Invariant feature extraction is a fundamental operation in the design of automated recognition systems for applications ranging from pattern recognition to robotics.
- Developing a learning algorithm for dynamical networks that would enable them to handle (recognize, classify, or generate) complicated space-time patterns, something not easily or naturally done with conventional neural networks. This could lead to significant advances in automated language translation, in human-machine interfaces where machines are controlled efficiently with spoken or gestured instructions, and in advanced robotics and complex control for intelligent agents.
- Developing the analog (opto-electronic) hardware needed to build fast, compact, and energy-efficient dynamical neural networks that can serve as the basis of intelligent systems for new applications.
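The three attractor types named above (point, periodic, and chaotic) can be seen in even a one-dimensional system. A minimal sketch using the logistic map, a standard textbook example chosen here for illustration (not one of the group's network models): as the parameter r increases, the map bifurcates from a fixed-point attractor to a periodic one, and finally to a chaotic one.

```python
import numpy as np

def iterate_logistic(r, x0=0.2, n_transient=1000, n_keep=64):
    """Iterate the logistic map x -> r*x*(1-x), discard the transient,
    and return the distinct values the orbit settles onto (the attractor)."""
    x = x0
    for _ in range(n_transient):
        x = r * x * (1.0 - x)
    orbit = []
    for _ in range(n_keep):
        x = r * x * (1.0 - x)
        orbit.append(x)
    # Round to merge floating-point noise before counting distinct values.
    return sorted(set(np.round(orbit, 6)))

# r = 2.9: static (point) attractor -- orbit collapses to one fixed point
# r = 3.2: periodic attractor -- orbit alternates between two values
# r = 3.9: chaotic attractor -- orbit is irregular, many distinct values
for r in (2.9, 3.2, 3.9):
    print(r, len(iterate_logistic(r)))
```

The sudden change in the number of attractor points as r crosses a critical value is exactly a bifurcation in the sense used above: a qualitative change in behavior caused by a change in a parameter.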
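Distortion-invariant feature vectors of the kind described in the second item can be illustrated with classical moment invariants. A minimal sketch, assuming nothing about the biomorphic networks themselves, computing the first Hu invariant phi1 = eta20 + eta02 of an image with NumPy:

```python
import numpy as np

def hu_phi1(img):
    """First Hu moment invariant of a 2-D intensity array; unchanged by
    translation and rotation of the pattern within the frame."""
    img = np.asarray(img, dtype=float)
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    m00 = img.sum()
    # Centroid-centered moments give translation invariance.
    cy, cx = (ys * img).sum() / m00, (xs * img).sum() / m00
    mu20 = ((xs - cx) ** 2 * img).sum()
    mu02 = ((ys - cy) ** 2 * img).sum()
    # Normalizing by m00**(1 + (p+q)/2) gives scale invariance.
    eta20 = mu20 / m00 ** 2
    eta02 = mu02 / m00 ** 2
    return eta20 + eta02

# An asymmetric test pattern: phi1 is the same after shifting the
# pattern inside the frame or rotating it by 90 degrees.
img = np.zeros((32, 32))
img[5:12, 8:20] = 1.0
shifted = np.roll(img, (9, -3), axis=(0, 1))
rotated = np.rot90(img)
print(np.allclose(hu_phi1(img), hu_phi1(shifted)))  # translation
print(np.allclose(hu_phi1(img), hu_phi1(rotated)))  # rotation
```

This is only the generic idea of an invariant feature; the networks described above achieve such invariance through their dynamics rather than by explicit moment computation.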
In addition to furnishing training and support for graduate research fellows, this research program is also contributing to the development of several new course offerings in interdisciplinary areas.

*) Biomorphic means biology-like; these networks are called dynamical because they exhibit and utilize in their operation dynamic attractors (cyclic and chaotic attractors) in addition to fixed-point attractors.

## Number of Ph.D. Students Graduated in Past 11 yrs: 14

## Current Ph.D. Students

X. Ling, George (Jie) Yuan, Tian Liang, Ning Song

## Biographical Sketch

## Teaching

- ESE215 Electrical Circuits and Systems I
- ESE310 Electric and Magnetic Fields
- ESE411 Electromagnetic Waves and Applications
- ESE412 Chaos and Complexity in Electrical and Biological Systems
- ESE511 Modern Optics and Image Understanding
- ESE539 Neural Networks and Applications
## Links

Nabil H. Farhat
Room 368 Moore School
Tel: 898-5882
e-mail: farhat@ee.upenn.edu