NABIL H. FARHAT

Research and Teaching Perspective:

The cortex is that part of the brain engaged in cognition, learning, intricate motor control, and other higher-level functions. In the simplest terms, the cortex can be described as a large population (tens of billions) of interacting neurons. Functionally, a neuron is an exquisitely nonlinear processing element. Therefore, one can view the brain as a very large population of interacting nonlinear information-processing units. As such, it is regarded by specialists as a high-dimensional nonlinear dynamical system that is continually rearranging its state-space representation under extrinsic and intrinsic influences. The term dynamical describes a system that changes in time, and the term high-dimensional refers to the enormous number of neurons involved.

Mathematicians and engineers find that the behavior of a dynamical system is best described in terms of an abstract space called the state space of the system. Every state assumed by the system is represented by a point in its state space. As the system evolves in time under external and internal influences, its state changes and the corresponding point moves through state space, describing a trajectory or orbit. If the dynamical system is dissipative, as is the case for the brain, trajectories can terminate on attractors. An attractor is a region of state space that draws in all trajectories starting from nearby points, i.e., points falling within the "domain" or "basin" of attraction of the attractor.

Mathematicians and nonlinear systems theorists know that a high-dimensional nonlinear dynamical system, such as the brain, can exhibit in its state space all three types of attractors: fixed-point, where trajectories terminate on a point (point attractor); periodic, where trajectories converge to a region in which they describe a periodic (looping) motion; and chaotic (erratic), where trajectories converge to a region of state space within which they describe an irregular and unpredictable motion.
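All three attractor types can be seen in even a one-dimensional system. A minimal sketch (an illustrative textbook example, not one of the laboratory's own network models) uses the logistic map x' = r·x·(1−x): as the parameter r increases, the long-term orbit settles on a fixed point, then a periodic cycle, then a chaotic attractor.

```python
# Illustrative sketch: the logistic map x' = r*x*(1-x) exhibits all three
# attractor types depending on the parameter r (values chosen for illustration).

def iterate(r, x=0.5, burn_in=500, keep=8):
    """Iterate the logistic map, discard the transient, return the settled orbit."""
    for _ in range(burn_in):          # let the trajectory fall onto its attractor
        x = r * x * (1 - x)
    orbit = []
    for _ in range(keep):             # record a few points on the attractor
        x = r * x * (1 - x)
        orbit.append(round(x, 6))
    return orbit

print(iterate(2.8))   # fixed-point attractor: one value repeated
print(iterate(3.2))   # periodic attractor: orbit alternates between two values
print(iterate(3.9))   # chaotic attractor: irregular, non-repeating values
```

The same qualitative distinction — transient motion followed by convergence onto a point, a cycle, or an erratic set — carries over to the high-dimensional state spaces discussed above.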

Most artificial neural network models in use today "compute" solely with static (point) attractors and ignore dynamic (periodic and chaotic) attractors. The goal of our research in Biomorphic Dynamical* Neural Networks being carried out in our Photonic Neuroengineering laboratory is to develop a new generation of intelligent machines that are modeled after the way the cortex interprets and learns sensory data and can surpass in their capabilities what present-day artificial neural net and connectionist models can do. This research in what might be suitably called corticonics (to echo electronics) encompasses:

  1. Understanding how the cortex might use diverse attractors in its operation and, in particular, elucidating, through modeling and simulations, the roles of coherence (periodicity, synchronicity, and phase-locking), bifurcation (sudden change in qualitative behavior caused by extrinsic and intrinsic influences), and chaos (irregularity) in neuronal group activity.
  2. Identifying salient features of cortical organization (i.e., of the morphology and physiology of the cortex) that could be abstracted and incorporated into artificial neural networks in order to enhance their performance, especially for carrying out functions that are beyond the reach of present neural net and connectionist models. Our studies have so far led to intriguing hypotheses on the nature of the neuronal code for higher-level brain processing, i.e., the way the basic functional units in the cortex interpret extrinsic sensory data combined with the intrinsic feedback received from other units, and the way the cortex integrates all this into motor function and behavior. This insight has enabled us, for example, to design biomorphic (biology-like) artificial networks that respond to the presentation of an image in a manner that is independent of displacement, rotation, change in size, or intensity of the image. Such networks are said to produce distortion-invariant feature vectors. Invariant feature extraction is a fundamental operation in the design of automated recognition systems for applications ranging from pattern recognition to robotics.
  3. Developing a learning algorithm for dynamical networks that would enable them to handle (recognize, classify, or generate) complicated space-time patterns, something not easily or naturally done with conventional neural networks. This could lead to significant advances in automated language translation, in human-machine interfaces where machines can be controlled efficiently through spoken or gestured instructions, and in advanced robotics and complex control for intelligent agents.
  4. Developing the analog (opto-electronic) hardware needed to build fast, compact, and energy-efficient dynamical neural networks that can serve as the basis of intelligent systems for new applications.
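The distortion-invariant feature vectors mentioned in item 2 can be illustrated with a simple classical stand-in (not the laboratory's biomorphic networks): the magnitude of the discrete Fourier transform of a signal is unchanged by a circular displacement of that signal, so it serves as a displacement-invariant feature vector.

```python
# Sketch (classical stand-in, assumed for illustration): |DFT| of a sequence
# is invariant to circular shifts, giving a displacement-invariant feature vector.
import cmath

def dft_magnitudes(signal):
    """Return the magnitudes |X_k| of the discrete Fourier transform."""
    n = len(signal)
    return [
        abs(sum(x * cmath.exp(-2j * cmath.pi * k * i / n)
                for i, x in enumerate(signal)))
        for k in range(n)
    ]

pattern = [0.0, 1.0, 2.0, 1.0, 0.0, 0.0, 0.0, 0.0]
shifted = pattern[3:] + pattern[:3]   # the same pattern, displaced

f1 = dft_magnitudes(pattern)
f2 = dft_magnitudes(shifted)
print(all(abs(a - b) < 1e-9 for a, b in zip(f1, f2)))  # True
```

Rotation- and scale-invariant features require further transformations (e.g., resampling to polar or log-polar coordinates before the transform); the point of the sketch is only the basic principle that a suitable transform can factor a distortion out of the feature vector.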

In addition to providing training and support for graduate research fellows, this research program is contributing to the development of several new course offerings in interdisciplinary areas such as neurodynamics, neural networks, and chaotic dynamics and complexity in electrical and biological systems, offerings that are helping us define new aspects of Electrical Engineering.

*) Biomorphic means biology-like; dynamical, because these networks exhibit and utilize in their operation dynamic attractors (cyclic and chaotic attractors) in addition to fixed-point attractors.

Number of Ph.D. Students Graduated in Past 11 yrs: 14

Current Ph.D. Students:

X. Ling, George (Jie) Yuan, Tian Liang, Ning Song

Biographical Sketch

Teaching

  • ESE215 Electrical Circuits and Systems I
  • ESE310 Electric and Magnetic Fields
  • ESE411 Electromagnetic Waves and Applications
  • ESE412 Chaos and Complexity in Electrical and Biological Systems
  • ESE511 Modern Optics and Image Understanding
  • ESE539 Neural Networks and Applications


Nabil H. Farhat
Room 368 Moore School
Tel: 898-5882
e-mail: farhat@ee.upenn.edu