Neuromorphic Systems Research at IBM

Event details

Date 10.06.2015
Hour 14:00-17:00
Speaker Chung Lam, Kamil Rocki, Geoffrey W. Burr from IBM Research
Location
Category Conferences - Seminars
MINI-SYMPOSIUM JOINTLY HOSTED BY THE INSTITUTE OF ELECTRICAL ENGINEERING AND THE INSTITUTE OF BIOENGINEERING

Three top researchers from IBM will talk about the most recent advancements in neuromorphic systems. Each presentation will be followed by a short discussion session. The schedule is given below; please see the attached PDF for the complete schedule and speaker biographies.

Schedule of the special event:

14:00-14:40  Neuromorphic Engineering by Chung Lam, IBM Research, Yorktown Heights, NY, USA
Abstract: Microprocessors designed with the von Neumann architecture are hitting power and performance limits as silicon CMOS continues to scale the critical dimensions of circuit components toward the single-digit-nanometer limit. Multi-core processors, which parallelize processing without increasing the operating frequency of the cores, were introduced in the early 2000s to extend power and performance scaling and keep Moore's Law viable. Amdahl's Law, however, argues that the speedup achievable with parallel processing is governed by the fraction of the algorithm that must remain serial. Evolution has provided us with the most efficient parallel processing architecture: the biological brain. In this talk, we shall examine what we can do with the little that we know about how the brain works to design machines that mimic it.
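The constraint Amdahl's Law places on multi-core scaling, as referenced in the abstract above, can be made concrete with a short sketch (the function name here is illustrative, not from the talk): if a fraction p of the work can be parallelized across n cores, the overall speedup is 1 / ((1 - p) + p/n), which saturates at 1 / (1 - p) no matter how many cores are added.

```python
def amdahl_speedup(parallel_fraction, n_cores):
    """Overall speedup when a fraction of the work runs in parallel
    on n_cores, per Amdahl's Law: 1 / ((1 - p) + p / n)."""
    serial_fraction = 1.0 - parallel_fraction
    return 1.0 / (serial_fraction + parallel_fraction / n_cores)

# Even with 95% of the work parallelized, the speedup can never
# exceed 1 / 0.05 = 20x, regardless of core count:
for n in (2, 8, 64, 1024):
    print(f"{n:5d} cores -> {amdahl_speedup(0.95, n):5.2f}x")
```

Doubling the core count therefore yields rapidly diminishing returns once the serial fraction dominates, which is the motivation the abstract gives for looking beyond conventional parallelism toward brain-inspired architectures.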
14:40-14:50  Questions/Discussion
14:50-15:00  Coffee break
15:00-15:40  HTM-based Saccadic Vision System by Kamil Rocki, IBM Research–Almaden, San Jose, CA, USA
Abstract: In this project, Hierarchical Temporal Memory (HTM) is used for rapid object categorization and tracking. Various studies have demonstrated the remarkable speed and efficiency with which humans process natural scenes. Although the fovea of the eye covers only a very limited region, we view the world efficiently by redirecting the fovea between points of interest using eye movements called saccades. Using HTM, we are able to learn both simple spatial patterns representing such a small foveal region and predictable, invariant temporal patterns comprising whole sequences of saccades. This approach has two advantages: first, storing images as temporal sequences of small spatial building blocks is far more resource-efficient than storing entire complex images; second, there is no need to distinguish between storing and recognizing still and moving images.
15:40-15:50  Questions/Discussion
15:50-16:00  Coffee break
16:00-16:40  Crossbar Arrays for Storage Class Memory and non-Von Neumann Computing by Geoffrey W. Burr, IBM Research–Almaden, San Jose, CA, USA
For more than 50 years, the capabilities of von Neumann-style information processing systems, in which a "memory" delivers operations and operands to a dedicated "central processing unit", have improved dramatically. While it may seem that this remarkable history was driven by ever-increasing density (Moore's Law), the actual driver was Dennard's Law: a device-scaling methodology that allowed each generation of smaller transistors to perform better, in every way, than the previous generation. Unfortunately, Dennard's Law ended some years ago, and as a result, Moore's Law is now slowing considerably. In the search for ways to continue improving computing systems, the attention of the IT industry has turned to non-von Neumann algorithms and, in particular, to computing architectures motivated by the human brain.
At the same time, memory technology has been going through a period of rapid change, as new nonvolatile memories (NVM), such as Phase Change Memory (PCM), Resistive RAM (RRAM), and Spin-Transfer Torque Magnetic RAM (STT-MRAM), emerge to complement and augment the traditional triad of SRAM, DRAM, and Flash. Such memories could enable Storage-Class Memory (SCM), an emerging memory category that seeks to combine the high performance and robustness of solid-state memory with the long-term retention and low cost of conventional hard-disk magnetic storage.
Such large arrays of NVM can also be used in non-von Neumann neuromorphic computational schemes, with device conductance serving as the plastic (modifiable) "weight" of each "native" synaptic device. This is an attractive application for these devices because, while many synaptic weights are required, the requirements on yield and variability can be more relaxed. However, work in this field has remained highly qualitative in nature and slow to scale in size.
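The idea of conductances acting as synaptic weights can be sketched numerically (an idealized model, not the actual device work described in the talk): in a crossbar, applying voltages to the rows produces, by Ohm's and Kirchhoff's laws, column currents equal to the voltage vector multiplied by the conductance matrix, i.e. a full vector-matrix product in a single analog step.

```python
import numpy as np

# Idealized crossbar model: G[i, j] is the conductance (synaptic weight)
# at the cross-point of row i and column j. All values are illustrative.
rng = np.random.default_rng(0)
G = rng.uniform(0.0, 1.0, size=(4, 3))   # 4 input rows x 3 output columns
V = np.array([0.2, 0.0, 0.5, 0.1])       # voltages applied to the rows

# Each column current is the sum over rows of V[i] * G[i, j]
# (Ohm's law per device, Kirchhoff's current law per column):
I = V @ G
print(I)                                  # one weighted sum per column
```

The appeal, as the abstract notes, is that the multiply-accumulate happens in the physics of the array itself rather than in a CPU, and modest device variability simply perturbs the weights rather than breaking the computation.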
I will discuss our recent work towards large crossbar arrays of NVM for both of these applications. After briefly reviewing earlier work on PCM, SCM, and access devices based on copper-containing Mixed-Ionic-Electronic-Conduction (MIEC), I will discuss our recent work on quantitatively assessing the engineering tradeoffs inherent in NVM-based neuromorphic systems.
16:40-16:50  Questions/Discussion