BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Memento EPFL//
BEGIN:VEVENT
SUMMARY:Constructive Initialization for Sigmoidal Neural Networks in Engin
 eering Mechanics Applications
DTSTART:20180515T111500
DTEND:20180515T121500
DTSTAMP:20260407T164048Z
UID:b9c55c2700cb2964f4a63143ea1f3b82d30a4521cc5293edf52cbf2f
CATEGORIES:Conferences - Seminars
DESCRIPTION:Prof. Jin-Song Pei\n\nSchool of Civil Engineering and Environm
 ental Science\, University of Oklahoma\, Norman\, Oklahoma\, USA\nFollowing 
 recent advances in sensing and testing technologies\, there is a pressing 
 issue of handling experimental data\, the quantity and quality of which ha
 ve been significantly expanded. Facing mountains of data (with numerous un
 knowns and uncertainties)\, we need to extract the most useful and accurat
 e underlying information for rapid assessment and critical decision making
 . Artificial neural networks (ANNs) and other nonparametric models provide
  alternatives when model-based methods or fixed basis functions are not ef
 fective. In the engineering mechanics community\, developing comprehensive 
 and high-fidelity models for nonlinear dynamical systems has been one of 
 the key research issues\, impacting a very broad range of applications. If 
 designed properly\, ANNs are well suited to processing and analyzing 
 datasets collected from complex nonlinear dynamical systems in a 
 computationally efficient manner. The use of neural networks\, however\, 
 has been a somewh
 at controversial subject. When the inner workings of ANNs are not transpar
 ent to most users\, neural networks remain powerful but mysterious “blac
 k boxes”.\n\nTo demystify this reputed black box\, the research objective 
 is to develop an insightful process for designing sigmoidal neural network 
 architectures\, which includes choosing the initial values of weights and 
 biases and the number of hidden nodes\, with the final goal of good 
 generalization capability in the trained neural networks. Our research h
 as developed direct (non-iterative) methods for: (a) utilizing a few basic
  “building blocks” (called “neural network prototypes”) individual
 ly or combinatorially and following a handful of proposed rules/guidelines
  to generate neural network architectures\, and (b) selecting initial weig
 hts and biases. Thereby we have approximated (1) the four basic arithmetic
  operations\, (2) polynomials\, (3) the exponential function\, (4) Gaussia
 n and Mexican hat functions\, (5) certain hardening and softening types of
  nonlinearities\, and others. Our methodology and techniques have been val
 idated using both experimental and simulated data.
LOCATION:GC G1 515 https://plan.epfl.ch/?room=GCG1515
STATUS:CONFIRMED
END:VEVENT
END:VCALENDAR
