BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Memento EPFL//
BEGIN:VEVENT
SUMMARY:Network-Centric Computing for Online Services
DTSTART:20171213T090000
DTEND:20171213T110000
DTSTAMP:20260428T004637Z
UID:72b67384061a9a068a48e17d3f2fc6828815bb4b339ec822dc052038
CATEGORIES:Conferences - Seminars
DESCRIPTION:Alexandros Daglis\nAbstract\nModern datacenters provide an abu
 ndance of online services to billions of daily users. Each service compris
 es several software layers deployed on thousands of servers\, which commun
 icate over the network to collaboratively construct a response to each inc
 oming user request. In addition to frequent inter-server communication\, a
  large class of services involves minuscule computation per request and mi
 crosecond-scale response latency requirements per involved server. Such ex
 ecution profiles establish networking as a first-order performance determi
 nant and motivate a vertical system design rethink\, taking a network-cent
 ric approach. In this talk\, I will introduce a holistic system redesign t
 argeting the most challenging latency-sensitive online services\, includin
 g (i) a specialized lightweight network stack\; (ii) scalable on-chip inte
 gration of the network interface logic\; and (iii) new network operations 
 with richer\, end-to-end semantics that can be efficiently executed on sma
 rt network endpoints without CPU interaction. I will highlight the role an
 d demonstrate the effect of each of these three key features with systems 
 built throughout my dissertation.\n \nShort bio\nAlexandros (Alex) Daglis
  is a sixth-year PhD student at EPFL\, advised by Prof. Babak Falsafi and 
 Prof. Edouard Bugnion. His research interests lie in rack-scale computing 
 and datacenter architectures. Alex’s work advocates for tighter integrat
 ion and co-design of network and compute resources as a necessary approach
  to tackling the performance overheads associated with inter-node communic
 ation in scale-out architectures. He has been a founding member of Scale-O
 ut NUMA\, an architecture\, programming model\, and communication protocol
  for low-latency\, distributed in-memory processing. Scale-Out NUMA has be
 en patented and licensed by a major IT vendor. As an intern at HP Labs\, A
 lex worked on the design of The Machine’s unique memory subsystem.
LOCATION:BC 420 https://plan.epfl.ch/?room=BC%20420
STATUS:CONFIRMED
END:VEVENT
END:VCALENDAR
