15/06/2017 11:30, Meyer 1003

Memcomputing: a brain-inspired topological computing paradigm

Massimiliano Di Ventra

University of California San Diego

Which features make the brain such a powerful and energy-efficient computing machine? Can we reproduce them in the solid state, and if so, what type of computing paradigm would we obtain? I will show that a machine that uses memory to both process and store information, like our brain, and is endowed with intrinsic parallelism and information overhead (namely, it takes advantage, via its collective state, of the network topology related to the problem) has computational power far beyond that of our standard digital computers. We have named this novel computing paradigm "memcomputing". As examples, I will show the polynomial-time solution of prime factorization, the NP-hard version of the subset-sum problem, and Max-SAT using polynomial resources and self-organizing logic gates, namely gates that self-organize to satisfy their logical proposition. I will also demonstrate that these machines are described by a topological field theory and that they compute via an instantonic phase, implying that they are robust against noise and disorder. The digital memcomputing machines we propose can be efficiently simulated, are scalable, can be easily realized with available nanotechnology components, and may help reveal aspects of computation in the brain.
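
For readers unfamiliar with the subset-sum problem mentioned in the abstract, the sketch below is a minimal brute-force check of its decision version in Python. It is illustrative context only and is not the memcomputing approach described in the talk; the function name has_subset_with_sum and the example numbers are illustrative choices, not from the talk.

from itertools import combinations

def has_subset_with_sum(numbers, target):
    # Exhaustively test every subset; the running time is exponential in
    # len(numbers), which is exactly why polynomial-resource claims are notable.
    for r in range(len(numbers) + 1):
        for subset in combinations(numbers, r):
            if sum(subset) == target:
                return True
    return False

# Example: a subset of {3, 34, 4, 12, 5, 2} sums to 9 (4 + 5), but none sums to 31.
print(has_subset_with_sum([3, 34, 4, 12, 5, 2], 9))   # True
print(has_subset_with_sum([3, 34, 4, 12, 5, 2], 31))  # False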

Bio: Massimiliano Di Ventra obtained his undergraduate degree in Physics summa cum laude from the University of Trieste (Italy) in 1991 and did his PhD studies at the Ecole Polytechnique Federale de Lausanne (Switzerland) in 1993-1997. He was a Visiting Scientist at the IBM T.J. Watson Research Center and a Research Assistant Professor at Vanderbilt University before joining the Physics Department of Virginia Tech in 2000 as an Assistant Professor. He was promoted to Associate Professor in 2003 and moved to the Physics Department of the University of California, San Diego, in 2004, where he was promoted to Full Professor in 2006. Di Ventra's research interests are in the theory of electronic and transport properties of nanoscale systems, non-equilibrium statistical mechanics, DNA sequencing/polymer dynamics in nanopores, and memory effects in nanostructures for applications in unconventional computing and biophysics. He has been invited to deliver more than 250 talks worldwide on these topics (including 10 plenary/keynote presentations, 9 talks at the March Meeting of the American Physical Society, 5 at the Materials Research Society, 2 at the American Chemical Society, and 2 at the SPIE). He has been a Visiting Professor at the Technical University of Dresden (2015), University Paris-Sud (2015), the Technical University of Denmark (2014), Ben-Gurion University (2013), the Scuola Normale Superiore di Pisa (2012, 2011), and SISSA, Trieste (2012). He serves on the editorial boards of several scientific journals and has won numerous awards and honors, including the NSF Early CAREER Award, the Ralph E. Powe Junior Faculty Enhancement Award, and fellowships of the Institute of Physics and the American Physical Society. Di Ventra has published more than 200 papers in refereed journals, co-edited the textbook Introduction to Nanoscale Science and Technology (Springer, 2004) for undergraduate students, and is the sole author of the graduate-level textbook Electrical Transport in Nanoscale Systems (Cambridge University Press, 2008).

20/06/2017 11:30, Meyer 1007

Finding the Next Curves: Towards a Scalable Future for Specialized Architectures

Adi Fuchs

EE, Princeton

The end of CMOS transistor scaling marks a new era for modern computer systems. As the gains from traditional general-purpose processors diminish, researchers are exploring new avenues in domain-specific computing. The premise of domain-specific computing is that by co-optimizing software and specialized hardware accelerators, it is possible to achieve higher performance per unit of power. In contrast to technology scaling, specialization gains are not sustainably scalable, as there is a limit to the number of ways to map a computation problem to hardware under a fixed budget. Since hardware accelerators are also implemented using CMOS transistors, the gains from specialization will likewise diminish in the longer term, giving rise to a "specialization wall". We explore the integration of emerging memory technologies with specialized hardware accelerators to eliminate redundant computations and essentially trade non-scaling CMOS transistors for scaling memory technology. We evaluated our new architecture on different data-center workloads. Our results show that, compared to highly optimized accelerators, we achieve an average speedup of 3.5x, save on average 43% in energy, and save on average 57% in energy-delay product.
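
As a loose software analogy to the idea of trading computation for memory (not the hardware architecture evaluated in the talk), the sketch below caches the results of a costly function so that repeated inputs skip the compute path entirely; the function expensive_kernel is a hypothetical stand-in for an accelerated workload.

from functools import lru_cache

@lru_cache(maxsize=None)
def expensive_kernel(x):
    # Hypothetical stand-in for a costly accelerator computation.
    total = 0
    for i in range(1_000_000):
        total += (x * i) % 97
    return total

# The first call computes the result; identical later calls are served from
# memory, trading recomputation (transistors) for stored results (memory).
print(expensive_kernel(42))
print(expensive_kernel(42))  # cache hit, no recomputation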

Bio: Adi Fuchs is a PhD candidate in the EE Department at Princeton University working on novel computer architectures. His research explores the integration of new scaling technologies with existing computer systems and domain-specific accelerators. He earned his BSc and MSc degrees, cum laude and summa cum laude, respectively, both from the EE Department at the Technion - Israel Institute of Technology.

28/06/2017 11:30, Meyer 861

Securing Internet Routing from the Ground Up

Michael Schapira

CS, Hebrew University of Jerusalem

The Internet's communication infrastructure (TCP/IP, DNS, BGP, etc.) is alarmingly insecure, as evidenced by many high-profile incidents. I will illustrate the challenges en route to securing the Internet, and how these can be overcome, by focusing on what is arguably the Internet's biggest security hole: the vulnerability of Internet routing to traffic-hijacking attacks.

Bio: Michael Schapira is an associate professor at the School of Computer Science and Engineering, the Hebrew University of Jerusalem. He is also the scientific co-leader of the Fraunhofer Cybersecurity Center at the Hebrew University. His research interests focus on the protocols and mechanisms that make the Internet tick (e.g., for routing and traffic management). He is interested in the design and analysis of practical (inter)network architectures and protocols with provable guarantees (failure resilience, optimality, security, incentive compatibility, and beyond). He is also broadly interested in the interface of computer science, economics, and game theory (e.g., dynamic interactions in economic and computational environments, incentive-compatible computation, computational auctions, and more).

05/07/2017 11:30, Taub 301

TBA

Trevor Brown

Computer Science, Technion

TBA

Bio: TBA