Shaping the Future Society with ICT Convergence

Date and Time
Monday, May 20, 2013 - 12:00pm to 1:00pm
Location
E-Quad, Room B327
Type
Talk
Speaker
Dr. James Won-Ki Hong, CTO of KT (Korea Telecom)
Host
Princeton EDGE Lab and Princeton KSA
Korea, also known as the nation of Psy, is one of the most dynamic countries in the world. Thanks to its advanced information and communications infrastructure and unique culture, people in Korea are said to be living in the future of many other countries. It is this customer environment that facilitated the emergence of instant messaging and social platforms such as KakaoTalk and Line. It may be surprising, then, that the entire country was devastated by the Korean War more than 60 years ago. Many factors have driven Korea's development over the past 60 years: not only its fervor for education and fast-paced culture, but also the government's strong will to drive ICT (Information and Communications Technology) development and decisive investments by private-sector businesses such as KT (formerly Korea Telecom).

In fact, KT led the deployment of the world's most advanced wireline and wireless Internet infrastructure over the last few decades. Now, KT is continuing to shape the society we live in with further advances in ICT. To bring the future into reality, KT is taking on numerous challenging and interesting problems in ICT convergence areas. In this talk, we will outline Korea's ICT status, describe KT's efforts in infrastructure deployment, and offer a peek into future lifestyles based on emerging technologies. We will also discuss opportunities for talented students and researchers to collaborate with KT.

Designing Energy-Efficient Microprocessors in the Era of Unpredictable Transistors

Date and Time
Tuesday, May 14, 2013 - 12:00pm to 1:30pm
Location
Computer Science 402
Type
Talk
Host
Margaret Martonosi
Energy efficiency is now a principal design constraint across all computing markets, from supercomputers to smartphones. Performance growth in these markets can be sustained only through significant improvements in the energy efficiency of computation. A very effective approach to improving microprocessor energy efficiency is to lower the supply voltage very close to the transistor's threshold voltage, into the so-called near-threshold region. Near-threshold operation reduces energy consumption by an order of magnitude but comes at the cost of reduced frequency, degraded reliability, and increased parameter variability.
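
As a rough first-order sketch of why near-threshold operation saves energy but costs frequency (standard device-scaling relations, not figures from the talk): dynamic switching energy falls quadratically with supply voltage, while the maximum clock frequency collapses as the supply approaches the threshold,

E_{\text{dyn}} \;\propto\; C_{\text{eff}}\, V_{dd}^{2},
\qquad
f_{\max} \;\propto\; \frac{(V_{dd} - V_{th})^{\alpha}}{V_{dd}}, \qquad \alpha \approx 1.3\text{--}2,

so lowering V_dd from a nominal value toward V_th trims per-operation energy severalfold (more once leakage is co-optimized) while sharply reducing f_max, which is why variability and reliability mitigation become central.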

In this talk, I will describe hardware and software co-design techniques that address and alleviate the challenges of low-voltage computing, bringing it closer to commercial feasibility. These include techniques for reducing the effects of process and voltage variability, and a new approach for runtime reduction of voltage margins, prototyped in a state-of-the-art server processor.

Radu Teodorescu is an Assistant Professor in the Department of Computer Science and Engineering at The Ohio State University, where he leads the Computer Architecture Research Lab. He received his PhD from the University of Illinois at Urbana-Champaign in 2008. His research interests include computer architecture, with a focus on energy-efficient microprocessor design and the impact of technology scaling on reliability and process variability. He received a CAREER award from the National Science Foundation in 2012, the W. J. Poppelbaum award from the University of Illinois in 2008, and an Intel Fellowship in 2007.

Implications of Non-Volatile Memory on Software Architectures

Date and Time
Tuesday, May 7, 2013 - 12:00pm to 1:30pm
Location
Computer Science 402
Type
Talk
Speaker
Nisha Talagala, from Fusion-io
Host
Vivek Pai
Flash-based non-volatile memory is revolutionizing data center architectures, improving application performance by bridging the gap between DRAM and disk. Future non-volatile memories promise performance even closer to DRAM. While flash adoption in industry started as a disk replacement, the past several years have seen data center architectures change to take advantage of flash as a new memory tier in both servers and storage.

This talk covers the implications of non-volatile memory on software. We describe the stresses that non-volatile memory places on existing application and OS designs, and illustrate optimizations to exploit flash as a new memory tier. Until the introduction of flash, there had been no compelling reason to change the existing operating system storage stack. We will describe the technologies contained in the upcoming Fusion-io Software Developer Kit (ioMemory SDK) that allow applications to leverage the native capabilities of non-volatile memory as both an I/O device and a memory device. The technologies described will include new I/O-based APIs and libraries that leverage the ioMemory Virtual Storage Layer, as well as features for extending DRAM into flash for cost and power reduction. Finally, we describe Auto-Commit-Memory, a new persistent memory type that will allow applications to combine the benefits of persistence with programming semantics and performance levels normally associated with DRAM.
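
As a rough illustration of the "flash as a memory tier" idea only: the sketch below uses the standard POSIX mmap interface and a hypothetical file path, not the ioMemory SDK's actual APIs, to show how a flash-backed region can be used with memory-style reads and writes plus an explicit persistence point.

import mmap, os

PATH = "/mnt/flash/tier.bin"      # hypothetical file on a flash device
SIZE = 1 << 20                    # 1 MiB region

fd = os.open(PATH, os.O_CREAT | os.O_RDWR)
os.ftruncate(fd, SIZE)
buf = mmap.mmap(fd, SIZE)         # flash-backed region usable like DRAM

buf[0:5] = b"hello"               # ordinary memory-style writes
buf.flush()                       # explicit persistence point (msync under the hood)
print(buf[0:5])

buf.close()
os.close(fd)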

Nisha Talagala is Lead Architect at Fusion-io, where she works on innovation in non-volatile memory technologies and applications. Nisha has more than 10 years of expertise in software development, distributed systems, storage, I/O solutions, and non-volatile memory. She worked as the technology lead for server flash at Intel, where she led server-platform non-volatile memory technology development and partnerships. Prior to Intel, Nisha was the CTO of Gear6, where she developed clustered computing caches for high-performance I/O environments. Nisha also served at Sun Microsystems, where she developed storage and I/O solutions and worked on file systems. Nisha earned her PhD at UC Berkeley, where she did research on clusters and distributed storage. Nisha holds more than 30 patents in distributed systems, networking, storage, performance, and non-volatile memory.

Profiling Latency in Deployed Distributed Systems

Date and Time
Wednesday, May 1, 2013 - 3:30pm to 4:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
Talk
Speaker
Gideon Mann, from Google
Understanding the sources of latency within a deployed distributed system is complicated. Asynchronous control flow, variable workloads, pushes of new backend servers, and unreliable hardware can all contribute significantly to a job's performance. In this talk, I'll present the work of the Weatherman effort to build a profiling tool for deployed distributed systems. The method uses distributed traces to estimate the code's control flow and to predict and explain observed performance. I'll then illustrate how this method has been applied to understand and tune large distributed systems at Google, and how it has been used in a differential-profiling fashion to understand the sources of latency changes.
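
As a hedged sketch of trace-driven latency attribution in general (an illustrative calculation of per-component exclusive time, not Weatherman's actual model; the span names and numbers are made up):

from collections import defaultdict

# Each span: (component, parent, start_ms, end_ms), as might be extracted from a trace.
spans = [
    ("frontend", None,       0, 120),
    ("auth",     "frontend", 5, 25),
    ("backend",  "frontend", 30, 110),
    ("storage",  "backend",  40, 100),
]

duration = {name: end - start for name, _, start, end in spans}
child_time = defaultdict(int)
for name, parent, start, end in spans:
    if parent is not None:
        child_time[parent] += end - start

# Exclusive time: latency spent in a component itself, excluding time spent in its callees.
for name in duration:
    print(name, duration[name] - child_time[name], "ms exclusive")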

To provide another view of latency, I'll briefly discuss our recent work on distributed convex optimization, with an emphasis on the interface between the algorithm and the computing substrate performing the computation. In particular, I'll show that data center architecture, especially network architecture, should have a significant impact on machine learning algorithm design.

Gideon is a Staff Research Scientist at Google NY. He attended Brown University as an undergraduate, where he hung out in the AI lab and drank too much Mountain Dew. He then attended graduate school at Johns Hopkins University, worked in CLSP, and graduated in 2006 with a Ph.D. He still misses Charm City. He then did a post-doc at UMass Amherst with Andrew McCallum, working on weakly-supervised learning. In 2007, he joined Google.

At Google, his team works on applied machine learning. The Weatherman effort applies statistical methods to data center management. The team is also responsible for the Prediction API (https://developers.google.com/prediction/). Publicly released in 2010, Prediction was an early machine-learning-as-a-service offering and remains an ongoing research project.

A Tensor Spectral Approach to Learning Mixed Membership Community Models

Date and Time
Wednesday, April 24, 2013 - 12:30pm to 1:30pm
Location
Computer Science 402
Type
Talk
Modeling community formation and detecting hidden communities in networks is a well-studied problem. However, theoretical analysis of community detection has mostly been limited to models with non-overlapping communities, such as the stochastic block model. In this work, we remove this restriction and consider a family of probabilistic network models with overlapping communities, termed the mixed membership Dirichlet model, first introduced by Airoldi et al. (2008). This model allows nodes to have fractional memberships in multiple communities and assumes that the community memberships are drawn from a Dirichlet distribution. We propose a unified approach to learning these models via a tensor spectral decomposition method. Our estimator is based on a low-order moment tensor of the observed network, consisting of 3-star counts. Our learning method is fast and relies on simple linear algebra operations, e.g., singular value decomposition and tensor power iterations. We provide guaranteed recovery of community memberships and model parameters, and present a careful finite sample analysis of our learning method. Additionally, our results match the best known scaling requirements in the special case of the stochastic block model. This is joint work with Rong Ge, Daniel Hsu, and Sham Kakade and will appear at COLT 2013.
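
For intuition, the following is a minimal sketch of the tensor power iteration that such spectral estimators rely on (illustrative only; the actual estimator is built from whitened 3-star moment tensors of the observed network, which are not reproduced here):

import numpy as np

def tensor_power_iteration(T, n_iter=100, seed=0):
    # T is a symmetric k x k x k tensor; returns one (eigenvalue, eigenvector) pair.
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(T.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(n_iter):
        v_new = np.einsum('ijk,j,k->i', T, v, v)   # contract T with v along two modes
        lam = np.linalg.norm(v_new)
        v = v_new / lam
    return lam, v

# Example: a rank-1 tensor w * (a x a x a) should return roughly (w, a).
a = np.array([0.6, 0.8, 0.0])
T = 2.5 * np.einsum('i,j,k->ijk', a, a, a)
print(tensor_power_iteration(T))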

Anima Anandkumar has been a faculty member in the EECS Dept. at U.C. Irvine since Aug. 2010. Her current research interests are in the area of high-dimensional statistics and machine learning, with a focus on learning probabilistic graphical models and latent variable models. She was recently a visiting faculty member at Microsoft Research New England (April-Dec. 2012). She was a post-doctoral researcher in the Stochastic Systems Group at MIT (2009-2010). She received her B.Tech in Electrical Engineering from IIT Madras (2004) and her PhD from Cornell University (2009). She is the recipient of the Microsoft Faculty Fellowship (2013), the ARO Young Investigator Award (2013), the NSF CAREER Award (2013), and paper awards from the Sigmetrics and Signal Processing societies.

Automated Formal Analysis of Internet Routing Systems

Date and Time
Tuesday, April 23, 2013 - 12:00pm to 1:30pm
Location
Computer Science 402
Type
Talk
Host
Jennifer Rexford
The past twenty years have witnessed significant advances in formal modeling, system verification, and testing of network protocols. However, a long-standing challenge in these approaches is the decoupling of the formal reasoning process from the actual distributed implementation. This talk presents my thesis work on bridging formal reasoning and actual implementation in the context of today's Internet routing. I will present the Formally Safe Routing (FSR) toolkit, which combines declarative networking, routing algebra, and SMT solver techniques in order to synthesize faithful distributed routing implementations from verified network models. Next, I will describe our work on scaling up formal analysis of Internet-scale configurations. Our core technique uses a configuration rewriting calculus for transforming large network configurations into smaller instances while preserving routing behaviors. Finally, I conclude with a discussion of my ongoing and future work on synthesizing provably correct network configurations for emerging Software Defined Networking (SDN) platforms.
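
As a toy illustration of SMT-based reasoning about router configurations (my own minimal example using the Z3 Python bindings, not the FSR toolkit's encoding; the filter and the safety property are hypothetical):

from z3 import Solver, BitVec, ULE, And, Not, unsat

prefix = BitVec('prefix', 32)        # IPv4 prefix bits
plen = BitVec('plen', 8)             # prefix length

# Hypothetical route filter: accept prefixes no longer than /24,
# and reject anything inside 10.0.0.0/8.
in_ten_slash_eight = (prefix & 0xFF000000) == 0x0A000000
accepts = And(ULE(plen, 24), Not(in_ten_slash_eight))

# Safety property: the filter never accepts a 10.0.0.0/8 announcement.
s = Solver()
s.add(accepts, in_ten_slash_eight)   # search for a violating announcement
if s.check() == unsat:
    print("filter provably rejects all 10.0.0.0/8 announcements")
else:
    print("counterexample:", s.model())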

Toward Programmable High-Performance Multicores

Date and Time
Tuesday, April 30, 2013 - 12:00pm to 1:30pm
Location
Computer Science 302
Type
Talk
Host
Margaret Martonosi
One of the biggest challenges facing us today is how to design parallel architectures that attain high performance while efficiently supporting a programmable environment. In this talk, I describe novel organizations that will make the next generation of multicores more programmable and higher performance. Specifically, I show how to automatically reuse the upcoming transactional memory hardware for optimized code generation. Next, I describe a prototype of Record&Replay hardware that brings program monitoring for debugging and security to the next level of capability. I also describe a new design of hardware fences that is overhead-free and requires no software support. Finally, if time permits, I will outline architectural support to detect sequential consistency violations transparently.

Josep Torrellas is a Professor of Computer Science and Electrical and Computer Engineering at the University of Illinois at Urbana-Champaign (UIUC). He is a Fellow of IEEE and ACM. He is the Director of the Center for Programmable Extreme-Scale Computing, a center funded by DARPA, DOE, and NSF that focuses on architectures for extreme energy and power efficiency. He also directs the Intel-Illinois Parallelism Center (I2PC), a center created by Intel to advance parallel computing in clients. He has made contributions to parallel computer architecture in the areas of shared-memory multiprocessor organizations, cache hierarchies and coherence protocols, thread-level speculation, and hardware and software reliability. He received a Ph.D. from Stanford University.

NetFPGA: The Flexible Open-Source Networking Platform

Date and Time
Wednesday, April 10, 2013 - 4:30pm to 5:30pm
Location
Computer Science 402
Type
Talk
Host
Jennifer Rexford
The NetFPGA is an open platform enabling researchers and instructors to build high-speed, hardware-accelerated networking systems. The NetFPGA is the de facto experimental platform for line-rate implementations of network research, and it continues to evolve with a new-generation platform capable of 4x10 Gbps.

The target audience is not restricted to hardware researchers: the NetFPGA provides the ideal platform for research across a wide range of networking topics, from architecture to algorithms and from energy-efficient design to routing and forwarding. The most prominent NetFPGA success is OpenFlow, which in turn has reignited the Software Defined Networking movement. NetFPGA enabled OpenFlow by providing a widely available, open-source development platform capable of line-rate operation, and it was, until its commercial uptake, the reference platform for OpenFlow. NetFPGA enables high-impact network research.

This seminar will combine presentation and demonstration; no knowledge of hardware programming languages (e.g., Verilog/VHDL) is required.

A NetFPGA 10G card will be awarded as a door prize to one of the seminar attendees.

Andrew W. Moore is a Senior Lecturer at the University of Cambridge Computer Laboratory in England, where he is part of the Systems Research Group working on issues of network and computer architecture. His research interests include enabling open-source network research and education using the NetFPGA platform; other research pursuits include low-power, energy-aware networking and novel network and systems data-center architectures. He holds B.Comp. and M.Comp. degrees from Monash University and a Ph.D. from the University of Cambridge. He is a chartered engineer with the IET and a member of the IEEE, ACM, and USENIX.

Universal and affordable Computational Integrity, or, succinctly, from C to PCP

Date and Time
Thursday, March 14, 2013 - 12:30pm to 1:30pm
Location
Friend Center Convocation Room
Type
Talk
Public key cryptography, invented in the 1970s, revolutionized the world of computation by providing a tool-box that builds trust in the authenticity and integrity of data, even when it is transmitted from unknown computers and relayed via untrusted and possibly corrupt intermediaries. A celebrated line of theoretical works dating back to the 1980s envisions a similar revolution, offering a level of trust in the integrity of arbitrary computations even when executed on untrusted and possibly malicious computers. A particularly surprising aspect of these results is succinctness, which means that the running time needed to verify a computation is only as costly as reading the program that describes it, even when this program is exponentially shorter than the computation's running time!
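
To make the succinctness claim concrete, stated only up to polylogarithmic factors as a paraphrase of the asymptotics above (not exact bounds from the paper): for a program P of length |P| run on input x for T steps,

T_{\text{verify}} \;=\; O\big((|P| + |x|) \cdot \operatorname{polylog}(T)\big),
\qquad
T_{\text{prove}} \;=\; O\big(T \cdot \operatorname{polylog}(T)\big).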

Common belief has been that it is impractical to build a truly succinct computational-integrity protocol. We challenge this belief by describing the first full-scale implementation of such a protocol. Our system compiles programs written in standard C into succinctly verifiable programs, with asymptotic parameters that match the best possible bounds predicted by theory.

Joint work with Alessandro Chiesa, Daniel Genkin and Eran Tromer.

Flow-Cut Gaps and Network Coding

Date and Time
Tuesday, February 5, 2013 - 3:00pm to 4:00pm
Location
Computer Science 402
Type
Talk
Host
Jennifer Rexford
The classic max-flow min-cut theorem states that in a network with one source and one sink, the amount of information that can be sent from the source to the sink is equal to the minimum capacity of a set of edges separating the source from the sink. This equivalence allows for efficient computation of both parameters.
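
For a concrete illustration on a tiny example graph of my own (arbitrary capacities, using the NetworkX library): the max-flow value and the min-cut capacity coincide, as the theorem guarantees.

import networkx as nx

G = nx.DiGraph()
G.add_edge('s', 'a', capacity=3)
G.add_edge('s', 'b', capacity=2)
G.add_edge('a', 'b', capacity=1)
G.add_edge('a', 't', capacity=2)
G.add_edge('b', 't', capacity=3)

flow_value, _ = nx.maximum_flow(G, 's', 't')
cut_value, (source_side, sink_side) = nx.minimum_cut(G, 's', 't')
print(flow_value, cut_value)   # both are 5 for this graph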

Modifying traffic demands, constraints, and optimization functions yields many variations of the max-flow problem. In all of these variations there is a corresponding cut minimization problem whose optimal value serves as an upper bound on the max-flow. But unlike the single source-sink version, the cut upper bound is rarely equal to the max-flow. The flow-cut gap, the worst possible ratio between the cut upper bound and the max-flow rate, is an important object of study for developing approximation algorithms and for gaining a better understanding of network flow.
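
In symbols, simply restating the definition above: over a family of problem instances I,

\text{flow-cut gap} \;=\; \sup_{I}\ \frac{\text{cut bound}(I)}{\text{max-flow rate}(I)} \;\ge\; 1.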

Related to the max-flow problem is the network coding rate, the maximum rate at which information can be transmitted in networks whose nodes can perform nontrivial encoding and decoding operations on incoming messages. The network coding rate is always at least as large as the max-flow rate and can be much larger. A spectacular result of network coding theory is that in the multicast problem, a setting with a large flow-cut gap, the coding rate is always equal to the cut bound. This talk considers the relations between the flow rate, the coding rate, and cut bounds for multicommodity flow problems. I present new results on coding-cut gaps, the worst-case ratio between the cut upper bound and the coding rate. Further, I show that there exist paradigms apart from multicast in which coding can bridge the flow-cut gap. Specifically, in the network that provides the largest known separation (Saks et al.) between the maximum multicommodity flow and the multicut, the coding rate is equal to the multicut.
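
The classic butterfly network is the standard small example of coding beating routing for multicast; the toy sketch below (my own illustration, not taken from the talk) shows how XOR coding on the bottleneck edge lets both sinks recover both source bits, achieving multicast rate 2.

def butterfly_multicast(b1, b2):
    coded = b1 ^ b2                 # the single bottleneck edge carries b1 XOR b2
    sink1 = (b1, b1 ^ coded)        # t1 receives b1 directly and decodes b2
    sink2 = (coded ^ b2, b2)        # t2 receives b2 directly and decodes b1
    return sink1, sink2

for b1 in (0, 1):
    for b2 in (0, 1):
        assert butterfly_multicast(b1, b2) == ((b1, b2), (b1, b2))
print("both sinks decode both bits")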

This is joint work with Robert Kleinberg (Cornell University) and Eyal Lubetzky (Microsoft Research Redmond).
