
Colloquium

Leakage resistant public key cryptography

Date and Time
Wednesday, March 18, 2009 - 4:20pm to 5:50pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Speaker
Moni Naor, from Weizmann Institute, Israel
Host
Sanjeev Arora
Most work on the formal analysis of cryptographic schemes has traditionally concentrated on abstract adversarial models that do not capture side-channel attacks. Such attacks exploit various forms of unintended information leakage, which is inherent to almost all physical implementations. In light of the prevalence of such attacks, there have been several attempts to model them and to suggest schemes that are resistant to some of them. I will describe recent developments in the area, especially those inspired by the "cold boot attacks" of Halderman et al. (USENIX Security 2008) and the model suggested by Akavia, Goldwasser and Vaikuntanathan (TCC 2009), in which adversarially chosen functions of the secret key are leaked to the attacker. In particular, I will show a new, simple construction of a public-key cryptosystem resistant to leakage of almost all of the key. I will also discuss directions for future research. Joint work with Gil Segev.
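
For readers unfamiliar with the setting, here is a minimal sketch of the bounded-leakage model described above (a standard formulation added for illustration, not taken from the talk). Let $(pk, sk) \leftarrow \mathrm{KeyGen}(1^n)$ with $|sk| = n$ bits. The adversary adaptively chooses leakage functions $f_1, \dots, f_t$ and, in addition to $pk$, receives
\[
  f_1(sk), \dots, f_t(sk)
  \qquad \text{subject to} \qquad
  \sum_{i=1}^{t} |f_i(sk)| \le \lambda .
\]
Semantic security must still hold in the presence of this leakage; "resistance to leakage of almost all the key" corresponds to tolerating a leakage bound $\lambda$ as large as $(1-\epsilon)\,n$ for small $\epsilon > 0$.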

50 Years of C++

Date and Time
Wednesday, March 4, 2009 - 4:15pm to 5:45pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Speaker
Bjarne Stroustrup, from Texas A&M
Host
Brian Kernighan
This is a talk about the design and evolution of C++. Like all programming languages, C++ owes a lot to earlier languages and evolves through the blending of ideas. This talk tries to answer some common questions about the success of C++: Why did it succeed? At what did it succeed? How did it maintain a steady course over more than 25 years? At what did it not succeed? The scale of C++'s success was unanticipated, and its continuing strength has left many language and business theorists puzzled, so explanations are required. Given the long time span involved, and because no large system can be designed with 20-20 hindsight, a historical perspective is an essential part of any answer. A combination of technical, philosophical, and sociological issues must be considered. This talk focuses on the design aims of C++ and my philosophy of incremental language evolution relying on feedback loops. It goes into some detail on the ISO standards process, which has been central to the stability and evolution of C++. It gives a few -- but only very few -- illustrative code examples.

Grad Get-Together

Date and Time
Wednesday, February 25, 2009 - 4:15pm to 5:45pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Speaker
Larry Peterson
Host
Larry Peterson

Google and the Vapnik-Chervonenkis Dimension

Date and Time
Wednesday, February 11, 2009 - 4:15pm to 5:45pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Speaker
Stuart Geman, from Brown University
Host
Fei-fei Li
Google engineers routinely train query classifiers, for ranking advertisements or search results, on more words than any human being sees or hears in a lifetime. A human being who sees a meaningfully new image every second for one hundred years will not see as many images as Google has in its libraries, all of which are available for training object detectors and image classifiers. Yet by human standards the state of the art in computer understanding of language and in computer-generated image analysis is primitive. What explains the gap? Why can’t learning theory tell us how to make machines that learn as efficiently as humans? Upper bounds on the number of training samples needed to learn a classifier as rich and competent as the human visual system can be derived using the Vapnik-Chervonenkis dimension or the metric entropy, but these suggest that not only does Google need more examples, but all of evolution might fall short. I will make some proposals for efficient learning and offer some mathematics to support them.
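
For context, the kind of upper bound the abstract alludes to has the standard PAC-learning form (a textbook statement added here for illustration, not part of the abstract): to learn a concept class of VC dimension $d$ to error $\epsilon$ with confidence $1 - \delta$, it suffices to have
\[
  m = O\!\left( \frac{1}{\epsilon} \left( d \log \frac{1}{\epsilon} + \log \frac{1}{\delta} \right) \right)
\]
labeled training examples. For a hypothesis class as rich as the human visual system, $d$ is presumably enormous, which is why such bounds suggest that even Google-scale data may fall short.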

STAIR: The STanford Artificial Intelligence Robot Project

Date and Time
Wednesday, February 4, 2009 - 4:15pm to 5:45pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Speaker
Andrew Ng, from Stanford
Host
David Blei
This talk will describe the STAIR home assistant robot project, and the satellite projects that led to key STAIR components such as (i) robotic grasping of previously unknown objects, (ii) depth perception from a single still image, and (iii) multi-modal robotic perception.

Since its birth in 1956, the AI dream has been to build systems that exhibit broad-spectrum competence and intelligence. STAIR revisits this dream and seeks to integrate onto a single robot platform tools drawn from all areas of AI, including learning, vision, navigation, manipulation, planning, and speech/NLP. This is in distinct contrast to, and also represents an attempt to reverse, the 30-year-old trend of working on fragmented AI sub-fields. STAIR's goal is a useful home assistant robot, and over the long term, we envision a single robot that can perform tasks such as tidying up a room, using a dishwasher, fetching and delivering items, and preparing meals.

In this talk, I'll describe our progress on having the STAIR robot fetch items from around the office and on having STAIR take inventory of office items. Specifically, I'll describe: (i) learning to grasp previously unseen objects (including unloading items from a dishwasher); (ii) probabilistic multi-resolution maps, which enable the robot to open and use doors; and (iii) a robotic foveal+peripheral vision system for object recognition and tracking. I'll also outline some of the main technical ideas, such as learning 3-d reconstructions from a single still image and reinforcement learning algorithms for robotic control, that played key roles in enabling these STAIR components.

Andrew Ng is an Assistant Professor of Computer Science at Stanford University. His research interests include machine learning, reinforcement learning/control, and broad-competence AI. His group has won best paper/best student paper awards at ACL, CEAS, 3DRR and ICML. He is also a recipient of the Alfred P. Sloan Fellowship.

Games in Networks: the price of anarchy and learning

Date and Time
Thursday, December 11, 2008 - 4:30pm to 6:00pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Speaker
Eva Tardos, from Cornell
Host
Sanjeev Arora
Network games play a fundamental role in understanding behavior in many domains, ranging from communication networks through markets to social networks. Such networks are used, and also evolve, due to the selfish behavior of their users and owners. In light of these competing forces, it is surprising how efficient these networks are. It is an exciting challenge to understand the operation and success of these networks in game-theoretic terms: what principles of interaction lead selfish participants to form such efficient networks? We will focus on congestion games and study the degradation in quality of solution caused by the selfish behavior of users. We model users as learning algorithms, and show that natural learning behavior can avoid the bad outcomes predicted by the price of anarchy in atomic congestion games such as the load-balancing game. We use tools from the theory of dynamical systems and algebraic geometry to show that when players use a class of natural learning algorithms, the distribution of play converges to the set of weakly stable equilibria, and that the weakly stable equilibria coincide with the pure Nash equilibria with probability 1 when congestion costs are selected at random independently on each edge (from any monotonically parametrized distribution). The talk is a self-contained survey.

About the speaker: Eva Tardos is a Professor of Computer Science at Cornell University, where she is currently chair. Her research is in algorithm design and algorithmic game theory. Algorithmic game theory is an emerging area concerned with designing systems and algorithms for selfish users. She is a winner of the Fulkerson Prize and the Dantzig Prize.
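
As a quick reminder of the standard definitions behind the abstract (added for illustration, not taken from the talk): in a congestion game each player $i$ selects a set $s_i$ of resources (e.g., machines or edges), and the price of anarchy compares the worst Nash equilibrium to the social optimum:
\[
  \mathrm{cost}_i(s) = \sum_{e \in s_i} c_e\big( \ell_e(s) \big),
  \qquad
  \mathrm{PoA} = \max_{s \in \mathrm{Nash}} \frac{\sum_i \mathrm{cost}_i(s)}{\min_{s^*} \sum_i \mathrm{cost}_i(s^*)} ,
\]
where $\ell_e(s)$ is the number of players using resource $e$ and $c_e$ is its congestion cost function.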

Computer Graphics as a Telecommunication Medium

Date and Time
Wednesday, December 3, 2008 - 4:15pm to 5:45pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Speaker
Vladlen Koltun, from Stanford University
Host
Michael Freedman
I will argue that the primary contribution of computer graphics in the next decade will be to enable richer social interaction at a distance. The integration of real-time computer graphics and large-scale distributed systems will give rise to a rich telecommunication medium, currently referred to as virtual worlds. The medium provides open-ended face-to-face communication among ad-hoc groups of people in custom environments and decouples shared spatial experiences from geographic constraints.

I will outline a research agenda for enabling and advancing the medium. The three driving themes are system architectures, content creation, and interaction. System architectures aim to support the medium at planetary scale. Content creation aims to enable untrained participants to create high-quality three-dimensional content. Interaction aims to make virtual world communication seamless and natural. I will demonstrate preliminary results in each area.

Bio: Vladlen Koltun is an Assistant Professor of Computer Science at Stanford University. He directs the Virtual Worlds Group, which explores how scalable virtual world systems can be built, populated, and used. His prior work in computational geometry and theoretical computer science was recognized with the NSF CAREER Award, the Alfred P. Sloan Fellowship, and the Machtey Award.

What Should the FCC Do About Net Neutrality?

Date and Time
Thursday, November 20, 2008 - 4:30pm to 6:00pm
Location
Woolworth Center 202
Type
Colloquium
Speaker
Phil Weiser
Host
Edward Felten
“Net Neutrality” means that Internet users get treated equally, no matter which web sites or services they use. Does this rule matter? How can government enforce it without hurting innovation? Phil Weiser, a law professor and veteran of the Department of Justice, will offer an answer.

Abstract: Institutional Design, Network Neutrality, and FCC Reform

The Federal Communications Commission’s management of the Comcast affair and its ultimate decision in that case sanctioning Comcast’s behavior underscore several critical points about the future direction of Internet policy in the U.S. First, without any form of effective oversight, broadband providers are unlikely to refrain from strategic behavior or from innocently intended but unfortunate conduct. Second, the era of the “unregulation of the Internet” is over. And third, the FCC has failed to develop a thoughtful institutional strategy for making the necessary judgments and implementing an effective oversight regime to govern the behavior of broadband providers.

To address the above points, the FCC should develop a new institutional strategy for confronting network neutrality issues like the one raised in the Comcast case and others likely to arise in the future. In particular, the FCC should authorize a private body to play a self-regulatory role and thereby enlist the traditional Internet ethos of collaboration as a policymaking tool. Such a self-regulatory effort, however, cannot succeed without effective oversight and involvement from the FCC, meaning, among other things, that its own adjudicative processes should be designed and managed in a far more effective manner than employed in the Comcast case. Notably, the FCC should develop processes more like a true adjudicative model, where it can find facts through evidence submitted under oath and subject to cross-examination, and use the deterrent value of such processes to encourage honest and good faith collaboration among the relevant stakeholders as well as compliance with the relevant standards of conduct.

While there is a role for some front-end guidance and definition of the relevant policy goals, the FCC cannot simply announce a set of rules or principles and declare victory. Indeed, as the Comcast case underscores, broadband providers will, for any number of reasons, test the bounds of principles like “reasonable network management” and the FCC will thus need to develop an institutional response for addressing such issues. To be sure, putting such a model into practice presents a number of challenges, but there are valuable guideposts on the road and enlightened broadband providers, applications developers, and end users alike recognize that it is in their collective interest to avoid overly intrusive or unduly politicized regulatory processes that have historically not governed the Internet. Over time, moreover, the model suggested herein may be suitable for a number of telecommunications policy issues that have yet to generate a robust institutional response from the FCC, including managing property rights in spectrum, overseeing interoperability between IP-IP communications, and ensuring interconnection between Internet backbone providers.

Biographical Information:

Professor Phil Weiser is a professor of law and telecommunications at the University of Colorado. At CU, he has worked to establish a national center of excellence in telecommunications and technology law, founding the Journal on Telecommunications & High Technology Law and the Silicon Flatirons Center for Law, Technology, and Entrepreneurship, as well as writing and teaching in the areas of telecommunications and information policy. Over the last ten years, Weiser has co-authored two books (Digital Crossroads: American Telecommunications Policy in the Internet Age (MIT Press 2005) and Telecommunications Law and Policy (Carolina Academic Press 2006)) and numerous articles, and has testified before both houses of Congress.

Prior to joining the CU faculty, Professor Weiser served as senior counsel to the Assistant Attorney General in charge of the Antitrust Division at the United States Department of Justice, advising him primarily on telecommunications matters. Before his appointment at the Justice Department, Weiser served as a law clerk to Justices Byron R. White and Ruth Bader Ginsburg at the United States Supreme Court and to Judge David Ebel at the Tenth Circuit Court of Appeals. Weiser graduated with high honors from both the New York University School of Law and Swarthmore College.

Professor Weiser is also a past LAPA Fellow.

Sponsored by CITP, LAPA, and Microsoft.

Contact: Laura Cummings-Abdo, Program Assistant, Center for Information Technology Policy, Princeton University, Sherrerd Hall, Room 303, Princeton, NJ 08544. Tel: 609-258-9658. http://citp.princeton.edu/

GPU Computing

Date and Time
Thursday, October 9, 2008 - 4:30pm to 6:00pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Speaker
Ian Buck, from NVIDIA
Host
Adam Finkelstein
Many researchers have observed that general-purpose computing with programmable graphics hardware (GPUs) has shown promise for solving many of the world's compute-intensive problems, many orders of magnitude faster than conventional CPUs. The challenge has been working within the constraints of a graphics programming environment to leverage this huge performance potential. GPU computing with CUDA is a new approach to computing in which hundreds of on-chip processor cores simultaneously communicate and cooperate to solve complex computing problems, transforming the GPU into a massively parallel processor. The NVIDIA C compiler for the GPU provides a complete development environment that gives developers the tools they need to solve new problems in computation-intensive applications such as product design, data analysis, technical computing, and game physics. In this talk, I will provide a brief history of computing with GPUs and describe how CUDA can solve compute-intensive problems. Looking forward, the shift to heterogeneous computing opens exciting new territory for languages, operating systems, compilers, and architectures, and I will share some of NVIDIA's vision of that future.
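
To make the programming model concrete, here is a minimal, standard CUDA C sketch in the style the abstract describes (a generic SAXPY example written for illustration, not code from the talk): each GPU thread handles one array element, and blocks of threads are scheduled across the chip's processor cores.

// Minimal CUDA C sketch: y = a*x + y, one array element per GPU thread.
// A generic illustrative example, not code from the talk.
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n)                                      // guard against overrun
        y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;                          // one million elements
    const size_t bytes = n * sizeof(float);

    // Host data
    float *hx = (float *)malloc(bytes);
    float *hy = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

    // Device data
    float *dx, *dy;
    cudaMalloc(&dx, bytes);
    cudaMalloc(&dy, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    saxpy<<<blocks, threads>>>(n, 3.0f, dx, dy);

    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);
    printf("y[0] = %f (expected 5.0)\n", hy[0]);

    cudaFree(dx); cudaFree(dy);
    free(hx); free(hy);
    return 0;
}

Compiled with nvcc, this single source file launches over a million lightweight threads across the GPU's cores, which is the sense in which CUDA turns the GPU into a massively parallel processor.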

Bio:

Ian Buck completed his Ph.D. at the Stanford Graphics Lab in 2004. His thesis, titled "Stream Computing on Graphics Hardware," researched programming models and computing strategies for using graphics hardware as a general-purpose computing platform. His work included developing the "Brook" software toolchain for abstracting the GPU as a general-purpose streaming coprocessor. Ian received his B.S.E. from Princeton and is a proud former member of the Princeton CS Graphics Group. He is currently Software Director for GPU Computing at NVIDIA.

Games With a Purpose

Date and Time
Wednesday, October 8, 2008 - 4:30pm to 6:00pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Speaker
Luis von Ahn
Host
Edward Felten
http://citp.princeton.edu/events/lectures/luis-von-ahn/