Changing the Game: A Different Approach to Cybersecurity

Date and Time
Thursday, April 19, 2018 - 4:30pm to 6:00pm
Location
Robertson Hall, Arthur Lewis Auditorium
Type
Talk
Speaker
J. Michael Daniel '92, Cyber Threat Alliance

OPEN TO THE PUBLIC

Mr. Daniel's keynote address launches the policy conference, Cyber Security and Warfare in the 21st Century, co-sponsored by the Center for International Security Studies and the Center for Information Technology Policy. The conference will take place on April 20 from 8:45am-5:00pm in Arthur Lewis Auditorium. Please register here.

Michael Daniel currently serves as the President of the Cyber Threat Alliance (CTA). CTA works to improve the cybersecurity of our global digital ecosystem by enabling real-time, high-quality cyber threat information sharing among companies and organizations in the cybersecurity field. Prior to joining the CTA in February 2017, Michael served from June 2012 to January 2017 as Special Assistant to President Obama and Cybersecurity Coordinator on the National Security Council Staff. In this role, Michael led the development of national cybersecurity strategy and policy, and ensured that the US government effectively partnered with the private sector, non-governmental organizations, and other nations.

Speaker(s): 
J. Michael Daniel, President, Cyber Threat Alliance
Former Special Assistant to President Obama and White House Cybersecurity Coordinator

Network-Centric Computing for Online Services

Date and Time
Monday, March 12, 2018 - 4:30pm to 6:00pm
Location
Engineering Quadrangle B205
Type
Talk
Speaker
Alexandros (Alex) Daglis, EPFL
Modern datacenters offer an abundance of online services to billions of daily users. The most demanding online services come with tight response latency requirements, forcing service providers to keep their massive datasets memory-resident, distributed across the datacenter’s servers. Every user request accesses the memory of hundreds of servers; therefore, fast access to the aggregate memory pool is of crucial importance for service quality. The entity binding all the memory resources together is the datacenter network. Unfortunately, despite the dramatic evolution of datacenter fabrics over the past decade, networking still incurs a latency overhead of tens of microseconds to every remote memory access, making memory pooling impractical. In this talk, I will propose a holistic network-centric system redesign to enable memory pooling in future datacenters, which includes (i) a specialized lightweight network stack; (ii) on-chip integration of the network interface logic; and (iii) new network operations with richer, end-to-end semantics. I will highlight the role and demonstrate the effect of each of these three key features with systems built throughout my dissertation.
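The scale of the latency problem can be illustrated with a back-of-the-envelope calculation. This sketch is mine, not the speaker's; the local DRAM latency is an assumed typical figure, while the network overhead uses the low end of the "tens of microseconds" mentioned above:

```python
# Rough comparison of local vs. remote memory access latency.
# LOCAL_DRAM_NS is an assumed typical value (~100 ns), not from the talk;
# NETWORK_OVERHEAD_NS takes "tens of microseconds" at its low end (20 us).
LOCAL_DRAM_NS = 100
NETWORK_OVERHEAD_NS = 20_000

slowdown = (LOCAL_DRAM_NS + NETWORK_OVERHEAD_NS) / LOCAL_DRAM_NS
print(f"A remote access costs ~{slowdown:.0f}x a local DRAM access")
```

Even at the optimistic end, a remote access costs two orders of magnitude more than a local one, which is why the talk argues the network stack itself must be redesigned before memory pooling becomes practical.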

Bio:
Alexandros (Alex) Daglis is a sixth-year PhD student at EPFL, advised by Prof. Babak Falsafi and Prof. Edouard Bugnion. His research interests lie in rack-scale computing and datacenter architectures. Alex advocates tighter integration and co-design of network and compute resources as a necessary approach to tackling the performance overheads associated with inter-node communication in scale-out architectures. He has been a founding member of Scale-Out NUMA, an architecture, programming model, and communication protocol for low-latency, distributed in-memory processing. Scale-Out NUMA has been prototyped and awarded a US patent. As an intern at HP Labs, Alex worked on the design of The Machine’s unique memory subsystem. More details can be found on his CV.

The Rise of Artificial Intelligence: Who Will Have Jobs in the Future?

Date and Time
Thursday, March 1, 2018 - 4:30pm to 6:00pm
Location
McCosh Hall 50
Type
Talk

A conversation with Brad Smith, President and Chief Legal Officer of Microsoft.

Please register here: Eventbrite

Info about the new book from Microsoft, The Future Computed: Artificial Intelligence and its role in society

Brad Smith’s bio and photos

Women's Interactive Networking Event (WINE)

Date and Time
Wednesday, November 15, 2017 - 4:30pm to 6:00pm
Location
Computer Science Tea Room
Type
Talk
Host
Marshini Chetty

At this meeting, we will have a short talk by Margaret Martonosi, Professor of CS at Princeton, followed by small group discussions to share personal experiences of success and failure at creating change to advance diversity. We will then join together for a large group discussion to share key insights and brainstorm topics for future WINE meetings.

Open to graduate students, postdocs, and faculty of all genders!

Global Measurements of Internet Censorship

Date and Time
Friday, April 14, 2017 - 12:30pm to 1:30pm
Location
127 Corwin Hall
Type
Talk
Host
The Program for Quantitative and Analytical Political Science, Department of Politics

Photo of Professor Nick Feamster, by David Kelly Crow

Internet users in many countries around the world are subject to various forms of censorship and information control. Despite its widespread nature, however, measuring Internet censorship on a global scale has remained an elusive goal. Internet censorship is known to vary across time and across regions (and Internet Service Providers) within a country. To capture these complex dynamics, Internet censorship measurement must be both continuous and distributed across a large number of vantage points. To date, gathering such information has required recruiting volunteers to perform measurements from within countries of interest; this approach permits neither continuous measurement nor collection from a large number of locations, and it may put the people performing the measurements at risk. Over the past four years, we have developed a collection of measurement techniques to surmount the limitations of these conventional approaches. In this talk, I will describe three such techniques: (1) Encore, a tool that performs cross-origin requests to measure Web filtering; (2) Augur, a tool that exploits side-channel information in the Internet Protocol (IP) to measure filtering using network-level access control lists; and (3) a tool that measures DNS filtering using queries through open DNS resolvers. These three tools allow us, for the first time, to characterize Internet censorship continuously, from hundreds of countries around the world, at different layers of the network protocol stack. Each of these techniques involves both technical and ethical challenges. I will describe some of the challenges that we faced in designing and implementing these tools, how we tackled them, our experiences with measurements to date, and our plans for the future.
Long term, our goal is to collaborate with social scientists to bring data to bear on a wide variety of questions concerning Internet censorship and information control; I will conclude with an appeal to cross-disciplinary work in this area and some ideas for how computer scientists and social scientists might work together on some of these pressing questions going forward.
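To make the third technique concrete, here is a minimal sketch, not the authors' tool, of how one might probe DNS filtering through an open resolver. Domain names and resolver addresses are placeholders that a real measurement platform would supply:

```python
# Minimal sketch of a DNS filtering probe: build a DNS A-record query
# by hand and send it to an open resolver over UDP. Illustrative only;
# the real measurement systems add validation, scheduling, and the
# ethical safeguards discussed in the talk.
import socket
import struct

def build_query(domain, txid=0x1234):
    """Build a minimal DNS query packet for an A record."""
    # Header: ID, flags (recursion desired), 1 question, 0 other records.
    header = struct.pack(">HHHHHH", txid, 0x0100, 1, 0, 0, 0)
    # QNAME: length-prefixed labels terminated by a zero byte.
    qname = b"".join(
        bytes([len(label)]) + label.encode() for label in domain.split(".")
    ) + b"\x00"
    question = qname + struct.pack(">HH", 1, 1)  # QTYPE=A, QCLASS=IN
    return header + question

def query_resolver(resolver_ip, domain, timeout=2.0):
    """Send the query over UDP; return raw response bytes, or None on timeout."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    try:
        sock.sendto(build_query(domain), (resolver_ip, 53))
        return sock.recvfrom(512)[0]
    except socket.timeout:
        return None
    finally:
        sock.close()
```

Comparing the answer returned by an open resolver inside a country of interest against the answer from a trusted resolver outside it flags likely DNS manipulation; this sketch omits the many safeguards the deployed tools require.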

This research is in collaboration with Sam Burnett, Roya Ensafi, Paul Pearce, Ben Jones, Frank Li, and Vern Paxson.

Interrupts in OS code: let’s reason about them. Yes, this means concurrency.

Date and Time
Monday, May 16, 2016 - 11:00am to 12:00pm
Location
Computer Science 402
Type
Talk
Speaker
June Andronick, Data61|CSIRO
Existing modelled and verified operating systems (OSes) typically run on uniprocessor platforms with interrupts mostly disabled. This makes formal reasoning more tractable: execution is mostly sequential. The eChronos OS is a real-time OS used in tightly constrained devices, running on (uniprocessor) embedded microcontrollers. It is used in the DARPA-funded HACMS program, where it runs the flight control software of a high-assurance quadcopter. To provide low latency, the eChronos OS runs with interrupts enabled and provides a preemptive scheduler. In terms of verification, this means that concurrency reasoning is required, which significantly increases the complexity: application and OS instructions may be interleaved with interrupt handler instructions.

In our work we explicitly model the effect of interrupts and their handling by the hardware and OS. We provide a general formal model of the interleaving between OS code, application code and interrupt handlers. We then instantiate this model to formalise the scheduling behavior of the eChronos OS, and prove the main scheduler property: the running task is always the highest-priority runnable task.
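The proved property can be illustrated with a toy Python model. This is my own sketch for intuition, not the formal model or proof from the work:

```python
# Toy model of the eChronos scheduler property: after every scheduling
# decision, the running task is the highest-priority runnable task.
# Here the invariant is checked dynamically; the actual work proves it
# over all interleavings of OS, application, and handler instructions.
class Scheduler:
    def __init__(self):
        self.runnable = {}   # task name -> priority (higher = more urgent)
        self.running = None

    def set_runnable(self, task, priority):
        """E.g. an interrupt handler unblocking a task."""
        self.runnable[task] = priority
        self.schedule()

    def block(self, task):
        """A task blocking (or completing)."""
        self.runnable.pop(task, None)
        self.schedule()

    def schedule(self):
        """Pick the highest-priority runnable task (models preemption)."""
        self.running = max(self.runnable, key=self.runnable.get, default=None)
        # The invariant from the talk, checked at every transition:
        if self.running is not None:
            assert self.runnable[self.running] == max(self.runnable.values())
```

In this toy model, an interrupt handler that makes a high-priority task runnable immediately triggers a reschedule, so the invariant holds after every transition.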

June Andronick is a Senior Researcher at Data61|CSIRO (formerly NICTA). Her research focuses on increasing the reliability of critical software systems, by mathematically proving that the code behaves as expected and satisfies security and safety requirements.  She contributed to the seL4 correctness proof and now focuses on concurrency reasoning for OS code. She leads the concurrency software verification research in Data61, and is deputy leader of the Trustworthy Systems group. She was recognised in 2011 by MIT's Technology Review as one of the world's top young innovators (TR35). She holds a PhD in Computer Science from the University of Paris-Sud, France.

Towards Socially-aware Artificial Intelligence: Human Behavior Understanding at Scale

Date and Time
Wednesday, April 20, 2016 - 1:30pm to 2:30pm
Location
Computer Science 402
Type
Talk

Over the past sixty years, Intelligent Machines (IM) have made great progress in playing games, tagging images in isolation, and recently making decisions for self-driving vehicles. Despite these advancements, they are still far from making decisions in social scenes and assisting humans in public spaces such as terminals, malls, campuses, or any crowded urban environment. To overcome these limitations, we need to empower machines with social intelligence, i.e., the ability to get along well with others and facilitate mutual cooperation. This is crucial for designing smart spaces that adapt to the behavior of humans for efficiency, and for developing autonomous machines that assist in crowded public spaces (e.g., delivery robots, or self-navigating Segways).

In this talk, I will present my work towards socially-aware machines that can understand human social dynamics and learn to forecast them. First, I will highlight the machine vision techniques behind understanding the behavior of more than 100 million individuals captured by multi-modal cameras in urban spaces. I will show how to use sparsity-promoting priors to extract meaningful information about human behavior from an overwhelming volume of high-dimensional, high-entropy data. Second, I will introduce a new deep learning method to forecast human social behavior. The causality behind human behavior is an interplay between observable and non-observable cues (e.g., intentions). For instance, when humans walk into crowded urban environments such as a busy train terminal, they obey a large number of (unwritten) common-sense rules and comply with social conventions. They typically avoid crossing groups and keep a personal distance from their surroundings. I will present detailed insights on how to learn these interactions from millions of trajectories. I will describe a new recurrent neural network that can jointly reason on correlated sequences and simulate human trajectories in crowded scenes. It opens new avenues of research in learning the causalities behind the world we observe. I will conclude my talk by mentioning some ongoing work in applying these techniques to social robots and the first generation of smart hospitals.
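For intuition about the forecasting task, consider the simplest possible baseline, a sketch of my own rather than anything from the talk: constant-velocity extrapolation, which ignores exactly the social interactions the recurrent model is designed to capture:

```python
# Naive trajectory forecasting baseline: extrapolate a pedestrian's
# last observed velocity. Illustrative only; the socially-aware
# recurrent model described in the talk replaces this kind of baseline
# by also conditioning on neighboring pedestrians' trajectories.
def predict_constant_velocity(track, steps):
    """track: list of (x, y) positions at uniform time steps (>= 2 points)."""
    (x0, y0), (x1, y1) = track[-2], track[-1]
    vx, vy = x1 - x0, y1 - y0  # last observed displacement per step
    return [(x1 + vx * k, y1 + vy * k) for k in range(1, steps + 1)]
```

Such a baseline walks straight through groups and personal space; conditioning each pedestrian's predicted trajectory on those of their neighbors is what lets the learned model respect the unwritten social rules described above.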

Alexandre Alahi is currently a postdoctoral fellow at Stanford University and received his PhD from EPFL. His research interests span visual information processing, computer vision, machine learning, and robotics, and are focused on understanding and forecasting human social behaviors. In particular, he is interested in sparse approximation, deep learning, big visual data processing, real-time machine vision, and multi-modal reasoning. He was awarded the Swiss NSF early and advanced researcher grants. He won the CVPR 2012 Open Source Award for his work on a retina-inspired image descriptor, and the ICDSC 2009 Challenge Prize for his sparsity-driven algorithm to track sport players. His PhD was nominated for the EPFL PhD prize. His research has been covered by The Wall Street Journal and PBS in the US, as well as European newspapers, Swiss TV news, and the Euronews TV channel in Europe. He has also co-founded the startup Visiosafe, transferring his research on understanding human behavior into an industrial product. He was selected as one of the Top 20 Swiss Venture Leaders in 2010 and won several startup competitions.

The New ABCs of Research: Achieving Breakthrough Collaborations

Date and Time
Thursday, April 21, 2016 - 8:00pm to 9:00pm
Location
Computer Science Large Auditorium (Room 104)
Type
Talk
All Princeton ACM / IEEE-CS meetings are open to the public. Students and their parents are welcome.

How should we organize research programs in computer science and technology? Research helps us produce innovative new products (the next iPhone), and research also delivers “foundational theories” (new algorithms and supporting technologies).

Solving the immense problems of the 21st century will require ambitious research teams that are skilled at producing practical solutions and foundational theories simultaneously – that is the ABC Principle: Applied & Basic Combined. Then these research teams can deliver high-impact outcomes by applying the SED Principle: Blend Science, Engineering and Design Thinking, which encourages use of the methods from all three disciplines. These guiding principles (ABC & SED) are meant to replace Vannevar Bush’s flawed linear model from 1945 that has misled researchers for 70+ years. These new guiding principles will enable students, faculty, business leaders, and government policy makers to accelerate discovery and innovation. The examples in the talk will emphasize how these guiding principles can be applied to reinvigorate computing research. 

Ben Shneiderman is a Distinguished University Professor in the Department of Computer Science at the University of Maryland. He is also the Founding Director (1983-2000) of the Human-Computer Interaction Laboratory (http://www.cs.umd.edu/hcil/), and a Member of the University of Maryland Institute for Advanced Computer Studies (UMIACS). He is a Fellow of the AAAS, ACM, IEEE, and NAI, and a Member of the National Academy of Engineering, in recognition of his pioneering contributions to human-computer interaction and information visualization. His contributions include the direct manipulation concept, clickable highlighted web-links, touchscreen keyboards, dynamic query sliders for Spotfire, development of treemaps, novel network visualizations for NodeXL, and temporal event sequence analysis for electronic health records.

Ben is the co-author with Catherine Plaisant of Designing the User Interface: Strategies for Effective Human-Computer Interaction (6th ed., 2016). His book Leonardo’s Laptop (MIT Press) won the IEEE book award for Distinguished Literary Contribution. Shneiderman’s latest book is The New ABCs of Research: Achieving Breakthrough Collaborations (Oxford, February 2016).

Abstractions for Programming the Data Plane at Line Rate

Date and Time
Tuesday, December 1, 2015 - 3:00pm to 4:00pm
Location
Sherrerd Hall 306
Type
Talk
Speaker
Anirudh Sivaraman, MIT
The evolution of network switches has been driven primarily by performance. Recently, thanks in part to the emergence of large datacenter networks, the need for better control over network operations, and the desire for new features, programmability of switches has become as important as performance. In response, researchers and practitioners have developed reconfigurable switching chips that are performance-competitive with line-rate fixed-function switching chips. These chips provide some programmability through restricted hardware primitives that can be configured with software directives.

This talk will focus on abstractions for programming such chips. The first abstraction, packet transactions, lets programmers express packet processing in an imperative language under the illusion that the switch processes exactly one packet at a time. A compiler then translates this programmer view into a pipelined implementation that processes multiple packets concurrently. The second abstraction, a push-in first-out (PIFO) queue, allows programmers to express new scheduling algorithms using a priority queue coupled with a program that computes each packet's priority. Together, these two abstractions allow us to express several packet-processing functions at line rate, including in-network congestion control, active queue management, data-plane load balancing, measurement, and packet scheduling.
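The PIFO abstraction can be sketched in software as a priority queue with a programmable rank function. This is an illustrative model only (the real abstraction targets line-rate switching hardware), and the rank functions and packet fields shown are my own examples:

```python
# Software sketch of a push-in first-out (PIFO) queue: packets are
# pushed in with a programmer-computed rank and dequeued in rank order.
# The rank function is the programmable part; the queue itself is fixed.
import heapq

class PIFO:
    def __init__(self, rank_fn):
        self.rank_fn = rank_fn   # program that computes a packet's priority
        self.heap = []
        self.seq = 0             # tie-breaker: FIFO among equal ranks

    def push(self, packet):
        heapq.heappush(self.heap, (self.rank_fn(packet), self.seq, packet))
        self.seq += 1

    def pop(self):
        return heapq.heappop(self.heap)[2]  # lowest rank dequeues first
```

For example, `PIFO(lambda p: p["deadline"])` yields earliest-deadline-first scheduling, while ranking by arrival time degenerates to FIFO; changing the scheduling discipline means changing only the rank program, not the queue.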

This talk includes joint work with collaborators at MIT, Barefoot Networks, Cisco Systems, Microsoft Research, Stanford, and the University of Washington.

Anirudh Sivaraman is a graduate student in MIT’s Computer Science and Artificial Intelligence Laboratory. He is broadly interested in computer networking and his recent research work is in the area of programmable forwarding planes.

Behavioral Non-Portability in Scientific Numeric Computing

Date and Time
Friday, December 4, 2015 - 12:30pm to 1:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
Talk
Host
Aarti Gupta

The precise semantics of floating-point arithmetic programs depends on the execution platform, including the compiler and the target hardware. Platform dependencies are particularly pronounced for arithmetic-intensive scientific numeric programs and infringe on the highly desirable goal of software portability (which is in fact promised by heterogeneous computing frameworks like OpenCL): the same program run on the same inputs on different platforms can produce different results. So far so bad.

Serious doubts on the portability of numeric applications arise when these differences are behavioral, i.e. when they lead to changes in the control flow of a program. In this work I will present an algorithm that takes a numeric procedure and determines an input that is likely to lead to different decisions depending merely on how the arithmetic in the procedure is compiled. Our implementation of the algorithm requires minimal intervention by the user. I will illustrate its operation on examples characteristic of scientific numeric computing, where control flow divergence actually occurs across different execution platforms.
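The root cause of such divergence can be demonstrated even in plain Python, whose IEEE-754 doubles evaluate deterministically: floating-point addition is not associative, so a compiler that re-associates a sum can flip a later branch. The values below are my own, chosen to expose the effect:

```python
# Floating-point addition is not associative: two evaluation orders of
# the same sum produce different results, so a branch on the sum can
# take different paths depending on how the arithmetic was compiled.
a, b, c = 1e16, -1e16, 0.5

left_to_right = (a + b) + c   # 0.5: a and b cancel first, then c survives
reassociated  = a + (b + c)   # 0.0: c is lost rounding b + c back to -1e16

# The same comparison diverges under the two orderings:
print(left_to_right > 0.25)   # True
print(reassociated > 0.25)    # False
```

A compiler for one platform may keep the left-to-right order while another re-associates or fuses operations, so the same branch can go both ways across platforms. That is precisely the behavioral divergence the algorithm in the talk searches for.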

Time permitting, I will also sketch how to prove the /absence/ of inputs that may lead to control flow divergence, i.e. how to prove programs /stable/ (in one of the many senses of this word). This is ongoing work.

Thomas Wahl is an Assistant Professor at Northeastern University. This is joint work with Yijia Gu, Mahsa Bayati, and Miriam Leeser at Northeastern.
