Independent Work Seminar Offerings - Spring 2016
Meeting Time: Monday 7:30-8:50 PM - Room 402 Computer Science Building
Tuesday 7:30-8:50 PM - Room 402 Computer Science Building
Abstract: Deep learning is the fastest-growing area of machine learning. As highlighted in the New York Times, it is the core technique behind the latest breakthroughs in computer vision, speech recognition, robotics, natural language processing, artificial intelligence, and big data. It uses neural networks with many layers, trained on large datasets, to teach computers to solve perceptual problems, such as detecting recognizable concepts in data, translating or understanding natural languages, and interpreting information from input data. Deep learning is used in both the research community and industry; practical examples include vehicle, pedestrian, and landmark identification for driver assistance; image recognition; speech recognition; natural language processing; neural machine translation; and cancer detection. Major high-tech companies such as Google, Facebook, Tesla, Microsoft, Intel, Yahoo, Baidu, Apple, NVIDIA, Qualcomm, NEC, Toyota, and Huawei all invest significantly in the area. Students in the seminar will focus on developing core components for deep learning algorithms, or on applying deep learning architectures such as ConvNets and LSTMs to a target application.
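To give a concrete flavor of the layered computation described above, here is a minimal NumPy sketch (not part of the seminar materials) of the convolution-plus-nonlinearity step at the heart of a ConvNet layer; the image and kernel values are toy examples chosen only for illustration.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2D cross-correlation: the core operation of a ConvNet layer."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # Dot product of the kernel with one image patch.
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Elementwise nonlinearity applied after each convolution."""
    return np.maximum(x, 0.0)

# A toy 5x5 "image" and a 3x3 vertical-edge kernel.
image = np.arange(25, dtype=float).reshape(5, 5)
kernel = np.array([[1.0, 0.0, -1.0]] * 3)
feature_map = relu(conv2d(image, kernel))
print(feature_map.shape)  # (3, 3)
```

A real ConvNet stacks many such layers (with learned kernels, pooling, and a deep-learning framework doing the heavy lifting), but each layer reduces to this patch-wise dot product followed by a nonlinearity.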
Meeting Time: Tuesday 7:30-8:50 PM - Room 302 Computer Science Building
Abstract: The theme of this seminar is to investigate algorithms and applications where computers utilize sensors to understand the world around them. Today's computers are equipped with numerous sensors, including cameras, microphones, and radio antennae, and new sensors are being developed at an amazing rate. For example, cameras that capture 3D depth (like Microsoft's Kinect) are just now becoming available for tablets and soon will be available for cell phones (e.g., Google's Tango, Intel's RealSense, Apple's acquisition of PrimeSense, Occipital's Structure Sensor). These sensors provide a great opportunity for computers to gather information and behave intelligently in their environments. Classical applications include human-computer interfaces, tracking, localization, communication, etc. However, new applications are now possible too -- including smart rooms, mobile navigation, scene recognition, etc. Students in the seminar will choose a target application and then develop and test a prototype system in which one or more sensors are used to help a computer understand the world for that application. It is expected that new RGB-D cameras and/or other sensors could be made available to students for these projects.
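As one small example of the kind of sensor processing such projects involve, the sketch below back-projects an RGB-D depth map into a 3D point cloud using an assumed pinhole camera model; the intrinsics (fx, fy, cx, cy) and the flat toy depth map are illustrative values, not parameters of any particular sensor.

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project each pixel's depth (meters) into camera-space XYZ
    using the pinhole model: x = (u - cx) * z / fx, y = (v - cy) * z / fy."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1)  # shape (h, w, 3)

depth = np.ones((4, 4))  # toy depth map: a flat wall 1 m away
pts = depth_to_points(depth, fx=2.0, fy=2.0, cx=1.5, cy=1.5)
print(pts.shape)  # (4, 4, 3)
```

Point clouds like this are the usual starting point for RGB-D tasks such as tracking, localization, and scene recognition.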
Meeting Time: Monday 7:30-8:50 PM - Room 302 Computer Science Building
Abstract: How does an idea for an invention actually become an innovation in the marketplace? You may be a computer programming wizard, but there is a lot more to it than just fingers on the keyboard. This seminar, in concert with your developing an independent project of your choice, introduces some of the elements of thinking through and developing an idea into a going concern. Your project will include a software prototype, and a presentation and paper that capture the feasibility of your idea as a business. To help you frame and complete your project, we will discuss distinctions between invention and innovation, various brainstorming and invention methodologies, the DARPA methodology for idea screening, an introduction to intellectual property including patents, a simple business plan, and the elements of an "elevator" pitch. For the more adventurous, the possibility exists for you to share your idea in a real startup pitch event and report on the results. Students may pair up in these projects, creating a joint idea for an enterprise, with each student concentrating on some aspect of the software with a division of labor of frontend, backend, mobile app, data analysis, etc.
Meeting Time: Tuesday 10-11 AM - Room 301 Computer Science Building
Abstract: Recent years have seen a tremendous upsurge in both the interest in and deployment of online learning platforms. However, the efficacy of training that consists primarily of recorded videos has been questioned. There is some thought that people need training that combines a variety of modes of learning. In this vein, it is sometimes suggested that effective visualizations of complex computer science concepts might facilitate learning of these concepts. In this seminar, students will choose some computer science concept from COS 126, 217, 226 or other Princeton Computer Science courses. Examples might be concepts that some students find difficult, such as 1) the dynamic composition and visualization of the operation of various gates and circuits and 2) visualizing function calls and the run-time stack frame for different functions (return types, parameters, optimizations on/off). For their projects, students will design and build an online learning experience that includes a visualization of that concept. The project should also include a testing mechanism by which mastery of the concept may be assessed. A bonus would be to use the system to compare learning with it against a conventional online-video approach. Students may pair up on these projects, creating a joint idea for a learning environment, with each student concentrating on some aspect of the software with a division of labor of frontend, backend, assessment, data analysis, etc. Students are encouraged to learn and use open source tools, such as Open edX, Django, and the D3 visualization library, in order to create the most effective online learning environments.
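As a sketch of how the second example concept (visualizing function calls and the run-time stack) might be instrumented, the Python fragment below uses sys.settrace to record call and return events together with stack depth; a frontend could animate these events as stack frames being pushed and popped. The (event, name, depth) record format is a hypothetical choice for illustration.

```python
import sys

events = []  # (event, function_name, stack_depth) records for a frontend to animate

def tracer(frame, event, arg):
    if event in ("call", "return"):
        # Measure depth by walking the frame chain back to the top.
        depth = 0
        f = frame
        while f is not None:
            depth += 1
            f = f.f_back
        events.append((event, frame.f_code.co_name, depth))
    return tracer  # keep tracing inside this frame so we also see "return"

def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

sys.settrace(tracer)
fib(3)
sys.settrace(None)
# events now holds one "call" and one "return" per fib invocation,
# with depths that rise and fall as the recursion proceeds.
```

Replaying these events in order, drawing one box per active frame, yields exactly the stack-frame animation described above.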
Meeting Time: Tuesday 9:30-10:50 AM - Room 302 Computer Science Building
Abstract: The so-called big data revolution has led to the creation of data sets of various sizes that provide information about real-world situations. Datasets of significant size are available in a variety of domains. These domains range from information about the operations of cities (including, for example, housing data, transportation data, and police data in New York City, among other urban centers) to health data (including epidemiological data on the spread of diseases and genomic data from thousands of individuals) to sports data (including information about virtually every pitch thrown in a baseball game since 1987). Given this wide availability of data, a challenge for the data scientist is to find effective ways to use the data to extend our knowledge of the situations represented by the data. This task involves exploring datasets, cleaning data, asking good questions, and presenting results in the most compelling fashion. The typical project will begin either with a question or with a dataset. In the former case, the goal will be to find datasets that help answer the question and to explore them. In the latter case, the goal will be to explore the dataset to learn new and interesting things.
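The clean-then-explore workflow described above can be sketched in a few lines; the pitch-speed table below is fabricated toy data standing in for the kind of records a real project would load from a public dataset.

```python
from statistics import mean

# Toy stand-in for real pitch-by-pitch records (fabricated values).
rows = [
    {"pitcher": "A", "speed_mph": 94.1},
    {"pitcher": "A", "speed_mph": None},   # missing reading: to be dropped
    {"pitcher": "B", "speed_mph": 88.7},
    {"pitcher": "B", "speed_mph": 90.3},
]

# Cleaning: discard rows with missing values.
clean = [r for r in rows if r["speed_mph"] is not None]

# A question: what is each pitcher's average speed?
by_pitcher = {}
for r in clean:
    by_pitcher.setdefault(r["pitcher"], []).append(r["speed_mph"])
averages = {p: mean(speeds) for p, speeds in by_pitcher.items()}
print(averages)
```

Real projects replace the hard-coded list with a CSV or API download and the single question with an iterative cycle of cleaning, querying, and visualizing, but the shape of the work is the same.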
Meeting Time: Thursday 9:30-10:50 AM - Room 301 Computer Science Building
Abstract: Today there are more than 2.5 billion smartphone users globally, and by 2020, some estimates project over 6 billion smartphone subscriptions. This is an incredible number of mobile computers, each with a mobile broadband connection and a host of sensors, including cameras, GPS, accelerometers, and barometers. The overall goal of this IW seminar is to design, develop, and experiment with mobile technology that can be used to help individuals or communities. The goal is not just to "write an app," but rather to produce some innovative approach to a problem and demonstrate/evaluate its utility and benefits. Application areas include, but are not limited to: environment & climate, social activism, civic computing, health care, philanthropy, and crowdsourcing. In general, IW projects must have an impact - locally, nationally, or even globally. Students are highly encouraged to use and/or extend open source platforms. Projects can utilize any combination of mobile devices (e.g., Android smartphones), cloud-based backends (e.g., AWS), open APIs/data (many), hardware sensors (e.g., Raspberry Pi), augmented reality (e.g., Google Cardboard), and programmable UAVs.