Fall 2023

Co-instructor. COS 126: General Computer Science.

Senior Thesis Adviser: Eunpyul Joy Cho '24. FitAD: A Framework for Enhancing YouTube Workout Videos with Audio Description.

Spring 2023

Co-instructor. COS 126: General Computer Science.

Senior Thesis Adviser: Morgan Teman '23. A Music Visualization Framework Inspired by Chromesthesia.

Senior Thesis Adviser: Nicholas Sudarsky '23. Designing, implementing, and evaluating approaches to automating development of polylingual software.

Senior Thesis Adviser: Jennifer Secrest '23. JoinMe: An On-the-Spot Hangout Coordination Framework.

Fall 2022

Co-instructor. COS 126: General Computer Science.

Spring 2022

Co-instructor. COS 126: General Computer Science.

Senior Thesis Adviser: Olivia Kane '22. Applying a Social Reputation Network to Determine the Impact of Twitter Influencer Sentiment on NFT Market Activity.

Fall 2021

Co-instructor. COS 126: General Computer Science.

Spring 2021

Co-instructor. COS 126: General Computer Science.

Senior Thesis Adviser: Haley Zeng '21. Classroom Visual Assistant: A Tool to Help Visually Impaired Students See Lesson Material in the Classroom.

Senior Thesis Adviser: Matthew Fastow '21. Leveraging Deep Learning to Provide Automated Assistance for Physical Rehabilitation Exercises.

Independent Work Adviser: Elise Colter '21. MemoryLane: A Framework for Geospatial Visualization of User-Generated Image Collections.

This paper describes the development and evaluation of MemoryLane, a framework for the spatial visualization of user-generated collections of images. A map-based web interface allows users to upload and geotag their personal images, enabling the task of attaching geospatial and contextual metadata to individual photographs in a collection to be crowdsourced. The interface offers a unique way to visualize images within the context of their location and place in history. The goal of the framework is to provide a better spatial and temporal interface for the way photographs are visualized and to make compiling and presenting collections of images more accessible. Evaluation found that MemoryLane offers a distinctive interface with more spatial and temporal context than is typically found on map-based platforms, and that it would be of particular use to historical organizations for presenting and gathering information about their collections.

Fall 2020

Co-instructor. COS 126: General Computer Science.

Spring 2020

Instructor / Adviser. Independent Work: Random Apps of Kindness - Assistive Technologies and User Interfaces.

Co-instructor. COS 126: General Computer Science.

Senior Thesis Adviser: Zoya Shoaib '20. Increasing Access to Prenatal Care in Developing Regions.

Spring 2020 - Random Apps of Kindness

Jamie Guo '21. A mobile framework for closed captioning user generated videos.

Abstract: Closed captioning makes audiovisual media accessible to deaf and hearing-impaired individuals. Currently, closed captioning is widely available in media produced and distributed by broadcasters, television service providers, and other organizations that provide services open to the public; however, a growing proportion of audiovisual media is being created and viewed on mobile devices by individual users who do not have readily available tools to create or view captions. This paper describes the development and evaluation of a new mobile framework for closed captioning of user-generated videos using automatic speech recognition (ASR) technology. The framework is demonstrated through a proof-of-concept application that outperforms a similar existing mobile app and presents a simple and approachable way for mobile users to closed-caption their videos.
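A framework like this ultimately has to turn time-stamped ASR output into a caption file. The abstract does not describe the internals; purely as an illustration, the sketch below groups hypothetical `(word, start, end)` tuples from an ASR engine into cues in the widely used SubRip (SRT) format:

```python
def srt_timestamp(seconds):
    """Format a time in seconds as an SRT timestamp (HH:MM:SS,mmm)."""
    ms = int(round(seconds * 1000))
    h, rem = divmod(ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def words_to_srt(words, max_words=7):
    """Group (word, start_sec, end_sec) tuples into numbered SRT cues."""
    cues = []
    for i in range(0, len(words), max_words):
        chunk = words[i:i + max_words]
        start, end = chunk[0][1], chunk[-1][2]
        text = " ".join(w for w, _, _ in chunk)
        cues.append(f"{len(cues) + 1}\n{srt_timestamp(start)} --> {srt_timestamp(end)}\n{text}")
    return "\n\n".join(cues)
```

For example, `words_to_srt([("hello", 0.0, 0.4), ("world", 0.5, 0.9)])` produces a single cue spanning 00:00:00,000 to 00:00:00,900.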

Lauren Tang '21. Blink pattern detection to activate commands.

Abstract: This paper describes a head-position and eye-blink detection algorithm used to control the desktop mouse. In this mouse control system, head position controls the cursor position, and particular types of blinks are mapped to specific mouse commands. The motivation for this control system is that using a computer is vital for many everyday tasks, so the goal of this project is to provide an assistive tool that lets people with physical disabilities use a computer or laptop device. The eye-blink detection algorithm uses the Eye Aspect Ratio (EAR) formula, and the system is implemented with the OpenCV, dlib, and pyautogui libraries in Python.
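The EAR formula referenced here is the standard one from Soukupová and Čech: the sum of the two vertical eye-landmark distances over twice the horizontal distance. As a self-contained sketch (the six landmark points would come from dlib's face detector in the real system; the blink-counting thresholds are illustrative assumptions):

```python
from math import dist

def eye_aspect_ratio(p1, p2, p3, p4, p5, p6):
    """EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|), where p1 and p4 are the
    horizontal eye corners and the other points form two vertical pairs.
    The ratio stays roughly constant while the eye is open and drops
    toward zero during a blink."""
    return (dist(p2, p6) + dist(p3, p5)) / (2 * dist(p1, p4))

def count_blinks(ears, threshold=0.2, min_frames=2):
    """Count blinks in a per-frame EAR sequence: a blink is a run of at
    least `min_frames` consecutive frames below `threshold`."""
    blinks = below = 0
    for ear in ears + [threshold]:  # sentinel flushes a trailing run
        if ear < threshold:
            below += 1
        else:
            if below >= min_frames:
                blinks += 1
            below = 0
    return blinks
```

Requiring several consecutive low-EAR frames is what distinguishes a deliberate blink command from single-frame detection noise.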

Anna Qin '21. Single-switch web browsing with Nomon.

Abstract: Users with severely limited mobility often rely on single-switch mechanisms to navigate computers. “Single-switch” refers to using exactly one type of signal for all user input. This is invaluable to users who do not have the motor range for a computer’s expected input types. However, a computer must be specially equipped with mechanisms to handle such input. In this paper, we implement the single-switch selection method “Nomon” as a Google Chrome extension for browsing. The primary focus is to enable basic hyperlink navigation with only one input key. Although our extension does not offer full browser functionality, we demonstrate the viability of the Nomon method for web browsing and compare its advantages and disadvantages to established single-switch methods.

Labib Hussain '21. Enhancing COS 126 Guitar Hero for hearing-impaired students.

Abstract: This paper describes a general framework for the COS 126 Guitar Hero assignment aimed at helping hearing-impaired students design and analyze their solutions. Using this framework, students can add graphical user interface (GUI) and haptic-based extensions that provide alternative ways to intuitively visualize their solutions. The framework aims to give hearing-impaired students an opportunity to assess their programs using visual and tactile cues while retaining the application's original auditory cues. The framework extends the COS 126 Standard Draw library, which is based on Java’s AWT and Swing GUI libraries, and uses the Arduino C library to implement a haptic feature.

Skyler Liu '21. Automatic detection of computer anxiety.

Abstract: Despite the ubiquity of computers, computer anxiety (CA) still plagues many members of society. Computer anxiety, which can be ascribed to feelings of fear, hostility, and worry about embarrassment when using computers, inhibits people from accessing important and useful technological tools, and is most common among elderly adults. In this paper, we present a way to improve and automate the detection of CA through a single browser session. The tool described in the paper combines established CA detection testing methods with the convenience of open-source browser extensions. In addition, the paper suggests the need for further integrated CA detection methodologies.

Karen Ying '21. Color charts.

Abstract: This paper describes a Chrome extension called Color Charts that aims to increase the readability of charts, graphs, and diagrams on the web. Visuals that depend on color to convey information may be hard to read for people with color blindness. This extension aims to improve upon existing extensions by using researched colorblind-friendly palettes. It allows for filtering of visuals on a web page, with the option to choose between four different colorblind-friendly palettes. Built with vanilla JavaScript, Color Charts interacts with HTML elements in the Chrome browser, using HTML canvas and base64 encoding to filter the images. This ultimately improves the web experience for colorblind users by allowing them to better interpret charts, graphs, and diagrams.
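The abstract does not specify how pixels are remapped to the chosen palette. One common approach, sketched below in Python rather than the extension's JavaScript, snaps each decoded pixel to the nearest color in a colorblind-safe palette; the Okabe-Ito palette used here is a plausible stand-in, not necessarily one of the extension's four:

```python
# Okabe-Ito colorblind-safe palette (an assumption; the extension's
# actual four palettes are not listed in the abstract).
OKABE_ITO = [
    (0, 0, 0),        # black
    (230, 159, 0),    # orange
    (86, 180, 233),   # sky blue
    (0, 158, 115),    # bluish green
    (240, 228, 66),   # yellow
    (0, 114, 178),    # blue
    (213, 94, 0),     # vermillion
    (204, 121, 167),  # reddish purple
]

def nearest_palette_color(rgb, palette=OKABE_ITO):
    """Snap one RGB pixel to the closest palette color by squared distance."""
    return min(palette, key=lambda c: sum((a - b) ** 2 for a, b in zip(rgb, c)))

def recolor(pixels, palette=OKABE_ITO):
    """Remap a flat list of RGB tuples, as would be decoded from a canvas."""
    return [nearest_palette_color(p, palette) for p in pixels]
```

In the extension itself this remapping would run over the pixel data of a canvas-decoded, base64-encoded image.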

Brandy Chen '21. Exploration and situational awareness for the visually impaired.

Abstract: This paper describes a mobile application framework, Explore, aimed at helping the visually impaired interact with their surroundings. Specifically, the goal of Explore is to let visually impaired users more freely explore and develop situational awareness through audio announcements of their current location and points of interest as they walk around towns and neighborhoods. The framework is based on the Google Maps SDK and other Google Maps APIs and uses a user’s preferences and location to recommend relevant points of interest. This ultimately allows visually impaired users to better understand what is around them and make more informed decisions. Based on limited evaluation and testing, Explore succeeds in notifying users of points of interest and helps develop situational awareness to some degree. Improvements can further be made to announcements and filters to better serve visually impaired users so they can explore more safely and be even more situationally aware.

Haley Zeng '21. Improving accessibility of public transit service information for visually impaired users.

Abstract: Visually impaired people are often unable to drive and as such rely more heavily on public transit than the average American. Yet transit service information is not always provided by public transit agencies in formats accessible to visually impaired people. This project strives to improve the accessibility of transit service information by providing a mobile framework for displaying transit schedules and arrival estimates in a format catered to the needs of visually impaired users. The project uses Princeton University’s transit system, TigerTransit, as an instance of this framework. The framework is evaluated based on the features it provides compared to other transit information apps, as well as its generality and ease of implementation with other transit systems.

Olivia Kane '22. Fall Detection for Multiple Sclerosis Using the Apple Watch.

Abstract: This project aims to train an effective machine learning algorithm on the SisFall data and develop an Apple Watch application that uses the trained model to detect falls in real time. Given the near-perfect accuracy rates of the SVM implementation in the UET study, this project follows the same feature extraction procedure as that outlined in the UET research paper [4]. Similarly, this project trains six tabular classifier models using six different algorithms and assesses which of the models achieves the highest accuracy on the validation data.
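The exact UET feature set is not reproduced in this abstract. As a hedged illustration of the general idea, fall-detection pipelines over SisFall-style accelerometer streams typically compute per-window statistics of the acceleration magnitude and feed those into a tabular classifier:

```python
from math import sqrt

def accel_magnitude(samples):
    """Magnitude of each (ax, ay, az) accelerometer sample."""
    return [sqrt(x * x + y * y + z * z) for x, y, z in samples]

def window_features(samples):
    """Simple sliding-window statistics (mean, std, peak) of the
    acceleration magnitude -- the kind of tabular features a fall
    classifier is trained on. The features in the UET paper may differ."""
    mags = accel_magnitude(samples)
    n = len(mags)
    mean = sum(mags) / n
    std = sqrt(sum((m - mean) ** 2 for m in mags) / n)
    return {"mean": mean, "std": std, "peak": max(mags)}
```

A fall typically shows up as a sharp spike in the peak magnitude followed by a change in orientation, which is why peak and spread features are informative.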

Ioana Teodorescu '21. Sound detector for the hearing impaired.

Abstract: This paper serves as the final report for my Independent Work project for the Spring 2020 semester. In it, I propose a design for a smartwatch-based application that alerts the user through a vibrating notification upon detecting important environmental cues. I also present my progress on an algorithm for detecting these cues, using an Apple Watch Series 4 to continuously capture the environmental soundscape.

Justin Yi '21. Automatic Captioning of Image-Based Memes.

Abstract: People with visual impairments can have difficulty navigating and engaging on social media platforms. Communication on these platforms is largely dominated by humorous Internet memes, most often consisting of an image macro, and existing technologies such as screen readers are not fully capable of conveying the contents of the image to the user. This paper proposes tackling this issue by storing the image templates of popular memes in a central database and automatically generating a descriptive caption. When given a meme, the program checks whether the input is in the database and, if so, retrieves the caption of the image. It also extracts any text found in the meme. After evaluation with thirty different memes found on the Internet, we found that the image recognition and text extraction are not reliable for all memes and require improvement if the program is to preserve the humor of the meme for visually impaired users.

Alex Dipasupil '21. Hearing detector for the hearing impaired.

Abstract: This paper details the design, development, and evaluation of an iPhone application that records nearby sounds and returns a prediction of what they might be. In this application, users record audio; upon completion, the application sends the recording to IBM’s Audio Classifier [1], which analyzes it and returns a sound prediction that then pops up in the application. This application aims to assist those with hearing impairments in identifying nearby, potentially crucial sounds that might otherwise go unnoticed, improving both their safety and quality of life.

Raymond Part '21. Text editor for the visually impaired.

Abstract: This paper details the design and development of an application that lets the visually impaired more easily create and edit text documents. The application uses Google’s Docs API and Speech-to-Text API to eliminate the need for a keyboard and to easily capture and write down ideas. The problem right now is that most text editing applications and word processors are not built for the visually impaired, and therefore using such tools may be challenging. Even applications that provide additional accessibility tools may not be optimal, since those applications are still largely built for the visually unimpaired and the tools may not be enough. This project aims to make the writing process easier for those who have trouble typing because of visual impairments by accounting for the inconveniences that many current text editing applications and word processors do not address. Using the two APIs mentioned above, we built an application that allows users to create and edit Google Documents using their voice. The application requires no manual typing and only occasional button clicks, with buttons built sufficiently large that distinguishing between them is easy. In the end, we believe we have created an application that is more convenient for the visually impaired than most currently available word processors, including those that provide tools for increased accessibility.

Daniel Lee '21. Web framework for closed captioning.

Abstract: This paper provides a framework for analyzing and evaluating current closed captioning systems. By first surveying existing closed captioning services in multiple forms, I give context for the framework as well as an analysis of these services. The framework covers many important criteria that closed captioning services must meet, and these criteria are divided into several categories. I also propose my own implementation of a potential closed captioning service that, by the proposed framework, does a better job of providing a comprehensive service than many preexisting services.

Jonathan Salama '21. Prescription identification and tracking on smartphones.

Abstract: Prescription mismanagement is common among older patients and patients with visual impairments. This paper presents a potential solution to this problem through the development of a mobile application framework called SeeScript, which helps patients identify and manage prescription medications more efficiently. The framework uses near-field communication (NFC) to communicate with prescription bottles enabled with radio-frequency identification (RFID) tags and "scan" the prescription information, and uses text-to-speech technology to help patients quickly and confidently identify prescriptions. SeeScript analyzes the prescription data to automatically schedule medication doses through timely notification alerts for each dose. Finally, the framework provides an optical identification feature that identifies prescription drugs with 94.4% accuracy using the device’s camera, optical character recognition, and approximate string searching. SeeScript helps patients with visual impairments confidently identify all types of prescription medications and provides automatic dose scheduling for RFID-enabled prescriptions. By combining identification with scheduling, this application framework is a comprehensive solution to the problem of prescription mismanagement for the visually impaired.
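The "approximate string searching" step pairs naturally with edit distance: noisy OCR text from the camera is matched against a list of known prescription names. The sketch below is illustrative only (the matching rule and the drug names are hypothetical, not SeeScript's actual implementation):

```python
def levenshtein(a, b):
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                # deletion
                           cur[j - 1] + 1,             # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def best_match(ocr_text, known_names, max_distance=3):
    """Match noisy OCR output against known prescription names;
    return None if nothing is close enough to trust."""
    name = min(known_names, key=lambda n: levenshtein(ocr_text.lower(), n.lower()))
    return name if levenshtein(ocr_text.lower(), name.lower()) <= max_distance else None
```

The distance cutoff is what keeps a badly misread label from being confidently matched to the wrong medication, which matters far more here than in ordinary search.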

Lily Zhang '21. Campus building navigation and map accessibility for the visually and physically impaired.

Abstract: Visually and physically impaired students face unique navigation challenges on college campuses. This paper describes the motivation, design, and development of an indoor navigation and accessibility information application that seeks to give those who are visually and physically impaired tools to navigate and understand their surroundings within campus buildings. Using the Google Maps Android SDK as a foundation, this framework aims to increase the accessibility and availability of indoor building information by presenting the information in both visual and audible formats.

Christy Lee '21. A framework for music visualization using fractals.

Abstract: This paper describes a music visualization application inspired by the fractal properties of 1/f music that creates a live fractal design adapting to musical input. The application uses MIDI (Musical Instrument Digital Interface) file components to parameterize a given note’s pitch, onset, offset, and velocity, and uses Processing, an open-source graphics library, to create the resulting display. The goal of this visualizer is to effectively communicate the elements of a musical piece through an appealing visual medium. A potential use of this application is to serve as an extension of music therapy for those who are deaf or hard of hearing and cannot experience music in the same way as those with typical hearing.

Sophie Kader '20. Improving access to accessibility information on Princeton University's campus.

Abstract: Navigating the world can be difficult for people with physical disabilities because of the many obstacles they may face along the way, so it is vitally important to provide accessibility information that aids their wayfinding. Navigating a physical landscape should not be a barrier to higher education for any individual; universities should therefore be more proactive about providing accessibility information to their students and visitors with disabilities. This paper proposes a general framework for developing an Android mobile application containing accessibility information about university campuses, using Princeton University as an example. The application displays accessible routes and building entrances. The mobile framework is key because it allows users to view accessibility information in transit and to visualize their own location relative to that information. Other universities can leverage the results of this project to implement similar solutions on their own campuses.

Ally Dallman '20. Adaptive user interfaces for colorblind users.

Abstract: Color has become an integral element of desktop user interfaces, and yet millions of computer users have some form of colorblindness [1]. Over time, designing for accessibility online has become more of a priority for companies, but many websites still fall short. Most existing solutions that attempt to improve the browsing experience for colorblind users focus on developing a general solution that works for all types of users in all types of scenarios. In reality, however, for a solution to be maximally effective, the user must be given the power to change the colors on any page based on what specifically helps them the most. Thus, this project focuses on developing a Chrome extension that adapts interfaces based on manually inputted user preferences. The extension uses a pop-up interface to display all the background colors used by a page and allows the user to change any color to a new color of their choosing. Using this approach, the resulting manually colored interfaces are easier to use and understand compared to existing solutions. The emphasis on complete customization allows the user to adjust the interface as needed, whether that means replacing one color, increasing the contrast, or adding more color to the page.

Fall 2019

Co-instructor. COS 316: Principles of Computer System Design.

Independent Work Adviser: Kimora Kong '21. IRIS: An Instant Relief Interconnected System.

This paper details the process of creating IRIS, a device that operates as a wireless access point and provides users with an online messaging platform independent of traditional cellular and wireless systems. The creation of IRIS is motivated by the damage often done to communication infrastructure in the wake of natural disasters like hurricanes. IRIS seeks to provide people affected by the devastation of a hurricane with the means to communicate with other people in their surrounding area by programming a Raspberry Pi with open source projects like Rocket.Chat and RaspAP.

Spring 2019

Co-instructor. COS 126: General Computer Science.

Fall 2018

Instructor / Adviser. Independent Work: Random Apps of Kindness - First Response.

Co-instructor. COS 126: General Computer Science.

Fall 2018 - Random Apps of Kindness

Dominick Lam '19. MapCamV2: Geospatial Multimedia Content for Situational Awareness in Disaster Scenarios. Fall 2018.

Abstract: In disaster situations such as hurricanes and earthquakes, situational awareness is crucial to the safety of those involved in the disaster and in helping disaster relief personnel respond to events in the affected areas. During times of disaster, an abundance of multimedia content is produced and published to internet platforms. However, this content lacks geospatial metadata, thus limiting its overall utility to first responders and those in the disaster scenario. Humans often provide descriptions to their uploaded multimedia content that provide some context; however, these descriptions must be produced manually, which is tedious and error-prone. MapCamV1 is a mobile framework that allows for the creation and rendering of multimedia content that includes embedded geospatial information. Core features include multimedia content and geospatial context capture, synchronized playback of multimedia and geospatial content, and before and after context visualization. However, MapCam has limitations in its ability to share content created by the mobile framework, thus limiting the effectiveness of the framework in disaster scenarios. Additionally, while MapCamV1 excels in capturing geospatial embedded video, it is mediocre in its playback for users. In MapCamV2, we seek to create a web framework that improves upon MapCamV1 and addresses its limitations. MapCamV2 is a feature-rich framework that includes dynamic playback, crowd mapping, heat mapping, a user identification system, and compatibility with MapCamV1 and the GeoUGV dataset. Evaluation of MapCamV2 showed its compatibility with multiple forms of output from MapCamV1 and the GeoUGV dataset as well as a noisy data filter with a positive filter rate of 73.9% and a negative filter rate of 95.6%.

Jay Lee '19. ViewShare: Capturing and Distributing Kite-Smartphone Aerial Photography. Fall 2018.

Abstract: Natural disasters can damage or destroy roads, buildings, and other existing infrastructure, posing a threat to both the first responders and the victims of a disaster. In such times, having access to up-to-date maps or real-time imagery of the surroundings is paramount to ensuring the safety and efficiency of the people involved. But due to cost issues, real-time aerial imagery via satellites or drones may not always be available. This paper suggests a low-cost alternative that uses a kite-smartphone apparatus to both capture and distribute aerial imagery. We first describe in-depth the architecture of both the physical apparatus and the software involved, and then evaluate it against currently existing alternative solutions.

Seho Young '19. Project SafeWater: a Mobile App and Portable Device to Measure Water Potability. Fall 2018.

Abstract: Project SafeWater involves a mobile app that pairs with a Bluetooth Arduino-based device with several sensors that analyze water samples and determine whether or not the water is safe to drink. The goal of this project is to make the task of analyzing water content and determining its safety more efficient and accessible, especially for users in disaster situations, such as Flint, Michigan, or Puerto Rico after Hurricane Maria.

Daniel Greenburg '19. A Smartwatch and Smartphone Based Realtime CPR Aid. Fall 2018.

Abstract: The goal of this project is to increase the effectiveness of bystander-administered CPR, specifically by giving the bystander realtime feedback about the timing and depth of the chest compressions being administered, in addition to providing a means of reviewing CPR-related data. If implemented correctly, this goal has immediate real-world applications: bystanders can be given realtime feedback and coaching to better administer CPR to a person in need, and the CPR that the victim receives will be of higher quality and effectiveness.

Helen Zhang '19. ShareProof: Seamless and Secure Photo Sharing for People in Disaster Situations. Fall 2018.

Abstract: Given the unpredictable nature of disaster situations, reliable information flow is important for increasing the public’s awareness and the effectiveness of relief efforts. Photos are a useful way for people to share information and visuals with authorities and other members of the public. However, with social media and malicious parties, it is easy for false photos to spread online and create widespread misinformation. This document details the design, development, and evaluation of ShareProof, which attempts to increase public confidence in photos shared during and after a disaster. It achieves this by providing a way to verify the identity of the photo sender and by securely sharing sensor data with the photo. ShareProof is implemented as an extension of the open-source platform ProofMode, using modules from Steganography for Android and OpenKeychain. In a qualitative evaluation, users reported increased confidence in sensor data embedded within the photo compared to existing methods. Users also reported that ShareProof’s digital signature verification was much easier to use as a built-in function than manual verification outside the application. A quantitative evaluation found that embedding sensor data via steganography increases file size minimally relative to overall file size and does not affect the usability of the photo.
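The claim that embedded sensor data barely perturbs the photo is a general property of least-significant-bit (LSB) steganography, the basic technique behind libraries like the one ShareProof builds on. The sketch below illustrates the general idea on a flat list of channel values, not ShareProof's actual encoding:

```python
def embed_lsb(pixels, payload):
    """Hide `payload` bytes in the least significant bits of `pixels`
    (a flat list of 0-255 channel values), one bit per channel value.
    Each value changes by at most 1, so the image is visually unchanged."""
    bits = [(byte >> shift) & 1 for byte in payload for shift in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("payload too large for this image")
    stego = [(p & ~1) | b for p, b in zip(pixels, bits)]
    return stego + pixels[len(bits):]

def extract_lsb(pixels, n_bytes):
    """Recover `n_bytes` of hidden data from the channel LSBs."""
    out = bytearray()
    for i in range(n_bytes):
        byte = 0
        for bit in pixels[i * 8:(i + 1) * 8]:
            byte = (byte << 1) | (bit & 1)
        out.append(byte)
    return bytes(out)
```

Since only the lowest bit of each channel is touched, the payload (here, sensor readings) adds no bytes to an uncompressed image at all, consistent with the minimal file-size impact reported above.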

Emmanuel Teferi '20. DoorWays: A Mobile App for Locating Egresses and Ingresses in Large Complexes During Emergencies. Fall 2018.

Abstract: This paper describes the research and development of DoorWays, an Android application designed to assist first responders and emergency personnel in locating doors in large complexes during emergencies. All data is collected through Survey Mode features, which use Android Location Services and the APIs of different geocoding services, such as What3Words and Google Plus Codes, to populate DoorWays’s Firebase Realtime Database with points. Geocoded points are displayed on a simple-to-use map GUI. This work leverages the effectiveness of various geocoding systems and Android technologies to provide human-friendly geocodes that can help users gain valuable situational awareness.

Spring 2018

Instructor / Adviser. Independent Work: Random Apps of Kindness - First Response.

Spring 2018 - Random Apps of Kindness

Dominick Lam '19. MapCam: Geospatial Multimedia Content for Situational Awareness in Disaster Scenarios. Spring 2018.

Abstract: In disaster scenarios such as hurricanes, earthquakes, and terror situations, situational awareness is key in keeping those involved in the disaster safe and in helping disaster relief personnel respond to individual events in the affected areas. During times of disaster, vast amounts of multimedia content are produced and published to the internet. However, this content lacks geospatial metadata, thus limiting its overall utility to first responders. Humans often provide descriptions to their uploaded multimedia content that provide location information; however, these descriptions must be produced manually, which is tedious and error-prone. In this project, we seek to create a mobile framework that allows for the creation and rendering of multimedia content that includes embedded geospatial information. Core features include multimedia content and geospatial context capture, synchronized playback of multimedia and geospatial content, before and after context visualization, content sharing, and crowd mapping. This project will be evaluated through self-testing of the mobile framework to determine the accuracy of the geospatial metrics recorded.

Simisola Olofinboba '19. GovCom: Government to Civilian Communication. Spring 2018.

Abstract: In recent years, with the growth of social media use, sites like Facebook and Twitter have become primary modes of communication for millions of users. During disasters, whether social, environmental, or political, these platforms are inundated with both government and citizen responses. In this flood of information, it can be hard for citizens to manage and organize relevant communication from government agencies attempting to provide aid. In addition to searching for relevant information, citizens are burdened with identifying and ignoring misinformation, which tends to spread rapidly during an emergency. The government, on the other hand, is aware of the misinformation that spreads. In response to notifications of rumors and misinformation, many government agencies use their social media accounts to publish retractions and corrections. However, because these fixes normally do not gain as much traction as the original sensationalist stories, the truth does not spread as quickly or as far as the rumors. GovCom attempts to remove these barriers to effective and efficient government-to-civilian communication with a framework that allows government agencies to organize their social media feeds into one convenient database based on the locations for which the feeds are most relevant. GovCom’s Android application then organizes and displays those feeds by location, so that citizens can quickly subscribe to the feeds most relevant to their needs, and finally displays the posts from those feeds in one aggregated feed.

Erica Wu '18. Relief Router: Android Route Mapping for Disaster Relief. Spring 2018.

Abstract: In the aftermath of a natural disaster, time is of the essence. The fate of many people’s lives depends on how quickly they can reach a hospital or how quickly relief supplies can reach them. Yet too often, victims are left stranded during the critical hours and days after a disaster because damaged infrastructure prevents them and first responders from easily navigating the area. This paper presents the development of Relief Router, an Android application which aims to create a map of road obstacle locations and provide drivers and first responders with up to date, obstacle-free routes to help them reach people in need more quickly and efficiently.

Priscilla Bushko '19. Creating Interactive 3D Building Models from 2D Floor Plans and User Input: Emergenc3D. Spring 2018.

Abstract: When devising a response plan, the more information available, the better; this is especially crucial when people’s lives are on the line. Every day, first responders have to make do with the information available when they arrive at a scene. This project creates a 3D visualization of floor plans with custom points of interest, giving first responders a new tool from which to gather information.

Rachana Balasubramanian '19. Rescue Router - A Route Optimizer for Disaster Situations. Spring 2018.

Abstract: In the wake of numerous devastating natural disasters, the need to mobilize and organize first responders and resources for aid has become increasingly critical. In examples such as Hurricane Maria and Hurricane Harvey, the high demand for aid made it important for first responders to take optimal paths through damaged or blocked roads in order to accommodate the dispersed flood victims. However, the map information that dispatchers use to route first responders is often out of date due to road blockages created by emergency situations, from downed power lines to fallen trees. While past research has examined both path optimization and emergency resource allocation, it has not been thoroughly applied to this domain. Other work in the domain of emergency response has yielded valuable information regarding the placement and distribution of supplies, but research on creating paths for first responders focuses more on their daily demands, or on the general division of resources, rather than on a wide-scale emergency situation. For this project, we created Rescue Router, a platform that translates point-based disaster data to road sections and updates these sections as more information is received. This information is then presented to dispatchers in a map interface to make sure first responders can be distributed optimally. We found that this project is a useful alternative to traditional map interfaces like Google Maps, and made it open source so that it can be a useful resource in future disaster situations.
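
Rescue Router's core step of translating point-based disaster data to road sections can be sketched as a nearest-segment snap. The coordinate format, segment representation, and `snap_report` helper below are illustrative assumptions, not the project's actual data model:

```python
import math

def project_to_segment(p, a, b):
    """Return the squared distance from point p to segment a-b (all (x, y) tuples)."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    seg_len2 = dx * dx + dy * dy
    if seg_len2 == 0:                     # degenerate (zero-length) segment
        t = 0.0
    else:                                 # clamp the projection onto the segment
        t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / seg_len2))
    cx, cy = ax + t * dx, ay + t * dy
    return (px - cx) ** 2 + (py - cy) ** 2

def snap_report(report, segments):
    """Attach a point report to the closest road segment by perpendicular distance."""
    return min(segments, key=lambda seg: project_to_segment(report, seg[0], seg[1]))

# A downed-tree report at (2, 1) snaps to the horizontal segment it lies beside.
roads = [((0, 0), (5, 0)), ((0, 0), (0, 5))]
blocked = snap_report((2, 1), roads)
```

Once a report is snapped to a segment, that segment's weight can be raised (or the segment removed) in the routing graph presented to dispatchers.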

Co-instructor. COS 126: General Computer Science.

Senior Thesis Adviser: Jake Levin '18. PantryMate: A Mobile Application to Reduce Household Food Waste.

Senior Thesis Adviser: Luisa Fernanda Goytia Pomeo '18. Amazona: A Framework for Mobile Context-Aware Personal Security.

Fall 2017

Instructor / Adviser. Independent Work: Random Apps of Kindness.

Fall 2017 - Random Apps of Kindness: project abstracts

David Prilutsky ’18. DriveSafe: Data Driven Safe Navigation. Fall 2017.

Abstract: Not all roads are created equal. Some roads and intersections are distinctly more dangerous and challenging than others. However, despite the troves of data on auto accidents and road safety, no popular navigation tool considers road safety when calculating routes. Google Maps, for instance, simply finds the shortest path to the destination. This exacerbates the dangers of driving for inexperienced drivers, who not only lack developed driving instincts but also don’t have the experience to know which roads should be avoided. This research aims to develop a routing application, DriveSafe, that fills this void by providing routes that balance speed with safety. DriveSafe uses the GraphHopper routing engine to route on OpenStreetMap data, leveraging crash data from NYC Open Data to estimate the danger of routes. The project provides a prototype safety-conscious router that has been tested to work within NYC. The code is written to be extremely modular to allow for easy modification of the routing algorithms and to provide a simple platform for future research.
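
DriveSafe's balance of speed and safety can be illustrated, independent of GraphHopper, as a shortest-path search whose edge costs blend travel time with a crash score. The toy graph, crash scores, and `alpha` trade-off parameter below are assumptions for the sketch, not DriveSafe's actual weighting:

```python
import heapq

def safest_route(graph, src, dst, alpha=1.0):
    """Dijkstra over edges (neighbor, travel_time, crash_score); cost blends both.

    alpha=0 gives the pure fastest route; larger alpha penalizes crash-prone roads.
    """
    dist = {src: 0.0}
    prev = {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry
        for v, travel_time, crash_score in graph.get(u, []):
            cost = d + travel_time * (1.0 + alpha * crash_score)
            if cost < dist.get(v, float("inf")):
                dist[v] = cost
                prev[v] = u
                heapq.heappush(pq, (cost, v))
    # Reconstruct the path by walking predecessors back from the destination.
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path))

# Two routes A->D: the short edges through B are crash-prone, so with alpha=1
# the router detours through C even though C is slower in pure travel time.
g = {"A": [("B", 2, 2.0), ("C", 3, 0.0)],
     "B": [("D", 2, 2.0)],
     "C": [("D", 3, 0.0)]}
route = safest_route(g, "A", "D", alpha=1.0)
```

In GraphHopper the same idea would live in a custom weighting class; the modular structure the abstract mentions makes such a cost function easy to swap out.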

Austin Williams '18. SwipeAwayHate: Crowdsourcing Data for a Hate Speech Classifier. Fall 2017.

Abstract: Hate is surprisingly common in society, especially online. Large social media companies are faced with the problem of moderating massive volumes of content, including hate speech. Flagging these posts manually is impractical given the enormous amount of online content, and machine classification for this problem is not yet perfected. In this paper, we seek to set up a framework to help better identify online hate speech. The project combines ideas that worked well in other approaches, like crowdsourcing, neural networks, and ensemble learning. We create an Android app that allows users around the world to contribute to a data set used to train hate speech classifiers, and train an ensemble of neural network classifiers.

Roopa Ramanujam '19. FoodFad: An Android Approach to Simplified Nutrition. Fall 2017

Abstract: Smartphones are becoming increasingly essential to modern life, and with them comes an upswing in applications dedicated to improving people’s lives. One of the most popular categories of apps is diet and nutrition. There are several of these, including widely used ones such as MyFitnessPal and My Diet Coach, which help users track their food intake and nutrition goals. However, many of these apps require a dizzying amount of user input and interaction. Furthermore, although many of these apps do track information, very few of them show the data in a simple visual format. The objective of FoodFad is to simplify the process of getting information about one’s diet by combining the power of innovative technologies such as image recognition and machine learning with the tried-and-true advantages of presenting data visually. The basic architecture of FoodFad involves image recognition, analysis of nutrition data, and its presentation. Clarifai was chosen as the RESTful API service to do the image recognition, and Nutritionix was used to determine the nutritional value of the item. The most important parts of the nutrition data were then stored using the Android Room library and presented over time in the form of graphs and charts using the open-source tool MPAndroidChart. The app was evaluated in both a quantitative and qualitative fashion. The quantitative evaluation tested the accuracy of the image recognition process by seeing how well the application was able to classify photos of items in varied conditions, as well as its ability to classify single items versus composite items. The qualitative evaluation was based on how likely people would be to use the app, as well as its helpfulness in presenting objective data that users might not know. It was found that FoodFad had middling classification accuracy, but the confidence levels of the correct tags were still quite high. In the qualitative evaluation, it was discovered that the users enjoyed the simplicity of the app and were surprised by the difference between how they perceived their food and what the app told them. There are several opportunities to continue this project in further work. The accuracy could be improved by leveraging multiple API services and combining their results to get better classification rates. The app could also incorporate a strategy for estimating the portion size from the photo.

Kristy Yeung '18. FoodFriend: Conversing the way to better food knowledge. Fall 2017.

Abstract: FoodFriend is a conversational agent focused on increasing food nutrition awareness. With a greater understanding of the nutritional make-up of food, people can be better informed in their food choices. Built with accessibility, clarity, and friendliness in mind, FoodFriend can serve as an approachable resource in a mobile user’s quest to learn more about their food. When users open FoodFriend, they are instantly welcomed with a greeting and a guided conversation that leads them to the nutrition information they seek, as well as the percent of the daily recommended value of that nutrient. FoodFriend can respond to typed responses but was designed to operate on verbal conversation, allowing even children to use it and learn about the food they eat.

Manisha Sivaiah '18. Popo: Effectively Mitigating Issues Within the American Emergency-Response System. Fall 2017.

Co-instructor. COS 126: General Computer Science.

Spring 2017

Instructor / Adviser. Independent Work: Random Apps of Kindness.

Spring 2017 - Random Apps of Kindness: project abstracts

Isaac Resendes ’18. SnowOnMyStreet: Safer Travel through Crowdsourcing and Accident Data Analysis. Spring 2017.

Abstract: With technology advancing daily and algorithms being deconstructed and reconstructed, it is not surprising that routing can be done from anywhere, at any time, on a smartphone. Yet currently, the most popular mapping applications do not take winter road surface conditions into account. As a result, the fastest routes, which take you through side streets to cut heavier traffic, could actually present you with more hazardous conditions. SnowOnMyStreet is a platform that uses real-time winter road surface conditions, retrieved by crowdsourcing, to route users along a safer path. The routing algorithm can also take historical accident data into account, helping avoid potentially dangerous intersections. SnowOnMyStreet’s unique approach to collecting winter road surface condition data allows it to provide a safer route during the winter season (and beyond) to a greater degree than Google Maps can. Given no road surface condition or accident data, SnowOnMyStreet provided a route as fast as the default Google Maps route in test cases.

Brandon Lanchang '18. Map the Masses: Crowdsourcing Pedestrian Movement to Visualize Foot Traffic and Augment Map Features. Spring 2017.

Abstract: This paper describes the Map the Masses project which provides a framework for crowdsourcing pedestrian movement data using mobile technology. The project also provides a clear visualization of this data in the form of heatmaps. Map the Masses then explores applications of this data by looking at an algorithmic approach to identifying useful features and details such as crosswalks and sidewalks. A crowd sensitive route finding feature is also included. The application attempts to keep pedestrians as informed of their surroundings as possible in order to maximize safety and utility.

Tyler Kaye '18. RipeOrWrong: Using Deep Learning Networks to Determine the Quality of Fruits and Vegetables Using Thermal Imaging. Spring 2017.

Abstract: This paper details the design, development, and evaluation of RipeOrWrong, an Android application that identifies bruised and under- or over-ripened fruits and vegetables. The application utilizes the FLIR One thermal imaging camera to measure thermal variations below the skin of the fruit or vegetable. Additionally, the application employs a deep convolutional neural network built with the TensorFlow framework to implement the classifier. Although this application focuses on apples, it has been designed and documented so that support for other fruits and vegetables can be added seamlessly. RipeOrWrong leverages both thermal imaging and modern machine learning to prevent consumers from purchasing undesirable fruits and vegetables.

Mun Yong Jang ’18. UpAndDown: Visualization of Bipolar Disorder through Text-Sentiment Analysis and Activity Tracking. Spring 2017.

Abstract: This research project explores the possibility of a visualization aid as a way of monitoring bipolar disorder. More specifically, the application of the research project, UpAndDown, offers a way in which bipolar disorder patients can monitor their own physical activity levels as well as their social sentiment level. UpAndDown achieves this by monitoring the physical activity of the patient using accelerometer data, as well as recording the sentiment of the Short Message Service (SMS) messages sent to and received from their peers. Furthermore, UpAndDown also provides a proof of concept for predictive analytics of bipolar phases, laying the groundwork for phase detection.

Zhan Chen '18. A General Framework for Mobile Environmental Visualization and Sensing. Spring 2017.

Abstract: This project involves the design and implementation of a general framework for mobile environmental visualization and sensing. The project leverages the Android platform to build a mobile application that can utilize both environmental data with high spatial resolution and interpolated data from existing weather stations and air monitors to create visualizations of the environment through augmented reality techniques and temperature gradient maps. This application aims to increase climate change awareness by showcasing changes in the environment through visualization techniques that are absent in interpolated or generalized data visualizations.

Caleb Gum '18. WellVision: A Modular Analytic Framework for Characterizing the Impact of Oil and Gas Wells. Spring 2017.

Abstract: WellVision is a software tool for researchers and policymakers to apply various analyses to existing oil and gas well databases. WellVision was built as a modular, plug-and-play framework that supports the development of analysis modules in any language. Well data is stored in a local database and accessed through a standardized data model. Two initial modules for estimating methane emissions were developed as the first WellVision analyses, and were used to create emissions estimates for well data sets from West Virginia and Pennsylvania.

Co-instructor. COS 126: General Computer Science.

Senior Thesis Adviser: Kelly Zhou '17. Enhancing Physical Therapy Through Motion Tracking: A Kinect-Based System.

Fall 2016

Instructor / Adviser. Independent Work: Random Apps of Kindness.

Fall 2016 - Random Apps of Kindness: project abstracts

Annie Lu '17. NAVAX: Improving Mobile Navigation for the Disabled. Fall 2016.

Abstract: Mapping technology developed in recent years has enabled many people to easily navigate unfamiliar places. However, it often lacks the type of customization that the disabled may find useful, especially the ability to avoid path hazards such as steep inclines or steps. We propose NAVAX to address this need. NAVAX is implemented as an Android mobile application with full offline functionality. Its main advantages are that it provides users with greater customization of the routing experience than most popular navigation applications do, and it gives users the ability to contribute to more accurate routing by using open-source map data from OpenStreetMap. Experimental results show that NAVAX’s performance is on par with that of popular mapping applications when there are no path hazards to avoid, and varies significantly when there are path hazards to be routed around. User testing suggests it is generally easy to use, but its features could be improved upon.

Lucy Lin '18. SelfAware. Fall 2016.

Abstract: Smartphone addiction and overuse is an increasingly present problem in today’s technologically advanced society. This paper presents an Android application called SelfAware, designed to allow people to better understand their personal smartphone usage patterns and correlations. SelfAware uses the Funf Open Sensing Framework to capture data from built-in smartphone sensors. Rather than focus on single, distinct datasets, SelfAware matches different smartphone sensor datasets in order to show user-specific patterns. Making these usage habits visible to users allows for greater recognition of how best to change personal smartphone usage if necessary.

Caleb Gum '18. Crowd-Sourced Market Information for Small Farmers. Fall 2016.

Abstract: AgoraMob is a mobile market information provider that aims to provide useful market information to small farmers, especially in developing areas. Having greater knowledge of market conditions allows farmers to make better business decisions and protects them from predatory intermediary buyers, improving their income and general livelihood. AgoraMob crowdsources its market information from users: users can submit price information for various products in specific regions via SMS, and AgoraMob translates that information into market value estimates. Users can then request these estimates, also via SMS.
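
The crowdsourced SMS workflow described above might look roughly like the following sketch; the message format, keyword, and median-based estimator are assumptions for illustration, not AgoraMob's actual protocol:

```python
import re
from statistics import median

# Hypothetical SMS format: "PRICE <product> <region> <amount>", e.g. "PRICE maize kisumu 42".
SMS_PATTERN = re.compile(r"PRICE\s+(\w+)\s+(\w+)\s+(\d+(?:\.\d+)?)", re.IGNORECASE)

def ingest(message, reports):
    """Parse one SMS and file the price under its (product, region) key."""
    m = SMS_PATTERN.match(message.strip())
    if not m:
        return False
    product, region, amount = m.group(1).lower(), m.group(2).lower(), float(m.group(3))
    reports.setdefault((product, region), []).append(amount)
    return True

def estimate(product, region, reports):
    """Median of crowdsourced reports; robust to a few bad or predatory submissions."""
    prices = reports.get((product.lower(), region.lower()), [])
    return median(prices) if prices else None

reports = {}
for sms in ["PRICE maize kisumu 40", "price maize kisumu 44", "PRICE maize kisumu 41"]:
    ingest(sms, reports)
```

The median, rather than the mean, is one plausible aggregation choice because a single outlier report cannot drag the published estimate far from the going rate.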

Waqarul Islam '18. Watchman: Your Personal Drowsy Driving Guardian. Fall 2016.

Abstract: Watchman is a mobile app designed to prevent drivers from falling asleep at the wheel. Drowsy driving affects over 170 million drivers in America, yet there are no mass-adopted tools to prevent it. Leveraging the Google Vision API on Android, Watchman tracks a driver’s face to detect yawns and blinks. Using these inputs, an algorithm decides whether a driver is at risk of falling asleep and alerts the driver as needed. This paper discusses the approaches similar vision-tracking solutions have taken and how Watchman differentiates itself to be an effective solution for drivers. Improvements to facial tracking accuracy and future work are also discussed.
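
The decision layer on top of the blink and yawn detections could, for example, be a sliding-window rule like this sketch; the window size and thresholds are illustrative, not Watchman's tuned algorithm:

```python
from collections import deque

class DrowsinessMonitor:
    """Sliding-window rule of thumb: alert when long eye closures become frequent."""

    def __init__(self, window_s=60.0, long_blink_s=0.5, max_long_blinks=3):
        self.window_s = window_s              # how far back to look, in seconds
        self.long_blink_s = long_blink_s      # a blink this long counts as a microsleep sign
        self.max_long_blinks = max_long_blinks
        self.events = deque()                 # (timestamp, blink_duration)

    def record_blink(self, timestamp, duration):
        """Feed one blink from the vision tracker; return True if the driver should be alerted."""
        self.events.append((timestamp, duration))
        # Drop events that have aged out of the window.
        while self.events and timestamp - self.events[0][0] > self.window_s:
            self.events.popleft()
        long_blinks = sum(1 for _, d in self.events if d >= self.long_blink_s)
        return long_blinks >= self.max_long_blinks

m = DrowsinessMonitor()
alerts = [m.record_blink(t, d) for t, d in [(1, 0.1), (5, 0.6), (9, 0.7), (12, 0.8)]]
```

A real system would combine several such signals (yawns, blink rate, eye-closure ratio) rather than a single count, but the windowed-threshold structure is a common starting point.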

Ragy Morkos '18. Texty: A Tool for Easy Video Subtitles and Benchmarking Framework for Speech-to-text APIs. Fall 2016.

Abstract: The trend of viewing videos via smartphones and tablets rather than through computers has been continuously rising. However, the proliferation of video content usage on tablets and smartphones leaves out many individuals who suffer from a form of hearing impairment as well as language learners who are not fluent in the video’s language. In this paper, we explore the possibility of an Android application that uses off-the-shelf speech recognition technologies to produce an SRT (subtitle file) that can show subtitles during video playback using almost any well-known Android media player. The bulk of this paper is dedicated to the subtitle alignment algorithm that is proposed, since off-the-shelf speech recognition technologies only provide a mere transcript without any timecodes. Moreover, this application can simultaneously offer a system that will exploit real user feedback regarding the accuracy of the transcription and subtitle timing. This feedback can be valuable for the speech recognition APIs, as well as to further enhance and tweak the subtitle alignment algorithm. Our experimental results show great potential for this Android application, providing completely automatic subtitles with decent quality and performing better than the currently available automatic subtitle generation alternatives.
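
As a baseline for the alignment problem (a transcript with no timecodes), one can distribute lines over the audio duration in proportion to their length and emit SRT cues. This is only a crude stand-in for the paper's alignment algorithm, which the abstract indicates is considerably more involved:

```python
def srt_timestamp(seconds):
    """Format seconds as an SRT timestamp, HH:MM:SS,mmm."""
    ms = int(round(seconds * 1000))
    h, rem = divmod(ms, 3600000)
    m, rem = divmod(rem, 60000)
    s, ms = divmod(rem, 1000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def align_transcript(lines, audio_duration_s):
    """Assign each transcript line a time slice proportional to its character count.

    Assumes a roughly uniform speaking rate; real alignment would anchor cues
    on acoustic features instead.
    """
    total_chars = sum(len(line) for line in lines) or 1
    cues, start = [], 0.0
    for i, line in enumerate(lines, 1):
        end = start + audio_duration_s * len(line) / total_chars
        cues.append(f"{i}\n{srt_timestamp(start)} --> {srt_timestamp(end)}\n{line}\n")
        start = end
    return "\n".join(cues)

srt = align_transcript(["Hello there.", "Welcome to the video."], 10.0)
```

The resulting string can be saved as a `.srt` file, which most Android media players will pick up alongside the video.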

Andrew Zhou '18. Noisee: A General Framework for Tracking Impact of Noise Pollution. Fall 2016.

Abstract: Noise pollution is a growing environmental problem that is both destructive and pervasive. There is a general lack of awareness of noise pollution and its impact on human health. This paper introduces Noisee, an Android mobile application designed to test a general framework for monitoring the impact of noise pollution. Noisee behaves comparably to a fitness tracker for noise exposure: it tracks and processes sound data collected through the system microphone, then displays useful health information through visualization tools to help users understand the impact of their daily noise exposure. The goal of Noisee is to educate users on the harmfulness of noise, so that more attention is brought to regulating noise pollution.

Nico Toy '18. Sonilize: A way to let the visually impaired “hear” their surroundings. Fall 2016.

Abstract: This paper describes the Sonilize application for Android, whose purpose is to help the visually impaired sense their physical environment through musical tones. The hope behind Sonilize is to provide a more comprehensive and general idea of the user’s surroundings than a very specialized application, such as one that reads text out loud, could provide. Using Sonilize in conjunction with such specific applications could hopefully provide a more complete experience. Sonilize is based on depth-sensing cameras and Google’s new Tango API which interprets the data. The application processes the data in order to identify and track nearby objects, and play distinguishable tones for each one, whose volume and panning evolve as a function of the position of the object. This could create the illusion that the objects themselves are emitting tones.

Jonathan Yang '18. WatchOut: Mobile Framework to Help Reduce Collisions between Pedestrians and Bicyclists. Fall 2016.

Abstract: This paper details the design, development, and evaluation of an Android mobile framework to help reduce collisions between pedestrians and bicyclists. The project utilizes the Android device’s location services and hardware sensors to collect location and travel direction data. Then, using Android’s WiFi P2P capabilities, the project transmits the user’s data to the other party and, through a collision detection algorithm, sends alerts to both parties if necessary. The project is designed to serve as an open-source mobile framework targeting pedestrians and bicyclists in campus and metropolitan settings, reducing potential collisions in a practical manner.
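
The collision detection step can be sketched as a closest-point-of-approach test on the two parties' positions and velocities; the alert radius and time horizon below are illustrative parameters, not the project's actual algorithm:

```python
def time_to_closest_approach(p1, v1, p2, v2):
    """Time (s) at which two constant-velocity travelers are closest; 0 if diverging.

    Positions in meters, velocities in m/s, each as an (x, y) tuple.
    """
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]          # relative position
    vx, vy = v2[0] - v1[0], v2[1] - v1[1]          # relative velocity
    vmag2 = vx * vx + vy * vy
    if vmag2 == 0:
        return 0.0                                 # no relative motion
    return max(0.0, -(rx * vx + ry * vy) / vmag2)

def collision_alert(p1, v1, p2, v2, radius_m=2.0, horizon_s=10.0):
    """Alert both parties if they will pass within radius_m of each other soon."""
    t = time_to_closest_approach(p1, v1, p2, v2)
    if t > horizon_s:
        return False
    dx = (p2[0] - p1[0]) + (v2[0] - v1[0]) * t
    dy = (p2[1] - p1[1]) + (v2[1] - v1[1]) * t
    return (dx * dx + dy * dy) ** 0.5 <= radius_m

# A cyclist heading east and a pedestrian walking north toward the same corner.
head_on = collision_alert((0, 0), (5, 0), (20, -20), (0, 5))
```

In practice the inputs would come from GPS fixes and heading sensors, both noisy, so a deployed version would smooth them and use a more generous alert radius.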

Ji Won Shin '18. Play to Learn: Korean-GO! Fall 2016.

Abstract: This paper details the design, development, and evaluation of a place-based augmented reality game called Korean-GO!, whose purpose is teaching Korean to its players. Utilizing the Global Positioning System (GPS), the game focuses on creating an augmented reality game environment from the actual, familiar physical locations around the player. The game also emphasizes socially situated learning through contextual cues provided during various conversations with virtual characters in the game. Players are given opportunities to apply the phrases learned through the game and are rewarded based on their performance. Although Korean-GO! is specifically for learning Korean, the overarching idea of creating a similar place-based augmented reality game aims to address the difficulties associated with learning a second language in general.

Harry Heffernan '18. P2P Connectivity to Encourage Mental Health Transparency. Fall 2016.

Abstract: This paper explores and explains Kuka, an iOS application intended to alleviate some of the anxieties faced by mental health patients. This is a group project, completed with the help of Mitch Hamburger. Mental health and emotional disorders affect a large and growing portion of society, and in taking an interpersonal approach, Kuka hopes to break down external stigmas placed upon these individuals. This paper will focus on the network that Kuka employs to connect users. It uses Apple’s Multipeer Connectivity Framework, a new framework used for establishing peer-to-peer connections between a set of nearby nodes. Using a protocol built on top of this framework, Kuka anonymously tells nearby users how many people in their approximate area are feeling similarly at a given time. This peer-to-peer approach allows for this potentially personal data to be untraceable at the network level, providing an extra level of security for its users.

Mitchell Hamburger '18. Kuka: A Virtual IPAD. Fall 2016.

Abstract: This paper discusses the development of an iOS mobile application called Kuka, built on a peer-to-peer backend, that simulates the functionality of an IPAD (or “Interpersonal Awareness Device”) geared toward mental health transparency. Given the rising awareness of mental health issues and the continued stigma preventing many people from opening up about them, the goal of this project was to show people that they are not alone in suffering from mental health issues, thereby encouraging them to be more transparent about their mental health. In this paper I discuss the motivation behind this goal; other works, scholarly and otherwise, that have attempted to address the same problem; the unique approach we chose to achieve this goal and the process of its implementation as an iOS mobile application; and finally whether Kuka functions as intended and where it has weaknesses and room for improvement.

Jelani Denis '18. Course Q: A Mobile Application for Rating and Recommendation of University Courses. Fall 2016.

Abstract: In this paper we outline the ideation, implementation, and innovation that went into the development of Course Q, a mobile application for university-level course recommendation. The novelty of this tool lies at the intersection of mobile technology and recommendation theory to tackle a modern challenge with a new and unique approach. We hope that Course Q will become a familiar tool for all college students, and for this reason it was designed with scalability and adaptability in mind at every stage of development. This paper will go into detail to examine the logic behind the tool’s collaborative filtering model, and the design decisions that were made to create an easy-to-use, student-facing application.

Co-instructor. COS 126: General Computer Science.

Spring 2016

Instructor / Adviser. Independent Work: Random Apps of Kindness.

Spring 2016 - Random Apps of Kindness: project abstracts

Maddie Clayton ’17. A Framework for Safe Bicycle Navigation. Spring 2016.

Abstract: Bicycle riding is becoming increasingly important for many towns and municipalities. Increased bicycle riding has the potential to reduce traffic on roads, improve the environment by reducing the carbon footprint caused by cars, and improve the health of Americans at a time when obesity is one of the greatest health concerns for the American people. Unfortunately, there are no digital maps on the market today that provide bicycle routes optimized for safety rather than time or distance. Additionally, road assessment data for bicycles is available for many cities and towns, but there is currently no easy way for bicyclists to navigate using these color-coded bike safety maps. This paper describes a new approach to bicycle routing based upon road assessment data made available by towns and cities. The approach involves a framework that integrates road assessment data with underlying map data to formulate an algorithm that calculates safer routes than are currently on the market. The project converts the road assessment data into a form that can be easily manipulated and integrates it into a GraphHopper weighting class, creating a safe route planner that will encourage more commuters to choose bicycles over cars.

Dong Wook Chung '17. A Modular Framework for Mobile Food Technology. Spring 2016.

Abstract: In order to have a healthy diet, it is important to know the nutritional value of the food we eat. However, due to the diversity of food, estimating this value is often difficult. Also, even when nutritional information is available, people rarely take it into account in their diet. In this paper, we propose a modular framework for using mobile technology to better understand what we eat. We divide the framework into three modules: identifying the food in a photo, retrieving the nutritional information of the food, and using the user’s physical information to calculate the nutritional value relative to the user’s daily needs and display it to the user. The framework is implemented as an Android application using RESTful APIs, so by taking a food photo with an Android device, the user can understand how much nutrition is in the food. By modularizing the framework, we suggest the possibility that other developers can test their own API for one module while still being able to use the other two modules as they are.

Philip Adams '17. QuickCap: An Automatic Closed Captioning Framework for User-Generated Videos. Spring 2016.

Abstract: This paper details the development of a lightweight, modular, cross-platform framework to add subtitles to user-generated videos both accurately and automatically, by leveraging leading automatic speech recognition and natural language processing services. The framework includes a novel caption grouping algorithm that exports subtitles to a range of formats.
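
The caption grouping idea (turning per-word recognizer timings into readable cues) can be sketched as follows; the character limit and pause threshold are assumptions for illustration, not QuickCap's actual algorithm:

```python
def group_captions(words, max_chars=32, max_gap_s=0.75):
    """Group ASR word timings into caption cues.

    words: list of (word, start_s, end_s) tuples from a speech recognizer.
    A new cue starts when adding a word would exceed max_chars, or when the
    silence before it exceeds max_gap_s.
    """
    cues, current, start, end = [], [], None, None
    for word, w_start, w_end in words:
        pause = (w_start - end) if end is not None else 0.0
        text = " ".join(current + [word])
        if current and (len(text) > max_chars or pause > max_gap_s):
            cues.append((" ".join(current), start, end))   # flush the current cue
            current, start = [], None
        if start is None:
            start = w_start
        current.append(word)
        end = w_end
    if current:
        cues.append((" ".join(current), start, end))
    return cues

# A long pause between "world" and "this" splits the stream into two cues.
words = [("Hello", 0.0, 0.4), ("world", 0.5, 0.9), ("this", 2.0, 2.3), ("is", 2.4, 2.5)]
cues = group_captions(words)
```

Each resulting (text, start, end) cue maps directly onto one entry of an SRT or WebVTT file, so the exporter layer reduces to string formatting.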

Danielle Pintz ’17. Mobile Language Learning Using Word Association. Spring 2016.

Abstract: This project provides a novel way for a person to learn a language through a mobile framework that helps a user build their vocabulary by carefully selecting new words for the user to learn. This project is particularly relevant for refugees who find themselves in a new country where they need to learn the language; however, it can be used by anyone who wishes to learn a new language. The first component of the project is a Word Generator, which uses Word Association Norms to determine the word most related to a list of words. Given a word list, the algorithm runs each word through the Word Association Norms database, returning a list of all words associated with that word list. It then selects one of the most frequently occurring words from that list and returns it. This algorithm is then used in an iOS language learning application that suggests new words for the user to learn based on the words they already know. The app also uses Google Translate to translate the user’s word list. Upon evaluation with five sample users, it was found that the more words in the user’s word list, the more accurate the Word Generator will be. Overall, the project was very successful, and the resulting iOS application can be freely used by anyone wishing to learn a language.
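
The Word Generator's selection step can be sketched with a toy association-norms table; the `NORMS` data below is invented for illustration, and real norms databases also carry per-pair association strengths rather than bare lists:

```python
from collections import Counter

# A tiny stand-in for a word-association-norms database: cue -> associated words.
NORMS = {
    "bread":  ["butter", "food", "toast"],
    "butter": ["bread", "toast", "knife"],
    "milk":   ["cow", "bread", "toast"],
}

def next_word(known_words, norms=NORMS):
    """Suggest the new word most frequently associated with the learner's vocabulary."""
    counts = Counter()
    for word in known_words:
        for assoc in norms.get(word, []):
            if assoc not in known_words:      # only suggest words not yet learned
                counts[assoc] += 1
    if not counts:
        return None
    return counts.most_common(1)[0][0]

# "toast" is associated with both known words, so it is the strongest suggestion.
suggestion = next_word({"bread", "milk"})
```

Selecting the word with the highest association count keeps each new word anchored to vocabulary the learner already knows, which is the spaced-out growth strategy the abstract describes.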

Kelly Zhou '17. Physical Therapy Motion Tracking: A Mobile Framework. Spring 2016.

Abstract: Physical therapy helps people of all ages improve their physical health, whether through improving balance and flexibility, strengthening muscles, or recovering from injury. It has become a common practice in the United States, and millions of physical therapy exercises are prescribed to patients every year. Unfortunately, most patients do not perform their exercises properly at home and as a result require hands-on guidance from a physical therapist. To address this shortcoming in effective physical therapy practice, we propose a mobile framework that allows users to record themselves performing exercises using an Android device, tracks their motion, assesses the accuracy of their movements in relation to the prescribed exercise, and provides feedback accordingly. In this paper, we present the implementation and evaluation of this framework as a means of improving regular physical therapy practice.

Ann-Elise Siden '17. American Sign Language to English Translation Using Parameter Distinction. Spring 2016.

Abstract: Automated sign language translation is becoming a popular field of study within image recognition. The development of automated sign language translators would benefit the deaf and hearing impaired by eliminating the need for a human translator, allowing communication with English speakers to be more seamless and natural. The existing technology available to accomplish this has limitations in terms of accessibility, cost, and portability, and little work has been done to study the effectiveness of such translators on more user-friendly platforms. This paper describes an alternative method for American Sign Language translation by computing Hu moments and distinguishing different parameters of each user-given sign. The application built on these principles is tested on a set of minimal pairs, and similarity values between the sign given by the user and each candidate in the dictionary are compared to determine the best result. Although the capabilities of this application do not cover the entirety of the language, the results suggest that further development could prove promising.

Odunayo Kusoro '16. Improving First Responder Communication using Seamless Vertical Handoff Between Peer-to-Peer Networks and RESTful Client-Server interfaces. Spring 2016.

Abstract: For first responders, adequate communication is vital in order to succeed in response to emergencies. In the past, first responders utilized communication systems such as radios; however, advancing technology has led to a push towards LTE and broadband communication for phones. There are multiple situational awareness applications for first responders already available. However, the problem with these systems is that they rely on the client-server framework, which leaves first responders vulnerable when a remote server cannot be reached. Under inadequate conditions, the connection between a client and a server can be hampered, preventing successful data transfer. In contrast, under the peer-to-peer framework, data can be transferred between multiple clients without the need for a central remote server. This project integrates the features of both the client-server and peer-to-peer frameworks by creating a vertical handoff system that allows continued, seamless transfer of data when a connection to a remote server cannot be established.

Co-instructor. COS 126: General Computer Science.

Senior Thesis Adviser: Stephanie Marani '16. InfraShare Mobile: Crowdsourcing Plant Health Using Near-Infrared Photography.

Fall 2015

Instructor / Adviser. Independent Work: Apps for the Environment.

Fall 2015 - Apps for the Environment (click to show project abstracts)

Sharon You '17, Garden Guru: An Automated Guide for the Amateur Gardener, Fall 2015.

Abstract: This project addresses the rising dependence on home and community gardens throughout the nation. The goal is to create a mobile framework to help amateurs plan gardens with maximal chances of a high crop yield. This has been done by developing custom recommendation features and a property-surveying tool focused on optimizing conditions critical to small-scale garden success. The project seeks to further automate and add onto existing features in the near future to become a more comprehensive guide for any first-time gardener.

Jessie Chen '16. Money Down the Drain. Fall 2015.

Abstract: This paper details the development and evaluation of Money Down the Drain (MDD), an Android application that monitors how much water from sinks, showers, and toilets the user uses throughout a day, month, and year without an external device. During initial setup of the application, the user inputs the number of water sources he would like to track, each source’s water flow rate, and a recording of the water source, done within the app using a microphone. After setup, the microphone listens in the background for these sources, using an artificial neural network and Wi-Fi signal strength for detection, and calculates and displays the amount of water used in real-time.

Jack O’Brien '17. Impact: The Daily Environmental Impact Calculator Application. Fall 2015.

Abstract: Impact is an environmental calculator application that allows users to comprehend how their everyday activities affect the planet. The goal of Impact is to invite users to view their daily routine in a new light and incentivize them to decrease their consumption habits. Each day, Impact prompts users to input their activities related to water, food, trash, energy, and transportation usage. Then, the application calculates a tangible representation of their environmental impact with regard to gallons of water consumed, carbon footprint, and an "Impact score" unique to this application that takes all activities into account. Users can then see leaderboards in these three categories to check how they rank against others. The application was developed for Android and tested by 11 users, mostly from Princeton University.

Adam Gallagher '16. Abita: Crowdsourcing Geo-tagged Environmental Experiences. Fall 2015.

Abstract: Abita (Haitian Creole for “habitat”) is a mobile application that harnesses the convenience of mobile devices and the power of crowdsourcing to collaboratively collect and share data about experiences in the environment. This paper details the motivation, design principles, and implementation of Abita, with an emphasis on the design and functionality of the Web API which supports the Android application.

Aqeel Phillips '17. Abita: Environmental Appreciation Through Smartphone-based Educational Exploration. Fall 2015.

Abstract: In an effort to create a simple method for sharing location-specific data regarding wildlife sightings, foliage documentation, and other environmentally relevant information, we utilized the popular smartphone platform to construct a tool for the collection and presentation of this data. This paper documents the creation of an Android application created with the intention of promoting environmental awareness and appreciation by collecting and exposing environmentally relevant and location-specific information through a crowd-sourced data collection model.

Graham Turk '17. SolarSource: A general framework for evaluating rooftop solar potential. Fall 2015.

Abstract: Homeowners don’t have a simple and accurate method to determine whether their homes are good candidates for a solar installation. This is problematic because a solar installation can both yield savings on electricity cost and reduce greenhouse gas emissions, recognized as the primary driver of global climate change. In this paper, we propose SolarSource, a universally applicable framework for evaluating rooftop solar potential. The framework is implemented as an Android mobile application and public RESTful API. The key insights are to provide homeowners with tools to construct a roof mapping themselves, to use a crowdsourcing platform (retrieving production statistics from actual solar arrays) to inform our analysis, and to implement the back end of the framework as a public API with an adaptable, open-source architecture. The main advantages of this approach are flexibility and adaptability: by providing tools for the homeowner to map her own roof, we enable universal coverage; a decoupled API provides software developers with access to our analysis tools; and an adaptable and open source architecture enables the open source community to augment the framework. Experimental results demonstrate that our framework produces reasonable estimates for solar potential compared to existing tools. A general-purpose and accurate framework helps uncover the financial benefits of solar for the widest audience possible, thereby facilitating the transition to a carbon-free energy future.

Gregory Magana '17. SolarSource: A Mobile Application for Determining Solar Energy Feasibility. Fall 2015.

Abstract: As the United States turns towards renewable energy solutions, case-specific information about renewable energy feasibility becomes increasingly important for everyone. As such, this paper provides the design, development, and evaluation of SolarSource, an application framework meant to offer the user the ability to determine whether or not solar energy would be financially feasible in their location given their current electricity usage costs, geographic location, roof dimensions, and the weather statistics in their location. Specifically, this paper addresses the user interface design, electricity data gathering functionality, and general information flow of the app. In addition, this paper examines some of the work already done concerning calculating the feasibility of solar energy and will enumerate some opportunities for future improvement to this application framework.

Emily Speyer '17. SolarSource: A Mobile Rooftop Survey Application to Evaluate Solar Energy Harvesting Potential. Fall 2015.

Abstract: In order to help combat climate change, SolarSource is a framework created to encourage consumers to transition from fossil fuel to solar energy electricity generation. Since solar energy is rapidly decreasing in cost, SolarSource provides homeowners with a simple method to determine the cost-effectiveness of a solar panel installation. This section of the framework creates a mobile application that uses Android sensor and location data to capture relevant information on the dimensions of a potential rooftop, its surrounding area, and the sun’s location. This data is used in calculations to predict the rooftop’s sunlit area over the course of a year and to determine the ideal size and specific location within the rooftop for an installation. While most solar installations take a large rectangular form, this framework provides the opportunity to determine the largest solar installation of alternative shapes and forms. Ultimately, such data is provided as input into an algorithm to calculate the cost effectiveness of a potential solar installation, with the hope that it will encourage users to transition to using solar energy for their electricity generation.

Andrea Malleo '16. ComPost: A Mobile App for the Compost Sharing Economy, Fall 2015.

Abstract: Compost the Most is an Android app that addresses the issues of conventional landfill disposal of food scraps and sparsity of composting practices by creating a digital marketplace for the exchange of organic scraps between the producers and consumers. Composting option information is stored on a cloud database, and continually grows through a user input interface wherein app users who wish to advertise their composting site can fill out a form and submit their information for other app users to see. Users who have organic matter to compost can search for local options to bring their compost to, and gain access to contact information as well as environmental impact of choosing to compost or continue disposing in a landfill. This app builds off of existing composting facility databases but leverages knowledge crowdsourcing to facilitate local, community based solutions to food waste.

Zachary Stecker '16. Footprints: Motivating Energy Reduction and Awareness Using Mobile Sensors and Public Data. Fall 2015.

Abstract: This paper describes the motivation, design, and development of Footprints, a mobile application for Android devices that provides users with an automated, estimated energy footprint score on a daily basis. The application uses common mobile sensors and geofence technology to dynamically track a user’s location, and it accesses public data to inform the user how much energy is being consumed in a given building at the current time. Footprints records each daily energy score, allowing the user to track the correlation between campus activity and energy use over the course of any time interval. The application seeks to raise a level of awareness about the amount of energy consumed daily on a college campus by making the data personal and trackable.

Brent Read '16. Disease Detective: A Mobile Application For Agricultural Pathogen Classification. Fall 2015.

Abstract: This paper describes the design and evaluation of Disease Detective, an Android application that allows users to identify diseased plants using their device’s built-in camera. Images are taken by the user and then instantly analyzed using machine learning techniques to return a likelihood that the query plant has some kind of disease. This work builds on existing computationally intensive classification techniques that are ill-fit for mobile devices, as well as on mobile applications that can classify images but lack the domain-specificity needed to differentiate diseased plants.

Serena Zheng '17. NYC Park Events: A Location-Based Notification Application for the Environment. Fall 2015.

Abstract: This paper details the preliminary design, development, and evaluation of NYC Park Events, an Android application that notifies users of free, nearby, public events happening in New York City public parks. Taking advantage of the public datasets available on NYC Open Data, the application parses all event information from the NYC Parks Public Events dataset, sets up geofences for each of the events, and relies on the Android device’s GPS and location sensing capabilities to trigger notifications that inform users of nearby events. The paper also addresses related work on efficient geofencing techniques and explores future work for developing and evaluating the efficient use of geofences in NYC Park Events.
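The core geofence test, "is the user inside an event's radius?", reduces to a great-circle distance check. This is a hedged sketch, not the app's code: the real application uses the Android Geofencing API, the function names here (`haversine_m`, `events_in_range`) are hypothetical, and the event records are illustrative.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two (lat, lon) points,
    assuming a spherical Earth of radius 6,371 km."""
    R = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * R * math.asin(math.sqrt(a))

def events_in_range(user, events, radius_m=500):
    """Return the events whose circular geofence contains the user.
    `user` is a (lat, lon) pair; each event has "lat" and "lon" keys."""
    return [e for e in events
            if haversine_m(user[0], user[1], e["lat"], e["lon"]) <= radius_m]
```

On a device, the platform evaluates this containment test in the background and fires a notification on entry; the sketch only shows the geometry behind the trigger.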

Co-instructor. COS 126: General Computer Science.

Spring 2015

Instructor / Adviser. Independent Work: Civic Computing Projects.

Spring 2015 - Civic Computing (click to show project abstracts)

Michael Buono '16. ProtectYourself: A Mobile Application for Victims of Abuse. Spring 2015.

Abstract: ProtectYourself is a protection framework for mobile applications that run the Android Open Source Project (AOSP) operating system. It is designed to assist victims of stalking and / or abuse, especially those whose mobile phones have been compromised by their adversary. There are many applications that allow malicious individuals to track their target, and the goal of ProtectYourself is to provide a framework to easily alert users when such an application is compromising their privacy.

Michael Hauss '16. An SMS Framework for Collaborative Transportation of Goods in Rural Areas. Spring 2015.

Abstract: In this paper, we present a Short Message Service (SMS) framework that connects individuals in rural communities to each other, and allows them to collaborate to help improve the transportation of goods. The system is deployable immediately, and provides a means by which users can effectively share portage tasks. In areas where roads do not exist and communities are separated by multi-day treks, reducing the aggregate number of delivery trips is extremely valuable. The framework also facilitates faster transportation times, so perishable goods can make it to market with more time to sell, and deadlines can be met. The system we propose is simple and cost effective to use, and only requires that a user has a cellular phone with access to SMS service.

Charlie Wu '16. CrisisStack: A Mobile Communications Platform for Crisis and Conflict, Spring 2015.

Abstract: Communication is vital to first responders, law enforcement, and volunteers, as they need to be able to share information to coordinate their activities and effectively do their jobs. Unfortunately, communication infrastructure often becomes unavailable during crises and conflicts, especially in under-developed countries. Solutions that address damaged or inadequate Internet infrastructure in developed countries are often not scalable to the rest of the world due to issues of cost and transportation. An open source software package called CrisisStack, made for a platform known as BRCK+Pi, seeks to change this. By utilizing open source software and commercial off-the-shelf hardware, CrisisStack provides an alternative communications platform that can be deployed in any part of the globe. This project seeks to help with the design, implementation, and evaluation of CrisisStack.

Mckervin Ceme '16. Increasing Municipal Transparency through Council-Monitoring Applications. Spring 2015.

Abstract: The decisions of municipal governments are not well understood by the average citizen. Even individuals who feel particularly inclined to get involved in local government find that information about recent legislative activity by city councils and other governing bodies can be difficult and cumbersome to access. This project aims to simplify the process of finding legislative actions performed by a governing body. In particular, an older, open-source application called CouncilMatic was updated to provide a richer user interface for individuals to search and browse recent legislation in their local municipality. The project is written for individual towns, and the primary test case is the township of Princeton, New Jersey. The goal of this project was to create a richer front-end experience for users, as well as to provide an easy-to-use backend service for local governments to host for their citizens, in an attempt to increase the transparency of municipal government between average citizens and city officials.

Julia Johnstone '16. Clarence: A Mobile Application for Behavior and Wellness Tracking. Spring 2015.

Abstract: This paper details the design, development and evaluation of Clarence, an Android application that allows users easy access to data pertaining to their physical and social behavior. All data is collected through smartphone sensors without any user effort beyond installing the application and turning on sensing capabilities. Data is then displayed with progress bars and line graphs that can be easily exported by email. Users can also choose to have a list of contacts alerted if they meet or fall below self-set goals and thresholds. This work builds on existing tools that have allowed smartphone users to track their daily behavior in order to better monitor their health, activities, and mood as well as on applications that have sought to give medical professionals more insight into their patients’ behavior at home.

Gabriel Ambruso '15. Finding Computer Time: Load-Balancing of Public Terminals. Spring 2015.

Abstract: Ease of access to public terminals is a concern for the 77 million Americans that use them. These individuals rely on these terminals for the Internet connectivity and productivity tools they provide. Factors that hinder access to public terminals include heavy competition for their use and an inability to determine what time is best to attend a locale with such terminals. This paper outlines a framework for load-balancing public terminals. This framework focuses on load-balancing terminals at a location by providing users with information on the expected number of available terminals at any given time and the peak usage hours on any given day. Using this information, users can align their trips with non-peak hours and lessen the amount of competition for public terminals by spreading out their visits. Once this framework is deployed at multiple locations with terminals in a single area, its data can be used to direct users to the location with the most available terminals and increase the efficiency with which these terminals are used.

Junya Takahashi '15. Visualization of Vacant Properties: A Geospatial Data Visualization Framework for City Data. Spring 2015.

Abstract: The prevalence of vacant properties in a city is both a negative indicator and a harmful presence. The mission of the Trenton Neighborhood Restoration Campaign (TNRC) is to address the problem of the large number of vacant properties in the city of Trenton, NJ. Currently, CartoDB is used to store and visualize geospatial data collected on Trenton land parcels. However, the basic services provided by CartoDB have various shortcomings, including the lack of an advanced search mechanism, a way for a user to provide feedback on specific data points on the map, and a way to output data in an aggregated, downloadable format. These shortcomings are addressed through writing original code and leveraging various open source packages to extend the functionality of the TNRC web application. In developing an ad hoc software solution for the TNRC, the principle of creating a generalizable system was a priority. Although the application developed for this project is specific to vacant properties in Trenton, the process of collecting and displaying geospatial data is extensible to any city with similar needs. Given preliminary positive feedback from the TNRC, cities similar to Trenton should also benefit from having a similar web application customized for their own municipalities. Thus, modularity and code reusability were high priorities in developing the extensions to the TNRC application. A framework composed of a collection of different tools is proposed in order to facilitate the development process of a city geospatial data visualization application.

Alan Zhou '16. CharityMatch: A Mobile Framework for Matching Donations with Charities. Spring 2015.

Abstract: The system of donating currently in place is ineffective in allocating resources to those in need. Research has shown that trust, transparency, and convenience are three key factors that contribute to whether a citizen donates to a charity or not. The recent developments in mobile technology make it possible now to develop a framework that utilizes these mobile devices to enhance the number of donations and maximize the utility gained through these donations. CharityMatch is a framework that allows users to easily match their donations to nearby charities who need the items. Some components of the framework include affinity matching, a notification system, user and charity profiles, and inventory management. The matching is done by considering a variety of factors, including location, level of need, etc. These factors are encoded by sorting the list of matched charities by order of affinity, and then letting the user swipe right to pledge to that charity and left to look at the next possible charity match. After a user pledges to donate the specific item, the user will then have a ’to do’ list that facilitates the physical donation process via reminders and maps. Finally, once a user physically donates, the user will then have a history tab allowing him/her to view all previous donations.

Stephanie Marani '16. PanTweet: Improving Communication Between Donors and Food Pantries. Spring 2015.

Abstract: Food pantries in America are generally low-cost operations that cannot afford to spend much time or money on community outreach. This poses a problem, as food pantries rely heavily on donations from the community and need to be able to communicate their needs with potential donors in order to get these donations. PanTweet serves as a way to help solve this problem, bridging the gap between food pantries and their donors in a way that is not only quick and easy for the food pantry to use, but also convenient for their donors. PanTweet is a web application that integrates with the Twitter platform to automate communication between food pantries and donors, requiring very little effort from either party. Food pantries only need to upload their latest inventory and customize their settings, and PanTweet will automatically publish tweets to the pantry’s Twitter feed, allowing followers to see what items are needed and where these items can be donated.

Glenna Yu '16. Remote Physical Therapy Monitoring Using Smartphone Sensors. Spring 2015.

Abstract: Conventional physical therapy rehabilitation programs require physical therapists to monitor patients as they repeatedly perform sets of exercises to ensure proper form. As this places increased load on the providers of physical therapy and can be costly for many patients, new solutions are needed for easy, low-cost remote physical therapy monitoring systems. Some approaches use gaming interfaces and external sensors; however, the increasing pervasiveness of smartphones has led researchers to try physical activity recognition using only the smartphone sensors. This project attempts to demonstrate the feasibility of using the approach taken for physical activity recognition to determine whether a patient is correctly performing their physical therapy exercises. Standing side leg lifts are used as an example exercise. A classifier is trained to distinguish between side leg lifts done correctly and incorrectly. Different classifiers, combined with different window sizes and sampling rates, are evaluated using 10-fold cross validation and compared against each other; accuracy rates of about 95% are achieved offline, but the accuracy decreases when used in online recognition on a prototype mobile app. Future work may attempt to use a more comprehensive dataset with more variations on the exercises to train the classifier to improve the online recognition accuracy. Despite the limitations, the results suggest that it is possible to distinguish between these subtle variations on movement using only the sensors in the smartphone.
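The 10-fold cross-validation protocol mentioned above can be sketched with a deliberately simple stand-in classifier. This is not the thesis pipeline: the nearest-centroid model and the function names are hypothetical placeholders for the classifiers the project actually evaluated, and the sketch uses NumPy rather than a mobile sensing stack.

```python
import numpy as np

def kfold_indices(n, k=10, seed=0):
    """Shuffle the n sample indices and split them into k folds."""
    idx = np.random.default_rng(seed).permutation(n)
    return np.array_split(idx, k)

def nearest_centroid_cv(X, y, k=10):
    """Mean k-fold cross-validated accuracy of a nearest-centroid
    classifier: each held-out window is labeled with the class whose
    training-set centroid is closest in feature space."""
    folds = kfold_indices(len(X), k)
    accs = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        cents = {c: X[train][y[train] == c].mean(axis=0)
                 for c in np.unique(y[train])}
        preds = [min(cents, key=lambda c: np.linalg.norm(x - cents[c]))
                 for x in X[test]]
        accs.append(np.mean(np.array(preds) == y[test]))
    return float(np.mean(accs))
```

The same loop structure applies regardless of the classifier plugged in; swapping the centroid model for a decision tree or SVM (as a toolkit like Weka or scikit-learn would provide) changes only the fit/predict step.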

Co-instructor. COS 126: General Computer Science.

Fall 2014

Instructor / Adviser. Independent Work: Civic Computing Projects.

Fall 2014 - Civic Computing (click to show project abstracts)

Emanuel Castaneda '16. CrisisWatch: A Mobile Platform to Integrate Disaster Information Resources and Social Media to Enhance Situational Awareness. Fall 2014.

Abstract: Situational awareness during times of crisis requires the integration of information from numerous sources. Many of these sources are fragmented across several points of access, from emergency alerting services, to news media outlets, and even to social media. In recent years the use of social media to support humanitarian efforts in collecting assessments and relevant information about disasters has been increasingly researched. Social media such as Twitter, Facebook, and YouTube are used by victims to help disseminate disaster information, as well as by activists to help propagate it. Increased situational awareness can save the lives of victims, help first responders, and support fundraisers for aid in disaster relief. ReliefWeb, a humanitarian website managed by the United Nations Office for the Coordination of Humanitarian Affairs (OCHA), serves as the foundation for the mobile platform developed under this project. This platform is designed to integrate disaster information with relevant crowdsourcing from tweets through the use of Twitter’s Search API. CrisisWatch, built as the first prototype of this platform, integrates disaster information from ReliefWeb and Twitter to distribute time-sensitive information to help aid disaster relief.

Preceptor. COS 217: Introduction to Programming Systems.