About Me

Hi! I'm a computer science PhD student at Princeton University. My broad interests are statistical machine learning and deep learning.

In Summer 2017 I'll be interning with Google Brain, working on automated model construction and generative models. In Summer 2016 I did a research internship at Google New York, where I worked with Pedro Moreno's group on transfer learning for low-resource speech recognition.

I did my undergraduate studies at the University of Canterbury in New Zealand, where I worked on signal processing algorithms for the intensive care unit. In between undergrad and grad school I worked at a startup (now acquired by Telstra) on stochastic process models of crime and distributed network security software.


Publications

Conference
  1. Blind Attacks on Machine Learners.
    Alex Beatson, Zhaoran Wang and Han Liu.
    Neural Information Processing Systems (NIPS), 2016. [PDF]
  2. Automated Logging of Inspiratory and Expiratory Non-Synchronized Breathing (ALIEN) for Mechanical Ventilation.
    Yeong Shiong Chiew, Christopher Pretty, Alex Beatson, Daniel Glassenbury, Vinny Major, Simon Corbett, Daniel Redmond, Ákos Szlávecz, Geoffrey M Shaw, and J Geoffrey Chase.
    IEEE Engineering in Medicine and Biology Society (EMBS), 2015. Oral. [IEEE]
  3. Assessing Respiratory Mechanics of Reverse-Triggered Breathing Cycles – Case Study of Two Mechanically Ventilated Patients.
    Vincent Major, Simon Corbett, Daniel Redmond, Alex Beatson, Daniel Glassenbury, Yeong Shiong Chiew, Christopher Pretty, Thomas Desaive, Ákos Szlávecz, Balázs Benyó, Geoffrey M Shaw, and J Geoffrey Chase.
    IFAC Symposium on Biological and Medical Systems (BMS), 2015. Oral. [ScienceDirect]
  4. Pressure Reconstruction by Eliminating the Demand Effect of Spontaneous Respiration (PREDATOR) Method for Assessing Respiratory Mechanics of Reverse-Triggered Breathing Cycles.
    Daniel Redmond, Vincent Major, Simon Corbett, Daniel Glassenbury, Alex Beatson, Ákos Szlávecz, Yeong Shiong Chiew, Geoffrey M Shaw, and J Geoffrey Chase.
    IEEE Conference on Biomedical Engineering and Sciences (IECBES), 2014. Oral. [IEEE]
Journal
  1. Respiratory mechanics assessment for reverse-triggered breathing cycles using pressure reconstruction.
    Vincent Major, Simon Corbett, Daniel Redmond, Alex Beatson, Daniel Glassenbury, Yeong Shiong Chiew, Christopher Pretty, Thomas Desaive, Ákos Szlávecz, Balázs Benyó, Geoffrey M Shaw, and J Geoffrey Chase.
    Biomedical Signal Processing and Control, 2016. [ScienceDirect]
  2. The Clinical Utilisation of Respiratory Elastance Software (CURE Soft): a bedside software for real-time respiratory mechanics monitoring and mechanical ventilation management.
    Ákos Szlávecz, Yeong Shiong Chiew, Daniel Redmond, Alex Beatson, Daniel Glassenbury, Simon Corbett, Vincent Major, Christopher Pretty, Geoffrey M Shaw, Balázs Benyó, Thomas Desaive, and J Geoffrey Chase.
    Biomedical Engineering Online, 2014. [PubMed]

Work Experience

  • 2016 Summer. Google - Speech Recognition team. Intern.
    Transfer learning for speech recognition with deep recurrent neural networks
  • 2014 - 2015. Cognevo - Research Assistant.
    Scaling up network security software with Apache Spark
    Point process modelling of crime on street network graphs

Education

  • 2015 - Now. Ph.D. in Computer Science
    Princeton University, USA
    Advisor: Han Liu
    Awarded the Gordon Wu Fellowship in Engineering
  • 2010 - 2014. Bachelor's in Mechatronics Engineering
    University of Canterbury, New Zealand
    Thesis: Optimizing Mechanical Ventilator Therapy
    First Class Honors

Recent Projects

Google
  • Transfer learning for speech recognition with deep recurrent neural networks
    Speech recognition with deep learning works incredibly well when a lot of data is available (e.g. for English). However, relatively little data is available for languages which are less common or whose speakers are less well connected to the internet, and data collection for such languages is very expensive. I worked on transfer learning methods to improve our ability to train models for low-resource languages using signal from higher-resource languages, including co-training models, adapting models, using residual adaptation layers, and explicitly training domain-agnostic representations with domain-adversarial networks; a small sketch of the domain-adversarial idea follows.
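    As a rough illustration of that last idea, here is a minimal PyTorch sketch of a gradient-reversal layer feeding a domain classifier. The layer choices, dimensions, and class names are illustrative assumptions of mine, not the models built during the internship.

    import torch
    import torch.nn as nn

    class GradReverse(torch.autograd.Function):
        """Identity on the forward pass; flips the gradient sign on the backward pass."""
        @staticmethod
        def forward(ctx, x, lambd):
            ctx.lambd = lambd
            return x.view_as(x)

        @staticmethod
        def backward(ctx, grad_output):
            # Reverse (and scale) the gradient flowing back into the shared encoder.
            return -ctx.lambd * grad_output, None

    class DomainAdversarialModel(nn.Module):
        """Shared recurrent encoder + task head + domain head behind gradient reversal."""
        def __init__(self, input_dim=40, hidden_dim=128, num_targets=50, num_domains=2, lambd=1.0):
            super().__init__()
            self.lambd = lambd
            self.encoder = nn.GRU(input_dim, hidden_dim, batch_first=True)
            self.task_head = nn.Linear(hidden_dim, num_targets)    # e.g. per-frame phone labels
            self.domain_head = nn.Linear(hidden_dim, num_domains)  # e.g. language / domain ID

        def forward(self, x):
            h, _ = self.encoder(x)  # (batch, time, hidden)
            task_logits = self.task_head(h)
            # The domain head sees reversed gradients, pushing the shared encoder toward
            # representations the domain classifier cannot separate (domain-agnostic features).
            domain_logits = self.domain_head(GradReverse.apply(h, self.lambd))
            return task_logits, domain_logits

    if __name__ == "__main__":
        model = DomainAdversarialModel()
        x = torch.randn(4, 100, 40)  # 4 utterances, 100 frames, 40-dim acoustic features
        task_logits, domain_logits = model(x)
        print(task_logits.shape, domain_logits.shape)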
Princeton
  • Blind attacks on machine learners
    Machine learning systems are increasingly used in security-sensitive applications, making it important to understand their vulnerability to different attacks. Much recent work identifies optimal data-injection attacks (the addition of falsified data to a training set) when an attacker knows both the full training set and the estimator or algorithm used by the learner. A large body of work also exists on estimators which are resilient to attack by an omniscient attacker. However, in many real-world scenarios an attacker is not omniscient and does not have full knowledge of the learner.

    Our work characterizes an attacker's ability to hurt a learner in precisely this scenario: when the attacker does not observe the learner's training set, their estimator or learning algorithm, or the parameters of the distribution of interest. In this setting, we frame 'hurting' a learner as lower-bounding the learner's minimax rate and thus reducing their effective sample size. We present some simple attacks which effectively reduce the learner's sample size and provide concrete examples of their effects in classical statistical estimation problems.
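    To make the flavour of this concrete, here is a toy simulation (my own illustrative setup, not an experiment from the paper) of a blind injection attack on mean estimation: the attacker never sees the learner's samples or estimator and simply injects points from a fixed distribution, which biases the estimate and, informally, shrinks the learner's effective sample size.

    import numpy as np

    # Toy "blind" data-injection attack on mean estimation (illustrative assumptions only):
    # the attacker cannot see the learner's data or estimator, and injects n_inject points
    # drawn from its own fixed distribution centred at 0.
    rng = np.random.default_rng(0)
    true_mean, n_clean, n_inject, n_trials = 2.0, 100, 100, 5000

    err_clean, err_poisoned = [], []
    for _ in range(n_trials):
        clean = rng.normal(true_mean, 1.0, n_clean)   # learner's genuine samples
        inject = rng.normal(0.0, 1.0, n_inject)       # attacker's blindly chosen points
        err_clean.append((clean.mean() - true_mean) ** 2)
        err_poisoned.append((np.concatenate([clean, inject]).mean() - true_mean) ** 2)

    print("MSE without attack:", np.mean(err_clean))     # roughly 1 / n_clean
    print("MSE with attack:   ", np.mean(err_poisoned))  # dominated by the injected bias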

Contact

Email
  • abeatson AT princeton DOT edu
  • alexub AT gmail DOT com