About Me

I’m a Ph.D. candidate in Information Science at Cornell University, where I am a member of the People-Aware Computing Lab under the supervision of Tanzeem Choudhury. My research focuses on using mobile sensing and machine learning techniques to identify digital markers that can help predict people’s well-being and cognitive performance. I’m also interested in designing intervention technologies that help improve people’s cognitive performance. Apart from my research, I love traveling, reading historical fiction, playing the piano, ballroom dancing, and wine tasting.


Recent Projects

UpTime: Overcoming Distractions during Transitions from Break to Work

The transition from a break back to work is a moment when people are especially susceptible to digital distractions. To support workers’ transitions, we developed a conversational system that senses these transitions and automatically blocks distracting websites temporarily, helping users avoid distractions while still giving them control to take necessary digital breaks. UpTime consists of two major components: a Chrome extension and a Slack chatbot. The browser extension collects information about the user’s computer inactivity, sites visited, CPU usage, etc., and controls access to distracting sites. The chatbot communicates with the browser extension in the background, interacts with the user, and “negotiates” with them when they attempt to access a distracting site.
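
The deployed system lives in the Chrome extension and the Slack chatbot, but the core transition-and-blocking logic can be sketched roughly as follows. This is a minimal, hypothetical Python sketch rather than the actual UpTime code; the inactivity threshold, block window, class and function names, and site list are all illustrative assumptions.

```python
# Minimal sketch (not the actual UpTime implementation) of break-to-work
# transition logic: when a period of computer inactivity ends, open a
# temporary blocking window for distracting sites. All thresholds and
# names here are illustrative assumptions.

import time

BREAK_INACTIVITY_SECS = 5 * 60   # assumed: >= 5 min of inactivity counts as a break
BLOCK_WINDOW_SECS = 15 * 60      # assumed: block distracting sites for 15 min after a break
DISTRACTING_SITES = {"facebook.com", "twitter.com", "youtube.com"}  # illustrative list


class TransitionBlocker:
    def __init__(self):
        self.last_activity = time.time()
        self.block_until = 0.0

    def on_activity(self, now=None):
        """Called whenever keyboard/mouse activity is observed."""
        now = now or time.time()
        idle = now - self.last_activity
        if idle >= BREAK_INACTIVITY_SECS:
            # A break just ended: this is a break-to-work transition,
            # so start a temporary blocking window.
            self.block_until = now + BLOCK_WINDOW_SECS
        self.last_activity = now

    def should_block(self, domain, now=None):
        """Decide whether a visit to `domain` should be blocked right now."""
        now = now or time.time()
        return domain in DISTRACTING_SITES and now < self.block_until


if __name__ == "__main__":
    blocker = TransitionBlocker()
    t0 = time.time()
    blocker.on_activity(now=t0)                # working
    blocker.on_activity(now=t0 + 10 * 60)      # activity resumes after a 10-min break
    print(blocker.should_block("facebook.com", now=t0 + 11 * 60))  # True: inside block window
    print(blocker.should_block("facebook.com", now=t0 + 30 * 60))  # False: window expired
```

In the real system, this decision point is where the Slack chatbot steps in to “negotiate” with the user instead of blocking silently.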


We conducted a 3-week in-situ study with 15 participants at a corporate workplace, comparing UpTime with baseline behavior and with a state-of-the-art system. Our quantitative data showed that participants using UpTime were significantly less likely to visit a distracting site after returning from a break. The survey data showed that participants reported significantly lower perceived stress due to internal coercion when using UpTime, and there was no significant difference in participants’ sense of control among the three conditions. The findings suggest that automatic, temporary blocking at transition points can significantly reduce digital distractions and stress without sacrificing workers’ sense of control. Please see our paper for more details.

UpTime

Using Behavioral Rhythms and Multi-task Learning to Predict Fine-Grained Symptoms of Schizophrenia

Schizophrenia is a severe and complex psychiatric disorder with heterogeneous, dynamic, multi-dimensional symptoms. Early intervention is the most effective approach to preventing symptoms from deteriorating. We aimed to use passive mobile sensing data to build machine learning models that predict fine-grained symptoms of schizophrenia and provide interpretable results that clinicians can base clinical decisions on. We first extracted a variety of rhythm features that correspond to patients’ ultradian, circadian, and infradian rhythms. Then we trained prediction models using multi-task learning, which enables structural coherence across different patients’ models while accounting for individual differences.
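
To give a concrete sense of what a rhythm feature looks like, below is a minimal sketch of a cosinor-style least-squares fit, a standard way to summarize a periodic signal by its mesor, amplitude, and acrophase. This is an illustrative example, not the paper’s exact feature-extraction code; the function name, synthetic data, and chosen period are assumptions, and the same fit with shorter or longer periods would give ultradian or infradian features.

```python
# Minimal sketch of a cosinor-style rhythm feature:
#   y(t) ~ mesor + amplitude * cos(2*pi*t/period + acrophase)
# Illustrative only; not the paper's feature-extraction pipeline.

import numpy as np

def cosinor_features(timestamps_hours, values, period_hours=24.0):
    """Return (mesor, amplitude, acrophase) of a least-squares cosinor fit."""
    t = np.asarray(timestamps_hours, dtype=float)
    y = np.asarray(values, dtype=float)
    w = 2 * np.pi / period_hours
    # Linearized model: y = m + a*cos(w*t) + b*sin(w*t)
    X = np.column_stack([np.ones_like(t), np.cos(w * t), np.sin(w * t)])
    m, a, b = np.linalg.lstsq(X, y, rcond=None)[0]
    amplitude = np.hypot(a, b)
    acrophase = np.arctan2(-b, a)  # phase angle so that y ~ m + amplitude*cos(w*t + acrophase)
    return m, amplitude, acrophase

if __name__ == "__main__":
    # Synthetic hourly "activity level" with a 24-hour rhythm plus noise.
    rng = np.random.default_rng(0)
    t = np.arange(0, 24 * 7, 1.0)  # one week of hourly samples
    y = 5 + 2 * np.cos(2 * np.pi * t / 24 - 1.0) + rng.normal(0, 0.3, t.size)
    mesor, amp, phase = cosinor_features(t, y, period_hours=24.0)
    print(f"mesor={mesor:.2f}, amplitude={amp:.2f}, acrophase={phase:.2f} rad")
```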


The results suggest that models trained with multi-task learning achieve better prediction accuracy than models trained with single-task learning. Moreover, the results provide insights into the relationship between rhythms and the trajectories of different symptoms. For example, the ultradian rhythm of ambient sound is associated with hallucination symptoms, such as seeing things and hearing voices, while the ultradian rhythm of patients’ text-messaging patterns is associated with how clearly they think and how social they feel. The findings shed light on designing intervention tools that can trigger interventions based on changes in patients’ rhythms.


EUREKA

Deterministic Binary Filters for Convolutional Neural Networks

Neural network architectures have achieved breakthrough performance in various applications, such as image and speech recognition. However, these models tend to have large numbers of parameters and therefore require substantial memory to store them, which makes the models hard to deploy on embedded systems or IoT platforms with limited memory resources. Recent research has focused on approaches that reduce a model’s on-device memory footprint while maintaining its performance, such as network compression techniques and novel layer architectures.


We proposed Deterministic Binary Filters (DBF), a new approach that learns the weighting coefficients of predefined orthogonal binary bases instead of learning convolution filters directly. Essentially, each filter in a convolutional network is a linear combination of orthogonal binary vectors that can be generated using orthogonal variable spreading factor (OVSF) codes. This not only reduces the memory footprint but also allows efficient runtime on embedded devices. Moreover, the amount of memory reduction is tunable based on the desired level of accuracy.
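
As a rough illustration of the idea, the sketch below generates orthogonal ±1 bases with the OVSF recursion and composes a filter as a weighted combination of them. It is a simplified numpy example, not the paper’s implementation: in DBF the coefficients are learned end-to-end during training, and filter sizes that are not powers of two are handled here by simple truncation for brevity.

```python
# Illustrative numpy sketch of the DBF idea: fixed orthogonal binary bases
# (via the OVSF recursion) combined with learned scalar coefficients.

import numpy as np

def ovsf_codes(length):
    """Generate `length` mutually orthogonal +/-1 codes of the given length
    (a power of 2), using the OVSF recursion: each code c spawns [c, c] and [c, -c]."""
    assert length > 0 and (length & (length - 1)) == 0, "length must be a power of 2"
    codes = np.array([[1.0]])
    while codes.shape[1] < length:
        codes = np.vstack([np.hstack([codes, codes]),
                           np.hstack([codes, -codes])])
    return codes  # shape (length, length); rows are orthogonal

if __name__ == "__main__":
    basis = ovsf_codes(16)[:4]                 # pick 4 binary bases (a tunable compression knob)
    coeffs = np.array([0.5, -0.2, 0.1, 0.3])   # in DBF, these coefficients would be learned
    flat_filter = coeffs @ basis               # weighted combination of binary vectors
    conv_filter = flat_filter[:9].reshape(3, 3)  # take first 9 entries for a 3x3 filter (sketch-level simplification)
    print(conv_filter)
    # Orthogonality check: basis @ basis.T is a scaled identity.
    print(np.allclose(basis @ basis.T, 16 * np.eye(4)))
```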


Our DBF technique can be integrated into well-known architectures, such as ResNet and SqueezeNet, reducing model size by 75% with only 2% accuracy loss when evaluated on the CIFAR-10 dataset. In addition, we designed a new DBF-based architecture called DBFNet, aimed at more challenging classification tasks on datasets like ImageNet. DBFNet achieves a 75% memory reduction with 40.5% top-1 accuracy and 65.7% top-5 accuracy on ImageNet. The results suggest that convolution filters composed of weighted combinations of orthogonal binary bases can offer a significant reduction in the number of parameters at comparable levels of accuracy, which provides insights for future filter design. Please see our paper for more details.

DBFNet