Panels & Special Sessions

  • Panelists:

    • Jonathan Stange is an Associate Professor of Psychology and Psychiatry & the Behavioral Sciences at the University of Southern California, where he directs the Cognition and Affect Regulation Lab. Jon’s research focuses on understanding mechanisms of affect regulation as they relate to vulnerability for depression and suicide. His work integrates several levels of analysis, including fMRI, the autonomic nervous system, and behavior, both in the lab and in people’s everyday lives with ambulatory assessment. Jon’s work has particularly focused on elucidating how people vary over time and in what contexts their risk for dysregulation may be greatest. The goal of his work is to identify personalized targets for intervention that help people successfully self-regulate and, in doing so, reduce risk for problems such as depression and suicide.

     

    • Dr. Monica Kelly is an Assistant Professor of Medicine at UCLA, a researcher at the VA Greater Los Angeles Geriatric Research, Education and Clinical Center (GRECC), and a licensed clinical psychologist board-certified in Behavioral Sleep Medicine. Her research program centers on improving sleep as a critical contributor to physical and mental health, with a particular focus on older adults and individuals with posttraumatic stress disorder or spinal cord injury. Dr. Kelly’s work frequently leverages remote sleep assessment technologies such as actigraphy and home sleep apnea testing to advance the measurement of sleep in both research and clinical practice. She is the lead author of a recently published book chapter, “Actigraphy and Behavioral Assessments of Sleep and Circadian Disorders,” in the Oxford Handbook of Sleep and Sleep Disorders, underscoring her ongoing commitment to advancing remote sleep health monitoring. Her research is currently funded by the National Heart, Lung, and Blood Institute (NHLBI) and the Craig H. Neilsen Foundation.

     

    • Professor Sara C. Mednick is a cognitive neuroscientist at the University of California, Irvine, and author of The Power of the Downstate (Hachette Go, 2022) and Take a Nap! Change Your Life (Workman). She is passionate about understanding how the brain works through her research into sleep and the autonomic nervous system. Dr. Mednick’s seven-bedroom sleep lab works around the clock to discover methods for boosting cognition through napping; brain stimulation with electricity, sound, and light; and pharmacology. Her lab also investigates how the menstrual cycle and aging affect the brain. Her science has been continuously federally funded (National Institutes of Health, National Science Foundation, Office of Naval Research, DARPA).

     

    Moderator:

    • Dr. Simon is an Assistant Professor in the Department of Pediatrics, School of Medicine, at UC Irvine and the Assistant Director of Research in Pediatric Sleep at the Children’s Hospital of Orange County. She is a licensed clinical psychologist with specialties in pediatric health and behavioral sleep medicine. Her research lab, Sleep, Learning, and Emotions in Pediatrics (SLEEP), investigates the mechanisms linking sleep, memory, and mental health across development in healthy and patient populations, studies memory modification, and develops new sleep-based interventions. She developed the digital platform HowRU and integrates wearables to track youth. Her work is currently funded by the Eunice Kennedy Shriver National Institute of Child Health and Human Development (NICHD) and the Children’s Hospital of Orange County.

     

    • Prof. Brandon Oubre is an Assistant Professor in the Department of Computer Science at the University of Alabama at Birmingham. His research interests include mobile and digital health, with a focus on quantitative behavioral assessment of neurologic disease signs. His work employs data science and machine learning methodologies to model time-series sensor data representing human movement. In the context of digital and behavioral phenotyping and disease assessment, these data have the potential to 1) enable identification of subtle, early disease signs, 2) form the basis of more sensitive and ecologically valid measures of disease progression to support clinical trials, and 3) support patient-centric and personalized care.
  • Venue: IEEE BSN 2025 Conference, UCLA Meyer and Renee Luskin Conference Center

    Sponsored by: UCI Institute for Future Health

    Personalized Conversational Health Agents (CHAs) are emerging as a transformative technology in digital health, capable of providing tailored, context-aware insights and guidance through natural dialogue. They promise to move beyond static dashboards by turning raw data into actionable feedback that supports self-awareness, adherence, and preventive health.

    In this tutorial, we will delve into the core components of Conversational Health Agents and examine how they enable personalized care. We will begin by discussing personal multimodal health data—including signals from wearables, mobile devices, and ecological momentary assessments—as the foundation for understanding individual health in context. We will show how digital and mobile health platforms are critical for integrating these diverse data streams, managing participant profiles, enabling real-time interventions, and ensuring scalability and privacy.

    Next, we will discuss the role of knowledge bases, such as medical knowledge graphs and retrieval-augmented generation systems, and how they allow CHAs to apply domain knowledge in an accurate and explainable way. We will then explore multimodal analytic models, which make it possible to interpret diverse signals and combine subjective and objective data into meaningful insights. Together, we will demonstrate how these elements form the backbone of a new generation of adaptive, user-centered health support systems.

    Finally, through examples and live demonstrations using the open-source CHA platform openCHA (opencha.com), we will show participants how to integrate multimodal data, digital health platforms, and conversational agents into study designs—providing a practical, reproducible framework for building and evaluating engaging, participant-centered digital health solutions. An optional, next-day Bootcamp will follow the tutorial, offering hands-on experience implementing these components end-to-end in openCHA.

  • Unobtrusive, ubiquitous, and cost-effective wearable sensors have demonstrated the potential to revolutionize real-time monitoring of health and wellness by enabling the detection of various physical and mental health states. Machine learning models, in particular, play a crucial role in unlocking the full potential of these data, for example in developing personalized and targeted therapies, enabling just-in-time interventions that aid adherence and retention, or discovering hidden behavioral biomarkers that predict disease progression. However, training machine learning models on wearable sensor data remains challenging due to several key limitations: high susceptibility to noise and artifacts, frequent data missingness, the excessive computational resources required to train large neural networks, and the scarcity of large, high-quality public datasets.

    The emergence of Foundation Models (FMs) offers a new opportunity to address these challenges and accelerate our progress. In other domains such as natural language processing and computer vision, the foundation model paradigm has transformed the development of machine learning solutions to real-world problems. The key property of an FM is that it is pre-trained on a large-scale dataset to ensure that the resulting feature representation encompasses the full complexity of the data domain, which is then validated by demonstrating that the FM can solve multiple downstream tasks without additional representation learning. In computer vision, the field has moved away from collecting individual special-purpose datasets and training task-specific models, and toward leveraging existing foundation model representations, such as DINOv2, to solve a variety of perceptual tasks. A key enabling property is that, while the training datasets are private and not publicly available, the model weights are released to the research community, enabling everyone to benefit from their powerful representations. This transition to utilizing publicly available FMs has not yet occurred for mHealth, and it is a crucial next step.
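    To make the "reuse a frozen representation" recipe above concrete, here is a minimal, self-contained sketch of a linear probe on frozen embeddings. It is purely illustrative: a fixed random projection stands in for a real pre-trained encoder (this is not the actual interface of LSM-2 or DINOv2), and the "wearable" data are simulated.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def frozen_encoder(x, proj):
        # Stand-in for a pre-trained encoder: the weights (proj) are never updated.
        return np.tanh(x @ proj)

    # Toy "wearable" windows: 200 samples, 64 raw features, binary label.
    X = rng.normal(size=(200, 64))
    w_true = rng.normal(size=64)
    y = (X @ w_true > 0).astype(float)

    proj = rng.normal(size=(64, 32)) / 8.0   # frozen "pre-trained" weights
    Z = frozen_encoder(X, proj)              # embeddings; no gradients needed

    # Train only a linear head on the embeddings (ridge-regularized least squares).
    lam = 1e-2
    head = np.linalg.solve(Z.T @ Z + lam * np.eye(Z.shape[1]), Z.T @ (2 * y - 1))
    acc = np.mean(((Z @ head) > 0) == (y > 0.5))
    print(f"linear-probe accuracy: {acc:.2f}")
    ```

    The point of the pattern is that the expensive encoder is trained once (elsewhere, on large-scale data) and shared; each downstream task then costs only a small head, which is why released FM weights benefit the whole community.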

    There is a critical need to unite our community to foster strong collaborations to solve these challenges. While other domains have rapidly integrated breakthroughs in generative AI and large-scale foundation models, the health time-series domain has classically lagged behind, requiring careful reconciliation of these advances with domain-specific challenges. This tutorial aims to teach researchers working on wearable sensors how to utilize our LSM-2 foundation model in their own research. LSM-2 is pre-trained on 40,000,000 hours of wearable sensor data from 60,440 persons, using 128 Google v5e TPUs for 100k steps. LSM-2 achieves state-of-the-art performance on a range of downstream tasks, from insulin resistance regression to activity recognition. Moreover, it is flexible enough to handle varying sensor configurations and differing time windows, and it is specifically designed to be robust to missingness and noise.

    • Program:
    1. Introduction & Landscape (40 min)

    – Welcome & Workshop Goals (10 min) – Organizers

    – Keynote: “Foundation Models for Wearables: Opportunities and Challenges” (30 min)

    Invited speaker discusses the state of FM research in health sensing, open problems, and ethical considerations (beyond any single model)

    2. Technical Deep Dives (70 min)

    – Tutorial 1: “LSM-2 as a Case Study” (30 min)

    Architecture, pre-training, and lessons learned (positioned as one example of FM development)

    – Tutorial 2: “Alternative Approaches to FMs for Wearables” (30 min)

    Guest speaker covers other FM paradigms (e.g., contrastive learning, modular architectures) or datasets

    – Q&A Lightning Round (10 min)

    3. Panel + Breakouts (80 min)

    – Panel: “What Does the Field Need Next?” (40 min)

    Experts debate: Do we need larger FMs? Better benchmarks? Federated learning?

    – Breakout Discussions (40 min)

    Groups rotate through topics:

    (1) Data-sharing incentives,

    (2) Edge deployment challenges,

    (3) Validation standards for FMs

    4. Research Spotlights & Closing (50 min)

    – Rapid Fire Talks (4×5 min)

    Short presentations on FM applications (e.g., “Few-shot learning for rare diseases,” “Cross-modal sensor fusion”)

    – Synthesis Activity: “Priorities for the Community” (20 min)

    Audience votes on top challenges (live poll) + organizers summarize

    – Wrap-up & Resources (10 min)

    Bio:

    Xin Liu (Google) 

    Xin Liu is a Research Scientist at Google Health, where he leads efforts in building frontier AI foundation models for personal health. He directed the research and development behind Google’s Large Sensor Model, Personal Health LLM, SensorLM, and the Personal Health Agent (Google Health AI Models), and is a core contributor to Gemini 2.5’s tabular and data science reasoning capabilities. 

    His research lies at the intersection of large-scale machine learning, AI for health and science, and language model reasoning. He has published over 60 peer-reviewed papers across leading venues in machine learning (NeurIPS, ICLR, ACL, EMNLP), ubiquitous computing (CHI, IMWUT/Ubicomp), and health (Nature Medicine, Nature Communications). His work has been featured in oral presentations at NeurIPS, ICLR, and EMNLP, and has received coverage from public media outlets including The Verge, CNET, Google Keyword, IEEE Spectrum, ACM TechNews, and GeekWire. He received his PhD in Computer Science from the University of Washington, Seattle in 2023 (supported by a Google PhD Fellowship) and his bachelor’s degree with highest honors from the University of Massachusetts Amherst in 2018.

     

    Max Xu (UIUC / Google) 

    Maxwell A. Xu’s research interests are in developing machine learning methodologies for foundation models on wearable sensor data. Max’s research has achieved real-world impact through direct use in Apple’s AirPods Pro devices (https://www.apple.com/newsroom/2025/09/introducing-airpods-pro-3-the-ultimate-audio-experience/) and Google’s Fitbit devices (https://research.google/blog/lsm-2-learning-from-incomplete-wearable-sensor-data/). He is a PhD candidate at the University of Illinois Urbana-Champaign, advised by James M. Rehg, and has received the NSF Graduate Research Fellowship and the Georgia Tech Presidential Fellowship, as well as being a finalist for the Apple AIML Fellowship.

     

    Girish Narayanswamy (UW / Google) 

    Girish Narayanswamy is a final-year PhD student in the University of Washington Ubiquitous Computing Lab, where he is advised by Professor Shwetak Patel. Girish’s research focuses on novel machine learning (ML/AI) and time-series modeling methodologies. He is particularly interested in applications that improve health sensing and expand health access.

     

     

    Samy Abdel-Ghaffar (Google)

    Samy Abdel-Ghaffar is a Senior Research Scientist at Google. His research aims to build technologies, and conduct basic science, that improve people’s mental and physical health through the application of data science, machine learning, and psychological science to sensor data. He received his PhD in cognitive neuroscience from the University of California, Berkeley in 2018 and his bachelor’s degree in computer science from the University of Southern California in 2001.

  • Panelists: Erika Ellison, Michelle Khine, Amanda Watson, Huining Li

    Moderator: Jessilyn Dunn

More panels to be announced!