Workshops & Tutorials

    • Organizers: Xin Liu (Google Health Research, USA), Maxwell A Xu (University of Illinois Urbana-Champaign, USA), Girish Narayanswamy (University of Washington, USA) and Daniel McDuff (Google and University of Washington, USA)
    • Location and Schedule: Luskin main conference room; Wednesday, November 05 2025, 1:00 PM
    • Requirements: *TUTORIAL* Please bring your own portable computer.
    • Preamble:

    Unobtrusive, ubiquitous, and cost-effective wearable sensors have demonstrated the potential to revolutionize real-time monitoring of health and wellness by enabling the detection of various physical and mental health states. Machine learning models play a crucial role in unlocking the full potential of these data, for example by enabling personalized and targeted therapy, delivering just-in-time interventions to aid adherence and retention, or discovering hidden behavioral biomarkers that predict disease progression. However, training machine learning models on wearable sensor data remains challenging due to several key limitations: high susceptibility to noise and artifacts, frequent data missingness, the excessive computational resources required to train large neural networks, and the scarcity of large, high-quality public datasets.

    The emergence of foundation models (FMs) offers a new opportunity to address these challenges and accelerate progress. In other domains, such as natural language processing and computer vision, the foundation model paradigm has transformed the development of machine learning solutions to real-world problems. The key property of an FM is that it is pre-trained on a large-scale dataset so that the resulting feature representation captures the full complexity of the data domain; this is then validated by demonstrating that the FM can solve multiple downstream tasks without additional representation learning. In computer vision, the field has moved away from collecting individual special-purpose datasets and training task-specific models toward leveraging existing foundation model representations, such as DINOv2, to solve a variety of perceptual tasks. A key enabling property is that, while the training datasets are private and not publicly available, the model weights are released to the research community, enabling everyone to benefit from the model's powerful representation. This transition to publicly available FMs has not yet occurred in mHealth, and it is a crucial next step.

    There is a critical need to unite our community and foster strong collaborations to solve these challenges. While other domains have rapidly integrated breakthroughs in generative AI and large-scale foundation models, the health time-series domain has classically lagged behind, requiring careful reconciliation of these advances with domain-specific challenges. This tutorial aims to teach researchers working on wearable sensors how to utilize our LSM-2 foundation model in their own research. LSM-2 was pre-trained on 40,000,000 hours of wearable sensor data from 60,440 persons, using 128 Google v5e TPUs for 100k steps. It achieves state-of-the-art performance on a range of downstream tasks, from insulin resistance regression to activity recognition. It is also flexible enough to accommodate varying sensor configurations and differing time windows, and it is specifically designed to be robust to missingness and noise.
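    The workflow described above can be sketched generically: embed windows of sensor data with a frozen pre-trained encoder, then fit a lightweight head on the embeddings. The `embed` function below is a hypothetical stand-in for the actual LSM-2 interface, and the data and labels are synthetic; this is a minimal sketch of the linear-probe pattern, not the real model:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def embed(windows, dim=64):
    """Hypothetical stand-in for a frozen foundation-model encoder:
    maps each (time x channels) sensor window to a fixed-length
    embedding via a random-but-fixed linear projection."""
    rng = np.random.default_rng(0)
    flat = windows.reshape(len(windows), -1)
    proj = rng.standard_normal((flat.shape[1], dim)) / np.sqrt(flat.shape[1])
    return flat @ proj

# Toy data: 200 windows of 600 samples x 3 channels, with a toy label
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 600, 3))
y = (X[:, :, 0].mean(axis=1) > 0).astype(int)

Z = embed(X)                                    # frozen representation
probe = LogisticRegression(max_iter=1000).fit(Z[:150], y[:150])
acc = probe.score(Z[150:], y[150:])             # linear-probe accuracy
```

    The point of the pattern is that no representation learning happens downstream; only a small per-task probe is trained.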

    • Program:
    1. Introduction & Landscape (40 min)

    – Welcome & Workshop Goals (10 min) – Organizers

    – Keynote: “Foundation Models for Wearables: Opportunities and Challenges” (30 min)

    Invited speaker discusses the state of FM research in health sensing, open problems, and ethical considerations (beyond any single model)

    2. Technical Deep Dives (70 min)

    – Tutorial 1: “LSM-2 as a Case Study” (30 min)

    Architecture, pre-training, and lessons learned (positioned as one example of FM development)

    – Tutorial 2: “Alternative Approaches to FMs for Wearables” (30 min)

    Guest speaker covers other FM paradigms (e.g., contrastive learning, modular architectures) or datasets

    – Q&A Lightning Round (10 min)

    3. Panel + Breakouts (80 min)

    – Panel: “What Does the Field Need Next?” (40 min)

    Experts debate: Do we need larger FMs? Better benchmarks? Federated learning?

    – Breakout Discussions (40 min)

    Groups rotate through topics:

    (1) Data-sharing incentives,

    (2) Edge deployment challenges,

    (3) Validation standards for FMs

    4. Research Spotlights & Closing (50 min)

    – Rapid Fire Talks (4×5 min)

    Short presentations on FM applications (e.g., “Few-shot learning for rare diseases,” “Cross-modal sensor fusion”)

    – Synthesis Activity: “Priorities for the Community” (20 min)

    Audience votes on top challenges (live poll) + organizers summarize

    – Wrap-up & Resources (10 min)

    • Organizers: Miguel Martins (University of Porto, Portugal) and Francesco Renna (University of Porto, Portugal)
    • Location and Schedule: Tannas Room, Engineering VI; Wednesday, November 05 2025, 1:00 PM
    • Requirements: *TUTORIAL* Please bring your own portable computer.
    • Preamble: Cardiovascular disease remains the world’s leading cause of mortality, yet access to expert auscultation is often limited by geography or clinician availability. Automated analysis of phonocardiogram (PCG) signals — capturing heart sounds via digital stethoscopes — offers a transformative route to scalable, early detection of valvular and structural heart abnormalities. This tutorial aims to equip attendees with both the theory and practical skills needed to build and deploy end-to-end deep-learning pipelines for PCG segmentation and classification.
    • Program:
    1. Fundamentals of PCG Signal Processing: From time- and frequency-domain feature extraction to denoising and normalization.
    2. Deep-Learning Architectures for Segmentation: Designing and training convolutional neural networks to identify S1, systole, S2, and diastole phases.
    3. Dataset Curation & Management: Hands-on use of the newly released CirCor DigiScope dataset on the HuggingFace Hub, including preprocessing pipelines and data augmentation strategies.
    4. Practical, Interactive Notebooks: Guided exercises in Google Colab (or locally), covering end-to-end model training, evaluation, and error analysis.
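    As a flavor of item 1, here is a minimal preprocessing sketch (assuming a 4 kHz recording and SciPy only; the tutorial's actual notebooks may differ):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def preprocess_pcg(x, fs=4000, band=(25.0, 400.0)):
    """Band-pass a raw PCG to the heart-sound band and z-score it.

    Fundamental heart sounds carry most of their energy below ~400 Hz,
    so a 25-400 Hz band-pass removes baseline drift and high-frequency
    noise before segmentation or classification."""
    sos = butter(4, band, btype="bandpass", fs=fs, output="sos")
    y = sosfiltfilt(sos, x)                 # zero-phase filtering
    return (y - y.mean()) / (y.std() + 1e-8)

# One second of synthetic noise standing in for a real recording
x = np.random.randn(4000)
y = preprocess_pcg(x)
```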
    • Organizer: Ana Pedroso (Plux Biosignals, Portugal)
    • Location and Schedule: 37-124 Engineering IV; Wednesday, November 05 2025, 1:00 PM
    • Requirements: *TUTORIAL* Please bring your own portable computer.
    • Preamble:

    Modern biosignal research is no longer confined to the lab. From fundamental neuroscience to real-world human performance studies, the demand for flexible, multimodal sensor setups continues to grow. Yet recording clean, reliable sensor data outside controlled environments remains a significant challenge.

    In this workshop, PLUX Biosignals draws on over a decade of experience developing wearable systems for academic, clinical, and industry researchers. We’ll explore common pitfalls in adapting lab-based biosignal setups for real-world use and share proven strategies to overcome them.

    Through practical demonstrations, we’ll focus on system configurations, synchronization methods, and key lessons learned when scaling from pilot projects to longitudinal, in-the-wild studies.

    Participants will gain hands-on experience with sensor placement, artifact detection, and hardware design trade-offs specific to field applications, equipping them to extend their research beyond the lab with confidence.

    • Program:

    A mix of general experience sharing and hands-on practice. Invitations for researchers to participate with real-world use cases are still pending.

    • Organizers: Zeineb Bouzid (Georgia Institute of Technology, Atlanta, GA, USA) and Omer T. Inan (Georgia Institute of Technology, Atlanta, GA, USA)
    • Location and Schedule: 289 – Engineering VI; Wednesday, November 05 2025, 1:00 PM
    • Maximum Number of Participants: ~25
    • Preamble: Clinical automation is an overarching aim of extensive research focused on achieving vigilant monitoring and treatment of patients. In particular, the automatic regulation of physiological variables – also known as physiological closed-loop control – has received keen interest for several decades. Novel physiological closed-loop control systems can achieve continuous, close medical monitoring and accurate treatment delivery autonomously, which reduces clinician workload and eliminates unintentional variations in therapies. This workshop will focus on recent advances in physiological sensing and closed-loop control for healthcare, including wearable and AI-enabled technologies. It will also discuss the main challenges and gaps in the clinical integration of physiological closed-loop control systems.
    • Program:

    (1:00 – 1:10 pm) Opening Remarks

    (1:10 – 1:40 pm) “AI-Enabled Point of Care Sensing” – Speaker: Aydogan Ozcan, Chancellor’s Professor and The Howard Hughes Medical Institute Professor at UCLA

    (1:40 – 2:10 pm) “Continuous Adaptive Physics-Informed Neural Networks for Windkessel-Driven Hemodynamic Parameter Estimation” – Speaker: Dr. Kaan Sel, postdoctoral associate at Laboratory for Information & Decision Systems, Microsystems Technology Laboratories, MIT

    (2:10 – 2:40 pm) “Medical Cyber-Physical Systems for Mental Health” – Speaker: Rose Faghih, Associate Professor of Biomedical Engineering at NYU

    (2:40 – 2:55 pm) Break

    (2:55 – 3:25 pm) “In-Ear Unobtrusive EEG Interfaces for Closed-Loop Auditory Neurofeedback” – Speaker: Dr. Yuchen Xu, researcher at Institute for Neural Computation, UC San Diego.

    (3:25 – 3:55 pm) Panel Discussion – Panel Moderator: Zeineb Bouzid; Topic: Challenges and gaps for clinical integration of physiological closed-loop control systems

    (3:55 – 4:00 pm) Closing Remarks

    • Speakers:
    • Dr. Kaan Sel, a Postdoctoral Associate in Dr. Jafari’s lab, will give the talk on his behalf. Affiliation: Laboratory for Information & Decision Systems, Microsystems Technology Laboratories, MIT.
    • Dr. Yuchen Xu, a Researcher who works closely with Dr. Cauwenberghs, will present on his behalf. Affiliation: Institute for Neural Computation, UC San Diego.
  • Organizers: Eungjoo Lee (University of Arizona, USA) and Hee-Sup Shin (University of Missouri-Kansas City, USA)

    Location: Maxwell 57-124 Engineering IV

    Workshop Schedule: 1:00 – 4:30 pm

    Preamble:

    Advances in digital healthcare demand multidisciplinary collaboration across materials science, embedded systems design, and artificial intelligence, emphasizing seamless connections among researchers from diverse backgrounds. As wearables, implantable sensors, and AI systems continue to evolve, the need to bridge these traditionally fragmented fields becomes increasingly critical.

    This workshop will highlight innovations in detecting physiological and biochemical signals, advanced multimodal data fusion of these biomarkers, and the development of lightweight AI models for real-time, on-device healthcare applications. During the workshop, we will focus on three main areas:

    1. Bioelectronics

    Bioelectronics is rapidly moving from concept to reality, reshaping the ways biomarkers are sensed, recorded, and analyzed. Breakthroughs in fully implantable and epidermal electronics are creating continuous streams of physiological insight while remaining unobtrusive and biocompatible. This session will explore how merging electronics with biology is paving the way for seamless health monitoring, precision therapies, and digitally connected bodies.

    2. Foundational Biomedical Science and Engineering

    Advances in materials science and chemistry are establishing new foundations for biomedical science and engineering. These approaches are enabling more effective therapeutic strategies, enhanced regenerative capacity, and innovative pathways for health monitoring. This session will highlight the state of the art in materials- and chemistry-driven innovations shaping the future of translational medicine.

    3. Machine Learning for Digital Healthcare

    This session focuses on the translation of machine learning technologies into clinical settings. Machine learning leverages multimodal data to enable earlier disease detection and more effective treatment. By converting complex, disparate information into clear clinical insights, it enhances diagnostic accuracy in challenging cases and supports personalized, evidence-based patient care.

    Program

    Opening: Brief introduction (10 min)

    Eungjoo Lee, Ph.D.

    Assistant Professor of Electrical and Computer Engineering,

    University of Arizona

     

    Session 1: Bioelectronics (1:10 – 2:20 pm)

    Talk 1a: Integrated Wearable and Implantable Systems for Multimodal Physiological and Neural Data Fusion

    Abstract: Advances in materials and fabrication concepts for soft electronics, coupled with the miniaturization of wireless energy transfer, enable the creation of high-performance electronic and optoelectronic systems with footprints and physical properties matched to biology. This talk explores the creation of such systems and discusses applications in the context of imperceptible body-worn devices for the assessment of physiology.

    Speaker

    Philipp Gutruf, Ph.D.

    Bio: Philipp Gutruf is an Associate Professor and Associate Department Head in the Biomedical Engineering Department and a Craig M. Berge Faculty Fellow at the University of Arizona. His research group focuses on creating devices that intimately integrate with biological systems by combining innovations in soft materials, photonics, and electronics to create systems with broad impact on health diagnostics, therapeutics, and exploratory neuroscience.

    Talk 1b: Translational Multimodal Bioelectronics: From Biosymbiotic Devices to Clinically Integrated Digital Health

    Abstract: Advances in soft materials, low-power circuits, and wireless energy harvesting now allow imperceptible, long-lived bioelectronic systems that are practical beyond the lab bench. This talk presents translational pathways for multimodal bioelectronics that bridge engineering innovation with clinical impact.

    Speaker

    Yayun Du, Ph.D.

    Bio: Yayun Du is an Assistant Professor of Electrical & Computer Engineering at Vanderbilt University, where she leads a research group at the interface of multimodal wearable/implantable bioelectronics, low-power wireless systems, and edge-AI analytics. Her SYMBIO-X lab’s mission is to translate week-scale, 24/7 biosignal fusion into actionable, patient-centered care.

     

    Session 2: Foundational biomedical science and engineering (2:30 – 3:40 pm)

    Talk 2a: Conductive and Tough Bioadhesive Hydrogel for Tissue Engineering and Biosensing

    Abstract: Hydrogels have been extensively used for various biomedical applications, ranging from tissue engineering to matrices for drug delivery, as well as substrates for biosensing, due to their versatility in structure and physical properties. Although significant progress has been made towards designing hydrogels with tunable properties, engineering tough and elastic hydrogels that resemble native tissue mechano-physical properties is still considered a great challenge. In this presentation, I will outline our recent works on the design of tough bioadhesive hydrogels for soft tissue sealing and regeneration as well as their application as wearable biosensors.

    Speaker

    Nasim Annabi, Ph.D.

    Bio: Nasim Annabi is an Associate Professor in the Department of Chemical and Biomolecular Engineering at the University of California, Los Angeles (UCLA). Her team has pioneered engineering innovative multifunctional biomaterials with tunable properties for tissue regeneration and ultra-strong bioadhesives.

    Talk 2b: Optimizing Lactation Outcomes with Wearable Biosensors

    Abstract: Despite significant advancements in wearable sensors for on-body chemical analysis of biofluids like sweat, devices designed specifically for real-time analysis of human milk to support maternal and infant health are almost nonexistent. During this talk, we will discuss our novel strategies for chemical modification of sensor surface to enhance robustness and accuracy within the complex breast milk matrix. We will also explore new, low-burden methods for frequent milk sampling and share our approaches for controlling against biofouling and calibration drifts to ensure meaningful data from on-body recordings.

    Speaker

    Maral Mousavi, Ph.D.

    Bio: Maral Mousavi is WiSE Gabilan Assistant Professor of Biomedical Engineering at the University of Southern California (USC). Maral’s research is focused on development of point-of-care diagnostics, wearable devices, neural probes, and tools for precision medicine.

     

    Session 3: Machine learning for digital healthcare (3:50 – 4:30 pm)

    Talk 3a: AI Integration in Precision Oncology

    Speaker

    William Hsu, Ph.D.

    Bio: William Hsu is Professor of Radiological Sciences and Bioengineering and Director of the Medical Informatics Ph.D. program at the University of California, Los Angeles, affiliated with the Medical & Imaging Informatics group. He is a biomedical informatician focused on maximizing the utility of multimodal data to detect and treat cancers earlier. His lab harnesses artificial intelligence/machine learning techniques to integrate and transform multimodal data into actionable knowledge.

     

    Closing (10 min)

    Hee-Sup Shin, Ph.D.

    Assistant Professor of Mechanical Engineering,

    University of Missouri-Kansas City

  • Time: November 5th, 1:00 PM to 5:00 PM

    Location: 364 Engineering VI

    Summary of motivation for the workshop:

    The IEEE BSN community, traditionally focused on technological advances, is increasingly recognizing the importance of stakeholder engagement in developing and applying digital health technologies. To reflect this shift, this interactive workshop at BSN 2025 will emphasize user-centered and stakeholder-informed approaches. Drawing on examples such as STRN’s quality frameworks and patient monitoring in low-resource settings, the workshop will provide both conceptual foundations and practical methods for evaluating technologies beyond technical performance, considering usability, relevance, and stakeholder perspectives. Aligned with BSN’s goals of interdisciplinary collaboration and context-specific deployment, the workshop aims to foster dialogue and offer actionable strategies for responsibly integrating digital health tools into practice.

    Program of the workshop:

    The workshop is designed as an interactive session that balances expert input from invited talks with hands-on group engagement during breakout discussions.

    • (1:00 – 1:10 pm) Workshop Welcome
      • Jean-Marie Aerts (Professor at KU Leuven, Belgium)
    • (1:10 – 1:30 pm) Development of a Sports Technology Quality Framework
      • Speaker: Dhruv Seshadri (Assistant Professor at Lehigh University, USA)
    • (1:30 – 1:50 pm) Frameworks Beyond Sports Technology
      • Speaker: Garrett Ash (Assistant Professor of Medicine at Yale University, USA)
    • (1:50 – 2:10 pm) Integrating Stakeholder Priorities into Evaluation Frameworks
      • Speaker: Jasper Gielen (Postdoctoral Researcher at KU Leuven, Belgium)
    • (2:10 – 2:30 pm) Case: Designing Wearable Technology with the Quality Framework in Mind
      • Speaker: Dhruv Seshadri (Assistant Professor at Lehigh University, USA)
    • (2:30 – 2:50 pm) Case: Wearable Technology Use in Sub-Saharan Africa for Diabetes monitoring
      • Speaker: Genet Aboye Tadese (PhD researcher at KU Leuven, Belgium, and Jimma University, Ethiopia)
    • (2:50 – 3:30 pm) Roundtable Breakouts
      • Guided by BSN conference themes, with topics introduced and shaped by participants
    • (3:30 – 4:00 pm) Plenary Wrap-Up

    For your information, please find the contact information below:

  • Time: Wednesday, November 5, 2025, 13:00–16:00

    Location: 6764 Boelter

    Sponsored by: UCI Institute for Future Health

    Personalized Conversational Health Agents (CHAs) are redefining digital health by coupling multimodal patient data with conversational interfaces to provide individualized, context-sensitive support. They supersede static reporting by translating heterogeneous data streams into actionable feedback for self-management, adherence, and preventive care.

    Building on the tutorial’s foundations, this hands-on workshop moves from concepts to code using the open-source CHA platform openCHA (opencha.com). We will focus on turning rich, multimodal health signals (e.g., wearables, sensors, EMAs) into personalized, conversational support.

    We will begin by walking through a guided demonstration of a CHA that integrates multimodal health data and provides personalized feedback through conversation. Participants will then replicate this demo step by step, gaining hands-on experience with the key components of a working CHA system.

    Following the replication, participants will work in teams to design and prototype their own CHA-powered health tools. We will encourage experimentation with new data sources, conversation flows, and feedback strategies, highlighting how different design choices can shape user engagement and personalization.

    By the end of the hands-on workshop, attendees will be able to:

    • Connect multimodal health data streams to a conversational interface.
    • Replicate and extend a sample demo CHA.
    • Prototype and test novel CHA-driven digital health applications.
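    To illustrate the first two outcomes in miniature, the toy function below maps a hypothetical week of step counts to one personalized conversational message. It is plain Python, not the openCHA API (which the workshop covers hands-on), and the names and goal value are illustrative assumptions:

```python
def feedback(name, daily_steps, goal=8000):
    """Turn a week of step counts into a single personalized message,
    the minimal data-to-conversation pattern a CHA builds on."""
    avg = sum(daily_steps) / len(daily_steps)
    if avg >= goal:
        return f"Great work, {name}: you averaged {avg:.0f} steps/day, above your {goal} goal."
    deficit = goal - avg
    return (f"{name}, you averaged {avg:.0f} steps/day this week. "
            f"A short daily walk adding ~{deficit:.0f} steps would close the gap.")

msg = feedback("Alex", [6500, 7200, 8100, 5900, 7000, 7600, 6800])
```

    A real CHA would generate such feedback from multimodal streams via a language model, but the plumbing — data in, personalized conversation out — follows this same shape.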

    This workshop complements the tutorial by moving from conceptual understanding to practical implementation, equipping participants with the skills to create and experiment with personalized conversational health agents in real-world contexts.