Degree

Doctor of Philosophy (PhD)

Department

Division of Electrical & Computer Engineering

Document Type

Dissertation

Abstract

This dissertation presents a cohesive research effort aimed at advancing machine learning systems in resource-constrained environments, with a focus on self-supervised learning for Human Activity Recognition (HAR) and federated learning frameworks. The work is structured around three key areas: addressing labeled data scarcity in HAR, improving training efficiency in wireless federated learning, and designing incentive mechanisms for large-scale federated learning.

First, we introduce a novel self-supervised learning framework called Temporal Contrastive Learning in Human Activity Recognition (TCLHAR). This framework leverages temporal relationships among adjacent time windows, enabling a deeper understanding of human activities as dynamic processes, while reducing the reliance on labeled data. TCLHAR enhances the model’s ability to capture complex patterns in time-series data, a crucial step in addressing data scarcity in HAR.
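The temporal-contrastive idea above can be sketched as an InfoNCE-style objective in which each time window's positive example is its temporal neighbor. This is an illustrative toy loss, not the dissertation's exact TCLHAR formulation; the pairing rule (window i paired with window i+1) and the temperature value are assumptions:

```python
import numpy as np

def temporal_contrastive_loss(embeddings, temperature=0.1):
    """InfoNCE-style loss treating adjacent time windows as positive pairs.

    embeddings: (N, D) array of encoded windows in temporal order.
    Window i's positive is window i+1; all other windows are negatives.
    """
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = z @ z.T / temperature          # pairwise cosine similarities
    np.fill_diagonal(sim, -np.inf)       # exclude self-similarity
    anchors = np.arange(len(z) - 1)      # every window except the last
    logits = sim[anchors]
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # cross-entropy with the next window (column i+1) as the positive class
    return -log_probs[anchors, anchors + 1].mean()
```

Minimizing this loss pulls embeddings of adjacent windows together while pushing apart windows from unrelated moments, which is how temporal structure substitutes for activity labels.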

Next, we tackle the challenge of training efficiency in federated learning. We propose a latency-minimization approach that optimizes client selection and training procedures, taking into account both data and system heterogeneity. This method ensures balanced client participation while guaranteeing model convergence, offering a solution for improving training efficiency in environments with significant data and latency diversity.
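A minimal sketch of latency-aware client selection follows. The scoring rule (latency plus a weighted participation count) and the weight `alpha` are illustrative stand-ins for the dissertation's optimization, chosen only to show how latency and balanced participation can be traded off:

```python
import heapq

def select_clients(latencies, participation_counts, k, alpha=0.5):
    """Pick k clients, trading off round latency against balanced participation.

    latencies: dict client -> estimated per-round latency (compute + comm).
    participation_counts: dict client -> times selected so far.
    Lower score wins: fast clients that have participated least are favored.
    """
    def score(c):
        return latencies[c] + alpha * participation_counts[c]

    chosen = heapq.nsmallest(k, latencies, key=score)
    for c in chosen:
        participation_counts[c] += 1
    # in synchronous FL, round latency is set by the slowest selected client
    round_latency = max(latencies[c] for c in chosen)
    return chosen, round_latency
```

Because the fairness term grows each time a client is picked, frequently selected clients gradually yield to slower but under-represented ones, which mirrors the balanced-participation goal described above.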

Furthermore, we present a Multi-Group Transmission (MGT) scheme using Orthogonal Frequency-Division Multiple Access (OFDMA) to increase participant involvement per training round in federated learning. This approach reduces stochastic variance and accelerates model convergence through optimized device and training scheduling, making it suitable for large-scale deployment where system and data heterogeneity are prevalent.
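The grouping step can be illustrated with a simple heuristic: rank devices by estimated channel gain and deal them round-robin into transmission groups, so each OFDMA group has roughly balanced link quality. This is a toy balancing rule, not the dissertation's optimized device and training scheduler:

```python
def multi_group_schedule(devices, num_groups):
    """Partition devices into transmission groups for OFDMA uploads.

    devices: dict device name -> channel gain estimate.
    Sorting by gain and dealing round-robin keeps each group's aggregate
    link quality roughly even (an illustrative heuristic).
    Returns a list of num_groups lists of device names.
    """
    groups = [[] for _ in range(num_groups)]
    ranked = sorted(devices, key=devices.get, reverse=True)
    for i, d in enumerate(ranked):
        groups[i % num_groups].append(d)
    return groups
```

With several groups uploading on separate subchannels, more devices contribute per round than under single-group scheduling, which is the source of the reduced stochastic variance noted above.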

Finally, we propose DualGFL, a federated learning framework that integrates a dual-level game-theoretical model to jointly optimize client and server utilities. The lower-level game is a coalition formation game, where clients self-organize based on an auction-aware utility function and form Pareto-optimal coalitions. The upper-level game is a multi-attribute auction, in which these coalitions compete for training participation. This design captures both cooperative and competitive dynamics in hierarchical federated learning, effectively improving model performance, incentive alignment, and system scalability in large-scale settings.
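The two levels of DualGFL can be caricatured in a few lines. In this toy version, the lower-level coalition game uses per-client data volume as a stand-in for the auction-aware utility, and the upper-level multi-attribute auction scores coalitions on total data and size with arbitrary illustrative weights; none of these utility functions are the dissertation's actual formulation:

```python
def form_coalitions(client_data, max_size):
    """Lower-level game (toy): each client joins the open coalition with the
    most aggregate data, using data volume as a proxy utility."""
    coalitions = []
    for c in sorted(client_data, key=client_data.get, reverse=True):
        open_cos = [co for co in coalitions if len(co) < max_size]
        if open_cos:
            best = max(open_cos, key=lambda co: sum(client_data[m] for m in co))
            best.append(c)
        else:
            coalitions.append([c])
    return coalitions

def auction(coalitions, client_data, slots):
    """Upper-level game (toy): a multi-attribute auction scoring each
    coalition on total data volume and size, awarding training slots to
    the highest scorers. The 0.1 size weight is illustrative."""
    def score(co):
        return sum(client_data[m] for m in co) + 0.1 * len(co)
    return sorted(coalitions, key=score, reverse=True)[:slots]
```

The separation mirrors the design above: clients cooperate within coalitions at the lower level, while coalitions compete for participation slots at the upper level.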

Date

4-15-2025

Committee Chair

Xiangwei Zhou

DOI

10.31390/gradschool_dissertations.6786

Available for download on Tuesday, April 13, 2032
