Decoding HDF5: Machine Learning File Forensics and Data Injection
Document Type
Conference Proceeding
Publication Date
1-1-2024
Abstract
Machine Learning (ML) is rapidly expanding throughout computing, and ML systems are continuously applied to novel challenges. As the adoption of these systems grows, their security becomes increasingly important. Any security vulnerability within an ML system can jeopardize the integrity of dependent and related systems. Modern ML systems commonly encapsulate trained models in a compact format for storage and distribution; one example is TensorFlow 2 (TF2) and its use of the Hierarchical Data Format 5 (HDF5) file format. This work explores the security implications of TF2's use of the HDF5 format to save trained models, aiming to uncover potential weaknesses via forensic analysis. Specifically, we investigate the injection and detection of foreign data in these packaged files using a custom tool external to TF2, leading to the development of a dedicated forensic analysis tool for TF2's HDF5 model files.
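To make the injection-and-detection idea concrete, the following is a minimal sketch (not the paper's actual tool) of how foreign data could be hidden inside an HDF5 model file and then flagged by a simple forensic scan. It uses the h5py library; the file layout, the "_extra" dataset name, and the whitelist of expected groups are all illustrative assumptions, not details from the paper.

```python
# Hedged sketch: hiding and detecting foreign data in an HDF5 file.
# Assumes h5py is installed; names and the whitelist are hypothetical.
import h5py
import numpy as np

PATH = "model_demo.h5"

# Create a stand-in "model" file with a weights group, roughly as a
# TF2/Keras save would contain.
with h5py.File(PATH, "w") as f:
    f.create_group("model_weights").create_dataset(
        "dense/kernel", data=np.zeros((4, 4), dtype="float32"))

# Inject foreign bytes as an extra dataset the ML framework never reads.
payload = b"hidden payload"
with h5py.File(PATH, "a") as f:
    f.create_dataset("_extra", data=np.frombuffer(payload, dtype="uint8"))

# Detect: walk the HDF5 object tree and flag anything outside the
# expected layout (hypothetical whitelist of top-level groups).
EXPECTED_PREFIXES = ("model_weights",)
suspicious = []
with h5py.File(PATH, "r") as f:
    f.visit(lambda name: suspicious.append(name)
            if not name.startswith(EXPECTED_PREFIXES) else None)

print(suspicious)  # the injected "_extra" dataset is flagged
```

A real forensic tool would go further, e.g. parsing the HDF5 superblock's end-of-file address to find data appended past the logical end of the file, but this illustrates the basic tree-walk approach.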
Publication Source (Journal or Book title)
Lecture Notes of the Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering, LNICST
First Page
193
Last Page
211
Recommended Citation
Walker, C., Baggili, I., & Wang, H. (2024). Decoding HDF5: Machine Learning File Forensics and Data Injection. Lecture Notes of the Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering, LNICST, 570 LNICST, 193-211. https://doi.org/10.1007/978-3-031-56580-9_12