Doctor of Philosophy (PhD)
In this research, we investigated the application of deep reinforcement learning (DRL) to a common manufacturing scheduling optimization problem: min-max makespan minimization. In this application, jobs are scheduled for processing on identical processing units (for instance, identical machines, machining centers, or cells). The optimization goal is to assign the jobs to units so as to minimize the maximum total processing time (i.e., the makespan) on any unit.
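For context only (this sketch is not part of the dissertation, and the function name and interface are illustrative), the min-max makespan objective can be demonstrated with the classic longest-processing-time (LPT) greedy heuristic, a common non-learning baseline for identical machines: sort jobs by decreasing processing time and always assign the next job to the currently least-loaded unit.

```python
import heapq

def lpt_makespan(job_times, num_units):
    """Illustrative LPT greedy for min-max makespan on identical units.

    Sorts jobs by descending processing time, assigns each to the
    currently least-loaded unit, and returns the resulting makespan
    (the maximum total processing time on any unit) plus the assignment.
    """
    # Min-heap of (current_load, unit_index) so the least-loaded
    # unit is always popped first.
    loads = [(0.0, u) for u in range(num_units)]
    heapq.heapify(loads)
    assignment = [[] for _ in range(num_units)]
    for job, t in sorted(enumerate(job_times), key=lambda x: -x[1]):
        load, u = heapq.heappop(loads)
        assignment[u].append(job)
        heapq.heappush(loads, (load + t, u))
    makespan = max(load for load, _ in loads)
    return makespan, assignment

makespan, assignment = lpt_makespan([7, 5, 4, 3, 2], num_units=2)
print(makespan)  # 11.0: loads are 7+3 = 10 and 5+4+2 = 11
```

LPT is provably near-optimal in the worst case but, unlike a learned policy, it cannot exploit structure in the job-time distribution, which is the motivation for the DRL approach studied here.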
Machine learning methods have the potential to "learn" structure in the distribution of job times, which could yield better solution quality and shorter solution times than traditional optimization methods, and to adapt effectively as that distribution changes over time. DRL has been shown to be effective for this type of learning in other problem areas.
The focus of this work was on how to design the DRL model to perform well in this type of application. A prioritization method was developed to improve the training time and accuracy of the DRL model on the min-max makespan problem, and an importance factor was developed to address overfitting and was applied to the prioritized model.
We examined the performance of the prioritized DRL model across different job-time distributions (uniform, beta, and normal) and different problem sizes (numbers of units and jobs). The prioritized DRL model demonstrated both improved solution time and improved solution quality relative to the standard DRL model.
Martinez, Jose Napoleon, "A Deep Reinforcement Learning Approach with Prioritized Experience Replay and Importance Factor for Makespan Minimization in Manufacturing" (2022). LSU Doctoral Dissertations. 5830.