Discrete Event Probabilistic Simulation (DEPS) integrated into a Reinforcement Learning Framework for Optimal Production
Accepted version
Peer-reviewed
Authors
Abstract
This paper introduces an innovative simulation method that significantly accelerates the training of reinforcement learning agents in manufacturing, surpassing the constraints of traditional simulations. While prior Discrete Event Simulations (DES) of production lines have a runtime complexity of $O(NMS)$ for N (batched) products, M machines, and S samples, our DEPS reduces the expected-value computation to a complexity of $O(KM)$ for K time windows and M machines. Furthermore, unlike traditional DES, our DEPS is probabilistic, enabling trustworthy decision-making by providing credible uncertainty and risk bounds. By conditioning product quality on machine parameters, DEPS enables rapid, risk-free AI agent training, paving the way for the integration of such agents into production lines. We (1) demonstrate that our simulation achieves state-of-the-art simulation speed, (2) provide an adaptable open-source framework for probabilistic production-line-level simulation and modelling, and (3) share a physically plausible benchmark environment. Our contribution seeks to set a new standard for industrial AI applications, allowing manufacturers and researchers to leverage the potential of RL agents for improving production-line-level efficiency, process optimisation, and resource allocation.
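The complexity contrast in the abstract can be illustrated with a toy sketch. This is a hypothetical illustration, not the paper's actual DEPS implementation: it contrasts a naive Monte Carlo DES estimate, whose work grows with products × machines × samples ($O(NMS)$), against an analytic expected-value pass over time windows ($O(KM)$). The serial-line model and the machine parameters are assumptions made for the example.

```python
import random


def des_monte_carlo(n_products, machines, n_samples, rng):
    """Naive DES-style estimate of total processing time: O(N*M*S) work.

    Each sample draws a processing time for every product on every machine
    (a toy serial line with no buffers) and the sample means are averaged.
    """
    totals = []
    for _ in range(n_samples):
        total = 0.0
        for _ in range(n_products):
            for mean, std in machines:
                total += max(0.0, rng.gauss(mean, std))
        totals.append(total)
    return sum(totals) / n_samples


def deps_expected_value(n_products, machines, n_windows):
    """DEPS-style estimate: O(K*M) work.

    Because expectation is linear, the expected processing time per product
    can be propagated analytically per time window, so no per-product
    sampling is needed.
    """
    per_product = sum(mean for mean, _ in machines)
    products_per_window = n_products / n_windows
    return sum(products_per_window * per_product for _ in range(n_windows))
```

For 10 products on a 3-machine line, the analytic pass touches only K × M terms yet matches the Monte Carlo average, which needed N × M × S random draws.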
Keywords
Journal ISSN
2212-8271

