Discrete Event Probabilistic Simulation (DEPS) integrated into a Reinforcement Learning Framework for Optimal Production

Accepted version
Peer-reviewed


Abstract

This paper introduces an innovative simulation method that significantly accelerates the training of reinforcement learning agents in manufacturing, surpassing the constraints of traditional simulations. While prior Discrete Event Simulations (DES) of production lines have a runtime complexity of $O(NMS)$ for $N$ (batched) products, $M$ machines, and $S$ samples, our Discrete Event Probabilistic Simulation (DEPS) reduces this to an expected-value computation with complexity $O(KM)$ for $K$ time windows and $M$ machines. Furthermore, unlike traditional DES, DEPS is probabilistic, enabling trustworthy decision-making by providing credible uncertainty and risk bounds. By conditioning product quality on machine parameters, DEPS enables rapid, risk-free AI agent training, paving the way for the integration of such agents into production lines. We (1) demonstrate that our simulation offers state-of-the-art simulation speed, (2) provide an adaptable open-source framework for probabilistic production-line-level simulation and modelling, and (3) share a physically plausible benchmark environment. Our contribution seeks to set a new standard for industrial AI applications, allowing manufacturers and researchers to leverage the potential of RL agents for production-line-level efficiency, process optimisation, and resource allocation.
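To illustrate the complexity gap the abstract describes, the following is a minimal sketch, not the paper's implementation: it reduces each machine to a hypothetical independent pass probability, compares a Monte-Carlo DES-style estimate (work proportional to $N \times M \times S$) against a closed-form expected-value computation (work proportional to $M$ per time window, hence $O(KM)$ over $K$ windows). All function names and the independence assumption are illustrative.

```python
import random


def des_monte_carlo(n_products, pass_probs, n_samples, seed=0):
    """Naive DES-style estimate of line yield: push every product
    through every machine, repeated over n_samples runs -> O(N*M*S)."""
    rng = random.Random(seed)
    total_good = 0
    for _ in range(n_samples):
        for _ in range(n_products):
            # Product survives only if it passes every machine.
            if all(rng.random() < p for p in pass_probs):
                total_good += 1
    return total_good / n_samples


def deps_expected_value(n_products, pass_probs):
    """Expected-value computation: multiply per-machine pass
    probabilities once -> O(M) per time window, O(K*M) overall."""
    yield_prob = 1.0
    for p in pass_probs:
        yield_prob *= p
    return n_products * yield_prob
```

With, say, 100 products and two machines with pass probabilities 0.9 and 0.8, the closed form gives an expected yield of 72 products in two multiplications, while the sampling estimate needs thousands of simulated product traversals to converge to the same value.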

Journal Title

Procedia CIRP

Conference Name

57th CIRP Conference on Manufacturing Systems 2024

Journal ISSN

2212-8271

Publisher

Elsevier

Rights and licensing

Except where otherwise noted, this item's licence is described as Attribution-NoDerivatives 4.0 International (CC BY-ND 4.0)
Sponsorship
Engineering and Physical Sciences Research Council (EPSRC), via University of Lincoln [EP/S023917/1]