IJML 2024 Vol.14(1): 18-23
DOI: 10.18178/ijml.2024.14.1.1152

An Industrial Use-Case for Reinforcement Learning: Optimizing a Production Planning in Stochastic Conditions

Paul Berhaut*, Ikhlass Yaya-Oyé, and Axelle Albot
Air Liquide SA, 75 Quai d’Orsay, 75007 Paris, France
Email: paul.berhaut@airliquide.com (P.B.), ikhlass.yaya-oye@airliquide.com (I.Y.O.), axelle.albot@airliquide.com (A.A.)
*Corresponding author

Manuscript received March 31, 2023; revised April 25, 2023; accepted May 15, 2023; published February 7, 2024

Abstract—In this paper, we investigate deep Q-learning algorithms to optimize gas production planning under stochastic conditions. To demonstrate the value of reinforcement learning for gas production planning, we model the physical behavior of an industrial asset, an Air Separation Unit, based on historical data, electricity prices, and customers' consumption patterns. We use the well-established reinforcement learning framework with non-episodic tasks and discounted rewards, designed to minimize production costs and discourage insufficient production. We compare reinforcement learning agents to agents based on MILP (Mixed Integer Linear Programming) solvers. MILP solvers are currently used by energy-intensive industries to plan production based on imperfectly forecasted states. With these solvers, taking forecast uncertainty into account leads either to high computational complexity (stochastic methods) or to potentially conservative results (robust optimization). While achieving similar results in low-uncertainty scenarios, the DQN agents show better resilience to high-amplitude uncertainties: they learn an efficient risk-averse strategy that outperforms the MILP baseline. DQN algorithms also benefit from the ability to be trained over infinite horizons, whereas MILP solvers require the state at the end of a finite horizon to be set manually.
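To make the setup described in the abstract concrete, the sketch below shows how a non-episodic (continuing) task with discounted rewards can be wired to a small deep Q-network. This is a minimal illustrative example, not the authors' implementation: the environment dynamics, price and demand ranges, penalty weight, and all names (ToyPlant, ACTIONS, GAMMA) are assumptions made here for illustration only.

    # Minimal sketch of a discounted-reward DQN on a continuing task.
    # All dynamics and constants are hypothetical, not the paper's ASU model.
    import numpy as np
    import torch
    import torch.nn as nn

    GAMMA = 0.99                # discount factor for the continuing task
    ACTIONS = [0.0, 0.5, 1.0]   # assumed production levels (fraction of capacity)

    class ToyPlant:
        """Toy stochastic environment: state = (price, demand, stock)."""
        def __init__(self, rng):
            self.rng = rng
            self.stock = 0.5
        def _state(self):
            # Random draws mimic forecast uncertainty on prices and demand.
            self.price = self.rng.uniform(20.0, 120.0)   # assumed EUR/MWh range
            self.demand = self.rng.uniform(0.2, 0.8)     # fraction of capacity
            return np.array([self.price / 120.0, self.demand, self.stock],
                            dtype=np.float32)
        def reset(self):
            return self._state()
        def step(self, action_idx):
            produced = ACTIONS[action_idx]
            self.stock = min(1.0, self.stock + produced - self.demand)
            shortfall = max(0.0, -self.stock)   # unmet demand
            self.stock = max(0.0, self.stock)
            # Reward: minimize energy cost, heavily penalize underproduction.
            reward = -self.price * produced - 1000.0 * shortfall
            return self._state(), reward

    qnet = nn.Sequential(nn.Linear(3, 32), nn.ReLU(), nn.Linear(32, len(ACTIONS)))
    opt = torch.optim.Adam(qnet.parameters(), lr=1e-3)
    rng = np.random.default_rng(0)
    env = ToyPlant(rng)
    state = env.reset()

    for step in range(10_000):        # non-episodic: no terminal states
        s = torch.as_tensor(state)
        if rng.random() < 0.1:        # epsilon-greedy exploration
            action = int(rng.integers(len(ACTIONS)))
        else:
            action = int(qnet(s).argmax())
        next_state, reward = env.step(action)
        # One-step TD target with discounted reward, no episode boundary.
        with torch.no_grad():
            target = reward + GAMMA * qnet(torch.as_tensor(next_state)).max()
        loss = (qnet(s)[action] - target) ** 2
        opt.zero_grad(); loss.backward(); opt.step()
        state = next_state

The large shortfall penalty is one simple way to encode "discourage insufficient production" in the reward; the actual reward shaping, network architecture, and training procedure used in the paper may differ.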

Keywords—reinforcement learning, stochastic environment, production scheduling, air separation


Cite: Paul Berhaut, Ikhlass Yaya-Oyé, and Axelle Albot, "An Industrial Use-Case for Reinforcement Learning: Optimizing a Production Planning in Stochastic Conditions," International Journal of Machine Learning, vol. 14, no. 1, pp. 18-23, 2024.

Copyright © 2024 by the authors. This is an open access article distributed under the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

General Information

  • E-ISSN: 2972-368X
  • Abbreviated Title: Int. J. Mach. Learn.
  • Frequency: Quarterly
  • DOI: 10.18178/IJML
  • Editor-in-Chief: Dr. Lin Huang
  • Executive Editor: Ms. Cherry L. Chen
  • Abstracting/Indexing: Inspec (IET), Google Scholar, Crossref, ProQuest, Electronic Journals Library, CNKI
  • E-mail: ijml@ejournal.net

