Optimization of Laboratory Equipment Sharing Scheduling Strategy Based on Reinforcement Learning
DOI: https://doi.org/10.70767/jmec.v3i4.1042

Abstract
The optimal scheduling of equipment resources in shared laboratories faces complex challenges, including dynamic and uncertain demand, real-time changes in inventory status, and multi-objective trade-offs. To address the limitations of traditional scheduling methods in adapting to high-dimensional state spaces and dynamically evolving environments, this paper proposes an adaptive scheduling strategy optimization method based on deep reinforcement learning. First, the method constructs a dynamic scheduling model that integrates digital representations of multi-source heterogeneous equipment, time-window-constrained stochastic demand sequences, and real-time state-aware inventory mapping, providing structured state inputs for intelligent decision-making. Second, it designs a state space incorporating inventory features and a reward shaping function with multi-objective constraints, and trains a scheduling network with the proximal policy optimization (PPO) algorithm, yielding an adaptive scheduling strategy capable of autonomous learning. Finally, it introduces a demand-aware module based on time-series forecasting and a collaborative pricing mechanism driven by remaining inventory, enabling dynamic tuning and closed-loop iteration of the scheduling strategy. Simulation experiments show that the method outperforms baseline methods on key indicators such as resource utilization, request satisfaction rate, and system stability, and can effectively improve the operational efficiency and supply-demand matching accuracy of a laboratory equipment sharing system.
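As a minimal illustrative sketch only: the abstract does not give the paper's exact reward formulation, so the feature names and weights below are hypothetical. A multi-objective reward shaping function of the kind described might combine resource utilization, request satisfaction rate, and a stability penalty as a weighted sum:

```python
def shaped_reward(utilization: float,
                  satisfied_requests: int,
                  total_requests: int,
                  queue_variance: float,
                  w_util: float = 0.5,
                  w_sat: float = 0.4,
                  w_stab: float = 0.1) -> float:
    """Toy multi-objective reward for an equipment-scheduling agent.

    Combines three of the indicators named in the abstract:
    - utilization: fraction of equipment-hours in use (0..1)
    - satisfaction: fraction of requests served within their time window
    - stability: penalized via the variance of per-device queue lengths
    The weights are illustrative, not taken from the paper.
    """
    satisfaction = (satisfied_requests / total_requests) if total_requests else 1.0
    return w_util * utilization + w_sat * satisfaction - w_stab * queue_variance
```

In a PPO setup such as the one described, this scalar would be returned by the environment at each scheduling step, and the relative weights would encode the multi-objective trade-off the policy is trained to balance.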
License
Copyright (c) 2026 Journal of Modern Education and Culture

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.