In this paper, we investigate the problem of robot swarm control in 5G mission-critical robotic applications, namely an automated grid-based warehouse scenario. Such an application requires both the kinematic energy consumption of the robots and the ultra-reliable low-latency communication (URLLC) between the central controller and the robot swarm to be jointly optimized in real time. The problem is formulated as a nonconvex optimization problem, since the achievable rate and the decoding error probability in the short-blocklength regime are neither convex nor concave in the bandwidth and transmit power. We propose a deep reinforcement learning (DRL) approach that combines the deep deterministic policy gradient (DDPG) method with a convolutional neural network (CNN) to obtain a stationary optimal control policy consisting of both continuous and discrete actions. Numerical results show that the proposed multi-agent DDPG algorithm achieves performance close to the optimal baseline and outperforms single-agent DDPG in terms of decoding error probability and energy efficiency.
Deep Reinforcement Learning for URLLC in 5G Mission-Critical Cloud Robotic Application
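The abstract's nonconvexity claim rests on the finite-blocklength (short-blocklength) decoding error probability. As a minimal sketch, the widely used normal approximation for an AWGN channel can be computed as below; the function name and the generic parameters `snr`, `n`, and `L` are illustrative assumptions, not the paper's specific system model.

```python
import math

def q_func(x):
    """Gaussian Q-function Q(x) = P(N(0,1) > x), via the complementary error function."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def decoding_error_prob(snr, n, L):
    """Normal approximation to the probability of decoding error when an
    L-bit packet is sent over n channel uses of an AWGN channel at the
    given linear SNR. Illustrative sketch, not the paper's exact model.
    """
    C = math.log2(1 + snr)  # Shannon capacity in bits per channel use
    # Channel dispersion V, expressed in bits^2 per channel use
    V = (1 - 1 / (1 + snr) ** 2) * (math.log2(math.e)) ** 2
    # Error probability: Q((nC - L + 0.5*log2(n)) / sqrt(nV))
    return q_func((n * C - L + 0.5 * math.log2(n)) / math.sqrt(n * V))
```

The error probability decreases in both SNR and blocklength n, but it is neither convex nor concave jointly in bandwidth and transmit power, which is what motivates the DRL formulation rather than standard convex solvers.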