In this paper, we propose a deep reinforcement learning (DRL) based approach to the joint problem of server selection, task offloading, and handover in a multi-access edge computing (MEC) wireless network. 5G networks tend to serve large numbers of users and MEC servers, producing a state space and a mixed (continuous and discrete) action space so large that evaluating every possible combination becomes intractable for traditional DRL methods. In addition, user mobility in 5G requires multiple handover decisions to be made in real time, adding further complexity to an already hard problem. Based on a recursive decomposition of the action space available in each state, we propose a deep Q-network (DQN) based online algorithm for this high-complexity problem. Numerical results show that the proposed algorithm significantly outperforms both the traditional Q-learning method and local computation in terms of task success rate and total delay.
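To illustrate why decomposing the action space helps, the sketch below contrasts a flat search over every (server, offload level, handover) combination with a dimension-by-dimension greedy selection. This is a minimal toy, not the paper's algorithm: the dimension sizes are hypothetical, and the Q-values are random stand-ins for what a trained DQN would output.

```python
import numpy as np

rng = np.random.default_rng(0)

N_SERVERS = 8          # candidate MEC servers (hypothetical size)
N_OFFLOAD_LEVELS = 5   # discretized offload fractions (hypothetical)
N_HANDOVER = 2         # stay on current cell, or hand over

# Stand-in Q-values for the joint action space; in the paper's setting
# these would come from the trained DQN, not from a random generator.
q_joint = rng.normal(size=(N_SERVERS, N_OFFLOAD_LEVELS, N_HANDOVER))

# Flat evaluation: score all 8 * 5 * 2 = 80 joint actions at once.
flat_best = np.unravel_index(np.argmax(q_joint), q_joint.shape)

# Decomposed selection: fix one action dimension at a time, maximizing
# over the remaining dimensions, so each step searches a smaller space.
server = int(np.argmax(q_joint.max(axis=(1, 2))))
offload = int(np.argmax(q_joint[server].max(axis=1)))
handover = int(np.argmax(q_joint[server, offload]))

# For a max-decomposition over a fixed Q-table, the two searches agree.
assert (server, offload, handover) == tuple(int(i) for i in flat_best)
```

With many users and servers, the flat joint space grows multiplicatively while the decomposed search grows only additively per dimension, which is the motivation for decomposition-based DQN action selection.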