This paper investigates an unmanned aerial vehicle (UAV)-assisted mobile-edge computing (MEC) system, in which the UAV provides complementary computation resources to the terrestrial MEC system. The UAV processes the computation tasks received from the mobile users (MUs) by creating corresponding virtual machines. Owing to the finite shared I/O resource of the UAV in the MEC system, each MU competes to schedule local as well as remote task computations across the decision epochs, aiming to maximize its expected long-term computation performance. The non-cooperative interactions among the MUs are modeled as a stochastic game, in which each MU's decisions depend on the global state statistics and the task scheduling policies of all MUs are coupled. To approximate the Nash equilibrium solutions, we propose a proactive scheme based on long short-term memory (LSTM) and deep reinforcement learning (DRL) techniques. A digital twin of the MEC system is established to train the proactive DRL scheme offline. With the proposed scheme, each MU makes task scheduling decisions using only its own information. Numerical experiments show that the scheme achieves a significant gain in average utility per MU across the decision epochs.
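The scheduling problem sketched above can be illustrated with a deliberately simplified toy model: a single MU repeatedly chooses whether to compute a task locally, offload it to the terrestrial MEC server, or offload it to the UAV, and learns a policy by tabular Q-learning. This is a stand-in for, not an implementation of, the paper's LSTM-based proactive DRL scheme; the state space, action names, reward values, and state dynamics below are all illustrative assumptions.

```python
import random

# Illustrative assumptions (not from the paper): three scheduling actions
# and a toy state indicating whether the MU's task queue is short or long.
ACTIONS = ["local", "mec", "uav"]
STATES = ["short_queue", "long_queue"]

def toy_reward(state, action):
    # Hypothetical per-epoch utility: offloading pays off more
    # when the local queue is long.
    table = {
        ("short_queue", "local"): 1.0,
        ("short_queue", "mec"): 0.5,
        ("short_queue", "uav"): 0.4,
        ("long_queue", "local"): 0.2,
        ("long_queue", "mec"): 0.8,
        ("long_queue", "uav"): 1.0,
    }
    return table[(state, action)]

def train(episodes=5000, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    # Tabular Q-learning stand-in for the paper's proactive DRL scheme:
    # maximize the expected discounted long-term utility.
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    state = rng.choice(STATES)
    for _ in range(episodes):
        # Epsilon-greedy action selection over scheduling decisions.
        if rng.random() < eps:
            action = rng.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        reward = toy_reward(state, action)
        next_state = rng.choice(STATES)  # toy i.i.d. state dynamics
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        # Standard Q-learning temporal-difference update.
        q[(state, action)] += alpha * (reward + gamma * best_next
                                       - q[(state, action)])
        state = next_state
    return q

q = train()
# Greedy policy recovered from the learned Q-values.
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in STATES}
```

Under these toy rewards the learned policy computes locally when the queue is short and offloads to the UAV when it is long; in the paper's setting, the LSTM-based DRL agent plays the analogous role over a far richer state space, using only the MU's local information.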