Abstract:
The majority of the metabolic energy consumption of a lower-limb exoskeleton user stems from upper-body effort, since the lower body can be considered passive. However, in the literature, the upper-body effort of lower-limb exoskeleton users is ignored during the motion controller development process. In this thesis study, deep reinforcement learning is used to develop a locomotion controller that minimizes the ground reaction forces (GRFs) on the crutches; the rationale is that lower crutch GRFs correspond to lower upper-body effort for the user. A model of the human-exoskeleton system with crutches is created in URDF and MuJoCo XML formats. Reward functions are shaped to encourage forward displacement of the center of mass of the exoskeleton-human system while penalizing falls and extreme joint torques. The state-of-the-art methods Twin Delayed Deep Deterministic Policy Gradient (TD3) and Proximal Policy Optimization (PPO) are employed with the RaiSim and MuJoCo physics simulators, using different algorithm-specific parameters across multiple training trials. The employed networks generate joint torques based on the joint angles and velocities, along with the ground reaction forces on the feet and crutch tips. These torques are sent directly to the exoskeleton model, and a new state is observed after the action provided by the deep RL framework is applied. Policies trained with the TD3 and PPO methods on RaiSim fail to generate proper control commands for a stable, natural-looking gait, although PPO generally achieves higher rewards than TD3 on RaiSim. After the RaiSim trials fail to yield the desired policy, MuJoCo is adopted as the simulator. Eventually, a policy is developed that generates a reasonable gait with the desired crutch usage and a 35% reduction in GRFs with respect to the baseline policy.