
Controlling Mixed-Mode Fatigue Crack Growth using Deep Reinforcement Learning
  • Yuteng Jin, Texas A&M University
  • Siddharth Misra, Texas A&M University (Corresponding Author: [email protected])

Abstract

Mechanical discontinuities embedded in a material play an essential role in determining its bulk mechanical, physical, and chemical properties. This paper is a proof-of-concept development and deployment of a reinforcement learning framework to control both the direction and rate of fatigue crack growth. The reinforcement learning framework is coupled with an OpenAI-Gym-based environment that implements the mechanistic equations governing fatigue crack growth. The learning agent has no explicit knowledge of the underlying physics; nonetheless, it can infer a control strategy by continuously interacting with the numerical environment. The Markov decision process, comprising state, action, and reward, is carefully designed to obtain a good control policy. The deep deterministic policy gradient (DDPG) algorithm is implemented to learn the continuous actions required to control the fatigue crack growth. An adaptive reward function involving reward shaping improves the training. The reward is mostly positive, which encourages the learning agent to keep accumulating reward rather than terminating early to avoid large accumulated penalties. An additional large reward is given when the crack tip reaches sufficiently close to the goal point within a specified number of iterations, which encourages the agent to reach the goal points quickly rather than approaching them slowly to accumulate positive reward. The reinforcement learning framework successfully controls fatigue crack propagation in a material despite the complexity of the propagation pathway determined by multiple goal points.
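To make the setup described in the abstract concrete, below is a minimal sketch of the kind of OpenAI-Gym environment and shaped reward it describes. This is not the paper's implementation: the toy kink-angle rule and fixed growth increment stand in for the actual mixed-mode mechanistic equations (which would evaluate stress-intensity factors and a Paris-type growth law), and all names (`FatigueCrackEnv`, `goal`, `tol`, `max_steps`) are hypothetical. The sketch uses the classic `gym` API (`reset()` returning an observation, `step()` returning a 4-tuple).

```python
# Hypothetical sketch of an OpenAI-Gym fatigue-crack environment with a
# mostly-positive shaped reward plus a time-dependent goal bonus, mirroring
# the reward design described in the abstract. The crack-growth "physics"
# here is a placeholder, not the paper's governing equations.
import numpy as np
import gym
from gym import spaces

class FatigueCrackEnv(gym.Env):
    """Crack tip moves in 2D; the agent chooses a continuous loading angle."""

    def __init__(self, goal=(1.0, 0.5), tol=0.05, max_steps=200):
        super().__init__()
        # Action: far-field loading orientation, normalized to [-1, 1].
        self.action_space = spaces.Box(-1.0, 1.0, shape=(1,), dtype=np.float32)
        # State: crack-tip coordinates and current crack orientation.
        self.observation_space = spaces.Box(
            -np.inf, np.inf, shape=(3,), dtype=np.float32
        )
        self.goal = np.asarray(goal, dtype=np.float32)
        self.tol, self.max_steps = tol, max_steps

    def reset(self):
        self.tip = np.zeros(2, dtype=np.float32)  # crack tip starts at origin
        self.theta = 0.0                          # initial crack orientation
        self.steps = 0
        return self._obs()

    def _obs(self):
        return np.array([self.tip[0], self.tip[1], self.theta], dtype=np.float32)

    def step(self, action):
        self.steps += 1
        load_angle = float(action[0]) * np.pi / 2  # rescale to a physical angle
        # Placeholder update: a real environment would compute K_I, K_II and
        # a mixed-mode kink angle and growth rate from them.
        self.theta += 0.5 * (load_angle - self.theta)  # toy kink rule
        da = 0.02                                      # toy growth increment
        self.tip += da * np.array([np.cos(self.theta), np.sin(self.theta)])

        dist = float(np.linalg.norm(self.tip - self.goal))
        # Mostly-positive shaped reward: progress toward the goal pays, so the
        # agent is not pushed to terminate early to dodge accumulated penalties.
        reward = 1.0 - np.tanh(dist)
        done = False
        if dist < self.tol:
            # Large bonus, scaled by remaining budget, rewards reaching the
            # goal quickly rather than drifting in slowly to farm reward.
            reward += 100.0 * (1.0 - self.steps / self.max_steps)
            done = True
        elif self.steps >= self.max_steps:
            done = True
        return self._obs(), float(reward), done, {}
```

Since the paper trains with DDPG for continuous actions, one plausible way to exercise such an environment (again an assumption; the authors' training code is not specified here) is an off-the-shelf implementation such as Stable-Baselines3:

```python
from stable_baselines3 import DDPG

model = DDPG("MlpPolicy", FatigueCrackEnv(), verbose=1)
model.learn(total_timesteps=50_000)
```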