  • Title: Model-Based Transfer Reinforcement Learning Based on Graphical Model Representations.
    Author: Sun Y, Zhang K, Sun C.
    Journal: IEEE Trans Neural Netw Learn Syst; 2023 Feb; 34(2):1035-1048. PubMed ID: 34543207.
    Abstract:
    Reinforcement learning (RL) plays an essential role in the field of artificial intelligence but suffers from data inefficiency and model-shift issues. One possible solution to such issues is to exploit transfer learning; however, without explainable models, interpretability problems and negative transfer may occur. In this article, we define Relation Transfer as explainable and transferable learning based on graphical model representations, inferring the skeleton and relations among variables from a causal view and generalizing them to the target domain. The proposed algorithm consists of the following three steps. First, we leverage a suitable causal discovery method to identify the causal graph from the augmented source domain data. After that, we infer the target model based on the prior causal knowledge. Finally, offline RL training on the target model is utilized as prior knowledge to improve policy training in the target domain. The proposed method can answer the question of what to transfer and realize zero-shot transfer across related domains in a principled way. To demonstrate the robustness of the proposed framework, we conduct experiments on four classical control problems as well as one simulation-to-real-world application. Experimental results on both continuous and discrete cases demonstrate the efficacy of the proposed method.
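
    Illustrative sketch (not from the article): the abstract describes a three-step pipeline of causal discovery on augmented source data, inference of the target model under the discovered causal structure, and offline RL on that model. The NumPy-only code below is a hedged approximation of that pipeline, not the authors' algorithm; the function names (discover_skeleton, fit_causal_model, plan_on_model) and the concrete choices (correlation-thresholded skeleton, per-dimension linear regression, random-shooting planning in place of offline RL training) are assumptions made for illustration.

    import numpy as np

    def discover_skeleton(states, actions, next_states, thresh=0.2):
        # Step 1 stand-in: mark (state, action) variables whose absolute
        # correlation with a next-state dimension exceeds a threshold.
        X = np.hstack([states, actions])
        Y = next_states
        Xc = (X - X.mean(0)) / (X.std(0) + 1e-8)
        Yc = (Y - Y.mean(0)) / (Y.std(0) + 1e-8)
        corr = np.abs(Xc.T @ Yc) / len(X)       # approximate |Pearson correlation|
        return corr > thresh                    # boolean adjacency: parents x children

    def fit_causal_model(states, actions, next_states, skeleton):
        # Step 2 stand-in: linear transition model restricted to each
        # next-state dimension's discovered parent set.
        X = np.hstack([states, actions])
        W = np.zeros((X.shape[1], next_states.shape[1]))
        for j in range(next_states.shape[1]):
            parents = np.where(skeleton[:, j])[0]
            if parents.size:
                w, *_ = np.linalg.lstsq(X[:, parents], next_states[:, j], rcond=None)
                W[parents, j] = w
        return lambda s, a: np.concatenate([s, a]) @ W   # predicted next state

    def plan_on_model(model, s0, reward_fn, action_dim, horizon=10, n_candidates=256, rng=None):
        # Step 3 stand-in: random-shooting planning on the learned target model,
        # used here in place of the paper's offline RL training.
        rng = np.random.default_rng(0) if rng is None else rng
        best_ret, best_a0 = -np.inf, np.zeros(action_dim)
        for _ in range(n_candidates):
            seq = rng.uniform(-1.0, 1.0, size=(horizon, action_dim))
            s, ret = np.array(s0, dtype=float), 0.0
            for a in seq:
                s = model(s, a)
                ret += reward_fn(s, a)
            if ret > best_ret:
                best_ret, best_a0 = ret, seq[0]
        return best_a0                          # first action of the best candidate sequence

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        S = rng.normal(size=(500, 3))
        A = rng.uniform(-1.0, 1.0, size=(500, 1))
        # Synthetic dynamics: only the first state dimension depends on the action.
        S_next = S + 0.5 * A @ np.array([[1.0, 0.0, 0.0]]) + 0.01 * rng.normal(size=S.shape)
        skeleton = discover_skeleton(S, A, S_next)
        model = fit_causal_model(S, A, S_next, skeleton)
        a0 = plan_on_model(model, S[0], lambda s, a: -np.sum(s ** 2), action_dim=1)
        print("recovered skeleton (parents x children):\n", skeleton.astype(int))
        print("first planned action:", a0)

    In this sketch the discovered skeleton plays the role of the transferable causal knowledge: only the parent sets it identifies are used when fitting and planning on the target model, which is the sense in which the abstract's pipeline answers "what to transfer."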