New RNN Algorithms for Different Time-Variant Matrix Inequalities Solving under Discrete-Time Framework

Yang Shi*, Chenling Ding, Shuai Li, Bin Li, Xiaobing Sun

*Corresponding author for this work

Research output: Contribution to journal › Article › Scientific › peer-review

5 Citations (Scopus)

Abstract

Solving a series of discrete time-variant matrix inequalities is generally regarded as one of the challenging problems in science and engineering fields. For such discrete time-variant problems, existing solving schemes generally require theoretical support under the continuous-time framework, and no independent solving scheme exists under the discrete-time framework. This theoretical deficiency greatly limits both the theoretical research on and the practical application of discrete time-variant matrix inequalities. In this article, new discrete-time recurrent neural network (DT-RNN) algorithms are proposed, analyzed, and investigated for solving different time-variant matrix inequalities under the discrete-time framework, including the discrete time-variant matrix-vector inequality (MVI), the discrete time-variant generalized matrix inequality (GMI), the discrete time-variant generalized-Sylvester matrix inequality (GSMI), and the discrete time-variant complicated-Sylvester matrix inequality (CSMI); all solving processes are based on the direct discretization thought. Specifically, the four discrete time-variant matrix inequalities are first presented as the target problems of this research. Second, to solve these problems, corresponding DT-RNN algorithms (termed the DT-RNN-MVI, DT-RNN-GMI, DT-RNN-GSMI, and DT-RNN-CSMI algorithms) are proposed; they differ from the traditional DT-RNN design thought in that second-order Taylor expansion is applied to derive them, which avoids any intervention of the continuous-time framework. Theoretical analyses then establish the convergence and precision of the DT-RNN algorithms, and abundant numerical experiments further confirm their excellent properties.
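The general idea of an iterative discrete-time scheme for a time-variant matrix inequality can be illustrated with a deliberately simplified sketch. The update rule below is a plain gradient-type iteration that drives the violation of a hypothetical time-variant inequality A(t)x ≤ b(t) toward zero; it is not the second-order Taylor-expansion DT-RNN derivation proposed in the article, and all matrices, gains, and step sizes are illustrative assumptions.

```python
import numpy as np

# Hypothetical time-variant coefficients for the inequality A(t) x <= b(t).
def A(t):
    return np.array([[2.0 + np.sin(t), 0.5],
                     [0.3, 2.0 + np.cos(t)]])

def b(t):
    return np.array([1.0 + 0.2 * np.sin(t), 1.0])

h = 0.01        # sampling gap (assumed)
gamma = 0.05    # step size (assumed)
steps = 2000
x = np.array([5.0, 5.0])  # deliberately infeasible start

for k in range(steps):
    t = k * h
    # Only the positive part of A(t)x - b(t) violates the inequality.
    e = np.maximum(A(t) @ x - b(t), 0.0)
    # Gradient-type discrete update driving the violation toward zero.
    x = x - gamma * A(t).T @ e

# After the iterations, the residual violation should be near zero.
residual = np.maximum(A(steps * h) @ x - b(steps * h), 0.0)
print(float(np.max(residual)))
```

Because the coefficients drift with t, the iteration keeps correcting x whenever the drift pushes it slightly outside the feasible set, which mirrors (in a crude first-order way) how discrete-time neural dynamics track a time-variant solution set.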

Original language: English
Pages (from-to): 5244-5257
Number of pages: 14
Journal: IEEE Transactions on Neural Networks and Learning Systems
Volume: 36
Issue number: 3
DOIs
Publication status: Published - 2025
MoE publication type: A1 Journal article-refereed

Funding

This work was supported in part by the National Natural Science Foundation of China under Grant 61906164 and Grant 61972335, in part by the Natural Science Foundation of Jiangsu Province of China under Grant BK20190875, in part by the Six Talent Peaks Project in Jiangsu Province under Grant RJFW-053, in part by Jiangsu 333 Project, in part by Yangzhou University Top-Level Talents Support Program (2021 and 2019), in part by the Qinglan Project of Yangzhou University (2021), and in part by the Postgraduate Research and Practice Innovation Program of Jiangsu Province under Grant KYCX21_3234 and Grant SJCX22_1709.

Keywords

  • Direct discretization thought
  • matrix inequalities
  • recurrent neural network (RNN)
  • residual error (RE)
  • second-order Taylor expansion
