Representation learning algorithms for inferring machine independent latent features in pedestals in JET and AUG

Aaro E. Järvinen*, Adam Kit, Y. R. J. Poels, S. Wiesen, V. Menkovski, L. Frassinetti, M. Dunne, JET Contributors, ASDEX Upgrade Team

*Corresponding author for this work

    Research output: Contribution to journal › Article › Scientific › peer-review


    Abstract

    Variational autoencoder (VAE)-based representation learning algorithms are explored for their capability to disentangle tokamak size dependence from other dependencies in a dataset of thousands of observed pedestal electron density and temperature profiles from the JET and ASDEX Upgrade tokamaks. Representation learning aims to establish a useful representation that characterizes the dataset. In the context of magnetic confinement fusion devices, a useful representation could be considered to map the high-dimensional observations to a manifold that represents the actual degrees of freedom of the plasma scenario. A desired property of these representations is organization of the information into disentangled variables, enabling interpretation of the latent variables as representations of semantically meaningful characteristics of the data. The representation learning algorithms in this work are based on a VAE that encodes the pedestal profile information into a reduced-dimensionality latent space and learns to reconstruct the full profile information given the latent representation. By attaching an auxiliary regression objective for the machine control parameter configuration, broadly following the architecture of the domain-invariant variational autoencoder (DIVA), the model learns to associate device control parameters with the latent representation. With this multimachine dataset, the representation does encode density scaling with device size that is qualitatively consistent with Greenwald density limit scaling. However, if the major radius of the device is given through a common regression objective with the other machine control parameters, the latent state of the representation struggles to clearly disentangle the device size from changes in the other machine control parameters. When the device size is instead separated into an independent latent variable with a dedicated regression objective, similar to the separation of domain and class labels in the original DIVA publication, the latent space becomes well organized as a function of the device size.
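The architecture described in the abstract can be sketched as a loss function with two latent partitions, one dedicated to device size and one to the remaining machine control parameters, each with its own regression head. The sketch below is a minimal NumPy illustration of that loss composition only; all layer shapes, dimensions, and names (e.g. `Z_SIZE`, `Z_REST`, the use of untrained random linear maps) are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a DIVA-style VAE loss for pedestal profiles.
# All dimensions and the random "layers" are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

PROFILE_DIM = 40   # assumed: flattened n_e/T_e pedestal profile points
Z_SIZE = 2         # latent partition tied to device size (major radius)
Z_REST = 6         # latent partition for the other control parameters
N_CTRL = 5         # assumed number of other machine control parameters

def linear(in_dim, out_dim):
    """Random linear layer (weights, bias) standing in for trained parameters."""
    return rng.normal(0.0, 0.1, (in_dim, out_dim)), np.zeros(out_dim)

# Encoder heads: mean and log-variance for each latent partition.
W_mu_s, b_mu_s = linear(PROFILE_DIM, Z_SIZE)
W_lv_s, b_lv_s = linear(PROFILE_DIM, Z_SIZE)
W_mu_r, b_mu_r = linear(PROFILE_DIM, Z_REST)
W_lv_r, b_lv_r = linear(PROFILE_DIM, Z_REST)
# Decoder reconstructs the profile from the concatenated latent state.
W_dec, b_dec = linear(Z_SIZE + Z_REST, PROFILE_DIM)
# Dedicated regression heads: device size from z_size, controls from z_rest.
W_reg_R, b_reg_R = linear(Z_SIZE, 1)
W_reg_c, b_reg_c = linear(Z_REST, N_CTRL)

def diva_style_loss(x, major_radius, controls):
    """ELBO-style loss with auxiliary regression terms for one profile x."""
    mu_s, lv_s = x @ W_mu_s + b_mu_s, x @ W_lv_s + b_lv_s
    mu_r, lv_r = x @ W_mu_r + b_mu_r, x @ W_lv_r + b_lv_r
    # Reparameterization trick for each latent partition.
    z_s = mu_s + np.exp(0.5 * lv_s) * rng.normal(size=Z_SIZE)
    z_r = mu_r + np.exp(0.5 * lv_r) * rng.normal(size=Z_REST)
    x_hat = np.concatenate([z_s, z_r]) @ W_dec + b_dec
    recon = np.mean((x - x_hat) ** 2)
    # KL divergence of each Gaussian posterior from a standard normal prior.
    kl = -0.5 * np.sum(1 + lv_s - mu_s**2 - np.exp(lv_s))
    kl += -0.5 * np.sum(1 + lv_r - mu_r**2 - np.exp(lv_r))
    # Auxiliary regressions: z_size must predict R, z_rest the other controls.
    reg_R = np.mean((z_s @ W_reg_R + b_reg_R - major_radius) ** 2)
    reg_c = np.mean((z_r @ W_reg_c + b_reg_c - controls) ** 2)
    return recon + kl + reg_R + reg_c

x = rng.normal(size=PROFILE_DIM)              # stand-in pedestal profile
loss = diva_style_loss(x, major_radius=2.96,  # JET-like major radius in metres
                       controls=rng.normal(size=N_CTRL))
print(f"total loss: {loss:.3f}")
```

With a shared regression head over all control parameters, the size information would mix into the same latent block as the other parameters; giving the `z_size` partition its own regression target is what corresponds, in this sketch, to the disentanglement result reported in the abstract.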

    Original language: English
    Article number: 032508
    Journal: Physics of Plasmas
    Volume: 31
    Issue number: 3
    DOIs
    Publication status: Published - 1 Mar 2024
    MoE publication type: A1 Journal article-refereed

    Funding

    This work has been carried out within the framework of the EUROfusion Consortium, funded by the European Union via the Euratom Research and Training Programme (Grant Agreement No. 101052200—EUROfusion). The work of Aaro Järvinen and Adam Kit was partially supported by the Research Council of Finland, Grant No. 355460. The authors wish to acknowledge CSC – IT Center for Science, Finland, for computational resources under CSC Project No. 2005083.
