Matching samples of multiple views

Abhishek Tripathi (Corresponding Author), Arto Klami, Matej Orešič, Samuel Kaski

Research output: Contribution to journal › Article › Scientific › peer-review

12 Citations (Scopus)


Multi-view learning studies how several views, that is, different feature representations of the same objects, can best be utilized in learning. In other words, multi-view learning is the analysis of co-occurrence data, where the observations are co-occurrences of samples in the views. Standard multi-view learning, such as joint density modeling, cannot be done in the absence of co-occurrence, when the views are observed separately and the identities of the objects are not known. As a practical example, joint analysis of mRNA and protein concentrations requires a mapping between genes and proteins. We introduce a data-driven approach for learning the correspondence between the observations in the different views, in order to enable joint analysis also in the absence of known co-occurrence. The method finds a matching that maximizes statistical dependency between the views, which makes it particularly suitable for multi-view methods such as canonical correlation analysis, which has the same objective. We apply the method to translational metabolomics, to identify differences and commonalities in metabolic processes across species or tissues. The metabolite identities and their roles in the different species are not generally known, and it is necessary to search for a matching. In this paper we show, using different metabolomics measurement batches as the views so that the ground truth is known, that the metabolite identities can be reliably matched by a consensus of several matching solutions.
Original language: English
Pages (from-to): 300-321
Journal: Data Mining and Knowledge Discovery
Issue number: 2
Publication status: Published - 2011
MoE publication type: A1 Journal article-refereed


Keywords

  • Bipartite matching
  • Canonical correlation
  • Consensus matching
  • Co-occurrence data
  • Multi-view learning


