
Random projections for matrix decomposition and manifold learning

Year of publication

2019

Authors

Aizenbud, Yariv

Abstract

The thesis focuses on solving problems related to the behavior of random variables in high-dimensional spaces. The main motivation comes from the understanding that many scientific challenges involve large amounts of high-dimensional data. In many cases, a small number of "hidden" parameters encode the "interesting" part of the data. The question is, how do we identify and extract these parameters? The thesis addresses two aspects of data analysis: numerical linear algebra and manifold learning.

Numerical linear algebra is a major component of data analysis. It includes matrix factorization algorithms such as SVD and LU. SVD is considered to be the single most important algorithm in numerical linear algebra. However, due to the computational complexity of classical SVD algorithms, they cannot be applied in practice to huge datasets. One possible solution to this problem is low-rank methods. These rest on the observation that in many cases there are dependencies and redundancies within the data; the data can therefore be well approximated and processed by exploiting its low-rank property, which results in faster processing of smaller data. In this thesis, low-rank SVD and LU approximation algorithms are presented. They create a trade-off between accuracy and computational time, and they improve on the state-of-the-art algorithms for low-rank SVD and LU approximation. Since matrix factorization algorithms play a central role in almost any modern computation, this part of the thesis provides general tools for many modern big-data and data-analysis challenges.

The second part concerns understanding high-dimensional data via manifold learning. Many data analysis problems are formulated in the language of manifold learning. A typical assumption is that the data lie on (or near) some unknown manifold embedded in a high-dimensional space, and the goal is to "understand" the structure of this manifold. The thesis presents two results on this subject. First, a connection between two of the most classical methods in manifold learning, PCA and least squares, is presented. Second, a method for regression over manifolds is presented. It makes it possible to interpolate functions defined on manifolds given only the values of the function at several sampled points, without knowing the manifold on which the function is defined. The ability to solve regression problems over manifolds can enable us to gain new insights from complex sampled data.
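As a rough illustration of the random-projection idea behind low-rank SVD, the following NumPy sketch follows the well-known Halko–Martinsson–Tropp scheme. It is not the specific algorithm developed in the thesis; the function name and the oversampling and power-iteration parameters are illustrative assumptions.

```python
import numpy as np

def randomized_svd(A, rank, oversample=10, power_iters=2, seed=0):
    """Sketch of a low-rank SVD via random projection (HMT-style)."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    k = min(rank + oversample, min(m, n))

    # Sketch the range of A with a random Gaussian test matrix.
    Omega = rng.standard_normal((n, k))
    Y = A @ Omega

    # Power iterations sharpen the spectrum when singular values decay slowly.
    # (Production code would re-orthonormalize Y between iterations.)
    for _ in range(power_iters):
        Y = A @ (A.T @ Y)

    # Orthonormal basis for the sampled range.
    Q, _ = np.linalg.qr(Y)

    # Project A onto the small subspace and run a cheap exact SVD there.
    B = Q.T @ A                              # k x n, small
    Ub, S, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ Ub
    return U[:, :rank], S[:rank], Vt[:rank, :]

# Usage: approximate a 2000 x 1000 matrix of numerical rank ~20.
A = np.random.default_rng(1).standard_normal((2000, 20)) @ \
    np.random.default_rng(2).standard_normal((20, 1000))
U, S, Vt = randomized_svd(A, rank=20)
print(np.linalg.norm(A - (U * S) @ Vt) / np.linalg.norm(A))  # small relative error
```

The expensive exact SVD runs only on the small projected matrix B, which is the accuracy-versus-time trade-off described in the abstract.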
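The regression-over-manifolds idea can likewise be illustrated: estimate the tangent space near a query point with local PCA, then fit a weighted least-squares model in tangent coordinates. This is only a hedged sketch of that general approach, not the method developed in the thesis; manifold_regression and all of its parameters are hypothetical names chosen for this example.

```python
import numpy as np

def manifold_regression(X, f, x_query, k=20, d=2):
    """Sketch: estimate f(x_query) from samples (X, f) on an unknown manifold."""
    # k nearest neighbors of the query point.
    dists = np.linalg.norm(X - x_query, axis=1)
    idx = np.argsort(dists)[:k]
    Xk, fk = X[idx], f[idx]

    # Local PCA: top-d principal directions approximate the tangent space.
    mu = Xk.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xk - mu, full_matrices=False)
    T = Vt[:d].T                             # ambient-to-tangent basis, (D, d)

    # Tangent coordinates of the neighbors and of the query point.
    t = (Xk - mu) @ T
    tq = (x_query - mu) @ T

    # Weighted linear least squares: f ~ c0 + t @ c, Gaussian distance weights.
    h = dists[idx].max() + 1e-12
    w = np.exp(-(dists[idx] / h) ** 2)
    Phi = np.hstack([np.ones((k, 1)), t])
    coef, *_ = np.linalg.lstsq(np.sqrt(w)[:, None] * Phi,
                               np.sqrt(w) * fk, rcond=None)
    return np.concatenate([[1.0], tq]) @ coef

# Usage: f(x) = x0 on points near the unit sphere in R^3.
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 3))
X /= np.linalg.norm(X, axis=1, keepdims=True)
f = X[:, 0]
print(manifold_regression(X, f, X[0]))  # close to f(X[0]) = X[0, 0]
```

Note that the manifold itself is never given: only the samples X and the function values f are used, matching the setting described in the abstract.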

Organizations and authors

Jyväskylän yliopisto

Aizenbud Yariv

Publication type

Publication format

Monograph

Audience

Scientific

Ministry of Education and Culture publication type classification

G5 Doctoral dissertation (article)

Julkaisukanavan tiedot

Journal

JYU dissertations

Publisher

Jyväskylän yliopisto

Open access

Open access in the publisher's service

Yes

Open access of the publication channel

Fully open publication channel

Self-archived

No

Other information

Fields of science

Mathematics; Statistics; Computer and information sciences

Keywords


Country of publication

Finland

Internationality of the publisher

Domestic

Language

English

International co-publication

No

Co-publication with a company

No

The publication is included in the Ministry of Education and Culture's data collection

Yes