Singular Value Decomposition
SVD Theorem
For each matrix $A \in \mathbb{R}^{n \times m}$ there exists a decomposition
$$A = U \Sigma V^\top,$$
where $U \in \mathbb{R}^{n \times n}$ and $V \in \mathbb{R}^{m \times m}$ are orthogonal and $\Sigma \in \mathbb{R}^{n \times m}$ is diagonal with non-negative entries (the singular values), conventionally sorted as $\sigma_1 \ge \sigma_2 \ge \dots \ge 0$.
- The set of singular values is unique.
- Left/right singular vectors (the columns of $U$ and $V$) can have arbitrary sign.
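A minimal numerical check of the theorem, using NumPy (the matrix A below is just a small random example):

```python
import numpy as np

# Small example matrix; any real n x m matrix works.
A = np.random.default_rng(0).normal(size=(5, 3))

# Full SVD: U is 5x5, s holds the singular values, Vt is V transposed.
U, s, Vt = np.linalg.svd(A, full_matrices=True)

# Rebuild the rectangular diagonal Sigma and verify A = U @ Sigma @ V^T.
Sigma = np.zeros(A.shape)
Sigma[:len(s), :len(s)] = np.diag(s)
assert np.allclose(A, U @ Sigma @ Vt)

# Singular values are non-negative and sorted in non-increasing order.
assert np.all(s >= 0) and np.all(s[:-1] >= s[1:])
```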
Computation Complexity
Computing the SVD of a dense $n \times m$ matrix with standard algorithms costs $O(\min(n m^2, n^2 m))$.
SVD and Frobenius norm
Let the SVD of $A \in \mathbb{R}^{n \times m}$ be $A = U \Sigma V^\top$ with singular values $\sigma_1, \dots, \sigma_r$, where $r = \operatorname{rank}(A)$. Then
$$\|A\|_F^2 = \sum_{i=1}^{r} \sigma_i^2.$$
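A quick sketch verifying this identity with NumPy (A is again an arbitrary example matrix):

```python
import numpy as np

A = np.random.default_rng(1).normal(size=(6, 4))
s = np.linalg.svd(A, compute_uv=False)       # singular values only

frob_sq = np.linalg.norm(A, 'fro') ** 2      # ||A||_F^2
assert np.isclose(frob_sq, np.sum(s ** 2))   # equals the sum of sigma_i^2
```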
SVD and Spectral norm
Let the SVD of $A$ be $A = U \Sigma V^\top$. Then
$$\|A\|_2 = \sigma_1,$$
i.e. the spectral norm equals the largest singular value.
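The analogous check for the spectral norm (a sketch, same setup as above):

```python
import numpy as np

A = np.random.default_rng(2).normal(size=(6, 4))
s = np.linalg.svd(A, compute_uv=False)

spectral = np.linalg.norm(A, 2)       # ||A||_2, the spectral norm
assert np.isclose(spectral, s[0])     # equals the largest singular value
```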
Reduced SVD
One can often prune the columns of $U$ and $V$ (and the corresponding rows and columns of $\Sigma$) that are multiplied by zero singular values. This yields the reduced (thin) SVD $A = U_r \Sigma_r V_r^\top$ with $U_r \in \mathbb{R}^{n \times r}$, $\Sigma_r \in \mathbb{R}^{r \times r}$ and $V_r \in \mathbb{R}^{m \times r}$.
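In NumPy the reduced variant is what np.linalg.svd returns with full_matrices=False; a small sketch:

```python
import numpy as np

A = np.random.default_rng(3).normal(size=(8, 3))   # tall matrix

# Thin / reduced SVD: U_r is 8x3, Sigma_r is 3x3, V_r^T is 3x3.
U_r, s, Vt_r = np.linalg.svd(A, full_matrices=False)
assert np.allclose(A, U_r @ np.diag(s) @ Vt_r)
```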
Eckart-Young Theorem
- Is fundamental to many low-rank approximation problems.
- This means that optimal rank-$k$ approximations for any $k$ can directly be read off the SVD.

Formal definition: for any $k \le r$, the truncated SVD
$$A_k = \sum_{i=1}^{k} \sigma_i u_i v_i^\top$$
is a best rank-$k$ approximation of $A$, i.e.
$$A_k \in \operatorname*{argmin}_{\operatorname{rank}(B) \le k} \|A - B\|_F.$$
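A sketch of reading a rank-k approximation directly off the SVD; the helper name rank_k_approx is ours, not from the source:

```python
import numpy as np

def rank_k_approx(A, k):
    """Best rank-k approximation in the Frobenius norm via truncated SVD."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

A = np.random.default_rng(4).normal(size=(10, 7))
A2 = rank_k_approx(A, k=2)
assert np.linalg.matrix_rank(A2) == 2
```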
Corollary
The squared error of the best rank-$k$ approximation can be expressed as
$$\|A - A_k\|_F^2 = \sum_{i=k+1}^{r} \sigma_i^2.$$
Proof: $A - A_k = \sum_{i=k+1}^{r} \sigma_i u_i v_i^\top$ is itself an SVD with singular values $\sigma_{k+1}, \dots, \sigma_r$, so the Frobenius norm identity above gives the claim.
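The corollary can be checked numerically with the same truncation (a sketch):

```python
import numpy as np

A = np.random.default_rng(5).normal(size=(10, 7))
U, s, Vt = np.linalg.svd(A, full_matrices=False)

k = 3
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# Squared approximation error equals the sum of the discarded sigma_i^2.
err_sq = np.linalg.norm(A - A_k, 'fro') ** 2
assert np.isclose(err_sq, np.sum(s[k:] ** 2))
```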
SVD and PCA
SVD is intimately related to eigendecomposition
- If $A$ is square and symmetric: $U$ and $V$ have equal columns up to possible sign differences.
- If $A$ is positive semi-definite, then the SVD is equal to the eigendecomposition.
- The squares of the singular values of $A$ are the eigenvalues of $A^\top A$ as well as of $A A^\top$.
- Convention: $\sigma_i = \sqrt{\lambda_i}$ for $i = 1, \dots, \min(n, m)$, with the eigenvalues $\lambda_i$ sorted in non-increasing order.
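A sketch relating the squared singular values to the eigenvalues of $A^\top A$ and $A A^\top$:

```python
import numpy as np

A = np.random.default_rng(6).normal(size=(5, 4))
s = np.linalg.svd(A, compute_uv=False)

# Eigenvalues of A^T A and A A^T, sorted in non-increasing order.
ev_AtA = np.sort(np.linalg.eigvalsh(A.T @ A))[::-1]
ev_AAt = np.sort(np.linalg.eigvalsh(A @ A.T))[::-1]

# The leading eigenvalues of both Gram matrices are the squared singular values.
assert np.allclose(ev_AtA[:len(s)], s ** 2)
assert np.allclose(ev_AAt[:len(s)], s ** 2)
```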
SVD and Matrix Completion
We generalize the simple rank-1 model to a rank-$k$ model.
We do this by additive superposition of $k$ rank-1 models: $A \approx \sum_{i=1}^{k} \sigma_i u_i v_i^\top$.
If all matrix entries are completely observed, the solution is constructively given by the SVD (Eckart-Young).
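In the fully observed case, the additive superposition of $k$ rank-1 terms taken from the SVD can be sketched as follows (all names are illustrative):

```python
import numpy as np

A = np.random.default_rng(7).normal(size=(6, 5))   # fully observed matrix
U, s, Vt = np.linalg.svd(A, full_matrices=False)

k = 2
# Additive superposition of k rank-1 models sigma_i * u_i v_i^T.
A_k = sum(s[i] * np.outer(U[:, i], Vt[i, :]) for i in range(k))

# Same matrix as truncating the SVD (the Eckart-Young optimal rank-k fit).
assert np.allclose(A_k, U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :])
```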
SVD with Imputation
Naive strategy:
1. estimate values for missing matrix elements (e.g. row or column mean)
2. impute them to create a complete matrix
3. run SVD to compute low rank approximation
If the observation is incomplete, the SVD is in general not directly applicable for computing low-rank approximations; the imputation step above sidesteps this (a sketch follows below).
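A minimal sketch of this naive impute-then-SVD strategy, assuming missing entries are encoded as NaN and imputed with column means:

```python
import numpy as np

def impute_then_svd(A_obs, k):
    """Naive strategy: column-mean imputation followed by a rank-k SVD fit.
    A_obs contains np.nan at unobserved positions (an assumed convention)."""
    A = A_obs.copy()
    col_means = np.nanmean(A, axis=0)                          # 1. estimate missing values
    missing = np.isnan(A)
    A[missing] = np.take(col_means, np.nonzero(missing)[1])    # 2. impute them
    U, s, Vt = np.linalg.svd(A, full_matrices=False)           # 3. low-rank approximation
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]
```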
NP Hardness
Weighted Frobenius norm problem:
$$\min_{B} \; \|W \odot (A - B)\|_F^2 \quad \text{s.t.} \quad \operatorname{rank}(B) \le k,$$
where $W$ is an entrywise non-negative weight matrix and $\odot$ denotes the elementwise (Hadamard) product.
- Special case: $W \in \{0, 1\}^{n \times m}$, i.e. binary weights marking observed ($1$) versus missing ($0$) entries, which is exactly the matrix completion setting.
This problem is NP-hard, even for $k = 1$.
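While solving the problem is hard, evaluating the weighted objective is simple; a sketch with placeholder data (the helper weighted_frobenius_sq is ours):

```python
import numpy as np

def weighted_frobenius_sq(A, B, W):
    """Objective ||W ⊙ (A - B)||_F^2 of the weighted low-rank problem."""
    return np.sum((W * (A - B)) ** 2)

# Special case: binary W masks the observed entries (matrix completion).
rng = np.random.default_rng(8)
A = rng.normal(size=(5, 4))
W = (rng.random((5, 4)) < 0.7).astype(float)   # 1 = observed, 0 = missing
B = np.zeros_like(A)                           # a trivial candidate solution
print(weighted_frobenius_sq(A, B, W))
```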