RunMat
Category: Math: Linalg & Factor
GPU: Yes
BLAS/LAPACK

svd — Singular value decomposition with full, economy, and vectorised forms.

svd(A) computes the singular value decomposition of a real or complex matrix A. It factors A into the product U * S * V', where U and V are orthogonal/unitary matrices and S is diagonal (or rectangular diagonal) with non-negative, descending singular values.

How does the svd function behave in MATLAB / RunMat?

  • Single output s = svd(A) returns the singular values as a column vector sorted in descending order.
  • Three outputs [U,S,V] = svd(A) returns the full-sized factors with U square m×m, S shaped m×n, and V square n×n (m = size(A,1), n = size(A,2)).
  • Economy form [U,S,V] = svd(A,'econ') (or svd(A,0)) reduces the shapes to the rank-defining dimension so that U and V drop the redundant orthogonal columns.
  • Vector form [U,s,V] = svd(A,'vector') supplies the singular values as a vector instead of a diagonal matrix. You can combine 'vector' with 'econ'.
  • Logical and integer inputs are promoted to double precision before factorisation.
  • Complex inputs yield unitary U and V (conjugate-transpose preserves orthogonality) with real, non-negative singular values.
  • Empty matrices, row/column vectors, and scalars are all supported and follow MATLAB’s shape conventions.
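
The option combinations above can be sketched for a hypothetical 4×3 (tall) input; the shapes follow the economy rules stated in this page:

```matlab
A = magic(4); A = A(:, 1:3);          % 4x3 example matrix (tall)
[U, s, V] = svd(A, 'econ', 'vector'); % combine 'econ' and 'vector'
size(U)   % 4 x 3  (redundant orthogonal columns dropped)
size(s)   % 3 x 1  (column vector, descending singular values)
size(V)   % 3 x 3
```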

GPU behavior

RunMat reserves a dedicated svd provider hook; once a backend implements it, the factors can stay on the device as gpuTensor handles without round-tripping through host memory.

Today no provider ships that hook, so gpuArray inputs are gathered to the host, the CPU SVD executes, and the factors are returned as host tensors. You can re-establish residency with gpuArray(s) if you need to continue on the GPU.

Because SVD is a residency sink, the fusion planner treats it as a barrier—preceding GPU tensors are gathered and subsequent ops run on the host unless you manually promote them again.
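
Under the current fallback behaviour described above, the gather-then-repromote pattern looks like this (a sketch; variable names are illustrative):

```matlab
G  = gpuArray(randn(256, 64));  % device-resident input
s  = svd(G);                    % gathered to host transparently; CPU SVD runs
sG = gpuArray(s);               % manually re-establish GPU residency
y  = sG .* 2;                   % subsequent elementwise ops stay on the device
```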

Examples of using svd in MATLAB / RunMat

Getting the singular values of a matrix

A = [1 2 3; 4 5 6; 7 8 9];
s = svd(A)

Full SVD and reconstruction of a square matrix

A = [3 1; 0 2];
[U,S,V] = svd(A);
A_recon = U * S * V'

Economy-size SVD for a tall matrix

A = randn(6, 3);
[U,S,V] = svd(A, 'econ');
size(U) %  6 x 3
size(S) %  3 x 3
size(V) %  3 x 3

Economy-size SVD for a wide matrix

A = randn(3, 6);
[U,S,V] = svd(A, 'econ');
size(U) %  3 x 3
size(S) %  3 x 6
size(V) %  6 x 6

Requesting vector form of the singular values

A = [10 0; 0 1];
[U,s,V] = svd(A, 'vector')

Computing the SVD of a complex matrix

A = [1+2i, 2-1i; 0, 3i];
[U,S,V] = svd(A)

Running svd on a gpuArray (automatic host fallback today)

G = gpuArray(randn(128, 64));
s = svd(G);           % Values are gathered to host transparently

FAQ

How are the singular values ordered?

They are returned in non-increasing order. MATLAB’s sign conventions are followed: values are non-negative and appear on the diagonal of S (or inside the vector form).
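
Both properties can be checked directly:

```matlab
s = svd(magic(4));
issorted(s, 'descend')   % true: singular values never increase
all(s >= 0)              % true: singular values are non-negative
```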

What is the difference between full and economy forms?

Full SVD returns square U and V (m×m and n×n). For a tall or square matrix (m ≥ n), the economy form trims U to m×n and S to n×n, while V stays n×n. For a wide matrix (m < n), svd(A,'econ') matches svd(A,0): the factors keep their full sizes, with U m×m, S m×n, and V n×n. Use economy when you do not need the redundant orthogonal columns.

What does the 'vector' option change?

It affects the second output. With 'vector', S is returned as a column vector of singular values, matching svd(A) in the single-output form. Without it, S is a diagonal matrix.

Can I mix 'econ' and 'vector'?

Yes. Any order of the options is accepted (svd(A,'vector','econ') and svd(A,'econ','vector') both work), and the returned dimensions mirror MATLAB’s behaviour.
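
For example, both orderings produce identical results:

```matlab
A = randn(5, 2);
[U1, s1, V1] = svd(A, 'vector', 'econ');
[U2, s2, V2] = svd(A, 'econ', 'vector');
isequal(s1, s2)   % true: option order does not matter
size(s1)          % 2 x 1 vector alongside economy-sized factors
```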

What happens with scalars or empty matrices?

svd of a scalar returns its absolute value. Empty matrices return empty factors with consistent dimensions so that downstream code can continue to operate without special cases.
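
A brief illustration of the edge cases described above:

```matlab
svd(-4)        % 4: the singular value of a scalar is its absolute value
s = svd([]);   % empty input
size(s)        % 0 x 0: empty result with consistent dimensions
```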

Does RunMat require BLAS/LAPACK for svd?

No. The builtin is always available. When BLAS/LAPACK is enabled, the host implementation leverages those libraries through nalgebra for performance; otherwise a pure-Rust algorithm is used under the hood.

Will the results stay on the GPU?

Not yet. Presently the builtin gathers GPU operands to the host, runs the CPU factorisation, and returns host tensors. The GPU spec already reserves a hook so providers can keep everything device-resident once GPU kernels land.

See also

eig, qr, lu, chol, gpuArray, gather
