
linsolve — Solve linear systems A * X = B with optional structural hints (triangular, symmetric, positive-definite, or transposed).

X = linsolve(A, B) solves the linear system A * X = B. The optional opts structure lets you declare that A is lower- or upper-triangular, symmetric, positive-definite, rectangular, or that the transposed system should be solved instead. These hints mirror MATLAB and allow the runtime to skip unnecessary factorizations.

How linsolve works in RunMat

  • Inputs must behave like 2-D matrices (trailing singleton dimensions are accepted). size(A, 1) must match size(B, 1) after accounting for opts.TRANSA.
  • When opts.LT or opts.UT is supplied, linsolve performs forward/back substitution instead of a full factorization. Singular pivots trigger the MATLAB error "linsolve: matrix is singular to working precision."
  • opts.TRANSA = 'T' or 'C' solves Aᵀ * X = B (conjugate transpose for complex matrices).
  • opts.POSDEF and opts.SYM are accepted for compatibility; the current implementation still falls back to the SVD-based dense solver when a specialised route is not yet wired in.
  • The optional second output [X, rcond_est] = linsolve(...) (exposed via the VM multi-output path) returns the estimated reciprocal condition number used to honour opts.RCOND.
  • Logical and integer inputs are promoted to double precision. Complex inputs are handled in complex arithmetic.
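As a quick illustration of the promotion rule above (a sketch; the numeric values assume exact arithmetic):

A = int32([2 0; 0 4]);   % integer matrix is promoted to double
b = logical([1; 1]);     % logical vector is promoted to double
x = linsolve(A, b)       % x = [0.5; 0.25]
class(x)                 % 'double'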

How linsolve runs on the GPU

When a gpuArray provider is active, RunMat offers the solve to its linsolve hook. The current WGPU backend downloads the operands to the host, executes the shared CPU solver, and uploads the result back to the device so downstream kernels retain their residency. If no provider is registered—or a provider declines the hook—RunMat gathers inputs to the host and returns a host tensor.

GPU memory and residency

No additional residency management is required. When both operands already reside on the GPU, RunMat executes the provider's linsolve hook and re-uploads the output automatically, so downstream GPU work keeps its residency. Providers that implement an on-device kernel can execute entirely on the GPU without any MATLAB-level changes.
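A minimal residency sketch (this assumes a gpuArray provider is registered; the solution values follow from the lower-triangular system):

A = gpuArray([4 0; 2 5]);
b = gpuArray([8; 9]);
opts.LT = true;
x = linsolve(A, b, opts);  % dispatched to the provider's linsolve hook
x_host = gather(x)         % returns [2; 1] on the host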

Examples

Solving a 2×2 linear system

A = [4 -2; 1 3];
b = [6; 5];
x = linsolve(A, b)

Expected output:

x =
     2
     1

Using a lower-triangular hint

L = [3 0 0; -1 2 0; 4 1 5];
b = [9; 1; 19];
opts.LT = true;
x = linsolve(L, b, opts)

Expected output:

x =
     3
     2
     1

Solving the transposed system

A = [2 1 0; 0 3 4; 0 0 5];
b = [2; 7; 13];
opts.UT = true;
opts.TRANSA = 'T';
x = linsolve(A, b, opts)

Expected output:

x =
     1
     2
     1

Complex triangular solve

U = [2+1i  -1i; 0  4-2i];
b = [3+2i; 7];
opts.UT = true;
x = linsolve(U, b, opts)

Expected output:

x =
   1.6000 + 0.9000i
   1.4000 + 0.7000i

Estimating the reciprocal condition number

A = [1 1; 1 1+1e-12];
b = [2; 2+1e-12];
[x, rcond_est] = linsolve(A, b)

Expected output:

x =
     1
     1

rcond_est =
    4.4409e-12

FAQ

What happens if I pass both opts.LT and opts.UT?

RunMat raises the MATLAB error "linsolve: LT and UT are mutually exclusive." The two hints select opposing substitution routines, so only one may be set per call.

Does opts.TRANSA accept lowercase characters?

Yes. opts.TRANSA is case-insensitive and accepts 'N', 'T', 'C', or their lowercase variants. 'C' and 'T' are equivalent for real matrices; 'C' takes the conjugate transpose for complex matrices (mirroring MATLAB).
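For example, the following two calls on a real upper-triangular system produce identical results (a sketch with illustrative values):

A = [1 2; 0 3];
b = [5; 6];
opts.UT = true;
opts.TRANSA = 't';             % lowercase is accepted
x1 = linsolve(A, b, opts);
opts.TRANSA = 'T';
x2 = linsolve(A, b, opts);
isequal(x1, x2)                % 1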

How is opts.RCOND used?

opts.RCOND provides a lower bound on the acceptable reciprocal condition number. If the estimated rcond falls below the requested threshold the builtin raises "linsolve: matrix is singular to working precision."
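For example, requesting a tighter threshold than this near-singular system can satisfy triggers the error (a sketch):

A = [1 1; 1 1+1e-12];
b = [2; 2];
opts.RCOND = 1e-6;         % demand a better-conditioned system
x = linsolve(A, b, opts);  % error: linsolve: matrix is singular to working precision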

Do opts.SYM or opts.POSDEF change the algorithm today?

They are accepted for MATLAB compatibility. The current implementation still uses the dense SVD solver when no specialised routine is wired in; future work will route positive-definite systems to Cholesky-based kernels.
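The hints can still be passed today without changing the result (a sketch with an illustrative positive-definite matrix):

A = [4 1; 1 3];            % symmetric positive definite
b = [1; 2];
opts.SYM = true;
opts.POSDEF = true;
x = linsolve(A, b, opts)   % currently the same answer as linsolve(A, b)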

Can I use higher-dimensional arrays?

Inputs must behave like matrices. Trailing singleton dimensions are permitted, but other higher-rank arrays should be reshaped before calling linsolve, just like in MATLAB.
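For example, an N-D right-hand side can be folded into columns before the solve and restored afterwards (a sketch):

B = reshape(1:12, 3, 2, 2);   % 3-by-2-by-2 array
B2 = reshape(B, 3, 4);        % fold trailing dimensions into columns
A = eye(3);
X = linsolve(A, B2);          % solves all four right-hand sides at once
X = reshape(X, 3, 2, 2);      % restore the original shape if desired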

These functions work well alongside linsolve. Each page has runnable examples you can try in the browser.

mldivide, mrdivide, lu, chol, gpuArray, gather, cond, det, inv, norm, pinv, rank, rcond

Open-source implementation

Unlike proprietary runtimes, every RunMat function is open-source. Read exactly how linsolve works, line by line, in Rust.

About RunMat

RunMat is an open-source runtime that executes MATLAB-syntax code — faster, on any GPU, with no license required.

  • Simulations that took hours now take minutes. RunMat automatically optimizes your math for GPU execution on Apple, Nvidia, and AMD hardware. No code changes needed.
  • Start running code in seconds. Open the browser sandbox or download a single binary. No license server, no IT ticket, no setup.
  • A full development environment. GPU-accelerated 2D and 3D plotting, automatic versioning on every save, and a browser IDE you can share with a link.


Try RunMat — free, no sign-up

Start running MATLAB code immediately in your browser.