std — Standard deviation of scalars, vectors, matrices, or N-D tensors with MATLAB-compatible options.
std(x) measures the spread of the elements in x. By default RunMat matches MATLAB’s sample definition (dividing by n-1) and works along the first non-singleton dimension.
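For example, on a matrix the default call reduces along the first non-singleton dimension (the columns) with the n-1 denominator:

```matlab
A = [2 4; 6 8];
colStd = std(A)   % column-wise sample std -> [2.8284 2.8284]
```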
How does the std function behave in MATLAB / RunMat?
- `std(X)` on an m × n matrix returns a 1 × n row vector with the sample standard deviation of each column. `std(X, 1)` switches to population normalisation (n in the denominator). Use `std(X, 0)` or `std(X, [])` to keep the default sample behaviour.
- `std(X, flag, dim)` lets you pick both the normalisation (`flag = 0` sample, `1` population, or `[]`) and the dimension to reduce.
- `std(X, flag, 'all')` collapses every dimension, while `std(X, flag, vecdim)` accepts a dimension vector such as `[1 3]` and reduces all listed axes in a single call. Multi-axis reductions execute on the host today when the active GPU provider cannot fuse them.
- Strings like `'omitnan'` and `'includenan'` decide whether `NaN` values are skipped or propagated.
- Optional out-type arguments (`'double'`, `'default'`, `'native'`, or `'like', prototype`) mirror MATLAB behaviour. `'native'` rounds scalar integer results back to their original class; `'like'` mirrors both the numeric class and device residency of `prototype` (complex prototypes yield complex outputs with zero imaginary parts).
- Logical inputs are promoted to double precision before reduction so that results follow MATLAB's numeric rules.
- Empty slices return `NaN` with MATLAB-compatible shapes. Scalars return `0`, regardless of the normalisation mode (see the snippet after this list).
- Dimensions greater than `ndims(X)` leave the input untouched.
- Weighted standard deviations (`flag` as a vector) are not implemented yet; RunMat reports a descriptive error when they are requested.
- Complex tensors are not currently supported; convert them to real magnitudes manually before calling `std`.
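The edge-case rules above are easy to check interactively; a small illustrative sketch:

```matlab
e = std([])                 % empty input  -> NaN
s = std(5)                  % scalar input -> 0, whichever normalisation flag is used
b = std(logical([0 1 1 0])) % logical data is promoted to double -> 0.5774
```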
GPU behavior
When RunMat Accelerate is active, device-resident tensors remain on the GPU whenever the provider implements the relevant hooks. Providers that expose `reduce_std_dim`/`reduce_std` execute the reduction directly on the device; the default WGPU backend currently supports two-dimensional inputs, single-axis reductions, and 'includenan' only. Whenever 'omitnan', a multi-axis reduction, or an unsupported shape is requested, RunMat transparently gathers the data to the host, computes the result there, and then applies the requested output template ('native', 'like') before returning.
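The two paths can be sketched as follows, assuming a GPU provider is registered; only the first call is expected to stay on the device with the default WGPU backend, while the 'omitnan' variant falls back to the host transparently:

```matlab
G = gpuArray(rand(4096, 256));

% Single-axis reduction with the default 'includenan' semantics:
% a provider exposing reduce_std_dim can keep this on the device.
colStd = std(G, 0, 1);

% 'omitnan' is outside the default backend's fast path, so RunMat gathers
% the data, reduces on the host, and returns the result transparently.
colStdOmit = std(G, 0, 1, 'omitnan');
```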
GPU residency
Usually you do not need to call gpuArray manually. The fusion planner keeps tensors on the GPU across fused expressions and gathers them only when necessary. For explicit control or MATLAB compatibility, you can still call gpuArray/gather yourself.
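A minimal sketch of the explicit workflow, assuming a GPU provider is available:

```matlab
A = rand(2048, 2048);
G = gpuArray(A);            % explicit upload (usually unnecessary)
s = std(G, 0, 'all');       % stays device-resident while the provider supports it
result = gather(s);         % bring the scalar back to the host when needed
```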
Examples of using std in MATLAB / RunMat
Sample standard deviation of a vector

```matlab
x = [1 2 3 4 5];
s = std(x); % uses flag = 0 (sample) by default
```

Expected output:

```
s = 1.5811
```

Population standard deviation of each column

```matlab
A = [1 3 5; 2 4 6];
spop = std(A, 1); % divide by n instead of n-1
```

Expected output:

```
spop = [0.5 0.5 0.5]
```

Collapsing every dimension at once

```matlab
B = reshape(1:12, [3 4]);
overall = std(B, 0, 'all')
```

Expected output:

```
overall = 3.6056
```

Reducing across multiple dimensions

```matlab
C = cat(3, [1 2; 3 4], [5 6; 7 8]);
sliceStd = std(C, [], [1 3]); % keep columns, reduce rows & pages
```

Expected output:

```
sliceStd = [2.5820 2.5820]
```

Ignoring NaN values

```matlab
D = [1 NaN 3; 2 4 NaN];
rowStd = std(D, 0, 2, 'omitnan')
```

Expected output:

```
rowStd = [1.4142; 1.4142]
```

Matching a prototype using 'like'

```matlab
proto = gpuArray(single(42));
G = gpuArray(rand(1024, 512));
spread = std(G, 1, 'all', 'like', proto);
answer = gather(spread)
```

Preserving default behaviour with an empty normalisation flag

```matlab
C = [1 2; 3 4];
rowStd = std(C, [], 2)
```

Expected output:

```
rowStd = [0.7071; 0.7071]
```

FAQ
What values can I pass as the normalisation flag?
Use `0` (or `[]`) for the sample definition and `1` for the population definition. Passing a vector of weights as the flag is rejected; RunMat reports that weighted standard deviations are not implemented yet.
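The two normalisations differ only in the denominator; a quick worked comparison:

```matlab
v = [2 4 6];              % squared deviations from the mean sum to 8
sampleStd = std(v, 0)     % sqrt(8 / (3 - 1)) = 2.0000
popStd    = std(v, 1)     % sqrt(8 / 3)       = 1.6330
```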
How can I collapse multiple dimensions?
Pass a vector of dimensions such as std(A, [], [1 3]). You can also use 'all' to collapse every dimension into a single scalar.
How do 'omitnan' and 'includenan' work?
'omitnan' skips NaN values; if every element in a slice is NaN the result is NaN. 'includenan' (the default) propagates a single NaN to the output slice.
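A quick contrast of the two flags:

```matlab
x = [1 NaN 3];
std(x)               % 'includenan' (default) -> NaN
std(x, 0, 'omitnan') % drops the NaN, std([1 3]) -> 1.4142
```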
What do 'native' and 'like' do?
'native' rounds scalar results back to the input’s integer class (multi-element outputs stay in double precision for now), while 'double'/'default' keep double precision. 'like', prototype mirrors both the numeric class and the device residency of prototype, including GPU tensors; complex prototypes produce complex outputs with zero imaginary parts.
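A sketch of the out-type options; the integer input and the argument order for 'like' follow the description above and are assumptions rather than guaranteed signatures:

```matlab
xi = int16([2 4 6]);
sNative = std(xi, 0, 'native')          % scalar result rounded back to int16 -> 2
sLike   = std(xi, 0, 'like', single(0)) % result adopts the prototype's class -> single 2
```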
What happens if I request a dimension greater than ndims(X)?
RunMat returns the input unchanged so that MATLAB-compatible code relying on that behaviour continues to work.
Are complex inputs supported?
Not yet. RunMat currently requires real inputs for std. Convert complex data to magnitude or separate real/imaginary parts before calling the builtin.
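A simple workaround until complex support lands:

```matlab
z = [1+2i, 3-4i, -2+1i];
s = std(abs(z))   % reduce the real magnitudes instead of the complex samples
```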
See also
mean, sum, median, gpuArray, gather
Source & Feedback
- Source code: `crates/runmat-runtime/src/builtins/math/reduction/std.rs`
- Found a bug? Open an issue with a minimal reproduction.