tanh — Hyperbolic tangent of scalars, vectors, matrices, complex numbers, or character arrays with MATLAB broadcasting and GPU acceleration.
y = tanh(x) evaluates the hyperbolic tangent of each element in x, preserving MATLAB's column-major layout and broadcasting rules across scalars, arrays, and tensors.
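Mathematically, tanh(x) = sinh(x) / cosh(x) = (e^x - e^(-x)) / (e^x + e^(-x)), so real inputs always produce outputs in the open interval (-1, 1).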
How tanh works in RunMat
- Operates on scalars, vectors, matrices, and N-D tensors with MATLAB-compatible implicit expansion.
- Logical and integer inputs are promoted to double precision before evaluation so downstream arithmetic keeps MATLAB's numeric semantics.
- Complex values follow the analytic extension tanh(a + bi) = sinh(a + bi) / cosh(a + bi), propagating NaN/Inf values component-wise.
- Character arrays are interpreted through their Unicode code points and return dense double arrays that mirror MATLAB's behavior (see the sketch after this list).
- Inputs that already live on the GPU stay resident when the provider implements unary_tanh; otherwise RunMat gathers to the host, computes, and reapplies residency hints for later operations.
- Empty inputs and singleton dimensions are preserved without introducing extraneous allocations.
- String and string-array arguments raise descriptive errors to match MATLAB's numeric-only contract for the hyperbolic family.
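The promotion rules above can be exercised directly from MATLAB-syntax code. A minimal sketch (outputs shown as comments, rounded to four decimals):

```matlab
% Logical inputs are promoted to 0.0/1.0 doubles before evaluation.
tanh(logical([0 1]))     % [0 0.7616]

% Character arrays are read through their Unicode code points ('A' is 65).
tanh('A')                % 1.0000 (tanh saturates for large magnitudes)

% NaN and Inf components propagate element-wise.
tanh([NaN -Inf Inf])     % [NaN -1 1]
```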
How tanh runs on the GPU
With RunMat Accelerate active, tensors remain on the device and execute through the provider's unary_tanh hook (or a fused elementwise kernel) without leaving GPU memory.
If the provider declines the operation—for example, when it lacks the hook for the active precision—RunMat transparently gathers to the host, computes the result, and reapplies the requested residency rules.
Fusion planning keeps neighbouring elementwise operators grouped, reducing host↔device transfers even when an intermediate fallback occurs.
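As an illustration, a purely elementwise chain like the sketch below is the kind of expression the planner can group into one kernel; whether it actually fuses depends on the active provider and precision. The array size and constants are arbitrary choices for the example:

```matlab
% Every operator here is elementwise, so the whole expression is a
% candidate for a single fused kernel on the device.
x = rand(4096);
y = 0.5 * (tanh(2 * x) + 1);   % squashes values smoothly into (0, 1)
```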
GPU memory and residency
You usually do **not** need to call gpuArray explicitly. The fusion planner keeps tensors on the GPU whenever the active provider exposes the necessary kernels (such as unary_tanh). Manual gpuArray / gather calls remain supported for MATLAB compatibility or when you need to pin residency before interacting with external code.
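A minimal sketch of explicit residency control, assuming you want to pin the input up front and copy back only the final result:

```matlab
% Pin the input on the device, run the elementwise work there,
% and gather a single result back to the host at the end.
X = gpuArray(rand(1024));
Y = tanh(X) .^ 2;          % stays device-resident when kernels are available
result = gather(Y);
```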
Examples
Hyperbolic tangent of a real scalar

    y = tanh(1)

Expected output:

    y = 0.7616

Applying tanh to a symmetric vector

    x = linspace(-2, 2, 5);
    y = tanh(x)

Expected output:

    y = [-0.9640 -0.7616 0 0.7616 0.9640]

Evaluating tanh on a matrix in GPU memory

    G = gpuArray([0 0.5; 1.0 1.5]);
    result_gpu = tanh(G);
    result = gather(result_gpu)

Expected output:

    result =
        0         0.4621
        0.7616    0.9051

Computing tanh for complex angles

    z = 0.5 + 1.0i;
    w = tanh(z)

Expected output:

    w = 1.0428 + 0.8069i

Converting character codes via tanh

    c = tanh('ABC')

Expected output:

    c = [1.0000 1.0000 1.0000]

Preserving empty array shapes

    E = zeros(0, 3);
    out = tanh(E)

Expected output:

    out = zeros(0, 3)

Stabilising activation functions

    inputs = [-3 -1 0 1 3];
    activations = tanh(inputs / 2)

Expected output:

    activations = [-0.9051 -0.4621 0 0.4621 0.9051]

FAQ
When should I reach for tanh?
Use tanh for hyperbolic tangent evaluations—common in signal processing, numerical solvers, and neural-network activations thanks to its bounded output.
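For example, tanh makes a simple soft limiter. The drive constant below is an arbitrary illustrative choice; dividing by tanh(drive) renormalises the curve so an input of 1 maps back to 1:

```matlab
% Soft-clip a signal: large excursions are squashed towards +/-1,
% while small values pass through almost linearly.
drive = 3;                              % illustrative gain
t = linspace(0, 1, 1000);
signal = 1.5 * sin(2 * pi * 5 * t);     % deliberately exceeds [-1, 1]
clipped = tanh(drive * signal) / tanh(drive);
```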
Does tanh support complex numbers?
Yes. RunMat mirrors MATLAB by evaluating tanh(z) = sinh(z) / cosh(z) for complex z, producing correct real and imaginary parts while propagating NaN/Inf values.
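A quick way to confirm the identity is to compare the two expressions directly; they agree to roughly machine precision:

```matlab
% tanh(z) matches sinh(z) / cosh(z) for complex z.
z = 0.5 + 1.0i;
abs(tanh(z) - sinh(z) / cosh(z))   % on the order of 1e-16
```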
How does the GPU fallback work?
If the provider lacks unary_tanh, RunMat gathers the tensor to host memory, computes the result, and reapplies residency choices so downstream GPU consumers still see device-backed tensors when appropriate.
Can tanh appear in fused GPU kernels?
Absolutely. The fusion planner emits WGSL kernels that inline tanh, and providers can supply custom fused pipelines for even higher performance.
How does tanh treat logical arrays?
Logical arrays are promoted to 0.0 or 1.0 doubles before evaluation, matching MATLAB's behavior for the hyperbolic family.
What happens with empty or singleton dimensions?
Shapes are preserved. Empty inputs return empty outputs, and singleton dimensions remain intact so downstream broadcasting behaves as expected.
Do I need to worry about numerical overflow?
tanh saturates towards ±1 for large-magnitude real inputs, providing stable results. Complex poles can still yield infinities, mirroring MATLAB.
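For instance, even extreme real inputs stay bounded:

```matlab
% Real inputs saturate cleanly; no overflow even for huge magnitudes.
tanh([-40 0 40])    % [-1 0 1]
tanh(1e308)         % 1
```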
Can I differentiate tanh in RunMat?
Yes. The autograd infrastructure recognises tanh as a primitive and records it on the reverse-mode tape for native gradients once acceleration is enabled.
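The underlying gradient is the standard identity d/dx tanh(x) = 1 - tanh(x)^2, which you can sanity-check numerically without touching any RunMat-specific API:

```matlab
% Compare the analytic tanh gradient against a central finite difference.
x = 0.7;
h = 1e-6;
analytic = 1 - tanh(x)^2;
numeric  = (tanh(x + h) - tanh(x - h)) / (2 * h);
abs(analytic - numeric)    % on the order of 1e-10 or smaller
```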
Related functions to explore
These functions work well alongside tanh. Each page has runnable examples you can try in the browser.
sinh, cosh, atanh, gpuArray, gather, acos, acosh, asin, asinh, atan, atan2, cos, sin, tan
Open-source implementation
Unlike proprietary runtimes, every RunMat function is open-source. Read exactly how tanh works, line by line, in Rust.
- View tanh.rs on GitHub
- Learn how the runtime works
- Found a bug? Open an issue with a minimal reproduction.
About RunMat
RunMat is an open-source runtime that executes MATLAB-syntax code — faster, on any GPU, with no license required.
- Simulations that took hours now take minutes. RunMat automatically optimizes your math for GPU execution on Apple, Nvidia, and AMD hardware. No code changes needed.
- Start running code in seconds. Open the browser sandbox or download a single binary. No license server, no IT ticket, no setup.
- A full development environment. GPU-accelerated 2D and 3D plotting, automatic versioning on every save, and a browser IDE you can share with a link.