TensorFlow.js tf.min(): Practical Minimum Reductions in JavaScript (2026)

The moment I start doing real numeric work in JavaScript (feature scaling, anomaly checks, loss debugging, data validation), I run into the same problem: I need trustworthy reductions over multi-dimensional data. A single min value can tell you whether your inputs contain negatives when they shouldn’t, whether your logits exploded, whether a mask is working, or whether your normalization step is about to divide by zero.

In TensorFlow.js, tf.min() is the workhorse for that job. It computes minimum values across an entire tensor or along specific axes, and it does so on the same backend you’re already using for your model (WebGL/WebGPU/CPU in the browser, or native bindings in Node if you’ve set that up). The result is a tensor, not a JS number, which matters for performance, memory, and gradients.

I’ll walk you through how tf.min(x, axis?, keepDims?) behaves, how axis reduction changes shapes, why keepDims is the difference between clean broadcasting and shape pain, and the patterns I reach for when I need “min” in real projects (including masked mins and debugging pipelines).

What tf.min() Actually Does (and Why You Should Care)

tf.min() reduces values: it collapses one or more dimensions of a tensor by taking the smallest element along those dimensions.

  • If you reduce across all dimensions, you get a scalar tensor (rank 0) containing the single smallest value.
  • If you reduce across one axis of a 2D tensor, you get a 1D tensor containing per-row or per-column minima.
  • If you reduce across multiple axes, you remove multiple dimensions in one operation.

This is not a “nice to have.” I use tf.min() constantly for:

  • Data validation: “Do I have negative ages? Are pixel values below 0?”
  • Numerical safety: “Is my denominator ever hitting 0?”
  • Feature scaling: min-max normalization starts with min and max.
  • Debugging training: quickly detect NaNs or wild ranges.
  • Masking: compute mins while ignoring padding or invalid entries.

A simple analogy I use when explaining this to teammates: think of a tensor as a spreadsheet, but with more than two directions. tf.min() is like asking, “What’s the smallest number in the whole spreadsheet?” or “What’s the smallest number in each column?” depending on the axis.

The API: Signature, Inputs, and Return Type

The function form is:

tf.min(x, axis?, keepDims?)

Parameters:

  • x: the input tensor (or something convertible to a tensor) whose minimum values you want.
  • axis (optional): a number or an array of numbers specifying which dimension(s) to reduce. If omitted, TensorFlow.js reduces across all dimensions.
  • keepDims (optional): if true, the reduced dimensions stay in the result with size 1. If false (the default), reduced dimensions are removed, reducing the rank.

Return value:

  • A tensor containing the minimum value(s). You still need to read it (async) if you want a JS number/array.

One subtle point that saves time: TensorFlow.js also exposes a method-style call on tensors:

  • x.min(axis?, keepDims?)

Under the hood, you can treat them as the same operation. I personally prefer the function style (tf.min(x, ...)) in shared utility code because it’s explicit, and the method style (x.min(...)) in small local calculations because it reads nicely.

A runnable baseline example (Node or browser)

Below is a complete snippet you can run in a modern setup. In Node, you’ll typically install TensorFlow.js and import it; in the browser you’ll load it via bundler or script tag depending on your toolchain.

// Example: basic tf.min usage
import * as tf from "@tensorflow/tfjs";

const a = tf.tensor1d([0, 1]);
const b = tf.tensor1d([3, 5]);
const c = tf.tensor1d([2, 4, 7]);

// Function style
tf.min(a).print(); // 0
tf.min(b).print(); // 3
tf.min(c).print(); // 2

// Method style (equivalent)
a.min().print(); // 0

Getting values out (without tripping over async)

A Tensor is not a JS array. To inspect values:

  • tensor.print() is convenient for quick checks.
  • await tensor.data() returns a typed array.
  • tensor.dataSync() is synchronous but can block the main thread in the browser.

// Reading the scalar min as a JS number
import * as tf from "@tensorflow/tfjs";

const x = tf.tensor2d([3, 5, 2, 8], [2, 2]);
const m = tf.min(x); // scalar tensor

const mArr = await m.data();
console.log("min =", mArr[0]); // min = 2

// Cleanup if you're not using tf.tidy (more on this later)
x.dispose();
m.dispose();

If you’re debugging in a browser tab, I strongly suggest await tensor.data() over dataSync() unless you’re absolutely sure the tensor is tiny.

Axes: How Reductions Change Shape (with Clear Mental Models)

The axis argument is where most mistakes happen, especially if you’re bouncing between NumPy, PyTorch, and TensorFlow.js.

Rule of thumb I follow:

  • axis = 0 reduces “down the rows” for a 2D tensor: you get one value per column.
  • axis = 1 reduces “across the columns” for a 2D tensor: you get one value per row.
  • Negative axes count from the end: -1 means “last axis,” -2 means “second to last,” etc.

2D example: column mins vs row mins

import * as tf from "@tensorflow/tfjs";

// Shape: [2, 3]
// Row 0: [10,  7, 9]
// Row 1: [ 4, 12, 6]
const x = tf.tensor2d([
  10, 7, 9,
  4, 12, 6,
], [2, 3]);

// axis=0 => one min per column => shape [3]
const colMins = tf.min(x, 0);
colMins.print(); // [4, 7, 6]

// axis=1 => one min per row => shape [2]
const rowMins = tf.min(x, 1);
rowMins.print(); // [7, 4]

I like to sanity-check with a pencil test: “If I reduce axis 0, I’m collapsing the first dimension, so it should disappear and I keep the other dimension.” For shape [2, 3], reducing axis 0 should produce [3]. That mental check catches a lot of bugs.

Reducing multiple axes at once

You can pass an array of axes to reduce multiple dimensions in a single call.

import * as tf from "@tensorflow/tfjs";

// Shape: [2, 2, 3]
const x = tf.tensor3d([
  // batch 0 (2x3)
  1, 9, 2,
  5, 3, 8,
  // batch 1 (2x3)
  4, 7, 6,
  0, 2, 1,
], [2, 2, 3]);

// Reduce the last two dims (a per-batch min)
// axes [1, 2] => result shape [2]
const perBatchMin = tf.min(x, [1, 2]);
perBatchMin.print(); // [1, 0]

// Reduce everything => scalar
const globalMin = tf.min(x);
globalMin.print(); // 0

What about an empty axis list?

An empty axis array ([]) is a corner case I don't rely on: numeric libraries disagree on whether "no axes" means "reduce nothing" or "reduce everything," so code that passes [] invites confusion. In practice I never pass []; when I want a full reduction I omit axis entirely, which is unambiguous and clearer in code review.

Negative axes (the ones I actually use)

When I’m writing reusable ops that should work for both [batch, features] and [batch, time, features], negative axes are my default. -1 is “features” in both layouts.

import * as tf from "@tensorflow/tfjs";

// Shape: [2, 4]
const scores = tf.tensor2d([
  0.2, 0.1, 0.9, 0.3,
  0.5, 0.4, 0.6, 0.0,
], [2, 4]);

// Per-row min using the last axis
const minPerRow = tf.min(scores, -1);
minPerRow.print(); // [0.1, 0]

keepDims: The Shape-Saver for Broadcasting

keepDims is the difference between “this composes nicely” and “why is broadcasting failing at runtime?”

  • keepDims = false (default): reduced axes are removed.
  • keepDims = true: reduced axes remain, but with length 1.

Why I care: broadcasting rules are easier when ranks line up.

Example: subtract the per-row minimum from each row

Goal: for a [batch, features] tensor, subtract each row’s min from every element in that row.

If you compute rowMin with keepDims=false, you get shape [batch]. Subtracting [batch] from [batch, features] is not the broadcast you want.

If you compute rowMin with keepDims=true, you get shape [batch, 1], which broadcasts cleanly across features.

import * as tf from "@tensorflow/tfjs";

const x = tf.tensor2d([
  10, 7, 9,
  4, 12, 6,
], [2, 3]);

// Keep the reduced axis so we can broadcast
const rowMin = tf.min(x, 1, true); // shape [2, 1]
const shifted = x.sub(rowMin);

x.print();
rowMin.print();  // [[7], [4]]
shifted.print(); // [[3, 0, 2], [0, 8, 2]]

This pattern shows up everywhere: normalization, centering, min-max scaling, and “subtract max for softmax stability” (the same idea, different reduction).

Quick table: keepDims in practice

  • Get a single smallest value for logging: tf.min(x) => scalar tensor
  • Per-feature mins for [batch, features]: tf.min(x, 0) => shape [features]
  • Subtract per-row min and broadcast: tf.min(x, 1, true) => shape [batch, 1]
  • Reduce multiple dims but keep ranks aligned: tf.min(x, [1, 2], true) => reduced dims become 1 instead of disappearing

Real-World Patterns I Reach For

This section is where tf.min() stops being a toy and starts being a tool.

1) Min-max scaling (with safe division)

Min-max scaling uses both min and max:

xScaled = (x - min) / (max - min)

You should protect against max == min (constant features) to avoid dividing by zero.

import * as tf from "@tensorflow/tfjs";

// Scale features independently: input shape [batch, features]
function minMaxScalePerFeature(x, epsilon = 1e-6) {
  return tf.tidy(() => {
    const minF = tf.min(x, 0, true); // [1, features]
    const maxF = tf.max(x, 0, true); // [1, features]
    const range = maxF.sub(minF);
    // Avoid division by 0 for constant features
    const safeRange = tf.maximum(range, tf.scalar(epsilon));
    return x.sub(minF).div(safeRange);
  });
}

const x = tf.tensor2d([
  10, 100,
  20, 100,
  30, 100,
], [3, 2]);

minMaxScalePerFeature(x).print();

Notes from experience:

  • I keep dims (true) so the broadcast is unambiguous.
  • I wrap it in tf.tidy() so I don’t leak intermediate tensors.
  • I use a small epsilon guard; otherwise a constant feature can produce Infinity or NaN.

2) Masked minimum (ignore padding or invalid values)

In sequence models and batched data pipelines, you often have padding you don’t want to include. There isn’t a built-in “masked min” op in the core API, but you can build one safely.

Idea: replace masked-out values with +Infinity so they can’t become the minimum.

import * as tf from "@tensorflow/tfjs";

// x: numeric tensor
// mask: boolean tensor, true means "keep", false means "ignore"
function maskedMin(x, mask, axis, keepDims = false) {
  return tf.tidy(() => {
    const inf = tf.fill(x.shape, Number.POSITIVE_INFINITY);
    const safe = tf.where(mask, x, inf);
    return tf.min(safe, axis, keepDims);
  });
}

// Example: shape [2, 4]
const values = tf.tensor2d([
  5, 1, 9, 2,
  8, 7, 3, 4,
], [2, 4]);

// Keep only some entries in each row
const mask = tf.tensor2d([
  1, 0, 1, 0,
  0, 0, 1, 1,
], [2, 4]).cast("bool");

maskedMin(values, mask, 1, false).print(); // per-row masked min: [5, 3]

Two edge cases you should handle in production:

  • If a whole reduction slice is masked out, you’ll get +Infinity. I typically add a second reduction (tf.any(mask, axis, keepDims)) and replace infinities with a sentinel value or throw an error depending on the use case.
  • If your tensor is int32, mixing with Infinity forces a cast. For masking, I often cast to float32 explicitly to keep behavior predictable.

3) Quick sanity checks during training loops

When I’m debugging a training step (especially in the browser), I want cheap signals:

  • Are activations drifting negative?
  • Did a normalization op break?
  • Are gradients exploding?

tf.min() is a solid first check.

import * as tf from "@tensorflow/tfjs";

async function debugTensor(name, t) {
  const minT = tf.min(t);
  const maxT = tf.max(t);
  const minV = (await minT.data())[0];
  const maxV = (await maxT.data())[0];
  console.log(`${name}: min=${minV}, max=${maxV}, shape=${t.shape.join("x")}`);
  minT.dispose();
  maxT.dispose();
}

const logits = tf.randomNormal([32, 10]);
await debugTensor("logits", logits);
logits.dispose();

I keep this kind of helper around even in 2026, despite better dashboards and AI-assisted debugging, because it’s immediate and it works anywhere.

4) Gradients: min is piecewise and ties are tricky

tf.min() is differentiable almost everywhere, but it has kinks:

  • The gradient flows through the element(s) that achieve the minimum.
  • If multiple elements tie for the minimum, the exact gradient distribution can be backend-dependent.

This matters if you try to train with a loss that directly uses min (it’s not common, but it happens in constraint losses and robust objectives).

Here’s a small gradient probe:

import * as tf from "@tensorflow/tfjs";

const x = tf.variable(tf.tensor1d([2, 1, 1, 3]));

// Gradient of min(x) w.r.t. x
const gradFn = tf.grad(v => tf.min(v));
const g = gradFn(x);

x.print();
g.print();

x.dispose();
g.dispose();

If you need smoother behavior, I usually replace hard min with a smooth approximation (for example, a softmin based on log-sum-exp on the negated values). That gives you gradients everywhere, at the cost of a tunable “sharpness” parameter.

Performance and Memory: How to Keep tf.min() from Becoming the Bottleneck

tf.min() itself is generally fast because it runs as a single backend op. The slowdowns I see are usually self-inflicted:

  • Pulling data back to JS too often (await data() / dataSync()).
  • Creating tons of intermediate tensors without disposal.
  • Reducing giant tensors on the main thread and blocking UI.

My default rule: stay in tensor land

If you’re computing a min just to feed another tensor op, don’t read it into JS. Keep it as a tensor and continue.

Bad pattern (forces CPU-visible read and breaks graph-style thinking):

  • compute minTensor
  • const minValue = (await minTensor.data())[0]
  • re-create a scalar tf.scalar(minValue)

Better pattern:

  • use minTensor directly

Use tf.tidy() for pipelines

Any time you create temporary tensors in a helper, wrap it in tf.tidy(). Otherwise, long-running apps (especially browsers) will leak GPU/CPU memory.

import * as tf from "@tensorflow/tfjs";

function shiftByGlobalMin(x) {
  return tf.tidy(() => {
    const m = tf.min(x); // scalar
    return x.sub(m);
  });
}

const x = tf.randomUniform([1000, 1000]);
const y = shiftByGlobalMin(x);

// x and y are still alive here
x.dispose();
y.dispose();

Typical timings (what I actually see)

On modern hardware, reductions like min are often in the “single-digit to a few tens of milliseconds” range for moderately sized tensors, but you can easily push them into the hundreds of milliseconds if:

  • the tensor is huge (tens of millions of elements),
  • you force sync reads repeatedly,
  • you run them on a slower backend,
  • or you do them in a tight loop without tidy/dispose.

When performance matters, I do two things:

1) Reduce the frequency: log mins every N steps, not every step.

2) Reduce the payload: compute mins on a smaller diagnostic slice when possible.

Traditional vs modern approach (what I recommend in 2026)

  • Min of a nested array: traditionally Math.min(...flatArray) or loops; now tf.min(tf.tensor(array)), kept as a tensor
  • Per-axis mins: traditionally manual loops per row/column; now tf.min(x, axis) with clear axis semantics
  • Debugging training values: traditionally console logs of arrays; now tensor mins/maxes plus an occasional await data()
  • Memory management: traditionally relying on GC; now tf.tidy() plus explicit dispose() at boundaries
I still use plain JS mins for tiny arrays that never touch the model path, but for anything in the numeric pipeline, staying inside TF.js is cleaner and usually faster.

Common Mistakes (and the Fix I Apply Immediately)

These are the issues I see most often when teams start using tf.min().

Mistake 1: Confusing axis meaning (shape doesn’t match expectations)

Symptom: you expected shape [features] but got [batch].

Fix: write down the shape next to your tensor variable, then apply the “reduced dimension disappears” check.

  • Input [batch, features]
  • axis=0 => output [features]
  • axis=1 or axis=-1 => output [batch]

Mistake 2: Forgetting keepDims and breaking broadcasting

Symptom: Error: Operands could not be broadcast together or a silent wrong broadcast.

Fix: when the next op expects the reduced dimension to exist (common in normalization), set keepDims=true.

Mistake 3: Pulling scalar mins into JS inside hot loops

Symptom: training slows down, UI stutters, fans spin.

Fix: keep mins as tensors. If you must log numeric values, do it occasionally and asynchronously.

Mistake 4: NaNs and masked data producing weird minima

Symptom: min becomes NaN or -Infinity unexpectedly.

Fixes I reach for:

  • If you suspect NaNs: use tf.isNaN(x) to detect them, and decide whether to replace them (tf.where) or throw.
  • If you’re masking: use the +Infinity replacement pattern and handle the “all masked” case.

Mistake 5: Empty tensors or empty slices

Symptom: reductions on empty data can error or produce values that don’t make sense for your domain.

Fix: validate shapes early. If you have dynamic shapes (common with user-generated data in the browser), add explicit guards before reductions.

When I Use tf.min() vs When I Avoid It

I like being opinionated here because it keeps codebases consistent.

I use tf.min() when:

  • I need a reduction as part of a tensor pipeline (normalization, constraints, debug signals).
  • I want per-axis minima for batched processing.
  • I want results on the same backend as the rest of the model.

I avoid tf.min() when:

  • I’m already in plain JS and the data is tiny (like 5 numbers from UI state).
  • I only need the index of the minimum element (in that case, I reach for tf.argMin() instead of min).
  • I need a differentiable “min-like” behavior for training stability (then I use a smooth approximation rather than a hard min).

If your goal is “give me the smallest value and where it happened,” I typically compute both:

  • tf.min(x, axis) for the value
  • tf.argMin(x, axis) for the index

That keeps intent crystal clear.

Key Takeaways and What I’d Do Next

If you take one thing from this, make it this: tf.min() is simple in concept, but shape handling is everything in practice. I always start by writing down the input shape, then I decide which axis I’m collapsing, and I choose keepDims=true whenever I expect to broadcast the result back into the original tensor.

For everyday work, I use tf.min(x) as a fast, reliable diagnostic: it catches broken preprocessing, invalid ranges, and “why is this loss exploding?” moments earlier than most other signals. For model-adjacent math, I use axis reductions (tf.min(x, 0) or tf.min(x, -1)) to keep computations batched and backend-friendly.

If you’re building something real (a browser ML demo, a Node inference service, or a training loop that runs for minutes), you should treat memory as a first-class concern. Wrap helper pipelines in tf.tidy(), dispose boundary tensors, and avoid pulling values into JS unless you truly need them.

My suggested next steps:

1) Add a tiny debug utility that logs min/max/shape for any tensor you care about.

2) Refactor one normalization or masking step to use keepDims=true and broadcasting cleanly.

3) If you’re doing padding-heavy work, implement maskedMin once and reuse it everywhere so your handling of “all masked” slices stays consistent.
