Neural Network Parameter Counter

Count the total parameters in your neural network. Add layers, specify dimensions, and get the exact parameter count with a breakdown per layer.

Built by Michael Lip

Frequently Asked Questions

How do I count parameters in PyTorch?

In code: sum(p.numel() for p in model.parameters()). Or use this tool to add layers and get a per-layer breakdown. Each layer type has its own formula: Linear = in*out + out, Conv2d = out_ch * in_ch * k * k + out_ch, and LSTM = 4 * ((in + hid) * hid + 2*hid) per layer (PyTorch stores separate input-hidden and hidden-hidden bias vectors, each of size 4*hid).
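
A minimal runnable sketch of both approaches (the model here is a made-up example; substitute your own):

import torch.nn as nn

# Hypothetical example model, assuming 3x32x32 inputs.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3),   # 16*3*3*3 + 16 = 448 params
    nn.Flatten(),
    nn.Linear(16 * 30 * 30, 10),       # 14400*10 + 10 = 144,010 params
)

# Total count: the one-liner from above.
total = sum(p.numel() for p in model.parameters())
print(f"total: {total:,}")             # 144,458

# Per-layer breakdown.
for name, module in model.named_children():
    n = sum(p.numel() for p in module.parameters())
    print(f"{name} ({type(module).__name__}): {n:,}")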

Why does parameter count matter?

Parameter count affects:
1) Model file size: params * 4 bytes for float32.
2) GPU memory during training: roughly params * 16 bytes with Adam (weights, gradients, and two optimizer moment buffers, each 4 bytes per parameter).
3) Inference speed, since the weights must be read from memory on every forward pass.
4) Risk of overfitting: more parameters means more capacity, which needs more data.
A back-of-the-envelope sketch of the first two costs follows below.
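
A rough worked example, assuming float32 throughout (real usage also includes activations, buffers, and framework overhead):

params = 7_000_000_000              # hypothetical 7B-parameter model

file_size_gb = params * 4 / 1e9     # float32 checkpoint: ~28 GB
train_mem_gb = params * 16 / 1e9    # Adam in float32: ~112 GB
# 16 bytes/param = weights (4) + gradients (4) + Adam first moment (4) + second moment (4)

print(f"checkpoint: {file_size_gb:.0f} GB, Adam training state: {train_mem_gb:.0f} GB")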

Which layers have the most parameters?

Fully connected (Linear) layers typically dominate. A single Linear(4096, 4096) has 16.8M parameters (4096 * 4096 weights plus 4096 biases). Embedding layers in NLP models are also large: vocab_size * embedding_dim grows fast. Conv2d layers are relatively parameter-efficient because the same kernel weights are shared across all spatial positions.
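
To make the contrast concrete, a small comparison sketch (the layer sizes are illustrative picks, not from any particular model):

import torch.nn as nn

fc = nn.Linear(4096, 4096)
conv = nn.Conv2d(256, 256, kernel_size=3)   # same 3x3 kernels reused at every position

fc_params = sum(p.numel() for p in fc.parameters())      # 4096*4096 + 4096 = 16,781,312
conv_params = sum(p.numel() for p in conv.parameters())  # 256*256*3*3 + 256 = 590,080

print(f"Linear(4096, 4096): {fc_params:,}")
print(f"Conv2d(256, 256, 3): {conv_params:,}")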

Is this tool free?

Yes. All HeyTensor tools are free, run in your browser, and require no signup.

Does this work offline?

Once loaded, the tool runs entirely in your browser. No internet needed after the initial page load.

About This Tool

This tool is part of HeyTensor, a free suite of PyTorch and deep learning utilities. All calculations run entirely in your browser — no data is sent to any server. The source code is open on GitHub.

Contact

HeyTensor is built and maintained by Michael Lip. For questions or feedback, email [email protected].
