I first felt the pointer–array relationship click while debugging a crash in a data parser. The parser stored fixed-width records in a buffer, and I kept mixing up where the “buffer starts” versus where my cursor was. The bug wasn’t exotic; I was walking past the end of an array because I mentally treated the array name like a normal pointer and forgot the rules that make it special. Once I re-framed arrays as blocks of contiguous memory and pointers as movable addresses, the behavior became predictable. That mental model is what I’ll use here.
If you write C in 2026, you still need this clarity. Tooling is better—sanitizers, static analysis, AI assistants—but the language model hasn’t changed: arrays decay to pointers, pointer arithmetic is scaled by element size, and multi-dimensional arrays are arrays of arrays. I’ll show you what that means with runnable code, point out places that fool people even after years of experience, and give practical guidance on when pointer-style and array-style access is the right call.
Arrays are blocks; pointers are addresses
I start with a simple, literal framing. An array is a fixed-size block of elements laid out back-to-back. A pointer is an address that can move within or across such blocks. That’s it. From there, most rules become a matter of how the compiler interprets expressions.
In C, the name of an array is not a variable. It’s more like a fixed label on a storage box. You can’t reassign it. But in many expressions, that label “decays” to a pointer to the first element. That’s why arr and &arr[0] are interchangeable in most contexts.
Here’s the smallest example I use when teaching this:
#include <stdio.h>
int main(void) {
int prices[3] = { 5, 10, 15 };
// Indexing
printf("%d\n", prices[0]);
// Pointer-style
printf("%d\n", *prices);
return 0;
}
Both prints show the same value because prices decays to int* and points at the first int. The array itself still has a size of 3, but the expression in most contexts gives you a pointer to the first element.
A short analogy I like: the array name is the street address of an apartment building. It’s fixed. A pointer is a person walking through the hallway with a room number written on a sticky note. The address doesn’t change; the person can move room to room.
The array name is special: what decays and what doesn’t
The decay rule has exceptions that matter. I keep these in a short list:
- In most expressions, arr becomes &arr[0].
- In sizeof(arr), the array does NOT decay. You get the full array size.
- In &arr, you get a pointer to the whole array, not to the first element.
That last bullet is the root of a lot of confusion. Let’s see the types:
#include <stdio.h>
int main(void) {
int ids[4] = { 101, 102, 103, 104 };
printf("sizeof(ids) = %zu\n", sizeof(ids));
printf("sizeof(ids[0]) = %zu\n", sizeof(ids[0]));
printf("sizeof(&ids) = %zu\n", sizeof(&ids));
printf("sizeof(ids + 1) = %zu\n", sizeof(ids + 1));
return 0;
}
On a 64-bit system, you’ll typically see:
- sizeof(ids) is 16 (4 ints * 4 bytes)
- sizeof(ids[0]) is 4
- sizeof(&ids) is 8 (a pointer)
- sizeof(ids + 1) is 8 (a pointer)
&ids has type int (*)[4], a pointer to an array of 4 ints. That’s different from int *, and the pointer arithmetic is scaled differently. This becomes crucial with multi-dimensional arrays, which I’ll cover later.
A quick rule I keep in my head: “Array name is a pointer value in most places, but not in the places that ask about storage.” sizeof and & care about storage; most other expressions care about values.
Pointer arithmetic: why arr + 1 moves one element
Pointer arithmetic is where array access feels magical, but it’s mechanical. When you add an integer k to a pointer of type T, the compiler scales the address by k * sizeof(T). That’s why arr + 1 points to the next element, not the next byte.
Here’s a traversal example that uses the pointer directly. I’m using meaningful names instead of placeholder ones.
#include <stdio.h>
int main(void) {
int samples[5] = { 1, 2, 3, 4, 5 };
int *cursor = &samples[0];
for (int i = 0; i < 5; i++) {
printf("%d ", *(cursor + i));
}
printf("\n");
return 0;
}
You could also write cursor[i] because pointer[index] is defined as *(pointer + index). That is not syntax sugar; it is literally how the language defines indexing. This is why you can even write i[cursor] and it still works (though I don’t recommend doing that in production).
I think of pointer arithmetic as stepping stones in a stream. The stones are the elements. Your pointer is your foot. A single step moves you to the next stone, not a random number of inches. The compiler figures out the distance based on element size.
A quick check with addresses
If you want to prove this to yourself, print addresses in a debug build:
#include <stdio.h>
int main(void) {
int samples[3] = { 7, 8, 9 };
printf("&samples[0] = %p\n", (void*)&samples[0]);
printf("&samples[1] = %p\n", (void*)&samples[1]);
printf("&samples[2] = %p\n", (void*)&samples[2]);
return 0;
}
On a system where int is 4 bytes, each address will be 4 bytes apart. If int were 2 or 8 bytes, the steps would match that size. This is why pointer arithmetic is powerful and dangerous: it’s fast and compact, but it trusts you to stay in bounds.
One more mental model: pointer ranges
I often rewrite loops using a start pointer and an end pointer because the bounds become clearer. It’s also easier to reason about when I’m slicing subranges.
int *p = samples;
int *end = samples + 5; // one past last element
for (; p < end; p++) {
printf("%d ", *p);
}
Notice the “one past last element” rule: end points to the position after the last element, which is legal as an address value but illegal to dereference. That’s a critical concept in C, and it applies to both arrays and pointers.
Passing arrays to functions: decay and lost size
When I review C code, the most frequent pointer–array mistake I see is expecting sizeof to “just work” in a function parameter. It won’t, because the array decays to a pointer when passed to a function.
Every one of these function signatures is effectively the same:
void analyze_a(int data[3]);
void analyze_b(int data[]);
void analyze_c(int *data);
Inside those functions, data is an int*. You no longer know the array length unless you pass it separately. Here’s a runnable example that shows the size change:
#include <stdio.h>
void f1(int data[3]) {
printf("Size in f1: %zu bytes\n", sizeof(data));
}
void f2(int data[]) {
printf("Size in f2: %zu bytes\n", sizeof(data));
}
void f3(int *data) {
printf("Size in f3: %zu bytes\n", sizeof(data));
}
int main(void) {
int data[3] = { 1, 2, 3 };
printf("Size in main: %zu bytes\n", sizeof(data));
f1(data);
f2(data);
f3(data);
return 0;
}
On a 64-bit system, sizeof(data) in main is 12, but inside the functions it’s 8 because it’s a pointer. That’s the decay rule in action.
Practical guidance I follow
- Always pass the length as a separate argument.
- Use size_t for lengths to match sizeof.
- If the function does not modify the array, use const.
Here’s the pattern I use in new code:
#include <stdio.h>
#include <stddef.h>
void print_scores(const int *scores, size_t count) {
for (size_t i = 0; i < count; i++) {
printf("%d ", scores[i]);
}
printf("\n");
}
int main(void) {
int scores[] = { 98, 87, 91, 76 };
size_t count = sizeof(scores) / sizeof(scores[0]);
print_scores(scores, count);
return 0;
}
That last expression, sizeof(scores) / sizeof(scores[0]), is safe as long as it’s in the same scope as the array. Once you pass it to another function, the size is gone unless you carry it along.
A pattern I avoid: sentinel-only arrays
You’ll sometimes see arrays passed without a length, terminated by a sentinel value (like a zero). This can work for strings, but it’s easy to misuse for binary or mixed data. Unless your format explicitly guarantees a sentinel, you’ll have fewer bugs passing a length, even if it’s an extra argument.
2D arrays: pointers to arrays, not arrays of pointers
Two-dimensional arrays are where pointer and array relationships can feel intimidating. I approach them with the “array of arrays” model. A 2D array like int grid[3][4] is an array of 3 elements, where each element is an array of 4 ints. That means grid decays to a pointer to an array of 4 ints: int (*)[4].
This is why the canonical pointer expression for grid[i][j] is:
*(*(grid + i) + j)
Let’s derive it with meaning rather than memorization:
- grid points to the first row (an array of 4 ints)
- grid + i points to the i-th row
- *(grid + i) gives the row itself (which decays to int *)
- *(grid + i) + j points to the j-th element in that row
- *(*(grid + i) + j) gives the element
Here’s a full example with traversal using pointer notation:
#include <stdio.h>
int main(void) {
int grid[3][4] = {
{ 1, 2, 3, 4 },
{ 5, 6, 7, 8 },
{ 9, 10, 11, 12 }
};
for (int row = 0; row < 3; row++) {
for (int col = 0; col < 4; col++) {
int value = *(*(grid + row) + col);
printf("%d ", value);
}
printf("\n");
}
return 0;
}
A quick analogy: imagine a bookshelf. Each shelf is a 1D array (a row). The bookshelf itself is an array of shelves. grid points to the first shelf, not to the first book.
Passing 2D arrays to functions
Because the pointer type includes the column size, you must include that column size in the function parameter type:
#include <stdio.h>
void print_grid(int rows, int cols, int grid[][4]) {
for (int r = 0; r < rows; r++) {
for (int c = 0; c < cols; c++) {
printf("%d ", grid[r][c]);
}
printf("\n");
}
}
int main(void) {
int grid[3][4] = {
{ 1, 2, 3, 4 },
{ 5, 6, 7, 8 },
{ 9, 10, 11, 12 }
};
print_grid(3, 4, grid);
return 0;
}
If you want a truly variable column size, you can use a C99 variable-length array parameter:
#include <stdio.h>
void print_grid(int rows, int cols, int grid[rows][cols]) {
for (int r = 0; r < rows; r++) {
for (int c = 0; c < cols; c++) {
printf("%d ", grid[r][c]);
}
printf("\n");
}
}
That is still an array of arrays, just with sizes known at runtime. The key is that the column count is part of the pointer type.
Array-of-pointers vs pointer-to-array
This is a subtle but important distinction. You can model a “2D” structure as:
- A true 2D array: contiguous, row-major memory
- An array of pointers to rows: each row can live separately
If you allocate rows separately, you get an int **. If you allocate a single block and treat it as 2D, you want int (*)[cols] or a flat int * with manual indexing.
Here’s the difference in memory layout:
int grid1[3][4]; // one contiguous block
int *grid2[3]; // array of 3 pointers (rows separate)
For grid2, each grid2[r] might point to a different allocation. That can be useful, but it’s not the same thing as a contiguous 2D array. This matters for cache behavior and for APIs that expect contiguous data.
Common mistakes I still see in 2026
Even with modern tooling, these are the classic traps I keep spotting in reviews and incident reports.
1) Expecting sizeof to give array length inside a function
This is the decay trap. In a parameter list, int arr[] is just int*. Use a separate length argument.
2) Confusing int ** with int[rows][cols]
A 2D array is not a pointer to pointer. An int ** points to pointers, not to contiguous rows. If you allocate with malloc as a flat block, use int * and index manually, or use a VLA parameter to keep the pointer-to-array type.
3) Pointer arithmetic on the wrong type
If you take &grid instead of grid, you get a pointer to the whole array type. &grid + 1 jumps by the size of the entire array, not by one row. That can surprise you if you intended row stepping.
4) Returning pointers to local arrays
A local array lives on the stack. Returning its address gives a dangling pointer. If you need a buffer that outlives the function, allocate it dynamically or pass a buffer in.
5) Off-by-one loops
Pointer arithmetic does not protect you. I rely on explicit bounds like i < count and pass count along. When iterating with pointers, I prefer using an end pointer:
int *p = data;
int *end = data + count;
for (; p < end; p++) {
/* use *p */
}
That reads cleanly and matches how I reason about bounds.
6) Mixing pointer increments with array indexing
This is a subtle source of bugs in tight loops:
int *p = data;
for (int i = 0; i < count; i++) {
sum += p[i];
p++;
}
You just skipped every other element, because the pointer moved and the index also moved. I see this kind of bug in high-performance code more often than you’d expect. Choose one style per loop.
7) Assuming string rules apply to all arrays
A char[] used as a string is null-terminated; a char[] used as binary data is not. Confusing these two leads to overreads and incorrect length calculations.
When I use array syntax vs pointer syntax
Both forms are valid; choosing the right one is about clarity and intent. Here’s how I decide.
I use array indexing when:
- I want direct clarity for 2D or 3D shapes
- I’m accessing by logical position (row, column)
- I want boundary checks to be obvious in the code review
I use pointer notation when:
- I’m doing a tight loop where pointer stepping is clearer
- I need to work with slices of a bigger array
- I’m interfacing with APIs that take pointers
To make this concrete, here’s a small comparison table. This is not about speed; it’s about clarity in the codebase.
Traditional array indexing vs. pointer notation:

- Sum a range: for (i = 0; i < n; i++) sum += a[i]; vs. for (p = a; p < a+n; p++) sum += *p;
- Access an element: grid[r][c] vs. *(*(grid + r) + c)
- Pass to a function: f(array, count) vs. f(ptr, count) with a pointer to a slice
- Take a slice: &a[start] vs. p = a + start and walk

In 2026, I still default to array indexing for readability unless pointer stepping makes the intent clearer. Clarity in code reviews beats cleverness.
Real-world scenarios and edge cases
Here are three cases where the pointer–array relationship decides whether the program is safe.
1) Parsing a network frame
Many protocols include length-prefixed data. You receive a buffer and need to walk through it. The safest model is a pointer that moves through a flat array while a size counter keeps you in bounds.
#include <stdio.h>
#include <stdint.h>
#include <stddef.h>
int parse_frame(const uint8_t *buffer, size_t length) {
if (length < 2) return -1;
const uint8_t *p = buffer;
const uint8_t *end = buffer + length;
uint8_t version = *p++;
uint8_t payload_len = *p++;
(void)version; // this example does not use the version byte
if ((size_t)(end - p) < payload_len) return -2;
// Process payload
for (uint8_t i = 0; i < payload_len; i++) {
printf("%u ", p[i]);
}
printf("\n");
return 0;
}
The array is the buffer. The pointer is the cursor. That separation reduces mistakes.
2) A fixed-size ring buffer
Arrays are perfect for ring buffers. Pointers are still useful for a working position, but the array is the storage. You’ll often see head and tail indices rather than raw pointers to keep wrap-around math simple.
Here’s a compact ring buffer example that uses indices but still relies on the array–pointer relationship for access:
#include <stddef.h>
#define CAPACITY 8
typedef struct {
int data[CAPACITY];
size_t head;
size_t tail;
size_t count;
} ring_t;
int ring_push(ring_t *r, int value) {
if (r->count == CAPACITY) return -1; // full
r->data[r->tail] = value;
r->tail = (r->tail + 1) % CAPACITY;
r->count++;
return 0;
}
int ring_pop(ring_t *r, int *out) {
if (r->count == 0) return -1; // empty
*out = r->data[r->head];
r->head = (r->head + 1) % CAPACITY;
r->count--;
return 0;
}
The array is fixed, fast, and cache-friendly. The “moving” behavior is in indices, not in the array itself.
3) Image processing with 2D arrays
If you have a fixed-size image, a true 2D array can be the cleanest representation. For variable-sized images, I prefer a flat uint8_t* with manual indexing to keep memory contiguous and compatible with I/O APIs.
Here’s a flat layout with explicit indexing:
uint8_t *img = malloc((size_t)width * height);
if (!img) return -1;
// pixel at (x, y)
#define PIXEL(x, y) img[(y) * width + (x)]
PIXEL(10, 5) = 255;
Even though I’m not using int[rows][cols], I’m still relying on the same conceptual rule: pointer arithmetic scales by element size, and each row is a contiguous run.
A deeper look at array-to-pointer decay
Here’s a nuance that helped me: array-to-pointer decay is not an automatic “conversion” for all uses; it’s a rule applied in specific expression contexts. That matters because it explains why sizeof and & behave differently. The common cases where decay happens include:
- Most expressions (assignment, arithmetic, comparison)
- Function arguments
- array[index], where the array is treated as a pointer expression
The cases where decay does not happen:
- sizeof(array)
- &array
- _Alignof(array)
- A string literal used to initialize a character array
You don’t need to memorize the full list, but knowing that decay is context-based helps you reason about surprising code.
Demonstrating the difference with types
If you have a compiler that can show types (or you use a static analysis tool), these two are not the same:
int a[4];
int *p = a; // ok, a decays
int (*q)[4] = &a; // ok, &a is pointer to array
p + 1 advances by one int. q + 1 advances by one array of 4 ints. That difference is exactly why &a and a are not interchangeable.
Alternative approaches: flat arrays vs pointer-to-pointer
When you build a table or matrix dynamically, you have options. I pick based on three criteria: contiguous memory, ease of indexing, and compatibility with APIs.
Option A: contiguous flat array
- One allocation
- Great cache behavior
- Manual indexing
int *m = malloc((size_t)rows * cols * sizeof(int));
if (!m) return -1;
#define M(r, c) m[(r) * cols + (c)]
M(2, 3) = 42;
This is my default for performance-sensitive code.
Option B: pointer to array with VLA
- One allocation
- Natural m[r][c] indexing
- Uses a VLA type for readability
int (*m)[cols] = malloc((size_t)rows * cols * sizeof(int));
if (!m) return -1;
m[2][3] = 42;
This is very readable and still contiguous. It’s great when you want natural indexing but don’t know cols at compile time.
Option C: array of pointers to rows
- Multiple allocations
- Can have ragged rows
- Not contiguous
int **m = malloc((size_t)rows * sizeof(int *));
for (int r = 0; r < rows; r++) {
m[r] = malloc(cols * sizeof(int));
}
I use this only when rows are truly variable length. Otherwise, it’s more overhead and less cache friendly.
Practical scenarios: when you should NOT use pointer arithmetic
Pointer arithmetic is powerful, but I avoid it in a few cases:
- When bounds are complex and a clear index makes the code safer
- When a junior developer will likely maintain the code soon
- When negative indexing might happen (pointer arithmetic does not prevent it)
- When you’re interfacing with APIs that expect sizes, not pointer ranges
In those cases, array indexing with explicit bounds checks is not just clearer; it’s safer.
Debugging pointers and arrays: my checklist
When something goes wrong, I run through this quick checklist:
1) What is the real type? Is it int *, or int (*)[N]?
2) Where does the pointer start, and where should it end?
3) Do I have a trustworthy length? Is it in size_t?
4) Am I mixing i++ with p++?
5) Is there any implicit decay hiding in a function call?
If I can answer those, the bug usually becomes obvious.
A quick debug trick: print offsets
Instead of printing raw pointers, I sometimes print offsets from the base. That keeps values human-readable:
size_t offset = (size_t)(p - base);
printf("offset = %zu\n", offset);
This makes it clear how far the pointer has moved and whether it’s still within bounds.
Performance considerations without guesswork
Pointer arithmetic and array indexing compile to the same machine instructions in most cases. The difference comes from cache behavior and memory layout, not the syntax you write. Here’s what I keep in mind:
- Contiguous memory helps cache lines; 2D arrays stored in row-major order are fastest when you iterate by rows.
- Jumping across rows (column-major access) tends to be slower; on large grids a multi-fold slowdown is common, because each access pulls in a new cache line.
- When you must traverse by columns, consider transposing or using a different layout.
These are not exact numbers; they vary by CPU, data size, and workload. But the principle is stable: memory access patterns matter more than syntax.
A small example of locality
Compare these two loops over a large grid[rows][cols]:
// Row-major (fast)
for (int r = 0; r < rows; r++)
for (int c = 0; c < cols; c++)
sum += grid[r][c];
// Column-major (slower on row-major memory)
for (int c = 0; c < cols; c++)
for (int r = 0; r < rows; r++)
sum += grid[r][c];
Same syntax, same math, very different cache behavior. This is why I focus on layout and access order more than on array vs pointer syntax.
Edge cases that surprise experienced C programmers
Here are a few that still pop up:
Array decay in conditional expressions
If you do this:
if (array) {
// ...
}
The array decays to a pointer and is always non-null, so the condition is always true. If your intent was “is the array non-empty,” you should check the length, not the pointer.
sizeof on a pointer to array
Consider:
int (*p)[4] = &arr;
sizeof(*p) // size of the array (4 ints)
sizeof(p) // size of the pointer
That distinction is useful, but only if you know the type.
Pointer subtraction only within the same array
p - q is only defined if both pointers point into the same array (or one past). Subtracting unrelated pointers is undefined behavior, even if it “seems” to work on your system.
Safer patterns with arrays and pointers
If I’m writing library-like code, I prefer patterns that push correctness to the surface.
1) Explicit range arguments
void process_range(const uint8_t *begin, const uint8_t *end) {
for (const uint8_t *p = begin; p < end; p++) {
// use *p
}
}
Calling code then decides the bounds. It’s very hard to misuse this function without passing the wrong range.
2) const everywhere it makes sense
Pointers let you write into memory; const tells the compiler and the reader “this is read-only.” It’s a huge signal and I use it aggressively.
3) Use of helper macros or inline functions
For 2D flat arrays, I often use a macro or inline function to make indexing obvious:
static inline size_t idx(size_t row, size_t col, size_t cols) {
return row * cols + col;
}
It’s small, but it documents your layout assumptions in one place.
Modern context: 2026 tooling and how it shapes practice
C hasn’t changed much, but tooling has. I still write the same pointer and array code, but I rely on tools to keep me honest.
- AddressSanitizer and UBSan catch out-of-bounds and undefined behavior early.
- Static analyzers surface suspicious pointer arithmetic, especially in loops with complex conditions.
- AI-assisted review in 2026 is surprisingly good at flagging a missing length parameter or mis-typed pointer to array.
Even with these tools, I don’t outsource my thinking. I use the tools to confirm my mental model, not to replace it. Pointers and arrays are still the sharp edges of C; the best safety net is a strong understanding of how the language actually treats them.
How I teach this topic to juniors
I’ve found that one lesson lands better than all the rest: “Write the pointer expression under the indexing expression.”
- a[i] becomes *(a + i)
- grid[r][c] becomes *(*(grid + r) + c)
Once they can reliably do that, most bugs go away. They stop guessing and start deriving. That’s the real goal.
A compact recap
- Arrays are fixed-size blocks. Pointers are movable addresses.
- Array names decay to pointers in most expressions, but not in sizeof or &.
- arr + 1 moves by one element, not one byte.
- Passing an array to a function loses its size. Pass the length explicitly.
- 2D arrays are arrays of arrays; int ** is a different thing.
- Clarity matters more than cleverness. Use the syntax that best communicates intent.
If you keep those ideas in mind, the pointer–array relationship stops being mysterious and becomes a tool you can reason about with confidence.