System.Array in C#: A Practical Guide to the Array Base Class

Most production bugs I’ve debugged around “arrays” weren’t about syntax. They were about assumptions: assuming the array has the size you think it does, assuming you can “grow” it, assuming a sort is stable, assuming a binary search works on an unsorted dataset, assuming a copy is deep, or assuming a multi-dimensional array behaves like a jagged one.

When you write int[] ordersPerDay = ..., you’re already using the runtime’s array type, but the real workhorse is System.Array: the base class that provides shared behavior and a toolbox of operations—sorting, searching, copying, resizing, and introspection (length, rank, bounds). Knowing these APIs pays off because arrays sit under everything: buffers for I/O, hot-path data transforms, image/audio samples, interop payloads, and high-throughput services.

I’m going to walk you through how I actually use System.Array in modern C# code: how to reason about size and dimensions, how to traverse without surprises, how to sort/search correctly, how to copy safely, where performance traps hide, and when you should choose something else (like List<T>, Span<T>, or pooled arrays).

What System.Array really is (and why you should care)

If you’ve written C#, you’ve written arrays. A single-dimensional array like int[] is an object on the managed heap. It’s always a fixed-size structure once allocated. That “fixed-size” part drives a lot of design choices: arrays are great when you know the size ahead of time, or when you want a contiguous block of memory that the runtime can iterate quickly.

System.Array is the abstract base class for all arrays. You almost never write new Array(...) (you can’t, it’s abstract), but you constantly call its static methods:

  • Array.Sort(...) to sort in-place
  • Array.BinarySearch(...) to find in sorted arrays
  • Array.Copy(...) to move data between arrays
  • Array.Resize(...) to reallocate and copy
  • Array.Clear(...) and Array.Fill(...) to reset ranges

And you can query array state through properties exposed by the Array base type:

  • Length, LongLength
  • Rank
  • IsFixedSize, IsReadOnly
  • bounds methods like GetLowerBound / GetUpperBound

Why I care: in real systems I often receive an Array (non-generic) from reflection, interop, serializers, plugin APIs, or old frameworks. Understanding System.Array lets you write code that works for int[], string[], object[], byte[], and even multi-dimensional arrays.

Here’s the simplest example: sorting numbers in place.

using System;

public static class Program
{
    public static void Main()
    {
        int[] numbers = { 5, 2, 9, 1, 7 };
        Array.Sort(numbers);
        Console.WriteLine(string.Join(", ", numbers));
    }
}

That one call hides a lot of nuance: default comparer, in-place mutation, and algorithm choices. I’ll unpack those details later, because they matter once the array grows, the values are complex, or you care about allocations.

Array shape: length, rank, bounds, and the “2D vs jagged” fork

Before you pick methods like Sort or Copy, you need to know what shape you’re working with.

Length and LongLength

  • Length is an int count of total elements across all dimensions.
  • LongLength is a long for the same idea, which helps when you’re working close to large-memory boundaries.

In day-to-day code, Length is what I use. LongLength becomes relevant for very large datasets.

using System;

public static class Program
{
    public static void Main()
    {
        int[] temperaturesByHour = { 18, 19, 21, 22, 20, 17 };
        Console.WriteLine($"Length: {temperaturesByHour.Length}");
        Console.WriteLine($"LongLength: {temperaturesByHour.LongLength}");
    }
}

Rank (dimensions)

Rank tells you how many dimensions the array has:

  • int[] has rank 1
  • int[,] has rank 2
  • int[,,] has rank 3

A jagged array like int[][] is still rank 1, because it’s an array of arrays.

using System;

public static class Program
{
    public static void Main()
    {
        int[] oneDimensional = { 1, 2, 3 };
        int[,] twoDimensional = new int[2, 3];
        int[][] jagged = new int[2][];

        Console.WriteLine($"oneDimensional.Rank = {oneDimensional.Rank}");
        Console.WriteLine($"twoDimensional.Rank = {twoDimensional.Rank}");
        Console.WriteLine($"jagged.Rank = {jagged.Rank}");
    }
}

Bounds: lower bound is usually 0 (but don’t hardcode it in general-purpose code)

In normal C#, the lower bound is 0. That’s what you should assume for typical T[] code.

But System.Array can represent arrays with non-zero lower bounds created via Array.CreateInstance. You won’t see them often, but if you write library code that accepts Array (not T[]), you should respect bounds.

using System;

public static class Program
{
    public static void Main()
    {
        // Create an array of length 3 with a lower bound of 1
        Array unusual = Array.CreateInstance(typeof(int), new[] { 3 }, new[] { 1 });

        Console.WriteLine($"Rank: {unusual.Rank}");
        Console.WriteLine($"LowerBound: {unusual.GetLowerBound(0)}");
        Console.WriteLine($"UpperBound: {unusual.GetUpperBound(0)}");

        unusual.SetValue(42, 1);
        unusual.SetValue(43, 2);
        unusual.SetValue(44, 3);

        for (int i = unusual.GetLowerBound(0); i <= unusual.GetUpperBound(0); i++)
        {
            Console.WriteLine($"Index {i} => {unusual.GetValue(i)}");
        }
    }
}

My guidance: if you control the API, accept T[] (or ReadOnlySpan<T>) and keep the world zero-based. If you’re writing reflection-heavy tooling, serializers, or plugins, be defensive and read bounds via GetLowerBound/GetUpperBound.

Multi-dimensional ([,]) vs jagged ([][]): choose with intent

I see people pick [,] because it “looks like a matrix.” The trouble is that the tooling and method support differs:

  • Array.Sort and BinarySearch focus on one-dimensional arrays.
  • Multi-dimensional arrays support GetLength(dimension) but aren’t directly compatible with many generic patterns.

Jagged arrays (T[][]) are often easier to work with in modern C# because each inner array is a normal T[] and plays nicely with generic APIs.

If you’re doing matrix math or interop with native code that expects a contiguous 2D layout, [,] can be the right call. For most business code and data grids, jagged tends to be simpler.
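To make the fork concrete, here’s a minimal sketch contrasting the two shapes (the values and sizes are illustrative):

```csharp
using System;

public static class Program
{
    public static void Main()
    {
        // Rectangular [,]: one object, contiguous, every row has exactly 3 columns.
        int[,] rectangular = new int[2, 3];
        rectangular[1, 2] = 42;

        // Jagged [][]: an array of row arrays; each row is a normal int[]
        // and rows can have different lengths.
        int[][] jagged = new int[2][];
        jagged[0] = new int[3];
        jagged[1] = new int[5];
        jagged[1][4] = 42;

        Console.WriteLine(rectangular[1, 2]); // 42
        Console.WriteLine(jagged[1][4]);      // 42
    }
}
```

Note that each row of the jagged array must be allocated individually; a freshly created int[2][] holds two null rows until you fill them in.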

Traversal that stays correct: for, foreach, and Array.ForEach

Traversal is where correctness and performance meet. The goal is to express intent clearly and avoid off-by-one mistakes.

foreach: my default for read-only passes

foreach is hard to mess up, and it naturally conveys “read every element.”

using System;

public static class Program
{
    public static void Main()
    {
        int[] invoiceTotals = { 120, 45, 230, 99, 180 };
        Console.WriteLine("Invoice totals:");
        foreach (int total in invoiceTotals)
        {
            Console.WriteLine(total);
        }
    }
}

I recommend foreach when you don’t need the index.

for: when you need the index (or want a tight hot-path loop)

When you do need indices, for is clear and direct.

using System;

public static class Program
{
    public static void Main()
    {
        int[] responseTimesMs = { 8, 12, 7, 20, 15 };
        for (int i = 0; i < responseTimesMs.Length; i++)
        {
            // Simple bucketing as an example
            int bucket = responseTimesMs[i] <= 10 ? 10 : 20;
            Console.WriteLine($"Request {i}: {responseTimesMs[i]}ms (bucket <= {bucket}ms)");
        }
    }
}

Array.ForEach: nice for side effects, but be intentional

Array.ForEach<T>(T[] array, Action<T> action) reads cleanly in small scripts, demos, and admin tooling.

In large codebases, I’m cautious: it hides control flow and encourages side effects. I’ll use it when it makes a short operation clearer.

using System;

public static class Program
{
    public static void Main()
    {
        string[] services = { "catalog", "billing", "search" };
        Array.ForEach(services, service => Console.WriteLine($"Pinging {service}..."));
    }
}

Traversing multi-dimensional arrays

For [,], you generally traverse with nested loops and GetLength(d).

using System;

public static class Program
{
    public static void Main()
    {
        int[,] seating =
        {
            { 1, 0, 1 },
            { 1, 1, 0 }
        };

        int rows = seating.GetLength(0);
        int cols = seating.GetLength(1);

        for (int r = 0; r < rows; r++)
        {
            for (int c = 0; c < cols; c++)
            {
                Console.Write(seating[r, c] == 1 ? "[X]" : "[ ]");
            }
            Console.WriteLine();
        }
    }
}

Common mistake I still see: using Length as if it were “number of rows” for a 2D array. Length is total elements (rows * columns). Use GetLength(dimension).
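A minimal sketch of the trap, with a 3×4 grid:

```csharp
using System;

public static class Program
{
    public static void Main()
    {
        int[,] grid = new int[3, 4];

        Console.WriteLine(grid.Length);       // 12: total elements, NOT the row count
        Console.WriteLine(grid.GetLength(0)); // 3: rows
        Console.WriteLine(grid.GetLength(1)); // 4: columns
    }
}
```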

Sorting, reversing, and resizing: the in-place toolbox

This is where System.Array shines as a practical utility class.

Sorting with Array.Sort

Array.Sort sorts in place. For primitive types and many common cases, it’s exactly what you want.

using System;

public static class Program
{
    public static void Main()
    {
        decimal[] cartTotals = { 59.99m, 9.99m, 120.00m, 29.50m };
        Array.Sort(cartTotals);
        Console.WriteLine(string.Join(", ", cartTotals));
    }
}

For complex types, you should provide a comparer or sort keys.

using System;

public sealed record Ticket(string Id, int Priority, DateTime CreatedAtUtc);

public static class Program
{
    public static void Main()
    {
        Ticket[] queue =
        {
            new Ticket("TCK-1042", Priority: 2, CreatedAtUtc: DateTime.UtcNow.AddMinutes(-12)),
            new Ticket("TCK-1043", Priority: 1, CreatedAtUtc: DateTime.UtcNow.AddMinutes(-3)),
            new Ticket("TCK-1044", Priority: 2, CreatedAtUtc: DateTime.UtcNow.AddMinutes(-30)),
        };

        Array.Sort(queue, (a, b) =>
        {
            // Higher priority first
            int byPriority = b.Priority.CompareTo(a.Priority);
            if (byPriority != 0) return byPriority;

            // Older first within the same priority
            return a.CreatedAtUtc.CompareTo(b.CreatedAtUtc);
        });

        foreach (var ticket in queue)
        {
            Console.WriteLine($"{ticket.Id} priority={ticket.Priority} created={ticket.CreatedAtUtc:o}");
        }
    }
}

Two things I recommend you remember:

  • Sorting is a mutation. Don’t sort an array that other code treats as immutable.
  • If you plan to call BinarySearch, sorting isn’t optional; it’s the contract.

Reversing with Array.Reverse

After sorting ascending, Reverse is a quick way to get descending order.

using System;

public static class Program
{
    public static void Main()
    {
        int[] scores = { 10, 30, 20, 50, 40 };
        Array.Sort(scores);
        Array.Reverse(scores);
        Console.WriteLine(string.Join(", ", scores));
    }
}

Resizing with Array.Resize

Arrays are fixed-size. Array.Resize creates a new array, copies data, and replaces the reference.

I use Array.Resize for small-to-medium tasks where List<T> feels like overkill and I want to stay in array land.

using System;

public static class Program
{
    public static void Main()
    {
        int[] activeUserIds = { 101, 102, 103 };

        // Add space for two more entries
        Array.Resize(ref activeUserIds, activeUserIds.Length + 2);
        activeUserIds[3] = 104;
        activeUserIds[4] = 105;

        Console.WriteLine(string.Join(", ", activeUserIds));
    }
}

Practical guidance:

  • If you’re resizing repeatedly in a loop, you’ll allocate repeatedly. That can get expensive.
  • If you don’t know the final size, a List<T> is usually the better default.
  • For high-throughput buffering, consider pooling (ArrayPool<T>) so you can reuse arrays instead of allocating new ones.
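For completeness, here’s a minimal sketch of the pooled-buffer pattern with ArrayPool<byte>.Shared (the buffer size and the clearArray choice are illustrative):

```csharp
using System;
using System.Buffers;

public static class Program
{
    public static void Main()
    {
        // Rent a buffer of AT LEAST 4096 bytes; the pool may hand back a larger
        // array, and its contents are not guaranteed to be zeroed.
        byte[] buffer = ArrayPool<byte>.Shared.Rent(4096);
        try
        {
            Console.WriteLine(buffer.Length >= 4096); // True
            // ... use buffer[0..4096) for I/O or parsing ...
        }
        finally
        {
            // Return the buffer so later Rent calls can reuse the allocation.
            // clearArray: true wipes it first, which matters for sensitive data.
            ArrayPool<byte>.Shared.Return(buffer, clearArray: true);
        }
    }
}
```

The try/finally matters: if you forget to Return, nothing crashes, but the pool loses the reuse benefit.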

Copying, clearing, filling: the difference between moving bytes and moving meaning

Copying arrays sounds simple until it isn’t. The tricky part is understanding what gets copied: values, references, or deep object graphs.

Array.Copy and CopyTo

Array.Copy is the most flexible, and it supports copying ranges.

using System;

public static class Program
{
    public static void Main()
    {
        byte[] packetHeader = { 0x01, 0x02, 0x03, 0x04 };
        byte[] message = new byte[10];

        // Copy header into the start of message
        Array.Copy(packetHeader, sourceIndex: 0, message, destinationIndex: 0, length: packetHeader.Length);

        Console.WriteLine(BitConverter.ToString(message));
    }
}

CopyTo is an instance method and is convenient for copying the whole array to another one-dimensional array.

using System;

public static class Program
{
    public static void Main()
    {
        int[] day1Sales = { 10, 20, 30 };
        int[] copy = new int[day1Sales.Length];

        // Second argument is the destination index to start writing at.
        day1Sales.CopyTo(copy, 0);

        Console.WriteLine(string.Join(", ", copy));
    }
}

Clone: shallow copy

Clone gives you a new array object, but it’s still a shallow copy. If the elements are references, you copied references.

using System;

public sealed record Customer(string Id, string Email);

public static class Program
{
    public static void Main()
    {
        Customer[] customers =
        {
            new Customer("C-100", "[email protected]"),
            new Customer("C-101", "[email protected]")
        };

        var cloned = (Customer[])customers.Clone();

        Console.WriteLine(ReferenceEquals(customers, cloned));       // False
        Console.WriteLine(ReferenceEquals(customers[0], cloned[0])); // True (same element reference)
    }
}

If you need a deep copy, you must define what “deep” means for your domain objects and implement it intentionally.

Array.Clear and Array.Fill

Clear sets a range to default values (0, false, null, etc.). Fill sets a range to a specified value.

using System;

public static class Program
{
    public static void Main()
    {
        int[] counters = { 1, 1, 1, 1, 1, 1 };

        // Reset the middle two counters
        Array.Clear(counters, index: 2, length: 2);
        Console.WriteLine(string.Join(", ", counters));

        // Set all counters to 5
        Array.Fill(counters, value: 5);
        Console.WriteLine(string.Join(", ", counters));
    }
}

ConstrainedCopy: niche, but worth knowing exists

ConstrainedCopy is designed for scenarios where you want a copy that either fully succeeds or fails without leaving your destination array half-copied. I don’t reach for it often, but it’s useful when you’re copying between arrays of reference types and you care about exception safety.

In practice, Array.Copy is fine for most code. I file ConstrainedCopy away for “critical copy” situations: low-level framework code, unusual type conversions, or code that must preserve invariants even if something throws.

using System;

public static class Program
{
    public static void Main()
    {
        object[] source = { "ok", "still ok", "ok" };
        object[] dest = new object[3];

        Array.ConstrainedCopy(source, 0, dest, 0, 3);

        Console.WriteLine(string.Join(", ", dest));
    }
}

Searching arrays correctly: IndexOf, Find, and BinarySearch

Searching is where “array class knowledge” shows up in real bugs, because the API surface looks deceptively simple.

Linear search: IndexOf and friends

If your array is not sorted (or you can’t prove it is), use linear search.

  • Array.IndexOf(array, value) returns the first index of value or -1.
  • Array.LastIndexOf(array, value) searches from the end.
  • Overloads let you specify a start index and count.

using System;

public static class Program
{
    public static void Main()
    {
        string[] tags = { "prod", "billing", "hotfix", "prod" };

        int firstProd = Array.IndexOf(tags, "prod");
        int lastProd = Array.LastIndexOf(tags, "prod");

        Console.WriteLine($"firstProd={firstProd}, lastProd={lastProd}");
    }
}

For reference types, IndexOf uses equality rules based on the element type (for strings, that’s ordinal equality by default). For complex types, if you haven’t implemented Equals, IndexOf will fall back to reference equality, which is a common “why didn’t it match?” moment.
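Here’s a minimal sketch of that “why didn’t it match?” moment; PlainSku and RecordSku are hypothetical types for illustration:

```csharp
using System;

// A plain class with no Equals override: equality is reference equality.
public sealed class PlainSku
{
    public string Code { get; init; } = "";
}

// A record gets value equality generated for it.
public sealed record RecordSku(string Code);

public static class Program
{
    public static void Main()
    {
        var plain = new[] { new PlainSku { Code = "A1" } };
        // A value-equal but distinct instance is NOT found.
        Console.WriteLine(Array.IndexOf(plain, new PlainSku { Code = "A1" })); // -1

        var records = new[] { new RecordSku("A1") };
        // Records compare by value, so this matches.
        Console.WriteLine(Array.IndexOf(records, new RecordSku("A1"))); // 0
    }
}
```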

Predicate search: Find, FindIndex, FindAll, Exists, TrueForAll

When the “value” isn’t a simple equality check, use predicate-based helpers.

using System;

public sealed record Order(string Id, decimal Total, bool IsFraudSuspected);

public static class Program
{
    public static void Main()
    {
        Order[] orders =
        {
            new Order("O-100", 19.99m, false),
            new Order("O-101", 250.00m, true),
            new Order("O-102", 49.00m, false),
        };

        int fraudIndex = Array.FindIndex(orders, o => o.IsFraudSuspected);
        Order? fraudOrder = Array.Find(orders, o => o.IsFraudSuspected);
        Order[] highValue = Array.FindAll(orders, o => o.Total >= 50m);

        Console.WriteLine($"fraudIndex={fraudIndex}");
        Console.WriteLine($"fraudOrder={fraudOrder}");
        Console.WriteLine("highValue=" + string.Join(", ", Array.ConvertAll(highValue, o => o.Id)));

        bool anyFraud = Array.Exists(orders, o => o.IsFraudSuspected);
        bool allValidTotals = Array.TrueForAll(orders, o => o.Total >= 0);

        Console.WriteLine($"anyFraud={anyFraud}, allValidTotals={allValidTotals}");
    }
}

I like FindIndex when I need to mutate in place (it gives me the index). I like Find when I need to take an action on the found item but don’t care where it lived.

Sorted search: Array.BinarySearch (and how to use the result)

Array.BinarySearch is fast, but it only works correctly on sorted data that uses the same ordering as the search.

That contract has two parts:

  • The array must be sorted.
  • It must be sorted with the same comparer you pass into BinarySearch.

Here’s the pattern I actually use: sort once, then binary search many times.

using System;

public static class Program
{
    public static void Main()
    {
        int[] userIds = { 105, 101, 109, 103, 102 };
        Array.Sort(userIds);

        int idx = Array.BinarySearch(userIds, 103);
        Console.WriteLine(idx >= 0
            ? $"Found 103 at index {idx}"
            : "Not found");
    }
}

The subtle part is the “not found” return value. If BinarySearch does not find the element, it returns a negative number that encodes the insertion point using the bitwise complement operator (~).

That makes it easy to insert while preserving sort order:

using System;

public static class Program
{
    public static void Main()
    {
        int[] sorted = { 10, 20, 30, 40 };

        int idx = Array.BinarySearch(sorted, 25);
        if (idx < 0)
        {
            int insertAt = ~idx;
            Console.WriteLine($"Not found; insertAt={insertAt}");

            // Example: produce a new array with the inserted value.
            int[] expanded = new int[sorted.Length + 1];
            Array.Copy(sorted, 0, expanded, 0, insertAt);
            expanded[insertAt] = 25;
            Array.Copy(sorted, insertAt, expanded, insertAt + 1, sorted.Length - insertAt);

            Console.WriteLine(string.Join(", ", expanded));
        }
    }
}

If you remember one thing: when BinarySearch returns negative, don’t use it as an index. Decode it.

“Fixed size” doesn’t mean “immutable”: mutation patterns and safe APIs

Arrays are fixed-size, not immutable. You can always change elements.

That becomes a problem when you pass arrays around as if they’re “read-only” data.

A simple rule I follow

  • If I own the array and it’s private to a component, I mutate freely.
  • If I share the array (return it from a public API, put it on a DTO, cache it globally), I treat it like immutable and protect it.

There are a few ways to protect arrays:

  • Return a copy.
  • Return a read-only view.
  • Return a ReadOnlySpan<T> from APIs that are hot-path and internal.

Return a copy when you need isolation

public sealed class Catalog
{
    private readonly string[] _categories;

    public Catalog(string[] categories)
    {
        // Defensive copy: caller can’t mutate our internal array.
        _categories = (string[])categories.Clone();
    }

    public string[] GetCategories()
    {
        // Another defensive copy: consumer can’t mutate internal state.
        return (string[])_categories.Clone();
    }
}

This is allocation-heavy, but it’s very safe. For some APIs, safety is worth the cost.

Return a read-only wrapper when you want to avoid copies

Array.AsReadOnly returns a ReadOnlyCollection<T> wrapper. The wrapper prevents writes through itself, but it does not prevent someone else from mutating the underlying array if they still have a reference.

I treat it as “read-only by convention,” not “cryptographically immutable.”

using System;
using System.Collections.ObjectModel;

public sealed class FeatureFlags
{
    private readonly string[] _enabled;

    public FeatureFlags(string[] enabled)
    {
        _enabled = (string[])enabled.Clone();
    }

    public ReadOnlyCollection<string> Enabled => Array.AsReadOnly(_enabled);
}

Use ReadOnlySpan<T> for high-performance read-only APIs

If you’re writing modern C# and you control both sides, spans are an excellent option for exposing read-only slices without allocations. I’ll talk more about Span<T> later because it’s one of the best “array-adjacent” tools we have.

Practical resizing: when you insist on arrays but don’t know the final length

If you don’t know the final length, List<T> is the easiest answer. But sometimes you’re doing something low-level (parsing, buffering) where you still want arrays.

In those cases I use a manual “grow strategy”: allocate an initial capacity, and when it fills, allocate a bigger array and copy.

using System;

public sealed class IntBuffer

{

private int[] _buffer;

private int _count;

public IntBuffer(int initialCapacity = 8)

{

if (initialCapacity < 1) initialCapacity = 1;

_buffer = new int[initialCapacity];

_count = 0;

}

public int Count => _count;

public void Add(int value)

{

if (count == buffer.Length)

{

// Double capacity: fewer reallocations than +1 growth.

int newCapacity = _buffer.Length * 2;

Array.Resize(ref _buffer, newCapacity);

}

buffer[count++] = value;

}

public int[] ToArray()

{

int[] result = new int[_count];

Array.Copy(buffer, 0, result, 0, count);

return result;

}

}

public static class Program

{

public static void Main()

{

var buf = new IntBuffer();

for (int i = 0; i < 20; i++) buf.Add(i * i);

Console.WriteLine(string.Join(", ", buf.ToArray()));

}

}

This is essentially what dynamic collections do internally: maintain a backing array, grow with a multiplier, and keep a count separate from capacity.

The two bugs I watch for in this pattern:

  • Confusing capacity (_buffer.Length) with count (_count).
  • Returning the internal buffer directly (leaks unused slots and allows mutation).

Sorting in the real world: stability, nulls, and key-based sorting

Sorting is easy until it isn’t. Here are the edge cases that bite.

Is Array.Sort stable?

When people ask “stable sort,” they mean: if two elements compare equal, do they keep their original order?

Array.Sort does not guarantee stability; it performs an unstable sort. If I care about the relative order of “equal keys,” I make it explicit in the comparison.

For example, if I’m sorting records by a non-unique key (like Priority), and I want ties to preserve input order, I add a secondary key such as the original index.

using System;

public sealed record WorkItem(string Id, int Priority);

public static class Program
{
    public static void Main()
    {
        WorkItem[] items =
        {
            new WorkItem("A", 1),
            new WorkItem("B", 1),
            new WorkItem("C", 2),
            new WorkItem("D", 1),
        };

        // Decorate with original index to force deterministic ordering for ties.
        var decorated = new (WorkItem Item, int Index)[items.Length];
        for (int i = 0; i < items.Length; i++) decorated[i] = (items[i], i);

        Array.Sort(decorated, (x, y) =>
        {
            int byPriority = x.Item.Priority.CompareTo(y.Item.Priority);
            if (byPriority != 0) return byPriority;
            return x.Index.CompareTo(y.Index);
        });

        for (int i = 0; i < decorated.Length; i++)
            Console.WriteLine(decorated[i].Item.Id);
    }
}

This “decorate-sort-undecorate” pattern is a reliable way to achieve a stable outcome with an unstable underlying sort.

Sorting parallel arrays (keys + values)

A very practical pattern is: sort an array of keys and carry an aligned array of values along with it.

For example: sort user names, but keep their IDs aligned.

using System;

public static class Program
{
    public static void Main()
    {
        string[] names = { "Zoe", "Amy", "Mark" };
        int[] ids = { 3, 1, 2 };

        Array.Sort(names, ids);

        for (int i = 0; i < names.Length; i++)
        {
            Console.WriteLine($"{names[i]} -> {ids[i]}");
        }
    }
}

I like this when I want to stay in pure arrays and avoid allocating intermediate objects.

Sorting with custom comparers

For anything non-trivial, I strongly prefer named comparers over inline lambdas, especially if the sort logic is shared.

using System;
using System.Collections.Generic;

public sealed record Person(string Name, DateTime BirthDateUtc);

public sealed class PersonComparer : IComparer<Person?>
{
    public int Compare(Person? x, Person? y)
    {
        if (ReferenceEquals(x, y)) return 0;
        if (x is null) return -1;
        if (y is null) return 1;

        // Oldest first
        int byBirth = x.BirthDateUtc.CompareTo(y.BirthDateUtc);
        if (byBirth != 0) return byBirth;

        // Tie-breaker
        return string.CompareOrdinal(x.Name, y.Name);
    }
}

public static class Program
{
    public static void Main()
    {
        Person?[] people =
        {
            new Person("Jane", new DateTime(1990, 1, 1, 0, 0, 0, DateTimeKind.Utc)),
            null,
            new Person("Alex", new DateTime(1985, 2, 2, 0, 0, 0, DateTimeKind.Utc)),
        };

        Array.Sort(people, new PersonComparer());

        Console.WriteLine(string.Join(", ", Array.ConvertAll(people, p => p?.Name ?? "")));
    }
}

I’m intentionally handling null here: if the comparer throws or returns inconsistent results, Array.Sort surfaces the failure as an InvalidOperationException.

Copying with intent: shallow vs deep vs “clone what?”

When arrays contain reference types, the most important question is: what are you trying to copy?

Shallow copy: duplicate the container

  • Clone, Array.Copy, and CopyTo create a new array but keep the same object references.

That’s usually correct for immutable reference types (like string, many record types, or objects you treat as immutable).

Deep copy: duplicate the graph

If you need deep copy, decide what the boundary is. For example, a deep copy of Customer might duplicate addresses but keep references to shared “country” definitions.

A simple deep copy pattern uses a projection:

using System;

public sealed record Customer(string Id, string Email)
{
    public Customer DeepCopy() => new Customer(Id, Email);
}

public static class Program
{
    public static void Main()
    {
        Customer[] customers =
        {
            new Customer("C-100", "[email protected]"),
            new Customer("C-101", "[email protected]"),
        };

        Customer[] deepCopy = Array.ConvertAll(customers, c => c.DeepCopy());

        Console.WriteLine(ReferenceEquals(customers[0], deepCopy[0]));
    }
}

Array.ConvertAll is underused. It’s a clean way to map one array to another array.

Copying overlapping regions

One practical gotcha: copying inside the same array, where ranges overlap.

In many cases Array.Copy handles overlap correctly (it behaves like a memmove for compatible element types). But I still prefer to make overlap obvious, because people reading the code won’t always know whether overlap is safe.

using System;

public static class Program
{
    public static void Main()
    {
        int[] a = { 1, 2, 3, 4, 5 };

        // Shift right by 1 in-place: copy elements 0..3 to 1..4
        Array.Copy(a, 0, a, 1, 4);
        a[0] = 0;

        Console.WriteLine(string.Join(", ", a));
    }
}

Multi-dimensional arrays in practice: indexing, layout, and conversion patterns

Multi-dimensional arrays ([,], [,,], etc.) are real arrays with multiple dimensions. They’re not the same thing as jagged arrays.

Indexing and lengths

Use GetLength(dimension) for each dimension, not Length.

using System;

public static class Program
{
    public static void Main()
    {
        int[,,] voxels = new int[2, 3, 4];

        Console.WriteLine($"Rank={voxels.Rank}");
        for (int d = 0; d < voxels.Rank; d++)
            Console.WriteLine($"dim {d} length={voxels.GetLength(d)}");

        Console.WriteLine($"Total elements={voxels.Length}");
    }
}

Traversal order and performance intuition

I keep this mental model:

  • A multi-dimensional array is stored in a contiguous block.
  • Iterating in the “right” nested loop order tends to be cache-friendly.

If you’re doing a big compute pass, it’s worth thinking about iteration order. For a [,] array, I typically loop rows outer, columns inner (or vice versa) and measure.
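As a sketch of what “row-major” looks like for a [,] array (C# stores multi-dimensional arrays contiguously with the last index varying fastest; actual performance differences depend on data size and cache behavior, so measure):

```csharp
using System;

public static class Program
{
    // The inner loop walks the last dimension, which matches the in-memory
    // layout of [,] arrays, so elements are touched in contiguous order.
    public static long SumRowMajor(int[,] m)
    {
        long sum = 0;
        for (int r = 0; r < m.GetLength(0); r++)
            for (int c = 0; c < m.GetLength(1); c++)
                sum += m[r, c];
        return sum;
    }

    public static void Main()
    {
        int[,] m = { { 1, 2 }, { 3, 4 } };
        Console.WriteLine(SumRowMajor(m)); // 10
    }
}
```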

Converting jagged to multi-dimensional (and back)

Sometimes you receive jagged data (like CSV rows of different lengths) and want a matrix; sometimes you receive a matrix and want row arrays.

Here’s a strict conversion from jagged to 2D that validates row lengths:

using System;

public static class Program
{
    public static int[,] ToMatrix(int[][] jagged)
    {
        if (jagged.Length == 0) return new int[0, 0];

        int rows = jagged.Length;
        int cols = jagged[0]?.Length ?? 0;

        for (int r = 0; r < rows; r++)
        {
            if (jagged[r] is null) throw new ArgumentException($"Row {r} is null");
            if (jagged[r].Length != cols)
                throw new ArgumentException($"Row {r} length {jagged[r].Length} != expected {cols}");
        }

        int[,] matrix = new int[rows, cols];
        for (int r = 0; r < rows; r++)
            for (int c = 0; c < cols; c++)
                matrix[r, c] = jagged[r][c];

        return matrix;
    }

    public static void Main()
    {
        int[][] jagged =
        {
            new[] { 1, 2, 3 },
            new[] { 4, 5, 6 }
        };

        int[,] m = ToMatrix(jagged);
        Console.WriteLine(m[1, 2]);
    }
}

And here’s a conversion from 2D matrix to jagged rows:

using System;

public static class Program
{
    public static int[][] ToJagged(int[,] matrix)
    {
        int rows = matrix.GetLength(0);
        int cols = matrix.GetLength(1);

        int[][] jagged = new int[rows][];
        for (int r = 0; r < rows; r++)
        {
            var row = new int[cols];
            for (int c = 0; c < cols; c++) row[c] = matrix[r, c];
            jagged[r] = row;
        }
        return jagged;
    }

    public static void Main()
    {
        int[,] m =
        {
            { 1, 2 },
            { 3, 4 },
        };

        int[][] jagged = ToJagged(m);
        Console.WriteLine(string.Join(", ", jagged[0]));
    }
}

My rule of thumb: if your “rows” can vary in length, jagged is the right representation. If your data is truly rectangular and you need a compact contiguous matrix, multi-dimensional can be appropriate.

Arrays, types, and runtime behavior: covariance and casting pitfalls

This section is less about System.Array methods and more about the runtime behaviors that show up as production bugs.

Arrays are covariant for reference types

In C#, arrays of reference types are covariant. That means:

  • A string[] can be treated as an object[].

That sounds convenient, but it can explode at runtime if someone tries to store the wrong thing.

using System;

public static class Program
{
    public static void Main()
    {
        string[] names = { "Amy", "Mark" };
        object[] boxed = names; // Allowed

        try
        {
            boxed[0] = 123; // Runtime failure
        }
        catch (ArrayTypeMismatchException ex)
        {
            Console.WriteLine(ex.GetType().Name);
        }
    }
}

I avoid exposing mutable arrays across type boundaries for this reason. If I need polymorphism, I prefer IReadOnlyList<T> or ReadOnlySpan<T>.

Value type arrays don’t have this problem

An int[] is not an object[]. Boxing isn’t implicit at the array level.
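A minimal sketch of the difference: the covariant assignment doesn’t compile for value types, and converting requires boxing each element explicitly.

```csharp
using System;

public static class Program
{
    public static void Main()
    {
        int[] values = { 1, 2, 3 };

        // object[] boxed = values;  // does not compile: no array covariance for value types

        // Converting requires boxing each element explicitly:
        object[] boxed = Array.ConvertAll(values, v => (object)v);

        Console.WriteLine(boxed.Length); // 3
    }
}
```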

System.Array APIs are non-generic when you work with Array

If you accept Array as a parameter, you often end up using GetValue/SetValue, which are slower and more error-prone than generic indexing.

If you can accept T[], do it.

If you truly need Array (reflection, plugins), validate:

  • Rank
  • bounds
  • element type (via GetType().GetElementType() or pattern checks)

Creating arrays dynamically: Array.CreateInstance and reflection scenarios

Sometimes you don’t know T at compile time. That’s where Array.CreateInstance earns its keep.

Create a one-dimensional array at runtime

using System;

public static class Program
{
    public static void Main()
    {
        Type elementType = typeof(Guid);
        Array ids = Array.CreateInstance(elementType, 3);

        ids.SetValue(Guid.NewGuid(), 0);
        ids.SetValue(Guid.NewGuid(), 1);
        ids.SetValue(Guid.NewGuid(), 2);

        for (int i = 0; i < ids.Length; i++)
            Console.WriteLine(ids.GetValue(i));
    }
}

Create arrays with non-zero lower bounds

You saw earlier that this exists. The practical takeaway: if you’re writing general-purpose code that accepts Array, don’t assume lower bound 0.

The clean, defensive iteration pattern is:

using System;

public static class Program
{
    public static void PrintAll(Array a)
    {
        if (a.Rank != 1) throw new ArgumentException("Only rank-1 supported");

        int lo = a.GetLowerBound(0);
        int hi = a.GetUpperBound(0);
        for (int i = lo; i <= hi; i++)
            Console.WriteLine(a.GetValue(i));
    }

    public static void Main()
    {
        Array unusual = Array.CreateInstance(typeof(int), new[] { 3 }, new[] { 1 });
        unusual.SetValue(10, 1);
        unusual.SetValue(20, 2);
        unusual.SetValue(30, 3);

        PrintAll(unusual);
    }
}

Range operations: slicing without surprises (and what to use instead of arrays)

System.Array has a lot of utilities, but it doesn’t give you a first-class “slice” type.

When I need to work with a window into an array, I usually pick one of these options:

Option 1: ArraySegment<T>

ArraySegment<T> is a lightweight view (array + offset + count). It’s been around forever and works well with APIs that accept it.

using System;

public static class Program
{
    public static void Main()
    {
        byte[] buffer = { 1, 2, 3, 4, 5, 6 };
        var payload = new ArraySegment<byte>(buffer, 2, 3); // {3, 4, 5}

        for (int i = 0; i < payload.Count; i++)
            Console.WriteLine(payload.Array![payload.Offset + i]);
    }
}

This is great for “old school” APIs and interoperability with collections.

Option 2: Span<T> / ReadOnlySpan<T>

If you’re on modern .NET, spans are my favorite. They’re stack-only views into contiguous memory (including arrays) and they don’t allocate.

using System;

public static class Program
{
    public static int SumFirstN(ReadOnlySpan<int> values, int n)
    {
        if ((uint)n > (uint)values.Length) throw new ArgumentOutOfRangeException(nameof(n));

        int sum = 0;
        for (int i = 0; i < n; i++) sum += values[i];
        return sum;
    }

    public static void Main()
    {
        int[] numbers = { 10, 20, 30, 40, 50 };
        Console.WriteLine(SumFirstN(numbers, 3));

        ReadOnlySpan<int> middle = numbers.AsSpan(1, 3);
        Console.WriteLine(string.Join(", ", middle.ToArray()));
    }
}

The reason I’m mentioning spans in an “Array class” article: if you’re doing high-performance work, you’ll often use System.Array for allocation and lifetime, and Span<T> for safe slicing and manipulation.

Option 3: copy a slice into a new array

If you need an independent array (not a view), copy the range.

using System;

public static class Program
{
    public static T[] Slice<T>(T[] source, int start, int length)
    {
        if ((uint)start > (uint)source.Length) throw new ArgumentOutOfRangeException(nameof(start));
        if ((uint)length > (uint)(source.Length - start)) throw new ArgumentOutOfRangeException(nameof(length));

        T[] result = new T[length];
        Array.Copy(source, start, result, 0, length);
        return result;
    }

    public static void Main()
    {
        int[] a = { 1, 2, 3, 4, 5 };
        int[] slice = Slice(a, 1, 3);
        Console.WriteLine(string.Join(", ", slice));
    }
}

Performance notes I actually care about: allocations, bounds checks, and specialized APIs

You can write correct code and still get burned by performance if arrays are in your hot path.

Allocation cost and GC pressure

Every new T[n] allocates. In high-throughput services, the cumulative effect can show up as GC pauses.

If you’re allocating arrays repeatedly in a tight loop, consider:

  • pooling (ArrayPool<T>) for temporary buffers
  • reusing a long-lived buffer inside a component
  • using spans to avoid allocating intermediate arrays for slices

ArrayPool<T> for temporary buffers

Pooling is one of the best techniques for high-volume “scratch” arrays.

using System;
using System.Buffers;

public static class Program
{
    public static void Main()
    {
        var pool = ArrayPool<byte>.Shared;
        byte[] rented = pool.Rent(1024);

        try
        {
            // Use only the portion you need. Rented arrays can be larger than requested.
            Array.Clear(rented, 0, 1024);
            rented[0] = 42;
            Console.WriteLine(rented[0]);
        }
        finally
        {
            // Consider clearing if the data is sensitive.
            pool.Return(rented, clearArray: true);
        }
    }
}

Two real-world gotchas with pooling:

  • Don’t assume the rented array length equals the requested size.
  • Don’t store rented arrays beyond the scope you control (they belong to the pool).

Bounds checks and loop shapes

The JIT is good at eliminating bounds checks in simple loops. This is one reason for (int i = 0; i < a.Length; i++) is such a common pattern.

In more complex loops (multiple arrays, multiple indices, uncertain invariants), the JIT might keep bounds checks. If performance matters, measure and simplify.

Specialized block copy for primitive types

Array.Copy is general-purpose. For byte-oriented workloads you’ll also see Buffer.BlockCopy or spans.

I don’t default to Buffer.BlockCopy unless I’m clearly working with primitive value types and I have a reason (interop, serialization, very hot loops). Otherwise, Array.Copy keeps the intent clearer.
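The detail that bites people with Buffer.BlockCopy is that it counts in bytes, not elements. A quick sketch:

```csharp
using System;

public static class BlockCopyDemo
{
    public static int[] CopyInts(int[] source)
    {
        int[] dest = new int[source.Length];

        // Buffer.BlockCopy measures offsets and count in BYTES,
        // so an int[] needs length * sizeof(int), not just length.
        Buffer.BlockCopy(source, 0, dest, 0, source.Length * sizeof(int));
        return dest;
    }

    public static void Main()
    {
        int[] copy = CopyInts(new[] { 1, 2, 3, 4 });
        Console.WriteLine(string.Join(", ", copy)); // 1, 2, 3, 4
    }
}
```

Passing just `source.Length` there would silently copy only the first element's worth of bytes, which is exactly the kind of bug Array.Copy's element-based counting avoids.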

Common pitfalls (the ones I keep seeing)

If you’re reading this to avoid bugs, this is the section that pays for itself.

Mistake 1: treating Length as “capacity I can fill later”

An array’s Length is its capacity and its size. You can’t “add” past it.

If you need Count separate from capacity, you need a separate variable (like the IntBuffer example) or a collection like List<T>.

Mistake 2: calling Array.Resize and expecting the original reference to change

Array.Resize replaces the array reference. That’s why it takes ref.

If you pass an array into a method and call Array.Resize on the parameter without ref, you won’t resize the caller’s array.
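A small sketch of the trap (GrowBroken/GrowCorrect are hypothetical names of mine):

```csharp
using System;

public static class ResizeDemo
{
    // Resizes only a local copy of the reference; the caller sees nothing.
    public static void GrowBroken(int[] a) => Array.Resize(ref a, 10);

    // The ref parameter lets Resize update the caller's variable.
    public static void GrowCorrect(ref int[] a) => Array.Resize(ref a, 10);

    public static void Main()
    {
        int[] data = { 1, 2, 3 };

        GrowBroken(data);
        Console.WriteLine(data.Length); // still 3

        GrowCorrect(ref data);
        Console.WriteLine(data.Length); // now 10
    }
}
```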

Mistake 3: binary searching an unsorted array

This is the classic. It fails quietly: you get a negative “not found” result, or an index that doesn’t correspond to the element you searched for.

My rule: if the data is not obviously sorted at the point of search, I use Array.IndexOf/Array.Find.
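A quick demonstration of the failure mode:

```csharp
using System;

public static class SearchDemo
{
    public static void Main()
    {
        int[] unsorted = { 5, 1, 4, 2, 3 };

        // BinarySearch assumes sorted input. On unsorted data the result
        // is unreliable: here it reports "not found" (a negative value)
        // even though 3 is in the array.
        int bad = Array.BinarySearch(unsorted, 3);

        // IndexOf does a linear scan and is always correct.
        int good = Array.IndexOf(unsorted, 3);

        Console.WriteLine($"BinarySearch: {bad}, IndexOf: {good}");
    }
}
```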

Mistake 4: assuming a copy is deep

Clone and Copy are shallow for reference types. If you mutate objects inside, you’ll see changes through both arrays.
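For instance (Box is a made-up reference type for illustration):

```csharp
using System;

public sealed class Box
{
    public int Value;
}

public static class ShallowCopyDemo
{
    public static void Main()
    {
        Box[] original = { new Box { Value = 1 }, new Box { Value = 2 } };

        // Clone allocates a new array but copies the references,
        // not the Box instances they point to.
        Box[] copy = (Box[])original.Clone();

        copy[0].Value = 99;

        // Both arrays observe the mutation: they share the same object.
        Console.WriteLine(original[0].Value); // 99
    }
}
```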

Mistake 5: confusing jagged and multi-dimensional

  • int[][] is an array of arrays. Each row can have different length.
  • int[,] is a true 2D array with fixed bounds.

The APIs and performance characteristics differ.
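The difference is easy to see with Rank and GetLength:

```csharp
using System;

public static class ShapeDemo
{
    public static void Main()
    {
        // Jagged: a 1D array whose elements are arrays; rows vary in length.
        int[][] jagged = { new[] { 1 }, new[] { 2, 3, 4 } };
        Console.WriteLine(jagged.Rank);        // 1 (it's a 1D array of int[])
        Console.WriteLine(jagged[1].Length);   // 3

        // Rectangular: one contiguous block with fixed bounds per dimension.
        int[,] rect = new int[2, 3];
        Console.WriteLine(rect.Rank);          // 2
        Console.WriteLine(rect.GetLength(1));  // 3
    }
}
```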

Mistake 6: forgetting null elements exist

For arrays of reference types (or nullable value types), you might have nulls. Sorts, comparisons, and projections should handle them intentionally.

Mistake 7: leaking internal arrays

Returning your internal array gives callers the ability to mutate your internal state. Sometimes that’s okay. Often it’s the beginning of a slow-motion bug.
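One pattern I reach for (Scores and its members are invented for illustration):

```csharp
using System;
using System.Collections.Generic;

public sealed class Scores
{
    private readonly int[] _values = { 90, 80, 70 };

    // Leaky: callers can mutate our internal state through this.
    public int[] Raw => _values;

    // Read-only view: no copy, but no mutation through it either.
    public IReadOnlyList<int> View => _values;

    // Defensive copy: callers get an independent snapshot.
    public int[] Snapshot() => (int[])_values.Clone();
}
```

The read-only view costs nothing; the defensive copy costs an allocation but fully decouples the caller from your internals. Which one is right depends on who you trust.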

When I choose something else (and why)

Arrays are foundational, but they’re not always the best abstraction.

List<T>

I choose List<T> when:

  • I need to append/insert/remove frequently.
  • I don’t know the final size.
  • I want rich APIs (Add, Remove, etc.) without reinventing count/capacity.

Span<T> / ReadOnlySpan<T>

I choose spans when:

  • I want to avoid allocations for slicing.
  • I’m doing parsing, encoding, or tight loops.
  • I want to pass “a view of data” without copying.

ImmutableArray<T> (or “treat arrays as immutable”)

If I need to share data safely across threads or components, I either:

  • treat T[] as immutable (private ownership + defensive copies), or
  • use an immutable collection type designed for that.

Memory<T>

If I need an await-friendly slice (a span can’t live across an await), I look at Memory<T>. It’s still “array-adjacent,” but designed for async boundaries.
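A minimal sketch of that boundary (SumAsync is a made-up method; Task.Delay stands in for real async I/O):

```csharp
using System;
using System.Threading.Tasks;

public static class MemoryDemo
{
    // A Span<T> local can't survive an await, but Memory<T> can,
    // because it's an ordinary heap-safe struct.
    public static async Task<int> SumAsync(ReadOnlyMemory<int> window)
    {
        await Task.Delay(1); // stand-in for real async I/O

        int sum = 0;
        foreach (int v in window.Span) sum += v; // materialize a span only after the await
        return sum;
    }

    public static async Task Main()
    {
        int[] numbers = { 10, 20, 30, 40 };
        Console.WriteLine(await SumAsync(numbers.AsMemory(1, 2))); // 50
    }
}
```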

A practical checklist I use in reviews

When I review code using arrays, these are the questions I ask:

  • What does Length mean here: actual data count or capacity?
  • Who owns this array? Can someone else mutate it?
  • Is the array sorted? If so, where is that enforced?
  • Are we copying references or values? Is that intended?
  • Are we handling nulls and edge cases (empty arrays, single element, huge arrays)?
  • Is this a hot path? If yes, can we avoid allocations or use spans/pooling?

Closing thought

Arrays in C# are simple until they’re the foundation of something important: parsing, buffering, throughput, or interop. System.Array is the toolbox that turns “a fixed-size block” into something you can sort, search, copy, resize, and introspect safely.

If you internalize the contracts—fixed size, in-place mutation, shallow copy, sorted-search coupling—and you add a few modern patterns (spans, pooling), you’ll avoid a surprising number of real production issues.
