Arrays: Static and Dynamic Implementations
An array is the most fundamental data structure in computer science. It stores elements in contiguous memory, which gives it constant-time access by index and excellent cache performance. Nearly every other data structure builds on top of arrays or exists to address one of their limitations, so understanding how they work at the memory level is worth the time.
Arrays come in two flavors: static and dynamic.
Static means that when we declare the array, we must specify its size up front so the operating system knows how much memory to allocate: 4 bytes per element for 32-bit integers, 8 bytes per element for 64-bit integers.
Time complexities of standard array operations:
(T = time complexity; S = space complexity; entries without a letter give both time and space complexity.)
- Accessing a value at a given index: O(1)
- Updating a value at a given index: O(1)
- Iterating over: O(n) T, O(1) S
- Inserting a value at the beginning: O(n)
- Inserting a value in the middle: O(n)
- Inserting a value at the end:
- amortized O(1) when dealing with a dynamic array
- O(n) when dealing with a static array
- Removing a value at the beginning: O(n)
- Removing a value in the middle: O(n)
- Removing a value at the end: O(1)
- Copying the array: O(n)
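The O(n) cost of inserting at the front comes from shifting: every existing element must move one slot to the right. A minimal sketch of that shift for a static array (method name is mine, chosen for illustration):

```java
import java.util.Arrays;

public class ShiftInsert {
    // Insert a value at index 0 of a static array by allocating a new array
    // one slot larger and shifting every existing element right: O(n).
    static int[] insertAtFront(int[] arr, int value) {
        int[] result = new int[arr.length + 1];
        result[0] = value;
        System.arraycopy(arr, 0, result, 1, arr.length);
        return result;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(insertAtFront(new int[]{2, 3, 4}, 1)));
        // [1, 2, 3, 4]
    }
}
```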
Why are updating and getting constant?
Because we know the memory address of the first element, we can compute any element's address directly: multiply the index by the number of bytes a single element occupies (4 bytes for a 32-bit integer) and add that to the starting address. So imagine that our array starts at memory address 8 and we want the element at index 2 (the third element). Since each element is a fixed 4 bytes, we calculate: 8 + 4 * 2 = 16. The element at index 2 is at memory address 16.
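That address arithmetic can be sketched in a few lines. The addresses here are plain integers standing in for real memory addresses, and the names are mine, purely for illustration:

```java
public class AddressMath {
    // base: address of element 0; elemSize: bytes per element (4 for a 32-bit int).
    // The address of element i is base + i * elemSize — a single multiply-add,
    // which is why indexed access is O(1).
    static long addressOf(long base, int elemSize, int index) {
        return base + (long) elemSize * index;
    }

    public static void main(String[] args) {
        // Array of 32-bit ints starting at hypothetical address 8:
        System.out.println(addressOf(8, 4, 2)); // 16
    }
}
```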
Cache Locality
One of the key practical advantages of arrays is cache locality. Because array elements are stored in contiguous memory slots, accessing them sequentially plays very well with the CPU cache. When the processor loads a value from memory, it also loads a chunk of nearby addresses into the cache. With arrays, the next element you need is almost always already in the cache, making sequential access extremely fast in practice. Data structures that scatter elements across memory (like linked lists) lose this benefit and may suffer many more cache misses, even when their theoretical time complexity is the same.
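A classic way to see this effect is traversing a 2D array row-by-row versus column-by-column. Both loops below compute the same sum in O(n·m) time, but the row-major version touches memory sequentially, while the column-major one jumps a full row length between accesses; on large matrices the row-major version is typically much faster in practice, though the exact gap depends on hardware (this sketch only asserts the sums match, not timings):

```java
public class CacheLocality {
    // Row-major traversal: consecutive memory addresses, cache-friendly.
    static long sumRowMajor(int[][] m) {
        long sum = 0;
        for (int[] row : m)
            for (int v : row) sum += v;
        return sum;
    }

    // Column-major traversal: each access jumps one full row ahead in memory.
    static long sumColMajor(int[][] m) {
        long sum = 0;
        for (int col = 0; col < m[0].length; col++)
            for (int row = 0; row < m.length; row++) sum += m[row][col];
        return sum;
    }
}
```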
Static vs Dynamic Arrays
In Java, dynamic arrays are represented by ArrayList (and the legacy Vector class). Vector is synchronized, meaning every operation acquires a lock, which introduces unnecessary overhead in single-threaded code. ArrayList is not synchronized and is the standard choice for most use cases. In JavaScript, Python, and several other modern languages, the standard array or list type is a dynamic array under the hood.
With a dynamic array, creating an array of three 64-bit integers may reserve more than the 24 bytes strictly needed — for example 48 bytes, with the second half empty and ‘reserved’ for future insertions, which allows faster appends. The exact growth factor varies by language: Java’s ArrayList grows by 1.5x (50%), while C++’s std::vector and Python’s list grow by roughly 2x. As long as there is spare capacity, appending is O(1). But when the last slot is taken, the next insertion allocates a new, larger block of memory and copies every existing element into it. That single resize is O(n), because each of the n elements must be moved to the new location. As the array grows, resizes happen less and less frequently, so we say that insertions at the end of a dynamic array have amortized O(1) complexity. Inserting at the beginning or in the middle is still O(n), because we need to shift elements to make room for the new one.
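To make the resize mechanics concrete, here is a minimal dynamic-array sketch with a doubling growth policy. The class and method names are mine, and real implementations like ArrayList add many refinements (bounds checks, a 1.5x growth factor, shrinking, generics):

```java
public class DynArray {
    private int[] data = new int[2]; // deliberately tiny initial capacity
    private int size = 0;

    void append(int value) {
        if (size == data.length) {
            // Capacity exhausted: allocate double the space and copy all
            // n existing elements over — the occasional O(n) resize.
            int[] bigger = new int[data.length * 2];
            System.arraycopy(data, 0, bigger, 0, size);
            data = bigger;
        }
        data[size++] = value; // the common case: plain O(1) write
    }

    int size() { return size; }
    int capacity() { return data.length; }
}
```

Appending five elements triggers resizes at the 3rd and 5th inserts (capacity 2 → 4 → 8), after which the next three appends are free of copying — that thinning-out of resizes is exactly what "amortized O(1)" captures.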
When to Use Arrays vs Other Data Structures
Picking the right data structure is half the battle, so let’s make the decision concrete. Prefer arrays when you need O(1) random access by index, when the size is known or at least predictable, when you iterate sequentially and want the best possible cache locality, or when per-element memory overhead matters, since arrays store values back-to-back with zero pointer cost.
On the other hand, choose alternatives when your workload looks different. If you are doing frequent insertions or deletions in the middle, a linked list avoids the costly shifting that arrays require. If the size is highly unpredictable and resizing happens constantly, a linked list or a tree-based structure may give you smoother performance. And if you need key-based lookup rather than index-based lookup, a hash table (HashMap in Java) is the natural fit with its average O(1) search by key.
A good real-world example is a text editor’s undo history. Each editing action is pushed to the end of the list, undone from the end, and occasionally you scan backward through recent actions. An array (or ArrayList) is ideal here because you only append and remove at the tail (both amortized O(1)) and you benefit from cache-friendly sequential access when scanning. A linked list would waste memory on per-node pointers and scatter nodes across the heap, causing frequent cache misses during that backward scan.
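The undo history described above can be sketched in a few lines on top of ArrayList. The class name and the use of plain strings as actions are my simplifications; a real editor would store richer action objects:

```java
import java.util.ArrayList;
import java.util.List;

public class UndoHistory {
    private final List<String> actions = new ArrayList<>();

    // Every edit is appended at the tail: amortized O(1).
    void record(String action) { actions.add(action); }

    // Undo pops from the tail: O(1), no shifting needed.
    String undo() {
        if (actions.isEmpty()) return null;
        return actions.remove(actions.size() - 1);
    }
}
```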
Common Array Patterns in Algorithms
Arrays show up in almost every coding interview and competitive programming problem. Two patterns in particular are worth internalizing because they turn naive O(n²) solutions into clean O(n) ones.
Two-pointer technique: place one pointer at the start and another at the end (or at two meaningful positions) and walk them toward each other based on some condition. The classic example is checking whether a string is a palindrome:
public static boolean isPalindrome(String s) {
    int left = 0;
    int right = s.length() - 1;
    while (left < right) {
        if (s.charAt(left) != s.charAt(right)) {
            return false;
        }
        left++;
        right--;
    }
    return true;
}
This runs in O(n) time and O(1) space. Each character is visited at most once, and we only store two index variables.
The same idea solves the two-sum problem on a sorted array. Given a sorted array and a target sum, find two numbers that add up to it:
public static int[] twoSumSorted(int[] nums, int target) {
    int left = 0;
    int right = nums.length - 1;
    while (left < right) {
        int sum = nums[left] + nums[right];
        if (sum == target) {
            return new int[]{left, right};
        } else if (sum < target) {
            left++;
        } else {
            right--;
        }
    }
    return new int[]{};
}
Because the array is sorted, if the current sum is too small we move the left pointer up to increase it; if too large we move the right pointer down. Again O(n) time, O(1) space.
Sliding window: maintain a window over the array and slide it one element at a time, updating a running result as you go. The window can be fixed-size (always k elements) or variable-size (expanding and shrinking based on a condition, such as finding the shortest subarray whose sum exceeds a threshold). A textbook fixed-size case is finding the maximum sum subarray of size k:
public static int maxSumSubarray(int[] nums, int k) {
    int windowSum = 0;
    for (int i = 0; i < k; i++) {
        windowSum += nums[i];
    }
    int maxSum = windowSum;
    for (int i = k; i < nums.length; i++) {
        windowSum += nums[i] - nums[i - k];
        maxSum = Math.max(maxSum, windowSum);
    }
    return maxSum;
}
We compute the sum of the first window, then slide it forward by adding the next element and subtracting the element that just left the window. This gives us O(n) time and O(1) space, compared to the naive O(n·k) approach of recalculating each subarray from scratch.
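The variable-size variant mentioned earlier (shortest subarray whose sum reaches a threshold) follows the same expand-and-shrink rhythm. A sketch, assuming all elements are positive so that shrinking the window always decreases the sum:

```java
public class MinWindow {
    // Length of the shortest contiguous subarray whose sum is >= target,
    // or 0 if no such subarray exists. Assumes positive elements.
    static int minSubarrayLen(int target, int[] nums) {
        int left = 0, sum = 0, best = Integer.MAX_VALUE;
        for (int right = 0; right < nums.length; right++) {
            sum += nums[right];            // expand the window to the right
            while (sum >= target) {        // shrink from the left while valid
                best = Math.min(best, right - left + 1);
                sum -= nums[left++];
            }
        }
        return best == Integer.MAX_VALUE ? 0 : best;
    }
}
```

Each index enters the window once and leaves it at most once, so despite the nested loop this is O(n) time and O(1) space.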