Inside Go — Part 2: Memory Management in Go

In the first part of this series, we explored how Go code travels through the compilation pipeline, from source code to machine instructions. That was our starting point to see how Go transforms what we write into something the computer can run. If you have not read it yet, you can check out Part 1 using the link below.

When we write Go code, we usually do not think too much about where our variables live. We just declare them, and the compiler decides. But under the hood, Go makes important decisions about stack versus heap allocation, escape analysis, and garbage collection that directly affect performance. In this part, we will take a closer look at how Go manages memory and why these details matter to us as developers.
Stack vs Heap: Two Places to Store Data
Think of stack memory as a stack of plates in a cafeteria. Each new plate (variable) goes neatly on top, and when we are done, we remove it. This is fast and predictable because we only need to place or take from the top.
The heap is more like a large storage room. You can place items anywhere in the room, but it takes more effort to find space and later clean it up. This is where the garbage collector (GC) comes in, sweeping through the room to free space for future use.
- Stack: Small, short-lived, local to a function.
- Heap: Larger, flexible, but more costly to manage because it requires garbage collection.
Example: Stack Allocation
func add(a, b int) int {
    c := a + b // "c" lives on the stack
    return c
}
Here, "c" is created and destroyed within the function call. It is very fast and clean.
Example: Heap Allocation
func createSlice() *[]int {
    s := make([]int, 1000) // large slice
    return &s // pointer escapes the function
}
The slice "s" moves to the heap because we are returning a pointer. The function ends, but the caller still needs access to the data.
Escape Analysis: Go Decides Where Variables Live
Go performs escape analysis during compilation to figure out whether a variable can safely stay on the stack or whether it needs to "escape" to the heap.
Think of it like this: if you borrow a plate and return it before leaving the cafeteria, it stays on the plate stack. If you walk out of the cafeteria with the plate, the system has to move it into the storage room (heap), because the plate will live beyond the function.
You can inspect escape analysis by creating a "main.go":
package main

import "fmt"

//go:noinline
func stackExample() {
    x := 1
    println(x) // stays on stack or in registers, no escape message
}

//go:noinline
func escapeExample() {
    z := 3
    fmt.Println(z) // "z escapes to heap" due to interface boxing
}

//go:noinline
func movedExample() *int {
    y := 2
    return &y // "moved to heap: y"
}

func main() {
    stackExample()
    escapeExample()
    _ = movedExample()
}
and then run the following command:
go build -gcflags="-m" main.go
Note: The compiler shows both escape analysis results and inlining decisions when using the -m flag. They are separate things, but they appear together in the output.
You will see output as follows:
./main.go:14:17: z escapes to heap
./main.go:19:5: moved to heap: y
The compiler usually prints messages only when a variable escapes to the heap or is moved to the heap. If there is no "moved to heap" or "escapes to heap" line, the variable stayed local (stack or register). Modern Go compilers often optimize small locals into CPU registers, so there may not even be a visible stack slot. This is why you usually see silence for stack cases and messages only for heap cases.
If you use fmt.Println (as in the message for line 14 above), you might see variables reported as escaping to the heap even though they seem simple. That happens because fmt.Println accepts interface{} values, and passing a value into an interface{} requires boxing, which can trigger a heap allocation.
How Go’s Memory Allocator Works
Go’s memory allocator is inspired by tcmalloc (thread-caching malloc). Without going too deep into computer science, here is the simplified picture:
- Memory is divided into arenas (large chunks).
- Arenas are split into spans (medium chunks).
- Spans are split into blocks (small allocations).
Each logical processor (P) in the Go runtime keeps its own cache of memory blocks, called an mcache, so the goroutines running on it can allocate without fighting over a lock. When the mcache runs out, it fetches more from a central cache called mcentral, which in turn draws from the heap. This design makes most allocations very fast.
You can imagine this like a hotel:
- The arena is the entire hotel.
- Each span is a floor.
- Each block is a room.
- Goroutines check into their own rooms without waiting in a line at reception.
This system makes allocation efficient, but heap memory still needs garbage collection, which adds cost.
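To get a feel for these layers at runtime, you can look at the allocator's own counters. Below is a minimal sketch using runtime.ReadMemStats; the field names are real, but the exact numbers will vary with your Go version and workload.
package main

import (
    "fmt"
    "runtime"
)

var keep [][]byte // package-level reference so the allocations stay live

func main() {
    for i := 0; i < 1000; i++ {
        keep = append(keep, make([]byte, 1024)) // force some heap allocations
    }

    var m runtime.MemStats
    runtime.ReadMemStats(&m)

    fmt.Println("HeapAlloc  :", m.HeapAlloc)   // bytes of live heap objects
    fmt.Println("HeapSys    :", m.HeapSys)     // heap memory obtained from the OS (arenas)
    fmt.Println("MSpanInuse :", m.MSpanInuse)  // memory used by span bookkeeping
    fmt.Println("MCacheInuse:", m.MCacheInuse) // memory used by the per-P mcaches
    fmt.Println("NumGC      :", m.NumGC)       // garbage collection cycles so far
}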
Garbage Collection: The Janitor of the Heap
Go uses a concurrent mark-and-sweep garbage collector. It runs mostly in the background, finds objects that are no longer reachable, and frees their memory. Unlike older collectors that could freeze the program for noticeable "stop-the-world" periods, Go's garbage collector is designed to keep pauses very short, usually under a millisecond.
Still, garbage collection is not free. More heap allocations mean more work for the garbage collector.
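You can watch the collector at work by running a program with GODEBUG=gctrace=1, which prints one summary line per cycle, or by reading the runtime's own counters. The sketch below does the latter; it also uses debug.SetGCPercent, the programmatic equivalent of the GOGC environment variable, to make the collector run more often.
package main

import (
    "fmt"
    "runtime"
    "runtime/debug"
)

var sink []byte // forces the allocations below onto the heap

func main() {
    // Equivalent to GOGC=50: start a cycle once the live heap grows
    // 50% beyond what survived the previous collection.
    debug.SetGCPercent(50)

    for i := 0; i < 100000; i++ {
        sink = make([]byte, 1024) // produce garbage for the collector
    }

    runtime.GC() // force one full cycle (normally the runtime decides when)

    var m runtime.MemStats
    runtime.ReadMemStats(&m)
    fmt.Println("GC cycles       :", m.NumGC)
    fmt.Println("Total pause (ns):", m.PauseTotalNs)
}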
Why Memory Management Matters for Performance
- Stack allocations are extremely cheap.
- Heap allocations are more expensive and require garbage collection.
- Reducing unnecessary allocations leads to faster and smoother programs.
For example, consider string concatenation:
func joinStrings() string {
    s := ""
    for i := 0; i < 1000; i++ {
        s += "a" // creates many temporary strings on heap
    }
    return s
}
Go strings are immutable. Every time you write s += "a", Go cannot simply tack "a" onto the existing memory. It must:
- Allocate a new string large enough to hold the old content plus the new "a".
- Copy the old contents.
- Append the new "a".
- Replace "s" with the new string.
That means each concatenation performs a new allocation, almost always on the heap, because the string keeps growing across iterations and its final value escapes the function through the return.
A better way:
func joinStringsBuilder() string {
    var b strings.Builder
    for i := 0; i < 1000; i++ {
        b.WriteString("a")
    }
    return b.String()
}
strings.Builder is a struct that internally manages a byte slice buffer (simplified here; the real type also carries a small field used to detect copies by value):
type Builder struct {
    buf []byte
}
- The Builder struct itself can be on the stack if it does not escape.
- The underlying buffer (buf) grows as needed. When it needs more space, it calls append, which may allocate new backing arrays on the heap.
So:
- The Builder does not magically keep everything on the stack.
- It just reduces the number of heap allocations by growing its buffer exponentially and reusing it, instead of allocating a new string every time like s += "a".
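You can measure the difference yourself with a small benchmark. This is only a sketch: it assumes the two functions above live in the same package, and the exact numbers depend on your machine and Go version.
// strings_test.go
package main

import "testing"

func BenchmarkJoinStrings(b *testing.B) {
    b.ReportAllocs() // report allocations per operation
    for i := 0; i < b.N; i++ {
        _ = joinStrings()
    }
}

func BenchmarkJoinStringsBuilder(b *testing.B) {
    b.ReportAllocs()
    for i := 0; i < b.N; i++ {
        _ = joinStringsBuilder()
    }
}
Running go test -bench=. -benchmem should show far fewer allocs/op for the Builder version.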
How Developers Can Influence Memory Behavior
- Reuse buffers instead of constantly creating new ones.
var buf bytes.Buffer
buf.WriteString("hello")
- Avoid unnecessary pointers when values are enough. Passing values often keeps them on the stack.
- Be mindful with slices. Large slices often end up on the heap. Sometimes using sync.Pool helps when you need to reuse them (see the sketch after this list).
- Profile allocations. Use go tool pprof to see where heap allocations happen, or runtime.ReadMemStats for overall numbers.
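As a concrete example of the sync.Pool idea mentioned above, here is a minimal sketch of a pool of reusable byte buffers; the render function and its behavior are assumptions made up for illustration.
package main

import (
    "bytes"
    "fmt"
    "sync"
)

// bufPool hands out *bytes.Buffer values and recycles them between calls.
var bufPool = sync.Pool{
    New: func() any { return new(bytes.Buffer) },
}

func render(name string) string {
    buf := bufPool.Get().(*bytes.Buffer)
    defer func() {
        buf.Reset()      // clear the contents before handing it back
        bufPool.Put(buf) // make the buffer available to other callers
    }()

    buf.WriteString("hello, ")
    buf.WriteString(name)
    return buf.String() // String copies the bytes, so the result is safe to keep
}

func main() {
    fmt.Println(render("gopher"))
}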
Other Important Details
- Zeroing: Go clears memory when allocating, which adds some overhead.
- Escape analysis is conservative: if the compiler is unsure, it will place the variable on the heap.
- sync.Pool: useful for reusing objects in high-performance scenarios.
- Finalizers: rarely used, but Go lets you run cleanup code when the garbage collector finds that an object is no longer reachable; a small sketch follows below.
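Here is a minimal finalizer sketch, assuming a hypothetical tempFile wrapper around an OS file. Finalizer timing is not deterministic, so treat it as a safety net for a forgotten Close, not as the primary cleanup path.
package main

import (
    "fmt"
    "os"
    "runtime"
)

// tempFile is a hypothetical wrapper type, used only for illustration.
type tempFile struct {
    f *os.File
}

func openTemp() (*tempFile, error) {
    f, err := os.CreateTemp("", "demo-*")
    if err != nil {
        return nil, err
    }
    t := &tempFile{f: f}
    // If the caller forgets to close, the GC will eventually run this.
    runtime.SetFinalizer(t, func(t *tempFile) {
        fmt.Println("finalizer: closing leaked file")
        t.f.Close()
    })
    return t, nil
}

func (t *tempFile) Close() error {
    runtime.SetFinalizer(t, nil) // cleanup is done; remove the finalizer
    return t.f.Close()
}

func main() {
    t, err := openTemp()
    if err != nil {
        panic(err)
    }
    defer t.Close()
    fmt.Println("opened:", t.f.Name())
}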
Wrapping Up
Go hides much of the complexity of memory management from us, but understanding "stack versus heap", "escape analysis", and how the "allocator" and "garbage collector" work helps us write more efficient programs.
Next time you write Go code, think of the cafeteria plates (stack) and the big storage room (heap). If you can keep your plates in the cafeteria, your program will usually run faster. If not, make sure the janitor (garbage collector) does not get overwhelmed.
If you are curious about the other topics I cover in this series, you can check out the introduction post using the link below.
