Helloworld: A Go Patterns Reference
I’ve shipped a few dozen production Go services across a handful of companies. Billions of daily requests, codebases that outlived their original teams, org structures that changed faster than the software did. I’ve seen a lot of patterns tried, and a few that hold up.
github.com/sethgrid/helloworld is where I keep the ones that held up. It’s an annotated example, concrete and runnable, and I keep it current as time permits. The patterns below are what I reach for every time.
Explicit > Implicit
Implicit logic kills velocity for teams and organizations. A DI container resolving dependencies at runtime, a mock framework inferring behavior from call counts, an error swallowed somewhere in middleware. All of these make the system harder to read, debug, and hand off. The patterns below push that logic into the open, where the compiler and your teammates can see it.
In the same direction, be explicit with your errors: by default, wrap them at every return site. Be explicit with configuration: pass it down from main rather than parsing it deep in your logic. This is easier when you wire up your dependencies as early as possible. Be explicit with logic: don’t let assumptions guide code flow. Being explicit can take the form of using a flag instead of inferring meaning from a value. Sentinel values, while they have their place, violate this idea. A -1 value has implicit meaning; a flag or property that says “IsInvalid” communicates more to future developers.
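As a minimal sketch of the sentinel-versus-flag point (the lookup functions here are hypothetical, not from the repo):

```go
package main

import "fmt"

// Implicit: a sentinel return. Callers must simply know that -1 means
// "no age on record", and nothing stops them from doing math on it.
func ageImplicit(userID int64) int {
	return -1
}

// Explicit: a flag states the condition outright, so the compiler and
// future readers both see that absence is a distinct case.
type AgeResult struct {
	Age     int
	IsValid bool
}

func ageExplicit(userID int64) AgeResult {
	return AgeResult{IsValid: false}
}

func main() {
	if r := ageExplicit(42); !r.IsValid {
		fmt.Println("no age on record")
	}
}
```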
Embrace the Interface
Testing with mocks sounds reasonable until the test suite becomes a liability. When you mock the implementation, you couple the test to it. Refactor the internals without changing behavior, and the tests break anyway. The tests are no longer testing what the code does; they’re testing how it does it. Martin Fowler put it well in “Mocks Aren’t Stubs”: mockist tests are inherently coupled to implementation, not behavior.
The Go standard library gives you a better tool: interfaces. Define a narrow interface for each dependency. Write a simple struct that satisfies it. Pass that in tests. The Google SWE Book’s chapter on test doubles draws a sharp line between fakes (simple working implementations) and mocks (behavior-verifying substitutes with framework machinery), and makes the case that fakes are almost always the right choice. The Google Testing Blog is more blunt: don’t mock types you don’t own. Hynek’s “Don’t Mock What You Don’t Own” is a good five-minute read on why mocking third-party code in particular punishes you twice, once in test complexity, again when the third-party API changes.
In Go, the fake is a few lines:
type eventWriter interface {
	Write(userID int64, message string) error
	Close() error
}

type fakeEventStore struct {
	wErr error
	cErr error
}

func (f *fakeEventStore) Write(userID int64, message string) error { return f.wErr }
func (f *fakeEventStore) Close() error                             { return f.cErr }
The fake does exactly what you tell it to. When the interface changes, the compiler tells you what to fix, in the fake and everywhere else.
Explicit Dependency Wiring
Dependency injection frameworks solve a problem Go doesn’t have. In Java or C#, DI containers handle constructor overloading and lifecycle management that the language makes painful by hand. Go has first-class functions, fast compilation, and interfaces that swap cleanly in tests. The motivation for a container largely evaporates.
Redowan Delowar’s “You probably don’t need a DI framework” walks through the concrete costs: Dig and Wire push errors to runtime instead of compile time, bury the dependency graph inside the container, and require every new team member to learn a new mental model on top of the actual problem. His conclusion matches mine: manual wiring shows you the graph directly, compile errors point at broken call sites immediately, and there is no single place you can’t look.
The server struct holds interface-typed dependencies:
type Server struct {
	config     Config
	taskq      taskqueue.Tasker
	eventStore eventWriter
}
Production:
srv := &Server{
	eventStore: db.NewEventStore(conn),
	taskq:      taskqueue.NewMySQLTaskQueue(conn),
}
Tests:
srv := &Server{
	eventStore: &fakeEventStore{wErr: nil},
	taskq:      taskqueue.NewInMemoryTaskQueue(1, 15*time.Second, log),
}
The compiler enforces correctness. Nothing is hidden behind a container. I’ve wired codebases this way with dozens of services and deep dependency graphs, and the explicit approach has never been the bottleneck. I have had teammates who bemoaned passing arguments down the stack, but those objections fall especially short in the world of AI-assisted development.
HTTP Handler Closures
Each HTTP handler is a standalone function. It takes its dependencies as arguments, captures them in a closure, and returns an http.HandlerFunc. The dependency is resolved once, at route registration time, not once per request.
func handleHelloworld(store eventWriter) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		logger := logger.FromRequest(r)
		if err := store.Write(0, "hello"); err != nil {
			errorJSON(w, r, http.StatusServiceUnavailable, "service unavailable", err)
			return
		}
		// ...
	}
}
// wired at startup, not per request
router.Get("/", handleHelloworld(s.eventStore))
Matt Ryer covered this pattern in “How I write HTTP services in Go after 13 years”: handlers as functions that return http.Handler, dependencies injected at definition time. His approach goes further: he favors end-to-end tests over per-handler unit tests, arguing that unit tests at the handler level mostly exercise boilerplate. That’s a reasonable position, and I’ve used both. My preference is per-handler tests for anything with meaningful logic, end-to-end for the wiring. The closure pattern supports both without conflict.
To test a handler in isolation, pass a fake, call ServeHTTP on the result with an httptest.ResponseRecorder, and assert on the response. No HTTP server required. Alternatively, you can spin up an actual HTTP server. The helloworld repo illustrates independently running servers on their own ports for integration testing. The advantage of testing with fakes is validating error handling at every level, up through the handler layer.
HTTP Server, Two Listeners
The helloworld server runs two HTTP listeners: a public one for business logic, and an internal one for /metrics, /healthcheck, and /status. Same process, independent ports. The internal port stays firewalled from the public API without any routing logic. Prometheus scraping, health checks from a load balancer, and internal status endpoints stay off the public surface entirely.
Structured Errors
Error strings that embed variable data are a monitoring problem. "unable to delete user 180: mysql has gone away" has a cardinality equal to the number of affected users times the number of error conditions. That’s a lot of distinct log lines for what is one kind of failure. Structured logging solves this at the log level, but only if the error itself carries structured context, not a pre-formatted string. Structured errors reduce dynamic string content, factoring out useful fields for reporting and analysis.
The pattern is to attach key-value pairs to the error at the site where context is available, then extract them at the log site. I use kverr for this. You wrap an existing error with a map of context:
if err := deleteUser(id); err != nil {
	return kverr.New(err, "user_id", id, "operation", "delete")
}
kverr.New lifts any existing kv pairs from a wrapped kverr error, so context accumulates as the error propagates up the call stack without duplication. The error string stays clean. Error() returns only the underlying message, not the kv map, which keeps cardinality low in any system that indexes on the error string. kverr provides a YoinkArgs helper for log extraction:
args := kverr.YoinkArgs(err)
logger.Error("had an error", args...)
kverr is lightweight and stays out of the way. An alternative is returning your own error type that includes the relevant data directly:
type UserError struct {
	err    error
	userID int64
	Op     string
}

func (e UserError) Error() string { return e.err.Error() }
func (e UserError) Unwrap() error { return e.err }

if err := deleteUser(id); err != nil {
	return UserError{err: err, userID: id, Op: "delete"}
}
The important part is the discipline, not the library.
Extracting fields for logging
When the organization has an agreed-upon structured logging package like zap, a StructuredLoggable interface and a small recursive extractor make field extraction consistent across any error type:
// package errfields

// StructuredLoggable is implemented by error types that carry structured
// log fields. The walker below collects fields from every layer of the
// error chain, so each wrapping level can contribute independent context.
type StructuredLoggable interface {
	LogFields() []zap.Field
}

// Fields walks the full error chain and returns all zap fields found.
// Inner (earliest-wrapped) context appears first; each outer layer's
// fields are appended after it.
func Fields(err error) []zap.Field {
	var fields []zap.Field
	for err != nil {
		if sl, ok := err.(StructuredLoggable); ok {
			fields = append(sl.LogFields(), fields...)
		}
		err = errors.Unwrap(err)
	}
	return fields
}
Any error type that implements LogFields() []zap.Field participates automatically. The interface is the decoupling point. To wire in kverr:
func (e *Error) LogFields() []zap.Field {
	e.mu.RLock()
	defer e.mu.RUnlock()
	fields := make([]zap.Field, 0, len(e.kv))
	for k, v := range e.kv {
		fields = append(fields, zap.Any(k, v))
	}
	return fields
}
To wire in UserError:
func (e UserError) LogFields() []zap.Field {
	return []zap.Field{
		zap.Int64("user_id", e.userID),
		zap.String("operation", e.Op),
	}
}
At the log site, the full chain collapses into a clean set of fields:
if err != nil {
	logger.Error("failed to delete user",
		append(errfields.Fields(err), zap.Error(err))...,
	)
	return err
}
Testing Against Logs
Here’s something I do that most people find unorthodox: I assert against log output in tests.
Structured logs are not just observability artifacts. Read another way, they are a structured event stream describing what is happening in your application. Every log line is a machine-readable record with a level, a message, and a set of key-value fields. That’s a contract. I test against it.
If a request should emit a specific log line with specific fields, I write a test that captures the log output and asserts on it. This catches regressions in observability, not just in behavior. A refactor that silently drops a user_id field from an error log is a real bug. Tests that only assert on HTTP responses miss it entirely.
The helloworld repo uses a mutex-wrapped bytes.Buffer to capture log output safely across goroutines, wired in via a functional option on the test server:
logbuf := lockbuffer.NewLockBuffer()
srv, _ := newTestServer(WithLogWriter(logbuf))
// ... make a request ...
assert.Contains(t, logbuf.String(), `"user_id"`)
assert.Contains(t, logbuf.String(), `"level":"ERROR"`)
If your logs are structured and consistent, they are a testable artifact of your system. And if your system evolves to a pipeline, this event stream will naturally fit right in.
Final Thoughts
Three things underpin everything in this repo.
Be explicit. Pass dependencies down from main. Wire them early. When something goes wrong at 2am, you want to be able to read the code and know exactly what is happening, with no framework magic to unpeel. This pays off compounding interest over the life of a codebase.
Learn to love the interface. Go makes interfaces cheap. Use them to isolate units that need failure mode testing.
Take structured logs seriously. At scale, log cardinality determines whether your monitoring system is useful or on fire. Structured errors feeding structured logs, with a consistent field-extraction contract, allow for aggregations that drive alerts and dashboards that tell you what is broken, with breadcrumbs of related data to follow. Once you have good logs and good log-analysis tooling, you’ll never want to go back.
Full reference at github.com/sethgrid/helloworld. Compiles, has tests, stays current.