The “Plumbing Hell” Problem #
Distributed systems offer incredible upside at scale. But for the developer trying to write code on a Tuesday afternoon, they are often a nightmare.
We have entered what I call “Plumbing Hell.” To run a single service locally, you often need to spin up a Kafka instance, a Postgres database, a Redis cache, and three other microservices. You end up writing 500 lines of docker-compose.yaml or, worse, trying to run a heavy Kubernetes cluster on your laptop just to verify a simple API change.
We spend more time managing ports, debugging networking between containers, and waiting for “the environment” to stabilize than we do writing code.
Intent vs. Implementation #
The core problem is that our current tools conflate intent (what I want to happen) with implementation (how it happens).
If I say “I need a Postgres database,” I shouldn’t have to manually map ports to localhost:5432 or write a Kubernetes StatefulSet manifest just to run a test.
This is the motivation behind a new tool I am building called bs.
What is bs? #
bs stands for The No Bullshit Build System.
The goal is to build an artifact-centric build orchestrator in Go. The philosophy is simple: Define the intent, and let the system handle the plumbing.
Unlike traditional build tools that act as glorified script runners, bs is designed to act as a Translator. You define your architecture in a simple bs.yaml, and the tool translates that intent into environment-specific instructions:
- On your Local Machine: It translates intent into direct Docker API calls, simulating a “smart,” hot-reloading environment without the overhead of k8s.
- In Production: It translates that same intent into Kubernetes manifests or Helm charts.
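To make that concrete, here is a hypothetical sketch of a bs.yaml. Every field name is an assumption for illustration; the real schema is still taking shape:

```yaml
# Hypothetical bs.yaml: illustrative only, not the final schema.
services:
  api:
    build: ./cmd/api
    depends_on: [db, cache]
  db:
    kind: postgres   # intent: "I need a Postgres database"
  cache:
    kind: redis
```

The point is that the file says nothing about ports, volumes, or networks; the translator fills those in per environment.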
The Architectural Vision #
I am designing the system around a few key concepts that differentiate it from standard Makefiles or scripts.
1. The “Translator” Pattern #
bs decouples the definition of a service from how it runs. You don’t write Docker Compose files; you write a bs.yaml. The system decides the best way to run that dependency based on whether you are in dev, test, or prod.
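To sketch what that decoupling could look like in Go (all of these type names are my illustration, not bs’s actual API), the intent is a plain value, and each environment gets its own translator:

```go
package bs

import "context"

// Service is the environment-agnostic intent parsed from bs.yaml.
type Service struct {
	Name      string
	Image     string
	DependsOn []string
}

// Translator turns that intent into environment-specific instructions.
type Translator interface {
	Provision(ctx context.Context, svc Service) error
}

// DockerTranslator would drive the Docker API directly for local dev.
type DockerTranslator struct{ /* docker client handle */ }

func (d *DockerTranslator) Provision(ctx context.Context, svc Service) error {
	// create and start a container for svc.Image, wire up its network, etc.
	return nil
}

// KubeTranslator would render Kubernetes manifests or Helm charts for prod.
type KubeTranslator struct{ /* manifest renderer */ }

func (k *KubeTranslator) Provision(ctx context.Context, svc Service) error {
	// emit a Deployment/StatefulSet plus Service for svc, etc.
	return nil
}
```

Choosing between dev, test, and prod then reduces to choosing which Translator receives the same Service values.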
2. Auto-Wiring & Secrets Management #
One of the biggest friction points in local dev is managing the sprawl of configuration and secrets. bs handles dynamic dependency linking and secret injection automatically.
If Service A depends on a Database, bs doesn’t just inject a URL; it securely provisions the necessary environment variables, API keys, and TLS certificates at runtime. You never have to copy-paste a .env file or hardcode a secret again.
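As a purely hypothetical illustration (the variable names are my assumption, not a defined contract), Service A’s process would start up and find everything already wired into its environment:

```text
DATABASE_URL=postgres://app:<generated-password>@db:5432/app
DATABASE_TLS_CA=/run/bs/tls/ca.pem
```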
3. Everything is an Artifact #
In this system, even a database is treated as an artifact. It’s not just “infrastructure.” A database can be pre-seeded with schema scripts, snapshotted, and cached as an immutable image.
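A hypothetical sketch of how that might be declared (field names are assumptions):

```yaml
artifacts:
  db:
    kind: postgres
    seed:
      - ./migrations/schema.sql
      - ./testdata/fixtures.sql
    snapshot: true   # cache the seeded state as an immutable image
```

When the seed scripts are unchanged, the cached snapshot could be restored instead of re-running migrations on every boot.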
Current Status: Work in Progress #
To be clear: This is not a finished product yet.
I am currently deep in the development of Phase 1. My focus right now is establishing the skeleton of the system:
- The Core CLI: Using `spf13/cobra` to handle the entry points.
- The DAG Engine: Building the topological sort logic to handle parallel execution and dependency graphing (see the first sketch after this list).
- The Docker Adapter: Implementing the runtime interface to talk directly to the Docker client for spawning containers.
- The Local Store: Creating the Content Addressable Store (CAS) to save and retrieve build artifacts (see the second sketch after this list).
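For the DAG engine, Kahn’s algorithm is the textbook way to get a topological order out of a dependency map, and it extends naturally to parallel execution: every target whose unmet-dependency count hits zero can be dispatched to a worker at once. Here is a minimal sketch in Go; the function name and map shape are my illustration, not bs’s actual internals:

```go
package main

import "fmt"

// topoSort returns a valid build order via Kahn's algorithm, or an
// error if the graph contains a cycle. deps maps each target to the
// targets it depends on.
func topoSort(deps map[string][]string) ([]string, error) {
	indegree := map[string]int{}        // target -> unmet dependency count
	dependents := map[string][]string{} // target -> targets waiting on it
	for node, ds := range deps {
		if _, ok := indegree[node]; !ok {
			indegree[node] = 0
		}
		for _, d := range ds {
			if _, ok := indegree[d]; !ok {
				indegree[d] = 0
			}
			indegree[node]++
			dependents[d] = append(dependents[d], node)
		}
	}
	var queue, order []string
	for node, deg := range indegree {
		if deg == 0 {
			queue = append(queue, node) // no dependencies: ready now
		}
	}
	for len(queue) > 0 {
		n := queue[0]
		queue = queue[1:]
		order = append(order, n)
		for _, m := range dependents[n] {
			indegree[m]--
			if indegree[m] == 0 {
				queue = append(queue, m)
			}
		}
	}
	if len(order) != len(indegree) {
		return nil, fmt.Errorf("dependency cycle detected")
	}
	return order, nil
}

func main() {
	order, err := topoSort(map[string][]string{
		"api":      {"postgres", "redis"},
		"postgres": nil,
		"redis":    nil,
	})
	fmt.Println(order, err) // e.g. [redis postgres api] <nil>
}
```

For the local store, content addressing boils down to “the hash of the bytes is the key.” A minimal sketch, again with hypothetical names and layout:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"os"
	"path/filepath"
)

// casPut stores content under its own SHA-256 digest and returns the digest.
func casPut(root string, content []byte) (string, error) {
	sum := sha256.Sum256(content)
	digest := hex.EncodeToString(sum[:])
	// Shard by the first two hex chars to keep directories small.
	path := filepath.Join(root, digest[:2], digest)
	if err := os.MkdirAll(filepath.Dir(path), 0o755); err != nil {
		return "", err
	}
	// Identical content always hashes to the same path, so re-writes
	// are harmless and lookups are a pure function of the bytes.
	return digest, os.WriteFile(path, content, 0o644)
}

func main() {
	digest, err := casPut(".bs/cas", []byte("artifact bytes"))
	fmt.Println(digest, err)
}
```

Because identical inputs always resolve to the same digest and path, cache hits and deduplication come for free.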
This post is a statement of intent. I am building the core to solve the friction I face daily with microservice development.
I will be posting updates as I hit key milestones on the roadmap. Phase 1 is about getting the “plumbing” of the tool itself working so it can handle the plumbing of my apps.
Stay tuned.