I used to spend hours — sometimes days — squeezing every last byte out of code. Getting a program to run on hardware that “couldn’t possibly” handle it was genuinely thrilling. The demoscene was my inspiration: watching impossible visual effects rendered in 64 kilobytes or less. How did they do that?
That mindset seems almost quaint now. Why optimize when you can just throw more hardware at the problem? Why care about efficiency when compute is cheap?
But here’s the thing: compute isn’t cheap. We’ve just externalized the costs.
When Optimization Was Survival
In the demoscene, constraints weren’t obstacles — they were creative fuel. A 64KB intro had to contain everything: graphics, music, animation, often generated procedurally in real time. Every byte mattered. Every CPU cycle counted.
The legendary demos from groups like Future Crew, Farbrausch, and Conspiracy weren’t just technical achievements — they were art born from limitation. .kkrieger, a complete first-person shooter in 96KB, demonstrated that with enough creativity and optimization, you could do seemingly impossible things.
This wasn’t masochism. It was craft. Understanding your hardware intimately enough to coax out performance that shouldn’t exist.
The Great Forgetting
Then something changed.
Cloud computing made infinite scale available on demand. Moore’s Law kept delivering. Storage became essentially free. And gradually, the culture of optimization faded.
Why spend three days optimizing an algorithm when you can spin up bigger instances? Why compress assets when bandwidth is cheap? Why profile your code when the customer will just buy more RAM?
I’ve seen this firsthand in enterprise environments:
- Applications that need 32GB of RAM to display a spreadsheet
- Docker images measured in gigabytes for simple services
- JavaScript bundles that take longer to parse than the user interaction they exist to support
- “Microservices” that need more resources than the monolith they replaced
We’ve collectively decided that developer time is expensive and compute is cheap. So we optimize for the former at the expense of the latter.
This trade-off made some sense when it was purely economic. But it’s not purely economic anymore.
The Hidden Environmental Cost
Here’s the uncomfortable truth: every unnecessary CPU cycle consumes electricity. Every bloated Docker image requires storage and network transfer. Every over-provisioned Kubernetes cluster burns power 24/7.
Data centers currently consume about 1-2% of global electricity. And that percentage is growing. Fast. The explosion of AI workloads is accelerating this dramatically.
When your inefficient code runs on thousands of instances, those small inefficiencies multiply. The difference between O(n) and O(n²) isn’t just computer science trivia — it’s kilowatt-hours. Real energy. Real environmental impact.
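To make that concrete, here is a toy Python sketch (illustrative only, not from any real codebase): the same intersection computed two ways. At realistic data sizes, multiplied across a fleet of instances, the extra comparisons in the quadratic version are CPU time, and therefore electricity.

```python
def common_items_quadratic(a, b):
    # 'x in b' rescans the whole list every time: roughly len(a) * len(b) comparisons.
    return [x for x in a if x in b]

def common_items_linear(a, b):
    # One pass to build a set, then constant-time membership checks.
    b_set = set(b)
    return [x for x in a if x in b_set]

a = list(range(5_000))
b = list(range(2_500, 7_500))
# Same result either way; the quadratic version just spends millions of extra
# comparisons getting there. Scale the inputs up and spread the work over
# thousands of machines, and that difference is energy, not trivia.
assert common_items_quadratic(a, b) == common_items_linear(a, b)
```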
This is where circular economy thinking becomes relevant. We’ve been treating compute as a disposable resource: use what you need, scale up, don’t worry about waste. But that model is increasingly unsustainable.
What Would Demosceners Do?
The demoscene mindset offers a different perspective:
1. Measure first, then optimize
Demosceners didn’t guess what was slow — they profiled obsessively. Modern observability tools give us similar capabilities. How often do we actually use them for optimization rather than debugging?
2. Question the default
Does this service really need 2GB of base memory? Does this JSON response really need to include every field? Does this background job really need to run every minute?
3. Constraints as creativity
What if your Docker image had a 50MB limit? What if your service had to run on 256MB RAM? These constraints often lead to cleaner, more focused solutions.
4. Pride in efficiency
In the demoscene, making something small and fast was prestigious. In modern software, it’s often invisible. What if we celebrated optimization the way we celebrate features?
Practical Green Software
I’m not suggesting we all start coding demos. But some practices from that world translate directly:
Profile before scaling. Before spinning up bigger instances, understand why you need them. Often, a single inefficient database query or memory leak is the culprit.
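As a minimal sketch of that habit, Python's built-in profiler (the context-manager form needs Python 3.8+) is enough to find the hot path before reaching for a bigger machine; handle_request here is a stand-in for whatever your real entry point is.

```python
import cProfile
import pstats

def handle_request(n: int) -> int:
    # Stand-in for your real entry point: a request handler, a batch job, etc.
    return sum(i * i for i in range(n))

# Profile one representative call instead of guessing.
with cProfile.Profile() as profiler:
    handle_request(2_000_000)

# The top of this list is usually a single slow query or hot loop,
# not "the instance is too small".
pstats.Stats(profiler).sort_stats(pstats.SortKey.CUMULATIVE).print_stats(10)
```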
Right-size everything. Kubernetes resource requests and limits aren’t bureaucracy — they’re environmental policy. An over-provisioned cluster is burning energy to keep CPUs warm and idle.
Measure energy consumption. Tools like Scaphandre, Kepler, and Cloud Carbon Footprint can show you the actual energy impact of your workloads. What gets measured gets managed.
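For instance, if Scaphandre's Prometheus exporter is already being scraped, a few lines can pull live host power draw. The Prometheus address and the metric name below are assumptions about a typical setup; verify both against your own deployment and Scaphandre version.

```python
import requests  # third-party: pip install requests

# Assumptions: Prometheus runs locally and scrapes Scaphandre's exporter;
# the query names the host-level power gauge, which may differ by version.
PROMETHEUS_URL = "http://localhost:9090"
QUERY = "scaph_host_power_microwatts"

resp = requests.get(f"{PROMETHEUS_URL}/api/v1/query",
                    params={"query": QUERY}, timeout=5)
resp.raise_for_status()

for series in resp.json()["data"]["result"]:
    instance = series["metric"].get("instance", "unknown")
    watts = float(series["value"][1]) / 1_000_000  # microwatts to watts
    print(f"{instance}: {watts:.1f} W")
```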
Batch and schedule smartly. Running jobs during off-peak hours or when renewable energy is abundant reduces environmental impact. Some cloud providers now offer carbon-aware scheduling.
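The pattern can be as simple as a gate in front of the job. In this sketch the carbon-intensity source is stubbed out and the threshold is arbitrary; a real version would query a provider such as Electricity Maps or WattTime, or lean on the carbon-aware options some schedulers and clouds expose.

```python
import time

# Illustrative threshold; tune it to your region's typical grid mix.
CARBON_THRESHOLD_G_PER_KWH = 200

def grid_carbon_intensity() -> float:
    """Stub: return the current grid carbon intensity in gCO2 per kWh."""
    return 180.0  # replace with a call to your region's carbon-intensity API

def run_when_grid_is_clean(job, poll_seconds: int = 1800) -> None:
    # Defer non-urgent batch work until the grid is comparatively clean.
    while grid_carbon_intensity() > CARBON_THRESHOLD_G_PER_KWH:
        time.sleep(poll_seconds)
    job()

run_when_grid_is_clean(lambda: print("running the nightly report"))
```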
Cache aggressively. The greenest computation is the one you don’t do. Caching isn’t just performance — it’s sustainability.
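In-process, that can be a one-decorator change. The sketch below is plain memoization with an invented function name; the same reasoning applies to shared caches and CDNs, where the avoided work is someone else's CPU time too.

```python
from functools import lru_cache

@lru_cache(maxsize=1024)
def daily_report(day: str) -> int:
    # Stand-in for an expensive computation or upstream call.
    return sum(i * i for i in range(2_000_000))

daily_report("2024-06-01")  # pays the full cost once
daily_report("2024-06-01")  # every later identical call is a dictionary lookup
```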
Choose efficient languages for the right jobs. Rust and Go consume far less energy than Python or Ruby for equivalent workloads. Sometimes that trade-off matters.
The Bigger Picture
Software optimization isn’t just about saving money or improving user experience anymore. It’s about recognizing that compute has a physical footprint.
Every time we ship bloated software, we’re making a choice. We’re choosing convenience over efficiency. We’re externalizing costs to the environment. We’re assuming that someone else — the cloud provider, the end user, the planet — will absorb the waste.
The demoscene taught us that incredible things are possible within constraints. That limitations breed creativity. That efficiency is its own reward.
Maybe it’s time to remember those lessons.
The demoscene has now been added to national registers of intangible cultural heritage in several countries, Finland first among them. The art of doing more with less deserves preservation — and revival.
