Summary
Elixir enables software teams to achieve dramatic efficiency gains—often reducing server fleets by 80–95%—by leveraging the BEAM VM’s lightweight concurrency, built-in fault tolerance, and demand-driven processing. These efficiencies translate directly into lower cloud costs, reduced energy consumption, and measurable carbon savings.
Real-world examples from Bleacher Report, Discord, and WhatsApp show Elixir supporting massive scale with a fraction of the hardware, cutting both operational spend and environmental impact. In short, greener software is also cheaper software—and Elixir makes both possible.
If you care about sustainability, you should care about software efficiency. In cloud environments, every CPU cycle you don’t burn is money and electricity you don’t spend. Elixir—built on the Erlang BEAM VM—lets teams do more work with fewer servers, often by an order of magnitude. Less hardware, lower energy, lower carbon, lower bill.
Below, I’ll unpack why Elixir tends to be frugal with compute, show hard numbers from real systems, and give concrete calculations you can adapt to your own workloads (including an SCI-style footprint estimate).
Why cost and carbon move together
In the public cloud, your monthly invoice is a proxy for energy: fewer instance-hours, less electricity. In carbon accounting terms, operational emissions are:
O = E × I, where E is energy (kWh) and I is the grid’s carbon intensity (gCO₂e/kWh).
Two environmental factors magnify the effect of any software optimization:
Data center PUE (Power Usage Effectiveness). Industry-wide, average PUE has hovered around 1.58 in recent surveys (meaning for every 1 kWh used by your servers, another 0.58 kWh goes to cooling, power distribution, etc.).
Grid carbon intensity (gCO₂e/kWh). In 2023, the EU average electricity intensity was ~242 gCO₂/kWh; the global power sector average was ~480 gCO₂/kWh. Your region may be cleaner or dirtier.
Implication: If Elixir lets you cut 80–95% of your instance count, the emissions drop roughly in lockstep—then PUE and grid intensity scale that drop up or down.
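In code, the O = E × I relationship (with PUE folded in) is a one-liner. The module and function names below are illustrative, and the example inputs reuse the PUE and EU grid figures quoted above:

```elixir
defmodule Carbon do
  # Operational emissions: O = E × I, where E is IT energy (kWh) grossed up
  # by PUE, and I is grid carbon intensity in gCO2e/kWh. Returns kgCO2e.
  def operational_kg(it_kwh, pue, intensity_g_per_kwh) do
    it_kwh * pue * intensity_g_per_kwh / 1000
  end
end

# 1 MWh of server energy at PUE 1.58 on the 2023 EU-average grid (~242 g/kWh):
IO.puts(Carbon.operational_kg(1000, 1.58, 242))  # ≈ 382 kgCO2e
```

Any instance-hours you eliminate reduce `it_kwh` directly, and PUE and intensity then scale the saving.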
What makes Elixir lean: the BEAM’s concurrency & memory model
Elixir rides on the Erlang VM (BEAM), which was built for high-availability telco systems. The BEAM’s design choices are unusually good for doing a lot with a little:
1) Ultra-lightweight processes
- A freshly spawned Erlang/Elixir process uses ~327 words of memory including a 233-word heap (a “word” is 8 bytes on 64-bit, so ~2.6 KB total at spawn). This tiny footprint is why you can run hundreds of thousands to millions of processes on one machine.
- Processes are isolated (no shared memory) and communicate via messages, which keeps contention low and enables per-process garbage collection—short, local GC pauses instead of whole-VM stop-the-world events.
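You can check the per-process cost on your own machine with a throwaway script: spawn a large number of idle processes and average what `Process.info(pid, :memory)` reports. Expect a few KB each:

```elixir
# Spawn 100_000 idle processes, each blocked in a receive, and
# average the memory the VM reports for them.
pids =
  for _ <- 1..100_000 do
    spawn(fn ->
      receive do
        :stop -> :ok
      end
    end)
  end

total =
  Enum.reduce(pids, 0, fn pid, acc ->
    {:memory, bytes} = Process.info(pid, :memory)
    acc + bytes
  end)

avg = div(total, length(pids))
IO.puts("#{length(pids)} processes, ~#{avg} bytes each")

# Clean up.
Enum.each(pids, &send(&1, :stop))
```

On a 64-bit VM the average lands in the low single-digit KB range, in line with the ~2.6 KB spawn footprint above.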
2) Back-pressure and flow control (GenStage/Broadway)
Elixir’s GenStage/Broadway encourages pull-based demand. Consumers ask producers for just enough work, so your system naturally throttles at the edges instead of exploding in the middle. This prevents “autoscaling thrash” and smooths compute usage. Discord used GenStage to absorb bursts of over a million push requests per minute.
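GenStage itself is a library, but its pull-based idea can be sketched with nothing more than processes and messages. `Producer` below is a hypothetical stand-in, not the GenStage API: the consumer asks for at most n events, and the producer never sends more than was demanded.

```elixir
defmodule Producer do
  # A minimal pull-based producer: it emits events only when a consumer
  # explicitly asks, which is the essence of GenStage's demand model.
  def start(events) do
    spawn(fn -> loop(events) end)
  end

  defp loop(events) do
    receive do
      {:ask, n, consumer} ->
        {batch, rest} = Enum.split(events, n)
        send(consumer, {:events, batch})
        loop(rest)
    end
  end
end

# The consumer sets the pace: ask for 3 events, process them, ask again.
producer = Producer.start(Enum.to_list(1..10))
send(producer, {:ask, 3, self()})

receive do
  {:events, batch} -> IO.inspect(batch)  # => [1, 2, 3]
end
```

Because demand flows upstream, a slow consumer automatically slows the whole pipeline instead of letting queues grow unbounded.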
3) Server-driven UX with LiveView
Phoenix LiveView keeps state server-side and sends diffs over a WebSocket:
- For one production app, an active LiveView connection uses ~3 MB, but after 15 s of idle the connection hibernates to ~150 KB—a 95% memory reduction per idle user.
- LiveView’s diffing slashes network payloads compared to re-rendering entire pages in a traditional SPA.
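The hibernation mechanism is plain BEAM machinery, so you can see it with a vanilla GenServer (LiveView processes are GenServer-based; in real LiveView, hibernation is configured on the socket rather than written by hand). `IdleSession` is a hypothetical name; returning `:hibernate` tells the VM to discard the call stack, run a full garbage collection, and shrink the heap to the live data:

```elixir
defmodule IdleSession do
  use GenServer

  def start_link(state), do: GenServer.start_link(__MODULE__, state)

  @impl true
  def init(state) do
    # Build some transient garbage so hibernation has something to reclaim.
    _scratch = Enum.map(1..50_000, &Integer.to_string/1)
    {:ok, state, :hibernate}
  end

  @impl true
  def handle_call(:state, _from, state) do
    # Hibernate again after every request, like an idle LiveView.
    {:reply, state, state, :hibernate}
  end
end

{:ok, pid} = IdleSession.start_link(%{user: "u1"})
Process.sleep(100)
{:memory, bytes} = Process.info(pid, :memory)
IO.puts("hibernated session: #{bytes} bytes")
```

The hibernated process holds only its small state map, which is the per-process version of the 3 MB → 150 KB drop described above.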
Proof in production: Elixir/Erlang at massive scale
- Bleacher Report rewrote a push-notification system in Elixir and went from ~150 servers down to 8, handling 200 M push notifications/day, 1.5 B page views, and 200k concurrent app requests, with p95 ≈ 100 ms and capacity for 8× traffic spikes without autoscaling.
- Discord scaled Elixir to millions of concurrent users and managed >1 M push requests/minute bursts using GenStage.
- WhatsApp (Erlang): with ~50 engineers they ran a service for ~900 M users (2015).
- Change.org: “over a billion emails a month” with an Elixir pipeline replacing a previous stack.
Turning Elixir’s efficiency into kWh, CO₂ and dollars
Step 1 — Convert compute usage to energy (kWh)
Using published per-vCPU energy coefficients for AWS instances:
- At 50% utilization: 0.00212 kWh per vCPU-hour
- Multiply by PUE to include facility overhead.
Formula:
kWh ≈ vCPU-hours × 0.00212 × PUE
Step 2 — Convert energy to emissions
kgCO₂e ≈ kWh × (grid intensity gCO₂/kWh ÷ 1000)
Step 3 — Put a price on it
Example: c6i.xlarge (4 vCPU) in us-east-1 = $0.1700/h → $1,489/year if run 24×7.
Example A — “Bleacher-style” consolidation
- Old: 150 × 4 vCPU = 600 vCPU
- New: 8 × 4 vCPU = 32 vCPU
- Δ vCPU = 568 vCPU fewer running 24×7 → ~4.98 M vCPU-hours/year
Energy saved: ≈ 16.7 MWh/year
Emissions saved: ≈ 4.0 tCO₂e/year
Bill saved: ≈ $211k/year
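Putting the three steps together for Example A (module and function names are illustrative; the coefficient, PUE, grid intensity, and instance price are the figures quoted above):

```elixir
defmodule FleetSavings do
  # Per-vCPU-hour coefficient at 50% utilization, from the figures above.
  @kwh_per_vcpu_hour 0.00212
  @hours_per_year 8760

  def vcpu_hours(vcpus), do: vcpus * @hours_per_year
  def kwh(vcpu_hours, pue), do: vcpu_hours * @kwh_per_vcpu_hour * pue
  def kg_co2e(kwh, intensity_g), do: kwh * intensity_g / 1000
  def dollars(instances, hourly_rate), do: instances * hourly_rate * @hours_per_year
end

delta_vcpu = 150 * 4 - 8 * 4                   # 568 fewer vCPUs
hours = FleetSavings.vcpu_hours(delta_vcpu)    # ~4.98M vCPU-hours/year
energy = FleetSavings.kwh(hours, 1.58)         # ~16,667 kWh/year
emissions = FleetSavings.kg_co2e(energy, 242)  # ~4,033 kgCO2e/year
bill = FleetSavings.dollars(142, 0.17)         # 142 × 4 vCPU instances retired
```

Swap in your own fleet sizes, region intensity, and instance pricing to reproduce the estimate for your workload.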
Example B — LiveView memory & bandwidth wins
For 10,000 concurrent users:
- Active connections: 3 MB each → 30 GB RAM.
- If 80% hibernate after 15 s, total drops to ~7.2 GB.
- That’s a ~76% reduction in memory footprint.
On the network side, LiveView sends diffs, often single-digit KB vs. 50–150 KB in SPAs.
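Example B's arithmetic as a sketch, using decimal GB and the per-connection figures above:

```elixir
# 10,000 concurrent LiveView sessions, 80% hibernated after 15 s of idle.
users = 10_000
active_mb = 3.0        # per active connection
hibernated_mb = 0.15   # per hibernated connection (~150 KB)

all_active_gb = users * active_mb / 1000
mixed_gb = (0.2 * users * active_mb + 0.8 * users * hibernated_mb) / 1000
reduction = (all_active_gb - mixed_gb) / all_active_gb

IO.puts("#{all_active_gb} GB -> #{mixed_gb} GB (#{round(reduction * 100)}% less)")
```

The hibernation ratio is the lever: the more of your audience is idle at any moment, the closer the fleet's memory bill tracks the 150 KB figure rather than the 3 MB one.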
A practical SCI-style footprint calculation
Hypothetical Phoenix API:
- 48 vCPU running 24×7 @ 35% CPU
- PUE: 1.58
- Grid: EU avg 242 gCO₂/kWh
- 100M requests/month
Result (scaling the 50%-utilization coefficient linearly to 35% CPU):
- ~82 kWh/month
- ~19.9 kgCO₂e/month
- SCI ≈ 0.2 mg CO₂e/req
Halving the instance count halves SCI.
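The whole estimate fits in a few lines. One assumption here is mine, not AWS's: the 50%-utilization coefficient is scaled linearly down to 35% CPU.

```elixir
# SCI sketch for the hypothetical Phoenix API above.
vcpu_hours = 48 * 730                       # ~35,040 vCPU-hours/month
coeff = 0.00212 * 35 / 50                   # linear scaling to 35% CPU (assumption)
kwh = vcpu_hours * coeff * 1.58             # ~82 kWh/month including PUE
kg = kwh * 242 / 1000                       # ~19.9 kgCO2e/month on the EU grid
sci_mg_per_req = kg * 1.0e6 / 100_000_000   # ~0.2 mg CO2e per request

IO.puts("SCI ≈ #{Float.round(sci_mg_per_req, 3)} mg CO2e/req")
```

Because every term is multiplicative, any fleet reduction passes straight through to the SCI figure.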
Tactics for greener Elixir systems
- Collapse microservices into OTP apps to reduce baseline fleets.
- Use GenStage/Broadway for demand-driven queues.
- Adopt LiveView for stateful UX with memory hibernation.
- Tune BEAM schedulers for smoother CPU utilization.
- Leverage fault tolerance to avoid hot spare fleets.
- Monitor with Telemetry and optimize process heaps.
- Run on efficient silicon (e.g., AWS Graviton) for up to 60% less energy per request.
Conclusion
If you squint, green software is just frugal software. Elixir gives you unusually powerful tools to avoid waste: tiny processes, supervision trees, back-pressure, and server-driven UX. That’s why it repeatedly shows up in stories where small teams run giant systems—or where big teams cut fleets by 5–20×.
And because cost, energy, and carbon move together in the cloud, those technical wins are environmental wins too.