The Ultimate 'It Works on My Machine' Fix: Building a Polyglot (C++, Rust, Python), Remote IDE & Jupyter-Ready Container
TL;DR: What We Are Building
This guide solves dependency hell and permission errors by building a professional‑grade development environment from first principles.
The Core: A reproducible Debian 12 container that eliminates “it works on my machine” issues.
The Stack: A custom‑compiled polyglot toolchain featuring GCC 15.2, Rust 1.89, and multiple sandboxed Python versions (including Python 3.13's experimental free‑threaded "no‑GIL" build).
The Workflow
- Permission‑Safe: Automatically maps your host user ID into the container to prevent root‑owned file locks (see the sketch after this list).
- Remote IDE Ready: Seamlessly connects to VS Code and JetBrains IDEs via SSH for a full desktop experience.
- Interactive: Includes a background JupyterLab server with self‑signed TLS for secure data exploration.
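As a taste of what "permission‑safe" means in practice, here is a minimal sketch of the two common ways to line up host and container IDs. The build‑arg names and image tags are illustrative, not the article's final Dockerfile:

```bash
# Option 1: bake your host UID/GID into the image at build time
# (ARG names like USER_ID are placeholders for whatever the Dockerfile defines)
docker build \
  --build-arg USER_ID="$(id -u)" \
  --build-arg GROUP_ID="$(id -g)" \
  -t polyglot-dev .

# Option 2: override the user at run time; files written to the bind mount
# below end up owned by you on the host, not by root
docker run --rm -it --user "$(id -u):$(id -g)" -v "$PWD:/work" -w /work debian:12
```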
Introduction
Most development environments are a compromise—a fragile mix of system packages, conflicting dependencies, and the classic “it works on my machine” problem. Docker offers a solution, but often introduces its own headaches with file permissions, monolithic images, and toolchains that are difficult to customize. This guide rejects that compromise.
We’ll use a first‑principles approach to build a professional, multi‑language (C, C++, Python, Rust) development environment from the ground up. You’ll learn why containers are fundamentally more efficient than traditional virtual machines before we construct a Dockerfile that gives you:
- A permission‑safe architecture that works seamlessly with your local files.
- Custom‑built, modern toolchains compiled from source.
- Integrated JupyterLab and SSH access for truly remote‑capable development (sketched below).
By the end, you’ll have a powerful, reproducible environment and the knowledge to build and customize any development container you’ll ever need.
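To make that last piece concrete before we start building, here is a rough sketch of what "JupyterLab with self‑signed TLS" involves. The file names, certificate subject, and port are illustrative rather than the article's final configuration:

```bash
# Generate a self-signed certificate valid for one year
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout jupyter.key -out jupyter.crt -subj "/CN=localhost"

# Serve JupyterLab over HTTPS; --certfile/--keyfile are Jupyter Server's
# shorthand aliases for ServerApp.certfile / ServerApp.keyfile
jupyter lab --no-browser --ip=0.0.0.0 --port=8888 \
  --certfile=jupyter.crt --keyfile=jupyter.key
```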
A Quick Note on Operating Systems
This guide assumes a Debian‑based Linux system (Debian 12). All commands and paths reflect that native environment.
If you’re on Windows, you’ll need to run the setup inside Windows Subsystem for Linux (WSL). Enable virtualization in your BIOS and install WSL with:
`wsl --install`
(Full Windows‑specific steps are omitted.)
Foundations
A Primer on Isolation
Every developer knows the dreaded phrase: “but it works on my machine.” It signals an environment that is unstable, inconsistent, and impossible to reproduce.
Local machines often become chaotic battlefields of competing dependencies—different projects requiring different Python versions, system libraries upgraded for one task breaking another, and onboarding new developers turning into an archaeological dig through outdated scripts.
The goal of this guide is to create a development environment that is:
- Clean
- Reproducible
- Portable
In short, it should be identical for every developer on the team and, crucially, identical to the production environment where the code will ultimately run.
Traditional Virtualization (The Heavyweight Approach)
Traditional virtualization uses a hypervisor to emulate a full set of hardware (CPU, RAM, storage, network). Each virtual machine (VM) includes a complete guest operating system, making it akin to running a separate computer inside your computer. This approach is robust but incurs significant overhead in startup time, RAM, and disk usage.
On Linux, this is typically handled by KVM (Kernel‑based Virtual Machine) together with tools like QEMU. While powerful, it’s often overkill for development tasks.
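For contrast, here is roughly what spinning up such a VM looks like with QEMU/KVM; the disk and ISO file names are placeholders:

```bash
# Create a 20 GB copy-on-write disk image for the guest
qemu-img create -f qcow2 debian-vm.qcow2 20G

# Boot a full guest OS: emulated hardware, its own kernel, 4 GB of RAM reserved
qemu-system-x86_64 -enable-kvm -m 4096 -smp 2 \
  -drive file=debian-vm.qcow2,format=qcow2 \
  -cdrom debian-12-netinst.iso
```

Every container discussed below skips all of this: no disk image, no guest kernel, no boot sequence.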
Containerization (The Lightweight Approach)
Containers are an OS‑level virtualization technique. Instead of emulating hardware, containers are isolated processes that share the host’s kernel. Two kernel features make this possible:
- Namespaces – provide isolation of filesystem, network stack, process IDs, etc.
- Control Groups (cgroups) – limit resources such as CPU and RAM.
Think of the host OS as the building’s foundation and shared infrastructure, while each container is a private apartment. Namespaces are the walls and locked doors; cgroups are the circuit breakers.
Because there’s no guest OS to boot, containers launch in milliseconds with minimal resource overhead.
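You can poke at both mechanisms without Docker at all. A minimal sketch, assuming a cgroups‑v2 host (the group name and paths are illustrative):

```bash
# Namespaces: fork a shell into its own PID namespace; it sees itself as PID 1
sudo unshare --pid --fork --mount-proc \
  bash -c 'echo "I am PID $$ in here"; ps -e | head'

# cgroups v2: cap a process tree at 256 MB of RAM
sudo mkdir /sys/fs/cgroup/demo
echo 256M | sudo tee /sys/fs/cgroup/demo/memory.max
echo $$   | sudo tee /sys/fs/cgroup/demo/cgroup.procs   # move this shell into the group
```

Docker orchestrates exactly these primitives for you on every `docker run`.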
Why We Choose Docker
Docker did not invent containers, but it packaged the underlying technology into a developer‑friendly ecosystem:
- A simple, text‑based `Dockerfile` that serves as a blueprint for building images.
- The Docker Engine, which provides a powerful CLI for building, shipping, and running images.
- Docker Hub, a public library for sharing pre‑built images.
These tools combine the speed and efficiency of OS‑level virtualization with unparalleled reproducibility, making Docker the ideal choice for our development environment.
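Those three pieces compose into a tight loop. As a minimal preview of the workflow (this throwaway image is not the environment we build later):

```bash
# The blueprint: a three-instruction Dockerfile, written inline here for brevity
cat > Dockerfile <<'EOF'
FROM debian:12
RUN apt-get update && apt-get install -y --no-install-recommends build-essential
CMD ["/bin/bash"]
EOF

# The engine: build an image from the blueprint, then run it interactively
docker build -t hello-dev:demo .
docker run --rm -it hello-dev:demo

# The registry: the debian:12 base image above was pulled from Docker Hub
```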