MayaFlux 0.1.0: Infrastructure for Digital Creativity

Published: January 1, 2026 at 01:33 PM EST
8 min read
Source: Dev.to

MayaFlux 0.1.0 – Release Overview

MayaFlux 0.1.0 is now available. This is not another creative‑coding framework; it is a computational infrastructure built from 15 years of interdisciplinary performance practice, pedagogy, research, and production DSP engineering.

Core Philosophy

  • C++20/23 multimedia computation framework that rejects several fundamental assumptions of existing tools.
  • Analog synthesis metaphors (oscillators, patch cables, envelope generators) are pedagogical crutches borrowed from hardware that never constrained digital computation.
  • MayaFlux embraces:
    • Recursion & look‑ahead processing
    • Arbitrary precision & cross‑domain data sharing
    • Computational patterns with no analog equivalent (polynomials sculpt data, logic gates make creative decisions, coroutines coordinate time itself)

Unified Processing Model

Audio, visual, and control processing are artificially separated in most commercial tools. In MayaFlux a single unit can:

  • Output to audio channels
  • Trigger GPU compute shaders
  • Coordinate temporal events

All of this happens simultaneously in the same processing callback. Samples, pixels, and parameters are all double‑precision floating‑point, so data flows between domains without conversion overhead.

Extensibility

Tools that hide complexity also hide possibility. MayaFlux provides hooks everywhere:

  • Replace the audio callback
  • Intercept buffer processing
  • Override channel coordination
  • Substitute back‑ends

Every layer is replaceable; every system is overridable. If you understand the implications, you can modify anything. If you don’t, the documentation teaches you through working code examples that produce real sound within minutes.

What MayaFlux Is Not

| Category | Point | Explanation |
| --- | --- | --- |
| Application software | Not a DAW | No timeline editor, MIDI piano roll, or plugin hosting. MayaFlux provides the computational substrate; you build your own sequencing logic. |
| Application software | Not a node‑based UI | No visual patching interface. Everything is C++ code—text is more precise for complex logic. Your patches are version‑controlled source files, not opaque binaries. |
| Application software | Not consumption‑oriented software | It isn't a replacement for Max or p5.js yet. Those tools excel at rapid prototyping and visual exploration; MayaFlux excels at computational depth and architectural control. |
| Target audience | Creative technologists hitting tool limits | If you've prototyped in Processing but need real‑time audio, mastered Max/MSP but want programmatic control, or built installations in openFrameworks and then watched Apple deprecate OpenGL, MayaFlux is built from frustration with those limitations. |
| Target audience | Visual artists & installation makers needing computational depth | If you've built generative visuals in Processing or TouchDesigner but want low‑level GPU control without OpenGL's deprecated patterns, or need truly synchronized audio‑visual processing, MayaFlux treats graphics with the same architectural rigor as audio DSP. |
| Target audience | Researchers needing genuine flexibility | Academic audio research shouldn't require fighting commercial tools to implement novel algorithms. MayaFlux provides direct buffer access, arbitrary processing rates, and cross‑domain coordination. |
| Target audience | Musicians & composers outgrowing presets | If you've exhausted existing tools and want instruments that match your musical imagination rather than vendor roadmaps, MayaFlux treats synthesis as data transformation you control at every sample. |
| Target audience | Developers escaping framework constraints | Game‑audio middleware, creative‑coding libraries, and visual‑programming environments all impose architectural boundaries. MayaFlux removes them while maintaining performance guarantees through lock‑free coordination and deterministic processing. |

Technical Highlights

Lock‑Free Atomic Coordination

  • Every node, buffer, and network uses C++20’s atomic_ref, compare‑exchange operations, and explicit memory ordering.
  • Modifications (adding oscillators, connecting filters, restructuring graphs) happen while audio plays with no glitches, dropouts, or mutex contention.
  • Maximum latency for any modification: one buffer cycle (typically 10‑20 ms).

No Locks in the Processing Path

  • Pending operations are queued atomically.
  • Channel coordination uses bitmask CAS patterns.
  • Cross‑domain transfers happen through processing handles with token validation.

Single‑Pass Processing

  • Computational units process exactly once per cycle, regardless of the number of consumers.
  • Example: a spectral transform feeding both granular synthesis and texture generation processes once; both domains receive synchronized output.
  • Atomic state flags prevent reprocessing; reference counting coordinates resets; channel bitmasks handle multi‑output scenarios.

Unified Rate Token

  • The traditional separation between audio‑rate and control‑rate is eliminated.
  • Rate is just a processing token (AUDIO_RATE, VISUAL_RATE, CUSTOM_RATE) that tells the engine the calling frequency.

Numbers Everywhere

  • Audio samples, pixel values, control parameters, and time are all numbers.
  • No conversion overhead, no semantic boundaries.
  • A visual analysis routine can directly modulate synthesis parameters.
  • A recursive audio filter can drive texture coordinates.
  • The same RootBuffer pattern works for RootAudioBuffer and RootGraphicsBuffer.

Vulkan Integration

  • Not an afterthought or “audio visualization”.
  • The graphics pipeline runs on identical infrastructure: lock‑free buffer coordination, token‑based domain composition, unified data flow.
  • Particle systems, geometry generation, shader bindings—all use the same Node/Buffer/Processor architecture as audio.
  • A polynomial node can drive vertex displacement as naturally as it drives waveshaping.
  • This is a computation substrate, not an audio library with graphics bolted on.

First‑Class Coroutine Support

auto sync_routine = [](Vruta::TaskScheduler& scheduler) -> Vruta::SoundRoutine {
    while (true) {
        // co_await‑based time‑compositional logic goes here
    }
};
  • Time becomes compositional material through coroutines, enabling sophisticated temporal coordination across audio, visual, and control domains.

Bottom Line

MayaFlux is infrastructure, not an end‑user application. It gives you the raw, high‑performance building blocks to create deeply integrated, cross‑domain multimedia systems without the artificial boundaries imposed by conventional tools. If you need ultimate flexibility, deterministic lock‑free processing, and a unified computational model for audio, graphics, and control, MayaFlux 0.1.0 is the foundation to start from.

Sample Coroutines

auto routine = [](Vruta::TaskScheduler& scheduler) -> Vruta::SoundRoutine {
    // Suspend for 20 ms worth of samples, then process.
    co_await Kriya::SampleDelay{ scheduler.seconds_to_samples(0.02) };
    process_audio_frame();

    // Wait on both a sample count and a frame count before touching the GPU.
    co_await Kriya::MultiRateDelay{
        .samples_to_wait = scheduler.seconds_to_samples(0.1),
        .frames_to_wait  = 6
    };
    bind_push_constants(some_audio_matrix);
};

Coroutines suspend on sample counts, buffer boundaries, or arbitrary predicates.
Temporal logic reads like the musical idea – no callback hell, no message‑passing complexity.

MayaFlux Features

  • Complete LLVM‑based JIT compilation – write C++ code, hit evaluate, hear/see results within one buffer cycle.
  • No separate compilation step, no application restart, no workflow interruption.
  • Full C++20 syntax, template metaprogramming, compile‑time evaluation.
  • Live coding doesn’t mean switching to a simpler language.

Processing Domains

Domains combine Node tokens (rate), Buffer tokens (backend), and Task tokens (coordination).

// Audio domain
Domain::AUDIO = Nodes::ProcessingToken::AUDIO_RATE +
                Buffers::ProcessingToken::AUDIO_BACKEND +
                Vruta::ProcessingToken::SAMPLE_ACCURATE;

// Graphics domain
Domain::GRAPHICS = Nodes::ProcessingToken::VISUAL_RATE +
                   Buffers::ProcessingToken::GRAPHICS_BACKEND +
                   Vruta::ProcessingToken::FRAME_ACCURATE;

// Custom user example
Domain::PARALLEL = Nodes::ProcessingToken::AUDIO_RATE +
                  Buffers::ProcessingToken::AUDIO_PARALLEL + // Executes on the GPU
                  Vruta::ProcessingToken::SAMPLE_ACCURATE;

Custom domains compose individual tokens for specialized requirements.
Cross‑modal coordination happens naturally through token compatibility rules enforced at registration, not during hot‑path execution.

Buffers & Processors

  • Buffers own processing chains.
  • Chains execute processors sequentially.
  • Processors transform data via mathematical expressions, logic operations, or custom functions.

Everything composes:

void compose() {
    auto sine        = vega.Sine(440.0);
    auto node_buffer = vega.NodeBuffer(0, 512, sine)[0] | Audio;

    auto distortion = vega.Polynomial([](double x) {
        return std::tanh(x * 2.0);
    });
    MayaFlux::create_processor(node_buffer, distortion);
}

The substrate doesn’t change – your access to it deepens.

Platform Support

  • Windows – MSVC / Clang
  • macOS – Clang
  • Linux – GCC / Clang

Distributed through:

  • GitHub releases
  • Launchpad PPA (Ubuntu/Debian)
  • COPR (Fedora/RHEL)
  • AUR (Arch)

Project Management

Weave – a command‑line tool that handles:

  • Automated dependency management
  • MayaFlux version acquisition & installation
  • C++ project generation, with an autogenerated CMake configuration that loads the MayaFlux library and all required includes

Audio Backend

  • RtAudio with:
    • WASAPI (Windows)
    • CoreAudio (macOS)
    • ALSA / PulseAudio / JACK (Linux)

Graphics Backend

  • Vulkan 1.3 – complete pipeline from initialization through dynamic rendering and command‑buffer management to swapchain presentation.
  • Currently supports:
    • 2D particle systems
    • Geometry networks
    • Shader bindings with node data via push constants and descriptors

Foundation for procedural animation, generative visuals, and computational geometry.

Live Coding

Lila JIT system built on LLVM 21+, supporting full C++ syntax: templates, constexpr evaluation, and incremental compilation.

Temporal Coordination

  • Complete coroutine infrastructure with:
    • Sample‑accurate scheduling
    • Frame‑accurate synchronization
    • Multi‑rate adaptation
    • Event‑driven execution

Documentation

  • Comprehensive tutorials from “load a file” to complete pipeline architectures.
  • Technical blog series covering lock‑free architecture and state‑coordination patterns.
  • Persona‑based onboarding (musician, visual artist, etc.) addressing mental‑model transitions from Pure Data, Max/MSP, SuperCollider, p5.js, openFrameworks, Processing.

Testing

  • 700+ component tests validating lock‑free patterns, buffer processing, node coordination, and graphics‑pipeline integration.

Example: Load & Process Audio

void compose() {
    auto sound   = vega.read("path/to/file.wav") | Audio;
    auto buffers = MayaFlux::get_last_created_container_buffers();

    auto poly = vega.Polynomial([](double x) { return x * x; });
    MayaFlux::create_processor(buffers[0], poly);
}

Example: Build Processing Chains

void compose() {
    auto sound   = vega.read("drums.wav") | Audio;
    auto buffers = MayaFlux::get_last_created_container_buffers();

    auto bitcrush = vega.Logic(LogicOperator::THRESHOLD, 0.0);
    auto crush_proc = MayaFlux::create_processor(buffers[0], bitcrush);
    crush_proc->set_modulation_type(LogicProcessor::ModulationType::REPLACE);

    auto clock = vega.Sine(4.0);
    auto freeze_logic = vega.Logic(LogicOperator::THRESHOLD, 0.0);
    freeze_logic->set_input_node(clock);
    auto freeze_proc = MayaFlux::create_processor(buffers[0], freeze_logic);
    freeze_proc->set_modulation_type(LogicProcessor::ModulationType::HOLD_ON_FALSE);
}

Example: Recursive Filters

auto string = vega.Polynomial(
    [](const std::deque<double>& history) {
        return 0.996 * (history[0] + history[1]) / 2.0;
    },
    PolynomialMode::RECURSIVE,
    100
);
string->set_initial_conditions(std::vector(100, vega.Random(-1.0, 1.0)));

Example: Audio‑Visual Coordination

auto control = vega.Phasor(0.15) | Audio;
control->enable_mock_process(true);

auto particles = vega.ParticleNetwork(
    600,
    glm::vec3(-2.0f, -1.5f, -0.5f),
    glm::vec3( 2.0f,  1.5f,  0.5f),
    ParticleNetwork::InitializationMode::GRID
) | Graphics;

particles->map_parameter("turbulence", control,
                         NodeNetwork::MappingMode::BROADCAST);

Future Development

  • Expanded graphics – move toward full 3D rendering, input handling, networking for distributed processing.
  • Hardware acceleration – CUDA and FPGA implementations.
  • WebAssembly builds – interactive web demos running actual MayaFlux C++ code in browsers.
  • Additional backends – JACK audio, multiple Vulkan devices, etc.

This release establishes the foundation; the roadmap builds on it.

MayaFlux – Creative Computing Framework

What is MayaFlux?

MayaFlux exists because the computational substrate has evolved while most creative tools remain stuck in 1980s‑era architectures. By shedding analog metaphors, disciplinary silos, and restrictive abstraction layers, MayaFlux opens the door to entirely new paradigms of creative expression.

“The substrate is ready. Build what you imagine.”

Core Capabilities

  • Extensions & Custom Back‑ends – Build interfaces for embedded systems or specialized hardware.
  • Default Automation – Ready‑made workflows for common tasks, with the ability to completely override them for bespoke requirements.
  • Architectural Customization – Start with simple patterns and scale up to full‑blown architectural tweaks when needed.

Educational Content

  • Video Walkthroughs – Step‑by‑step visual guides.
  • Interactive Examples – Hands‑on code you can modify in real time.
  • Pattern Libraries – Collections that demonstrate specific creative techniques.

Institutional Partnerships

We are actively exploring collaborations that can provide:

  • Funding for full‑time development.
  • Research on hardware integration.
  • Academic work on novel algorithms.

Getting Started

  1. Visit the docs & tutorials
  2. Installation: Takes only minutes.
  3. First working audio: Under five minutes by following the Sculpting Data tutorial.

The documentation meets you where you are—whether you’re a beginner or an advanced developer.

Additional Resources

  • Source Code: (link to repository)
  • License: GPL‑3.0 (open source, copyleft)
  • Contact: [mayafluxcollective@