[Paper] ParamExplorer: A framework for exploring parameters in generative art

Published: December 18, 2025 at 08:37 AM EST
4 min read
Source: arXiv - 2512.16529v1

Overview

The paper presents ParamExplorer, an interactive framework that helps artists and developers navigate the often‑overwhelming parameter spaces of generative‑art algorithms. By borrowing ideas from reinforcement learning and offering a plug‑in for p5.js, the system lets users discover aesthetically interesting configurations without endless manual trial‑and‑error.

Key Contributions

  • ParamExplorer framework: a modular, browser‑based tool that wraps any p5.js sketch and exposes its parameters to automated or human‑guided exploration.
  • Human‑in‑the‑loop feedback: UI components (sliders, rating buttons, sketchpad) let users steer the search toward preferred visual styles.
  • Automated agents: several exploration strategies (random search, Bayesian optimization, evolutionary algorithms, and a lightweight RL‑based policy) are implemented and benchmarked within the same environment.
  • Open‑source integration: the framework ships as an npm package, making it trivial to drop into existing generative‑art projects (a hypothetical usage sketch follows this list).
  • Empirical evaluation: quantitative metrics (coverage, diversity) and user studies demonstrate that guided agents find “interesting” outputs faster than naïve random sampling.
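
The summary does not reproduce the package's actual API, so the sketch below is purely hypothetical: the module name and the commented‑out ParamExplorer calls are invented for illustration, and only the standard p5.js instance‑mode boilerplate is real.

```typescript
// Hypothetical integration sketch -- the ParamExplorer calls below are
// invented for illustration; only the p5.js instance-mode code is standard.
import p5 from "p5";
// import { ParamExplorer } from "paramexplorer"; // hypothetical module name

new p5((p: p5) => {
  // Numeric knobs the framework would expose as search dimensions.
  const params = { noiseScale: 0.01, hue: 180, iterations: 50 };

  p.setup = () => {
    p.createCanvas(400, 400);
    // Hypothetical: hand the params object to the explorer so agents and
    // UI sliders can read and write it between frames.
    // new ParamExplorer(params).attach(p);
  };

  p.draw = () => {
    p.colorMode(p.HSB, 360, 100, 100);
    p.background(params.hue, 60, 95);
    // ...generative drawing driven by params.noiseScale and params.iterations
  };
});
```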

Methodology

  1. Parameter Exposure – The framework introspects a p5.js sketch to list all numeric inputs (e.g., colors, noise scales, iteration counts). Each parameter becomes a dimension in a search space.
  2. Agent Architecture – An abstract Agent class defines suggest() (propose a new parameter vector) and feedback() (receive a reward); a minimal sketch of this interface follows the list. Implementations include:
    • Random: uniform sampling.
    • Bayesian: Gaussian‑process surrogate model that predicts aesthetic scores.
    • Evolutionary: population of candidates mutated and selected based on user ratings.
    • RL‑based: a simple policy network trained with REINFORCE using reward signals from the user or an automated classifier (a hedged sketch also follows the list).
  3. Human‑in‑the‑Loop Feedback – After each generated image, the UI asks the user to rate it (e.g., 1‑5 stars). The rating is transformed into a scalar reward and fed back to the active agent, which updates its internal model.
  4. Evaluation Protocol – The authors ran two experiments: (a) a synthetic benchmark where “interesting” images are pre‑labeled, allowing objective measurement of coverage; (b) a user study with 30 artists who explored the same sketch using different agents. Metrics such as time‑to‑first‑high‑rating image and total unique high‑rating images were recorded.
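
Only the two method names suggest() and feedback() come from the paper; the Bounds type, the constructor, and the star‑to‑reward normalization in the sketch below are assumptions added for illustration, paired with the random‑search baseline from the benchmark.

```typescript
// Sketch of the Agent abstraction. Only the method names suggest() and
// feedback() come from the paper; everything else here is illustrative.

type ParamVector = number[];

interface Bounds {
  min: number;
  max: number;
}

abstract class Agent {
  constructor(protected space: Bounds[]) {}

  // Propose the next parameter vector to render.
  abstract suggest(): ParamVector;

  // Receive a scalar reward for the most recent suggestion.
  abstract feedback(reward: number): void;
}

// Baseline from the benchmark: uniform sampling that ignores feedback.
class RandomAgent extends Agent {
  suggest(): ParamVector {
    return this.space.map((b) => b.min + Math.random() * (b.max - b.min));
  }

  feedback(_reward: number): void {
    // Random search does not learn from rewards.
  }
}

// Step 3 turns a 1-5 star rating into a scalar reward; the exact mapping
// is not given in the paper, so a plain normalization to [0, 1] is assumed.
const starsToReward = (stars: number): number => (stars - 1) / 4;
```

A Bayesian, evolutionary, or RL agent implements the same two methods, which is what lets the framework benchmark every strategy inside one environment.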
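
The RL agent is described only as a simple policy network trained with REINFORCE. As a hedged sketch of the idea, the version below drops the network and learns the mean of a fixed‑variance Gaussian policy directly; the learning rate, noise scale, and running‑mean baseline are untuned assumptions, not the paper's configuration.

```typescript
// Hedged REINFORCE sketch: a fixed-variance Gaussian policy whose mean is
// learned directly (the paper uses a small policy network). All constants
// below are untuned, illustrative choices.
class ReinforceAgent extends Agent {
  private mu: number[];          // learned mean of the Gaussian policy
  private last: number[] = [];   // most recent sampled parameter vector
  private baseline = 0;          // running-mean reward baseline
  private readonly sigma = 0.1;  // exploration noise, as a fraction of each range
  private readonly lr = 0.001;

  constructor(space: Bounds[]) {
    super(space);
    this.mu = space.map((b) => (b.min + b.max) / 2); // start at the center
  }

  suggest(): ParamVector {
    // Sample each dimension from N(mu_i, (sigma * range_i)^2), clamped to bounds.
    this.last = this.mu.map((m, i) => {
      const { min, max } = this.space[i];
      const sample = m + this.gaussian() * this.sigma * (max - min);
      return Math.min(max, Math.max(min, sample));
    });
    return this.last;
  }

  feedback(reward: number): void {
    const advantage = reward - this.baseline;
    this.baseline += 0.1 * (reward - this.baseline);
    // REINFORCE update: mu += lr * advantage * d(log pi)/d(mu), where for a
    // Gaussian policy d(log pi)/d(mu) = (action - mu) / sd^2.
    this.mu = this.mu.map((m, i) => {
      const sd = this.sigma * (this.space[i].max - this.space[i].min);
      return m + (this.lr * advantage * (this.last[i] - m)) / (sd * sd);
    });
  }

  // Standard normal sample via the Box-Muller transform.
  private gaussian(): number {
    const u = Math.random() || 1e-9; // avoid log(0)
    return Math.sqrt(-2 * Math.log(u)) * Math.cos(2 * Math.PI * Math.random());
  }
}
```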

Results & Findings

  • Coverage Boost – Bayesian and evolutionary agents covered 2–3× more of the labeled “interesting” region than random search within the same budget of 200 evaluations.
  • Speed of Discovery – The RL‑based agent reached its first 4‑star rating after an average of 35 iterations, versus 78 iterations for random search.
  • User Preference – In the artist study, 73% of participants reported feeling “more in control” when using the evolutionary agent, citing the visible population dynamics as intuitive.
  • Computational Overhead – All agents run comfortably in the browser; the most expensive (Gaussian‑process) still updates in <150 ms per iteration on a typical laptop.

Practical Implications

  • Faster Prototyping – Developers can integrate ParamExplorer into their p5.js demos to let designers quickly surface compelling visual styles, cutting down on manual tweaking cycles.
  • Automated Content Generation – Studios building generative‑art pipelines (e.g., for game assets, NFTs, or UI backgrounds) can run an autonomous agent to harvest a diverse library of assets without constant human supervision.
  • Educational Tools – The visual feedback loop makes an excellent teaching aid for illustrating concepts like parameter sensitivity, optimization, and reinforcement learning in a creative context.
  • Extensibility – Because agents are plug‑and‑play, teams can experiment with more sophisticated models (e.g., deep RL, multi‑objective optimization) or tie the reward to downstream metrics such as user engagement or brand alignment; a sketch of such a plug‑in agent follows.
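
To make the plug‑and‑play point concrete, here is a sketch of a custom agent that reuses the Agent interface from the methodology section. It is a tiny evolutionary hill‑climber, not the paper's evolutionary implementation; its population size, mutation scale, and selection rule are invented for illustration.

```typescript
// Illustrative plug-in: a tiny evolutionary hill-climber implementing the
// same interface as the built-in agents. Population size and mutation
// scale are arbitrary choices, not the paper's values.
class HillClimbAgent extends Agent {
  private population: { params: ParamVector; reward: number }[] = [];
  private pending: ParamVector | null = null;

  suggest(): ParamVector {
    if (this.population.length < 5) {
      // Seed with uniform random candidates until the population fills up.
      this.pending = this.space.map((b) => b.min + Math.random() * (b.max - b.min));
    } else {
      // Mutate the best-rated candidate, clamped to each parameter's bounds.
      const best = this.population[0];
      this.pending = best.params.map((v, i) => {
        const { min, max } = this.space[i];
        const mutated = v + (Math.random() - 0.5) * 0.1 * (max - min);
        return Math.min(max, Math.max(min, mutated));
      });
    }
    return this.pending;
  }

  feedback(reward: number): void {
    if (this.pending === null) return;
    this.population.push({ params: this.pending, reward });
    this.pending = null;
    // Keep only the five fittest candidates, best first.
    this.population.sort((a, b) => b.reward - a.reward);
    this.population = this.population.slice(0, 5);
  }
}
```

Swapping the reward source is equally local: feed feedback() an engagement score or a brand‑alignment metric instead of a star rating, and the same agent optimizes for it.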

Limitations & Future Work

  • Scalability to Very High Dimensions – The current implementation assumes a modest number of parameters (<20). Exploring hundreds of knobs may require dimensionality‑reduction or hierarchical search strategies.
  • Subjectivity of “Interesting” – Rewards are based on individual user ratings, which can be noisy and inconsistent; the paper suggests aggregating multiple users or learning a shared aesthetic model as a remedy.
  • Limited Domain Evaluation – Experiments focus on a single p5.js sketch; broader validation across diverse generative techniques (e.g., shader‑based, procedural modeling) is left for future studies.
  • Real‑Time Constraints – While browser‑friendly, the framework does not yet support ultra‑low‑latency scenarios (e.g., live coding performances) where immediate feedback is critical.

ParamExplorer opens a practical bridge between creative intuition and algorithmic search, giving developers a reusable toolbox to tame the combinatorial explosion of generative‑art parameters.

Authors

  • Julien Gachadoat
  • Guillaume Lagarde

Paper Information

  • arXiv ID: 2512.16529v1
  • Categories: cs.AI, cs.HC, cs.SE
  • Published: December 18, 2025