Release Note: Methodox Threads (v0.7)
Source: Dev.to
Overview
Version 0.7 introduces full Gen‑AI generation capabilities, including per‑document prompt‑driven content generation, provider configuration, OpenAI integration, and asynchronous multi‑editor execution with UI‑level busy indicators. This release establishes the foundation for an extensible, multi‑provider LLM workflow while maintaining the existing document layout and editing model.
New Features in v0.7
Configurable AI Provider Framework
A new Configure… dialog provides a unified interface for system‑level and provider‑specific settings.
- System Tab – edit the global System Prompt used for all generations.
- OpenAI Tab – configure API key (masked) and optional custom endpoint.
  - Preset model list: gpt-4o-mini, gpt-4o, o3-mini
  - Other → reveals a custom model name field
Additional capabilities:
- Support for model overrides when presets become outdated.
- Automatic load/save of configuration in a user‑specific app directory.
- Placeholder tabs for future providers (Gemini, DeepSeek, Ollama, Grok).
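The settings described above can be sketched as a small data model. This is an illustrative Python sketch, not the app's actual implementation; the class and field names are assumptions, and only the preset model names come from the dialog itself.

```python
from dataclasses import dataclass

# Preset list shown in the OpenAI tab (from the dialog).
PRESET_MODELS = ["gpt-4o-mini", "gpt-4o", "o3-mini"]

@dataclass
class OpenAISettings:
    api_key: str = ""          # stored masked in the UI
    endpoint: str = ""         # empty -> default OpenAI endpoint
    model: str = "gpt-4o-mini"
    custom_model: str = ""     # filled when "Other" is selected

    @property
    def effective_model(self) -> str:
        # A custom entry overrides the preset, which covers models
        # released after the preset list became outdated.
        return self.custom_model or self.model
```

The override pattern keeps the preset list convenient without making it a hard limit.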
Gen‑AI Generation Workflow
Each document now supports prompt‑based content generation:
- Set a Prompt on any document.
- Choose Edit → Generate to trigger generation for the focused document.
Generation combines the global System Prompt with the document's own Prompt.
Per‑Editor Async Generation
- Editors generate independently in parallel.
- Editors become temporarily read‑only during generation.
- A semi‑transparent overlay displays Generating… with an indeterminate progress bar.
- Sibling/Child creation buttons remain active.
- Generated text is written directly into the document's Content field.
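The per-editor behavior above can be sketched with asyncio. This is a minimal illustration under assumed names (`EditorPane`, `generate`); the real editor's types and UI wiring will differ, but the shape — lock the pane, await the provider, always unlock — is the point.

```python
import asyncio

class EditorPane:
    """Stand-in for one document editor (illustrative)."""
    def __init__(self, name: str):
        self.name = name
        self.read_only = False  # drives the "Generating…" overlay
        self.content = ""

    async def generate(self, produce):
        self.read_only = True   # lock editing while the provider runs
        try:
            self.content = await produce(self.name)
        finally:
            self.read_only = False  # always restore editability

async def main():
    async def fake_llm(prompt: str) -> str:  # stands in for a provider call
        await asyncio.sleep(0.01)
        return f"generated for {prompt}"

    panes = [EditorPane("a"), EditorPane("b")]
    # Each pane generates independently; the event loop keeps the UI live.
    await asyncio.gather(*(p.generate(fake_llm) for p in panes))
    return panes
```

Running the panes through `asyncio.gather` is what lets several editors generate in parallel while sibling/child creation stays responsive.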
OpenAI Integration (First Provider Implementation)
A new abstraction layer encapsulates provider calls. Version 0.7 includes the first concrete backend:
- OpenAI Chat Completion Backend
- Uses the official OpenAI SDK.
- Supports both default and custom endpoints.
- Converts internal document structures into Chat API messages.
- Returns full assistant text as document content.
This design enables drop‑in integration of additional providers in future versions.
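The abstraction layer might look like the following sketch. The class names are assumptions, not the release's actual API; the OpenAI call itself uses the official SDK's `chat.completions.create`, mapping the global and document prompts to `system` and `user` messages as described above.

```python
from abc import ABC, abstractmethod

class GenerationBackend(ABC):
    """Provider-agnostic interface (illustrative names)."""
    @abstractmethod
    def generate(self, system_prompt: str, document_prompt: str) -> str: ...

class OpenAIChatBackend(GenerationBackend):
    def __init__(self, api_key: str, model: str, base_url: str = ""):
        self.api_key = api_key
        self.model = model
        self.base_url = base_url or None  # None -> default endpoint

    def generate(self, system_prompt: str, document_prompt: str) -> str:
        # Lazy import so the rest of the app loads without the SDK.
        from openai import OpenAI
        client = OpenAI(api_key=self.api_key, base_url=self.base_url)
        resp = client.chat.completions.create(
            model=self.model,
            messages=[
                {"role": "system", "content": system_prompt},
                {"role": "user", "content": document_prompt},
            ],
        )
        # Return the full assistant text as the document content.
        return resp.choices[0].message.content
```

A future Gemini or Ollama backend would subclass the same interface, which is what makes additional providers drop-in.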
Configuration Persistence
All provider and system settings are automatically stored as JSON in the user‑local app directory:
- Loaded when opening the Configure dialog.
- Saved on dialog close.
- Ensures a persistent environment across editor sessions.
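The load-on-open / save-on-close cycle reduces to plain JSON round-tripping. A minimal sketch, assuming a dotfile-style location (the real user-local app directory is OS-specific and may differ):

```python
import json
from pathlib import Path

def config_path() -> Path:
    # Illustrative location; the actual app directory may differ per OS.
    return Path.home() / ".methodox-threads" / "config.json"

def save_config(settings: dict, path: Path) -> None:
    # Called on dialog close.
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(settings, indent=2))

def load_config(path: Path) -> dict:
    # Called when opening the Configure dialog.
    if not path.exists():
        return {}  # first run: fall back to defaults
    return json.loads(path.read_text())
```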
Limitations
- No document deletion or rearrangement.
- Markdown preview remains basic.
- Only OpenAI is implemented; other providers are placeholders.
- Generation does not yet stream partial output.