From feature engineering to deployment: a local-first MLOps workflow with Skyulf

Published: December 23, 2025 at 04:53 AM EST
Source: Dev.to

Who Skyulf is for

  • Teams working with sensitive/regulated data
  • People who want a local‑first workflow (laptop → server → on‑prem)
  • ML engineers and data scientists who prefer one integrated workflow over a pile of disconnected components
  • Anyone iterating quickly on models and wanting workflows that stay visible, repeatable, and easy to review

What you can do with Skyulf

  • Ingest + explore data
  • Feature engineering (visually, as a pipeline)
  • Training (including background jobs)
  • Deployment (self‑hosted inference service)
  • Verification with an API testing panel (send JSON, view response/latency)

The end-to-end flow: pipeline → run → deploy → test API.
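The verification step at the end of that flow (send JSON, view response and latency) can be sketched as a tiny script. Skyulf's API testing panel does this in the UI; the `predict` stub and payload shape below are hypothetical stand-ins for a deployed inference endpoint, not Skyulf's actual API:

```python
import json
import time

def predict(payload: dict) -> dict:
    # Hypothetical stand-in for a self-hosted inference service.
    # In practice you would POST the JSON payload to the deployed endpoint;
    # here a toy "model" sums the features so the script runs locally.
    features = payload["features"]
    return {"prediction": sum(features)}

payload = {"features": [1.0, 2.5, 3.0]}

# Round-trip through JSON, as a real HTTP call would, and time the request.
start = time.perf_counter()
response = predict(json.loads(json.dumps(payload)))
latency_ms = (time.perf_counter() - start) * 1000

print(response["prediction"])  # 6.5
print(f"latency: {latency_ms:.2f} ms")
```

Replacing the stub with an HTTP POST to your self-hosted service gives you the same response/latency view outside the UI, which is handy for smoke tests in CI.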

Why “visual pipelines” matter (beyond aesthetics)

  • Explainable – anyone can see what happens between raw data and model
  • Repeatable – less tribal knowledge, fewer hidden scripts
  • Reviewable – pipelines become artifacts you can share and iterate on

What’s next

  • More example pipelines (tabular, time‑series, text/embeddings)
  • More models
  • Better packaging for “one command” self‑hosting
  • Integrations / export paths for teams already using other tools

Getting started

  • GitHub repo:
  • Website:

Install the Python engine only

If you only want the Python engine (no UI) to integrate Skyulf into your own application or scripts:

pip install skyulf-core

Contribute

If you try it and have feedback, please open an issue, especially about onboarding and documentation clarity.
