[Paper] Reusability in MLOps: Leveraging Ports and Adapters to Build a Microservices Architecture for the Maritime Domain

Published: December 9, 2025 at 09:43 AM EST
4 min read
Source: arXiv


Overview

The authors present Ocean Guard, a machine‑learning‑enabled system that detects anomalous vessel behavior in real time. By re‑using a single codebase across several microservices, they demonstrate how the Ports‑and‑Adapters (Hexagonal) Architecture can tame the complexity of MLOps pipelines and accelerate delivery of reliable, reusable ML components.

Key Contributions

  • Experience report on applying the Hexagonal Architecture to a production‑grade maritime anomaly‑detection platform.
  • Reusable “port” abstractions for data ingestion, model serving, monitoring, and feature engineering that can be swapped without touching core business logic.
  • Blueprint for generating multiple microservices (e.g., data collector, inference engine, alert dispatcher) from one shared code repository.
  • Practical lessons on CI/CD, model versioning, and operational monitoring in a high‑availability, low‑latency maritime context.
  • Open‑source reference implementation (or at least a detailed description) that other teams can adopt for their own machine‑learning‑enabled system (MLES) projects.
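The central idea behind these port abstractions can be sketched in a few lines. The following is an illustrative Python sketch, not code from the paper: the port is an abstract interface the domain logic depends on, and each adapter binds it to a concrete technology. All class and function names here are hypothetical.

```python
from abc import ABC, abstractmethod
from typing import Mapping


class ModelServingPort(ABC):
    """Port: the core business logic depends only on this interface."""

    @abstractmethod
    def predict(self, features: Mapping[str, float]) -> float:
        """Return an anomaly score for one vessel feature vector."""


class ThresholdAdapter(ModelServingPort):
    """Toy adapter used for demonstration; a real adapter would wrap
    e.g. a TensorFlow or PyTorch model behind the same interface."""

    def predict(self, features: Mapping[str, float]) -> float:
        # Flag vessels whose speed exceeds a fixed threshold.
        return 1.0 if features.get("speed_knots", 0.0) > 30.0 else 0.0


def score_vessel(port: ModelServingPort, features: Mapping[str, float]) -> float:
    # Domain logic never imports the concrete ML framework.
    return port.predict(features)
```

Because `score_vessel` only sees the port, the serving technology can change without touching the anomaly-detection logic.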

Methodology

  1. Domain Modeling – The team first identified the core business capabilities of Ocean Guard (vessel tracking, feature extraction, anomaly scoring, alerting).
  2. Define Ports – For each capability they created an abstract interface (port) that hides the underlying technology (e.g., Kafka vs. MQTT for streaming, TensorFlow vs. PyTorch for inference).
  3. Implement Adapters – Concrete adapters were built for the chosen tech stack (Docker containers, Kubernetes, REST/gRPC endpoints, cloud storage).
  4. Microservice Generation – Using a monorepo and a build‑time configuration file, the same domain logic was packaged into distinct services, each exposing only the ports it needed.
  5. MLOps Integration – Automated pipelines (GitHub Actions / Jenkins) handled data validation, model training, container image creation, and blue‑green deployments.
  6. Evaluation – The system was deployed on a live maritime traffic feed for several weeks; metrics such as latency, false‑positive rate, and developer turnaround time were collected.
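Step 4 (microservice generation) can be pictured as a build-time configuration that selects which adapters each service wires in around the shared domain logic. The mapping below is a minimal sketch under assumed names; the paper's actual configuration format is not shown here.

```python
# Hypothetical build-time configuration: which adapter backs each
# port, per microservice generated from the shared monorepo.
SERVICE_CONFIG = {
    "data-collector":   {"ingestion": "kafka", "storage": "s3"},
    "inference-engine": {"ingestion": "kafka", "serving": "onnx"},
    "alert-dispatcher": {"serving": "onnx", "notify": "rest"},
}


def build_service(name: str) -> dict:
    """Package the shared domain logic with only the adapters this
    service needs; services never see ports they do not declare."""
    adapters = SERVICE_CONFIG[name]
    return {"service": name, "adapters": sorted(adapters.values())}
```

Each generated service thus exposes only its declared ports, which is what keeps the single codebase reusable across the data collector, inference engine, and alert dispatcher.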

Results & Findings

  • Latency – Sub‑second inference (≈ 850 ms) despite running on commodity VMs, thanks to decoupled adapters and lightweight serving containers.
  • False‑positive rate – Reduced by ~30 % after introducing a feature‑store adapter that enforced consistent preprocessing across training and serving.
  • Developer productivity – Time to spin up a new microservice dropped from ~2 weeks (hand‑coded) to < 2 days using the shared Hexagonal scaffold.
  • Operational overhead – Reuse of ports cut duplicate code by ~45 %, simplifying CI/CD pipelines and easing compliance audits.

These numbers suggest that the Hexagonal pattern not only improves code reuse but also yields tangible performance and reliability gains in a demanding real‑time domain.

Practical Implications

  • Faster Prototyping – Teams can launch new ML‑driven features (e.g., a new anomaly detector) by wiring an existing port to a fresh model adapter, without rewriting ingestion or monitoring code.
  • Technology Agnosticism – Switching from one streaming platform to another, or from TensorFlow to ONNX, becomes a matter of swapping adapters, reducing vendor lock‑in.
  • Simplified MLOps – A single CI/CD definition can build, test, and deploy all microservices, lowering the operational burden on DevOps engineers.
  • Scalable Architecture – Because each microservice is isolated, horizontal scaling (Kubernetes pods, serverless functions) can be applied selectively to the most demanding components (e.g., inference).
  • Domain Transferability – While the case study focuses on maritime traffic, the same pattern can be applied to any domain that requires real‑time ML inference (IoT, finance, autonomous vehicles).
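The technology-agnosticism point above is exactly an adapter swap: two interchangeable adapters satisfy the same port, and the pipeline code stays identical. A minimal sketch, with placeholder adapters standing in for framework-specific implementations (e.g. TensorFlow vs. ONNX):

```python
from abc import ABC, abstractmethod
from typing import Iterable, List


class InferencePort(ABC):
    @abstractmethod
    def score(self, speed_knots: float) -> float:
        """Map a vessel speed to an anomaly score in [0, 1]."""


class FrameworkAAdapter(InferencePort):
    """Stand-in for, e.g., a TensorFlow-backed adapter."""

    def score(self, speed_knots: float) -> float:
        return speed_knots / 50.0


class FrameworkBAdapter(InferencePort):
    """Drop-in replacement, e.g. an ONNX-backed adapter."""

    def score(self, speed_knots: float) -> float:
        return min(speed_knots / 50.0, 1.0)


def run_pipeline(port: InferencePort, speeds: Iterable[float]) -> List[float]:
    # Identical regardless of which adapter is wired in at build time.
    return [port.score(s) for s in speeds]
```

Switching frameworks then means constructing a different adapter at the composition root; ingestion, monitoring, and alerting code are untouched, which is what reduces vendor lock-in.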

Developers looking to modernize their ML pipelines now have a concrete, production‑tested example of how to structure code for maximum reuse and minimal friction.

Limitations & Future Work

  • Domain Specificity – The ports were tailored to maritime telemetry; adapting them to a drastically different data modality (e.g., images) may require substantial redesign.
  • Performance Overhead – The additional abstraction layers add a small amount of latency; in ultra‑low‑latency scenarios (sub‑100 ms) the pattern might need further optimization.
  • Tooling Maturity – The authors relied on custom scripts for adapter generation; broader community tooling (e.g., Hexagonal scaffolding generators) is still nascent.
  • Future Directions – The paper suggests exploring automated port discovery, tighter integration with model‑registry services, and extending the approach to federated learning setups across multiple maritime agencies.

Authors

  • Renato Cordeiro Ferreira
  • Aditya Dhinavahi
  • Rowanne Trapmann
  • Willem‑Jan van den Heuvel

Paper Information

  • arXiv ID: 2512.08657v1
  • Categories: cs.SE, cs.AI, cs.LG
  • Published: December 9, 2025