[Paper] Adaptable TeaStore with Energy Consumption Awareness: A Case Study

Published: December 29, 2025

Source: arXiv - 2512.23498v1

Overview

The paper presents EnCoMSAS, a lightweight monitoring tool that measures the energy consumption of self‑adaptive cloud applications at runtime. Using the Adaptable TeaStore microservice benchmark, the authors show how real‑time energy data can be fed into adaptation decisions without adding a significant energy overhead.

Key Contributions

  • EnCoMSAS prototype: an open‑source, language‑agnostic monitor that captures per‑service energy usage on distributed cloud nodes.
  • Integration with self‑adaptive loops: demonstrates how energy metrics can be incorporated as a first‑class objective in the MAPE‑K (Monitor‑Analyze‑Plan‑Execute‑Knowledge) feedback cycle.
  • Empirical evaluation on a realistic microservice benchmark (Adaptable TeaStore) deployed on the Grid5000 testbed, focusing on the recommender service under varying workloads.
  • Evidence of low monitoring overhead: the tool’s own energy consumption is modest compared with the total footprint of the whole TeaStore application suite.
  • Insights on environment‑dependent energy behavior: shows that algorithmic complexity alone does not fully explain energy usage; deployment characteristics (e.g., VM sizing, CPU frequency) matter as well.
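
The MAPE-K integration described above can be sketched as a single loop iteration. This is a minimal illustration, not EnCoMSAS's actual code: the metric names, thresholds, and action strings are all hypothetical placeholders, chosen only to show energy treated as a first-class objective alongside latency.

```python
from dataclasses import dataclass

# Hypothetical knowledge base: an energy budget sits next to the latency SLO.
@dataclass
class Knowledge:
    latency_slo_ms: float = 250.0   # latency service-level objective
    energy_budget_j: float = 5.0    # per-request energy budget (joules)

def analyze(metrics: dict, k: Knowledge) -> dict:
    """Analyze: compare monitored metrics against the knowledge base."""
    return {
        "latency_violation": metrics["p95_latency_ms"] > k.latency_slo_ms,
        "energy_violation": metrics["energy_per_req_j"] > k.energy_budget_j,
    }

def plan(symptoms: dict) -> str:
    """Plan: pick an adaptation action; energy is weighed alongside latency."""
    if symptoms["latency_violation"]:
        return "switch_to_faster_algorithm"
    if symptoms["energy_violation"]:
        return "switch_to_lower_energy_algorithm"
    return "no_op"

# One iteration: Monitor (here a stubbed sample) -> Analyze -> Plan.
sample = {"p95_latency_ms": 120.0, "energy_per_req_j": 9.3}
action = plan(analyze(sample, Knowledge()))
```

In a real loop the Execute step would apply `action` to the running service and the Knowledge base would be updated from EnCoMSAS readings rather than a stubbed sample.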

Methodology

  1. Tool design – EnCoMSAS hooks into the operating system’s power‑capping interfaces (e.g., RAPL on Intel CPUs) and aggregates per‑process counters, exposing them via a REST API.
  2. Instrumentation – The Adaptable TeaStore’s recommender microservice was instrumented to call EnCoMSAS before and after each recommendation request, logging the energy delta.
  3. Experimental setup
    • Platform: Grid5000 nodes (dual‑socket Xeon, 2 × 8 cores, 128 GB RAM).
    • Workloads: Synthetic user request streams ranging from low (10 req/s) to high (200 req/s) intensity, mimicking real‑world traffic spikes.
    • Adaptation scenarios: The recommender could switch between three algorithms (simple popularity, collaborative filtering, and a deep‑learning model) based on QoS and energy goals.
  4. Data collection – Energy, CPU utilization, latency, and throughput were recorded for each run, allowing correlation analysis between CPU usage and measured joules.
  5. Impact assessment – The additional energy cost of running EnCoMSAS itself was measured and compared to the total energy consumption of all TeaStore microservices.
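
Steps 1 and 2 rely on reading a cumulative energy counter before and after each request. The sketch below shows how such a reading works on Linux through the standard powercap/RAPL sysfs interface; the paths are the usual intel-rapl layout, but the `measure()` helper and its wrap-around handling are illustrative, not EnCoMSAS's actual implementation.

```python
from pathlib import Path

# Package-level RAPL domain exposed by the Linux powercap framework.
PKG = Path("/sys/class/powercap/intel-rapl:0")

def read_energy_uj() -> int:
    """Cumulative package energy in microjoules (reading may require root)."""
    return int((PKG / "energy_uj").read_text())

def energy_delta_uj(start: int, end: int, max_range: int) -> int:
    """Delta between two counter reads, handling counter wrap-around."""
    return end - start if end >= start else end + (max_range - start)

def measure(fn, *args):
    """Run fn and return (result, energy spent in microjoules)."""
    max_range = int((PKG / "max_energy_range_uj").read_text())
    start = read_energy_uj()
    result = fn(*args)
    return result, energy_delta_uj(start, read_energy_uj(), max_range)
```

The wrap-around check matters in practice: `energy_uj` is a counter that rolls over at `max_energy_range_uj`, so a naive subtraction can yield a negative delta on long-running services.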

Results & Findings

  • Energy measurement accuracy: Strong linear correlation (R² ≈ 0.92) between CPU utilization and joules reported by EnCoMSAS, confirming reliable readings.
  • Algorithmic impact: The deep‑learning recommender consumed markedly more energy than the simple popularity algorithm under identical load.
  • Environment influence: The same algorithm on a higher‑frequency VM used ~15 % more energy than on a lower‑frequency VM, highlighting the role of hardware settings.
  • Monitoring overhead: EnCoMSAS added only ≈ 4 % extra energy to the whole TeaStore microservice suite, a negligible cost for the gained visibility.
  • Adaptation benefit: When the adaptation loop used EnCoMSAS data to switch to the lower‑energy algorithm during load spikes, overall system energy dropped by ≈ 12 % without violating latency SLAs.
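
The adaptation-benefit finding rests on a rule-based switch between the three recommender variants. A minimal sketch of such a policy follows; the per-request energy and latency profiles are made-up placeholders (not measurements from the paper), and the spike threshold is an assumed tuning knob.

```python
# Hypothetical offline-profiled cost of each recommender variant.
PROFILES = {
    "popularity":    {"energy_j": 0.5, "latency_ms": 15},
    "collaborative": {"energy_j": 1.8, "latency_ms": 40},
    "deep_learning": {"energy_j": 4.2, "latency_ms": 85},
}

def choose_algorithm(load_req_s: float, latency_slo_ms: float,
                     spike_threshold: float = 150.0) -> str:
    """During load spikes, pick the cheapest variant that meets the latency
    SLO; otherwise keep the highest-quality (deep-learning) recommender."""
    if load_req_s < spike_threshold:
        return "deep_learning"
    feasible = [(name, p) for name, p in PROFILES.items()
                if p["latency_ms"] <= latency_slo_ms]
    return min(feasible, key=lambda item: item[1]["energy_j"])[0]
```

Under this rule the expensive model is used only while headroom exists, which mirrors how the paper's loop traded recommendation quality for energy during spikes without breaching latency SLAs.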

Practical Implications

  • Energy‑aware autoscaling: Cloud operators can plug EnCoMSAS into Kubernetes operators or serverless platforms to make scaling decisions that balance performance and power draw.
  • Green DevOps pipelines: CI/CD tools can incorporate EnCoMSAS metrics to flag energy‑inefficient code paths before production rollout.
  • SLA‑enhanced microservices: Service owners can expose “energy budget” as part of their contract, enabling clients to request low‑power modes during off‑peak hours.
  • Hardware‑conscious deployment: The findings encourage teams to profile services on different VM types or bare‑metal configurations, selecting the most energy‑efficient combo for a given workload.
  • Open‑source adoption: Since EnCoMSAS is language‑agnostic and uses standard power‑capping interfaces, it can be quickly integrated into existing observability stacks (Prometheus, OpenTelemetry).
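
For the observability-stack integration mentioned above, per-service energy counters only need to be rendered in the Prometheus text exposition format. The sketch below does this with the standard library alone; the metric name `encomsas_energy_joules_total` and the port are hypothetical choices, not part of EnCoMSAS.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

def render_metrics(joules_by_service: dict) -> str:
    """Render energy counters in the Prometheus text exposition format."""
    lines = [
        "# HELP encomsas_energy_joules_total Cumulative energy per service.",
        "# TYPE encomsas_energy_joules_total counter",
    ]
    for service, joules in sorted(joules_by_service.items()):
        lines.append(
            f'encomsas_energy_joules_total{{service="{service}"}} {joules}'
        )
    return "\n".join(lines) + "\n"

class MetricsHandler(BaseHTTPRequestHandler):
    energy = {"recommender": 0.0}  # updated elsewhere by the monitor

    def do_GET(self):
        body = render_metrics(self.energy).encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain; version=0.0.4")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# To serve: HTTPServer(("", 9400), MetricsHandler).serve_forever()
```

Anything scraping that endpoint (Prometheus, an OpenTelemetry collector) can then alert on energy the same way it alerts on latency or error rates.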

Limitations & Future Work

  • Hardware scope: Experiments were limited to Intel Xeon CPUs with RAPL; ARM or GPU‑heavy workloads need separate validation.
  • Single‑service focus: Only the recommender microservice was instrumented; scaling the approach to a full stack of interdependent services may reveal hidden coordination challenges.
  • Static workload patterns: Real‑world traffic exhibits more complex temporal dynamics (bursts, diurnal cycles); future studies should test EnCoMSAS under such stochastic loads.
  • Energy‑aware adaptation policies: The paper used a simple rule‑based switch; exploring reinforcement‑learning or multi‑objective optimization could yield richer energy savings.

Bottom line: EnCoMSAS demonstrates that fine‑grained, low‑overhead energy monitoring is feasible for self‑adaptive cloud applications, opening the door for developers and operators to embed sustainability directly into runtime decision‑making.

Authors

  • Henrique De Medeiros
  • Denisse Muñante
  • Sophie Chabridon
  • César Perdigão Batista
  • Denis Conan

Paper Information

  • arXiv ID: 2512.23498v1
  • Categories: cs.SE
  • Published: December 29, 2025