Vision-Based Object Tracking with Dual PID Control

Published: February 21, 2026 at 10:01 AM EST
4 min read
Source: Dev.to

Introduction

In robotics, feedback is what makes a system intelligent. Unlike open‑loop systems, closed‑loop systems continuously measure their output and correct themselves in real time.

In this project I built a vision‑based closed‑loop control system using a webcam and dual PID controllers. The system detects a colored object, aligns itself horizontally, and maintains a safe distance using only camera input. This setup mimics the core logic used in autonomous vehicles, drones, and mobile robots.

Problem Statement

The objective was to design a system that can track a target object using only camera input and respond to it in real time. The system should:

  • Detect a colored object from a live video stream.
  • Align itself horizontally with the object.
  • Maintain a safe and consistent distance.
  • Continuously adjust its motion using feedback.

The main challenge was to combine computer vision for perception with PID control for decision‑making into a single closed‑loop system.

Vision Pipeline

The perception layer extracts meaningful information from the camera feed.

  1. Color space conversion – Each frame is converted from RGB to HSV. HSV (Hue, Saturation, Value) separates color information (hue) from brightness, making segmentation more robust to lighting changes.
  2. Color thresholding – A threshold isolates the blue object, producing a binary mask (object = white, background = black).
  3. Contour detection – Contours are found on the binary mask; the largest contour is assumed to be the target.
  4. Feature extraction – From the selected contour we obtain:
    • Horizontal center of the object: x_object
    • Bounding‑box height: h_object

These measurements are used to compute the control errors:

$$\text{Error}_{\text{steer}} = x_{\text{object}} - x_{\text{frame\_center}}$$

The bounding‑box height serves as an approximate distance metric: the larger the height, the closer the object.
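The feature-extraction and error-computation steps above can be sketched in a few lines. This is a NumPy-only stand-in: a synthetic binary mask replaces the real HSV-thresholded frame, and a bounding box over non-zero pixels replaces OpenCV contour detection (in the actual pipeline, `cv2.cvtColor`, `cv2.inRange`, and `cv2.findContours` would produce the mask and the largest contour). The frame size and desired height are illustrative values, not the project's settings.

```python
import numpy as np

def extract_features(mask: np.ndarray):
    """Return (x_object, h_object) from a binary mask, or None if empty.

    Stand-in for contour detection: uses the bounding box of all
    non-zero pixels instead of the largest cv2 contour.
    """
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    x_object = (xs.min() + xs.max()) / 2.0   # horizontal center of the object
    h_object = int(ys.max() - ys.min() + 1)  # bounding-box height
    return x_object, h_object

def compute_errors(x_object, h_object, frame_width, h_desired):
    """Steering error: offset from frame center. Distance error: height gap."""
    err_steer = x_object - frame_width / 2.0
    err_dist = h_desired - h_object
    return err_steer, err_dist

# Synthetic 480x640 mask with one white blob (the "detected object")
mask = np.zeros((480, 640), dtype=np.uint8)
mask[100:220, 400:460] = 255  # height 120 px, centered at x = 429.5

x_obj, h_obj = extract_features(mask)
err_steer, err_dist = compute_errors(x_obj, h_obj, frame_width=640, h_desired=150)
print(err_steer, err_dist)  # 109.5 30 -> object is right of center and too far
```

A positive steering error means the object sits right of the frame center; a positive distance error means the object appears smaller than desired, i.e. the robot should move forward.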

Control System Design

Two PID controllers operate in parallel:

| Controller | Measured Variable | Error Definition | Goal |
| --- | --- | --- | --- |
| Steering | Horizontal position ($x_{\text{object}}$) | $\text{Error}_{\text{steer}} = x_{\text{object}} - x_{\text{frame\_center}}$ | Keep the object centered horizontally |
| Distance | Object height ($h_{\text{object}}$) | $\text{Error}_{\text{dist}} = h_{\text{desired}} - h_{\text{object}}$ | Maintain a safe distance (desired height) |

Each PID controller consists of three terms:

  • Proportional (P) – Reacts to the current error.
  • Integral (I) – Corrects accumulated past error.
  • Derivative (D) – Dampens rapid changes, reducing overshoot.

By combining the steering and distance controllers, the robot can simultaneously align with the target and keep a constant gap.
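A minimal discrete-time PID controller matching the three terms above, instantiated twice for the dual-controller setup, might look like this (the gain and timing values are illustrative, not the ones used in the project):

```python
class PID:
    """Minimal discrete PID: output = Kp*e + Ki*sum(e*dt) + Kd*de/dt."""

    def __init__(self, kp: float, ki: float, kd: float):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, error: float, dt: float) -> float:
        self.integral += error * dt                      # I: accumulated past error
        derivative = 0.0 if self.prev_error is None \
            else (error - self.prev_error) / dt          # D: rate of change
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Two independent controllers, as in the dual-PID design
steer_pid = PID(kp=0.5, ki=0.01, kd=0.1)
dist_pid = PID(kp=0.8, ki=0.02, kd=0.2)

steering = steer_pid.update(error=109.5, dt=1 / 30)  # pixel offset -> steering command
speed = dist_pid.update(error=30.0, dt=1 / 30)       # height gap -> forward speed
```

Each controller keeps its own integral and previous-error state, so the two loops never interfere with each other even though they run on the same frames.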

Closed‑Loop Architecture

The system runs as a continuous feedback loop:

  1. Capture a frame from the camera.
  2. Process the frame (color conversion → threshold → contour → feature extraction).
  3. Compute steering and distance errors.
  4. Feed the errors to the respective PID controllers.
  5. Apply the PID outputs to adjust steering and forward speed.
  6. Repeat for the next frame.

Because the output directly influences the next measurement, the loop remains closed, enabling real‑time correction and stability.
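As a runnable illustration of this loop, here is a sketch in which the camera and robot are replaced by a toy simulation: the measured pixel offset shrinks in proportion to the applied steering command. The gains, frame rate, and plant response are all invented for the demo, but the loop structure (measure, compute error, update PID, apply output, repeat) mirrors steps 1-6 above.

```python
class PID:  # same minimal controller sketched earlier
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral, self.prev_error = 0.0, None

    def update(self, error, dt):
        self.integral += error * dt
        d = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * d

DT = 1 / 30                               # ~30 fps camera
x_object, frame_center = 520.0, 320.0     # simulated measurement (pixels)
steer_pid = PID(kp=0.4, ki=0.0, kd=0.05)

for frame in range(120):                  # 4 simulated seconds
    err_steer = x_object - frame_center        # step 3: compute error
    steering = steer_pid.update(err_steer, DT) # step 4: PID update
    x_object -= steering * DT * 5.0            # step 5: toy plant response
print(x_object - frame_center)            # residual offset after 120 frames
```

Because each steering command changes what the next "frame" measures, the simulated offset decays toward zero, which is exactly the closed-loop property the paragraph above describes.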

Experimental Observations

During PID tuning I noted the following effects:

| Parameter | Observation |
| --- | --- |
| High $K_p$ | Aggressive response, leading to oscillations around the target. |
| Low $K_p$ | Slow, sluggish response. |
| No derivative term | Noticeable overshoot, especially with sudden object motion. |
| Added derivative | Smoother transitions, reduced overshoot. |
| High integral gain | Error accumulation causing long-term instability. |
| Properly tuned I | Eliminates steady-state error without destabilizing the system. |

These insights reinforced how each PID component shapes real‑world behavior.
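The overshoot rows of the table can be reproduced in a small simulation: a double-integrator "robot" (position responds to acceleration commands) is driven toward a setpoint with P-only versus PD control. The plant model and gains are illustrative, chosen only to make the damping effect visible.

```python
def simulate(kp: float, kd: float, steps: int = 2000, dt: float = 0.01) -> float:
    """Drive a double integrator (pos'' = u) toward target 1.0; return peak position."""
    pos, vel, target = 0.0, 0.0, 1.0
    peak = 0.0
    for _ in range(steps):
        error = target - pos
        u = kp * error - kd * vel   # for a constant target, d(error)/dt = -velocity
        vel += u * dt
        pos += vel * dt
        peak = max(peak, pos)
    return peak

peak_p = simulate(kp=10.0, kd=0.0)   # P-only: undamped, oscillates past the target
peak_pd = simulate(kp=10.0, kd=6.0)  # PD: heavily damped, barely overshoots
print(peak_p, peak_pd)
```

With no derivative term the position swings well past the target before coming back, matching the "noticeable overshoot" observation; adding the derivative term damps the motion, matching "smoother transitions, reduced overshoot."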

Real‑World Applications

Vision‑based closed‑loop control is common in many robotic domains, including:

  • Line‑following robots
  • Self‑driving vehicles
  • Drone tracking systems
  • Warehouse automation robots
  • Visual servoing in industrial robotics

Although this project is a simplified prototype, it captures the fundamental principles used in production autonomous platforms.

Conclusion

The project demonstrates that computer vision and classical control theory can be seamlessly integrated to create an intelligent, feedback‑driven system. By coupling object detection with dual PID control, the robot can both align itself with a target and maintain a safe distance using only camera input.

Through this implementation I gained practical experience in:

  • Building perception‑to‑control pipelines
  • Tuning PID controllers in a real‑time loop
  • Designing stable closed‑loop architectures for robotics
  • System design

All of these are fundamental concepts in robotics and autonomous systems.