Big O Notation Explained
Source: Dev.to
Introduction
Big O notation describes how the running time of an algorithm grows as the size of its input increases. Understanding the most common complexities helps you write faster, more scalable code and spot performance problems early.
O(1) – Constant Time
An O(1) algorithm takes the same amount of time regardless of input size.
var users = new Dictionary<int, string>
{
    { 1, "Caique" },
    { 2, "Maria" },
    { 3, "Fernanda" },
};
var user = users[1]; // Dictionary lookup by key is O(1) on average
O(log n) – Logarithmic Time
Logarithmic algorithms reduce the problem size by a constant factor (usually 2) at each step. The classic example is binary search on a sorted array.
int BinarySearch(int[] array, int target)
{
    int left = 0;
    int right = array.Length - 1;
    while (left <= right)
    {
        int mid = left + (right - left) / 2; // avoids overflow for large indices
        if (array[mid] == target) return mid;
        if (array[mid] < target) left = mid + 1; // discard the lower half
        else right = mid - 1;                    // discard the upper half
    }
    return -1; // target not in array
}
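In everyday .NET code you rarely need to hand-roll this: the framework's built-in `Array.BinarySearch` gives the same O(log n) behavior on a sorted array. A small sketch:

```csharp
using System;

int[] sorted = { 1, 3, 5, 8, 13, 21 };

// Array.BinarySearch returns the index of the match,
// or a negative number when the target is absent.
Console.WriteLine(Array.BinarySearch(sorted, 8));     // 3
Console.WriteLine(Array.BinarySearch(sorted, 4) < 0); // True
```

Because each step halves the remaining range, even an array of a million elements needs at most about 20 comparisons (log₂ 1,000,000 ≈ 20).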
O(n) – Linear Time
A simple loop that processes each element once runs in linear time.
var users = new List<string> { "Caique", "Maria", "Fernanda" };
foreach (var user in users)
{
    Console.WriteLine(user);
}
If the list grows by 10%, the loop will take roughly 10% longer to finish.
O(n log n) – Linearithmic Time
Many efficient sorting algorithms (e.g., merge sort, quicksort) run in O(n log n) time.
var numbers = new List<int> { 5, 2, 8, 1, 3 };
numbers.Sort(); // List<T>.Sort() uses an O(n log n) introsort internally
O(n²) – Quadratic Time
Quadratic complexity arises when an algorithm contains a nested loop that scans the entire input for every iteration of the outer loop.
var users = new List<string> { "Caique", "Maria", "Fernanda" };
foreach (var user1 in users)
{
    foreach (var user2 in users)
    {
        Console.WriteLine($"{user1} - {user2}");
    }
}
The number of operations grows proportionally to the square of the input size, which can quickly become a performance bottleneck.
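When the inner loop exists only to check membership, a hash-based lookup usually removes the quadratic cost. As a sketch (the duplicate-detection scenario here is an illustrative assumption, not from the original example):

```csharp
using System;
using System.Collections.Generic;

var users = new List<string> { "Caique", "Maria", "Fernanda", "Maria" };

// O(n) overall: HashSet<T>.Add is O(1) on average and returns false
// when the item is already present, so a single pass replaces
// a nested O(n²) comparison loop.
var seen = new HashSet<string>();
foreach (var user in users)
{
    if (!seen.Add(user))
    {
        Console.WriteLine($"Duplicate: {user}"); // Duplicate: Maria
    }
}
```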
Conclusion
Regardless of whether you’re working on the frontend or backend, knowing these common Big O patterns helps you diagnose and avoid performance issues before they become critical. Aim for the lowest complexity that still meets your functional requirements.