# autograd-cpp
## Overview
A lightweight, high‑performance C++ automatic differentiation library using computational graphs.
- Computational Graph‑based AD: Forward and backward propagation through dynamic graphs
- Jacobian & Hessian: First‑ and second‑order derivative computations
- Optimizers: SGD with learning‑rate scheduling (linear, exponential, cosine, polynomial; see the sketch after this list)
- Header‑mostly: Minimal dependencies, easy integration
- CMake Package: `FetchContent` support for seamless integration
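For reference, the four schedule families named above usually take the closed forms sketched below. These are hypothetical free functions for illustration only; the actual API and parameter names in `optim.hpp` may differ.

```cpp
#include <cmath>

// Sketch of the four learning-rate schedule families (hypothetical helpers).
double linear_lr(double lr0, double lr_end, int t, int T) {
    return lr0 + (lr_end - lr0) * double(t) / T;       // straight line lr0 -> lr_end
}
double exponential_lr(double lr0, double gamma, int t) {
    return lr0 * std::pow(gamma, t);                   // decay factor gamma in (0, 1)
}
double cosine_lr(double lr0, double lr_min, int t, int T) {
    const double pi = 3.14159265358979323846;
    return lr_min + 0.5 * (lr0 - lr_min) * (1.0 + std::cos(pi * t / T));
}
double polynomial_lr(double lr0, double power, int t, int T) {
    return lr0 * std::pow(1.0 - double(t) / T, power); // decays to 0 at t = T
}
```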
## Installation

### Using CMake FetchContent

```cmake
include(FetchContent)

FetchContent_Declare(
  autograd_cpp
  GIT_REPOSITORY https://github.com/queelius/autograd-cpp.git
  GIT_TAG main
)

set(BUILD_EXAMPLES OFF CACHE BOOL "" FORCE)
set(BUILD_TESTS OFF CACHE BOOL "" FORCE)
FetchContent_MakeAvailable(autograd_cpp)

target_link_libraries(your_app PRIVATE autograd::autograd)
```
### Building from source

```bash
git clone https://github.com/queelius/autograd-cpp.git
cd autograd-cpp
mkdir build && cd build
cmake ..
make -j$(nproc)
```

### Run examples

```bash
./examples/simple_gradients
./examples/hessian_demo
```
## Requirements
- C++17 or later
- CMake 3.14+
- Optional: OpenMP for parallelization
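If you want the optional OpenMP parallelization, a typical way to wire it into a consuming CMake project is sketched below. This is an assumption about the build setup; autograd-cpp may detect and enable OpenMP on its own, so check its `CMakeLists.txt`.

```cmake
# Sketch: link OpenMP into your own target when it is available.
# autograd-cpp may already handle this internally.
find_package(OpenMP)
if(OpenMP_CXX_FOUND)
    target_link_libraries(your_app PRIVATE OpenMP::OpenMP_CXX)
endif()
```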
## Basic Usage

```cpp
// NOTE: the header path below is an assumption; include whichever
// umbrella header your autograd-cpp install provides.
#include <autograd/autograd.hpp>
#include <iostream>

using namespace autograd;

int main() {
    // Create computation graph
    auto x = constant(3.0);
    auto y = constant(4.0);
    auto z = mul(x, y);                  // z = x * y
    auto result = add(z, constant(2.0)); // result = z + 2

    // Compute gradients
    result->backward();

    std::cout << result->data[0] << std::endl; // 14
    std::cout << x->grad[0] << std::endl;      // 4  (d result/dx = y)
    std::cout << y->grad[0] << std::endl;      // 3  (d result/dy = x)
    return 0;
}
```
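Because the graphs are dynamic, gradient-based optimization can be expressed by rebuilding the graph on every step. Below is a minimal sketch that uses only the calls shown above (the bundled SGD optimizer in `optim.hpp` automates this loop):

```cpp
#include <autograd/autograd.hpp>  // assumed header path, as above
#include <iostream>

using namespace autograd;

int main() {
    // Minimize f(x) = (x - 5)^2 by manual gradient descent,
    // rebuilding the dynamic graph at each step.
    double x_val = 0.0;
    const double lr = 0.1;
    for (int step = 0; step < 50; ++step) {
        auto x = constant(x_val);
        auto d = add(x, constant(-5.0)); // d = x - 5
        auto f = mul(d, d);              // f = (x - 5)^2
        f->backward();                   // populates x->grad
        x_val -= lr * x->grad[0];        // df/dx = 2 (x - 5)
    }
    std::cout << x_val << std::endl;     // ~5
    return 0;
}
```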
## Library Structure

- `tensor.hpp` – Tensor class with gradient tracking
- `ops.hpp` – Operations (`add`, `mul`, `exp`, `log`, `matmul`, etc.)
- `jacobian.hpp` – Jacobian matrix computation
- `hessian.hpp` – Hessian matrix computation
- `optim.hpp` – SGD optimizer with learning‑rate schedules
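`hessian.hpp` computes exact second derivatives through the graph. To illustrate what one Hessian entry means here, the sketch below approximates an entry for f(x, y) = x·y by differencing first-order gradients, using only the Basic Usage calls (and the same assumed header path):

```cpp
#include <autograd/autograd.hpp>  // assumed header path, as above
#include <iostream>

using namespace autograd;

// df/dx of f(x, y) = x * y at a point, via one backward pass.
double dfdx_at(double xv, double yv) {
    auto x = constant(xv);
    auto y = constant(yv);
    auto f = mul(x, y);
    f->backward();
    return x->grad[0];            // df/dx = y
}

int main() {
    // d2f/dydx ~ (df/dx(x, y + eps) - df/dx(x, y)) / eps; exact value is 1
    const double eps = 1e-6;
    double h_xy = (dfdx_at(3.0, 4.0 + eps) - dfdx_at(3.0, 4.0)) / eps;
    std::cout << h_xy << std::endl;  // ~1
    return 0;
}
```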
## Applications
The core automatic‑differentiation engine can be used as a foundation for:
- Neural networks and deep learning
- Statistical modeling and inference
- Physics simulations requiring gradients
- Optimization algorithms
- General scientific computing
## Design Goals
- Minimal – Core AD functionality only; domain‑specific features can be built on top.
- Efficient – Optimized for performance with optional OpenMP parallelization.
- Flexible – Dynamic computational graphs support arbitrary computations.
- Portable – Standard C++17, works on any platform.
## License
[Specify your license]
## Contributing
Contributions are welcome! This repository provides the core AD engine; domain‑specific extensions (e.g., neural networks, statistical models) should be developed as separate packages that depend on autograd-cpp.