[Paper] LLM-Empowered Functional Safety and Security by Design in Automotive Systems

Published: January 5, 2026
4 min read
Source: arXiv - 2601.02215v1

Overview

The authors propose a novel workflow that leverages large language models (LLMs) to streamline the development of Software‑Defined Vehicles (SDVs). By integrating LLMs with formal safety models and model‑driven engineering, the approach tackles two critical pain points: designing secure system topologies and automatically validating event‑driven safety code for automotive control networks such as CAN and the emerging Vehicle Signal Specification (VSS).

Key Contributions

  • LLM‑augmented design assistant for generating and checking security‑aware vehicle system topologies using Model‑Driven Engineering (MDE) and OCL constraints.
  • Event‑chain formalism that captures the semantics of message flows across ECUs, enabling systematic functional‑safety validation (ISO 26262) for both CAN and VSS messages.
  • End‑to‑end workflow that bridges high‑level architectural models with low‑level source‑code analysis, powered by LLMs for natural‑language specification extraction and rule generation.
  • Prototype implementation evaluated on realistic ADAS use‑cases, demonstrating both a locally deployable open‑source stack and a proprietary solution.
  • Empirical evidence that the LLM‑driven pipeline reduces manual effort in safety/security reviews while maintaining compliance with functional‑safety standards.

Methodology

  1. Model‑Driven Topology Design

    • Engineers describe the vehicle’s logical architecture (ECUs, communication buses, VSS data models) in a domain‑specific language.
    • An LLM parses these textual specifications and auto‑generates UML/MDE models enriched with Object Constraint Language (OCL) rules that encode security policies (e.g., authentication, isolation).
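The paper does not include source code, but the kind of security invariant encoded in its OCL rules can be sketched in a few lines of Python. The model classes and the specific invariant below (`external-facing ECUs must require message authentication`) are illustrative assumptions, not the authors' actual rule set:

```python
from dataclasses import dataclass, field

@dataclass
class Ecu:
    name: str
    external_facing: bool  # reachable from outside the vehicle (e.g. telematics)
    requires_auth: bool    # security policy: message authentication enabled

@dataclass
class Topology:
    ecus: list = field(default_factory=list)

def check_auth_constraint(topology: Topology) -> list:
    """OCL-style invariant, roughly:
    context Ecu inv: self.externalFacing implies self.requiresAuth
    Returns the names of ECUs violating the invariant.
    """
    return [e.name for e in topology.ecus
            if e.external_facing and not e.requires_auth]

topo = Topology([
    Ecu("TelematicsUnit", external_facing=True, requires_auth=False),
    Ecu("BrakeController", external_facing=False, requires_auth=True),
])
print(check_auth_constraint(topo))  # ['TelematicsUnit']
```

In the paper's workflow the LLM would generate such constraints from textual policy descriptions; the point here is only the shape of the check, which an MDE tool would evaluate over the generated UML model rather than over Python objects.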
  2. Event‑Chain Construction

    • The system’s runtime behavior is abstracted as event chains—ordered sequences of messages exchanged between components.
    • Each chain is annotated with semantic pre‑ and post‑conditions derived from the VSS ontology and CAN signal definitions.
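An event chain, as described above, pairs an ordered message sequence with semantic pre- and post-conditions. A minimal sketch, with hypothetical signal names and condition predicates standing in for the VSS/CAN-derived annotations:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class Event:
    signal: str      # e.g. a CAN signal or a VSS path like "Vehicle.Chassis.Brake"
    source: str      # emitting component
    target: str      # receiving component
    timestamp_ms: float

@dataclass
class EventChain:
    """Ordered message sequence annotated with pre-/post-conditions."""
    events: list
    precondition: Callable[[list], bool]
    postcondition: Callable[[list], bool]

    def check(self) -> bool:
        # Both conditions must hold over the recorded sequence.
        return self.precondition(self.events) and self.postcondition(self.events)

chain = EventChain(
    events=[
        Event("BrakeRequest", "ADAS", "BrakeECU", 0.0),
        Event("BrakeAck", "BrakeECU", "ADAS", 4.2),
    ],
    precondition=lambda evs: evs[0].signal == "BrakeRequest",
    postcondition=lambda evs: any(e.signal == "BrakeAck" for e in evs),
)
print(chain.check())  # True
```

The real formalism derives these conditions from the VSS ontology and CAN signal databases; the lambdas here are stand-ins for that machinery.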
  3. Safety & Security Analysis

    • The LLM assists in translating natural‑language safety requirements (e.g., “brake command must be acknowledged within 10 ms”) into formal temporal logic constraints.
    • A static analysis engine checks the event‑chain model against these constraints, flagging violations such as missing acknowledgments or unauthorized message routes.
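The "brake command must be acknowledged within 10 ms" requirement from the text corresponds to a bounded-response temporal property, roughly G(request → F≤10ms ack). A hand-written stand-in for the static check, over a hypothetical message trace:

```python
def ack_within_deadline(events, request, ack, deadline_ms):
    """Return the request events that are not acknowledged within deadline_ms.

    Simplified stand-in for checking the temporal constraint
    G(request -> F<=deadline ack) over a recorded event-chain trace.
    """
    violations = []
    for ev in events:
        if ev["signal"] == request:
            acked = any(
                other["signal"] == ack
                and 0 <= other["t_ms"] - ev["t_ms"] <= deadline_ms
                for other in events
            )
            if not acked:
                violations.append(ev)
    return violations

trace = [
    {"signal": "BrakeCmd", "t_ms": 0.0},
    {"signal": "BrakeAck", "t_ms": 12.5},   # too late: 12.5 ms > 10 ms
    {"signal": "BrakeCmd", "t_ms": 100.0},
    {"signal": "BrakeAck", "t_ms": 104.0},  # on time
]
print(len(ack_within_deadline(trace, "BrakeCmd", "BrakeAck", 10.0)))  # 1
```

In the paper the LLM produces the formal constraint and a dedicated analysis engine evaluates it against the event-chain model; this sketch only illustrates the class of violation (a missing or late acknowledgment) that such a check flags.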
  4. Toolchain Integration

    • The workflow plugs into existing automotive development pipelines (e.g., AUTOSAR, ROS‑2) and can run locally (open‑source) or as a proprietary SaaS offering.

Results & Findings

| Metric | Open‑Source Prototype | Proprietary SaaS |
| --- | --- | --- |
| Topology validation time | ↓ 45 % vs. manual OCL checks | ↓ 60 % vs. legacy scripts |
| Event‑chain safety violations detected | 12 previously undocumented issues in a lane‑keeping ADAS | 18 issues across three ADAS modules |
| False‑positive rate | 8 % (acceptable for early‑stage testing) | 5 % (after fine‑tuning) |
| Developer effort (person‑hours) | Reduced by ~30 h per project | Reduced by ~45 h per project |

The study shows that LLM‑assisted analysis catches subtle safety and security bugs that traditional static analysis tools miss, especially those involving cross‑bus message semantics (CAN ↔ VSS). Moreover, the approach scales to complex ADAS stacks without a proportional increase in manual review effort.

Practical Implications

  • Faster Time‑to‑Market: Automating topology security checks and safety validation cuts weeks off the verification phase of SDV projects.
  • Reduced Engineering Costs: By offloading routine rule generation and compliance checks to an LLM, teams can reallocate senior engineers to higher‑impact design work.
  • Improved Safety Assurance: Formal event‑chain analysis ensures that safety‑critical messages retain their intended semantics across heterogeneous bus systems, a common source of hidden bugs in modern vehicles.
  • Seamless Integration: The workflow plugs into existing CI/CD pipelines, allowing continuous safety/security verification as code evolves—critical for over‑the‑air (OTA) updates.
  • Vendor‑Neutral Tooling: The open‑source variant gives OEMs and Tier‑1 suppliers a cost‑effective way to adopt LLM‑driven safety engineering without locking into a single vendor ecosystem.

Limitations & Future Work

  • LLM Hallucinations: Occasionally the model generates inaccurate OCL constraints or misinterprets ambiguous natural‑language requirements, necessitating a human sanity check.
  • Scalability of Formal Verification: While event‑chain models are tractable for current ADAS modules, scaling to full‑vehicle architectures may require compositional verification techniques.
  • Domain‑Specific Training Data: The LLM’s performance hinges on automotive‑specific corpora; broader adoption will benefit from curated datasets covering newer standards (e.g., AUTOSAR Adaptive, VSS 2.0).
  • Security Threat Modeling: The current work focuses on topology constraints; future extensions could integrate threat‑model generation (e.g., STRIDE) directly from LLM prompts.

Overall, the paper demonstrates a compelling blueprint for marrying LLMs with formal automotive safety engineering, promising tangible productivity gains for developers building the next generation of connected, software‑defined vehicles.

Authors

  • Nenad Petrovic
  • Vahid Zolfaghari
  • Fengjunjie Pan
  • Alois Knoll

Paper Information

  • arXiv ID: 2601.02215v1
  • Categories: cs.SE, cs.AI
  • Published: January 5, 2026
