[Paper] LLMAID: Identifying AI Capabilities in Android Apps with LLMs
Recent advancements in artificial intelligence (AI) and its widespread integration into mobile software applications have received significant attention, highli...
Linux kernel evolution breaks drivers through API/ABI changes, semantic shifts, and security-hardening updates. We introduce DRIVEBENCH, an executable corpus of...
Deep Learning (DL) compilers have been widely utilized to optimize DL models for efficient deployment across various hardware. Due to their vital role in the DL...
Intrinsic functions are specialized functions provided by the compiler that efficiently operate on architecture-specific hardware, allowing programmers to write...
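The abstract above describes compiler intrinsics only in prose. As a minimal, hedged sketch (not taken from that paper, and assuming an x86-64 target compiled with GCC or Clang where <immintrin.h> is available), the snippet below shows what using an intrinsic looks like: SSE intrinsics process four floats per operation while the source remains ordinary C.

```c
/* Minimal sketch of compiler intrinsics in use: adding two float
 * arrays with x86 SSE intrinsics from <immintrin.h>. Assumes an
 * x86-64 target; illustrative only, not code from the paper above. */
#include <immintrin.h>
#include <stdio.h>

/* Add 4 floats at a time; n is assumed to be a multiple of 4. */
static void add_f32(const float *a, const float *b, float *out, int n) {
    for (int i = 0; i < n; i += 4) {
        __m128 va = _mm_loadu_ps(a + i);   /* unaligned 128-bit load   */
        __m128 vb = _mm_loadu_ps(b + i);
        __m128 vc = _mm_add_ps(va, vb);    /* 4 parallel float adds    */
        _mm_storeu_ps(out + i, vc);        /* unaligned 128-bit store  */
    }
}

int main(void) {
    float a[4] = {1, 2, 3, 4}, b[4] = {10, 20, 30, 40}, c[4];
    add_f32(a, b, c, 4);
    printf("%.1f %.1f %.1f %.1f\n", c[0], c[1], c[2], c[3]);
    return 0;
}
```

Each `_mm_*` call maps to a specific SIMD instruction, which is what lets programmers write hardware-specific code without dropping to assembly.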
We present a novel framework that integrates Large Language Models (LLMs) into the Git bisect process for semantic fault localization. Traditional bisect assume...
Large Language Models (LLMs) are increasingly integrated into code editors to provide AI-powered code suggestions. Yet many of these suggestions are ignored, re...
Large Language Models (LLMs) have transformed code auto-completion by generating context-aware suggestions. Yet, deciding when to present these suggestions rema...
Large Language Models (LLMs) often produce code with subtle implementation-level bugs despite strong benchmark performance. These errors are hard for LLMs to sp...
Understanding how glioblastoma (GBM) emerges from initially healthy glial tissue requires models that integrate bioelectrical, metabolic, and multicellular dyna...
This study is motivated by a robustness issue in numerical optimization of bound-constrained problems: many algorithms that perform well on a particular benchma...
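To make the problem class concrete, here is a small, generic sketch of a bound-constrained problem and one textbook way bounds are enforced, namely projecting each gradient step back into the box. The objective, bounds, and step size are illustrative assumptions; this is not the benchmark suite or any algorithm evaluated in that study.

```c
/* Generic bound-constrained example, not the paper's benchmark:
 * minimize f(x) = (x0 - 3)^2 + (x1 + 1)^2 subject to 0 <= x_i <= 2,
 * using projected gradient descent (each step is clipped back into
 * the box). The unconstrained minimum (3, -1) is infeasible, so the
 * constrained optimum is (2, 0). */
#include <stdio.h>

static double clamp(double v, double lo, double hi) {
    return v < lo ? lo : (v > hi ? hi : v);
}

int main(void) {
    double x[2] = {1.0, 1.0};          /* feasible starting point  */
    const double lo = 0.0, hi = 2.0;   /* box bounds on both vars  */
    const double step = 0.1;           /* fixed step size          */

    for (int it = 0; it < 200; ++it) {
        /* gradient of f: (2(x0 - 3), 2(x1 + 1)) */
        double g[2] = {2.0 * (x[0] - 3.0), 2.0 * (x[1] + 1.0)};
        for (int i = 0; i < 2; ++i)
            x[i] = clamp(x[i] - step * g[i], lo, hi);  /* project onto box */
    }
    printf("x = (%.3f, %.3f)\n", x[0], x[1]);  /* prints (2.000, 0.000) */
    return 0;
}
```

How a solver handles the active bound (here, x0 pinned at 2) is exactly the kind of implementation detail on which methods tuned to one benchmark can differ.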