[Paper] Internal Representations as Indicators of Hallucinations in Agent Tool Selection
Large Language Models (LLMs) have shown remarkable capabilities in tool calling and use, but they suffer from hallucinations in which they select incorrect tools...