EUNO.NEWS
  • All (21181) +146
  • AI (3169) +10
  • DevOps (940) +5
  • Software (11185) +102
  • IT (5838) +28
  • Education (48)
  • Notice
  • 1 month ago · ai

    [Paper] Explaining the Reasoning of Large Language Models Using Attribution Graphs

    Large language models (LLMs) exhibit remarkable capabilities, yet their reasoning remains opaque, raising safety and trust concerns. Attribution methods, which ...

    #research #paper #ai #machine-learning #nlp
  • 1 month ago · ai

    [Paper] Stepwise Think-Critique: A Unified Framework for Robust and Interpretable LLM Reasoning

    Human beings solve complex problems through critical thinking, where reasoning and evaluation are intertwined to converge toward correct solutions. However, mos...

    #research #paper #ai #machine-learning
  • 1 month ago · ai

    [Paper] PPSEBM: An Energy-Based Model with Progressive Parameter Selection for Continual Learning

    Continual learning remains a fundamental challenge in machine learning, requiring models to learn from a stream of tasks without forgetting previously acquired ...

    #research #paper #ai #machine-learning #nlp
  • 1 month ago · ai

    [Paper] Characterizing Mamba's Selective Memory using Auto-Encoders

    State space models (SSMs) are a promising alternative to transformers for language modeling because they use fixed memory during inference (see the fixed-state recurrence sketch after this list). However, this fixed ...

    #research #paper #ai #nlp
  • 1 month ago · ai

    [Paper] VTCBench: Can Vision-Language Models Understand Long Context with Vision-Text Compression?

    The computational and memory overheads associated with expanding the context window of LLMs severely limit their scalability. A noteworthy solution is vision-te...

    #research #paper #ai #machine-learning #nlp #computer-vision
  • 1 month ago · ai

    [Paper] How Much is Too Much? Exploring LoRA Rank Trade-offs for Retaining Knowledge and Domain Robustness

    Large language models are increasingly adapted to downstream tasks through fine-tuning. Full supervised fine-tuning (SFT) and parameter-efficient fine-tuning (P...

    #research #paper #ai #machine-learning #nlp
  • 1 month ago · ai

    [Paper] Evaluating Metrics for Safety with LLM-as-Judges

    LLMs (Large Language Models) are increasingly used in text processing pipelines to intelligently respond to a variety of inputs and generation tasks. This raise...

    #research #paper #ai #machine-learning #nlp
  • 1 month ago · ai

    [Paper] Human-like Working Memory from Artificial Intrinsic Plasticity Neurons

    Working memory enables the brain to integrate transient information for rapid decision-making. Artificial networks typically replicate this via recurrent or par...

    #research #paper #ai #machine-learning #computer-vision
  • 1 month ago · ai

    [Paper] You Never Know a Person, You Only Know Their Defenses: Detecting Levels of Psychological Defense Mechanisms in Supportive Conversations

    Psychological defenses are strategies, often automatic, that people use to manage distress. Rigid use or overuse of defenses is negatively linked to mental health a...

    #research #paper #ai #nlp
  • 1 month ago · ai

    Into the Omniverse: OpenUSD and NVIDIA Halos Accelerate Safety for Robotaxis, Physical AI Systems

    New NVIDIA safety frameworks and technologies are advancing h...

    #ai #gpu #nvidia
  • 1 month ago · ai

    [Paper] Bolmo: Byteifying the Next Generation of Language Models

    We introduce Bolmo, the first family of competitive fully open byte-level language models (LMs) at the 1B and 7B parameter scales. In contrast to prior research...

    #research #paper #ai #nlp
  • 1 month ago · ai

    [Paper] How Do Semantically Equivalent Code Transformations Impact Membership Inference on LLMs for Code?

    The success of large language models for code relies on vast amounts of code data, including public open-source repositories, such as GitHub, and private, confi...

    #research #paper #ai #machine-learning
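
The Mamba item above notes that state space models keep inference memory fixed because only a recurrent state is carried between steps. The sketch below is a minimal, generic linear state-space scan written to illustrate that property only; the matrix shapes and values are illustrative assumptions, and it deliberately omits Mamba's input-dependent (selective) parameters, which are the paper's actual subject.

```python
# Minimal sketch of why SSM inference uses fixed memory (illustration only;
# this is a plain linear SSM, not Mamba's selective variant).
import numpy as np

def ssm_scan(A, B, C, xs):
    """Compute h_t = A @ h_{t-1} + B @ x_t and y_t = C @ h_t over a sequence.

    Only the current state h (shape: d_state) is carried between steps, so
    inference memory stays constant as the sequence grows, unlike a
    transformer KV cache, which grows with context length.
    """
    h = np.zeros(A.shape[0])
    ys = []
    for x in xs:                # xs: array of input vectors, shape (T, d_in)
        h = A @ h + B @ x       # fixed-size state update
        ys.append(C @ h)        # readout
    return np.stack(ys)

# Toy usage: a 1000-step sequence, yet the carried state is always 16 floats.
rng = np.random.default_rng(0)
A = 0.9 * np.eye(16)            # stable toy dynamics (hypothetical values)
B = rng.normal(size=(16, 8))
C = rng.normal(size=(4, 16))
ys = ssm_scan(A, B, C, rng.normal(size=(1000, 8)))
print(ys.shape)                 # (1000, 4)
```

What Mamba adds, and what the paper probes with auto-encoders, is how that fixed-size state selectively decides what to retain; this toy scan does not model that mechanism.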
