EUNO.NEWS
  • All (7970) +28
  • AI (1325) +4
  • DevOps (398) +2
  • Software (3928) +15
  • IT (2299) +7
  • Education (20)
  • Notice
  • 3 weeks ago · ai

    [Paper] How to Correctly Report LLM-as-a-Judge Evaluations

    Large language models (LLMs) are increasingly used as evaluators in lieu of humans. While scalable, their judgments are noisy due to imperfect specificity and s...

    #research #paper #ai #machine-learning #nlp
  • 3 weeks ago · ai

    [Paper] MortgageLLM: Domain-Adaptive Pretraining with Residual Instruction Transfer, Alignment Tuning, and Task-Specific Routing

    Large Language Models (LLMs) demonstrate exceptional capabilities across general domains, yet their application to specialized sectors such as mortgage finance ...

    #research #paper #ai #machine-learning #nlp
  • 3 weeks ago · ai

    [Paper] ASR Error Correction in Low-Resource Burmese with Alignment-Enhanced Transformers using Phonetic Features

    This paper investigates sequence-to-sequence Transformer models for automatic speech recognition (ASR) error correction in low-resource Burmese, focusing on dif...

#ASR #error-correction #low-resource-languages #phonetic-features #transformer
  • 3 weeks ago · ai

    [Paper] Orthographic Constraint Satisfaction and Human Difficulty Alignment in Large Language Models

    Large language models must satisfy hard orthographic constraints during controlled text generation, yet systematic cross-architecture evaluation remains limited...

    #research #paper #ai #nlp
  • 3 weeks ago · ai

    [Paper] Enhancing Burmese News Classification with Kolmogorov-Arnold Network Head Fine-tuning

    In low-resource languages like Burmese, classification tasks often fine-tune only the final classification layer, keeping pre-trained encoder weights frozen. Wh...

#Burmese-NLP #Kolmogorov-Arnold-Network #text-classification #low-resource-languages #KAN-heads
  • 3 weeks ago · ai

    [Paper] Context-Aware Pragmatic Metacognitive Prompting for Sarcasm Detection

Detecting sarcasm remains a challenging task in Natural Language Processing (NLP) despite recent advances in neural network approaches. Currently, ...

#sarcasm-detection #prompt-engineering #retrieval-augmented-generation #nlp #large-language-models
  • 3 weeks ago · ai

    [Paper] Zipf Distributions from Two-Stage Symbolic Processes: Stability Under Stochastic Lexical Filtering

Zipf's law in language lacks a definitive origin and remains debated across fields. This study explains Zipf-like behavior using geometric mechanisms without linguistic el...

    #research #paper #ai #nlp
  • 3 weeks ago · ai

    [Paper] A Unified Understanding of Offline Data Selection and Online Self-refining Generation for Post-training LLMs

Offline data selection and online self-refining generation, which enhance data quality, are crucial steps in adapting large language models (LLMs) to specif...

#LLM-fine-tuning #bilevel-optimization #data-selection #self-refining-generation #AI-safety
  • 3 weeks ago · ai

    [Paper] Semantic Anchors in In-Context Learning: Why Small LLMs Cannot Flip Their Labels

    Can in-context learning (ICL) override pre-trained label semantics, or does it merely refine an existing semantic backbone? We address this question by treating...

    #research #paper #ai #machine-learning #nlp
  • 3 weeks ago · ai

    [Paper] Gated KalmaNet: A Fading Memory Layer Through Test-Time Ridge Regression

    As efficient alternatives to softmax Attention, linear state-space models (SSMs) achieve constant memory and linear compute, but maintain only a lossy, fading s...

#gated-kalmanet #ridge-regression #long-context-language-models #state-space-models #AI-research
  • 3 weeks ago · ai

    [Paper] TrackList: Tracing Back Query Linguistic Diversity for Head and Tail Knowledge in Open Large Language Models

Large Language Models (LLMs) have proven effective at giving definition-type answers to user input queries. While for humans giving various types of answers, su...

    #research #paper #ai #nlp
  • 3 weeks ago · ai

    [Paper] Even with AI, Bijection Discovery is Still Hard: The Opportunities and Challenges of OpenEvolve for Novel Bijection Construction

    Evolutionary program synthesis systems such as AlphaEvolve, OpenEvolve, and ShinkaEvolve offer a new approach to AI-assisted mathematical discovery. These syste...

#LLM #evolutionary-algorithms #bijection-discovery #combinatorial-mathematics #OpenEvolve
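The Zipf entry above describes Zipf-like behavior arising from geometric mechanisms with no linguistic structure. As a hedged illustration of that general idea — the classic random-text ("monkey typing") model, not necessarily the paper's own two-stage process — emitting symbols uniformly and then segmenting at spaces already produces a heavy-tailed rank-frequency curve:

```python
import random
from collections import Counter

random.seed(0)

# Stage 1: emit symbols uniformly from a small alphabet plus a space.
alphabet = "abcd"
stream = "".join(random.choice(alphabet + " ") for _ in range(200_000))

# Stage 2: segment the symbol stream into "words" at the spaces.
words = [w for w in stream.split(" ") if w]

# Word lengths are geometrically distributed, so the rank-frequency
# curve is approximately Zipf-like (power-law) despite the process
# having no lexical or semantic structure at all.
freqs = sorted(Counter(words).values(), reverse=True)
print(freqs[:5])
```

The point of the sketch is that a purely geometric stopping rule, not language, is enough to make a few short words dominate while a long tail of rare words decays roughly as an inverse power of rank.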
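The Gated KalmaNet teaser frames its memory layer as solving a ridge regression at test time. As an illustration of that underlying primitive only — a minimal sketch assuming a single feature dimension, not the paper's actual layer — the closed-form ridge estimator shrinks the least-squares fit toward zero:

```python
import random

def ridge_1d(xs, ys, lam=0.1):
    """Closed-form ridge regression with one feature:
    minimize sum((y - w*x)^2) + lam*w^2  =>  w = Σxy / (Σx² + lam)."""
    return sum(x * y for x, y in zip(xs, ys)) / (sum(x * x for x in xs) + lam)

random.seed(1)
xs = [random.gauss(0, 1) for _ in range(1000)]
ys = [2.0 * x + random.gauss(0, 0.1) for x in xs]  # true slope 2.0
w = ridge_1d(xs, ys)
print(round(w, 2))  # close to 2.0; lam shrinks it slightly toward 0
```

A layer that maintains such a regularized solve over a growing context gets a bounded-memory, lossy summary of everything seen so far, which is the trade-off the abstract contrasts with softmax attention.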

RSS GitHub © 2025