EUNO.NEWS
  • All (21139) +104
  • AI (3166) +7
  • DevOps (939) +4
  • Software (11165) +82
  • IT (5820) +10
  • Education (48)
  • Notice
  • 1 week ago · ai

    [Paper] Multi-Modal Data-Enhanced Foundation Models for Prediction and Control in Wireless Networks: A Survey

    Foundation models (FMs) are recognized as a transformative breakthrough that has started to reshape the future of artificial intelligence (AI) across both acade...

    #research #paper #ai #machine-learning #nlp #computer-vision
  • 1 week ago · ai

    65% of Hacker News Posts Have Negative Sentiment, and They Outperform

    Article URL: https://philippdubach.com/standalone/hn-sentiment/ Comments URL: https://news.ycombinator.com/item?id=46512881 Points: 34 Comments: 20...

    #sentiment analysis #Hacker News #NLP #data science
  • 1 week ago · ai

    GliNER2: Extracting Structured Information from Text

    From unstructured text to structured Knowledge Graphs The post GliNER2: Extracting Structured Information from Text appeared first on Towards Data Science....

    #information extraction #knowledge graph #NLP #structured data #GliNER2
  • 2 weeks ago · ai

    [Paper] Hierarchical temporal receptive windows and zero-shot timescale generalization in biologically constrained scale-invariant deep networks

    Human cognition integrates information across nested timescales. While the cortex exhibits hierarchical Temporal Receptive Windows (TRWs), local circuits often ...

    #research #paper #ai #machine-learning #nlp
  • 2 weeks ago · ai

    [Paper] Chronicals: A High-Performance Framework for LLM Fine-Tuning with 3.51x Speedup over Unsloth

    Large language model fine-tuning is bottlenecked by memory: a 7B parameter model requires 84GB: 14GB for weights, 14GB for gradients, and 56GB for FP32 optimize...

    #research #paper #ai #machine-learning #nlp
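    The 84GB figure quoted in the abstract can be reproduced with a quick back-of-the-envelope calculation (a sketch, assuming FP16 weights and gradients and Adam-style FP32 moment buffers, which matches the stated 14 + 14 + 56 split):

    ```python
    # Rough fine-tuning memory estimate for a 7B-parameter model.
    PARAMS = 7e9

    weights_gb = PARAMS * 2 / 1e9        # FP16 weights: 2 bytes/param
    grads_gb = PARAMS * 2 / 1e9          # FP16 gradients: 2 bytes/param
    optimizer_gb = PARAMS * 2 * 4 / 1e9  # two FP32 Adam moments: 8 bytes/param

    total_gb = weights_gb + grads_gb + optimizer_gb
    print(weights_gb, grads_gb, optimizer_gb, total_gb)  # 14.0 14.0 56.0 84.0
    ```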
  • 2 weeks ago · ai

    [Paper] Robust Persona-Aware Toxicity Detection with Prompt Optimization and Learned Ensembling

    Toxicity detection is inherently subjective, shaped by the diverse perspectives and social priors of different demographic groups. While "pluralistic" modelin...

    #research #paper #ai #nlp
  • 2 weeks ago · ai

    [Paper] Estimating Text Temperature

    Autoregressive language models typically use a temperature parameter at inference to shape the probability distribution and control the randomness of the text gen...

    #research #paper #ai #nlp
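    The temperature mechanism the abstract refers to is the standard logit-scaling step before softmax (a minimal sketch, not the paper's estimation method): dividing logits by T < 1 sharpens the distribution, while T > 1 flattens it toward uniform.

    ```python
    import math

    def softmax_with_temperature(logits, temperature=1.0):
        """Scale logits by 1/T, then apply a numerically stable softmax."""
        scaled = [z / temperature for z in logits]
        m = max(scaled)  # subtract max to avoid overflow in exp
        exps = [math.exp(z - m) for z in scaled]
        total = sum(exps)
        return [e / total for e in exps]

    logits = [2.0, 1.0, 0.1]
    p_sharp = softmax_with_temperature(logits, 0.5)  # peaked distribution
    p_flat = softmax_with_temperature(logits, 2.0)   # flatter distribution
    print(p_sharp, p_flat)
    ```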
  • 2 weeks ago · ai

    [Paper] Classifying several dialectal Nawatl varieties

    Mexico is a country with a large number of indigenous languages, among which the most widely spoken is Nawatl, with more than two million people currently speak...

    #research #paper #ai #nlp
  • 2 weeks ago · ai

    [Paper] Power-of-Two Quantization-Aware-Training (PoT-QAT) in Large Language Models (LLMs)

    In Large Language Models (LLMs), the number of parameters has grown exponentially in the past few years, e.g., from 1.5 billion parameters in GPT-2 to 175 billi...

    #research #paper #ai #nlp
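    The core idea behind power-of-two quantization is to constrain each weight to a signed power of two, so multiplication reduces to a bit shift. A minimal sketch of the rounding step (an illustration only; the paper's QAT procedure, exponent range, and rounding rule are not specified in this teaser):

    ```python
    import math

    def quantize_pot(w, min_exp=-8, max_exp=0):
        """Map a weight to the nearest signed power of two (by rounding
        its log2 magnitude), with the exponent clamped to [min_exp, max_exp]."""
        if w == 0.0:
            return 0.0
        sign = 1.0 if w > 0 else -1.0
        exp = round(math.log2(abs(w)))
        exp = max(min_exp, min(max_exp, exp))
        return sign * (2.0 ** exp)

    print([quantize_pot(w) for w in [0.3, -0.7, 0.05]])  # [0.25, -0.5, 0.0625]
    ```

    During quantization-aware training, a step like this is typically applied in the forward pass while gradients flow through a straight-through estimator.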
  • 2 weeks ago · ai

    [Paper] pdfQA: Diverse, Challenging, and Realistic Question Answering over PDFs

    PDFs are the second-most used document type on the internet (after HTML). Yet, existing QA datasets commonly start from text sources or only address specific do...

    #research #paper #ai #machine-learning #nlp
  • 2 weeks ago · ai

    [Paper] CD4LM: Consistency Distillation and aDaptive Decoding for Diffusion Language Models

    Autoregressive large language models achieve strong results on many benchmarks, but decoding remains fundamentally latency-limited by sequential dependence on p...

    #research #paper #ai #nlp
  • 2 weeks ago · ai

    [Paper] From XAI to Stories: A Factorial Study of LLM-Generated Explanation Quality

    Explainable AI (XAI) methods like SHAP and LIME produce numerical feature attributions that remain inaccessible to non-expert users. Prior work has shown that L...

    #research #paper #ai #nlp

RSS GitHub © 2026