EUNO.NEWS
  • All (21181) +146
  • AI (3169) +10
  • DevOps (940) +5
  • Software (11185) +102
  • IT (5838) +28
  • Education (48)
  • Notice
  • 3 weeks ago · ai

    [Paper] Cube Bench: A Benchmark for Spatial Visual Reasoning in MLLMs

    We introduce Cube Bench, a Rubik's-cube benchmark for evaluating spatial and sequential reasoning in multimodal large language models (MLLMs). The benchmark dec...

    #research #paper #ai #machine-learning #nlp #computer-vision
  • 3 weeks ago · ai

    [Paper] Leveraging High-Fidelity Digital Models and Reinforcement Learning for Mission Engineering: A Case Study of Aerial Firefighting Under Perfect Information

    As systems engineering (SE) objectives evolve from design and operation of monolithic systems to complex System of Systems (SoS), the discipline of Mission Engi...

    #research #paper #ai #machine-learning
  • 3 weeks ago · ai

    [Paper] Automated stereotactic radiosurgery planning using a human-in-the-loop reasoning large language model agent

    Stereotactic radiosurgery (SRS) demands precise dose shaping around critical structures, yet black-box AI systems have limited clinical adoption due to opacity ...

    #research #paper #ai #machine-learning #nlp
  • 3 weeks ago · ai

    [Paper] ReLU and softplus neural nets as zero-sum turn-based games

    We show that the output of a ReLU neural network can be interpreted as the value of a zero-sum, turn-based, stopping game, which we call the ReLU net game. The ...

    #research #paper #ai #machine-learning
  • 3 weeks ago · ai

    [Paper] Can LLMs Predict Their Own Failures? Self-Awareness via Internal Circuits

    Large language models (LLMs) generate fluent and complex outputs but often fail to recognize their own mistakes and hallucinations. Existing approaches typicall...

    #research #paper #ai #nlp
  • 3 weeks ago · ai

    [Paper] Improving ML Training Data with Gold-Standard Quality Metrics

    Hand-tagged training data is essential to many machine learning tasks. However, training data quality control has received little attention in the literature, d...

    #research #paper #ai #machine-learning
  • 3 weeks ago · ai

    [Paper] Performative Policy Gradient: Optimality in Performative Reinforcement Learning

    Post-deployment machine learning algorithms often influence the environments they act in, and thus shift the underlying dynamics that the standard reinforcement...

    #research #paper #ai #machine-learning
  • 3 weeks ago · ai

    [Paper] Fail Fast, Win Big: Rethinking the Drafting Strategy in Speculative Decoding via Diffusion LLMs

    Diffusion Large Language Models (dLLMs) offer fast, parallel token generation, but their standalone use is plagued by an inherent efficiency-quality tradeoff. W...

    #research #paper #ai #machine-learning
  • 3 weeks ago · ai

    [Paper] Distilling to Hybrid Attention Models via KL-Guided Layer Selection

    Distilling pretrained softmax attention Transformers into more efficient hybrid architectures that interleave softmax and linear attention layers is a promising...

    #research #paper #ai #machine-learning #nlp
  • 3 weeks ago · ai

    [Paper] LEAD: Minimizing Learner-Expert Asymmetry in End-to-End Driving

    Simulators can generate virtually unlimited driving data, yet imitation learning policies in simulation still struggle to achieve robust closed-loop performance...

    #research #paper #ai #machine-learning #computer-vision
  • 3 weeks ago · ai

    [Paper] Shallow Neural Networks Learn Low-Degree Spherical Polynomials with Learnable Channel Attention

    We study the problem of learning a low-degree spherical polynomial of degree ℓ_0 = Θ(1) ≥ 1 defined on the unit sphere in ℝ^d by training an over-parameteri...

    #research #paper #ai #machine-learning
  • 3 weeks ago · ai

    [Paper] FlashVLM: Text-Guided Visual Token Selection for Large Multimodal Models

    Large vision-language models (VLMs) typically process hundreds or thousands of visual tokens per image or video frame, incurring quadratic attention cost and su...

    #research #paper #ai #computer-vision
