EUNO.NEWS
  • All (20993) +299
  • AI (3155) +14
  • DevOps (933) +7
  • Software (11054) +203
  • IT (5802) +74
  • Education (48)
  • Notice
  • 1 month ago · ai

    AdaSPEC: Selective Knowledge Distillation for Efficient Speculative Decoders

    AdaSPEC is a new method that speeds up large language models by using a small draft model for an initial generation pass, followed by verification...

    #speculative decoding #knowledge distillation #large language models #inference acceleration #draft model #AdaSPEC #AI efficiency #model compression
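    The summary above describes the decoding scheme AdaSPEC targets: a cheap draft model proposes tokens and the large target model verifies them. Below is a minimal, hedged sketch of that draft-then-verify loop using deterministic toy "models" (all function names and the toy token rule are illustrative, not from the paper; AdaSPEC's actual contribution is *how the draft model is trained* via selective distillation, which is not shown here):

    ```python
    # Toy speculative decoding sketch. Tokens are small integers; each "model"
    # maps a context tuple to its next token. Illustrative only.

    def target_model(ctx):
        # Toy target: next token is the sum of the context, mod 10.
        return sum(ctx) % 10

    def draft_model(ctx):
        # Imperfect draft: agrees with the target except when the context
        # sum is divisible by 3, mimicking a smaller, weaker model.
        s = sum(ctx)
        return s % 10 if s % 3 else (s + 1) % 10

    def speculative_decode(prompt, n_tokens, k=4):
        """Generate n_tokens after prompt, drafting k tokens per round."""
        out = list(prompt)
        calls = 0  # target-model token verifications performed
        while len(out) - len(prompt) < n_tokens:
            # 1. Draft proposes k tokens autoregressively (cheap).
            ctx, proposal = list(out), []
            for _ in range(k):
                t = draft_model(tuple(ctx))
                proposal.append(t)
                ctx.append(t)
            # 2. Target verifies the proposals; accept the longest matching
            #    prefix, then substitute the target's own token at the first
            #    mismatch. (A real implementation checks all k positions in
            #    ONE parallel target forward pass -- the source of the speedup.)
            ctx, accepted = list(out), []
            for t in proposal:
                true_t = target_model(tuple(ctx))
                calls += 1
                if t == true_t:
                    accepted.append(t)
                    ctx.append(t)
                else:
                    accepted.append(true_t)
                    break
            out.extend(accepted)
        return out[:len(prompt) + n_tokens], calls
    ```

    Because every emitted token is either verified or replaced by the target, the output is identical to plain greedy decoding with the target model alone; the better the draft matches the target, the fewer verification rounds are needed, which is exactly what distilling the draft model aims to improve.
    
    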
  • 1 month ago · ai

    [Paper] CanKD: Cross-Attention-based Non-local operation for Feature-based Knowledge Distillation

    We propose Cross-Attention-based Non-local Knowledge Distillation (CanKD), a novel feature-based knowledge distillation framework that leverages cross-attention...

    #knowledge distillation #cross-attention #computer vision #model compression #deep learning
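    The abstract describes cross-attention as a non-local operation for feature-based distillation: every student position can attend to every teacher position rather than being matched location-by-location. A minimal NumPy sketch of that idea follows, with queries from the student and keys/values from the teacher; the exact CanKD formulation is not given in the summary, so this generic scaled-dot-product form and the final L2 loss are assumptions:

    ```python
    import numpy as np

    def softmax(x, axis=-1):
        e = np.exp(x - x.max(axis=axis, keepdims=True))
        return e / e.sum(axis=axis, keepdims=True)

    def cross_attention_kd_loss(student_feat, teacher_feat):
        """Toy cross-attention feature distillation (sketch, not CanKD's exact form).

        student_feat, teacher_feat: (N, C) arrays -- a flattened H*W feature
        map with N spatial positions and C channels. Queries come from the
        student, keys/values from the teacher, so each student position
        aggregates information from ALL teacher positions (non-local). The
        attention-refined teacher features then supervise the student via L2.
        """
        n, c = student_feat.shape
        q = student_feat                  # (N, C) queries from the student
        k = teacher_feat                  # (N, C) keys from the teacher
        v = teacher_feat                  # (N, C) values from the teacher
        attn = softmax(q @ k.T / np.sqrt(c), axis=-1)  # (N, N) cross-attention map
        refined = attn @ v                # teacher features re-weighted per student query
        return float(np.mean((student_feat - refined) ** 2))
    ```

    Each row of `attn` is a convex combination over teacher positions, so the distillation target adapts to what the student currently encodes, rather than forcing a rigid per-pixel feature match.
    
    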
RSS GitHub © 2026