EUNO.NEWS
  • All (20931) +237
  • AI (3154) +13
  • DevOps (932) +6
  • Software (11018) +167
  • IT (5778) +50
  • Education (48)
  • Notice
  • 5 days ago · ai

    Do LLMs Know They Are Hallucinating? Meet Gnosis, the 5M Parameter Observer

    The Problem with Hallucinations: Despite their impressive capabilities, LLMs often generate incorrect information with absolute confidence. Traditional methods...

    #LLM #hallucination detection #AI safety #Gnosis #model monitoring #internal dynamics #small observer #University of Alberta
  • 5 days ago · ai

    The insecure evangelism of LLM maximalists

    Article URL: https://lewiscampbell.tech/blog/260114.html Comments URL: https://news.ycombinator.com/item?id=46609591 Points: 114 Comments: 112...

    #large language models #AI safety #AI ethics #LLM security #AI evangelism
  • 6 days ago · ai

    Signal leaders warn agentic AI is an insecure, unreliable surveillance risk

    Article URL: https://coywolf.com/news/productivity/signal-president-and-vp-warn-agentic-ai-is-insecure-unreliable-and-a-surveillance-nightmare/ Comments URL: ht...

    #agentic AI #AI security #privacy #surveillance risk #Signal #AI safety
  • 1 week ago · ai

    Why Ontario Digital Service couldn't procure '98% safe' LLMs (15M Canadians)

    Article URL: https://rosetta-labs-erb.github.io/authority-boundary-ledger/ Comments URL: https://news.ycombinator.com/item?id=46589386 Points: 16 Comments: 2...

    #Ontario Digital Service #LLM #AI safety #procurement #government #Canada
  • 1 week ago · ai

    Anthropic made a big mistake

    Article URL: https://archaeologist.dev/artifacts/anthropic Comments URL: https://news.ycombinator.com/item?id=46586766 Points: 53 Comments: 45...

    #Anthropic #AI #large language model #company mistake #AI safety
  • 1 week ago · ai

    This Week in AI: ChatGPT Health Risks, Programming for LLMs, and Why Indonesia Blocked Grok

    Pour your coffee and settle in. This week brought some of the most...

    #ChatGPT Health #medical AI #LLM programming #AI safety #hallucinations #Indonesia Grok ban #AI news
  • 1 week ago · ai

    Can AI See Inside Its Own Mind? Anthropic's Breakthrough in Machine Introspection

    The Experiment: Probing the Black Box. For years, we have treated large language models (LLMs) as black boxes. When a model says, “I am currently thinking about c...

    #AI safety #machine introspection #Anthropic #large language models #activation injection #research #LLM transparency
  • 1 week ago · ai

    LLMs are like Humans - They make mistakes. Here is how we limit them with Guardrails


    #LLM #AI hallucination #guardrails #prompt engineering #AI safety
  • 1 week ago · ai

    Won't LLMs eventually train on themselves? Their output will slowly decline...

    TL;DR LLMs train on stuff like documentation, GitHub repositories, StackOverflow, and Reddit. But as we keep using LLMs, their own output goes into these platf...

    #LLM #model degradation #data contamination #AI training data #self-referential output #AI safety
  • 1 week ago · ai

    I broke GPT-2: How I proved Semantic Collapse using Geometry (The Ainex Limit)

    TL;DR I forced GPT-2 to learn from its own output for 20 generations. By Generation 20 the model lost 66% of its semantic volume and began hallucinating state...

    #GPT-2 #semantic collapse #synthetic data #language models #AI safety #model degradation #geometry analysis
  • 1 week ago · ai

    LLM Problems Observed in Humans

    Article URL: https://embd.cc/llm-problems-observed-in-humans Comments URL: https://news.ycombinator.com/item?id=46527581 Points: 24 Comments: 2...

    #large language models #LLM #human behavior #AI safety #cognitive biases
  • 1 week ago · ai

    Why Image Hallucination Is More Dangerous Than Text Hallucination


    #image hallucination #vision-language models #AI safety #multimodal AI #generative AI
