Help Wanted: Kernel Developers Needed to Build AI-Native Boot Infrastructure

Published: February 1, 2026 at 03:11 AM EST
7 min read
Source: Dev.to

Transform a Proof‑of‑Concept into Production Reality

I’m HejHdiss, and I need to be upfront: I’m a self‑taught C programmer who knows pure C, some standard libraries, and some POSIX C—but I have zero deep kernel‑development experience. Yet here I am, calling for help on a Linux kernel module project.

Why?

I have a vision for something that doesn’t exist yet, and I’ve taken it as far as my knowledge allows. Now I need experts to help make it real.

The Project: NeuroShell LKM

  • Repository:
  • Purpose: A Linux kernel module that exposes detailed hardware information through /sys/kernel/neuroshell/.
  • What it detects:
    • CPUs
    • Memory
    • NUMA topology
    • GPUs (NVIDIA, AMD, Intel)
    • AI accelerators (TPUs, NPUs, etc.)

Full disclosure: The kernel‑level code was largely generated by Claude (Anthropic’s AI). I wrote the prompts, validated the functionality, and tested the modules on real hardware. It works, but it’s a proof‑of‑concept, not production‑ready code.

Example Output

$ cat /sys/kernel/neuroshell/system_summary
=== NeuroShell System Summary ===

CPU:
  Online: 8
  Total:  8

Memory:
  Total: 16384 MB

NUMA:
  Nodes: 1

GPUs:
  Total:  1
  NVIDIA: 1

Accelerators:
  Count: 0

It’s basic. It works. But it’s nowhere near what’s actually needed.

The Bigger Vision: NeuroShell OS

I wrote about the full vision here: NeuroShell OS – Rethinking Boot‑Time Design for AI‑Native Computing. The article describes a boot‑time system that:

  • Discovers AI hardware during early boot (before userspace even starts)
  • Dynamically allocates resources based on detected hardware
  • Integrates with bootloaders to make hardware‑aware decisions before the kernel fully loads
  • Optimizes memory topology specifically for tensor operations and AI workloads
  • Provides hardware‑aware scheduler hooks that understand GPU/TPU/NPU topology
  • Handles hot‑plug events for dynamic hardware changes in data‑center environments
  • Exposes real‑time performance metrics for AI‑framework optimization

The current module only reads some PCI devices and exposes sysfs attributes. It’s a toy compared to what the vision requires.

What I Honestly Cannot Do

  • Deep Kernel Integration: I don’t know how to integrate with the bootloader, init systems, or early‑boot sequences. I can write C functions, but I don’t understand kernel subsystems well enough to hook into the right places at the right time.
  • Performance & Concurrency: The code has no locking mechanisms and isn’t SMP‑safe. I lack knowledge of kernel synchronization primitives to fix this properly.
  • Security Hardening: There are buffer‑overflow risks, no input validation, and probably many other security issues I’m unaware of.
  • Advanced Hardware APIs: I barely scratched the surface of PCI enumeration. Real hardware introspection needs:
    • PCIe topology mapping
    • IOMMU configuration awareness
    • Cache hierarchy details
    • Thermal‑zone integration
    • Power‑management state tracking
    • SR‑IOV virtual‑function detection
  • Production Best Practices: Kernel coding style, proper error handling, memory‑management patterns, module‑lifecycle management—I’ve read the docs, but reading and truly understanding are different things.

Why This Matters

A New Class of Operating Systems

Traditional OS boot sequences were designed in the 1970s‑1990s when “high‑performance computing” meant mainframes and workstations. They weren’t designed for:

  • Multi‑GPU training clusters
  • Heterogeneous AI accelerators (GPUs + TPUs + NPUs)
  • NUMA‑aware tensor memory allocation
  • Dynamic resource partitioning for ML workloads

NeuroShell OS reimagines this from the ground up.

Open‑Source AI Infrastructure

The AI industry is increasingly dominated by proprietary stacks. We need open‑source infrastructure that is:

  • Vendor‑neutral (works with NVIDIA, AMD, Intel, custom accelerators)
  • Community‑driven
  • Transparent and auditable
  • Designed for modern AI workloads, not legacy compatibility

A Learning Opportunity

If you’re a kernel developer interested in AI but haven’t dug into how AI frameworks interact with hardware, this is a chance to explore that intersection. The project sits right at the boundary of systems programming and AI infrastructure.

How You Can Help

  • Code Review: Audit the existing module for bugs, security issues, and kernel‑style violations.
  • Architecture Guidance: Evaluate whether a kernel module is the right approach; suggest alternatives if needed.
  • Locking & Concurrency: Make the code SMP‑safe and handle concurrent access properly.
  • Error Handling: Add proper error paths and resource cleanup.
  • Advanced Hardware Detection: Implement deeper PCIe topology, IOMMU awareness, cache details, etc.
  • Hot‑Plug Support: React to dynamic hardware changes.
  • Performance Optimization: Minimize overhead for frequent queries.
  • Testing Framework: Set up automated testing with different hardware configurations.
  • Bootloader Integration: Work with GRUB/systemd‑boot to expose hardware info pre‑kernel.
  • Init‑System Hooks: Integrate with systemd/OpenRC for early hardware configuration.
  • Scheduler Extensions: Provide hardware‑aware CPU/GPU scheduling hints.
  • Memory‑Topology Optimization: Implement NUMA‑aware allocation for AI workloads.

It’s Legitimately Interesting

How many projects let you rethink fundamental OS design for emerging workloads? This isn’t just “fix a bug” work—it’s greenfield architecture.

Real‑World Impact

AI infrastructure is a massive, growing field. Better boot‑time hardware discovery and configuration could improve performance for researchers, engineers, and companies running AI workloads.

If you’re an experienced kernel developer willing to collaborate, please reach out. Together we can turn this proof‑of‑concept into a production‑ready foundation for the next generation of AI‑native operating systems.


It’s Honest

I’m not pretending this is polished production code. I’m being upfront about the limitations and asking for real expertise. No ego, no hidden agendas—just a vision and a request for help.

You’d Own Your Contributions

  • The project is GPL‑v3‑licensed. Your code stays yours.
  • Your expertise gets proper credit.
  • This is collaborative, not exploitative.

Imagine a World Where

  • A researcher spinning up a new AI training node doesn’t manually configure CUDA, ROCm, and NUMA settings—the OS does it automatically at boot.
  • Data centers can hot‑plug GPUs and have the OS instantly recognize and allocate them without manual intervention.
  • AI frameworks get real‑time hardware topology information without parsing /proc/cpuinfo and guessing.
  • Boot‑time hardware discovery is fast, accurate, and vendor‑neutral.

That’s the goal. This kernel module is step one.

Your Options

You don’t have to contribute directly to my repository if you don’t want to. Choose any of the following:

  • Fork and modify – Fork the repo and make it your own.
  • Create a new repo – Start fresh with your own implementation based on the concept.
  • Upload to your own space – Build your version and host it wherever you want.
  • Do whatever you want with it – It’s GPL‑v3—take it in any direction you see fit.

Just add a note about where your version came from. That’s it. I’m not territorial; if you can build a better version independently, please do. The goal is to get this concept working well, not to control who builds it.

How to Contribute

  1. Clone the repo
    git clone https://github.com/hejhdiss/lkm-for-ai-resource-info
  2. Review the code – Look at neuroshell_enhanced.c and see what needs fixing.
  3. Open an issue – Point out bugs, security issues, or architectural problems.
  4. Submit a PR – Even small fixes help build momentum.
  5. Join the design discussion – Read the NeuroShell OS article and share your thoughts.
  6. Propose architecture changes – If the current approach is wrong, let’s figure out the right one.
  7. Implement advanced features – Take ownership of a subsystem (PCIe topology, NUMA, hot‑plug, etc.).
  8. Become a co‑maintainer – If this resonates with you, help drive the project forward.

Other possibilities:

  • Fork it – Make your own version with your own design decisions.
  • Rewrite it – If you think it should be built differently, build it differently.
  • Create something better – Use this as inspiration for your own superior implementation.

The only thing I ask: acknowledge where the idea came from, even if your implementation is completely different.

Spread the Word

If you’re not a kernel developer but know someone who is—especially someone interested in AI infrastructure—please share this.

I’m asking for your expertise, not your charity. I’ve built what I could with the knowledge I have. Now I need people who actually know kernel development to take this seriously and help make it real.

Who I’m Looking For

I’m looking for anyone who:

  • Cares about open‑source infrastructure.
  • Interested in AI/ML systems.
  • Wants to work on something novel and impactful.
  • Appreciates honest collaboration over ego.

Even a few hours to review the code and suggest improvements is valuable. Even pointing me to the right kernel APIs or design patterns helps.

My Story

I could’ve stayed in my lane—stuck to C programs I fully understand, avoided kernel development entirely. But I saw a gap: AI infrastructure needs better boot‑time hardware discovery, and nobody’s building it.

So I did what I could. I learned enough to prototype the idea, used AI to fill knowledge gaps, tested it on real hardware, and it works—barely, but it works.

Now I need people smarter than me to make it work well.

Project & Vision

  • Project:
  • Vision: NeuroShell OS – Rethinking Boot‑Time Design for AI‑Native Computing

Author: HejHdiss (self‑taught C programmer, kernel newbie, but committed to the vision)

Let’s build AI‑native infrastructure together.
