The research desk has a memory problem

Published: March 9, 2026 at 06:49 PM EDT
10 min read
Source: Dev.to

Why a securities firm needed a brain, not another dashboard

An analyst leans across the desk and asks: “What’s our current stance on XYZ Inc — the one that filed something last quarter?” Thirty seconds, maybe a minute at most, is how long this should take. But the analyst who covered the name had quit. Instead, what follows is a small, painful expedition. Someone opens a Bloomberg terminal. Someone else searches their inbox for a coverage note that was definitely emailed around. A third person remembers a conversation from an earnings call but can’t locate the transcript. Twelve browser tabs and twenty‑five minutes later, the picture assembles itself from fragments.

The answer was always there. It was just distributed across four systems, two inboxes, and one analyst’s increasingly unreliable memory. That moment — that unnecessary expedition — is where this project begins.

The beginning

I have worked as a research analyst my entire career. I had never written a line of Python, but I have always focused on building infrastructure to remove the friction from my everyday tasks.

  • Peer comps are cumbersome – build spreadsheets that pull from Bloomberg and our own estimate databases.
  • Updating PowerPoints sucks – build chart packs that can be refreshed automatically (though only via expensive third‑party plug‑ins, since Excel and PowerPoint hate each other on a biblical level).

Context gathering, context switching, and generally working with a complete mess of information at all times seemed like part of the job. I simply did not have time nor the knowledge to build a universal research tool that gave me everything I wanted, when I wanted it. That is, until the last iteration of AI coding tools surfaced.

Where we are

This is the first dispatch in a seven‑part series about building a research‑intelligence platform for a securities firm. I’m writing it as the foundation is being laid—a single enormous commit landing with 79 files and north of 40 000 lines of code—everything from graph‑schema definitions to live‑data connectors to the first generation of AI agents.

I’ll document the decisions as honestly as I can: the architecture choices, the wrong turns, the moments where a clean idea collided with a messy reality. This first chapter doesn’t touch much code. It’s about the problem itself, and why the problem turned out to be more interesting — and more stubborn — than it first appeared.

The problem

Research analysts at a securities firm operate under a specific kind of cognitive load that is almost invisible until you start mapping it. They carry coverage universes in their heads. They know which companies are approaching earnings season. They remember that a particular issuer’s CFO said something cautious on a call six months ago. They recall a ratings change that happened before a junior colleague joined the team.

This knowledge is real and valuable. It is also almost entirely unstructured.

The firm had tools, of course: terminal access for market data, a distribution platform for outbound research notes, stock notifications from a plethora of sites (Bloomberg, FactSet, you name it). The data existed and we had it all. What didn’t exist was any connective tissue between it — no way to ask a question that crossed system boundaries, no way to surface what the firm collectively knew about a company at a given moment.

The invisible cost isn’t any single wasted lookup. It’s the compounding drag of every analyst rebuilding context from scratch, every time, for every question.

  • What ratings changes has this analyst published in the last six months?
  • What material events has this issuer filed since our last note?
  • What were the key estimate revisions going into the last earnings cycle?

Every one of these questions had an answer. None of them had a fast path to it, and the paths that did exist were long and painful.

The temptation at this point is to reach for a SaaS tool: Notion, Confluence, a better intranet, a fancier search layer over the existing systems. I gave that route genuine consideration. The problem is that knowledge‑management tools are designed around documents and pages — human‑authored artifacts that someone has already synthesized. What we were dealing with was something different: a dense web of relationships between entities (companies, analysts, ratings, events, filings, notes, bonds). The meaning lived not in any single document but in the connections between things.

A search index would tell you what documents mention a company. It wouldn’t tell you that this company’s coverage analyst changed eight months ago, that coverage intensity has dropped since the change, that the last three notes were all published within a week of an earnings filing, and that a material event landed last Tuesday that nobody has formally reacted to yet. That’s not a search problem. That’s a graph problem.
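To make that concrete, here is a toy sketch: a dict-of-nodes, list-of-edges graph in Python, with relationship names like COVERS and FILED_BY that are my own illustrations rather than the platform's actual schema. The point is that both observations above become one short traversal each once the relationships are explicit:

```python
from datetime import date

# Toy in-memory graph. Node and edge names are illustrative, not the real schema.
nodes = {
    "xyz": {"type": "Company", "name": "XYZ Inc"},
    "a1":  {"type": "Analyst", "name": "Prior Analyst"},
    "a2":  {"type": "Analyst", "name": "Current Analyst"},
    "e1":  {"type": "MaterialEvent", "date": date(2026, 3, 3)},
}
edges = [
    # (source, relationship, target, properties)
    ("a1", "COVERS", "xyz", {"start": date(2023, 1, 9), "end": date(2025, 7, 1)}),
    ("a2", "COVERS", "xyz", {"start": date(2025, 7, 1), "end": None}),
    ("e1", "FILED_BY", "xyz", {}),
]

def unreacted_events(company_id: str) -> list[str]:
    """Material events filed by the company that no note has reacted to."""
    filed = [s for (s, rel, d, _) in edges if rel == "FILED_BY" and d == company_id]
    reacted = {d for (_, rel, d, _) in edges if rel == "REACTS_TO"}
    return [e for e in filed if e not in reacted]

def coverage_changed_within(company_id: str, today: date, days: int = 270) -> bool:
    """Did the covering analyst change within the last `days` days?"""
    return any(
        p["end"] is not None and (today - p["end"]).days <= days
        for (_, rel, d, p) in edges
        if rel == "COVERS" and d == company_id
    )
```

A search index could return every document that mentions XYZ; only the edges can say that event e1 is still sitting there unanswered and that coverage recently changed hands.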

Into the unknown

I want to be honest about the vertigo of this moment. Deciding to build a custom graph‑backed platform rather than assemble something from existing parts is not a modest commitment. It means owning the schema, the ingestion pipelines, the query layer, the agent layer, the API, the interface. It means that when something breaks at 7:45 am before market open, the person on call is you. I get physically sick thinking of the responsibility I could end up with if this actually makes it all the way to production.

I went looking for evidence that this was the right call rather than an elaborate form of scope creep. I studied what Palantir built for institutional knowledge management. I looked at how Cognite approaches industrial knowledge graphs in the energy sector. I asked what made Databricks’ approach so effective. I read everything I could find about how these firms used graph‑based (and other) approaches to make sense of data and, more importantly, of the connections between the data.


I kept returning to a structural observation: financial research knowledge is fundamentally relational, and relational knowledge degrades when you store it in flat structures. A research note isn’t just a document. It’s a relationship between an analyst, a company, a rating, a date, a set of estimates, and a market context. Strip away those relationships and you have a PDF. Keep them and you have something you can reason about.

Early schema sketches were humbling

My first attempt at modeling the domain felt clean — companies, analysts, notes, events — until I started trying to answer real questions against it. The schema didn’t know the difference between an analyst covering a company and an analyst having covered a company. It couldn’t represent the difference between a rating that was current and one that had been superseded. Temporal validity is genuinely hard to model, and I’d underestimated it.

I also underestimated how much the data connectors would teach me about the domain. Building the integration with regulatory‑filing feeds forced me to understand what “material event” actually means in practice versus in theory. The filings integration surfaced the gap between how events are categorized and how they’re actually used by analysts.

Early naive approach — the code that never shipped

# Flattening event data into a document store
# (search_index is a stand-in for whatever document-store client is in use)
def store_event(event: dict) -> None:
    doc = {
        "company": event["issuer_name"],
        "date": event["published"],
        "type": event["category"],
        "text": event["body"]
    }
    search_index.add(doc)

# The problem: we've lost the relationship between the event
# and the company's coverage record, analyst assignment,
# and any notes published in response to it.
# Querying "what did we do after this event?" becomes impossible.

That code never made it into the real system, but writing it clarified exactly why it couldn’t.
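For contrast, a graph‑native version of the same write, again sketched with hypothetical helpers rather than the real ingestion code, keeps the event attached to the things an analyst will later ask about:

```python
# Minimal graph-writer sketch. The event becomes a node, and its context
# becomes explicit edges instead of flattened strings.
graph = {"nodes": {}, "edges": []}

def add_node(node_id: str, node_type: str, **props) -> None:
    graph["nodes"][node_id] = {"type": node_type, **props}

def add_edge(src: str, rel: str, dst: str) -> None:
    graph["edges"].append((src, rel, dst))

def store_event(event: dict) -> None:
    add_node(event["event_id"], "MaterialEvent",
             date=event["published"], category=event["category"])
    add_edge(event["event_id"], "FILED_BY", event["issuer_id"])

def notes_after(event_id: str) -> list[str]:
    """'What did we do after this event?' is now a one-hop traversal."""
    return [src for (src, rel, dst) in graph["edges"]
            if rel == "REACTS_TO" and dst == event_id]
```

When a note reacting to the event lands, ingestion adds a REACTS_TO edge, and the question the flat version made impossible becomes answerable.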

What worked

The decision that unlocked everything was committing to graph‑native modeling from the start, rather than treating the graph as a layer on top of something relational.

The node types that eventually stabilised — Company, Analyst, Sector, CoverageRecord, ResearchNote, MaterialEvent, Filing, BondIssue, EstimateSnapshot — weren’t designed top‑down. They emerged from asking “what questions do analysts actually ask?” and working backwards. Every node type represents a thing that analysts reason about. Every edge type represents a relationship that changes the answer to a question.

# Excerpt from graph/schema/nodes.py — illustrating the principle
# A CoverageRecord isn't just a link between Analyst and Company.
# It carries its own temporal properties and state.

from dataclasses import dataclass
from datetime import date

@dataclass
class CoverageRecord:
    analyst_id: str
    company_id: str
    rating: str
    target_price: float | None
    currency: str
    coverage_start: date
    coverage_end: date | None      # None = currently active
    is_primary: bool
    last_note_date: date | None

The coverage_end field looks trivial, but it took three schema iterations to get there. Without it you cannot answer:

  • “Who covered this company before the current analyst?”
  • “What is the continuity of coverage?”
  • “Has a company in our universe gone unreferenced for ninety days?”

The schema is an argument about what matters. Every field is a claim that this piece of information is worth carrying forward. Getting the schema genuinely right turned out to be the most intellectually demanding part of the foundation phase.

The agent architecture followed a similar principle. Rather than a single general‑purpose assistant, the system needed specialists:

  • Company snapshot agent – assembles a complete current picture.
  • Earnings preparation agent – pulls together everything relevant before a call.
  • Material event monitor – watches regulatory filings and surfaces them to the right analyst.

Each agent is narrow, but the graph gives each narrow agent access to the full relational context, so a focused question yields a rich answer.
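The shape of a specialist, as a hedged sketch: each agent is essentially a fixed set of graph queries whose results become a model's context. The helper names on the graph object here are my own illustrations, not the platform's API.

```python
def company_snapshot(company_id: str, graph) -> dict:
    """Assemble the current picture for one company.

    The agent's job is not to know everything; it is to know which
    relationships matter for this particular question.
    """
    return {
        "coverage": graph.current_coverage(company_id),
        "recent_notes": graph.notes_since(company_id, days=180),
        "open_events": graph.unreacted_events(company_id),
        "estimates": graph.latest_estimates(company_id),
    }

# A stub standing in for the real query layer, just to show the flow.
class StubGraph:
    def current_coverage(self, cid): return {"analyst": "a2", "rating": "BUY"}
    def notes_since(self, cid, days): return ["note-42"]
    def unreacted_events(self, cid): return ["event-7"]
    def latest_estimates(self, cid): return {"eps_fy26": 3.10}

snapshot = company_snapshot("XYZ", StubGraph())
```

Swap the stub for the real query layer and the earnings-preparation or event-monitor agents are the same pattern with a different set of queries.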

What this changed

The commit that landed this foundation — 79 files, > 40 000 lines — is almost certainly the densest single push this project will ever see. Normally that’s a warning sign; here it reflects something real: you can’t build half a knowledge graph. The schema, the connectors, the writers, the query layer — they form a system and only work together.

What I’d do differently is start the data‑connector work earlier and treat it as domain research rather than pure engineering. Every connector teaches you something about the data it touches. The bond‑data integration, for example, changed my understanding of how the fixed‑income side of the coverage universe needed to be modeled. Had I done that integration first, I would have designed a better initial schema.

The deeper lesson: knowledge infrastructure is never purely a technical problem. It’s a problem about how people think, what they need to know, and when they need to know it. The right architecture mirrors the actual cognitive work — not the one that’s technically elegant in isolation.

The dashboard, the agents, the API — those are expressions of an idea about what the research desk could become. The graph is the idea itself.

Next up: the graph‑database selection process looked like a technical decision. It turned out to be a question about operational reality — and the answer surprised me.

What’s the most valuable piece of knowledge at your organisation that currently lives only in someone’s head? And what would it take to change that?

Follow along — Part 2 drops soon

Part 1 of 7 in the series “Building a research hive”
