A few weeks ago, mid-conversation, I was asked what project a mutual acquaintance was working on. I queried the graph. The answer came back in under a second: not just the project name, but the relationship between that person and a company I'd stored two weeks earlier in a completely different context. I hadn't connected them manually. The graph had inferred it.
The person I was helping hadn't asked about the company. They didn't even know the connection existed. But the graph did. That was the moment it stopped feeling like a database and started feeling like a second brain.
That's what this update is about.
The Island Problem
Here's something nobody talks about when they evangelize knowledge graphs: most of them end up as archipelagos. You add facts diligently for a few weeks, and then you query your graph and discover that half your nodes are floating alone: no edges in, no edges out, just a label sitting in the void.
These aren't useless facts. They're just invisible ones. A node with no connections can't be reached by traversal. It doesn't show up when you're exploring a subgraph. It can't participate in recommendations. It's technically there and practically nowhere.
The honest truth is that humans are bad at consistently drawing connections. You remember to store the fact but forget to link it. You add a new person but don't connect them to the project you discussed with them last Tuesday. The graph accumulates dark matter: nodes that exist but don't illuminate anything.
We sat with this problem for a while before deciding what to do about it. The tempting answer was to auto-connect everything and let an AI figure out the edges. We didn't do that. Here's why.
The Inference Engine: Suggestions, Not Rewrites
We shipped a background inference engine. Every 6 hours, it runs a sweep across your graph, finds isolated nodes, and looks for potential bridges: other nodes that probably relate to the orphan based on shared context, co-occurrence in your fact history, or semantic similarity.
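As a rough sketch of the first two steps of that sweep (this is an illustration, not DejaView's actual implementation; `find_orphans` and `score_bridges` are names I've made up), finding isolated nodes and ranking candidate bridges by co-occurrence might look like:

```python
from collections import Counter

def find_orphans(nodes, edges):
    """Nodes with no inbound or outbound edges."""
    connected = {n for src, _, dst in edges for n in (src, dst)}
    return [n for n in nodes if n not in connected]

def score_bridges(orphan, fact_history, top_k=3):
    """Rank candidate bridges by how often another entity
    co-occurs with the orphan in stored facts."""
    co = Counter()
    for entities in fact_history:  # each fact mentions a set of entities
        if orphan in entities:
            co.update(e for e in entities if e != orphan)
    return co.most_common(top_k)

nodes = ["Jake Chen", "Acme Corp", "GraphConf"]
edges = [("Jake Chen", "attended", "GraphConf")]
facts = [{"Acme Corp", "GraphConf"}, {"Acme Corp", "Jake Chen"}]

print(find_orphans(nodes, edges))         # Acme Corp has no edges yet
print(score_bridges("Acme Corp", facts))  # entities that co-occur with it
```

A real engine would blend co-occurrence with semantic similarity and shared context, but the shape is the same: isolate the orphans, rank the bridges.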
When it finds a candidate bridge, it proposes it as a dashed edge in the graph viewer. Nothing gets written automatically. You review suggestions in the dedicated Review Inferences page: one click to approve, one click to dismiss, or just ignore them. The engine is also exposed at POST /v1/infer/run if you want to trigger a sweep yourself rather than wait for the 6-hour schedule. Nothing modifies your graph without your explicit confirmation.
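The contract here is small enough to state in code. A minimal sketch of the review flow, with illustrative names (`Graph`, `Suggestion`, `approve`, `reject` are mine, not DejaView's API): the only code path that writes a confirmed edge is the one triggered by explicit approval.

```python
from dataclasses import dataclass, field

@dataclass
class Graph:
    edges: set = field(default_factory=set)         # confirmed edges
    suggestions: list = field(default_factory=list) # dashed, pending review

@dataclass(frozen=True)
class Suggestion:
    src: str
    rel: str
    dst: str
    reason: str  # why the engine proposed it

def approve(graph, s):
    """Writing to the graph happens only here, on explicit approval."""
    graph.suggestions.remove(s)
    graph.edges.add((s.src, s.rel, s.dst))

def reject(graph, s):
    graph.suggestions.remove(s)  # nothing is ever written

g = Graph()
s = Suggestion("Jake Chen", "works_with", "Acme Corp", "co-occur in 2 facts")
g.suggestions.append(s)
approve(g, s)
```

Ignoring a suggestion is simply not calling either function; the dashed edge stays dashed.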
The design principle here is simple: the agent proposes, the human disposes. We could write inferred edges directly; it would feel more magical. But your knowledge graph is only as good as your trust in it. If an AI is silently rewriting connections you didn't authorize, you stop trusting the graph. And an untrusted graph is worse than no graph at all: it's noise you have to filter.
The dashed edge is a conversation. It says: I think these two things are related, here's why, but you're the authority here. That's the right posture for AI working with human memory. It's not about autonomy. It's about augmentation.
Merge: Because Duplicates Are Worse Than Missing Data
There's a problem that silently degrades every knowledge graph that sees real usage: the same entity appears under three slightly different names. "Jake Chen" and "Jake" and "Jake C." are all the same person, but the graph doesn't know that. Every fact stored under the wrong label is a dead end.
Missing data is an absence. Your graph doesn't know something; that's fine, that's fixable. But duplicated data is an active lie. The graph thinks it knows about two different people when it's really storing half the facts about one person on each of two nodes. Queries fail silently. Relationships point to the wrong canonical node. Subgraphs fragment into ghosts of the same entity.
Duplicates split your signal. If 60% of the relationships involving a real person are attached to "Jake" and 40% are attached to "Jake Chen", neither node gives you the full picture. You'll miss connections. The graph will miss connections. Nobody benefits.
We shipped entity merge in two places. On the What do I know about page, you can select citation cards across different entity names and merge them: pick which name becomes canonical, and all relationships migrate over. In the graph viewer, you can click two nodes and choose "Merge" from the toolbar; same flow, same outcome.
All edges from both nodes get attached to the canonical node. The duplicate gets cleaned up. The graph gets denser without getting noisier. It's one of those features that's boring to describe and deeply satisfying to use, especially when you run it on a graph that's been accumulating facts for a few weeks and you suddenly see a cluster of connections that were always there, just fragmented.
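The merge itself is conceptually a re-pointing operation. A minimal sketch under my own assumptions (edges as `(src, relation, dst)` triples; `merge_entities` is an illustrative name, not DejaView's API):

```python
def merge_entities(edges, duplicate, canonical):
    """Re-point every edge touching `duplicate` at `canonical`,
    then drop the duplicate node entirely."""
    merged = set()
    for src, rel, dst in edges:
        src = canonical if src == duplicate else src
        dst = canonical if dst == duplicate else dst
        if src != dst:  # drop self-loops created by the merge
            merged.add((src, rel, dst))
    return merged

edges = {
    ("Jake", "works_at", "Acme Corp"),
    ("Jake Chen", "attended", "GraphConf"),
    ("Jake", "knows", "Jake Chen"),  # an edge between the two duplicates
}
print(merge_entities(edges, "Jake", "Jake Chen"))
```

Note the self-loop case: an edge that connected the two duplicates to each other would become a node pointing at itself after the merge, so it gets dropped rather than migrated.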
Connect Mode: Drawing Knowledge Into a Graph
The graph viewer has a new mode: Connect. Click a node to start a connection. Click a second node to complete it. A modal appears with AI-suggested relationship types, generated from what the graph already knows about both entities, plus a free-text field if you want to define the relationship yourself.
Suggested types aren't generic. If you're connecting a person to a company and the graph already knows the person attended a conference where that company presented, the suggestion might be met_at or explored_partnership_with rather than a bland RELATES_TO. The suggestions are context-aware: use one, or type your own.
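The conference example above can be sketched as a simple context lookup. This is an illustration of the idea, not DejaView's suggestion logic; `suggest_relationships` and the `attended` / `presented_at` relation names are assumptions of mine:

```python
def suggest_relationships(person, company, edges):
    """Suggest edge types for person -> company from shared context.
    Falls back to a generic type when the graph knows nothing useful."""
    # events the person attended
    events = {dst for src, rel, dst in edges
              if src == person and rel == "attended"}
    # did the company present at any of those events?
    if any((company, "presented_at", e) in edges for e in events):
        return ["met_at", "explored_partnership_with"]
    return ["RELATES_TO"]

edges = {
    ("Jake Chen", "attended", "GraphConf"),
    ("Acme Corp", "presented_at", "GraphConf"),
}
print(suggest_relationships("Jake Chen", "Acme Corp", edges))
```

The point is that the suggestion quality compounds with graph density: the more context the two nodes share, the more specific the proposed relationship can be.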
There's something qualitatively different about connecting nodes directly in the visualizer versus writing facts via the API. The visual act of drawing a line between two things feels like thinking. You see the graph reorganize in real time: clusters shifting, new paths opening up. It changes how you interact with your own knowledge.
I've started using Connect mode to do a kind of graph review: scrolling the visualizer, spotting nodes that look like they should be adjacent, and drawing the edge that was implicitly always there. It's the closest thing to "working in your second brain" that I've felt so far.
Where This Is Going
The graph has been running in real conditions now. These three features (the inference engine, entity merge, and Connect mode) came directly from using it every day and hitting the walls. Not from a product roadmap, not from user research. From friction.
The graph getting smarter isn't magic. It's the compound interest on consistent, connected facts, plus tooling that makes connecting them lower-friction than leaving them isolated.
DejaView is for people building with AI who want their agents to actually remember. Not session memory. Not a chat history that scrolls off the context window. A real, queryable, persistent knowledge graph that gets smarter the more you put into it, and that now starts connecting itself when you don't.
If you're building with agents and you've felt the pain of every new session starting from zero, this is what we built for you. The graph is live. The agents write to it. You control what sticks.