Singapore’s Foreign Minister Builds An AI “Second Brain” Using NanoClaw, Says It Can Answer Every Question For A Diplomat

Many politicians around the world talk about wanting to promote AI; one Singaporean politician is actually building AI bots to help with his daily work.

Dr. Vivian Balakrishnan, Singapore’s Minister for Foreign Affairs, has publicly shared that he has built a personal AI assistant he describes as a “second brain” for a diplomat — one that answers every question, researches topics, drafts speeches, provides daily briefings, and condenses information on demand. “It has become invaluable — I don’t dare switch it off!” he wrote in a Facebook post.

Who Is Vivian Balakrishnan?

Dr. Balakrishnan is not your typical politician dabbling in tech buzzwords. A trained ophthalmologist educated at the Anglo-Chinese School and National Junior College, he earned a President’s Scholarship to study medicine at the National University of Singapore in 1980, later becoming a Fellow of the Royal College of Surgeons of Edinburgh in 1991. He has served in Singapore’s Cabinet for over two decades and is currently the country’s top diplomat.

That a minister of his standing is not just endorsing AI but actually building and running his own system — on a Raspberry Pi, no less — is a signal worth paying attention to.

What He Built: NanoClaw on a Raspberry Pi

The system is built on two open-source foundations. The first is NanoClaw, a self-hosted Claude assistant created by developer Gavriel Cohen. It runs locally on a Raspberry Pi, connects to messaging channels like WhatsApp, Telegram, Slack, and Discord, processes voice notes and images, and runs scheduled tasks — all without relying on a cloud service.

The second is the LLM Wiki pattern conceived by Andrej Karpathy, the former Tesla Director of AI. Karpathy has written extensively about how standard LLMs suffer from a form of amnesia — they forget everything between sessions. His wiki pattern addresses this by extracting structured knowledge from raw sources rather than indexing them wholesale, building a compounding knowledge base over time.

Balakrishnan has combined both into a system that ingests his speeches, articles, and web clips, synthesises them into a structured knowledge graph, and surfaces relevant information automatically every time he interacts with the assistant.


The Technical Architecture

The full technical write-up, which Balakrishnan published as a GitHub Gist, reveals a surprisingly sophisticated stack for a side project.

At its core is a three-layer design. Raw sources — speeches, articles, and web clips saved via the Obsidian mobile app — feed into a custom knowledge graph tool called mnemon, which stores discrete facts as structured nodes in a SQLite database. These nodes are then synthesised into human-readable wiki pages, organised by entity, concept, and timeline, and browsable in Obsidian on macOS and iOS.
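To make the "facts as structured nodes" idea concrete, here is a minimal sketch of what a mnemon-style SQLite store might look like. The schema, table, and field names are assumptions for illustration; the gist does not spell out mnemon's actual design.

```python
import sqlite3

# Hypothetical fact store: each row is one discrete, synthesised fact
# tied to an entity, rather than a chunk of raw document text.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE facts (
        id INTEGER PRIMARY KEY,
        entity TEXT NOT NULL,   -- e.g. a person, country, or treaty
        fact TEXT NOT NULL,     -- one synthesised statement
        source TEXT,            -- the speech, article, or web clip it came from
        ingested_at TEXT DEFAULT CURRENT_TIMESTAMP
    )
""")

conn.execute(
    "INSERT INTO facts (entity, fact, source) VALUES (?, ?, ?)",
    ("ASEAN", "Founded in 1967 by five member states.", "speech-2024-07.md"),
)
conn.commit()

for entity, fact in conn.execute("SELECT entity, fact FROM facts"):
    print(f"[{entity}] {fact}")
```

Because each node is a single distilled statement with a provenance field, the wiki layer can later group rows by entity or timeline without re-parsing the original documents.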

The key insight: rather than doing simple retrieval-augmented generation (RAG), which fetches chunks of raw text, mnemon stores synthesised facts. Every time Balakrishnan asks a question, the system runs a semantic query against the knowledge graph and injects the most relevant facts as context before the AI responds — making the assistant progressively smarter as more material is ingested.
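The query-then-inject loop can be sketched in a few lines. This toy version uses a bag-of-words similarity stand-in so it runs anywhere; a real setup would use proper vector embeddings. The fact strings and function names are illustrative, not taken from mnemon.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Stand-in for a real embedding model: a simple bag-of-words vector.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Synthesised facts that would normally come from the knowledge graph.
facts = [
    "ASEAN was founded in 1967 by five member states.",
    "The minister delivered a speech on digital trade in 2024.",
]

def build_prompt(question: str, k: int = 1) -> str:
    # Rank stored facts against the question, inject the top-k as context.
    q = embed(question)
    ranked = sorted(facts, key=lambda f: cosine(q, embed(f)), reverse=True)
    context = "\n".join(f"- {f}" for f in ranked[:k])
    return f"Relevant facts:\n{context}\n\nQuestion: {question}"

print(build_prompt("When was ASEAN founded?"))
```

The effect is the same as the article describes: the model never answers from a cold start, because the most relevant distilled facts are prepended to every request.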

For privacy, the system is deliberately self-contained. Vector embeddings that power the semantic search run locally using Ollama on the Raspberry Pi 5, meaning no document content leaves the network. Voice notes are transcribed on-device via whisper.cpp for the same reason — the content of private conversations never reaches an external server.
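As a sketch of the privacy boundary, the embedding call might look like the following, using only Python's standard library against a locally running Ollama server. The endpoint path and model name are assumptions (the gist's exact model choice is not stated here); the key point is that the request never leaves localhost.

```python
import json
import urllib.request

# Assumed local Ollama embeddings endpoint; nothing leaves the machine.
OLLAMA_URL = "http://localhost:11434/api/embeddings"

def embed_locally(text: str, model: str = "nomic-embed-text") -> list:
    # Build a JSON request for the local server and return the vector.
    payload = json.dumps({"model": model, "prompt": text}).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["embedding"]

if __name__ == "__main__":
    try:
        vec = embed_locally("ASEAN was founded in 1967.")
        print(f"embedding dimension: {len(vec)}")
    except OSError:
        print("Ollama is not running locally; start it to try this sketch.")
```

Swapping the URL for a cloud provider would make the code shorter, which is exactly the trade-off the design refuses: keeping the embedding step on the Pi is what guarantees document content stays on the network.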

Each messaging group gets its own isolated Docker container, its own local memory store, and its own Claude session. Groups cannot read each other’s data. The main group can spawn specialised child agents for parallel tasks like web research or data extraction.


What It Can Do

The practical capabilities are extensive. The system handles WhatsApp and Gmail (read and send), processes voice notes and images, runs scheduled briefings, and can be used via a web portal for longer conversations outside WhatsApp. Balakrishnan notes it answers every question, researches topics, provides daily updates, drafts speeches, and condenses information.

The system is, by his account, in daily active use — continuously ingesting new material and building up its knowledge graph.

This isn’t the first time politicians have looked to AI for help with their jobs. Albania earlier gave all its MPs a personal AI assistant, and appointed an AI bot named Diella as a Cabinet Minister to prevent corruption in government tenders. Poland’s government has uploaded Polish-language datasets and models to Hugging Face.


The Bigger Point

“The diplomat who learns to work with AI will have a meaningful edge,” Balakrishnan wrote. “I think that edge is now.”

AI agents are increasingly discussed in the abstract — as policy questions, investment theses, or regulatory challenges. Balakrishnan is doing something different: treating the technology as a tool to be built and used, not debated. His setup is a system he assembled from open-source parts, documented in detail, and published for others to replicate.

Karpathy’s core observation — that LLMs forget everything between sessions and need a structured memory layer to be genuinely useful over time — is exactly what Balakrishnan’s system is designed to solve. In a role where institutional knowledge, policy context, and historical nuance are everything, a compounding memory system is not a luxury. It is the point.

Whether other governments and diplomats follow is an open question. But Singapore’s Foreign Minister has already answered it for himself.
