Mark Nottingham

Using AI to Evaluate Internet Standards (Part Two)

Wednesday, 25 March 2026

Standards · Internet and Web

I’ve previously looked at using AI as a tool to evaluate technical standards efforts – basically, asking commercially available chatbots what they think. However, “AI” is more than off-the-shelf, general-purpose chatbots. Can we do better by grounding the model in a specific context?

I’ve been looking for ways to use NotebookLM for a while: grounding a chatbot in a specific set of documents allows you to interact with them in a genuinely new way.

The breakthrough question for me was simple: What if those documents were the records of a working group? Thanks to record-keeping requirements, working groups must keep meeting minutes, document drafts are archived, and groups often maintain additional records like issue lists and meeting transcripts.

Feed all of that into NotebookLM and you can effectively chat with the history of a standards effort – asking about why a particular choice was made, who participated, what objections came up, and how a specification evolved.

I suspect this capability could be significant, precisely because the barriers to entry for tracking and understanding standards work are so high. There is simply too much going on — too many emails, issues, and drafts — for most people to follow.

If successful, this technique might help make standards efforts more legible.

AI Preferences

My first go at this technique was in a working group I chair, AI Preferences. We needed a way to get new and casual participants up to speed on discussions, so that we didn’t need to keep repeating the same arguments.

Here’s the notebook I created.1 I asked it to summarise the arguments against proposals for a “use” term and a “search” term in the vocabulary.

Privately, I got feedback from new participants that these were very useful – and, critically, I was able to create them without injecting my own biases.

GEOPRIV

Another test case is the now-finished IETF work on Geolocation Privacy. I wasn’t involved in this group, but have long heard my IETF colleagues whisper about it in hushed tones; it didn’t succeed, and caused a lot of pain along the way.

After gathering the relevant documents and dragging them into a notebook,1 I asked:

Why did GEOPRIV fail?

Here’s the full response. Martin Thomson (who was intimately involved in that work) reviewed that answer and said:

The privacy part is broadly correct. The whole on-behalf-of arrangement did lead to some fairly bitter fights. […] Fights were common. The part about wars is entirely accurate. I’m not sure about the over-engineering part, though maybe that relates to the privacy aspect, which is fair. The final thing about lack of commercial success is broadly right, modulo successful deployments for emergency services geolocation.

So I’d say that this is maybe 80%.

A New Tool

The hard part is getting all of the documents together in one place to feed into NotebookLM. To make that easier, at least for IETF groups, I2 created a new tool, ietf-notebook.

You can install it using pipx:

pipx install ietf-notebook

Then, use it to gather all of a group’s drafts, RFCs, meeting minutes and transcripts, its charter, and optionally its GitHub issues into a directory, ready for dragging into a new notebook, so you can chat with that group’s history.
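
As a rough sketch of what that gathering step involves (the endpoint path and query parameters below are assumptions based on the public IETF Datatracker API, not how ietf-notebook necessarily works), here’s how one might build a query for a group’s document list:

```python
# Sketch: construct an IETF Datatracker API URL listing a working
# group's documents. The endpoint path and parameter names here are
# assumptions about the public Datatracker API, not the actual
# implementation of the ietf-notebook tool.
from urllib.parse import urlencode

BASE = "https://datatracker.ietf.org/api/v1"

def document_list_url(group: str, doc_type: str = "draft") -> str:
    """Return a URL for the JSON list of a group's documents of a given type."""
    params = urlencode({
        "group__acronym": group,  # e.g. "geopriv", "aipref"
        "type": doc_type,         # e.g. "draft"
        "format": "json",
    })
    return f"{BASE}/doc/document/?{params}"

# e.g. document_list_url("geopriv") yields a query for GEOPRIV's drafts
```

Fetching each URL in such a list, plus the group’s charter, minutes, and transcripts, gives you the directory of files to drag into a notebook.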

It’s still rough, so bug reports, suggestions, and improvements are most welcome. In my experience, it takes less than a minute to gather the documents for most groups, so you can be chatting with a group in almost no time.

If you want to see a demo first, check out the notebooks for AIPREF, DIEM, and GEOPRIV.1

  1. You’ll need to be logged into Google to use these notebooks.

  2. OK, Gemini.