There Are No Standards Police

Wednesday, 13 March 2024

It happens fairly often. Someone brings a proposal to a technical standards body like the IETF and expects that just because it becomes an RFC, people will adopt it. Or they’ll come across a requirement in an RFC and expect it to be enforced, perhaps with some kind of punishment. Or they’ll get angry that people don’t pay attention to an existing standard and do their own thing. This is so common that there’s a ready response widely used by IETF people in these situations:

“There are no standards police.”

In other words, even if you do consider Internet standards to be a regulatory force, there is no enforcement mechanism. One of their key characteristics is that they’re voluntary. No one forces you to adopt them. No one can penalise you for violating a MUST; you have to want to conform.

Of course, you can still feel compelled to do so. If an interoperability standard gets broad adoption and everyone you want to communicate with expects you to honour it, you don’t have many options. For example, if you want to have a Web site, you need to interoperate with browsers; most of the time, they write down what they do in standards documents, and so you’ll need to conform to them.

But that’s the successful path. For every HTTP or HTML or TCP, there are hundreds of IETF RFCs, W3C Recommendations, and other standards documents that haven’t caught on – presumably much to their authors’ dismay. Adopting and using those documents was optional, and the market spoke: there wasn’t interest.

This aspect of the Internet’s standards has been critical to its success. If people were forced to adopt a specification just because some body had blessed it, it would place immense pressure on whatever process was used to create it. The stakes would be high because the future of the Internet would be on the line: businesses would play dirty; trolls would try to subvert the outcomes; governments would try to steer the results.

Of course, all of those things already happen in Internet standards; it’s just that the stakes are much lower.

So, voluntary adoption is a proving function – it means that not all of the weight of getting things right is on the standardisation process, and that process can be lighter than, for example, that used by governments or the United Nations (I’ll get back to that in a minute). That’s important, because it turns out that it’s already incredibly difficult to create useful, successful, secure, private, performant, scalable, architecturally aligned technical specifications that change how the Internet works within all of the other natural constraints encountered; it’s threading-the-needle kind of stuff. And we need to be able to fail.

Historically, voluntary standards have been encouraged by governments in their purchasing and competition policies – for example, OMB Circular A-119, EU Regulation 1025/2012, and the EC guidelines on horizontal agreements. Standards bodies are a ‘safe space’ where competitors can cooperate without risking competition enforcement, so long as they follow a set of rules – and one of the biggest rules is that adoption should be voluntary, not mandatory or coerced (at least by those setting the standard).

But it’s no secret that the policy landscape for the Internet has changed drastically. Now, there is increasing interest in using interoperability standards as a mechanism to steer the Internet. Academics are diving deep into the cultures and mechanisms of technical standards. Civil society folks are coming to technical standards bodies and trying to figure out how to incorporate human rights goals. Regulation is coming, and policy experts are trying to figure out how to get involved too.

This influx has caused concern that these relative newcomers are mistakenly focusing on standards as a locus of power when, in fact, the power is expressed in the adoption of a standardised technology. For example, Geoff Huston recently wrote an opinion piece along these lines.

I have no doubt that some still come to the IETF and similar bodies with such misapprehensions; we still have to remind people that ‘there are no standards police’ on a regular basis. However, I suspect that at least the policy people (including regulators) largely understand that it’s not that simple.

That’s because modern regulators are very aware that there are many influences on a regulatory space. They want to learn about the other forces acting on their target, as well as persuade and inform. Similarly, those who are involved in policymaking are intensely aware of the diffuse nature of power. In short, their world view is more sophisticated than people give them credit for.

(All that said, I’m still interested, and a bit nervous, to see what the Global Digital Compact contains when it becomes public.)

Another concern is that governments might try to influence Internet standards to suit their purposes, and then exert pressure to make the results mandatory – short-circuiting the proving function of voluntary standards.

Avoiding that requires separating the legal requirement from the standards effort, to give the latter a chance to fail. For example, MIMI (the IETF’s More Instant Messaging Interoperability effort) may or may not succeed in satisfying the DMA requirement for messaging interop. It is an attempt to establish voluntary standards that, if successful in the market, could satisfy legal regulatory requirements without a standards venue being pre-selected.

Of course, that pattern is not new – for example, accessibility work in the W3C is the basis of many regulatory requirements now, but wasn’t considered (AFAIK) by regulators until many years after its establishment.

Because of the newly intense focus on regulating technology, there’s likely to be increasing pressure on such efforts: both the pace and volume of standardisation will need to increase to meet the regulatory requirements that standards bodies want to attempt to address. I suspect that aligning the timelines and risk appetites of standards bodies and regulators is going to be one of the biggest challenges we’ll face if we want more successes.

So right now I believe the best way forward is to create ‘rails’ for interactions with legal regulators – e.g., improved communication, aligned expectations, and ways for an effort to be declined or to fail without disastrous consequences. Doing that will require some capacity building on the part of standards bodies, but no fundamental changes to their models or decision-making processes.

This approach will not address everything. There are some areas where at least some regulators and the Internet standards community are unlikely to agree. Standards-based interoperability may not be realistically achievable in some instances because of how entrenched a proprietary solution is. Decentralising a proprietary solution can face many pitfalls, and may be completely at odds with a centralised solution that already has broad adoption. And, most fundamentally, parties that are not inclined to cooperate can easily subvert a voluntary consensus process.

However, if things are arranged so that conforming to a voluntary consensus standard that has seen wide review and market adoption is considered prima facie evidence of conformance to a regulatory requirement, perhaps we do sometimes have standards police – in the sense that legal requirements can be used to help kickstart standards-based interoperability where it otherwise wouldn’t get a chance to form.