Age Assurance Just Became the Internet's New Access Layer

The internet was built without a native trust layer. For most of its history, that absence was tolerable, and even useful. It enabled openness, rapid growth, and the liability shields that protected early platforms. But that structural gap is now being filled, one layer at a time. The first layer being laid down isn't identity. It's age. 

This isn't being driven by a single law or jurisdiction. Regulators are learning from each other, expectations are aligning, and a common principle is emerging: access to certain digital environments must be mediated by reliable, privacy-preserving proof of age. 

Wallet- and credential-based ecosystems, including Concordium, are embedding age assurance into infrastructure rather than treating it as an external control. That shift is what makes this a structural change rather than just a compliance update.

The implications now extend well beyond adult content. AI systems, particularly chatbots and agents, are under growing scrutiny. Social media remains at the centre of regulatory pressure, with even US juries and federal judges showing signs that the First Amendment's free speech protections and historic assistance from Congress may no longer be an invincible shield for Silicon Valley. Marketplaces such as Amazon and eBay are increasingly expected to prevent underage access to regulated goods rather than abdicate that responsibility to their sellers. In each case, the question is no longer whether age checks are required, but how they are implemented and how effective they are.

As this pressure builds, the relationship between platforms, regulators, and users is changing. Compliance obligations are being placed firmly on platforms rather than users. If liability sits with the platform, then the platform must be able to rely on the integrity of the age assurance it deploys.

This is where weaker approaches begin to fail. Device-level age signals, such as the ages parents set in app stores and operating systems, and parental controls are structurally unreliable. Devices are shared, handed down, or configured by parents who may, in line with cultural support for parents' rights to determine how they raise their children, inflate a child's age to bypass restrictions. An 11-year-old registered as 14 today, to spare their parents the regular demands for verifiable parental consent under the US COPPA law, will be treated as 18 by such a system on their 15th birthday. These approaches can create persistent misinformation rather than meaningful safeguards.

The instinctive response of some organisations to growing regulation and legal jeopardy is to collect more identity data so they can know, and prove they know, the age of their users. But in practice, that significantly increases risk rather than reducing it.

The history of age verification makes this clear. In sectors where anonymity was essential, centralised data collection proved fatal. The Ashley Madison breach, in which adults seeking affairs were exposed, demonstrated how quickly trust collapses when sensitive data leaks, and that platform never recovered.

The broader lesson is straightforward. The only non-hackable database is the one that does not exist. This aligns with data minimisation requirements under GDPR and with emerging industry standards such as ISO/IEC 27566-1 and IEEE 2089.1. Centralised identity databases create cost, liability, and an irresistible honeypot. Avoiding their creation is both a privacy and a security strategy.

The alternative is to prove eligibility without revealing identity. This is the foundation of modern age assurance and what makes it viable as a scalable access layer.

Privacy-preserving techniques now make this possible in practice. The key concepts now becoming core requirements across the age assurance sector are: 

  • Zero-knowledge approaches allow a user to prove they are over a required age threshold without revealing their date of birth or any other identifying information.
  • Selective disclosure allows only the required attribute to be shared, rather than a full identity credential.
  • Double-blind architectures ensure separation between the user, the verification provider and the relying service, such that the platform can never identify its users and the service that checks a user's age does not know where they then use that proof.

In practical terms, this enables fully anonymised age checks: a user can demonstrate that they are over 13, 16, 18 or 21, or indeed under a particular age, without exposing any additional personal data.
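
To make these properties concrete, here is a minimal sketch, not a production design. All names are hypothetical, and a simple HMAC stands in for the real zero-knowledge proof and asymmetric signature machinery a deployed scheme would use. The point is the data flow: the only attribute that crosses any boundary is "over N".

```python
import hashlib
import hmac
import json
import secrets
import time

# Key held only by the age verification provider. A real deployment would
# use asymmetric signatures so relying services cannot forge tokens; the
# HMAC here is purely to keep the sketch self-contained.
ISSUER_KEY = secrets.token_bytes(32)

def issue_age_token(threshold: int) -> dict:
    """The provider has already checked the user's age out of band (document
    scan, facial age estimation, bank record, etc.). It issues a token
    asserting ONLY the threshold result: no name, no date of birth, no
    stable identifier ever leaves the provider."""
    claim = {
        "over": threshold,               # selective disclosure: one attribute
        "nonce": secrets.token_hex(16),  # fresh per token, so presentations
                                         # are unlinkable across services
        "expires": int(time.time()) + 86_400,
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    tag = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "tag": tag}

def relying_service_accepts(token: dict, required: int) -> bool:
    """The platform learns only 'over N' and freshness. It never learns who
    the user is, and the issuer never learns where the token was presented:
    the double-blind property."""
    payload = json.dumps(token["claim"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(token["tag"], expected)
        and token["claim"]["expires"] > time.time()
        and token["claim"]["over"] >= required
    )

token = issue_age_token(threshold=18)
print(relying_service_accepts(token, required=18))  # True
print(relying_service_accepts(token, required=21))  # False
```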

These approaches are increasingly being deployed in interoperable ecosystems. Tokenised models such as AgeAware, OpenAge, and Age Connect aim to allow users to verify once and reuse that proof across multiple services, even if they are served by competing age assurance providers. In Europe, the EU Digital Identity Wallet is being augmented by many Member States to allow selective, anonymous disclosure of an age attribute. Similar developments are underway in the UK Digital Identity and Attributes Trust Framework and state mobile driving licence (mDL) solutions in the USA.

Device-level approaches are also emerging. Apple has just introduced operating system-level age verification in the UK and South Korea, in addition to the age signals parents can set within its App Store. Meta is campaigning for those app store signals to be disclosed to developers and is backing the App Store Accountability Act, alongside other apps popular with children. Google is enabling selective disclosure of age from credentials supplied by third parties through its wallet infrastructure. These developments are significant, but they also raise yet more questions about interoperability, competition and control.

For age assurance to function as a true access layer, three conditions must be met:

  • Interoperability - users should be able to verify once and reuse that proof across services for a defined period of time.
  • User control - the credential should remain with the user, not be continuously obtained from a central authority.
  • Reliability - platforms need a level of assurance they can depend on, rather than signals inferred from behaviour or prompts.

The last of these is particularly important. Inferring age from behaviour or prompts is inherently unreliable. It assumes that behaviour correlates consistently with age. It does not. It is trivial to manipulate. A child instructed to mimic adult queries will quickly bypass inference-based systems. The result is safety theatre rather than meaningful protection.
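
The first two conditions can also be illustrated briefly. In this sketch, again with hypothetical names and structures, the credential sits with the user and is simply re-presented to each service during its validity window, with no central lookup and no behavioural inference involved:

```python
import time
from dataclasses import dataclass

@dataclass
class HeldCredential:
    """Hypothetical user-held proof: one attribute, one validity window."""
    over: int           # the single disclosed attribute
    issued_at: float
    valid_for: float    # reuse period agreed by the scheme, e.g. 24 hours

    def present(self, required: int) -> bool:
        # The user's wallet decides locally whether the proof still holds;
        # no call back to a central authority is needed to reuse it.
        in_window = time.time() < self.issued_at + self.valid_for
        return in_window and self.over >= required

cred = HeldCredential(over=18, issued_at=time.time(), valid_for=86_400)
for service in ("video_site", "marketplace", "social_app"):
    # Verify once, reuse across services: each relying party receives a
    # fresh presentation of the same proof rather than triggering a new
    # verification event.
    print(service, cred.present(required=18))
```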

As agents begin to access services on behalf of users, a new set of questions emerges for age assurance.

A logical starting point is that an agent should inherit the age status of its controller. This creates what might be described as a “Know Your Agent” problem. Ensuring that age attributes are correctly inherited, persist across sessions, and cannot be circumvented will require new technical and policy approaches, particularly where agents operate with limited or intermittent user oversight.
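
A sketch of that simple-inheritance model, using hypothetical structures, might look like the following: the agent carries a delegation record derived from its controller's age status, bounded by its own expiry and scope, so the inheritance can be narrowed or allowed to lapse.

```python
import secrets
import time
from dataclasses import dataclass, field

@dataclass
class AgeStatus:
    over: int        # verified threshold, e.g. 18
    expires: float   # when the controller's own proof lapses

@dataclass
class AgentDelegation:
    """Hypothetical record an agent presents downstream. It inherits the
    controller's age status but carries its own, shorter expiry and an
    explicit scope, so circumvention requires more than reusing a stale
    delegation."""
    inherited: AgeStatus
    scope: tuple             # services the agent may act on
    delegation_id: str = field(default_factory=lambda: secrets.token_hex(8))
    expires: float = field(default_factory=lambda: time.time() + 3_600)

def agent_may_access(d: AgentDelegation, service: str, required: int) -> bool:
    now = time.time()
    return (
        now < d.expires                  # the delegation itself is still live
        and now < d.inherited.expires    # so is the controller's proof
        and service in d.scope           # the agent stays within scope
        and d.inherited.over >= required
    )

controller = AgeStatus(over=18, expires=time.time() + 86_400)
delegation = AgentDelegation(inherited=controller, scope=("news_api",))
print(agent_may_access(delegation, "news_api", required=18))    # True
print(agent_may_access(delegation, "video_site", required=18))  # False
```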

However, simple inheritance is unlikely to be sufficient on its own. An agent may act at scale, across multiple services, and beyond the immediate visibility of the user. If a user proves their age and then delegates control to an agent, that agent can access age-restricted content or functionality continuously and without friction. This raises a fundamental question of accountability.

One approach is to treat the agent as an extension of the user, carrying the same age status and permissions. This preserves continuity, but it also risks enabling unrestricted downstream access, including exposure to content that may ultimately be consumed by a child if control is shared or transferred.

An alternative is to treat the agent as a distinct actor, requiring its own constraints and safeguards. In this model, the agent may be permitted to access restricted environments, but is responsible for filtering, moderating, or constraining what is returned to the user. That shifts part of the safety function into the agent itself, but introduces new challenges around enforcement and verification.
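
A sketch of the distinct-actor model, again purely illustrative: the agent retrieves on its own authority but applies an age filter before anything reaches the user, and it is exactly that filtering step which is hard to enforce or verify from outside.

```python
from dataclasses import dataclass

@dataclass
class Item:
    title: str
    min_age: int   # rating attached by the upstream service

def upstream_search(query: str) -> list:
    # Stand-in for the agent's own, privileged access to a restricted service.
    return [Item("general article", 0), Item("age-restricted clip", 18)]

def agent_fetch(query: str, recipient_over: int) -> list:
    """The safety function moves into the agent: it may see everything the
    service returns, but passes on only what the recipient's verified age
    status permits. Auditing that this filter actually runs is the open
    enforcement problem."""
    return [i for i in upstream_search(query) if i.min_age <= recipient_over]

print([i.title for i in agent_fetch("news", recipient_over=13)])
# ['general article']
```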

Neither model is fully resolved. But the direction is clear: age assurance will need to extend beyond the point of initial access and into the chain of delegation, and that is a problem the industry has not yet seriously begun to solve.

Robust age assurance can also provide an additional benefit in this context. Certain methods function as a form of proof of life, confirming not only that a user is of age, but that they are a real person rather than an automated actor. As AI-driven interactions expand, this distinction will become increasingly important, both for child safety and for the integrity of digital services more broadly.

Age verification is no longer about gating individual websites. It is becoming part of the internet's core access architecture. The question is not whether this layer will exist, but how it will be built. Built on centralised identity collection, it will struggle to earn trust. Built on privacy-preserving, interoperable, and independently verifiable systems, it can be a stable foundation for the next phase of the internet.

About the Author

Iain Corby is Executive Director of the Age Verification Providers Association (AVPA), which he joined in 2019. Previously, he was a management consultant at Deloitte delivering large-scale IT-enabled business transformation, led a research team in the UK Parliament, and was Deputy CEO of GambleAware. He studied Politics, Philosophy and Economics at Balliol College, Oxford, and holds an MBA from UCLA.