Join the Community

23,587
Expert opinions
41,339
Total members
358
New members (last 30 days)
191
New opinions (last 30 days)
29,160
Total comments

Should verifiable data be mandatory for LLMs?

Yes (ChatGPT agrees with me): it is very much planned, and already in progress, that large language models (LLMs) will use verifiable data (e.g., Verifiable Credentials, or VCs) when available. This is a key area of development at the intersection of AI and digital trust infrastructure.

🔍 What does this mean?

LLMs are powerful at text generation and reasoning, but they do not inherently know what is true. Verifiable data solves this by adding:

✅ Trustworthiness — the data is cryptographically signed.

🧾 Proofs — it comes from a known issuer and can be verified.

👤 Accountability — it’s tied to real people or organizations (via wallets, DID, etc.).
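The three properties above can be illustrated with a deliberately simplified sketch in Python. Real Verifiable Credentials use public-key signatures (e.g., Ed25519) with JSON-LD or JWT proof formats; the HMAC scheme, key, and DIDs below are invented stand-ins to show the issue-then-verify flow:

```python
import hashlib
import hmac
import json

# Illustrative only: real VCs use asymmetric signatures, not a shared secret.
ISSUER_KEY = b"issuer-shared-secret"  # hypothetical issuer key

def sign_credential(claims: dict) -> dict:
    """Issuer side: attach a proof over the canonicalised claims."""
    payload = json.dumps(claims, sort_keys=True).encode()
    proof = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "proof": proof}

def verify_credential(credential: dict) -> bool:
    """Verifier side: recompute the proof and compare in constant time."""
    payload = json.dumps(credential["claims"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, credential["proof"])

vc = sign_credential({"issuer": "did:example:university",
                      "subject": "did:example:alice",
                      "degree": "MSc Computer Science"})
print(verify_credential(vc))    # True
vc["claims"]["degree"] = "PhD"  # tampering breaks the proof
print(verify_credential(vc))    # False
```

The point is the trust model, not the cryptography: any change to the claims invalidates the proof, and the proof ties the claims back to a known issuer.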


🔧 Is this being planned or implemented? Yes — in several areas:

1. EU digital infrastructure (e.g. EUDI Wallet):

The EU Digital Identity Wallet (EUDI Wallet) and regulations like eIDAS2, AI Act, and Data Act clearly anticipate AI agents and LLMs using verifiable credentials.

The EU emphasizes that automated decision-making and trusted AI must rely on data that is both machine-readable and verifiable.


2. Sector pilots and agent models:

Examples:


  • A company wants to delegate tasks to an AI agent (e.g., procurement, onboarding). The AI needs to verify identity, legal status, credentials, etc.
  • A recruitment LLM is told: “only select candidates with a certified professional degree” — it can query credentials stored in a verifiable wallet.
  • AI agents handling invoices or government forms may verify company registrations via VCs issued by a national business registry.
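The recruitment example above reduces to a filtering step over wallet-presented credentials. This sketch assumes a hypothetical data shape and helper; none of the names come from a real wallet API:

```python
# Hypothetical wallet presentations: each candidate offers credentials
# that either passed or failed verification against their issuer.
candidates = [
    {"name": "Alice", "credentials": [
        {"type": "CertifiedProfessionalDegree", "verified": True}]},
    {"name": "Bob", "credentials": [
        {"type": "CertifiedProfessionalDegree", "verified": False}]},
    {"name": "Carol", "credentials": [
        {"type": "DrivingLicence", "verified": True}]},
]

def holds_verified(candidate: dict, credential_type: str) -> bool:
    """Select only candidates with a *verified* credential of the given type."""
    return any(c["type"] == credential_type and c["verified"]
               for c in candidate["credentials"])

selected = [c["name"] for c in candidates
            if holds_verified(c, "CertifiedProfessionalDegree")]
print(selected)  # ['Alice']
```

Note that Bob is excluded despite presenting the right credential type: an unverifiable claim carries no weight, which is exactly the constraint the instruction "only select candidates with a certified professional degree" imposes on the agent.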


3. Agent infrastructure and protocols:

  • Open-source and standards-based initiatives (like W3C VC, Trust over IP, EDIW, Findy, etc.) already define how LLM-based agents can:
    • query verifier APIs or digital wallets,
    • parse VC formats (like JSON-LD with proofs),
    • and use trusted metadata (e.g., registry-backed DID Documents, trust lists).
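To make "parse VC formats" concrete, here is a minimal credential shaped like the W3C VC Data Model (the field names follow the spec; the issuer, DIDs, and values are invented) and the fields an agent would typically extract before checking the proof:

```python
import json

# Minimal document shaped like the W3C VC Data Model; values are invented.
raw = """
{
  "@context": ["https://www.w3.org/2018/credentials/v1"],
  "type": ["VerifiableCredential"],
  "issuer": "did:example:business-registry",
  "issuanceDate": "2024-01-15T00:00:00Z",
  "credentialSubject": {
    "id": "did:example:acme-ltd",
    "legalName": "ACME Ltd",
    "registrationNumber": "FI-1234567"
  },
  "proof": {
    "type": "Ed25519Signature2020",
    "verificationMethod": "did:example:business-registry#key-1"
  }
}
"""

vc = json.loads(raw)
issuer = vc["issuer"]              # who attests to the claims
subject = vc["credentialSubject"]  # the claims themselves
# The verification method points at a key in the issuer's DID Document,
# which the agent resolves via a registry or trust list.
proof_method = vc["proof"]["verificationMethod"]
print(issuer, subject["legalName"], proof_method)
```

An actual agent would follow this parse with signature verification against the resolved DID Document and a check of the issuer against a trust list; those steps are omitted here.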

💡 Why is this a big deal?

It makes LLMs:


  • Reliable (using verified sources, not just text prediction),
  • Auditable (who said what, when),
  • Compliant (supporting legal, regulatory, and business process requirements).


✅ Summary

Yes, the integration of verifiable data into LLMs and AI agents is clearly planned, both technically and politically — especially in Europe. This is a key enabler for:

  • Trusted digital interactions,
  • Automating business and public processes,
  • And ensuring AI systems are grounded in facts, not just plausible text.

External

This content is provided by an external author without editing by Finextra. It expresses the views and opinions of the author.

