Yes (and ChatGPT agrees with me): it is very much planned, and already in progress, that large language models (LLMs) will use verifiable data (e.g., Verifiable Credentials, or VCs) when it is available. This is a key area of development at the intersection of AI and digital trust infrastructure.
🔍 What does this mean?
LLMs are powerful at text generation and reasoning, but they do not inherently know what is true. Verifiable data addresses this by adding the following (a minimal verification sketch follows the list):
✅ Trustworthiness — the data is cryptographically signed.
🧾 Proofs — it comes from a known issuer and can be verified.
👤 Accountability — it’s tied to real people or organizations (via wallets, DID, etc.).
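To make this concrete, here is a minimal sketch of how an application could check a credential before handing its claims to an LLM. It is only an illustration under stated assumptions: the credential is issued as a signed JWT (the VC-JWT format), the issuer's public key is already known, TRUSTED_ISSUERS is the application's own allowlist (the DID shown is made up), and the PyJWT library is used for the signature check.

```python
# Minimal sketch: verify a Verifiable Credential issued as a JWT (VC-JWT)
# before letting an LLM treat its claims as trusted context.
# Assumptions: ES256-signed JWT, issuer public key already obtained,
# TRUSTED_ISSUERS is our own allowlist (the DID below is hypothetical).

import jwt  # PyJWT

TRUSTED_ISSUERS = {"did:example:university-registry"}

def verify_credential(vc_jwt: str, issuer_public_key_pem: str) -> dict:
    """Return the credential's claims only if signature and issuer check out."""
    payload = jwt.decode(
        vc_jwt,
        issuer_public_key_pem,
        algorithms=["ES256"],                    # reject unsigned or alg-confused tokens
        options={"require": ["iss", "exp"]},     # issuer and expiry must be present
    )
    if payload["iss"] not in TRUSTED_ISSUERS:
        raise ValueError(f"Untrusted issuer: {payload['iss']}")
    # In the VC-JWT mapping, the credential body sits under the "vc" claim.
    return payload.get("vc", {}).get("credentialSubject", {})

# Only claims that pass verification are injected into the prompt as facts:
# claims = verify_credential(presented_jwt, issuer_key_pem)
# prompt = f"Answer using only these verified facts: {claims}"
```

The point of the sketch is the gate itself: anything that fails the cryptographic or issuer check never reaches the model as "knowledge".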
🔧 Is this being planned or implemented? Yes — in several areas:
1. EU digital infrastructure (e.g. EUDI Wallet):
The EU Digital Identity Wallet (EUDI Wallet) and regulations like eIDAS2, AI Act, and Data Act clearly anticipate AI agents and LLMs using verifiable credentials.
The EU emphasizes that automated decision-making and trusted AI must rely on data that is both machine-readable and verifiable.
2. Sector pilots and agent models:
Examples:
3. Agent infrastructure and protocols:
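This is where the idea becomes operational for AI agents. As an illustration only (the function and class names below are hypothetical and not taken from any specific framework), an agent runtime could expose credential verification as a tool, so the model can only cite data that has actually passed a check; in the EUDI Wallet ecosystem the presentation itself would typically arrive over a protocol such as OpenID for Verifiable Presentations.

```python
# Illustrative sketch only: how an agent framework might gate LLM answers on
# verified credentials. All names here are hypothetical.
from dataclasses import dataclass
from typing import Callable

@dataclass
class VerificationResult:
    verified: bool   # did the cryptographic check pass?
    issuer: str      # DID of the signing party
    claims: dict     # e.g. {"degree": "MSc Computer Science"}

def make_verified_lookup(
    verify: Callable[[str], VerificationResult]
) -> Callable[[str], str]:
    """Wrap a verifier (e.g. the VC-JWT check above) as a tool the agent can call."""
    def tool(presented_credential: str) -> str:
        result = verify(presented_credential)
        if not result.verified:
            # Tell the agent explicitly that the claim is unverified,
            # so it is not repeated to the user as fact.
            return "UNVERIFIED: do not treat this claim as established fact."
        return f"Verified by {result.issuer}: {result.claims}"
    return tool
```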
Why is this a big deal?
It makes LLMs:
✅ Summary
Yes, the integration of verifiable data into LLMs and AI agents is clearly planned, both technically and politically — especially in Europe. This is a key enabler for: