Exporting

This guide covers how to produce a valid PAM export from your own system.

At minimum, a valid PAM export must include:

  • schema: "portable-ai-memory"
  • schema_version: "1.0"
  • owner.id: A unique identifier for the user
  • memories: An array with at least one memory object

Each memory must include:

Field                  Description
id                     Unique identifier (SHOULD be UUID v4)
type                   One of the defined memory types
content                The memory content string
content_hash           SHA-256 hash of the normalized content
temporal.created_at    ISO 8601 timestamp of when the memory was created
provenance.platform    Source platform identifier (e.g., chatgpt, claude, gemini)

See the Minimal Example for the smallest valid PAM file.
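
Putting the required fields together, a conforming export has roughly the following shape. The values are illustrative: "fact" is assumed to be among the defined memory types, and the content_hash placeholder stands in for a digest computed per spec §6 below.

{
  "schema": "portable-ai-memory",
  "schema_version": "1.0",
  "owner": {
    "id": "user-001"
  },
  "memories": [
    {
      "id": "mem-001",
      "type": "fact",
      "content": "Prefers Python for scripting tasks",
      "content_hash": "sha256:<hex digest computed per spec §6>",
      "temporal": {
        "created_at": "2024-06-01T10:00:00Z"
      },
      "provenance": {
        "platform": "claude"
      }
    }
  ]
}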

Compute the content_hash per spec §6:

import hashlib
import unicodedata

def compute_content_hash(content: str) -> str:
    # Spec §6 normalization: trim, lowercase, NFC-normalize,
    # then collapse runs of whitespace to single spaces.
    text = content.strip().lower()
    text = unicodedata.normalize("NFC", text)
    text = " ".join(text.split())
    return f"sha256:{hashlib.sha256(text.encode('utf-8')).hexdigest()}"

Add semantic relationships between memories. Each relation requires id, from, to, type, and created_at:

{
  "relations": [
    {
      "id": "rel-001",
      "from": "mem-002",
      "to": "mem-001",
      "type": "derived_from",
      "confidence": 0.9,
      "created_at": "2026-02-15T00:00:00Z"
    }
  ]
}

Available relation types: supports, contradicts, extends, supersedes, related_to, derived_from. See spec §13.
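
If you generate relations programmatically, a minimal helper might look like the sketch below. The function name and the rel- id prefix follow the example above and are not mandated by the spec:

import uuid
from datetime import datetime, timezone

def make_relation(from_id: str, to_id: str, rel_type: str, confidence: float | None = None) -> dict:
    # rel_type should be one of the six relation types above (spec §13).
    relation = {
        "id": f"rel-{uuid.uuid4()}",
        "from": from_id,
        "to": to_id,
        "type": rel_type,
        "created_at": datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ"),
    }
    if confidence is not None:  # optional, as in the example above
        relation["confidence"] = confidence
    return relation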

Reference companion conversation files via the conversations_index array. Each entry uses a storage object to point to the external file:

{
  "conversations_index": [
    {
      "id": "conv-001",
      "platform": "claude",
      "title": "Infrastructure discussion",
      "temporal": {
        "created_at": "2024-06-01T10:00:00Z"
      },
      "message_count": 42,
      "derived_memories": ["mem-001", "mem-002"],
      "storage": {
        "type": "file",
        "ref": "conversations/conv-001.json",
        "format": "json"
      }
    }
  ]
}

The companion conversation files must conform to the Conversations Schema.

Generate a file-level integrity block for tamper detection. The checksum covers only the memories array, not the entire document:

  1. Take the memories array
  2. Sort memory objects by id field, ascending
  3. Canonicalize the sorted array using RFC 8785 (JSON Canonicalization Scheme)
  4. Compute SHA-256 of the canonical UTF-8 bytes
  5. Add the integrity block:
{
  "integrity": {
    "canonicalization": "RFC8785",
    "checksum": "sha256:<hex_digest>",
    "total_memories": 5
  }
}
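
A sketch of steps 1–4 in Python, assuming the third-party rfc8785 package (pip install rfc8785) for JSON Canonicalization Scheme output; any RFC 8785 implementation works:

import hashlib

import rfc8785  # assumed JCS library; produces canonical UTF-8 bytes

def compute_integrity_checksum(memories: list[dict]) -> str:
    sorted_memories = sorted(memories, key=lambda m: m["id"])  # step 2
    canonical = rfc8785.dumps(sorted_memories)  # step 3: RFC 8785 canonical bytes
    return f"sha256:{hashlib.sha256(canonical).hexdigest()}"  # step 4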

See spec §15 and the Integrity Verification guide for details.

For exports shared between systems, add a cryptographic signature for authenticity verification. See the Integrity Verification guide for the signature payload format and supported algorithms.

Always validate your export before distribution:

import json
import sys

from jsonschema import Draft202012Validator

# Load the published PAM schema and the export to be checked.
with open("portable-ai-memory.schema.json") as f:
    schema = json.load(f)
with open("memory-store.json") as f:
    data = json.load(f)

# Collect every violation instead of stopping at the first.
errors = list(Draft202012Validator(schema).iter_errors(data))
if not errors:
    print("memory-store.json valid")
else:
    for e in errors:
        print(e.message)
    sys.exit(1)
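
Because the script exits with a nonzero status on failure, it can gate an automated export pipeline or CI job.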

See the Validation Guide for detailed instructions, including alternatives to this Python approach.