The Zenith Law Glossary is your guide to evolving cross-domain terminology spanning engineering, governance, legal, policy, and AI topics. Terms are grouped A-Z, and each entry pairs a concise definition with a source link and a related site article for applied context, making the page useful for readers, search engines, and answer systems that need concise, attributable definitions.
This glossary is informational and educational. It is not legal advice, and legal obligations can vary by jurisdiction.
Source tiers distinguish authority level: external standards and peer-reviewed sources are shown separately from internal editorial synthesis links.
Source tier guide: Tier 1 = official standards or peer-reviewed primary sources; Tier 3 = reference encyclopaedias and general technical references; Internal synthesis (editorial) = this site's own evidence-grounded summaries.
Tier 3 references support orientation and discovery. They should not be treated as authoritative legal advice or as a substitute for jurisdiction-specific primary legal sources.
A
AEO
Answer Engine Optimisation focuses on making content easy for answer systems to quote directly as concise, high-confidence responses.
AI Alignment
AI alignment is the practice of steering model behaviour so outputs better match human goals, constraints, and safety expectations.
B
Banker's Algorithm
Banker's algorithm is a deadlock-avoidance method that grants resource requests only if the system remains in a provably safe state.
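As a minimal illustration, the safety check at the heart of Banker's algorithm can be sketched in Python; the allocation and need matrices below are hypothetical textbook-style values, not part of any standard API.

```python
def is_safe(available, allocation, need):
    """Return (safe, order): whether the current allocation leaves the
    system in a safe state, and one safe completion order if so."""
    work = list(available)
    finished = [False] * len(allocation)
    order = []
    progress = True
    while progress:
        progress = False
        for p in range(len(allocation)):
            if not finished[p] and all(n <= w for n, w in zip(need[p], work)):
                # Assume process p runs to completion and releases its resources.
                work = [w + a for w, a in zip(work, allocation[p])]
                finished[p] = True
                order.append(p)
                progress = True
    return all(finished), order

# Illustrative instance: five processes, three resource types.
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
need       = [[7, 4, 3], [1, 2, 2], [6, 0, 0], [0, 1, 1], [4, 3, 1]]
safe, order = is_safe([3, 3, 2], allocation, need)
print(safe, order)  # True [1, 3, 4, 0, 2]
```

A request is granted only if the resulting state would still pass this check; otherwise the requester waits.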
C
Circular Wait
Circular wait is a deadlock condition where each process waits for a resource currently held by the next process in a cycle.
Cloud Fragmentation
Cloud fragmentation can describe a split operating model where platform features, support paths, or control requirements diverge by jurisdiction or provider channel.
Cloud Localization
Cloud localization is the adaptation of cloud services to jurisdiction-specific legal, operational, and data-residency requirements.
Coffman Conditions
The Coffman conditions are four necessary conditions for deadlock: mutual exclusion, hold and wait, no preemption, and circular wait.
D
Data Lineage
Data lineage records how data moves and transforms across systems, preserving dependency paths from source to downstream outputs.
Data Provenance
Data provenance records where data originates, how it changes, and which processes and models consume or produce it.
Deadlock
Deadlock is a state where competing processes hold resources and wait indefinitely, so none can progress without external intervention.
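One common prevention tactic is to break the circular-wait condition by always acquiring locks in a single global order. A minimal sketch, using Python's standard threading module (the lock names are illustrative):

```python
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()
done = {"transfers": 0}

def transfer(first, second):
    # Always acquire locks in one global order (here: by id()), which
    # removes the circular-wait condition, so no deadlock can form.
    lo, hi = sorted((first, second), key=id)
    with lo, hi:
        done["transfers"] += 1

# Without the ordering above, these two threads could each grab one
# lock and then wait forever for the other's.
t1 = threading.Thread(target=transfer, args=(lock_a, lock_b))
t2 = threading.Thread(target=transfer, args=(lock_b, lock_a))
t1.start(); t2.start()
t1.join(); t2.join()
print(done["transfers"])  # 2
```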
Digital Sovereignty
In this glossary context, digital sovereignty describes jurisdiction-level control expectations for data handling, cloud operations, and digital infrastructure governance boundaries.
F
Federated Learning
Federated learning trains a shared model across distributed nodes without centralising raw data, reducing direct data-movement exposure.
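The core aggregation step can be sketched as a size-weighted average of client weight vectors (a FedAvg-style simplification; the client data below is hypothetical):

```python
def fed_avg(client_weights, client_sizes):
    """Size-weighted average of client model weights (FedAvg-style).
    Only weight vectors leave each client; raw training data never does."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]

# Two hypothetical clients; the second trained on three times as much data.
global_model = fed_avg([[1.0, 2.0], [3.0, 4.0]], [1, 3])
print(global_model)  # [2.5, 3.5]
```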
G
Generative Engine Optimisation
Generative Engine Optimisation structures content so AI answer systems can retrieve, summarise, and cite it with minimal ambiguity.
Graph Neural Network
A graph neural network is a model family that learns from node-edge structures, making it useful for relational provenance and dependency analysis.
H
Hallucination
In LLM contexts, hallucination is a fluent but unsupported output that is not grounded in reliable evidence.
Hold and Wait
Hold and wait is a deadlock condition where a process keeps one resource while waiting to acquire additional resources.
I
In-Context Learning
In-context learning is a model behaviour in which prompts and examples in the input guide task performance without updating model weights.
Indicator of Compromise
An indicator of compromise is a forensic artefact such as a domain, hash, or process pattern that signals potential malicious activity.
K
Knowledge Distillation
Knowledge distillation transfers behaviour from a larger teacher model to a smaller student model, improving efficiency while retaining useful performance.
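A common ingredient is training the student against temperature-softened teacher outputs. A minimal sketch (logit values are illustrative):

```python
import math

def softmax(logits, temperature=1.0):
    scaled = [z / temperature for z in logits]
    m = max(scaled)                        # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# A higher temperature softens the teacher's distribution so the student
# can learn relative class similarities, not just the top label.
teacher_logits = [4.0, 1.0, 0.5]
hard = softmax(teacher_logits)                   # T = 1: peaked
soft = softmax(teacher_logits, temperature=4.0)  # T = 4: softened target
print(hard[0] > soft[0])
```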
L
Large Language Model
A large language model is a neural language model trained on large corpora to generate and analyse text through token prediction.
M
Mutual Exclusion
Mutual exclusion is a concurrency rule that allows only one process at a time to access a critical shared resource.
N
No Preemption
No preemption means a held resource cannot be forcibly taken away and must be released voluntarily, which contributes to deadlock risk.
P
PROV-ML
PROV-ML is a provenance representation proposal that extends W3C PROV with machine-learning-specific entities and lifecycle relations.
Postinstall Script
A postinstall script is package lifecycle code that runs automatically after dependency installation and can execute privileged local actions.
Prompt Engineering
Prompt engineering is the disciplined design and testing of model instructions to improve accuracy, consistency, and controllability.
R
Resource Starvation
Resource starvation occurs when a process waits indefinitely because scheduling or lock contention keeps denying access to required resources.
S
SBOM
A software bill of materials is a nested inventory of software components that improves supply-chain transparency and vulnerability response.
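In practice an SBOM is a machine-readable document. A minimal CycloneDX-style sketch (the component entries are hypothetical examples):

```python
import json

# A stripped-down SBOM in the CycloneDX JSON style; real SBOMs carry
# many more fields (hashes, licences, dependency graph, metadata).
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "components": [
        {"type": "library", "name": "left-pad", "version": "1.3.0"},
        {"type": "library", "name": "requests", "version": "2.31.0"},
    ],
}
names = [c["name"] for c in sbom["components"]]
print(json.dumps(names))  # ["left-pad", "requests"]
```

Listing components this way lets vulnerability scanners match inventory entries against advisory databases.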
SEO
Search Engine Optimisation is the practice of improving content structure and metadata so search systems can discover, rank, and present pages accurately.
Semaphore
A semaphore is a synchronization primitive that controls access to shared resources through counter-based permits.
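A minimal sketch using Python's standard threading.Semaphore, where two permits cap concurrent access at two holders (worker counts are arbitrary):

```python
import threading
import time

pool = threading.Semaphore(2)           # at most two permits available
guard = threading.Lock()
stats = {"current": 0, "peak": 0}

def worker():
    with pool:                           # blocks until a permit is free
        with guard:
            stats["current"] += 1
            stats["peak"] = max(stats["peak"], stats["current"])
        time.sleep(0.01)                 # hold the permit briefly
        with guard:
            stats["current"] -= 1

threads = [threading.Thread(target=worker) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(stats["peak"])                     # never exceeds 2
```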
Software Supply Chain
The software supply chain is the end-to-end dependency and delivery ecosystem through which code, packages, and build artefacts move to production.
Supply Chain Attack
A supply chain attack compromises a trusted upstream component or channel to reach downstream victims at scale.
T
Tokenization
Tokenization splits text into model-processable units so language models can compute probabilities and generate outputs token by token.
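A toy word-level sketch of the idea (real models typically use subword schemes such as BPE; the vocabulary below is made up):

```python
import re

def tokenize(text, vocab):
    """Split text into word-level tokens, then map each token to an
    integer id (a simplified stand-in for subword tokenization)."""
    tokens = re.findall(r"\w+|[^\w\s]", text.lower())
    unk = vocab.get("<unk>", 0)
    return tokens, [vocab.get(t, unk) for t in tokens]

vocab = {"<unk>": 0, "models": 1, "predict": 2, "tokens": 3, ".": 4}
tokens, ids = tokenize("Models predict tokens.", vocab)
print(tokens)  # ['models', 'predict', 'tokens', '.']
print(ids)     # [1, 2, 3, 4]
```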
Transformer
The Transformer architecture uses attention mechanisms instead of recurrence to model long-range token dependencies efficiently.
Trusted Publishing
Trusted publishing is a release practice where package publication is tied to verifiable identity and provenance controls in CI/CD workflows.
W
W3C PROV
W3C PROV is a standard data model for representing provenance, including entities, activities, and agents involved in data production.
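The model's core triad can be shown with a minimal PROV-JSON-style fragment; the `ex:` identifiers below are hypothetical, and real PROV documents declare namespace prefixes and richer relations.

```python
import json

# One entity generated by one activity that was associated with one agent.
prov_doc = {
    "entity":   {"ex:report":   {"prov:type": "prov:Entity"}},
    "activity": {"ex:analysis": {}},
    "agent":    {"ex:analyst":  {"prov:type": "prov:Person"}},
    "wasGeneratedBy": {
        "_:g1": {"prov:entity": "ex:report", "prov:activity": "ex:analysis"}
    },
    "wasAssociatedWith": {
        "_:a1": {"prov:activity": "ex:analysis", "prov:agent": "ex:analyst"}
    },
}
print(json.dumps(prov_doc, indent=2))
```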
Frequently Asked Questions
What is this glossary for?
This page provides quick definitions for recurring terms across engineering, governance, legal, policy, and AI topics published across the site.
How should I use these definitions?
Start with the one-line definition, then open the source link for a deeper reference and the related post link for applied context from this site.
Why are source links included on each card?
Source links make each definition attributable and easier for readers and AI retrieval systems to validate before reuse.
What is the difference between SEO, GEO, and AEO in this context?
SEO improves discoverability in search results, GEO may improve citation likelihood in generative AI responses, and AEO improves extraction quality for direct-answer systems.