Introduction

The adoption of AI within the Australian Public Service (APS) presents a unique set of challenges. The imperative to leverage AI for efficiency and improved citizen services must be balanced against stringent requirements for data sovereignty, privacy, and security. This framework provides a reference architecture for deploying "Sovereign-First" AI systems.

Defining Sovereign AI

In the context of Australian government deployments, Sovereign AI is defined by three core principles:

1. Data residency: all data, model weights, and inference traffic remain within Australian jurisdiction.

2. Operational control: the agency, not an external provider, governs identity, policy, and configuration.

3. Auditability: every interaction with the model is logged and can be traced and reviewed.

The Reference Architecture

The proposed architecture leverages a Virtual Private Cloud (VPC) or on-premise deployment model, isolating the AI infrastructure from the public internet.

Key Layers

1. The Application Layer: Internal government applications, citizen-facing portals, or internal chatbots.

2. The Control Layer (Songlines Control): The critical enforcement point. This layer handles identity verification (via Entra ID / Active Directory), policy enforcement (e.g., preventing PROTECTED data from entering the model), and comprehensive audit logging.

3. The Model Layer: Locally hosted open-weight models (e.g., Llama 3, Mistral) or sovereignly deployed commercial models (e.g., Azure OpenAI Australia East) configured with zero-data-retention policies.
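The flow through these layers can be sketched in miniature. This is an illustrative mock, not a real SDK: the class names, the `PROTECTED` substring check, and the stub model are all assumptions standing in for real identity, policy, and inference services.

```python
from dataclasses import dataclass

@dataclass
class Request:
    user_id: str   # assumed already authenticated upstream (e.g. via Entra ID)
    prompt: str

class ModelLayer:
    """Stand-in for a locally hosted model endpoint."""
    def generate(self, prompt: str) -> str:
        return f"<model response to {len(prompt)} chars>"

class ControlLayer:
    """Enforcement point: policy checks and audit logging sit between
    the application and the model, so nothing bypasses them."""
    def __init__(self, model: ModelLayer):
        self.model = model
        self.audit_log: list[tuple[str, str]] = []

    def handle(self, request: Request) -> str:
        # Policy: a toy rule that blocks prompts carrying a PROTECTED marker.
        if "PROTECTED" in request.prompt:
            self.audit_log.append((request.user_id, "BLOCKED"))
            return "Request blocked: classified content detected."
        # Audit: record the allowed request before forwarding to the model.
        self.audit_log.append((request.user_id, "ALLOWED"))
        return self.model.generate(request.prompt)

control = ControlLayer(ModelLayer())
print(control.handle(Request("u1", "Summarise this OFFICIAL briefing")))
print(control.handle(Request("u2", "PROTECTED: cabinet minutes")))
```

The point of the structure is that the application never holds a direct reference to the model; every call passes through, and is recorded by, the Control Layer.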

Implementation Guidelines

"Sovereignty is not merely a geographic constraint; it is an architectural guarantee of control and visibility."

Agencies should begin by classifying their data and AI use cases according to the Protective Security Policy Framework (PSPF). The Control Layer must be configured to enforce these classifications dynamically, ensuring that sensitive data is redacted or blocked before reaching the model layer, regardless of user intent.
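A minimal sketch of such dynamic enforcement might look like the following. The redaction pattern (a tax-file-number shape) and the `enforce` helper are illustrative assumptions; the PSPF marking names are real, but a production filter would use far richer detection than substring and regex matching.

```python
import re

# PSPF markings in ascending sensitivity (simplified subset).
PSPF_ORDER = ["UNOFFICIAL", "OFFICIAL", "OFFICIAL: Sensitive", "PROTECTED"]

# Toy sensitive-data detector: matches the 3-3-3 digit shape of a
# tax file number. Real deployments would use a proper DLP engine.
SENSITIVE_PATTERNS = {
    "TFN": re.compile(r"\b\d{3} \d{3} \d{3}\b"),
}

def enforce(text: str, max_level: str = "OFFICIAL") -> str:
    """Redact sensitive patterns, then block text whose PSPF marking
    exceeds the maximum level permitted to reach the model."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label}]", text)
    allowed = PSPF_ORDER[: PSPF_ORDER.index(max_level) + 1]
    for marking in PSPF_ORDER:
        if marking in text and marking not in allowed:
            raise PermissionError(f"{marking} content cannot reach the model")
    return text

print(enforce("Client TFN 123 456 789 in OFFICIAL record"))
```

Redaction runs before the marking check so that, regardless of what the user typed, identifiable data is stripped first and over-classified content is refused outright.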

Conclusion

By adopting a Sovereign-First architecture, Australian government agencies can harness the transformative power of AI while maintaining the highest standards of security, privacy, and public trust.