Sovereignty vs Security and Protection
- rorochick1
- Sep 6, 2025
- 2 min read
The Diary Analogy: Why "Data Security" and "Data Sovereignty" Are Not the Same Thing.
As educators, we're all looking for tools that can save us time, and generative AI platforms like ChatGPT are nothing short of incredible. But while we're using them for lesson plans and creative ideas, there's a vital concept we must understand: the difference between data security and data sovereignty.
They sound similar, but they're not. Misunderstanding them could put our sensitive school data at risk.
The Diary Analogy: A Simple Way to Understand
Imagine you have a secret diary where you write down important plans and personal thoughts.
Data Security is the strong lock you put on that diary. It's about keeping the wrong people out. Think of it as the technical stuff: passwords, encryption, and firewalls. This is about protecting your data from hackers and unauthorized access.
Data Sovereignty is about where you keep that diary. If you take your diary on holiday to the United States, even with your strong lock on it, it's now under U.S. law. A U.S. court could legally demand to see it. Your New Zealand laws no longer have power over it.
The key difference is this: security is about the lock; sovereignty is about which country's laws apply.
The Problem with Generative AI
When you type a prompt into an AI tool, that information is sent to a server. Even if a service says it stores your data in a country like Australia, the processing that generates the response often happens on servers in another country, like the U.S.
The moment that information leaves New Zealand, it is no longer protected by our laws, such as the Privacy Act 2020. Under U.S. law, the government could, in theory, request access to that data. Your information is no longer just "our" data: it is now subject to another country's laws.
A Call to Be Cautious
So, while these tools are fantastic for generating ideas, we must be incredibly careful. Never put any identifiable information about our students, their whānau, or our staff into these tools. Treat any public-facing generative AI tool like a public space, not a private diary.
Once that information is on a server in another country, we lose control over it. It's a risk we simply cannot take with the personal and sensitive information entrusted to us.
The solution is simple: stick to general, non-identifiable prompts. Use these tools as a thought partner, not a record keeper. Our students' privacy depends on it.