The three AI governance essentials sector leaders need to know

In this two-part series, Kristi Mansfield, Founder & CEO, Seer Data, outlines what you need to know to avoid the hype and implement AI safely.
Artificial Intelligence (AI) is no longer a future scenario. It is already in the early stages of widespread adoption in Australia across all industries and sectors. That adoption is predicted to accelerate over the next two to three years, and by 2030 most industries, including philanthropy and not-for-profits, will be affected.
First, you need to be across the fundamentals of data governance, First Nations data sovereignty and AI assurance, which together underpin the safe and ethical use of AI.
With governance, assurance and alignment to human values, AI can be an empowering tool. Without them, global technology leaders have predicted, it could destabilise society to the point that what is familiar today will soon be unfamiliar.
Here’s what you need to know:
1. Data governance is the foundation
Data governance sets the baseline rules for security, privacy, and accountability across all AI applications.
- Definition: The policies, processes, and standards for how data is collected, stored, accessed and used.
- Relevance to AI: AI systems depend on large datasets. Without strong governance, data can be inaccurate, biased, misused or unsafe.
- Key Elements: Security, quality, privacy, access rights, accountability, stewardship and compliance.
- You should know: the 2024 reforms to the Privacy Act increased the penalties for serious breaches of privacy (up to $2.5m for individuals and $50m for organisations) and require organisations and individuals to take reasonable precautions to protect personal information and guard against re-identification through clear policies, accountability and secure data practices.
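To make "secure data practices" a little more concrete, here is a minimal sketch of a pre-sharing check that redacts common personal identifiers from free-text records before they are shared or fed into an AI tool. The patterns and the redact_pii helper are illustrative assumptions only; genuine de-identification requires far broader coverage and expert review, not just pattern matching.

```python
import re

# Illustrative patterns only (assumed examples): real de-identification
# needs much broader coverage, testing and expert review.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"(?:\+?61|0)[2-478](?:[ -]?\d){8}\b"),  # AU phone shapes
    "tfn":   re.compile(r"\b\d{3}[ -]?\d{3}[ -]?\d{3}\b"),       # tax file number shape
}

def redact_pii(text: str) -> str:
    """Replace matches of known identifier patterns with a placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text

record = "Contact Jane on 0412 345 678 or jane@example.org"
print(redact_pii(record))
# Contact Jane on [REDACTED PHONE] or [REDACTED EMAIL]
```

Even a simple check like this reflects the governance principle: personal information is filtered out by policy before data leaves the organisation, rather than relying on individual judgment each time.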
2. First Nations data sovereignty and governance applies a rights-based lens
This means governance cannot be generic: it must respect First Nations rights, authority and cultural protocols. For example, First Nations data about Country, language, people or community cannot simply be treated as "open data" for AI use. It must be governed by First Nations decision-making, which means deferring to the nations and original owners of the data.
- Definition: The right of Aboriginal and Torres Strait Islander peoples to exercise ownership over Indigenous data. Ownership of data can be expressed through the creation, collection, access, analysis, interpretation, management, dissemination, storytelling and reuse of data.
- This definition is outlined by the Maiam nayri Wingara Indigenous Data Sovereignty Collective and internationally through the CARE Principles for Indigenous Data Governance (Collective benefit, Authority to Control, Responsibility, Ethics).
- Relevance to AI: If AI models are trained on First Nations data, such as health, land use, language, community services, justice and traditional knowledge, there must be First Nations governance over how that data is used. Otherwise, AI risks perpetuating colonial patterns of extraction and decision-making without consent.
- You should know: Priority Reform 4 of the National Agreement on Closing the Gap states that Aboriginal and Torres Strait Islander people have access to, and the capability to use, locally relevant data and information to set and monitor the implementation of efforts to close the gap. This specifically relates to government-held data. Repatriation of Indigenous data collected and held by corporations (such as mining companies), foundations, not-for-profits and universities is important within a broader movement to decolonise data. Philanthropy is already providing critical funding to support capability and capacity-building for First Nations data sovereignty for self-determination.
- Learn more: Framework for Governance of Indigenous Data (GID).
3. An AI assurance framework is the operational tool
This ensures that when AI is adopted, data governance principles (including First Nations governance) are checked, documented and monitored. It puts guardrails in place so your organisation can apply AI safely.
- Definition: A structured method for ensuring use of AI is safe, fair, transparent and ethical.
- Relevance to data governance and First Nations data sovereignty: Assurance frameworks require organisations to check where training and operational data come from, who has authority over it, how bias is handled, and whether usage respects rights and community priorities.
- Safety measures: Wrap-around guardrails are important when deploying generative AI. Example guardrails include pre- and post-processing filters that check for sensitive content, privacy violations or compliance issues. Another industry-standard approach is model-on-model evaluation: using a second Large Language Model (such as Anthropic's Claude or one of Google's models) to evaluate the outputs of a primary model (such as OpenAI's ChatGPT) for bias, safety risks and factual consistency. Human oversight remains essential (see the sketch after this list).
- You should know: standards exist for the safe use of AI (e.g. ISO/IEC 42001 and related Australian AI assurance frameworks). Ask whether vendors align with these standards and, if not, how they plan to.
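As an illustration of those wrap-around guardrails, here is a minimal sketch combining a pre-processing filter with model-on-model evaluation and a human-review fallback. The call_model function is a placeholder for whichever provider SDK your organisation uses, and the blocklist, model names and prompts are assumptions for illustration, not production guardrails.

```python
# Minimal guardrail sketch: pre-filter, model-on-model evaluation, human fallback.
# call_model() is a placeholder: wire it to your provider's SDK.

SENSITIVE_TERMS = {"medicare number", "tax file number", "home address"}  # illustrative

def call_model(model: str, prompt: str) -> str:
    """Placeholder for a real LLM API call."""
    raise NotImplementedError("Replace with your provider's SDK call.")

def pre_filter(user_prompt: str) -> bool:
    """Pre-processing guardrail: block prompts that involve sensitive content."""
    lowered = user_prompt.lower()
    return not any(term in lowered for term in SENSITIVE_TERMS)

def judge(answer: str, question: str) -> bool:
    """Model-on-model evaluation: a second model reviews the first one's output."""
    verdict = call_model(
        model="evaluator-model",  # e.g. a Claude or Google model
        prompt=(
            "Review the answer below for bias, safety risks and factual "
            "consistency with the question. Reply PASS or FAIL only.\n\n"
            f"Question: {question}\nAnswer: {answer}"
        ),
    )
    return verdict.strip().upper().startswith("PASS")

def answer_with_guardrails(question: str) -> str:
    if not pre_filter(question):
        return "Request blocked: it appears to involve sensitive personal information."
    draft = call_model(model="primary-model", prompt=question)  # e.g. a GPT model
    if not judge(draft, question):
        return "Response withheld pending human review."  # human oversight stays in the loop
    return draft
```

The design point is the wrap-around shape: checks run before the primary model sees the request and after it responds, and anything the evaluator flags is routed to a person rather than released automatically.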
So, what should you do?
Here are three priority actions you should take now to set the foundations:
- Educate your board and executive team.
- Prioritise data foundations and governance.
- Create guardrails through an AI strategy and assurance framework.
Thank you to Seer Data Board Members Dr Ian Oppermann and Brian Donn, Jo Garner, my team, and the many of our customers who are not-for-profits and philanthropic foundations for reviewing and contributing to this article. Your insights and feedback have been essential.
Kristi Mansfield is a global leader in AI and data with 25 years’ experience across technology, data governance, cross-border data sharing, strategy, and philanthropy. As Founder and CEO of Seer Data, she has represented Australia at the G20 on data-sharing across borders, was on the working group for the development of the Framework for the Governance of Indigenous Data, and led transformative roles in corporate, technology and NFP sectors.
Join Dr Ian Oppermann, Brian Donn, Jo Garner and Kristi Mansfield for The AI Trust Crisis, an online conversation on Wednesday 22 October 2025, about what philanthropic and not-for-profit leaders need to know to safely and responsibly use Artificial Intelligence (AI).