AI Transparency Statement
Introduction
The Digital Transformation Agency's (DTA) Policy for the responsible use of AI in government sets out the Australian Government's approach to embracing the opportunities of Artificial Intelligence (AI)[1] and providing for its safe and responsible use.
The Office of the Commonwealth Ombudsman (the Office), including the ACT Ombudsman, is committed to identifying ethical, responsible and meaningful uses of AI to deliver our mission of “helping people, improving government”.
We will be transparent in how we assess, prepare for, engage with, adopt and monitor AI, and in how we respond to changes in AI technology, the operating environment and policy requirements.
Scope and Usage
Currently, the Office uses AI to support security monitoring of its ICT systems.
This AI is embedded in the monitoring software we use, with access limited to ICT and Cyber Security personnel who are suitably cleared, trained, and have both a need to know and a need to access the monitoring system.
The Office has recently completed pilots of two Microsoft AI capabilities – Microsoft Copilot and Microsoft Azure AI services.
The Azure AI pilot, conducted within the Office's secure IT environment, tested the feasibility of using Azure AI services to summarise written complaints for decision makers to review when processing them.
If we implement any additional AI use cases or technologies, we will update this statement to outline our use, with a summary of:
- why the Office is using AI
- the legislative authority for the use of AI
- whether the public may directly interact with the AI or be significantly impacted by it
- how the Office intends to notify those affected by our use of AI
- what role AI plays in relation to decision-making, administrative action or service delivery, or other such usage patterns and domains as described in the Classifications system for AI use
- measures to identify and protect the public against negative impacts and other risk mitigation measures
- measures in place to identify and remediate errors
- how the use complies with administrative law principles and is consistent with human rights obligations and applicable legislation, including the Privacy Act 1988, and with the Protective Security Policy Framework
- compliance with each requirement under the Policy for the responsible use of AI in government.
Consistent with the best practices set out in the Office's Automated Decision-making Better Practice Guide, the Office does not currently use, or intend to use, AI to make discretionary decisions.
Governance
AI Governance
Each AI use case and AI technology in the Office requires the Ombudsman's approval, following endorsement by the Information Technology Governance Committee (ITGC), which is chaired by the Deputy Ombudsman.
AI Usage Policy and Processes
We have an internal AI Policy that aligns with the advice and guidance provided by the DTA and other agencies on using AI services responsibly.
We have processes to ensure:
- our AI use is appropriately governed
- our engagement with AI is confident, safe and responsible
- our staff are appropriately trained in AI
- any relevant and likely cyber, data and privacy risks are identified and addressed
- our AI access and usage is monitored
- our stakeholders have trust in our use of AI.
Accountable Officials
The Chief Operating Officer and Chief Information Officer are the designated AI Accountable Officials.
Transparency Statement Updates
This statement will be reviewed annually, or sooner if our AI use cases or usage change.
Contact
For questions about this statement or for further information on the Office’s usage of AI, please contact ai@ombudsman.gov.au.
Change Log
| Date | Note |
| --- | --- |
| 26 February 2025 | Initial release. |
[1] Explanatory memorandum on the updated OECD definition of an AI system