Ethical design and use of automated decision systems
This Standard specifies minimum requirements for protecting human values and incorporating ethics in the design and use of automated decision systems.
This Standard is limited to artificial intelligence (AI) systems that use machine learning to make automated decisions.
This Standard applies to all organizations, including public and private companies, government entities, and not-for-profit organizations. It provides a framework and process to help organizations address AI ethics principles, such as those described by the OECD:
- AI should benefit people and the planet by driving inclusive growth, sustainable development and well-being.
- AI systems should be designed in a way that respects the rule of law, human rights, democratic values and diversity, and they should include appropriate safeguards – for example, enabling human intervention where necessary – to ensure a fair and just society.
- There should be transparency and responsible disclosure around AI systems to ensure that people understand AI-based outcomes and can challenge them.
- AI systems must function in a robust, secure and safe way throughout their life cycles, and potential risks should be continually assessed and managed.
- Organisations and individuals developing, deploying or operating AI systems should be held accountable for their proper functioning in line with the above principles.
Requirements in this Standard are principles-based and recognize that an organization’s governance practices may depend on its size; ownership structure; nature, scope and complexity of operations; strategy; and risk profile. Organizations are expected to take reasonable and responsible measures to adopt and implement the principles in this Standard.
This Standard is intended to be used in conjunction with, and integrated into, the organization’s compliance programs, including but not limited to existing privacy, cybersecurity, data governance, complaints and appeals, and legal programs.