Developing safe and responsible AI
Artificial general intelligence has the potential to benefit nearly every aspect of our lives—so it must be developed and deployed responsibly.
"AI systems are powerful tools that are becoming more integrated into everyday life, the key is to be responsible, thoughtful and thorough in their deployment and application."
Andrew Robinson
Chief Information Security Officer | 6clicks
Our focus on safety
AI technology comes with tremendous benefits, along with serious risk of misuse. Our Charter guides every aspect of our work to ensure that we prioritize the development of safe and beneficial AI.
What we do
To put our principles into practice, 6clicks is building and operating a management system for its AI/ML systems based on ISO/IEC 42001, mirroring its Information Security Management System certified to ISO/IEC 27001.
Activities to introduce Responsible AI include:
Conducting an AI/ML risk assessment and an associated System Impact Assessment (SIA) that explores the societal implications of 6clicks' use of AI as well as the impacts on the company.
Adopting controls from Annex A of ISO/IEC 42001 to mitigate identified AI/ML risks, including a customized set of policies and control responsibilities.
Broadening the existing governance group to form an Information & Technology Governance Group that includes the CTO, CISO, Head of GRC and AI/ML leads.
Carrying out internal audits and penetration testing of the AI/ML components of Hailey AI, and tabling the results with the Information & Technology Governance Group.
Sharing information about our use of AI/ML in a detailed Knowledgebase article, and making our policies, control sets and assessments available via the 6clicks Trust Portal (prospects and customers can request an invitation via their account manager).