Responsible AI: 5 ways to prepare your business
Artificial intelligence (AI) is arguably the biggest technological advancement since the discovery of electricity. It’s set to transform every aspect of how we live and work, putting down deep roots in our economy and society. Like most technologies, AI brings both opportunities and threats: it offers countless chances to solve the world’s problems, boost our economies and generally make our lives easier, yet we also have to beware of its negative side and how it can be detrimental to human rights and welfare.
But how do we manage this relationship between humans and machine intelligence? How do we mitigate the potential negative effects of AI? This is where responsible AI – also called ethical or trustworthy AI – comes into play.
Why responsible AI matters
Every few months, you’ll see something in the news that shows the dark side of AI. From facial recognition apps resulting in wrongful arrests to algorithms exhibiting gender or racial bias, these stories reveal AI failing to be trustworthy. They are examples of apps or systems that weren’t designed and operated in a lawful, ethical and robust manner – the definition of trustworthy AI set out by the European Commission (EC).
While the EC offers guidance to businesses in the EU, many other regulatory bodies around the world are following suit. Typically, these frameworks set out key principles for how artificial intelligence systems should be developed, deployed and governed to comply with ethics and laws. They ask businesses involved in AI to cover things like human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental wellbeing; and accountability.
Global best practice: where things currently stand
Responsible AI should be a globally aligned mission, so it’s interesting to note what has been happening in territories like China and the US. China has the strongest AI start-up ecosystem, with 19 tech unicorns (start-ups valued over $1 billion). Back in 2019, its National New Generation Artificial Intelligence Governance Committee already released eight principles:
- Harmony and human-friendliness
- Fairness and justice
- Inclusion and sharing
- Respect for privacy
- Safety and controllability
- Shared responsibility
- Openness and collaboration
- Agile governance
It followed this with its recent privacy legislation, the Personal Information Protection Law (PIPL), as well as the Data Security Law and the Cybersecurity Law.
Over in the US, steps were taken with the Algorithmic Accountability Act of 2021, but since the bill did not pass, each state has been left to its own devices. States like California, Virginia and Colorado have already created their own privacy protection laws, with similar bills in the legislative process elsewhere.
Meanwhile, the EU is one of the first jurisdictions to pursue ‘designed for purpose’ regulation. Its proposal for AI regulation and harmonisation across member states will provide the legal certainty necessary to motivate innovation while protecting consumer rights. Like GDPR, the proposed legislation concerns any person or organisation handling EU citizens’ personal data, including those based outside the EU. However, the accountabilities of the AI Act go further than GDPR by proposing to directly regulate the use of AI systems: companies will be required to demonstrate that their AI is trustworthy by documenting the software validation methods they applied.
5 practical steps to limit risk in your business
AI legislation is constantly evolving, so there can be a lot of uncertainty about how to limit risk effectively. Some key practical steps include familiarising yourself with AI safety companies you trust, determining how you will measure AI risk in your organisation, and putting ongoing testing and monitoring procedures in place.
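To make "measuring AI risk" and "ongoing monitoring" concrete, here is a minimal sketch of one narrow fairness check a team might run against a deployed model. The metric (demographic parity difference), the group labels and the 0.1 threshold are illustrative assumptions, not part of the article or any specific regulation:

```python
# Hypothetical sketch: quantify one fairness-related AI risk, assuming you can
# log the model's binary predictions alongside a sensitive group attribute.

def positive_rate(predictions):
    """Share of predictions that are positive (1)."""
    return sum(predictions) / len(predictions)

def demographic_parity_difference(preds_group_a, preds_group_b):
    """Absolute gap in positive-prediction rates between two groups.
    A value near 0 suggests similar treatment on this one narrow measure."""
    return abs(positive_rate(preds_group_a) - positive_rate(preds_group_b))

def monitor(preds_group_a, preds_group_b, threshold=0.1):
    """Flag the model for human review when the gap exceeds a chosen threshold,
    e.g. as a scheduled job over each week's logged predictions."""
    gap = demographic_parity_difference(preds_group_a, preds_group_b)
    return {"gap": round(gap, 3), "needs_review": gap > threshold}

# Example: group A gets positive predictions 60% of the time, group B only 10%.
print(monitor([1, 1, 1, 0, 0], [1, 0, 0, 0, 0, 0, 0, 0, 0, 0]))
# → {'gap': 0.5, 'needs_review': True}
```

A single metric like this is not a compliance programme; the point is that whatever measures you choose should be computed automatically and repeatedly, so drift is caught between audits.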
For the full list plus examples, read the original EPAM article here.