21/08/2023

Responsible AI is AI we can trust

Authors: Kathryn Hughes and Martin Lopatka

Artificial intelligence is powerful and exciting. But is it responsible? Responsible AI is the difference between a future where we live in fear of automated decisions and one where we welcome them. Fortunately, unrestrained and unregulated automated decision-making is being challenged by legislators and end-users who believe that AI solutions should be developed by multidisciplinary teams of academics, social scientists, legally trained professionals, data scientists and engineers. EPAM is partnering with the University of Manchester for a pilot programme to evaluate our own framework for responsible AI.

The regulations driving this need for action

The European Union (EU) is leading the way to create a world-class hub of AI innovation. The EU’s AI harmonization legislation will mandate that all AI system providers assess and classify the risk of their AI systems using strict criteria, or risk steep fines. Regardless of where they are headquartered, all businesses developing and deploying AI technologies that affect EU citizens in a high-risk context will be required to register with the European Commission. AI that poses an unacceptable risk to safety or livelihood, or that distorts physical or psychological well-being, will become unlawful except for narrowly defined legitimate purposes.

The answer: the right framework for responsible AI

EPAM created a framework that provides measurable, transparent and extensible guidelines to direct AI product development. To test our responsible AI framework, we partnered with the University of Manchester for a pilot programme. EPAM experts were paired with computer science and politics, philosophy and economics (PPE) students, who critically and independently evaluated two AI-powered use cases within EPAM.

We achieved four key results:

  • Our AI framework was further strengthened.
  • We demonstrated the framework’s value when applied to real-world AI systems in production.
  • Our own selected internal AI systems were reviewed and scored.
  • We formed a strong partnership with the University of Manchester.

A successful proof-of-concept internship model

Bringing undergraduate students into a professional, commercial environment has already proved hugely successful. These students gained valuable analytical and professional skills that make them highly employable, while the university is now better equipped to help companies prepare for the new AI world.

Read more about the partnership here.
