Accountability and oversight must be continuous because AI models can change over time; indeed, the hype around deep learning, in contrast to conventional data processing tools, is driven by its flexibility to adjust and adapt in response to changing data. But this can lead to problems such as model drift, in which a model’s performance (for example, its predictive accuracy) deteriorates over time, or the model begins to exhibit flaws and biases the longer it operates in the wild. Explanation methods and human-in-the-loop control systems can not only help scientists and product owners build better AI models from the start, but can also be used in post-deployment monitoring systems to ensure that the quality of the models does not degrade over time.
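In practice, a post-deployment drift monitor can be as simple as tracking accuracy on a rolling window of freshly labeled data and alerting when it falls too far below the accuracy measured at deployment time. A minimal sketch of that idea (the function names, threshold, and window values here are illustrative assumptions, not anything described in the article):

```python
# Minimal model-drift monitor: compare accuracy on a recent window of
# labeled data against the baseline accuracy recorded at deployment.

def rolling_accuracy(predictions, labels):
    """Fraction of predictions in the window that match the true labels."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

def check_for_drift(predictions, labels, baseline_accuracy, tolerance=0.05):
    """Return (drifted, current_accuracy).

    Flags drift when accuracy on the current window drops more than
    `tolerance` below the baseline measured at deployment time.
    """
    current = rolling_accuracy(predictions, labels)
    return current < baseline_accuracy - tolerance, current

# Example: a model deployed at 92% accuracy is re-scored on a recent
# window where only half of its predictions are correct.
drifted, acc = check_for_drift(
    predictions=[1, 0, 1, 1, 0, 1, 0, 0],
    labels=[1, 1, 0, 1, 1, 1, 0, 1],
    baseline_accuracy=0.92,
)
```

Real monitoring systems layer more on top of this (statistical tests on input distributions, bias metrics per subgroup, human review queues), but the core loop is the same: continuously compare live behavior against a baseline and escalate when it degrades.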

“We don’t just focus on training models or making sure our training models aren’t biased; we also focus on all dimensions related to the machine learning development life cycle,” says Cukor. “It’s a challenge, but it’s the future of artificial intelligence,” he says. “Everyone wants to see that level of discipline.”

Prioritizing responsible AI

There is a clear consensus in business that RAI is essential, not merely nice to have. In PwC’s 2022 AI Business Survey, 98% of respondents said they have at least some plans to make AI responsible through measures including improving AI governance, monitoring and reporting on AI model performance, and ensuring that decisions can be interpreted and easily explained.

Despite these aspirations, some companies find it difficult to implement RAI. The same PwC survey found that fewer than half of respondents have planned specific RAI actions. Another survey, conducted by MIT Sloan Management Review and Boston Consulting Group, found that while most firms view RAI as a tool for mitigating technology-related risks, including risks related to security, bias, fairness, and privacy, they admit that prioritization lags: only 56% said RAI is a top priority, and just 25% reported having a fully mature program. Challenges can stem from organizational complexity and culture, lack of consensus on ethical practices or tools, insufficient staff capacity or training, regulatory uncertainty, and difficulty integrating RAI with existing risk and data practices.

For Cukor, RAI is not optional, despite these significant operational challenges. “For many, investing in the guardrails and practices that enable responsible innovation at speed feels like a trade-off. JPMorgan Chase has a duty to our customers to innovate responsibly, which means carefully balancing challenges around issues such as resources, robustness, privacy, power, explainability, and business impact.” He argues that investing in sound controls and risk-management practices early on, across all stages of the AI data lifecycle, will allow a firm to accelerate innovation and ultimately serve as a competitive advantage.

For RAI initiatives to succeed, RAI must be embedded in an organization’s culture, not simply bolted on as a technical checkbox. Implementing these cultural changes requires the right skills and mindset. An MIT Sloan Management Review and Boston Consulting Group survey found that 54% of respondents have difficulty finding RAI expertise and talent, with 53% citing a lack of training or knowledge among current employees.

Finding such talent is easier said than done. RAI is an emerging field, and its practitioners have noted the distinctly interdisciplinary nature of the work, which draws on sociologists, data scientists, philosophers, designers, policy experts, and lawyers, to name just a few fields.

“Given this unique context and the newness of our field, it’s rare to find individuals who possess the trifecta of AI/ML technical skills, ethics expertise, and expertise in finance,” Cukor says. “That’s why RAI in finance must be an interdisciplinary practice with collaboration at its core. To get the right mix of talent and perspectives, you need to hire experts in different fields so they can have the hard conversations and surface issues that others might overlook.”

This article is for informational purposes only and is not intended as legal, tax, financial, investment, accounting or regulatory advice. The views expressed herein are the personal views of the individual(s) and do not reflect the views of JPMorgan Chase & Co. JPMorgan Chase & Co. is not responsible for the accuracy of any statements, linked resources, messages or quotations.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

