
Good governance is important for businesses deploying artificial intelligence

Laurel: It’s wonderful. Thanks for the detailed explanation. So, as you personally specialize in governance, how can businesses balance securing AI protections and deploying machine learning while still encouraging innovation?

Stephanie: Balancing safeguards for AI/ML deployment with encouraging innovation is a real challenge for enterprises. The scale is enormous, and the field is changing extremely fast. But striking this balance is very important; otherwise, what is the point of innovating here? There are several key strategies that can help. First, establish clear governance policies and procedures: review and update existing policies where they may not be consistent with AI/ML development and deployment, and introduce newly required policies and procedures, such as monitoring and continuous compliance, as I mentioned earlier. Second, involve all stakeholders in the AI/ML development process. That starts with data engineers, the business, data scientists, and the ML engineers who deploy models in production, and extends to model reviewers, business stakeholders, and organizational risk. And this is what we focus on: we build integrated systems that provide transparency, automation, and a good end-to-end user experience.

All of this helps streamline the process and bring everyone together. Third, build systems that not only enable this common workflow but also capture the data that enables automation. Often, many of the activities in the ML lifecycle are performed in different tools because they reside in different teams and departments, which results in people manually sharing information, reviewing, and signing off. So having an integrated system is essential. Fourth, monitor and evaluate the performance of AI/ML models. As I mentioned before, this is really important, because if we don't monitor the models, they can drift away from their original intent, and doing that monitoring manually stifles innovation. Model deployment requires automation, so having it in place is key to being able to develop and deploy models to a production environment in a way that actually works: it is reproducible, and it performs in production.

This is very, very important, as is having well-defined metrics for monitoring models that cover the performance of the model itself, the infrastructure, and the data. And finally, provide coaching and education, because it's a team sport: everyone comes from different walks of life and has different roles, so a cross-functional understanding of the entire lifecycle process is essential. Understanding what data is needed for the use case, and whether we are using that data correctly, prevents us from having to abandon the model much later in deployment. So I think all of these are key to balancing governance and innovation.
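The three metric families Stephanie mentions — the model's own performance, the infrastructure, and the data — can be sketched as a simple automated health check. This is a minimal illustration; the threshold values and function names are hypothetical, not JPMorgan Chase's actual system.

```python
from statistics import mean

# Hypothetical thresholds -- illustrative values, not real policy.
ACCURACY_FLOOR = 0.90      # model performance metric
LATENCY_CEILING_MS = 250   # infrastructure metric
DRIFT_CEILING = 0.15       # data metric (e.g., share of out-of-range inputs)

def evaluate_model_health(accuracies, latencies_ms, drift_score):
    """Return alerts for any metric outside its threshold.

    Covers the three metric families from the interview:
    model performance, infrastructure, and data.
    """
    alerts = []
    if mean(accuracies) < ACCURACY_FLOOR:
        alerts.append("model: accuracy below floor")
    if mean(latencies_ms) > LATENCY_CEILING_MS:
        alerts.append("infrastructure: latency above ceiling")
    if drift_score > DRIFT_CEILING:
        alerts.append("data: drift above ceiling")
    return alerts
```

Running a check like this on a schedule, rather than by hand, is what makes continuous monitoring compatible with innovation: a healthy model returns no alerts and needs no human attention.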

Laurel: So there’s another topic to discuss, and you touched on it in your answer: How does everyone understand the AI process? Could you describe the role of transparency in the AI/ML lifecycle, from creation to governance to deployment?

Stephanie: Of course. AI/ML is still pretty new and still evolving, but people have generally settled into a high-level process flow: define a business problem, get and process the data to solve that problem, then create the model — that is, develop it — and then deploy it. But before deployment, we conduct a review within our company to ensure that models are designed according to the correct responsible AI principles, and then we continue with ongoing monitoring. When people talk about the role of transparency, it’s not just about capturing all the metadata artifacts throughout the lifecycle; all of those lifecycle events and metadata need to be transparent, with a timestamp, so people can know what happened. That’s how we share the information. And this transparency is very important because it builds trust and ensures fairness. We need to make sure the right data is used, and transparency also contributes to interpretability.

There is also the question of explainability: how does the model make its decisions? Transparency also helps maintain constant monitoring, and that can be done in a number of ways. One thing we pay particular attention to from the very beginning is understanding the goals of the AI initiative, the purpose of the use case, and the intended uses of the data. We look into how the data was processed: what is the data lineage and transformation process? Which algorithms are used, and which ensemble algorithms? The model specification must be documented and written down: what are the constraints on when the model should and should not be used? Then there is interpretability and verifiability: can we actually trace how the model was created, throughout the entire lineage of the model itself? And also the specifics of the technology, like the infrastructure and the containers it runs in, because that actually affects the model’s performance: where it’s deployed, what business application consumes the prediction output from the model, and who can access the model’s decisions. So this is all part of the theme of transparency.
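The timestamped, end-to-end metadata trail Stephanie describes can be sketched as a simple record structure. This is a minimal illustration only; the field and class names are hypothetical and are not the interviewee's actual schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class LifecycleEvent:
    """One timestamped event in a model's audit trail."""
    stage: str    # e.g., "data-processing", "training", "review", "deployment"
    detail: str
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

@dataclass
class ModelRecord:
    """Transparent record covering the artifacts named in the interview:
    use case and intended use, data lineage, algorithms, usage constraints,
    and a timestamped log of lifecycle events."""
    use_case: str
    intended_use: str
    data_lineage: list
    algorithms: list
    constraints: str
    events: list = field(default_factory=list)

    def log(self, stage: str, detail: str) -> None:
        """Append a timestamped lifecycle event."""
        self.events.append(LifecycleEvent(stage, detail))
```

Because every event carries a timestamp, anyone reviewing the model later can reconstruct what happened and when — the property that builds the trust, fairness, and interpretability the interview emphasizes.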

Laurel: Yes, it is quite broad. So, given that AI is a rapidly changing field with so many emerging technologies, such as generative AI, how do JPMorgan Chase teams stay abreast of these new developments and then also choose when and where to deploy them?
