How to Launch Your AI Projects from Pilot to Production – and Ensure Success
IT leaders can take an active role in improving how many AI projects get fully off the ground.

This post is brought to you by NVIDIA and CIO. The views and opinions expressed herein are those of the author and do not necessarily represent the views and opinions of NVIDIA.

CIOs seeking big wins in high-impact business areas with significant room for performance improvement should review their data science, machine learning (ML), and AI projects.

A recent IDC report on AI projects in India[1] found that 30-49% of AI projects failed at about one-third of organizations, and another study from Deloitte classifies 50% of respondents’ organizations as “starters” or “underachievers” in AI performance.

That same study found 94% of respondents say AI is critical to success over the next five years. Executives see the AI opportunity for competitive differentiation and are looking for leaders to deliver successful outcomes.

ML and AI are still relatively new practice areas, and leaders should expect ongoing learning and an improving maturity curve. But CIOs, CDOs, and chief scientists can take an active role in improving how many AI projects go from pilot to production.

Are data science teams set up for success?

A developing playbook of best practices for data science teams covers the development process and technologies for building and testing machine learning models. Developing models isn’t trivial, and data scientists certainly have challenges cleansing and tagging data, selecting algorithms, configuring models, setting up infrastructure, and validating results.
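The development loop described above — cleansing data, selecting an algorithm, configuring a model, and validating results — can be sketched in a few lines. This is a minimal illustration, not the article's prescribed stack; it assumes scikit-learn and uses a bundled benchmark dataset as a stand-in for cleansed, tagged business data.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Load a benchmark dataset as a stand-in for cleansed, tagged business data.
X, y = load_breast_cancer(return_X_y=True)

# Hold out a validation set so results are checked on data the model never saw.
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# A pipeline bundles preprocessing and the selected algorithm into one model,
# so the same transformations are applied consistently at prediction time.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)

print(f"validation accuracy: {accuracy_score(y_val, model.predict(X_val)):.3f}")
```

Even in this toy form, the held-out validation step is the part teams most often shortcut; it is what turns a promising demo into a result a business leader can trust.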

Leaders who want to improve AI delivery performance should address this first question: are data scientists set up for success? Are they working on problems that can yield meaningful business outcomes? Do they have the machine learning platforms (such as NVIDIA AI Enterprise), infrastructure access, and ongoing training time to improve their data science practices?

CIOs and CDOs should lead ModelOps and oversee the lifecycle

Leaders can review and address issues if the data science teams struggle to develop models. But to launch models and ensure success, CIOs and CDOs must establish a model lifecycle or ModelOps.

The lifecycle starts before model development and requires educating business leaders on their roles in contributing to AI projects. It also requires steps for planning the infrastructure at scale, instituting compliance and governance, creating an edge security strategy, and partnering with impacted teams to ensure a successful transformation.

Here are several factors to consider:  

  1. Educate business leaders about their roles in ML projects. Have business leaders defined realistic success criteria and areas of low-risk experimentation? Are they involved in pilots and providing feedback? Are they ready to transform business processes with machine learning capabilities, or will they slow down investments at the first speed bump?
  2. Adopt a build, buy, or partner strategy when developing models. Sometimes, developing proprietary models makes sense, but also evaluate frameworks such as recommendation engines or speech AI SDKs.
  3. Think a step ahead regarding production infrastructure requirements. The lab infrastructure used to develop models, and the lower scale required to pilot an AI capability, may not be the optimal production infrastructure. For example, AI in healthcare, smart buildings, and industrial applications that impact human safety may require edge or embedded computing options to ensure reliability and performance.
  4. Plan for large-scale AI applications on the edge. Where there are thousands of IoT devices, there are opportunities to deploy AI applications to run on the devices. For example, fleets of vehicles, including delivery trucks, construction tools, and farming equipment, can use device-deployed AI apps to provide real-time feedback to their operators that improve productivity and safety. An edge management solution that deploys the apps to the devices, supports communications, and provides monitoring capabilities is critical.
  5. Establish MLOps, ModelOps, and infrastructure-monitoring capabilities. The data science teams will need MLOps to automate paths to production, while compliance teams will require ModelOps and timely model updates to address model drift. Infrastructure and operations teams will want monitoring to help them review cloud infrastructure costs, performance, and reliability.
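The model drift mentioned in the last point is usually caught by comparing the distribution a feature had at training time against what the model sees in production. One common drift score is the Population Stability Index (PSI); the sketch below is a simplified stdlib-only illustration, with 0.2 used as a common rule-of-thumb retraining trigger rather than a fixed standard.

```python
import math
import random

def psi(expected, actual, bins=10):
    """Population Stability Index: compares the distribution of a numeric
    feature at training time (expected) vs. in production (actual)."""
    lo, hi = min(expected), max(expected)
    width = (hi - lo) or 1.0

    def bin_fractions(sample):
        counts = [0] * bins
        for x in sample:
            # Clamp out-of-range production values into the edge bins.
            idx = min(max(int((x - lo) / width * bins), 0), bins - 1)
            counts[idx] += 1
        # Floor each fraction so the log ratio below is always defined.
        return [max(c / len(sample), 1e-6) for c in counts]

    e, a = bin_fractions(expected), bin_fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

# Demo: a feature whose production values have shifted upward by one
# standard deviation relative to the training data.
random.seed(0)
training = [random.gauss(0, 1) for _ in range(2000)]
production = [random.gauss(1, 1) for _ in range(2000)]
print(f"PSI = {psi(training, production):.2f}")  # above the 0.2 rule of thumb
```

In a real ModelOps pipeline, a score like this would be computed per feature on a schedule, with threshold breaches feeding the alerting and retraining workflows the list above describes.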

IT teams don’t just deploy apps. They participate in planning to deliver business outcomes and then institute DevOps to ensure delivery and ongoing enhancements. Applying similar practices to data science, machine learning, and AI will improve successful pilot and production deliveries.


[1] IDC FutureScape: Worldwide Artificial Intelligence 2021 Predictions — India Implications
