How to move from experimental AI to enterprise AI

Asgarali Mia, principal consultant and data strategy advisor at iOCO.

Most enterprises have embarked on multiple artificial intelligence (AI) initiatives with different goals, on different platforms, on an experimental basis and in various business areas. This has led to many challenges in terms of governance, risk, maintenance and compliance.

Most of the time there is duplication of work, and if an AI application was not developed in a modular way, reusability can become a major problem. These multiple AI projects or applications typically access the same data sources and transformer models, so access control becomes a problem in terms of risk and governance.

Looking at the AI value chain, data is acquired from source systems, prepared and quality-checked, then transformed and ingested into a curated data store. Models are then built and fine-tuned using the curated data and, finally, published to the appropriate users.
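
To make this concrete, here is a minimal sketch of the value chain as plain Python functions; the stage names and data shapes (acquire, prepare, curate, fine_tune, publish) are illustrative assumptions rather than a prescribed pipeline.

# Minimal sketch of the AI value chain; stage names and data shapes are assumptions.
def acquire(source_systems):
    """Pull raw records from the named source systems."""
    return [{"source": s, "payload": f"raw data from {s}"} for s in source_systems]

def prepare(records):
    """Prepare the data and apply basic quality checks, dropping bad records."""
    return [r for r in records if r.get("payload")]

def curate(records):
    """Transform and ingest records into the curated data store."""
    return [{**r, "curated": True} for r in records]

def fine_tune(curated):
    """Build or fine-tune a model on the curated data (placeholder)."""
    return {"model": "domain-model-v1", "trained_on": len(curated)}

def publish(model):
    """Publish the model to the appropriate users (placeholder)."""
    print(f"Published {model['model']} (trained on {model['trained_on']} records)")

publish(fine_tune(curate(prepare(acquire(["crm", "erp"])))))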

To mitigate the risks, ensure proper governance is enforced and save costs, enterprises must consider having a single, centralised artificial data intelligence platform. Ideally, this needs to be driven by a key leader, such as a chief AI officer, who is responsible for all things AI, including driving the strategy with the support of the program owners and model stewards.

Having a centralised artificial data intelligence platform will allow the enterprise to fast-track its AI adoption strategy.

The AI program owners ensure compliance and risk management within their domain when implementing the various use cases, and the model stewards carry out the execution.

The platform should comprise the following components or functionalities:

  • An AI control centre layer, which centrally manages user security, monitors the use and performance of the AI assistants, agents and the platform itself, and shows the data lineage.
  • A data ingestion layer that creates a single point of entry from the various source systems and controls the ingestion of unstructured data, such as visual (videos, images), speech and text, as well as structured data (a minimal sketch of such an entry point follows this list). This would eliminate the risk of multiple systems accessing key data, diminish the strain on the structured data that is on premises or in the cloud, and reduce the cost of accessing unstructured data.
  • A common AI storage area that stores the raw, transformed and curated data, as well as the results the AI agents and assistants have generated. This will allow the enterprise to have consistency when making important decisions.
  • An AI engineering lifecycle management layer.
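
As a rough illustration of the single point of entry described in the ingestion bullet above, the sketch below registers approved sources and routes each payload by type into a common storage area; the class and method names are assumptions, not any specific product's API.

# Sketch of a single ingestion entry point; class and method names are assumptions.
from enum import Enum

class DataType(Enum):
    STRUCTURED = "structured"
    TEXT = "text"
    IMAGE = "image"
    VIDEO = "video"
    SPEECH = "speech"

class IngestionLayer:
    def __init__(self, approved_sources):
        self.approved_sources = set(approved_sources)
        self.storage = {t: [] for t in DataType}  # stands in for the common AI storage area

    def ingest(self, source, data_type, payload):
        # Single point of entry: unknown sources are rejected up front,
        # so no other system reaches the underlying data directly.
        if source not in self.approved_sources:
            raise PermissionError(f"Source '{source}' is not approved for ingestion")
        self.storage[data_type].append({"source": source, "payload": payload})

layer = IngestionLayer(["hr_portal", "finance_erp"])
layer.ingest("hr_portal", DataType.TEXT, "employee handbook v3")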

The AI engineering lifecycle management layer should have the following subcomponents:

Data management: Ensures data is prepared efficiently and quality checks are performed, and provides a catalogue that manages the data assets being utilised by the various models.
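
A data catalogue of this kind can start out as little more than a record per asset; in the sketch below the fields, asset names and model names are assumptions used only to show the idea.

# Sketch of a minimal data catalogue entry; field and asset names are assumptions.
from dataclasses import dataclass, field

@dataclass
class DataAsset:
    name: str
    owner: str
    quality_checked: bool
    used_by_models: list = field(default_factory=list)

catalogue = {
    "hr_policies": DataAsset("hr_policies", "HR domain", True, ["hr-assistant-v2"]),
    "invoices": DataAsset("invoices", "Finance domain", True, ["invoice-extractor-v1"]),
}

# The catalogue can then answer: which curated assets feed a given model?
print([a.name for a in catalogue.values() if "hr-assistant-v2" in a.used_by_models])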

Model management: This area should have a catalogue of all the AI assistants and agents (custom and prebuilt) and the various actions they carry out. For example, which models are being used by agents that extract text from PDFs and summarise it for end-users?
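
The question in that example becomes a simple lookup once agents, their actions and their models are catalogued; the structure, agent names and model names below are assumptions.

# Sketch of an agent/model catalogue; agent names, actions and models are assumptions.
agents = [
    {"agent": "pdf-summariser", "actions": ["extract_pdf_text", "summarise"],
     "models": ["layout-extractor-v1", "summary-llm-v3"]},
    {"agent": "hr-onboarding", "actions": ["create_account", "send_welcome_pack"],
     "models": ["hr-assistant-v2"]},
]

# Which models are used by agents that extract text from PDFs and summarise it?
models_in_use = {
    m
    for a in agents
    if "extract_pdf_text" in a["actions"] and "summarise" in a["actions"]
    for m in a["models"]
}
print(models_in_use)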

AI framework: This is one of the most important components as it takes care of AI assurance and governance, overseeing ethics, managing risks, building trust with end-users and ensuring compliance.
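
One way to operationalise such a framework is a simple pre-deployment gate that blocks a model until its governance checks are recorded; the specific checks below are assumptions standing in for whatever the enterprise's own framework mandates.

# Sketch of a pre-deployment assurance gate; the checks are illustrative assumptions.
def assurance_gate(model):
    """Return the governance issues blocking deployment (an empty list means pass)."""
    issues = []
    if not model.get("risk_assessment_done"):
        issues.append("risk assessment missing")
    if not model.get("ethics_review_done"):
        issues.append("ethics review missing")
    if not model.get("data_lineage_recorded"):
        issues.append("data lineage not recorded")
    return issues

print(assurance_gate({"name": "hr-assistant-v2", "risk_assessment_done": True}))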

Feature store: This caters for features such as prebuilt AI agents that are utilised in the functional areas of the enterprise: HR, finance, supply chain and so on. An example of a prebuilt AI agent in HR is an agent that handles onboarding of an employee, or an assistant that answers HR queries. Other functionalities that need to be present in the feature store include robotic process automation, workflow, decision flows and document management.
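
In its simplest form, such a feature store is a registry of reusable agents keyed by functional area; the areas and agent names below are assumptions.

# Sketch of a feature store of prebuilt agents per functional area; names are assumptions.
feature_store = {
    "HR": ["employee-onboarding-agent", "hr-query-assistant"],
    "Finance": ["invoice-processing-agent"],
    "Supply chain": ["demand-forecast-assistant"],
}

def available_agents(area):
    """List the prebuilt agents a business area can reuse instead of rebuilding them."""
    return feature_store.get(area, [])

print(available_agents("HR"))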

AI store: This is where enterprises keep all use cases on a single platform with different levels of access. Part of the AI store should include AI readiness assessments and advisory to ensure the different business areas are prepared to embark on the AI journey.
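
The different levels of access can be expressed as a simple mapping from use case to the roles allowed to use it; the use cases and roles below are assumptions.

# Sketch of an AI store with per-use-case access levels; use cases and roles are assumptions.
use_cases = {
    "hr-query-assistant": {"area": "HR", "access": {"hr_staff", "all_employees"}},
    "invoice-extractor": {"area": "Finance", "access": {"finance_staff"}},
}

def can_use(use_case, role):
    """Check whether a role may access a given use case in the AI store."""
    return role in use_cases.get(use_case, {}).get("access", set())

print(can_use("invoice-extractor", "hr_staff"))   # False
print(can_use("hr-query-assistant", "hr_staff"))  # True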

In conclusion, having a centralised artificial data intelligence platform will allow the enterprise to fast-track its AI adoption strategy, deliver use cases more efficiently and build trust with end-users.

Enterprises should also look at having an AI governance forum to guarantee compliance throughout the various AI programs. The chief data officer and chief AI officer should work together, as there are overlapping functionalities like data ingestion, quality and preparation, and collaboration would avoid duplication of work.

Similarly, the data and AI governance forums could operate as one entity or independently, as they would involve the same domain owners in terms of security, architecture and more.