In the first part of our article series on AI integration, we looked at how to thoroughly prepare AI projects: defining business goals, designing the architecture and the critical role of data quality, whether it's more traditional machine learning tasks or the use of the latest language models.
We now focus on implementation - that is, when AI integration goes from theory to working system. The designs become a tangible system: data pipelines are built, the AI model is born or selected, testing takes place and finally the intelligent component is integrated into the business processes. This is where theory becomes practice, and where a lack of careful planning can most often lead to a project stalling. It is important to see that the implementation steps may have different emphasis. It requires a different approach to develop a completely new, customised AI model than it does to integrate an existing, high-performance base model (especially large language models, LLMs) into existing systems. This article will guide you through the key steps, highlighting the specificities of both paths.

Every artificial intelligence (AI) system relies on high-quality data. But it's not enough to just store the data: you need to make sure it gets to where the AI will use it in a structured, clean, correct form and version. This is particularly important when using AI in cases where fine-tuning or predictive analysis is required.
Data often has to come from several systems, be transformed and finally loaded into the AI system or a central data warehouse. ETL (Extract-Transform-Load) and ELT (Extract-Load-Transform) are proven methodologies for this - their core purpose is to ensure that data is routed and transformed correctly.
Especially when developing custom models, it is important to extract the most important information from the data, the "features" that the model needs to pay attention to during learning (e.g. the purchase frequency of a customer, the fluctuations in the sensor data of a machine). This helps the model to learn more efficiently.
In the era of large language models (LLMs), new aspects are coming to the fore:
Not every AI task requires building a model from scratch. Choosing the right strategy is key to success and efficiency.
Today, this is increasingly common, especially for tasks such as text comprehension, content generation or customer service.
Whether you are developing your own model or customising an existing one, it is important to keep track of the different versions to know how each one performs.
The completed or selected AI component must be connected to existing business applications, databases and processes. The way this is done will greatly affect the speed, reliability and future extensibility of the system.
The use of artificial intelligence does not exclude traditional software testing. In fact, testing systems is a specific challenge that needs to be complemented by AI-specific validation steps. In the case of predictive models, it is particularly important to pay attention to the behaviour in unexpected situations, the possibility of prompt injection attacks, and possible inaccuracies in the report generation.
Basic software tests are needed (do the parts work, are they well connected, can they withstand the load, are they secure).
Accuracy and reliability:
Does the model really give the right results (measured by statistical indicators such as F1-score, MAE, RMSE).
Specific challenges of LLM:

The implementation of AI does not end with the deployment of the model, it is only successful if it delivers measurable business benefits. Therefore, we need to define what we expect from it in advance:
And during live operations, the performance and costs of the system need to be constantly monitored. This includes detecting any deterioration in accuracy, tracking changes in data and, in particular, keeping the user charges for LLM services under control. Proper logging and reporting ensures transparency and the prevention of potential errors.
The implementation phase of AI integration is technically challenging, but it can be done successfully with proper planning, the right model strategy and thorough testing. The goal is to create a robust, transparent system that contributes measurably to the company's business goals, whether it is a custom-built model or a modern LLM integration.
In the third and final part of this series, we will examine:
If you want a truly working, value-creating AI solution - and not just a pilot project - stay tuned for the next part!
[banner type="encoai" text="Want to take the first step in implementing AI?" button="Apply for AI Brunch!" link="https://encoai.com/"]