I am planning to build a Machine Learning based NetApp. Is there inbuilt support for that in 5GASP? What approach can I use to develop and deploy my ML models on the 5GASP platform?
Hi @dev, thank you for the query. There is no specific answer to this question. The nature and applicability of ML are as broad as the range of possible NetApps in the 5G ecosystem. There are several aspects to consider when selecting the right ML approach for your application. Some of them include:
- Define the problem you want to solve. Is it a classification, prediction, clustering, or control problem?
On one hand, if your problem is related to classification or prediction, you can start by looking into supervised learning approaches. If you need to cluster data with similar features, explore unsupervised learning. Both approaches depend on the type of data you will be dealing with. On the other hand, if you are looking for an ML approach that helps you solve an optimization problem within a closed control loop, you can explore reinforcement learning.
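As a minimal sketch of the first two cases (assuming Python with scikit-learn, which 5GASP does not mandate; the data and labels below are synthetic placeholders), the same feature matrix can feed either a supervised classifier or an unsupervised clustering model, depending on whether labels are available:

```python
# Sketch only: synthetic data illustrating supervised vs. unsupervised setups.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 4))             # e.g. per-flow KPIs: latency, jitter, loss, throughput
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # hypothetical labels, e.g. "degraded" vs. "healthy"

# Supervised: labels are available -> classification / prediction
clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(X, y)
print("Predicted classes:", clf.predict(X[:5]))

# Unsupervised: no labels -> group samples with similar features
km = KMeans(n_clusters=2, n_init=10, random_state=0)
print("Cluster assignments:", km.fit_predict(X)[:5])
```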
- Understand your data sources.
It is expected that, in the coming decade, more than a billion connected devices, including vehicles and robots, in addition to humans, will generate zettabytes of data and information. Understanding your data sources and the way you collect and process this data is key to the success of your ML-empowered NetApp. You may need to convert the raw incoming data into a format suitable for your ML models to consume. This could include data homogenization, aggregation, labelling (in the case of supervised learning), etc.
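A hedged sketch of this kind of preparation, assuming pandas and a hypothetical raw KPI log (the column names and labelling rule are illustrative, not a 5GASP format):

```python
# Sketch only: column names and the labelling rule are hypothetical examples.
import pandas as pd

raw = pd.DataFrame({
    "timestamp": pd.to_datetime(["2023-01-01 10:00:00", "2023-01-01 10:00:30",
                                 "2023-01-01 10:01:10", "2023-01-01 10:01:40"]),
    "latency_ms": [12.0, 250.0, 15.5, None],
    "throughput_mbps": [95.0, 20.0, 90.0, 88.0],
})

# Homogenization: fill missing values (and normalize units/scales as needed)
raw["latency_ms"] = raw["latency_ms"].fillna(raw["latency_ms"].median())

# Aggregation: resample the stream into fixed 1-minute windows
agg = raw.set_index("timestamp").resample("1min").mean()

# Labelling (supervised learning only): derive a target from domain knowledge
agg["degraded"] = (agg["latency_ms"] > 100).astype(int)
print(agg)
```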
- Select the right ML algorithm based on your needs and resources.
Once you have understood the domain of your problem and your data sources, you can start exploring the specific ML model or models that suit your NetApp. These could include decision trees for supervised learning, K-means for unsupervised learning, or Q-learning for reinforcement learning. Important things to consider when choosing an ML model are as follows:
* Performance of the model in the selected domain. Accuracy, precision, recall, and F1-score are some important metrics for analyzing the performance of the ML model in your application (see the sketch after this list).
* Interpretability and explainability of the ML model. Understanding how an ML algorithm works and why it produces a given outcome is not easy and may deter its usability. Unfortunately, there is no silver bullet: this task usually requires a balance between performance and interpretability. Explainable AI (XAI) is a rapidly growing research area that promotes the understandability of ML models and their outcomes. You will need to find an ML model that suits your task and produces results you can understand.
* Complexity and scalability of the ML model. The purpose of building NetApps is to provide certain functionalities to verticals and their associated use cases. Like 5G networks themselves, these functionalities must scale. The ML models used in your NetApp should therefore be scalable and maintainable.
* Computational resources and costs. Building, training, and deploying ML models is not cheap, in terms of both time and money. Depending on the characteristics of your models and source data, you may require powerful processing units as well as long training times. Analyzing both the requirements of your NetApp and the resources available in your deployment environment (ask about our 5G testbeds) is key to its successful operation.
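As an example of the performance and interpretability points above, here is a rough sketch of how those metrics (and a simple interpretability check) could be computed with scikit-learn for a decision tree; the synthetic data and model parameters are placeholders for your NetApp's real dataset:

```python
# Sketch: evaluates a decision tree with the metrics mentioned above.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

X, y = make_classification(n_samples=500, n_features=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

model = DecisionTreeClassifier(max_depth=4, random_state=0)
model.fit(X_train, y_train)
y_pred = model.predict(X_test)

print("accuracy :", accuracy_score(y_test, y_pred))
print("precision:", precision_score(y_test, y_pred))
print("recall   :", recall_score(y_test, y_pred))
print("f1-score :", f1_score(y_test, y_pred))

# A basic interpretability aid: which input features drive the decisions?
print("feature importances:", model.feature_importances_)
```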
- Train and deploy your models.
Using ML models commonly involves two main tasks: training and deployment (also known as inference). The training process involves setting up the pipeline and the evaluation criteria for your ML models. You will need to run your models up to a defined training time. During this process, keep track of the model and its environment, including hyperparameters, input data, and resulting outcomes. Verify and validate (V&V) the models' performance and retrain if required; this can include, among other things, updating hyperparameters and/or further processing your input data. Once the trained models are ready for deployment, you can move to the inference stage, where you execute the models against (new) input data, either in real time or in batches. Inference can be performed standalone on your local host/network or at a larger scale, for example using the 5GASP ecosystem. The former can be useful for validating the performance of your models in terms of accuracy and, especially, inference time. How long does your model take to make a prediction? Analyzing these trade-offs is crucial for your NetApp's performance. Deploying your models at a larger scale requires considering the different stages/aspects of the pipeline in your targeted deployment environment (discussed in the next point).
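To illustrate the local validation step, here is a hedged sketch (again assuming scikit-learn, plus joblib for persistence; the model, file name, and data are placeholders) that saves a trained model, reloads it as you would at deployment time, and measures its per-prediction inference time:

```python
# Sketch: persist a trained model and measure how long a single prediction takes.
import time
import joblib
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=1000, n_features=8, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

joblib.dump(model, "netapp_model.joblib")    # artefact to ship with your NetApp
loaded = joblib.load("netapp_model.joblib")

sample = X[:1]
n_runs = 1000
start = time.perf_counter()
for _ in range(n_runs):
    loaded.predict(sample)
elapsed = time.perf_counter() - start
print(f"mean inference time: {elapsed / n_runs * 1e3:.3f} ms per prediction")
```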
- Understand the 5GASP ecosystem for the CI/CD of your NetApp. How will you maintain your ML models once deployed?
The 5GASP project offers state-of-the-art 5G facilities across Europe through 6 interconnected testbeds, each offering different capabilities for deploying and testing your NetApp in real-world scenarios. Guidelines are available to help you understand the 5GASP pipeline and onboard your NetApp to the ecosystem. Understanding the ecosystem will allow you not only to deploy your ML models but also to maintain them, since they usually require retraining.
- Contact 5GASP experts through the community forum.
If you have any questions, contact our experts through our community portal. We can provide valuable information and examples for implementing your NetApp.
I hope this helps.