
AI model training


Summary


S00291
Version: 1.11

  • arable farming
  • horticulture
  • viticulture
  • training
  • software; other
  • Location: remote
  • Offered by: POLIMI; UNIMI


Description

This service concerns training AI models for the customer for a specific task and optimization objective, e.g. improving accuracy on crop classification from image data. The target model in this case is the solution provided by the customer, which needs to be enhanced with respect to a set of pre-determined features to reach the desired TRL. The training can, however, also be extended to additional state-of-the-art models available on the market, for benchmarking purposes. Model features to improve, reference model baselines to include in the performance comparison, and benchmark datasets may have been previously identified via S00179 - Desk assessment activities for digital systems and/or data.

The data used for training the model can be provided by the customer, annotated ad hoc as a preparatory activity to model training via S???? - Data labeling, or retrieved from openly available reference benchmark datasets. We will also agree with the customer on the level of hardware acceleration required by the considered AI models: e.g., GPU acceleration via connection to a remote server vs. on-device training. The training procedure itself will be monitored by tracking the evaluation metrics that are relevant to the end task (e.g., training loss with respect to the optimization objective, average classification precision and accuracy, …).
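To make the monitoring step concrete, the sketch below shows a minimal supervised training loop that tracks the training loss and the classification accuracy per epoch, with optional GPU acceleration when a CUDA device is available. It is illustrative only: PyTorch is assumed as the framework, and the synthetic dataset, the small CNN and the hyper-parameters are placeholders for the customer's own model and data.

```python
# Minimal sketch of a supervised training loop with metric tracking.
# Assumes PyTorch; the data, model and hyper-parameters are placeholders.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Synthetic stand-in for a crop-classification dataset (4 classes, 64x64 RGB).
images = torch.randn(256, 3, 64, 64)
labels = torch.randint(0, 4, (256,))
loader = DataLoader(TensorDataset(images, labels), batch_size=32, shuffle=True)

# Placeholder model: a tiny CNN; the customer's own model would be plugged in here.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(16, 4),
)

device = "cuda" if torch.cuda.is_available() else "cpu"  # optional GPU acceleration
model.to(device)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2, momentum=0.9)

for epoch in range(5):
    running_loss, correct, seen = 0.0, 0, 0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        optimizer.zero_grad()
        logits = model(x)
        loss = criterion(logits, y)
        loss.backward()
        optimizer.step()
        # Track the metrics relevant to the end task: loss and accuracy.
        running_loss += loss.item() * x.size(0)
        correct += (logits.argmax(dim=1) == y).sum().item()
        seen += x.size(0)
    print(f"epoch {epoch}: loss={running_loss / seen:.4f} "
          f"accuracy={correct / seen:.3f}")
```

In an actual engagement the synthetic tensors would be replaced by the customer's annotated images or an open benchmark dataset, and the tracked metrics by those agreed with the customer for the end task.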

Example service: The customer is interested in promptly identifying the emergence of the Peronospora (downy mildew) disease in vineyards. Peronospora symptoms can be detected by inspecting changes on the leaf surface (the appearance of small spots, gradual changes in leaf colour). The customer has already implemented a Computer Vision model to classify leaves as healthy or unhealthy. However, the model needs to be re-trained to account for the collection of higher-quality images and annotations of disease symptoms (e.g., via S00113 - Collection of test data during physical testing and S???? - Data labeling).

Since the solution is expected to work in real time, we use a TPU-accelerated stick readily available on the market to train the model directly on the device, as opposed to training the model offline on a remote server. Given the real-time performance requirements, we opt for an incremental training protocol, where only a few image examples (i.e., shots) are used for each update of the model parameters, so that the customer can update the model modularly in the future, as soon as additional images are acquired. The performance of the model across training iterations is tracked with respect to the Binary Cross-Entropy Loss associated with the healthy and unhealthy classes, and to the Mean Average Precision of the detected leaf regions.

Ultimately, we deliver to the customer the model checkpoints that achieved the highest performance, together with a report explaining how the best model was chosen and which parameters were used at training time (e.g., learning rate, batch size, momentum, …). The trained model can be provided in a lightweight format that supports on-device learning (e.g., TFLite), but also in more interoperable formats like ONNX, to facilitate conversion across different Deep Learning frameworks and computing devices.
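The sketch below illustrates, under the same assumptions as before (PyTorch, placeholder model and data), the incremental few-shot update protocol described above: a few examples per parameter update, Binary Cross-Entropy as the training loss, checkpoint selection, and export of the best checkpoint to ONNX. File names such as best_checkpoint.pt and leaf_classifier.onnx are hypothetical, and the Mean Average Precision computation over detected leaf regions is omitted for brevity and replaced by a placeholder selection criterion.

```python
# Hedged sketch of a few-shot incremental update with checkpoint selection
# and ONNX export. Model, data and file names are illustrative placeholders.
import torch
from torch import nn

# Stand-in for the customer's healthy/unhealthy leaf classifier.
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(8, 1),
)
criterion = nn.BCEWithLogitsLoss()  # Binary Cross-Entropy over healthy/unhealthy
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

best_metric = float("-inf")
for step in range(10):
    # A few "shots" per update: here 4 synthetic 64x64 leaf crops with labels.
    shots = torch.randn(4, 3, 64, 64)
    targets = torch.randint(0, 2, (4, 1)).float()
    optimizer.zero_grad()
    loss = criterion(model(shots), targets)
    loss.backward()
    optimizer.step()

    # In practice the selection metric would be mAP over detected leaf regions
    # on a validation set; the negative training loss is used as a placeholder.
    metric = -loss.item()
    if metric > best_metric:
        best_metric = metric
        torch.save(model.state_dict(), "best_checkpoint.pt")  # hypothetical path

# Re-load the best checkpoint and export it to ONNX for interoperability.
model.load_state_dict(torch.load("best_checkpoint.pt"))
torch.onnx.export(model, torch.randn(1, 3, 64, 64), "leaf_classifier.onnx")
```

A TFLite export for on-device learning would follow the same pattern with the appropriate converter for the chosen framework; the checkpointing and selection logic reported to the customer stays the same.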