Future of AI Systems

Xiaozhe Yao

AID Team

This article was written together with GitHub Copilot.

When we talk about AI or machine learning, we focus a lot on the "models" or the "algorithms" that we build. The drive to build better algorithms has been the engine of the AI revolution. However, successful models and algorithms are not always deployed or used in production systems in the proper way, or at the proper place.

Two years ago, when we first started the AID project, we set out to build a system that allows users to label images, train machine learning models, and test them online. We visited a company that was trying to apply machine learning to defective component detection. At that time, neural networks were the rising standard in AI and computer vision, so we assumed the company would be using state-of-the-art, highly accurate computer vision models. To our surprise, however, they were using either traditional computer vision algorithms or simple, small neural networks.

Reflecting on that discussion some time later, I realized that:

  • Accurate but large models may not always be the best models.
  • It depends on the application, or more concretely, on the data and the goal you have.
    • If your data is "good": captured under the same conditions, with similar distributions across all samples, etc., then a simple neural network may be the best choice.
    • If your data is "bad" but you have a powerful training setup, then a more complex model may be a better choice. Even then, your "complex" model may not work as you wish (see the sketch after this list).
  • If your data is not good, a better bet might be to capture more data rather than to use more complex models.
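
To make the first two points concrete, here is a minimal, illustrative sketch. It is not from the AID project; it assumes a scikit-learn setup and synthetic data standing in for "good" (clean) and "bad" (noisy) datasets. It compares a simple model against a larger one as label noise increases; exact numbers will vary from run to run.

```python
# Illustrative sketch (hypothetical setup, not from the article):
# compare a simple model and a larger one on clean vs. noisy data.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

def evaluate(noise: float):
    # flip_y controls label noise: 0.0 ~ "good" data, 0.3 ~ "bad" data.
    X, y = make_classification(n_samples=2000, n_features=20,
                               flip_y=noise, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    # A simple baseline model.
    simple = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    # A larger model standing in for a "complex" network.
    larger = MLPClassifier(hidden_layer_sizes=(256, 256),
                           max_iter=500, random_state=0).fit(X_tr, y_tr)
    return simple.score(X_te, y_te), larger.score(X_te, y_te)

for noise in (0.0, 0.3):
    s, c = evaluate(noise)
    print(f"label noise={noise}: simple={s:.2f}, larger={c:.2f}")
```

On clean data the simple model is usually competitive with the larger one, and on noisy data both degrade; the larger model does not automatically win, which is the point of the list above.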