Most common machine learning models

Broadly speaking, the most commonly used machine learning models are:

  1. Linear Regression / Logistic Regression
    • Among the simplest and oldest models, still extremely common in business, healthcare, and social sciences.
    • Often used as a baseline model (a minimal baseline sketch follows this list).
  2. Decision Trees & Ensembles (Random Forests, Gradient Boosted Trees like XGBoost, LightGBM, CatBoost)
    • Very popular in applied machine learning because they handle structured/tabular data well.
    • Frequently the go-to in competitions (like Kaggle) when deep learning isn’t necessary (a short sketch follows the summary below).
  3. Neural Networks / Deep Learning (especially Transformers today)
    • Dominant in unstructured data domains (images, text, audio).
    • Convolutional Neural Networks (CNNs) → vision.
    • Recurrent Neural Networks (RNNs) → sequences, now largely superseded by Transformers.
    • Transformers → now the standard for language and increasingly for vision and multimodal tasks (a toy encoder sketch appears at the end).
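
For item 1, here is a minimal sketch of using logistic regression as a baseline classifier. It assumes scikit-learn is installed, and the synthetic dataset is purely illustrative, not a real benchmark:

```python
# Minimal baseline sketch, assuming scikit-learn is available.
# The synthetic dataset is illustrative only.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Generate a small synthetic tabular dataset (1,000 rows, 20 features).
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Fit the baseline: plain logistic regression with default regularization.
baseline = LogisticRegression(max_iter=1000)
baseline.fit(X_train, y_train)

print("baseline accuracy:", accuracy_score(y_test, baseline.predict(X_test)))
```

The point of a baseline like this is simply to set a reference score that more complex models have to beat.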

👉 If we had to pick the single most common model overall, it would likely be tree-based ensemble models (like XGBoost or Random Forests) in industry for structured data, and transformer neural networks in modern AI research and applications.
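
For item 2, a sketch of the same kind of task with tree ensembles, again assuming a recent scikit-learn; XGBoost, LightGBM, and CatBoost expose a very similar fit/predict interface, so the pattern carries over:

```python
# Tree-ensemble sketch, assuming a recent scikit-learn.
# XGBoost / LightGBM / CatBoost follow the same fit/predict pattern.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, HistGradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Bagged trees (Random Forest) and boosted trees, the two main ensemble styles.
forest = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_train, y_train)
boosted = HistGradientBoostingClassifier(random_state=0).fit(X_train, y_train)

print("random forest accuracy:", accuracy_score(y_test, forest.predict(X_test)))
print("gradient boosting accuracy:", accuracy_score(y_test, boosted.predict(X_test)))
```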
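
And for item 3, a toy Transformer-encoder forward pass in PyTorch (illustrative dimensions, no training loop), just to show the shape of the building block behind modern language and vision models:

```python
# Toy Transformer-encoder sketch, assuming PyTorch; dimensions are illustrative.
import torch
import torch.nn as nn

# One stack of self-attention + feed-forward blocks, the core of a Transformer.
layer = nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True)
encoder = nn.TransformerEncoder(layer, num_layers=2)

# A batch of 8 "sentences", each 16 tokens long, already embedded to 64 dims.
tokens = torch.randn(8, 16, 64)
contextual = encoder(tokens)  # same shape (8, 16, 64), now context-aware

print(contextual.shape)
```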