Broadly speaking, the most commonly used machine learning models are:
- Linear Regression / Logistic Regression
  - Among the simplest and oldest models, still extremely common in business, healthcare, and the social sciences.
  - Often used as a baseline model (see the sketch after this list).
- Decision Trees & Ensembles (Random Forests, Gradient Boosted Trees like XGBoost, LightGBM, CatBoost)
  - Very popular in applied machine learning because they handle structured/tabular data well.
  - Frequently the go-to in competitions (like Kaggle) when deep learning isn’t necessary.
- Neural Networks / Deep Learning (especially Transformers today)
  - Dominant in unstructured data domains (images, text, audio).
  - Convolutional Neural Networks (CNNs) → vision.
  - Recurrent Neural Networks (RNNs) → sequences, though now largely replaced by transformers.
  - Transformers → now the standard for language and increasingly for vision and multimodal tasks.
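To make the first two bullets concrete, here is a minimal sketch (assuming scikit-learn and synthetic data, not any real dataset) that fits a logistic regression baseline and a random forest on the same tabular data and compares their test accuracy:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic "structured/tabular" data: 1,000 rows, 20 numeric features.
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Baseline: logistic regression.
baseline = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Tree ensemble: random forest.
forest = RandomForestClassifier(n_estimators=200, random_state=42).fit(X_train, y_train)

print("Logistic regression accuracy:", accuracy_score(y_test, baseline.predict(X_test)))
print("Random forest accuracy:      ", accuracy_score(y_test, forest.predict(X_test)))
```

In practice you would swap the synthetic data for your own feature table; the point is the workflow: fit the simple linear baseline first, then see how much a tree ensemble improves on it.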
👉 If we had to pick the most common choice overall, it depends on the setting: tree-based ensemble models (like XGBoost or Random Forests) dominate industry work on structured data, while transformer neural networks dominate modern AI research and applications.
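For the transformer side of that picture, a minimal sketch (assuming the Hugging Face `transformers` library is installed and can download a default pretrained model; the sentiment task here is just an illustrative choice):

```python
from transformers import pipeline

# Loads a default pretrained transformer fine-tuned for sentiment analysis.
classifier = pipeline("sentiment-analysis")

print(classifier("Tree ensembles are great for tabular data."))
# Output is a list of dicts, roughly: [{'label': 'POSITIVE', 'score': 0.99}]
```

The same `pipeline` interface covers other transformer use cases (text generation, translation, image classification), which is part of why transformers have become the default in unstructured-data work.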