Major optimization techniques in Data Science categorized by constrained vs. unconstrained scenarios – Part 1

Image credit: www.Pixabay.com

1. Unconstrained Optimization Methods

(Used when there are no explicit constraints on the variables; a minimal gradient-descent sketch follows this list.)

  • Gradient-Based Methods
  • Second-Order Methods
  • Heuristic & Meta-Heuristic Methods
  • Bayesian Optimization
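
To make the gradient-based family concrete, here is a minimal gradient-descent sketch in Python. The quadratic objective, learning rate, and iteration count are illustrative choices, not prescriptions:

```python
import numpy as np

def gradient_descent(grad, x0, lr=0.1, n_iters=100):
    """Minimize a function via plain gradient descent, given its gradient."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iters):
        x = x - lr * grad(x)  # step against the gradient direction
    return x

# Toy example: minimize f(x, y) = (x - 3)^2 + (y + 1)^2, whose gradient is
# (2(x - 3), 2(y + 1)); the minimizer is (3, -1).
grad_f = lambda v: np.array([2 * (v[0] - 3), 2 * (v[1] + 1)])
print(gradient_descent(grad_f, x0=[0.0, 0.0]))  # ~[3. -1.]
```

SGD and mini-batch variants use the same update; they simply estimate the gradient from a random subset of the data on each step.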

2. Constrained Optimization Methods

(Used when the optimization involves explicit constraints on the variables; a quadratic-penalty sketch follows this list.)

  • Convex Optimization Methods
  • Augmented Lagrangian Methods
  • Penalty Methods
  • Sequential Quadratic Programming (SQP)
  • Interior-Point Methods
  • Constraint-Specific Heuristic Approaches
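
A common way to bridge the two categories is to fold the constraints into the objective. Below is a minimal quadratic-penalty sketch using SciPy; the toy problem (minimize x² + y² subject to x + y = 1) and the penalty schedule are invented for illustration:

```python
import numpy as np
from scipy.optimize import minimize

# Minimize f(x, y) = x^2 + y^2 subject to x + y = 1 by adding a quadratic
# penalty rho * (x + y - 1)^2 and solving the resulting unconstrained problem.
def penalized(v, rho):
    f = v[0] ** 2 + v[1] ** 2
    violation = v[0] + v[1] - 1.0
    return f + rho * violation ** 2

x = np.zeros(2)
for rho in [1.0, 10.0, 100.0, 1000.0]:  # gradually tighten the penalty
    x = minimize(penalized, x, args=(rho,)).x  # warm-start from the last solve
print(x)  # approaches the true solution (0.5, 0.5) as rho grows
```

Increasing the penalty weight gradually, rather than starting with a huge value, keeps each unconstrained subproblem well conditioned.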

Here is a fuller breakdown of the individual methods in each category, again with a short illustrative sketch after each list:

Unconstrained Optimization Methods

  1. Gradient Descent
  2. Stochastic Gradient Descent (SGD)
  3. Mini-Batch Gradient Descent
  4. Momentum
  5. Nesterov Accelerated Gradient (NAG)
  6. RMSprop
  7. AdaGrad
  8. Adam
  9. Adadelta
  10. Newton’s Method
  11. Quasi-Newton Methods (BFGS, L-BFGS)
  12. Conjugate Gradient Method
  13. Genetic Algorithms
  14. Simulated Annealing
  15. Particle Swarm Optimization (PSO)
  16. Bayesian Optimization
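
As a concrete instance from this list, here is a from-scratch Adam sketch. β₁ = 0.9 and β₂ = 0.999 follow the commonly quoted defaults; the learning rate and the toy objective are illustrative choices:

```python
import numpy as np

def adam(grad, x0, lr=0.01, beta1=0.9, beta2=0.999, eps=1e-8, n_iters=2000):
    """Minimal Adam optimizer: per-coordinate steps scaled by running
    estimates of the gradient's first and second moments."""
    x = np.asarray(x0, dtype=float)
    m = np.zeros_like(x)  # first-moment (mean) estimate
    v = np.zeros_like(x)  # second-moment (uncentered variance) estimate
    for t in range(1, n_iters + 1):
        g = grad(x)
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g ** 2
        m_hat = m / (1 - beta1 ** t)  # bias correction for the zero init
        v_hat = v / (1 - beta2 ** t)
        x = x - lr * m_hat / (np.sqrt(v_hat) + eps)
    return x

# Same toy objective as before: minimizer of (x - 3)^2 + (y + 1)^2 is (3, -1).
grad_f = lambda w: np.array([2 * (w[0] - 3), 2 * (w[1] + 1)])
print(adam(grad_f, x0=[0.0, 0.0]))  # ≈ [3, -1]
```

Momentum, RMSprop, AdaGrad, and Adadelta are variations on the same theme: reuse past gradients to smooth or rescale the current step.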

Constrained Optimization Methods

  1. Linear Programming (LP)
  2. Simplex Method
  3. Interior-Point Methods
  4. Quadratic Programming (QP)
  5. Semidefinite Programming (SDP)
  6. Augmented Lagrangian Method
  7. Penalty Method (Quadratic Penalty)
  8. Barrier Methods (Log Barrier)
  9. Sequential Quadratic Programming (SQP)
  10. Genetic Algorithms with Constraints
  11. Constrained Particle Swarm Optimization (CPSO)
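
To ground the constrained list, here is a small linear program solved with SciPy's linprog; the problem data is a toy example invented for illustration:

```python
from scipy.optimize import linprog

# Toy LP: maximize 3x + 2y subject to x + y <= 4, x <= 3, and x, y >= 0.
# linprog minimizes, so the objective is negated.
res = linprog(
    c=[-3, -2],              # minimize -3x - 2y  <=>  maximize 3x + 2y
    A_ub=[[1, 1], [1, 0]],   # inequality constraints: A_ub @ x <= b_ub
    b_ub=[4, 3],
    bounds=[(0, None), (0, None)],  # x, y >= 0
    method="highs",
)
print(res.x, -res.fun)  # optimal point (3, 1) and objective value 11
```

The same pattern scales: an LP solver only needs the objective vector, the constraint matrices, and the variable bounds, so modeling is mostly a matter of writing the problem in that standard form.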

By Neil Harwani
