
Mathematical optimization

Mathematical optimization (alternatively spelled optimisation) or mathematical programming is the selection of a best element, with regard to some criterion, from some set of available alternatives.[1][2] It is generally divided into two subfields: discrete optimization and continuous optimization. Optimization problems arise in all quantitative disciplines from computer science and engineering[3] to operations research and economics, and the development of solution methods has been of interest in mathematics for centuries.[4]

"Optimization" and "Optimum" redirect here. For other uses, see Optimization (disambiguation) and Optimum (disambiguation).

In the more general approach, an optimization problem consists of maximizing or minimizing a real function by systematically choosing input values from within an allowed set and computing the value of the function. The generalization of optimization theory and techniques to other formulations constitutes a large area of applied mathematics.

Optimization problems can be divided into two categories, depending on whether the variables are continuous or discrete:

An optimization problem with discrete variables is known as a discrete optimization problem, in which an object such as an integer, permutation or graph must be found from a countable set.

A problem with continuous variables is known as a continuous optimization problem, in which optimal arguments from a continuous set must be found. They can include constrained problems and multimodal problems.


An optimization problem can be represented in the following way:

Given: a function f : A → ℝ from some set A to the real numbers
Sought: an element x₀ ∈ A such that f(x₀) ≤ f(x) for all x ∈ A ("minimization") or such that f(x₀) ≥ f(x) for all x ∈ A ("maximization").

Such a formulation is called an optimization problem or a mathematical programming problem (a term not directly related to computer programming, but still in use for example in linear programming – see History below). Many real-world and theoretical problems may be modeled in this general framework.


Since the following is valid:

f(x₀) ≥ f(x) if and only if (−f)(x₀) ≤ (−f)(x),

it suffices to solve only minimization problems. However, the opposite perspective of considering only maximization problems would be valid, too.
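As a brief, hedged illustration (the toy function g below is an assumption, not from the article), a maximization problem can be handed to a minimizer simply by negating the objective and the reported optimal value:

```python
# Sketch: maximizing g by minimizing its negation (toy example with made-up g).
from scipy.optimize import minimize_scalar

def g(x):
    return -(x - 2.0) ** 2 + 3.0        # concave toy objective, maximum at x = 2

res = minimize_scalar(lambda x: -g(x))  # minimize -g in order to maximize g

print("argmax ~", res.x)                # approximately 2.0
print("max value ~", -res.fun)          # approximately 3.0 (note the sign flip)
```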


In physics, problems formulated using this technique may be referred to as energy minimization,[5] with the value of the function f representing the energy of the system being modeled. In machine learning, it is necessary to continuously evaluate the quality of a data model by using a cost function where a minimum implies a set of possibly optimal parameters with an optimal (lowest) error.


Typically, A is some subset of the Euclidean space ℝⁿ, often specified by a set of constraints, equalities or inequalities that the members of A have to satisfy. The domain A of f is called the search space or the choice set, while the elements of A are called candidate solutions or feasible solutions.


The function f is called, variously, an objective function, a criterion function, a loss function or cost function (minimization),[6] a utility function or fitness function (maximization), or, in certain fields, an energy function or energy functional. A feasible solution that minimizes (or maximizes, if that is the goal) the objective function is called an optimal solution.
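As a toy, hedged illustration of this terminology (the objective and the membership rule defining A are made-up assumptions), the sketch below enumerates a small set of candidate solutions, keeps the feasible ones, and returns an optimal solution among them:

```python
# Toy illustration of search space, feasible solutions and an optimal solution.
# The objective f and the membership rule for A are illustrative assumptions.

def f(x):
    return (x - 3) ** 2                          # objective (cost) function to minimize

candidates = range(-10, 11)                      # candidate solutions
A = [x for x in candidates if x % 2 == 0]        # feasible solutions: even integers only

optimal = min(A, key=f)                          # an optimal solution within A
print(optimal, f(optimal))                       # prints 2 with cost 1 (4 would tie)
```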


In mathematics, conventional optimization problems are usually stated in terms of minimization.


A local minimum x* is defined as an element for which there exists some δ > 0 such that, for all x ∈ A within distance δ of x* (that is, with ‖x − x*‖ ≤ δ), the expression f(x*) ≤ f(x) holds; that is to say, on some region around x* all of the function values are greater than or equal to the value at that element. Local maxima are defined similarly.


While a local minimum is at least as good as any nearby elements, a global minimum is at least as good as every feasible element. Generally, unless the objective function is convex in a minimization problem, there may be several local minima. In a convex problem, if there is a local minimum that is interior (not on the edge of the set of feasible elements), it is also the global minimum, but a nonconvex problem may have more than one local minimum not all of which need be global minima.


A large number of algorithms proposed for solving nonconvex problems – including the majority of commercially available solvers – are not capable of making a distinction between locally optimal solutions and globally optimal solutions, and will treat the former as actual solutions to the original problem. Global optimization is the branch of applied mathematics and numerical analysis that is concerned with the development of deterministic algorithms that are capable of guaranteeing convergence in finite time to the actual optimal solution of a nonconvex problem.
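As a hedged numerical sketch of this distinction (the multimodal test function, starting point and search interval are assumptions), a local gradient-based solver started in the wrong basin can return a local minimum, while a global method searches the whole interval:

```python
# Sketch: a local solver can be trapped in a local minimum of a multimodal function,
# while a global method searches the whole feasible interval. Toy data throughout.
import numpy as np
from scipy.optimize import minimize, differential_evolution

def f(x):
    x = np.atleast_1d(x)[0]
    return 0.1 * x ** 2 + np.sin(3.0 * x)        # multimodal objective

local = minimize(f, x0=2.0)                      # may stop at a nearby local minimum
best = differential_evolution(f, bounds=[(-10.0, 10.0)])  # stochastic global search

print("local search :", local.x, local.fun)
print("global search:", best.x, best.fun)
```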

Major subfields of mathematical optimization include the following:

Convex programming studies the case in which the objective function is convex (minimization) or concave (maximization) and the constraint set is convex.

Linear programming, a type of convex programming, studies the case in which the objective function is linear and the constraints are specified using only linear equalities and inequalities.
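As a brief, hedged sketch (the coefficient data below are made up, not from the article), a small linear program can be solved with SciPy's linprog routine, which minimizes a linear objective subject to linear inequality constraints and simple bounds:

```python
# Sketch of a small linear program with made-up data:
#   minimize  -x - 2y   subject to  x + y <= 4,  x <= 3,  x >= 0,  y >= 0
import numpy as np
from scipy.optimize import linprog

c = np.array([-1.0, -2.0])            # objective coefficients (linprog minimizes c @ x)
A_ub = np.array([[1.0, 1.0],
                 [1.0, 0.0]])         # left-hand sides of the "<=" constraints
b_ub = np.array([4.0, 3.0])           # right-hand sides

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, res.fun)                 # optimal point (0, 4) with objective value -8
```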

Integer programming studies linear programs in which some or all variables are constrained to take on integer values. This is not convex, and in general much more difficult than regular linear programming.
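As a hedged sketch of the same toy model with integrality restrictions (assumed data; the milp routine is available in SciPy 1.9 and later):

```python
# Sketch: the toy LP above with both variables restricted to integer values.
# Uses scipy.optimize.milp (SciPy >= 1.9); the data are made up.
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

c = np.array([-1.0, -2.0])                               # minimize c @ x
constraints = LinearConstraint(np.array([[1.0, 1.0],
                                         [1.0, 0.0]]),
                               ub=np.array([4.0, 3.0]))  # x + y <= 4, x <= 3
res = milp(c,
           constraints=constraints,
           integrality=np.ones(2),                       # 1 -> variable must be integer
           bounds=Bounds(lb=0.0))                        # x, y >= 0

print(res.x, res.fun)
```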

Quadratic programming allows the objective function to have quadratic terms, while the feasible set must be specified with linear equalities and inequalities. For specific forms of the quadratic term, this is a type of convex programming.
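As a hedged sketch (toy data, and a general-purpose solver rather than a dedicated QP code), a small convex quadratic program with one linear inequality constraint:

```python
# Sketch of a small convex quadratic program with made-up data:
#   minimize  0.5 * x^T Q x + c^T x   subject to  x1 + x2 >= 1
import numpy as np
from scipy.optimize import minimize

Q = np.array([[2.0, 0.0],
              [0.0, 2.0]])                     # positive definite, so the problem is convex
c = np.array([-1.0, -1.0])

def objective(x):
    return 0.5 * x @ Q @ x + c @ x

cons = [{"type": "ineq", "fun": lambda x: x[0] + x[1] - 1.0}]   # written in ">= 0" form

res = minimize(objective, x0=np.zeros(2), method="SLSQP", constraints=cons)
print(res.x, res.fun)                          # optimum at (0.5, 0.5)
```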

Fractional programming studies optimization of ratios of two nonlinear functions. The special class of concave fractional programs can be transformed to a convex optimization problem.

Nonlinear programming studies the general case in which the objective function or the constraints or both contain nonlinear parts. This may or may not be a convex program. In general, whether the program is convex affects the difficulty of solving it.
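As a hedged sketch (the objective, constraint and starting point are illustrative assumptions), a small nonlinear program with a nonlinear constraint can be handled by SciPy's trust-constr method:

```python
# Sketch of a small nonlinear program with made-up data:
#   minimize  (x - 1)^2 + (y - 2)^2   subject to  x^2 + y^2 <= 1
import numpy as np
from scipy.optimize import minimize, NonlinearConstraint

def objective(v):
    x, y = v
    return (x - 1.0) ** 2 + (y - 2.0) ** 2

disk = NonlinearConstraint(lambda v: v[0] ** 2 + v[1] ** 2, -np.inf, 1.0)

res = minimize(objective, x0=[0.0, 0.0], method="trust-constr", constraints=[disk])
print(res.x, res.fun)                 # the optimum lies on the boundary of the disk
```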

Stochastic programming studies the case in which some of the constraints or parameters depend on random variables.

Robust optimization is, like stochastic programming, an attempt to capture uncertainty in the data underlying the optimization problem. Robust optimization aims to find solutions that are valid under all possible realizations of the uncertainties defined by an uncertainty set.

Combinatorial optimization is concerned with problems where the set of feasible solutions is discrete or can be reduced to a discrete one.

Stochastic optimization is used with random (noisy) function measurements or random inputs in the search process.

Infinite-dimensional optimization studies the case when the set of feasible solutions is a subset of an infinite-dimensional space, such as a space of functions.

Heuristics and metaheuristics make few or no assumptions about the problem being optimized. Usually, heuristics do not guarantee that an optimal solution will be found. On the other hand, heuristics are used to find approximate solutions for many complicated optimization problems.
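As a minimal hedged sketch of a heuristic, a pure random search over an assumed box (the objective and the box are made up) needs no gradients and offers no optimality guarantee:

```python
# Minimal random-search heuristic: no gradients required, no optimality guarantee.
# The objective function and the search box are illustrative assumptions.
import random

def f(x, y):
    return (x - 1.5) ** 2 + (y + 0.5) ** 2       # toy objective to minimize

best_point, best_value = None, float("inf")
for _ in range(10_000):
    x = random.uniform(-5.0, 5.0)                # sample a candidate from the box
    y = random.uniform(-5.0, 5.0)
    value = f(x, y)
    if value < best_value:                       # keep the best candidate seen so far
        best_point, best_value = (x, y), value

print(best_point, best_value)                    # approximate, not certified optimal
```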

Constraint satisfaction studies the case in which the objective function f is constant (this is used in artificial intelligence, particularly in automated reasoning).

Constraint programming is a programming paradigm wherein relations between variables are stated in the form of constraints.

Disjunctive programming is used where at least one constraint must be satisfied but not all. It is of particular use in scheduling.

Space mapping is a concept for modeling and optimization of an engineering system to high-fidelity (fine) model accuracy exploiting a suitable physically meaningful coarse or surrogate model.

Classification of critical points and extrema

Feasibility problem

The satisfiability problem, also called the feasibility problem, is just the problem of finding any feasible solution at all without regard to objective value. This can be regarded as the special case of mathematical optimization where the objective value is the same for every solution, and thus any solution is optimal.


Many optimization algorithms need to start from a feasible point. One way to obtain such a point is to relax the feasibility conditions using a slack variable; with enough slack, any starting point is feasible. Then, minimize that slack variable until the slack is null or negative.
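As a hedged sketch of this slack-variable device (the two constraints are made-up examples), the relaxed problem below minimizes a nonnegative slack s subject to g_i(x, y) ≤ s; any starting point is feasible for a large enough s, and a final slack of essentially zero indicates that a feasible point of the original constraints has been found:

```python
# Sketch: finding a feasible point by minimizing a nonnegative slack variable s.
# Original (made-up) constraints, written as g_i <= 0:
#   g1 = 2 - x - y   (i.e. x + y >= 2),   g2 = 1 - x*y   (i.e. x*y >= 1)
import numpy as np
from scipy.optimize import minimize

def objective(v):                                 # v = [x, y, s]; minimize the slack only
    return v[2]

cons = [
    {"type": "ineq", "fun": lambda v: v[2] - (2.0 - v[0] - v[1])},   # g1 <= s
    {"type": "ineq", "fun": lambda v: v[2] - (1.0 - v[0] * v[1])},   # g2 <= s
]
bounds = [(None, None), (None, None), (0.0, None)]                   # keep s >= 0

start = np.array([0.0, 0.0, 10.0])                # infeasible (x, y), generous slack
res = minimize(objective, start, method="SLSQP", constraints=cons, bounds=bounds)
x, y, s = res.x
print((x, y), "remaining slack:", s)              # slack near 0 means (x, y) is feasible
```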

Existence

The extreme value theorem of Karl Weierstrass states that a continuous real-valued function on a compact set attains its maximum and minimum value. More generally, a lower semi-continuous function on a compact set attains its minimum; an upper semi-continuous function on a compact set attains its maximum.

Necessary conditions for optimality

One of Fermat's theorems states that optima of unconstrained problems are found at stationary points, where the first derivative or the gradient of the objective function is zero (see first derivative test). More generally, they may be found at critical points, where the first derivative or gradient of the objective function is zero or is undefined, or on the boundary of the choice set. An equation (or set of equations) stating that the first derivative(s) equal(s) zero at an interior optimum is called a 'first-order condition' or a set of first-order conditions.


Optima of equality-constrained problems can be found by the Lagrange multiplier method. The optima of problems with equality and/or inequality constraints can be found using the 'Karush–Kuhn–Tucker conditions'.
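As a hedged, textbook-style worked sketch (the particular objective and constraint are assumptions), the Lagrange multiplier method can be carried out symbolically, here with SymPy, for maximizing xy subject to x + y = 1:

```python
# Sketch: Lagrange multipliers for the toy problem
#   maximize f(x, y) = x*y   subject to   g(x, y) = x + y - 1 = 0
import sympy as sp

x, y, lam = sp.symbols("x y lam", real=True)

f = x * y
g = x + y - 1
L = f - lam * g                                   # the Lagrangian

# Stationarity of L in x and y, together with the constraint, gives the
# first-order system; solving it yields x = y = 1/2 with multiplier 1/2.
solutions = sp.solve([sp.diff(L, x), sp.diff(L, y), g], [x, y, lam], dict=True)
print(solutions)
```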

Sufficient conditions for optimality

While the first derivative test identifies points that might be extrema, this test does not distinguish a point that is a minimum from one that is a maximum or one that is neither. When the objective function is twice differentiable, these cases can be distinguished by checking the second derivative or the matrix of second derivatives (called the Hessian matrix) in unconstrained problems, or the matrix of second derivatives of the objective function and the constraints called the bordered Hessian in constrained problems. The conditions that distinguish maxima, or minima, from other stationary points are called 'second-order conditions' (see 'Second derivative test'). If a candidate solution satisfies the first-order conditions, then the satisfaction of the second-order conditions as well is sufficient to establish at least local optimality.
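As a hedged numerical sketch (the function is a made-up example), the second-order condition at a stationary point of an unconstrained problem can be checked by confirming that the Hessian there has only positive eigenvalues:

```python
# Sketch: second-order check at a stationary point of a made-up function
#   f(x, y) = x^2 + 3*y^2 - 2*x,  which is stationary at (1, 0).
import numpy as np

def hessian(x, y):
    # For this f the Hessian is constant: [[2, 0], [0, 6]].
    return np.array([[2.0, 0.0],
                     [0.0, 6.0]])

H = hessian(1.0, 0.0)
eigenvalues = np.linalg.eigvalsh(H)   # eigenvalues of the symmetric Hessian
print(eigenvalues)                    # all positive, so (1, 0) is a local minimum
```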

Sensitivity and continuity of optima

The envelope theorem describes how the value of an optimal solution changes when an underlying parameter changes. The process of computing this change is called comparative statics.


The maximum theorem of Claude Berge (1963) describes the continuity of an optimal solution as a function of underlying parameters.

Simplex algorithm of George Dantzig, designed for linear programming

Extensions of the simplex algorithm, designed for quadratic programming and for linear-fractional programming

Variants of the simplex algorithm that are especially suited for network optimization

Combinatorial algorithms

Quantum optimization algorithms

Applications

Mechanics

Problems in rigid body dynamics (in particular articulated rigid body dynamics) often require mathematical programming techniques, since rigid body dynamics can be viewed as attempting to solve an ordinary differential equation on a constraint manifold;[9] the constraints are various nonlinear geometric constraints such as "these two points must always coincide", "this surface must not penetrate any other", or "this point must always lie somewhere on this curve". Also, the problem of computing contact forces can be addressed by solving a linear complementarity problem, which can also be viewed as a QP (quadratic programming) problem.


Many design problems can also be expressed as optimization programs. This application is called design optimization. One subset is engineering optimization, and another recent and growing subset of this field is multidisciplinary design optimization, which, while useful in many problems, has in particular been applied to aerospace engineering problems.


This approach may be applied in cosmology and astrophysics.[10]

Economics and finance

Economics is closely enough linked to optimization of agents that an influential definition relatedly describes economics qua science as the "study of human behavior as a relationship between ends and scarce means" with alternative uses.[11] Modern optimization theory includes traditional optimization theory but also overlaps with game theory and the study of economic equilibria. The Journal of Economic Literature codes classify mathematical programming, optimization techniques, and related topics under JEL:C61-C63.


In microeconomics, the utility maximization problem and its dual problem, the expenditure minimization problem, are economic optimization problems. Insofar as they behave consistently, consumers are assumed to maximize their utility, while firms are usually assumed to maximize their profit. Also, agents are often modeled as being risk-averse, thereby preferring to avoid risk. Asset prices are also modeled using optimization theory, though the underlying mathematics relies on optimizing stochastic processes rather than on static optimization. International trade theory also uses optimization to explain trade patterns between nations. The optimization of portfolios is an example of multi-objective optimization in economics.


Since the 1970s, economists have modeled dynamic decisions over time using control theory.[12] For example, dynamic search models are used to study labor-market behavior.[13] A crucial distinction is between deterministic and stochastic models.[14] Macroeconomists build dynamic stochastic general equilibrium (DSGE) models that describe the dynamics of the whole economy as the result of the interdependent optimizing decisions of workers, consumers, investors, and governments.[15][16]

Electrical engineering

Some common applications of optimization techniques in electrical engineering include active filter design,[17] stray field reduction in superconducting magnetic energy storage systems, space mapping design of microwave structures,[18] handset antennas,[19][20][21] and electromagnetics-based design. Electromagnetically validated design optimization of microwave components and antennas has made extensive use of an appropriate physics-based or empirical surrogate model and space mapping methodologies since the discovery of space mapping in 1993.[22][23] Optimization techniques are also used in power-flow analysis.[24]

Civil engineering

Optimization has been widely used in civil engineering. Construction management and transportation engineering are among the main branches of civil engineering that heavily rely on optimization. The most common civil engineering problems that are solved by optimization are cut and fill of roads, life-cycle analysis of structures and infrastructures,[25] resource leveling,[26][27] water resource allocation, traffic management[28] and schedule optimization.

Operations research

Another field that uses optimization techniques extensively is operations research.[29] Operations research also uses stochastic modeling and simulation to support improved decision-making. Increasingly, operations research uses stochastic programming to model dynamic decisions that adapt to events; such problems can be solved with large-scale optimization and stochastic optimization methods.

Control engineering

Mathematical optimization is used in much modern controller design. High-level controllers such as model predictive control (MPC) or real-time optimization (RTO) employ mathematical optimization. These algorithms run online and repeatedly determine values for decision variables, such as choke openings in a process plant, by iteratively solving a mathematical optimization problem including constraints and a model of the system to be controlled.
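As a hedged sketch of the receding-horizon idea behind MPC (the scalar plant model, horizon length, weights and input bound are all made-up assumptions), at each step a small optimization over the future inputs is solved, only the first input is applied, and the procedure repeats with the new measured state:

```python
# Sketch of receding-horizon (MPC-style) control for a made-up scalar plant
#   x[k+1] = a*x[k] + b*u[k],  with bounded inputs and a quadratic cost.
import numpy as np
from scipy.optimize import minimize

a, b = 1.1, 0.5                # assumed plant parameters (slightly unstable)
horizon = 10                   # prediction horizon
u_max = 1.0                    # input constraint |u| <= u_max

def cost(u_seq, x0):
    x, total = x0, 0.0
    for u in u_seq:                            # simulate the model over the horizon
        total += x ** 2 + 0.1 * u ** 2
        x = a * x + b * u
    return total + x ** 2                      # terminal penalty

x = 2.0                                        # current (measured) state
for step in range(15):
    res = minimize(cost, x0=np.zeros(horizon), args=(x,),
                   bounds=[(-u_max, u_max)] * horizon)
    u_now = res.x[0]                           # apply only the first optimized input
    x = a * x + b * u_now                      # the plant moves; then re-optimize
    print(step, round(x, 3), round(u_now, 3))
```

Industrial MPC implementations use structured QP solvers and identified plant models rather than a generic minimizer, but the decision loop has this shape.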

Geophysics

Optimization techniques are regularly used in geophysical parameter estimation problems. Given a set of geophysical measurements, e.g. seismic recordings, it is common to solve for the physical properties and geometrical shapes of the underlying rocks and fluids. The majority of problems in geophysics are nonlinear with both deterministic and stochastic methods being widely used.
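As a generic, hedged sketch (synthetic data and a made-up exponential-decay forward model, not an actual geophysical workflow), parameter estimation of this kind is commonly posed as a nonlinear least-squares optimization:

```python
# Sketch: estimating model parameters from noisy observations by nonlinear least squares.
# The decay model and the synthetic data are illustrative assumptions.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
t = np.linspace(0.0, 5.0, 50)

def forward(params, t):
    amplitude, decay = params
    return amplitude * np.exp(-decay * t)

true_params = np.array([2.0, 0.7])
observed = forward(true_params, t) + 0.05 * rng.standard_normal(t.size)   # noisy data

def residuals(params):
    return forward(params, t) - observed       # misfit between model and observations

fit = least_squares(residuals, x0=np.array([1.0, 1.0]))
print(fit.x)                                   # close to the true parameters [2.0, 0.7]
```

In a real inversion the forward model would be a physics-based simulation and the observations actual field measurements, but the optimization structure is the same.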
