Linear programming by Karmarkar's algorithm

Linear programming problem

A linear programming problem is one in which both the objective function and the constraints are linear in the vector ${\bf x}$, and ${\bf x}$ is chosen to minimize the objective, as shown below.

\min {\bf c}^T{\bf x}
\text{subject to}\quad {\bf A}{\bf x}\le {\bf b}

Here, if the dimension of ${\bf x}$ is $n$ and the number of constraints is $m$, then ${\bf x}$ and ${\bf c}$ are $n$-dimensional vectors, ${\bf b}$ is an $m$-dimensional vector, and ${\bf A}$ is an $m \times n$ matrix.
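As a concrete instance of this form, the problem data can be laid out with NumPy and a candidate point checked for feasibility. The numbers here are the same small example used in the script later in this article:

```python
import numpy as np

# Example problem in the form  min c^T x  subject to  A x <= b
# (n = 2 variables, m = 2 constraints; the same data as the script below)
c = np.array([-1.0, -1.0])          # objective coefficients (n-vector)
amat = np.array([[1.0, 1.0],
                 [1.0, -1.0]])      # constraint matrix (m x n)
b = np.array([0.5, 1.0])            # right-hand side (m-vector)

x = np.array([-2.0, -2.0])          # the starting point used later
print(bool(np.all(amat @ x < b)))   # strictly feasible (interior point) -> True
print(c @ x)                        # objective value c^T x -> 4.0
```

Interior point methods require such a strictly feasible starting point, which is why the check uses a strict inequality.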

Karmarkar method

The [simplex method](https://en.wikipedia.org/wiki/Simplex_algorithm) has long been used to solve linear programming problems, but it can be inefficient on large-scale problems, so interior point methods such as the one introduced here are used instead. Karmarkar's algorithm is the best-known interior point method for linear programming. It is often mentioned in discussions of software patents, and it is regarded as a beautiful algorithm that can be written very compactly in code. The following is a Python function implementing the Karmarkar method.

#!/usr/bin/python
#coding:utf-8
import numpy as np

def karmarkar_method(x, c, amat, b,
                     gamma=1.0, eps=1.0e-3, nloop=30):
    """
Solving linear programming problems with the Karmarkar method
    object  min z = c^T * x
    subject Ax <= b
    """
    for n in range(nloop):
        vk = b - amat * x  # slack of each constraint (positive while x is interior)
        # metric weighted by the inverse squared slacks
        gmat = amat.T * np.diag(1.0 / np.square(vk.A1)) * amat
        d = np.linalg.pinv(gmat) * c  # scaled descent direction
        if np.linalg.norm(d) < eps:   # step negligible: converged
            break
        hv = -amat * d
        if np.max(hv) <= 0:           # no constraint limits the step
            print("Unbounded!")
            x = None
            break
        # damped step: gamma times the distance to the nearest constraint
        alpha = gamma * np.min(vk[hv > 0] / hv[hv > 0])
        x -= alpha * d
        yield x  # yield each iterate so the progress can be drawn


if __name__ == "__main__":
    c = np.matrix([[-1.0],
                   [-1.0]])
    amat = np.matrix([[1.0, 1.0],
                      [1.0, -1.0]])
    b = np.matrix([[0.5],
                   [1.0]])
    #drawing
    from pylab import *
    ax = subplot(111, aspect='equal')
    x = np.arange(-3.0, 3.01, 0.15)
    y = np.arange(-3.0, 3.01, 0.15)
    X,Y = meshgrid(x, y)
    t = np.arange(-3.0, 3.01, 0.15)
    func = lambda x, y : c[0, 0] * x + c[1, 0] * y
    const = [lambda x : -amat[0, 0] / amat[0, 1] * x + b[0, 0] / amat[0, 1],
             lambda x : -amat[1, 0] / amat[1, 1] * x + b[1, 0] / amat[1, 1]]
    Z = func(X, Y)
    s = [const[i](t) for i in range(2)]
    pcolor(X, Y, Z)
    for i in range(2):
        ax.plot(t, s[i], 'k')
    for x in karmarkar_method(np.matrix([[-2.0], [-2.0]]), c, amat, b, gamma=0.5, eps=1.0e-3, nloop=30):
        print(x)
        ax.plot([x[0,0]], [x[1,0]],'ro')
        axis([-3, 3, -3, 3])
    show()

This time, the function yields not only the final value but every intermediate iterate, so the progress can be drawn. The figure below shows the iterates approaching the minimum.
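For checking the convergence without matplotlib, the same iteration can be restated compactly with `np.array` instead of `np.matrix` and run on the same example problem. This is a sketch: `affine_scaling_steps` is a name introduced here, not part of the article's code, and it returns only the final point rather than yielding each iterate:

```python
import numpy as np

def affine_scaling_steps(x, c, A, b, gamma=0.5, eps=1e-3, nloop=30):
    """The same iteration as karmarkar_method, restated with np.array."""
    for _ in range(nloop):
        v = b - A @ x                      # slack of each constraint
        G = A.T @ np.diag(1.0 / v**2) @ A  # metric weighted by inverse squared slacks
        d = np.linalg.pinv(G) @ c          # scaled descent direction
        if np.linalg.norm(d) < eps:        # step negligible: converged
            break
        h = -A @ d
        if np.max(h) <= 0:                 # no constraint limits the step
            return None                    # unbounded problem
        alpha = gamma * np.min(v[h > 0] / h[h > 0])  # damped step to the boundary
        x = x - alpha * d
    return x

c = np.array([-1.0, -1.0])
A = np.array([[1.0, 1.0], [1.0, -1.0]])
b = np.array([0.5, 1.0])
x = affine_scaling_steps(np.array([-2.0, -2.0]), c, A, b)
print(c @ x)  # approaches the optimum -0.5, stopping just inside the boundary
```

With `gamma=0.5` the slack of the active constraint is roughly halved at each step, so after a handful of iterations the objective value sits just above the optimal $-0.5$.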

[Figure karmarkar.png: the iterates converging toward the constrained minimum]

The blue area shows where the objective function is smaller, and you can see that the red dots head toward the blue region and stop just before the constraint line. Interior point methods are now also used for quadratic programming and nonlinear programming problems, and the Karmarkar method can be regarded as their precursor.
