nnls
- scipy.optimize.nnls(A, b, maxiter=None, *, atol=None)
- Solve argmin_x || Ax - b ||_2 for x >= 0. This problem, often called Non-Negative Least Squares (NNLS), is a convex optimization problem with convex constraints. It typically arises when x models quantities for which only nonnegative values are attainable: weights of ingredients, component costs, and so on.
- Parameters:
- A: (m, n) ndarray
- Coefficient array.
- b: (m,) ndarray, float
- Right-hand side vector.
- maxiter: int, optional
- Maximum number of iterations. Default value is 3 * n.
- atol: float, optional
- Tolerance value used in the algorithm to assess closeness to zero in the entries of the projected residual (A.T @ (A x - b)). Increasing this value relaxes the solution constraints. A typical relaxation value can be selected as max(m, n) * np.linalg.norm(A, 1) * np.spacing(1.); see the sketch after this parameter list. This value is not set as a default since the norm operation becomes expensive for large problems, hence it should be used only when necessary.
 
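For instance, assuming an arbitrary random test problem (the data and seed are chosen here only for illustration), the suggested relaxation value can be computed and passed to nnls as follows:

>>> import numpy as np
>>> from scipy.optimize import nnls
>>> rng = np.random.default_rng(12345)
>>> A = rng.standard_normal((50, 10))  # arbitrary example data
>>> b = rng.standard_normal(50)
>>> m, n = A.shape
>>> # suggested relaxation value from the atol description above
>>> atol = max(m, n) * np.linalg.norm(A, 1) * np.spacing(1.)
>>> x, rnorm = nnls(A, b, atol=atol)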
- Returns:
- x: ndarray
- Solution vector. 
- rnorm: float
- The 2-norm of the residual, || Ax - b ||_2; see the check below.
 
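As a quick sanity check on the same data as the Examples further below, rnorm should match the 2-norm of the residual computed directly:

>>> import numpy as np
>>> from scipy.optimize import nnls
>>> A = np.array([[1., 0.], [1., 0.], [0., 1.]])
>>> b = np.array([2., 1., 1.])
>>> x, rnorm = nnls(A, b)
>>> bool(np.isclose(rnorm, np.linalg.norm(A @ x - b)))
True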
- See also
- lsq_linear
- Linear least squares with bounds on the variables (see the comparison sketch below).
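For instance, the same nonnegative problem can be handed to lsq_linear with a zero lower bound; for the small system used in the Examples below, the two routines should agree to numerical tolerance:

>>> import numpy as np
>>> from scipy.optimize import nnls, lsq_linear
>>> A = np.array([[1., 0.], [1., 0.], [0., 1.]])
>>> b = np.array([2., 1., 1.])
>>> x_nnls, rnorm = nnls(A, b)
>>> res = lsq_linear(A, b, bounds=(0, np.inf))
>>> bool(np.allclose(x_nnls, res.x))
True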
- Notes
- The code is based on [2] which is an improved version of the classical algorithm of [1]. It utilizes an active set method and solves the KKT (Karush-Kuhn-Tucker) conditions for the non-negative least squares problem.
- References
[1] Lawson C., Hanson R.J., “Solving Least Squares Problems”, SIAM, 1995, DOI:10.1137/1.9781611971217
[2] Bro, Rasmus and de Jong, Sijmen, “A Fast Non-Negativity-Constrained Least Squares Algorithm”, Journal Of Chemometrics, 1997, DOI:10.1002/(SICI)1099-128X(199709/10)11:5<393::AID-CEM483>3.0.CO;2-L
- Examples
>>> import numpy as np
>>> from scipy.optimize import nnls
>>> A = np.array([[1, 0], [1, 0], [0, 1]])
>>> b = np.array([2, 1, 1])
>>> nnls(A, b)
(array([1.5, 1. ]), 0.7071067811865475)

>>> b = np.array([-1, -1, -1])
>>> nnls(A, b)
(array([0., 0.]), 1.7320508075688772)
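To make the active-set idea from the Notes concrete, the following is a minimal, unoptimized sketch of the classical Lawson-Hanson procedure of [1]. It is an illustrative assumption about how such a method can be structured, not the actual SciPy implementation, which follows the faster variant of [2] and differs in detail.

import numpy as np

def nnls_sketch(A, b, maxiter=None, tol=1e-12):
    # Hypothetical illustration of a Lawson-Hanson style active set loop;
    # this is NOT the scipy.optimize.nnls implementation.
    m, n = A.shape
    if maxiter is None:
        maxiter = 3 * n                 # same default as the maxiter parameter above
    x = np.zeros(n)
    passive = np.zeros(n, dtype=bool)   # variables currently allowed to be positive
    w = A.T @ (b - A @ x)               # projected residual (KKT multipliers)
    for _ in range(maxiter):
        if passive.all() or w[~passive].max() <= tol:
            break                       # KKT conditions satisfied: done
        # move the most violating constrained variable into the passive set
        j = np.argmax(np.where(~passive, w, -np.inf))
        passive[j] = True
        while True:
            # unconstrained least squares restricted to the passive set
            s = np.zeros(n)
            s[passive] = np.linalg.lstsq(A[:, passive], b, rcond=None)[0]
            if s[passive].min() > 0:
                break
            # step toward s only as far as feasibility allows, then shrink the set
            infeasible = passive & (s <= 0)
            alpha = np.min(x[infeasible] / (x[infeasible] - s[infeasible]))
            x = x + alpha * (s - x)
            passive &= x > tol
        x = s
        w = A.T @ (b - A @ x)
    return x, np.linalg.norm(A @ x - b)

For the small system in the Examples above, this sketch reproduces the same nonnegative solution and residual norm up to floating-point tolerance.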