GD TV
Gradient descent with built-in TV constraint and a flexible encoding model.
mr_utils.cs.convex.gd_tv.GD_TV(y, forward_fun, inverse_fun, alpha=0.5, lam=0.01, do_reordering=False, x=None, ignore_residual=False, disp=False, maxiter=200)

Gradient descent for a generic encoding model and TV constraint.
Parameters:
- y (array_like) – Measured data (i.e., y = Ax).
- forward_fun (callable) – A, the forward transformation function.
- inverse_fun (callable) – A^H, the inverse transformation function.
- alpha (float, optional) – Step size.
- lam (float, optional) – TV constraint weight.
- do_reordering (bool, optional) – Whether or not to reorder for sparsity constraint.
- x (array_like, optional) – The true image we are trying to reconstruct.
- ignore_residual (bool, optional) – Whether or not to break out of the loop if the residual increases.
- disp (bool, optional) – Whether or not to display iteration info.
- maxiter (int, optional) – Maximum number of iterations.
Returns: x_hat – Estimate of x.
Return type: array_like
Notes
Solves the problem:

\[\min_x || y - Ax ||^2_2 + \lambda \text{TV}(x)\]

If x=None, then MSE will not be calculated.
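The objective above can be sketched in plain NumPy. This is not the mr_utils implementation: the 1-D signal assumption, sign-based TV subgradient, adjoint initialization, and toy masking operator below are assumptions for illustration, and the reordering and residual-check options are omitted.

```python
import numpy as np

def gd_tv_sketch(y, forward_fun, inverse_fun, alpha=0.5, lam=0.01, maxiter=200):
    """Minimize ||y - A x||_2^2 + lam*TV(x) by (sub)gradient descent.

    Sketch only: 1-D signals, no reordering, no residual-based early exit.
    """
    x_hat = inverse_fun(y)  # adjoint reconstruction as the initial guess
    for _ in range(maxiter):
        # Data-fidelity gradient: A^H (A x - y)
        grad_fid = inverse_fun(forward_fun(x_hat) - y)
        # Subgradient of TV(x) = sum_i |x[i+1] - x[i]|
        s = np.sign(np.diff(x_hat))
        grad_tv = np.concatenate(([-s[0]], -np.diff(s), [s[-1]]))
        x_hat = x_hat - alpha * (grad_fid + lam * grad_tv)
    return x_hat

# Toy problem: A masks out two samples of a piecewise-constant signal,
# so the forward and inverse transforms are the same (self-adjoint) mask.
mask = np.ones(20)
mask[[5, 14]] = 0
A = lambda x: mask * x
x_true = np.concatenate([np.ones(10), 3 * np.ones(10)])
y = A(x_true)
x_hat = gd_tv_sketch(y, A, A, alpha=0.5, lam=0.01, maxiter=400)
```

The TV term pulls the two unobserved samples toward their piecewise-constant neighbors, which is what makes the reconstruction succeed where plain least squares would leave them at zero.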