Linearized Bregman Iteration for Compressed Sensing and Related Problems

Stanley Osher, David Mao, Bin Dong, Wotao Yin
March 4, 2008
Background

min_u ‖u‖_1   s.t.   Au = f

Assumption: A ∈ R^{m×n} with m < n has full row rank.
Examples:
Sub-matrix of a Fourier matrix
Sub-matrix of a Gaussian random matrix
Such matrices satisfy the restricted isometry condition.
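A minimal NumPy sketch of the two example measurement ensembles (an illustration added here; the sizes m, n and the seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 128, 512                                  # m < n: underdetermined system

# Sub-matrix of the n x n DFT matrix: keep m randomly chosen rows.
F = np.fft.fft(np.eye(n)) / np.sqrt(n)
A_fourier = F[rng.choice(n, size=m, replace=False), :]

# Gaussian random matrix with i.i.d. N(0, 1/m) entries.
A_gauss = rng.standard_normal((m, n)) / np.sqrt(m)

# Both constructions have full row rank with probability 1.
print(np.linalg.matrix_rank(A_gauss))            # 128
```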
Unconstrained Formulation

min_u { J(u) + H(u) },   J(u) = µ‖u‖_1,   H(u) = (1/2)‖Au − f‖_2^2

A convex optimization problem, but not differentiable.
J is approximated by J_ε(u) = µ Σ_{i=1}^n F_ε(u_i), where

F_ε(u_i) = u_i^2 / (2ε)    if |u_i| ≤ ε
         = |u_i| − ε/2     if |u_i| > ε
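The smoothed penalty in code, as a small sketch added here (the function name and vectorized form are ours):

```python
import numpy as np

def J_eps(u, mu, eps):
    """Smoothed l1 penalty J_eps(u) = mu * sum_i F_eps(u_i):
    quadratic on [-eps, eps], linear |u_i| - eps/2 outside."""
    a = np.abs(u)
    return mu * np.where(a <= eps, a**2 / (2 * eps), a - eps / 2).sum()
```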
Bregman Distance

D(u, u^k) := J(u) − J(u^k) − ⟨∂J(u^k), u − u^k⟩

[Figure: geometric illustration of the Bregman distance between u and u^k]
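The definition translates directly to code. An illustrative helper (added here), using the subgradient p^k = µ·sign(u^k) of J(u) = µ‖u‖_1, which is valid when u^k has no zero entries:

```python
import numpy as np

def bregman_distance(J, u, uk, pk):
    """D(u, u^k) = J(u) - J(u^k) - <p^k, u - u^k>, with p^k in the
    subdifferential of J at u^k; non-negative for convex J."""
    return J(u) - J(uk) - np.dot(pk, u - uk)

mu = 1.0
J = lambda u: mu * np.abs(u).sum()
uk = np.array([1.0, -2.0, 0.5])
print(bregman_distance(J, np.array([0.5, -1.0, 1.0]), uk, mu * np.sign(uk)))
```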
Bregman Iteration

Goal: u* = arg min_u J(u) + H(u)

u^{k+1} = arg min_u D(u, u^k) + (1/2)‖Au − f‖^2

Optimality condition:

p^{k+1} − p^k + A^T(Au^{k+1} − f) = 0

Equivalently, in "add back the residual" form:

f^{k+1} = f + (f^k − Au^k),   p^k = A^T(f^k − Au^k)

u^{k+1} = arg min_u J(u) + (1/2)‖Au − f^{k+1}‖^2

Each subproblem is solved by FPC; the iteration converges in finitely many steps.
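A runnable sketch of the add-back-the-residual form (ours, not the authors' code); the inner subproblem is solved by a plain ISTA loop standing in for FPC:

```python
import numpy as np

def shrink(x, gam):
    """Soft-thresholding: sign(x) * max(|x| - gam, 0)."""
    return np.sign(x) * np.maximum(np.abs(x) - gam, 0.0)

def ista(A, f, mu, u0, step, iters=200):
    """Inner solver for min_u mu*||u||_1 + 0.5*||Au - f||^2 (FPC stand-in)."""
    u = u0.copy()
    for _ in range(iters):
        u = shrink(u - step * (A.T @ (A @ u - f)), step * mu)
    return u

def bregman(A, f, mu, outer=10):
    """Bregman iteration: f^{k+1} = f + (f^k - A u^k), then
    u^{k+1} = argmin_u mu*||u||_1 + 0.5*||Au - f^{k+1}||^2."""
    u = np.zeros(A.shape[1])
    fk = np.zeros(A.shape[0])
    step = 1.0 / np.linalg.norm(A, 2) ** 2       # safe ISTA step size
    for _ in range(outer):
        fk = f + (fk - A @ u)
        u = ista(A, fk, mu, u, step)
    return u
```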
Linearization

Goal: u* = arg min_u J(u) + H(u)

u^{k+1} = arg min_u J(u) + H̃(u, u^k)

H̃(u, u^k) = H(u^k) + ⟨∇H(u^k), u − u^k⟩ + (1/2δ)‖u − u^k‖^2

i.e., approximate H(u) by its Taylor expansion at u^k and add the penalty term (1/2δ)‖u − u^k‖^2. This gives

u^{k+1} = arg min_u J(u) + (1/2δ)‖u − (u^k − δ∇H(u^k))‖^2
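For J(u) = µ‖u‖_1 this subproblem has a closed-form solution, the soft-thresholding (proximal) map; a one-step sketch added here:

```python
import numpy as np

def linearized_step(u, A, f, mu, delta):
    """argmin_w mu*||w||_1 + (1/(2*delta))*||w - z||^2 with
    z = u - delta * grad H(u); solved by soft-thresholding z at delta*mu."""
    z = u - delta * (A.T @ (A @ u - f))
    return np.sign(z) * np.maximum(np.abs(z) - delta * mu, 0.0)
```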
Linearized Bregman

Goal: u* = arg min_u J(u) + H(u)

u^{k+1} = arg min_u D(u, u^k) + H̃(u, u^k)
        = arg min_u J(u) − ⟨p^k, u − u^k⟩ + (1/2δ)‖u − (u^k − δ∇H(u^k))‖^2

Turns out to be easy to solve.
Linearized Bregman

Take the optimality condition:

0 = p^{k+1} − p^k + (1/δ)(u^{k+1} − u^k + δA^T(Au^k − f))

Let v^k = p^k + (1/δ)u^k. Then

v^{k+1} = v^k + A^T(f − Au^k)

so v^k can be updated cumulatively.
Linearized Bregman

Solve for u^{k+1} from v^k = p^k + (1/δ)u^k, componentwise:

u_i^{k+1} = δ · sign(v_i^k) · max{|v_i^k| − µ, 0}
          = δ(v_i^k − µ)   if v_i^k ≥ µ
          = 0              if v_i^k ∈ (−µ, µ)
          = δ(v_i^k + µ)   if v_i^k ≤ −µ

[Figure: graph of u_i^{k+1} as a function of v_i^k, flat on (−µ, µ)]
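In code this is the soft-thresholding (shrink) operator; the small check below (added here) verifies it against the piecewise formula:

```python
import numpy as np

def shrink(v, mu):
    """shrink(v, mu) = sign(v) * max(|v| - mu, 0), componentwise."""
    return np.sign(v) * np.maximum(np.abs(v) - mu, 0.0)

v, mu, delta = np.array([-2.0, -0.3, 0.0, 0.4, 3.0]), 1.0, 0.5
u = delta * shrink(v, mu)
piecewise = np.where(v >= mu, delta * (v - mu),
                     np.where(v <= -mu, delta * (v + mu), 0.0))
assert np.allclose(u, piecewise)
```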
Linearized Bregman

u^{k+1} = δ · shrink(v^k, µ)
v^{k+1} = v^k + A^T(f − Au^{k+1})

Simple and concise. Theory on convergence has been (partially) established recently:

J. Cai, S. Osher, Z. Shen, "Linearized Bregman Iterations for Compressed Sensing", 2008.
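The full iteration as a minimal NumPy sketch (ours). The test problem and parameters are illustrative; δ is chosen safely below 2/‖AA^T‖ (see the convergence slides), and µ is taken large so the limit is close to the ℓ1 minimizer:

```python
import numpy as np

def linearized_bregman(A, f, mu, delta, iters=10000):
    """u^{k+1} = delta * shrink(v^k, mu); v^{k+1} = v^k + A^T(f - A u^{k+1})."""
    u = np.zeros(A.shape[1])
    v = np.zeros(A.shape[1])
    for _ in range(iters):
        u = delta * np.sign(v) * np.maximum(np.abs(v) - mu, 0.0)
        v += A.T @ (f - A @ u)
    return u

rng = np.random.default_rng(0)
m, n, k = 60, 200, 8
A = rng.standard_normal((m, n)) / np.sqrt(m)
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
f = A @ x

delta = 1.0 / np.linalg.norm(A @ A.T, 2)
u = linearized_bregman(A, f, mu=10.0, delta=delta)
print(np.linalg.norm(u - x) / np.linalg.norm(x))  # small relative error
```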
Convergence

If u^k → u^∞, then Au^∞ = f.

If u^k → u^∞, then u^∞ minimizes

min_u { µ‖u‖_1 + (1/2δ)‖u‖_2^2 : Au = f }

Let S = arg min_u { ‖u‖_1 : Au = f } and u_1 = arg min_u { ‖u‖_2^2 : u ∈ S }. Then

lim_{µ→∞} ‖u_µ^∞ − u_1‖ = 0
Convergence

Claim: if u^k → u^∞, then Au^∞ = f.

Proof: Assume Au^∞ ≠ f. Then A^T(Au^∞ − f) ≠ 0, so there is an i with
A^T(Au^∞ − f)_i ≠ 0, i.e. lim_k A^T(Au^k − f)_i ≠ 0. Hence

v_i^{k+1} − v_i^k = A^T(f − Au^k)_i → c ≠ 0,

so {v_i^k} is unbounded. On the other hand, {v^k} = {p^k + u^k/δ} is bounded,
since p^k ∈ ∂J(u^k) has entries bounded by µ and u^k converges. Contradiction!
Convergence

Claim: if u^k → u^∞, then u^∞ minimizes min_u { µ‖u‖_1 + (1/2δ)‖u‖_2^2 : Au = f }.

Proof: Let J̃(u) = µ‖u‖_1 + (1/2δ)‖u‖_2^2 and u* = arg min { J̃(u) : Au = f }. Then

∂J̃(u^k) ∋ p^k + (1/δ)u^k = v^{k−1} = Σ_{j=1}^{k−1} A^T(f − Au^j).

Use the non-negativity of the Bregman distance:

J̃(u^k) ≤ J̃(u*) − ⟨u* − u^k, ∂J̃(u^k)⟩
        = J̃(u*) − ⟨Au* − Au^k, Σ_{j=1}^{k−1} (f − Au^j)⟩.

Let k → ∞; we obtain J̃(u^∞) ≤ J̃(u*).
Convergence

Recall the iteration

u^{k+1} = δ · shrink(v^k, µ)
v^{k+1} = v^k + A^T(f − Au^{k+1})

Then Δu_i^k = δ · q_i^k · Δv_i^{k−1}, where

q_i^k = 1        if u_i^{k+1} > 0, u_i^k > 0, u_i^{k+1} ≠ u_i^k
      = 1        if u_i^{k+1} < 0, u_i^k < 0, u_i^{k+1} ≠ u_i^k
      = 0        if u_i^{k+1} = u_i^k
      ∈ (0, 1)   otherwise
Convergence

Let Q^k = Diag(q_i^k). Then

Δu^k = δQ^k · Δv^{k−1} = δQ^k · A^T(f − Au^k) = −δQ^k · H′(u^k)

Δu^k ≠ 0 ⇒ ⟨Δu^k, H′(u^k)⟩ < 0

If u^{k+1} ≠ u^k and 0 < δ < 2/‖AA^T‖, then ‖Au^{k+1} − f‖ < ‖Au^k − f‖.
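The admissible range of δ is easy to check numerically; a one-line helper added here:

```python
import numpy as np

def max_step(A):
    """Upper bound on the step size: 0 < delta < 2 / ||A A^T||_2."""
    return 2.0 / np.linalg.norm(A @ A.T, 2)
```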
Convergence

Δu^k = δQ^k · A^T(f − Au^k)

Au^{k+1} − f = [I − δ · AQ^k A^T] · (Au^k − f)

If the signs of the elements of u^k don't change, then Q^k ≡ Q doesn't change:

Au^{k+1} − f = [I − δ · AQA^T] · (Au^k − f)

When 0 < δ < 2/‖AA^T‖:

−I ≺ I − δAQA^T ⪯ I
Convergence

Decompose Au^k − f = w^{k,0} + w^{k,1} by the eigenspaces of I − δAQA^T:
w^{k,0} ≡ w^0, and ‖w^{k,1}‖ decays exponentially, so Au^k − f → w^0 exponentially.

Let S = {x ∈ R^n : sign(x_i) = sign(u_i^k), ∀i}. Then w^0 = Au* − f for
u* ∈ arg min_u { ‖Au − f‖ : u ∈ S }.
Convergence

If u^k ∈ S ≡ S_{u^k} for k ∈ [T_1, T_2] with T_2 ≫ T_1, then u^k converges to u*,
where u* ∈ arg min_u { ‖Au − f‖^2 : u ∈ S }. Moreover,

‖Au^k − f‖^2 − ‖Au* − f‖^2

decays exponentially. This usually happens when µ is large.

[Figure: the shrinkage function, flat on (−µ, µ)]
Convergence

What happens after u^k converges to u*?

If Au* = f, then we are done. If Au* ≠ f, then u^k stays there while v^k keeps
changing, since Δv^k = A^T(f − Au^k) ≠ 0. The stagnation ends when some element
of v^k crosses the interval [−µ, µ].
Kicking Scheme

Recall the iteration

u^{k+1} = δ · shrink(v^k, µ)
v^{k+1} = v^k + A^T(f − Au^{k+1})

After u^k stagnates, Δv^k is fixed, so v^k advances arithmetically:

u^{k+j} ≡ u^{k+1},   v^{k+j} = v^k + j · A^T(f − Au^{k+1})
Kicking Scheme

Estimate the length of the stagnation. Let I_0 = {i : u_i = 0} and I_1 = I_0^c.
For all i ∈ I_0,

s_i = ⌈ ( µ · sign((A^T(f − Au^{k+1}))_i) − v_i^{k+1} ) / (A^T(f − Au^{k+1}))_i ⌉

s = min_{i ∈ I_0} { s_i }

Predict the end state of the stagnation:

u^{k+s} ≡ u^{k+1},   v^{k+s} = v^k + s · A^T(f − Au^{k+1})
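A sketch of the stagnation-length estimate (added here). It takes g = A^T(f − Au^{k+1}) and skips indices with g_i = 0, a guard that is ours:

```python
import numpy as np

def stagnation_length(u, v, g, mu):
    """s = min over I0 = {i : u_i = 0} of ceil((mu*sign(g_i) - v_i) / g_i),
    where g = A^T (f - A u); entries with g_i = 0 are skipped."""
    I0 = (u == 0) & (g != 0)
    s_i = np.ceil((mu * np.sign(g[I0]) - v[I0]) / g[I0])
    return int(s_i.min()) if s_i.size else 1
```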
Algorithm

Algorithm 2: Linearized Bregman Iteration with Kicking
  Initialize: u = 0, v = 0.
  while "‖f − Au‖ has not converged" do
    u^{k+1} = δ · shrink(v^k, µ)
    if "u^{k+1} ≈ u^k" then
      calculate s as previously defined
      v_i^{k+1} = v_i^k + s · δ · (A^T(f − Au^{k+1}))_i,   ∀i ∈ I_0
      v_i^{k+1} = v_i^k,   ∀i ∈ I_1
    else
      v^{k+1} = v^k + δ · A^T(f − Au^{k+1})
    end if
  end while
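A NumPy sketch of Algorithm 2 (ours). It follows the δ-free v-update of the earlier two-line slide (the box above carries an extra δ factor in the v-update) and reuses stagnation_length from the previous sketch:

```python
import numpy as np

def linearized_bregman_kicking(A, f, mu, delta, iters=5000, tol=1e-10):
    """Linearized Bregman with kicking: when u stagnates, jump v forward
    by s steps on I0 = {i : u_i = 0}, leaving it unchanged on I1."""
    n = A.shape[1]
    u, v = np.zeros(n), np.zeros(n)
    for _ in range(iters):
        u_new = delta * np.sign(v) * np.maximum(np.abs(v) - mu, 0.0)
        g = A.T @ (f - A @ u_new)
        if np.allclose(u_new, u):
            s = stagnation_length(u_new, v, g, mu)
            I0 = u_new == 0
            v[I0] += s * g[I0]                   # kick stagnant components
        else:
            v += g
        u = u_new
        if np.linalg.norm(f - A @ u) < tol:      # residual has converged
            break
    return u
```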
Kicking Scheme

Similar to a line search: kicking produces a subsequence of the original
iteration and accelerates it. Although we don't have global exponential decay,
the residual decays exponentially within each phase of the iteration, and the
stagnation between phases is eliminated by kicking.
Kicking Scheme

[Figure: log_10 ‖f − Au^k‖ versus iteration; kicking removes the flat stagnation segments between the exponentially decaying phases]
Numerical Results

Signal with noisy measurements.

[Figure: original signal and its reconstruction from noisy measurements (SNR ≈ 22.1); about 204 iterations, final error ≈ 0.04]
Numerical Results

Signal with high dynamic range.

[Figure: recovered signal; decay of log_10(‖u^k − u_true‖/‖u_true‖) and log_10(‖Au^k − f‖/‖f‖) versus iterations]
Recovery of Sinusoidal Waves

u(t) = a sin(αt) + b cos(βt) + n,   with n ∼ N(0, σ)

Partial information of u(t) is known, and the noise n is relatively large: the
SNR can even be negative. This is equivalent to a compressed sensing problem,
and the frequencies α, β are recovered exactly with high probability.
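A toy version of this experiment (ours), with an orthonormal DCT basis standing in for the slides' frequency representation; the frequency bins, noise level, and parameters are all illustrative. It reuses linearized_bregman_kicking from the algorithm sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 512
t = np.arange(n)

# Orthonormal inverse-DCT (synthesis) matrix: time signal = C @ coefficients.
C = np.cos(np.pi * np.outer(t + 0.5, t) / n)
C[:, 0] *= np.sqrt(1.0 / n)
C[:, 1:] *= np.sqrt(2.0 / n)

# Two active frequency bins stand in for a*sin(alpha*t) + b*cos(beta*t).
x = np.zeros(n)
x[[37, 190]] = [3.0, 2.0]
u_t = C @ x + 0.5 * rng.standard_normal(n)       # heavily noisy time signal

idx = rng.choice(n, size=n // 5, replace=False)  # keep 20% of the samples
A, f = C[idx, :], u_t[idx]

delta = 1.0 / np.linalg.norm(A @ A.T, 2)
x_rec = linearized_bregman_kicking(A, f, mu=1.0, delta=delta, iters=3000)
print(np.argsort(-np.abs(x_rec))[:2])            # two dominant recovered bins
```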
Numerical Results

Recovery of sinusoidal waves in huge noise: SNR = 1.1644, 20% of the measurements are taken.

[Figure: original and noisy samples; reconstruction in the frequency domain; reconstruction in the physical domain; one zoom-in]
Numerical Results

Recovery of sinusoidal waves in huge noise: SNR = −2.2905, 40% of the measurements are taken.

[Figure: original and noisy samples; reconstruction in the frequency domain; reconstruction in the physical domain; one zoom-in]
Numerical Results

Recovery of sinusoidal waves in huge noise: SNR = −5.0459, 80% of the measurements are taken.

[Figure: original and noisy samples; reconstruction in the frequency domain; reconstruction in the physical domain; one zoom-in]
Thank You!