
Compressed Sensing
Data Sciences Course, Jan 2015

Ramesh Hariharan

The Main Problem

• Given a vector $x$ of length $n$

• $n$ is very, very large

• But $x$ is known to be sparse, say $|x|_0\leq k$, i.e., at most $k$ nonzero entries

• What if you compress by taking $Ax$, where $A$ is $m\times n$ and $m\ll n$?

• Can $x$ be recovered?
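The setup above can be sketched in a few lines of NumPy; the matrix, sizes, and random seed here are illustrative, not from the slides. A long, $k$-sparse $x$ is compressed to $m\ll n$ numbers by a single matrix multiply:

```python
import numpy as np

# Illustrative sizes: a k-sparse x of length n is compressed to m numbers.
rng = np.random.default_rng(6)
n, m, k = 1000, 50, 5
A = rng.normal(0.0, 1.0 / np.sqrt(m), size=(m, n))

x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.normal(size=k)  # k nonzeros

b = A @ x
print(b.shape)  # (50,): n=1000 values stored as m=50 measurements
```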

Formulating the Problem

• Given an $m\times n$ matrix $A$ and a vector $b\in R^m$

• $m \ll n$

• Find a vector $x\in R^n$ such that $Ax=b$

• If $A$ has rank $m$, then many solutions exist

• Find the sparsest of these, the one with the most 0s

• I.e., minimize $|x|_0$
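A quick numeric check of the "many solutions exist" bullet, on a small hypothetical instance: any particular solution plus anything in the $(n-m)$-dimensional null space of $A$ also satisfies $Ax=b$.

```python
import numpy as np

# Illustrative instance: A is m x n with m < n and (almost surely) rank m.
rng = np.random.default_rng(7)
m, n = 3, 8
A = rng.normal(size=(m, n))
b = rng.normal(size=m)

x_p = np.linalg.lstsq(A, b, rcond=None)[0]   # one particular solution
null_basis = np.linalg.svd(A)[2][m:].T       # n x (n-m) null-space basis

# Shift along the null space: still a solution of Ax = b
x_other = x_p + null_basis @ rng.normal(size=n - m)
print(np.allclose(A @ x_p, b), np.allclose(A @ x_other, b))
```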

The Slightly Different Optimization Criterion

• $\min |x|_0$, satisfying the constraints $Ax=b$

• $\min |x|_1= \sum_i |x_i|$, satisfying the constraints $Ax=b$

• The latter can be solved fast using linear programming:

• $A(u-v)=b$
• $u,v\geq 0$
• $\min (\sum_i u_i + \sum_i v_i)$
• Note that $x_i$ corresponds to $u_i-v_i$, one of which is 0 at the optimum
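The LP above can be sketched with SciPy's `linprog`; the sensing matrix, sizes, and seed are illustrative assumptions, not from the slides.

```python
import numpy as np
from scipy.optimize import linprog

def l1_min(A, b):
    """Solve min |x|_1 s.t. Ax = b via the LP in u, v >= 0 with x = u - v."""
    m, n = A.shape
    c = np.ones(2 * n)                       # objective: sum(u) + sum(v)
    A_eq = np.hstack([A, -A])                # A u - A v = b
    res = linprog(c, A_eq=A_eq, b_eq=b, bounds=[(0, None)] * (2 * n))
    u, v = res.x[:n], res.x[n:]
    return u - v

# A 1-sparse vector compressed by 3 random measurements of a length-6 vector
rng = np.random.default_rng(0)
A = rng.normal(0.0, 1.0 / np.sqrt(3), size=(3, 6))
x_true = np.zeros(6)
x_true[2] = 5.0
x_hat = l1_min(A, A @ x_true)
print(x_hat)
```

Since `x_true` is itself feasible, the LP optimum is guaranteed to satisfy the constraints with 1-norm no larger than $|x_{true}|_1$.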

Relationship Between the Two

• How are these two related?
• $\min |x|_0$, satisfying the constraints $Ax=b$
• $\min |x|_1= \sum_i |x_i|$, satisfying the constraints $Ax=b$

• An example where they are different
• Pick two points, say $(0,100,0)$ and $(1,0,1)$
• The first has the smaller 0-norm, the second has the smaller 1-norm
• Pick any two hyperplanes passing through both points; those are your two constraints
• The minimizer of $|x|_0$ then differs from the minimizer of $|x|_1$
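The example above can be checked numerically. The two hyperplanes below are one concrete (assumed) choice: their normals are orthogonal to the direction $(1,-100,1)$ joining the two points, so both points satisfy both constraints.

```python
import numpy as np
from scipy.optimize import linprog

# Two hyperplanes through p1=(0,100,0) and p2=(1,0,1): rows orthogonal
# to the direction p2 - p1 = (1,-100,1). (This particular A is our choice.)
A = np.array([[1.0, 0.0, -1.0],
              [100.0, 1.0, 0.0]])
b = A @ np.array([0.0, 100.0, 0.0])   # = (0, 100); A @ p2 gives the same b

# min sum(u)+sum(v) s.t. A(u-v)=b, u,v>=0, i.e. x = u - v minimizes |x|_1
n = 3
res = linprog(np.ones(2 * n), A_eq=np.hstack([A, -A]), b_eq=b,
              bounds=[(0, None)] * (2 * n))
x = res.x[:n] - res.x[n:]
print(x)  # the |x|_1 minimizer is (1, 0, 1), not the sparser (0, 100, 0)
```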

Sufficient Condition for Equality?

• Suppose the $x$ realizing $\min |x|_0, Ax=b$, has $|x|_0\leq k$

• If $|Ax| \sim |x|$, $\forall x$ with $|x|_0\leq k$, then...

• Minimizing $|x|_1$ yields the same result as minimizing $|x|_0$

• Say $(1-\alpha)^2|x|^2 \leq |Ax|^2 \leq (1+\alpha)^2|x|^2$ $\forall x$ with $|x|_0\leq k$
• So the singular values of $A_s$ are in the range $1-\alpha\ldots 1+\alpha$ for every submatrix $A_s$ comprising $s\leq k$ columns of $A$
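This condition on submatrix singular values can be probed empirically; the sizes and seed below are illustrative, small enough to enumerate all $k$-column submatrices.

```python
import numpy as np
from itertools import combinations

# For a random Gaussian A (entries N(0, 1/m)), every submatrix of k columns
# should have all singular values close to 1 when m is large relative to k.
rng = np.random.default_rng(1)
m, n, k = 60, 12, 3
A = rng.normal(0.0, 1.0 / np.sqrt(m), size=(m, n))

worst_lo, worst_hi = 1.0, 1.0
for cols in combinations(range(n), k):
    sv = np.linalg.svd(A[:, cols], compute_uv=False)
    worst_lo = min(worst_lo, sv.min())
    worst_hi = max(worst_hi, sv.max())

print(worst_lo, worst_hi)  # both near 1, i.e. alpha is small
```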

Certificate of Optimality for $\min |x|_1$: LP Duality

• $\min_{u,v} {\bf 1}^Tu+{\bf 1}^Tv$ subject to $A(u-v)=b$, $u,v\geq {\bf 0}$ (here $x=u-v$)

• $= \min_{u,v} \max_{y,a,b} {\bf 1}^Tu+{\bf 1}^Tv - y^T[A(u-v)-b]- a^Tu - b^Tv$ subject to $a,b\geq {\bf 0}$ (replace constraints with objectives)

• $\geq \max_{y,a,b} \min_{u,v} {\bf 1}^Tu+{\bf 1}^Tv - y^T[A(u-v)-b]- a^Tu - b^Tv$ subject to $a,b\geq {\bf 0}$ (flip the order; next, differentiate)

• $=\max_{y} y^Tb$ subject to $y^TA+a^T={\bf 1}^T$, $-y^TA+b^T={\bf 1}^T$, $a,b\geq {\bf 0}$

• $=\max_{y} y^Tb$ subject to $y^TA\leq {\bf 1}^T$, $y^TA\geq -{\bf 1}^T$ (simplify)
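The primal and dual LPs above can be solved side by side on a small hypothetical instance; by LP strong duality their optimal values coincide (the matrix and seed are illustrative).

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)
m, n = 3, 7
A = rng.normal(size=(m, n))
x0 = np.array([0.0, 2.0, 0.0, 0.0, -1.0, 0.0, 0.0])   # a sparse feasible x
b = A @ x0

# Primal: min sum(u)+sum(v) s.t. A(u-v)=b, u,v >= 0
primal = linprog(np.ones(2 * n), A_eq=np.hstack([A, -A]), b_eq=b,
                 bounds=[(0, None)] * (2 * n))

# Dual: max y^T b s.t. -1 <= y^T A <= 1 (linprog minimizes, so negate b)
dual = linprog(-b, A_ub=np.vstack([A.T, -A.T]), b_ub=np.ones(2 * n),
               bounds=[(None, None)] * m)

print(primal.fun, -dual.fun)  # equal, by LP strong duality
```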

Certificate of Optimality for $\min |x|_1$: LP Duality

• $\min_{u,v} {\bf 1}^Tu+{\bf 1}^Tv$ subject to $A(u-v)=b$, $u,v\geq {\bf 0}$

• Actually ${\bf =}$ $\max_{y} y^Tb$ subject to $y^TA\leq {\bf 1}^T$, $y^TA\geq -{\bf 1}^T$

• Proof: Consider the optimal $u,v$; remember that one of $u_i,v_i$ is $0$, $\forall i$

• If $\exists y$ such that ${\bf 1}^Tu+{\bf 1}^Tv=y^TA(u-v)=y^Tb$, then we are done

• In other words, we want a $y$ such that $g^T=y^TA$
• where $g_i=\mathrm{sgn}(u_i-v_i)$, except if $u_i=v_i=0$, in which case $-1\leq g_i\leq 1$

Certificate of Optimality for $\min |x|_1$: LP Duality

• Prove: $\exists y$ such that $g^T=y^TA$, where $g_i=\mathrm{sgn}(u_i-v_i)$ except if $u_i=v_i=0$, in which case $-1\leq g_i\leq 1$

• If not, then all such $g^T$'s lie outside the row space of $A$; and the set $G$ of such $g^T$'s is a convex set (a box, really)

• So there exists a hyperplane with normal $h$ containing the row space of $A$ such that $G$ lies entirely on one side, i.e., $Ah=0$ and $g^Th<0$, $\forall g\in G$

• So $A(u-v+h)=b$, and for a suitably small scaling of $h$, $|u-v+h|_1={\bf 1}^Tu+{\bf 1}^Tv+g^Th<{\bf 1}^Tu+{\bf 1}^Tv$ for an appropriate $g\in G$, contradicting optimality

Summarizing

• We want $\min |x|_0$ subject to $Ax=b$

• Instead we do $\min |x|_1$ subject to $Ax=b$, using the LP $\min {\bf 1}^Tu+{\bf 1}^Tv$ subject to $A(u-v)=b$, $u,v\geq{\bf 0}$

• Claim: If $(1-\alpha)^2|x|^2 \leq |Ax|^2 \leq (1+\alpha)^2|x|^2$, $\forall$ $x$ with $|x|_0\leq k$, then...

• Suppose the $|x|_0$-optimal $x$ has $|x|_0= s< k$; there exists $y$ such that $y^TA=g^T$, where $g_i=\mathrm{sgn}(x_i)$ (with exceptions when $x_i=0$)

• It follows that $\min |x|_0$ and $\min |x|_1$ subject to $Ax=b$ yield the same solution

Determining $y$

• Let $s$ denote the set of $s$ indices corresponding to the nonzero values of the optimal 0-norm solution $x$; let $\overline{s}$ denote the complementary indices

• We want $y$ such that $y^TA_s=g^T=\mathrm{sgn}(x_s^T)$

• And then $y^TA_{\overline{s}}$ must have all entries between $-1$ and $1$

• $A_s$ has rank $s$; otherwise there would exist $x$ with $|x|_0< s$ and $Ax=b$

• So there are possibly many solutions to $y^TA_s=g^T$

Solving for $y$

• Find $y$ such that $y^TA_s=g^T$

• $A_s=U\Sigma V^T$: $U$ is $m\times s$, $\Sigma$ is $s\times s$, $V$ is $s\times s$, $VV^T=V^TV=I$, $U^TU=I$

• Find $y$ such that $y^T (U\Sigma V^T)=g^T$

• Instead, find $y$ such that $y^T (U\Sigma V^T)(V\Sigma^{-1}U^T) =g^T(V\Sigma^{-1} U^T)$

• Good, unless $y^TA_s-g^T$ is non-zero and orthogonal to the columns of $V\Sigma^{-1}U^T$; we need to keep this in mind

• So solve for $y$ in $y^T U U^T = g^T(V\Sigma^{-1} U^T)$

Pseudo-Inverse, and One Solution for $y$

• So solve for $y$ in $y^T U U^T = g^T(V\Sigma^{-1} U^T)$

• If $y$ is in the column space of $U$, then $y^T U U^T=y^T=g^T(V\Sigma^{-1} U^T)$

• $V\Sigma^{-1} U^T$ behaves like an inverse in this case!

• So, of the many $y$'s, let's take the (unique) one in the column space of $U$

• Verify: $y^TA_s-g^T= g^T(V\Sigma^{-1} U^T)(U\Sigma V^T)-g^T=g^TVV^T-g^T=0$

• Conclusion: $y^T=g^T(V\Sigma^{-1} U^T)$
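This pseudo-inverse construction is easy to verify numerically; the $A_s$ and sign vector below are random illustrative stand-ins, with names following the slides.

```python
import numpy as np

# Compute y^T = g^T (V Sigma^{-1} U^T) for a random A_s and verify both
# y^T A_s = g^T and y in the column space of U.
rng = np.random.default_rng(3)
m, s = 8, 3
A_s = rng.normal(size=(m, s))
g = np.sign(rng.normal(size=s))              # stand-in for g = sgn(x_s)

U, sigma, Vt = np.linalg.svd(A_s, full_matrices=False)  # thin SVD
y = g @ (Vt.T * (1.0 / sigma)) @ U.T         # y^T = g^T V Sigma^{-1} U^T

print(np.allclose(y @ A_s, g))               # True: y^T A_s = g^T
```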

What's Left?

• We need $y^TA_{\overline{s}}$ to have all entries between $-1$ and $1$
• Let $A_t$ denote all columns of $A_s$ plus one column of $A_{\overline{s}}$; this extra column can be written as $A_te$, with $|e|_0=1$
• We want $g^T(V\Sigma^{-1} U^T)A_te$ to have entries between $-1$ and $1$

• $g^T(V\Sigma^{-1} [\Sigma^{-1} V^TV \Sigma] U^T)A_te=g^T(V\Sigma^{-2}V^T) A_s^TA_te$

• $|g|^2=s$, $|z^T|^2=|g^TV\Sigma^{-2} V^T|^2\in \frac{s}{(1\pm \alpha)^4}$, and $z^TA_s^T=h^TA_t^T$ where $h^T=[z^T \ \ \ 0]$

• To prove: the entries of $h^TA_t^TA_te$ are in $[-1,1]$, given $|h|^2 \in \frac{s}{(1\pm \alpha)^4}$, $|e|^2=1$, $h^Te=0$

Showing $-1\leq y^TAe\leq 1$

• To prove: the entries of $h^TA_t^TA_te$ are in $[-1,1]$, given $|h|^2 \in \frac{s}{(1\pm \alpha)^4}$, $|e|^2=1$, $h^Te=0$
• Does transformation by $A$ preserve near-orthogonality?

• Let $A_t=X\Pi Y^T$; since $t\leq k$, the entries of $\Pi$ lie in $(1\pm\alpha)$

• $h^TA_t^TA_te=h^TY\Pi^2Y^Te= (Y^Th)^T\Pi^2 (Y^Te)$, where $Y^Th$ and $Y^Te$ are mutually orthogonal
• What happens to the dot product of two orthogonal vectors if we scale each coordinate by a factor in $1\pm \alpha$? (Of course, $0\leq \alpha<1$.)

• In absolute value, it is at most $2\alpha |h||e|\leq 2\alpha \frac{\sqrt{s}}{(1\pm \alpha)^2}\leq 1$, for $\alpha = \frac{1}{3\sqrt{s}}$... DONE
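The near-orthogonality step can be checked numerically. Since $h^Te=0$, we have $h^T\Pi^2 e=\sum_i(\pi_i^2-1)h_ie_i$, which is bounded in absolute value by $((1+\alpha)^2-1)\,|h||e|$, i.e., roughly $2\alpha|h||e|$ for small $\alpha$ as on the slide. The dimensions, $\alpha$, and seed below are illustrative.

```python
import numpy as np

# Orthogonal h, e; diagonal scaling pi with entries in [1-alpha, 1+alpha].
rng = np.random.default_rng(4)
d, alpha = 50, 0.1
h = rng.normal(size=d)
e = rng.normal(size=d)
e -= (h @ e) / (h @ h) * h                   # make e orthogonal to h
pi = rng.uniform(1 - alpha, 1 + alpha, size=d)

lhs = abs(h @ (pi**2 * e))                   # |h^T Pi^2 e|, with h^T e = 0
bound = ((1 + alpha)**2 - 1) * np.linalg.norm(h) * np.linalg.norm(e)
print(lhs <= bound)                          # the exact-constant bound holds
```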

What Types of $A$ are Length-Preserving for Sparse Vectors?

• We want $(1-\frac{1}{3\sqrt{s}})^2|x|^2\leq |Ax|^2\leq (1+\frac{1}{3\sqrt{s}})^2|x|^2$, $\forall x$ with $|x|_0= s+1$, $\forall s < k$

• By the Johnson-Lindenstrauss theorem, we know that an $m\times n$ random matrix whose entries are independent Gaussians with mean 0 and variance $\frac{1}{m}$ is length-preserving

• Show that for $m = \Omega(k^2\log n)$, what we want holds!
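Putting the whole pipeline together: a random Gaussian sensing matrix plus $\ell_1$ minimization typically recovers a $k$-sparse signal exactly from $m\ll n$ measurements. This is a minimal sketch with illustrative sizes and seed; since the true signal is feasible, the LP solution is guaranteed to match the measurements with 1-norm at most $|x_{true}|_1$, and with high probability it equals $x_{true}$.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(5)
n, k = 40, 2
m = 20                                       # m << n, comfortably above k
A = rng.normal(0.0, 1.0 / np.sqrt(m), size=(m, n))

x_true = np.zeros(n)
x_true[[7, 23]] = [3.0, -2.0]                # k-sparse signal
b = A @ x_true

# min |x|_1 s.t. Ax = b, via x = u - v with u, v >= 0
res = linprog(np.ones(2 * n), A_eq=np.hstack([A, -A]), b_eq=b,
              bounds=[(0, None)] * (2 * n))
x_hat = res.x[:n] - res.x[n:]
print(np.allclose(x_hat, x_true, atol=1e-6))  # exact recovery, w.h.p.
```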
