Talk:Matching pursuit


what is this in the formula?

sorry, what's the _0? Is this the p-norm with p = 0? This is from

 

thanks. --24.130.24.6 (talk) 19:58, 15 September 2008 (UTC)

reference vector of functions

I don't want to make any errors, but I think "reference vector A_j (the dictionary)" could be changed to "reference vector of functions A_j", as there doesn't seem to be much dictionary structure (lookup from j to A_j?). Also, perhaps we could consider making A_j normalized and getting rid of the ||A_j|| below. --gatoatigrado 20:13, 15 September 2008 (UTC)

algorithm looks off...

The algorithm looks a bit weird, because it never involves the original signal x again. All of the parameters for refinement are scalars, not functions. This contrasts with the link below. I will look into it further. Apologies if this is a false alarm. http://www.scholarpedia.org/article/Matching_pursuit#Matching_pursuit_algorithm.

cheers, gatoatigrado 20:39, 15 September 2008 (UTC)

The algorithm is correct. Rf_n is not a scalar; it is the residual vector. 76.17.115.184 (talk) 20:19, 15 August 2012 (UTC)
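For concreteness, the point that Rf_n is a full-length residual vector (not a scalar) can be seen in a minimal Python sketch of the MP iteration. The function name and the unit-norm-atoms assumption are mine, not from the article:

```python
import numpy as np

def matching_pursuit(x, D, n_iter=10):
    """Minimal matching pursuit sketch.
    x: signal vector; D: dictionary with unit-norm columns (atoms)."""
    residual = x.astype(float).copy()   # Rf_0 = x: the residual is a vector
    coeffs = np.zeros(D.shape[1])
    for _ in range(n_iter):
        # Pick the atom most correlated with the current residual.
        inner = D.T @ residual
        k = int(np.argmax(np.abs(inner)))
        coeffs[k] += inner[k]
        # Subtract the chosen component; Rf_{n+1} stays a full-length vector.
        residual = residual - inner[k] * D[:, k]
    return coeffs, residual
```

Note that the original signal x enters every iteration implicitly, through the residual it initializes.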

Main disadvantage missing

The article says that the main disadvantage of matching pursuit is the encoder complexity. This may be true, but I think the main disadvantage is that the vectors g (i.e. the columns of D) have to be orthogonal to each other to constitute a valid basis. If this holds, the algorithm converges; otherwise unexpected results are obtained. I'd like to receive feedback before modifying the page. Squalho (talk) 16:26, 5 June 2009 (UTC)

That does not sound correct. Matching pursuit is specifically designed for situations in which the dictionary is overcomplete, as explained in the article and its references. --Zvika (talk) 18:12, 5 June 2009 (UTC)
The case of the dictionary being an orthogonal basis, or even an under-complete orthogonal set, is indeed pretty trivial, since you can just compute all the coefficients and pick the largest ones. Zvika is right that the matching pursuit scheme is designed primarily for over-complete bases, though it is sometimes used for non-orthogonal under-complete bases as well. I'm not sure what you mean by "unexpected results are obtained." Dicklyon (talk) 01:58, 6 June 2009 (UTC)
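To illustrate the point above numerically: here is a small, hypothetical check (the setup is mine) that the MP residual norm is non-increasing even when the dictionary is overcomplete and its atoms are not orthogonal:

```python
import numpy as np

rng = np.random.default_rng(1)
# Overcomplete dictionary: 8 unit-norm atoms in R^4 (more atoms than
# dimensions, so they cannot all be mutually orthogonal).
D = rng.normal(size=(4, 8))
D /= np.linalg.norm(D, axis=0)
x = rng.normal(size=4)

residual = x.copy()
norms = [np.linalg.norm(residual)]
for _ in range(20):
    inner = D.T @ residual
    k = int(np.argmax(np.abs(inner)))
    residual = residual - inner[k] * D[:, k]
    norms.append(np.linalg.norm(residual))

# Each step removes the projection onto a unit-norm atom, so the residual
# norm can never grow, orthogonality of the atoms is not needed for that.
print(norms[0], norms[-1])
```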

orthogonal matching pursuit

The following text appears in the article on least squares spectral analysis.


matching pursuit with post-backfitting[12] or orthogonal matching pursuit.[13]

12. Pascal Vincent and Yoshua Bengio (2002). "Kernel Matching Pursuit". Machine Learning 48: 165–187. doi:10.1023/A:1013955821559. http://www.iro.umontreal.ca/~vincentp/Publications/kmp_mlj.pdf

13. Y. C. Pati, R. Rezaiifar, and P. S. Krishnaprasad, "Orthogonal matching pursuit: Recursive function approximation with applications to wavelet decomposition," in Proc. 27th Asilomar Conference on Signals, Systems and Computers, A. Singh, ed., Los Alamitos, CA, USA, IEEE Computer Society Press, 1993.

It would be helpful if someone who understands this could incorporate this additional information in the article about matching pursuit.


—Preceding unsigned comment added by Skysong263 (talkcontribs) 19:44, 23 April 2010 (UTC)

citation needed

There is no citation under "Orthogonal Matching Pursuit" for the statement

In fact, this algorithm solves the sparse problem:
min ‖x‖_0 subject to ‖y − Dx‖_2 ≤ ε, with ‖x‖_0 the L0 pseudo-norm equal to the number of nonzero elements of x. — Preceding unsigned comment added by 76.17.115.184 (talk) 03:40, 19 November 2011 (UTC)

The claim is in fact false. The minimization problem posed is known to be NP-hard; OMP only produces an approximation to the solution. — Preceding unsigned comment added by 132.250.22.10 (talk) 19:29, 1 August 2012 (UTC)
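To make the distinction concrete: OMP is a greedy heuristic for the NP-hard sparse problem, re-fitting all selected coefficients by least squares at each step. This toy implementation is illustrative only (the function name is mine, and unit-norm columns are assumed):

```python
import numpy as np

def omp(y, D, sparsity):
    """Toy orthogonal matching pursuit: a greedy approximation, not an
    exact solver, for min ||x||_0 s.t. Dx ~ y."""
    residual = y.astype(float).copy()
    support = []
    x = np.zeros(D.shape[1])
    sol = np.zeros(0)
    for _ in range(sparsity):
        # Greedy atom selection on the current residual.
        k = int(np.argmax(np.abs(D.T @ residual)))
        if k not in support:
            support.append(k)
        # The "orthogonal" step: least-squares re-fit over the whole support.
        sol, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ sol
    x[support] = sol
    return x
```

Because the support is grown greedily, nothing guarantees the result attains the true L0 minimum, which is exactly the commenter's point.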

Normalization of a_n

It seems to me that "The Algorithm" assumes the dictionary entries are unit-norm. I think a_n should be defined as

a_n = ⟨R_n f, g_{γ_n}⟩ / ‖g_{γ_n}‖ ;

... but I lack the confidence to make the edit without an "amen". --Mborg (talk) 01:51, 18 February 2013 (UTC)


When I first read the article, I got the impression the functions g_{γ_n} were intended to be drawn from a normalized set - equivalent to setting ‖g_{γ_n}‖ = 1 - but I cannot find an explicit reference in the current version. (I don't know if there has been an edit since I first saw it, or if I got the impression from other, linked pages, or if it's something I picked up from previous experience.) Maximizing the inner product suggests that all the g_{γ_n} should be normalized. Also, since the inner product is linear in g_{γ_n}, either the functions should be normalized, or a_n = ⟨R_n f, g_{γ_n}⟩ / ‖g_{γ_n}‖². Note I differ from Mborg on the power in the denominator. I do not have a handy reference. Also, there seems to be some confusion in notation. The functions are initially described as g_{γ_n}, but in the article they are referred to as "coefficients" a_n. I suggest removing the "coefficients" a_n in favor of "functions" g_{γ_n} and specifying that the latter be normalized. SMesser (talk) 15:01, 28 February 2014 (UTC)


You need to divide by ‖g_{γ_n}‖² because you end up multiplying by g_{γ_n} twice: once in the inner product and again in the reconstitution on the next line. If you change the scale of g_{γ_n}, you need to divide by the norm squared to get the same approximation and residual.
It looks like the other questions on this page have mostly been addressed. I think that clears things up well enough to remove the "in need of expert attention" tag. Motmahp (talk) —Preceding undated comment added 23:25, 20 June 2016 (UTC)
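The norm-squared argument can be checked numerically. In this sketch (the variable names are mine) the residual after one MP step is orthogonal to the chosen atom only when the coefficient is divided by the squared norm:

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.normal(size=5)
g = 3.0 * rng.normal(size=5)      # deliberately NOT unit-norm

# Coefficient with the squared-norm denominator:
a = np.dot(f, g) / np.dot(g, g)   # <f, g> / ||g||^2
residual = f - a * g              # a multiplies g a second time here

# One factor of ||g|| from the inner product and one from the
# reconstitution cancel, leaving the residual orthogonal to g.
print(abs(np.dot(residual, g)))
```

Dividing by ‖g‖ to the first power would leave a stray factor of ‖g‖ and a residual that is not orthogonal to the atom.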

PCA

"In the case that the vectors in D are orthonormal instead of redundant, then MP is a form of principal component analysis"

This is not true. PCA is a method for finding an optimal matrix D. Projecting a vector onto an arbitrary orthonormal D has nothing to do with PCA. Keleti (talk) 10:46, 25 January 2018 (UTC)

Original Algorithm from Accelerator Physics

There is an algorithm called MICADO that was developed at CERN in the 1970s which, imho, does the same thing as OMP. It can be found here: http://inspirehep.net/record/84507?ln=en

Should it be mentioned? — Preceding unsigned comment added by TshayDee (talkcontribs) 11:12, 7 January 2020 (UTC)