Talk:Edge detection


Kudos

I've read a lot (a LOT) of technical articles on WP. My compliments to all the authors of the intro. It is clear and readable and presents the subject in substantially non-specialist language. The first paragraph is a one-paragraph introduction to the topic, and the first sentence is a one-sentence definition of the concept.

Mad Props

Untitled

In the practice of digital image enhancement, basing edge detection merely on numerical derivatives is too naive and unrealistic. For each pixel of a digital image, one wants not only to decide whether it is a candidate for membership in an "edge" but also to find the direction of that edge. [In particular, the edge direction is required for true sharpening.] One needs to analyze a suitable collection of neighboring pixels (typically, those at horizontal and vertical distances up to 3) with respect to intensity as well as position. Although effective methods of doing this are not very difficult to develop, it seems that commercial software does not provide truly suitable implementations.

Response to unsigned criticism above: Well, edge detection based on image derivatives is not fully naive, given the well-known practice of using Gaussian filtering as a pre-processing stage to the computation of image derivatives. This means that the effective support region for image derivative computations is equal to the support region of first-order Gaussian derivative operators, and thus substantially larger than a distance of three pixels. Moreover, the orientation of an edge within the differential approach to edge detection is given as orthogonal to the orientation of the image gradient as estimated by first-order Gaussian derivative operators. In practice, these approaches have found numerous successful applications in computer vision, although usually with different goals than mere image enhancement. Tpl
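To make the point above concrete, here is a minimal pure-Python sketch of Gaussian-derivative gradient estimation and of reading the edge orientation off as orthogonal to the gradient. The kernel radius, the value of sigma, and the synthetic 9x9 step-edge image are arbitrary illustrative choices, not anything prescribed by the discussion above.

```python
import math

def gaussian_deriv_kernels(sigma, radius):
    """Return 1-D Gaussian smoothing and Gaussian-derivative kernels
    (the derivative kernel is set up for use with correlation)."""
    g = [math.exp(-x * x / (2 * sigma * sigma)) for x in range(-radius, radius + 1)]
    s = sum(g)
    g = [v / s for v in g]                      # normalized smoothing kernel
    dg = [x / (sigma * sigma) * g[x + radius]   # correlation kernel for d/dx
          for x in range(-radius, radius + 1)]
    return g, dg

def correlate_at(img, y, x, ky, kx, radius):
    """Separable correlation of img with outer(ky, kx) at pixel (y, x)."""
    total = 0.0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            total += ky[dy + radius] * kx[dx + radius] * img[y + dy][x + dx]
    return total

# Synthetic 9x9 image with a vertical step edge down the middle.
img = [[0.0] * 4 + [255.0] * 5 for _ in range(9)]

g, dg = gaussian_deriv_kernels(sigma=1.0, radius=3)
y, x = 4, 4                               # probe a pixel on the edge
Lx = correlate_at(img, y, x, g, dg, 3)    # smooth in y, differentiate in x
Ly = correlate_at(img, y, x, dg, g, 3)    # smooth in x, differentiate in y

magnitude = math.hypot(Lx, Ly)
gradient_dir = math.atan2(Ly, Lx)         # points across the edge
edge_dir = gradient_dir + math.pi / 2     # edge is orthogonal to the gradient
```

For the vertical step edge, the gradient is horizontal (Lx large, Ly near zero), so the recovered edge direction comes out vertical, as expected.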

Add some "why"?

Lots of technical "how" but not a lot of explanatory "why" for us non-techies. Just a little would be nice. Awotter 23:17, 29 October 2007 (UTC)

Major restructuring of this article

Following the tag marked in October 2007, I have now made a first attempt to restructure this article to be more up to date with respect to the topic of edge detection and also to give more technical details of basic edge detectors. Question to those of you who have tagged this article: do you find it appropriate to remove the tag? Tpl (talk) 16:40, 22 February 2008 (UTC)

Atomic line filter

Why is there a link to "atomic line filter" in the "See Also" section? I skimmed through the article, and I don't see that it really has anything at all to do with edge or line detection. Anyone? 65.183.135.231 (talk) 18:03, 18 July 2008 (UTC)

I think that link should be removed since it deals with spectroscopy and not image processing. Line detection is closely related to edge detection and so should be included in this article. I'll try to add something soon. Zen-in (talk) 21:29, 4 June 2009 (UTC)

I agree that the link to atomic line filter is not suitable. Regarding the topic of "line detection", there is an article on ridge detection. For detecting straight lines or circles, there is also an article on the Hough transform. Tpl (talk) 06:29, 5 June 2009 (UTC)

Trivial

The "Why edge detection is a non-trivial task" section is a total failure. It shows a trivial example and then says "but most of it is not this trivial". Come on! Give a non-trivial example! Obviously!! Don't lead off with an easy one and then just wave your hands around saying that there are others that are harder. That's not explanation! --98.217.8.46 (talk) 17:25, 5 September 2008 (UTC)

I agree that the text can be misunderstood if not read with a positive spirit. Now, I have added an additional sentence to explain where the problem is. Probably a better illustration would help, but the current illustration is used for historical reasons (because it is available). If you know how to set the grey-levels in the presumably hexadecimal (?) notation used by previous authors, please help to generate a better illustration. Tpl (talk) 14:54, 6 September 2008 (UTC)

Steps to detect an edge map:

1- get the original image:

      a=imread('rice.png');

2- define the x and y gradient operators (such as Prewitt or Sobel):

   xo=[-1 0 1;-1 0 1;-1 0 1];
   yo=[-1 -1 -1;0 0 0;1 1 1];

3- smooth the image and get the x and y components of the gradient by correlating the original image with each operator:

    gx=filter2(xo,a);
    gy=filter2(yo,a);

4- compute the gradient magnitude:

    g=sqrt(gx.*gx+gy.*gy);

5- choose a threshold value t and convert the gray edge map to binary (for example t=0.3):

    gb=im2bw(g/255,t);

6- show the edge map

     figure,imshow(gb);

With my wishes, Dr. Ziad Alqadi, Faculty of Engineering Technology, Al-Balqa Applied University, Amman, Jordan

Out of touch

The article seems to have been written by a group of pompous intellectuals who are out of touch with their readers. Hyperlinking to articles like "shading" as though we've never heard of it, then just mentioning formulas without explaining them properly... The purpose of an encyclopaedia is to teach people about detailed areas of interest. I've studied computer vision myself at university and I find most of this article to be too vague, intellectual and just unhelpful. Specifically, the most notable things are that the explanation of non-maximum suppression is unhelpful, and that the illustration with the grey pixels should have the rows labelled (it's unclear for a while what the numbers mean) and isn't even necessary anyway - you could just refer to the image of the girl. Owen214 (talk) 10:02, 16 June 2010 (UTC)

Please, note that different parts of the article have been written by different people. In particular, some local edits seem to have been performed without updating other parts. For example, the text you refer to was written before the image of the girl was added. Tpl (talk) 11:55, 17 June 2010 (UTC)

Center of Luminance feature detection

This is a fast algorithm that has been used to acquire data about the shapes of cells (1981). It has also been used by 3-D surface feature laser scanners to detect the location of a laser line. It has also been called the CG (1994) and CM (2002) algorithm because the calculation is the same. I think it would be good to include a description of this algorithm, along with a few references. However since I haven't been a contributor to this page I thought it would be good to have a discussion first. Zen-in (talk) 07:37, 13 September 2010 (UTC)
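The comment notes that the CG (center of gravity) and CM (center of mass) calculations are the same. The specific 1981 formulation is not given here; the sketch below is just the generic intensity-centroid calculation for locating a bright line (e.g. a laser stripe) to sub-pixel accuracy along one scan line. The function name, the background threshold, and the sample profile are illustrative assumptions.

```python
def center_of_luminance(profile, threshold=0.0):
    """Sub-pixel position of a bright line as the intensity centroid
    (center of gravity / center of mass) of a 1-D scan-line profile.
    Values at or below `threshold` are ignored to suppress background."""
    num = den = 0.0
    for i, v in enumerate(profile):
        w = v - threshold
        if w > 0:
            num += i * w
            den += w
    if den == 0:
        raise ValueError("no signal above threshold")
    return num / den

# A laser line imaged across one scan line; the peak straddles
# pixels 4 and 5, so the centroid lands between them.
profile = [2, 3, 5, 40, 120, 120, 40, 5, 3, 2]
pos = center_of_luminance(profile, threshold=10)   # -> 4.5
```

Because the calculation is a weighted average rather than a nearest-pixel maximum, its resolution is limited by noise rather than by the pixel grid, which is presumably why the same formula recurs across the cell-shape and laser-scanner applications mentioned above.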

Removed unnecessary "weasel words" tag.

I removed the weasel words tag for "we" in the section "Why edge detection is a non-trivial task" as this is clearly an editorial we or author's we, and does not refer to a person or group of people in particular, but rather could be compared to a generic "one", as in "one may assume", ... 91.176.185.58 (talk) 20:41, 8 April 2011 (UTC)

Edges should be defined as lines or curves

I suggest that the definition (the first paragraph of the article) should be changed. The edge is not a collection of points but rather a line or a curve that separates two differently colored regions. In that case, vectorization methods like those used in the Outliner[1] utility would be classified as edge detection too. --Wladik Derevianko (talk) 12:45, 25 April 2011 (UTC)

With the edge definition for continuous images based on the differential formulation, in terms of the zero-crossings of the second-order directional derivative in the gradient direction, L_vv = 0, that satisfy L_vvv < 0, it follows that the edges will generically be connected curves. With Canny's original definition in terms of non-maximum suppression applied to a discrete image, the edges will be defined as a collection of points, which are not in the form of curves unless a complementary edge tracking procedure is applied.

The current formulation in terms of collection of points comprises both of these definitions, with the implicit understanding that connected edge curves can be obtained by complementary reasoning for continuous images and by complementary operations for discrete images. Tpl (talk) 15:18, 25 April 2011 (UTC)
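For reference, the differential conditions appealed to above can be written out explicitly. Here L denotes the (Gaussian-smoothed) image intensity, subscripts denote partial derivatives, and (u, v) is a local coordinate system with the v-axis parallel to the gradient direction; this is the standard form of the differential edge definition discussed in the comment.

```latex
% An edge point is a zero-crossing of the second-order directional
% derivative in the gradient direction that corresponds to a maximum
% (not a minimum) of the gradient magnitude in that direction:
\[
  L_{vv} = 0, \qquad L_{vvv} < 0 .
\]
% Multiplying by suitable powers of the gradient magnitude $L_v$ gives
% the equivalent conditions in Cartesian partial derivatives:
\[
  L_v^2 \, L_{vv} = L_x^2 L_{xx} + 2 L_x L_y L_{xy} + L_y^2 L_{yy} = 0 ,
\]
\[
  L_v^3 \, L_{vvv} = L_x^3 L_{xxx} + 3 L_x^2 L_y L_{xxy}
                   + 3 L_x L_y^2 L_{xyy} + L_y^3 L_{yyy} < 0 .
\]
```

The zero-crossing set of a smooth function is generically a collection of curves, which is why edges come out as connected curves under this definition for continuous images.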

External links

Edge detection in humans

The article says nothing about how edge detection works in human vision.
http://www.sciencedirect.com/science/article/pii/0042698989900060
http://www.physorg.com/news174147986.html
http://www.jstor.org/pss/36246 --78.48.72.169 (talk) 11:28, 12 July 2011 (UTC)

Undesirable self promo of CORF

The recently added description of CORF edge detection looks like undesirable promo of specific work, probably by the authors of the work? Quite immodestly, this new operator is even listed before the Canny operator. A similar entry, COSFIRE as an interest point detector, has been inserted in the template on feature detection.

For an edge detector to be included in this edge detection article, it should be established in the field. Therefore, I would suggest removal of CORF from this article. Tpl (talk) 08:29, 9 May 2012 (UTC)

Agree that it is too prominent. I don't have the expertise to say that it should be removed but that could also be the case. North8000 (talk) 11:30, 24 May 2012 (UTC)

I moved this from the article

IanOverington posted talk page material into the article. I moved it here: North8000 (talk) 17:09, 3 April 2019 (UTC)

Attention editors: Ian Overington (1992). Computer vision: a unified, biologically-inspired approach. Elsevier. ISBN 978-0-444-88972-0, could be used to improve the section 'Sub-Pixel'. That section could be improved by noting and effectively copying concepts embodied in items covered in 'Computer Vision …', particularly as discussed in Chapter 3 (pages 30 - 36 & 42 - 66) and Chapter 4 (pages 64 - 94). There it is described and shown (with copious references) that almost the entire capability for sub-pixel edge detection is contained in the dioptrics and simple retinal neural mechanisms within the human eye. The concepts of the only additional component (equally simple, and dealt with in the foregoing parts of the book) can also be readily copied.

Following directly from the foregoing are related visual mechanisms which can detect local motion and also stereo disparity between the images from the two eyes with similar sensitivity. These concepts can equally be readily copied and are dealt with later in the book (again with copious references): Chapters 5, 6 & 9, pages 95 - 105, 122 - 139 & 177 - 311 [Motion]; Chapter 10, pages 217 - 220, 224 - 226 & 228 - 229 [Stereo]. Other related topics which may be relevant are discussed in Chapter 7, pages 147 - 156 [Image sharpness], and Chapter 8, pages 159 - 175 [Image pre-processing]. Where illustration is necessary or desirable, any of the following figures from the book may be considered appropriate: 3.16 - 3.18, 3.20, 3.21, 4.14, 4.15, 4.16, 5.4 - 5.6, 5.12, 5.13, 6.6 - 6.9, 10.3 &/or 10.5.

Meanwhile, Chapter 12 possibly warrants a completely new section(?). It deals with a miscellany of additional edge processes which can readily be carried out (by very simple software) after generating the basic set of local sub-pixel edge vectors. Of possible use for illustration of these might be Figs. 12.12, 12.13, 12.14 &/or 12.15. Also parts of App 12 A & 12 B might be of use?
Full disclosure, I'm the author Ian Overington. In addition, should they be considered usable, some 'punchy' figures from a document (Overington I. (2004), 'Canny and SHV comparison.pdf'), available for download on my website "www.simulatedvision.co.uk" but otherwise not published in open literature, serve to illustrate rather forcibly the massive difference between 'nearest pixel' and 0.1 pixel. All the images in this said document are my own copyright but I would be more than happy to have any of them copyrighted also in Wikipedia, if such a thing is possible.
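For readers wondering what sub-pixel edge localization amounts to in practice: this is not Overington's method (which the comment above leaves to the book), but a common generic sketch. A discrete edge detector gives gradient magnitudes on the pixel grid; fitting a parabola through the three samples around the discrete maximum refines the edge position to a fraction of a pixel. The function name and the sample profile are illustrative assumptions.

```python
def subpixel_peak(gm, i):
    """Refine the position of a local maximum gm[i] of a 1-D gradient-
    magnitude profile by fitting a parabola through gm[i-1], gm[i],
    gm[i+1] and returning the abscissa of its vertex (pixel units)."""
    a, b, c = gm[i - 1], gm[i], gm[i + 1]
    denom = a - 2 * b + c
    if denom == 0:              # flat neighbourhood: no refinement possible
        return float(i)
    return i + 0.5 * (a - c) / denom

# Gradient magnitudes sampled across an edge; the discrete maximum is
# at pixel 3, but the true peak lies a little to its right.
gm = [1.0, 2.0, 6.0, 10.0, 9.0, 3.0, 1.0]
edge_pos = subpixel_peak(gm, 3)   # -> 3.3
```

This kind of refinement is what separates 'nearest pixel' accuracy from the roughly 0.1-pixel accuracy mentioned above, at the cost of only three samples and a handful of arithmetic operations per edge point.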