Talk:Unit testing/Archive 1


Removed links

These are all by the same author. — Edward Z. Yang(Talk) 23:07, 12 October 2006 (UTC)

As the author, I don't see the problem in that. These are quality links, to different sites. Why did you remove them? Roy Osherove.


Techniques: minor cleanup requested

The first two paragraphs of the "Techniques" section don't flow together. The IEEE standard for Software Unit Testing, albeit dated, isn't foreign to what occurs today for unit testing (a practice that is most decidedly automated). Perhaps the text regarding the standard should be removed altogether? .digamma 00:08, 28 October 2006 (UTC)

Or, if the section is fine as is, remove the cleanup-requested banner. .digamma 22:27, 31 October 2006 (UTC)
I have cleaned this section up - hopefully it is less confusing now. Cleanup tag removed.--Michig 14:42, 24 November 2006 (UTC)

Clarification requested re role of mock objects in 'separation'

In the lead we currently have the following clause: "constructs such as mock objects can assist in separating unit tests". Could someone clarify what this means? Are we separating unit tests from each other, or from other 'modules' in the system. I guess this is something to do with being able to run unit tests on modules in isolation from other possibly buggy modules, but this isn't clear (at least to me) from the way it is currently worded. Stumps 15:18, 24 November 2006 (UTC)

While this article perhaps does not make this clear, the linked Mock object article expands on this. Does it need any further explanation in this article, or would this be unnecessary duplication?--Michig 15:37, 24 November 2006 (UTC)
Thanks. I've tried to make the lead section a little clearer, without unnecessarily duplicating information from the mock object article. 83.5.247.168 06:37, 25 November 2006 (UTC) [oops ... wasn't logged in! Stumps 06:38, 25 November 2006 (UTC)]

Code Hygiene

The section on facilitating change touches on this obliquely, but I think that code hygiene is a separate goal. Most people have worked on codebases that are either deteriorating or have already deteriorated, with the dreaded "don't touch that, nobody understands it" sections. —The preceding unsigned comment was added by 131.107.0.73 (talk) 18:38, 18 January 2007 (UTC).

What's the point of this sentence: The IEEE[1] prescribes neither an automated nor a manual approach.

If the IEEE prescribes neither an automated nor a manual approach, then why even mention it? My dog prescribes neither an automated nor a manual approach. —The preceding unsigned comment was added by Ronnystalker (talkcontribs) 09:16, 7 April 2007 (UTC).

I think the point that the IEEE doesn't favor one over the other is much more significant than the fact that your dog doesn't favor one over the other. The IEEE is much more authoritative. DRogers 00:26, 8 April 2007 (UTC)

I think that's a much better way of putting it. Perhaps the sentence "The IEEE[1] prescribes neither an automated nor a manual approach." should be changed to "The IEEE[1] does not favour one approach over the other." I'm a newbie so I'm not confident enough to go editing pages (especially code that has a little [1] in it, in a topic that I know little about). But, I do know that I stumbled over that sentence as a reader. E.g., "prescribing neither something nor another" seems a little odd to me. Ronnystalker 06:03, 8 April 2007 (UTC)

Ok, I'll make that change. DRogers 14:58, 8 April 2007 (UTC)

Test Centric Languages

I think the critique of all existing test frameworks as not being built into the language is both a bit brutal and perhaps a marketing point for the D programming language, which was referenced without any evidence that it improved productivity.

  • Java's TestNG adds tests to the language through annotations, instead of procedural code (a minimal sketch follows this comment).
  • Functional languages (Lisp, Scheme, Haskell) can be good for testing when their functions are side-effect free: it's very easy to test functions when their whole output is visible and testable.
  • The PLUnit test framework for Prolog uses Prolog clauses as test statements, which is incredibly high level.

The current content should be rolled back (or backed up with auditable claims), and/or the area broadened to look at other languages. SteveLoughran 10:48, 17 April 2007 (UTC)
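
For illustration, a minimal sketch of the annotation approach mentioned in the first bullet, assuming TestNG's @Test annotation and Assert class (the Calculator class under test is hypothetical, defined inline to keep the sketch self-contained):

import org.testng.Assert;
import org.testng.annotations.Test;

public class CalculatorTest {
    // Trivial hypothetical class under test.
    static class Calculator {
        int add(int a, int b) { return a + b; }
    }

    // The test is declared through an annotation rather than being
    // registered procedurally with the framework.
    @Test
    public void addsSmallIntegers() {
        Assert.assertEquals(new Calculator().add(19, 23), 42); // TestNG order: actual, expected
    }
}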

The statement referring to the D programming language was inserted on 20:55, 20 July 2004 by 24.16.52.122, who also added similar irrelevant pro-D comments to a number of other Wiki pages. I suggest it be removed. jon 11:48, 14 May 2007 (UTC)

Testing Abstract Classes

I can understand separating Unit Testing from specific languages, but "... which may be a[n] ... abstract class or ..." seems a little unusual since few languages allow instantiation of abstract classes. Or is this to imply that an abstract class would be tested through a derived class? How would you test an abstract class (generally no code) in an automated way? —The preceding unsigned comment was added by Mweddle (talkcontribs) 16:02, 16 May 2007 (UTC).

I think the point is that an abstract base class is something you might want a unit test for. Abstract classes can contain code; however, they can't be directly instantiated. To instantiate one for the purposes of testing, you would need to create an instance of a concrete derived class. If one isn't readily available, then you might define a dummy subclass purely for the purposes of testing (a sketch of this follows). jon 16:11, 21 May 2007 (UTC)
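
A minimal Java sketch of that approach, assuming JUnit 4 (the Shape and FixedShape names are illustrative):

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class AbstractClassTest {
    // An abstract class that nevertheless contains concrete code worth testing.
    static abstract class Shape {
        abstract double area();            // no implementation here
        String describe() {                // concrete method under test
            return "area=" + area();
        }
    }

    // Dummy subclass defined purely so the abstract class can be instantiated.
    static class FixedShape extends Shape {
        double area() { return 4.0; }
    }

    @Test
    public void describeUsesAreaFromConcreteSubclass() {
        assertEquals("area=4.0", new FixedShape().describe());
    }
}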

Alternative Terminology

Some, me among them, have begun to call XP's version of unit testing microtesting. I'm wondering if the text below is appropriate to add to the article under the Extreme Programming sub-head:

A move has been made in the XP world to re-christen the practice of unit testing as microtesting. The case is made that unit testing already has a rich pre-XP meaning that incorporates many un-extreme practices, and that a more precise term is needed. The movement has gained a very small amount of ground: a few hundred XP practitioners and one XP consultancy (Industrial Logic, Inc.).

This is a fact, especially if you emphasize 'very small'. There are probably only a few hundred people who have adopted the term, and it is an attempt to be NPOV. The mentioned link self-proclaims its adoption.

  1. Is this a valid citation?
  2. Am I a spammer because, in fact, I invented the term?
  3. Am I a spammer because, in fact, I work for said company, and convinced them of the need for the name change?
  4. Am I being foolish to think that one can (or should) say that 'the movement has gained a very small amount of ground' on the basis of the few hundred people and dozen or so corporations who use the term?
  5. Am I going to live in fear forever???

No. In fact, I'm going to just leave this note right here and let someone else decide what to do with this data point. Cheers! GPa Hill 09:03, 22 September 2007 (UTC)

Cited content added

I added some cited content from Alberto Savoia (who has joined forces with Kent Beck, the father of JUnit). I apologize for the repeated versions that I committed, but I struggled to get the note formatting to work. Apparently, the citation template doesn't like extraneous spaces. —Preceding unsigned comment added by MickeyWiki (talkcontribs) 17:47, 29 November 2007 (UTC)

How much code it takes

  • for every line of code written, programmers often need 3 to 5 lines of test code

Maybe I'm doing something wrong, but I don't find I need this much test code. I've been running about 1 line of test per line of production code for the last few years.

I don't test everything that is testable. I only make sure that the program passes a few key tests. For example, if I wrote a program to convert between Centigrade and Fahrenheit, I would write 4 test cases for each conversion (a JUnit sketch of these follows this comment):

  • -40 C = -40 F
  • 0 C = 32 F (freezing/melting point of water)
  • 20 C = 68 F (room temperature)
  • 100 C = 212 F (boiling point of water)

Of course, if you consider the only "real" line of code to be the conversion itself:

centigrade = (fahrenheit - 32) / 1.8

... then maybe you do need 3 to 5 lines of code. But I count every non-blank line, so this routine has 4 lines.

float toCentigrade(float fahrenheit) {
    float centigrade = (fahrenheit - 32) / 1.8f;
    return centigrade;
}

Golly, I hope this is not WP:Original research. :-) --Uncle Ed (talk) 18:32, 23 January 2008 (UTC)
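
For what it's worth, the four checks above could be written as the following JUnit 4 sketch (the method under test is copied from the routine above; the 1.8f literal keeps the float arithmetic compiling in Java):

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class TemperatureTest {
    static float toCentigrade(float fahrenheit) {
        return (fahrenheit - 32) / 1.8f;
    }

    @Test
    public void knownConversionPoints() {
        // The last argument is a tolerance for float rounding.
        assertEquals(-40.0f, toCentigrade(-40.0f), 0.01f); // -40 is the same on both scales
        assertEquals(0.0f, toCentigrade(32.0f), 0.01f);    // freezing/melting point of water
        assertEquals(20.0f, toCentigrade(68.0f), 0.01f);   // room temperature
        assertEquals(100.0f, toCentigrade(212.0f), 0.01f); // boiling point of water
    }
}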

I believe what Savoia is getting at is that in order to *fully* test one line of code, you need to write 3 to 5 lines of test code. You might apply your own discretion and not test every conceivable scenario. Heck, it wouldn't be cost effective to do so. He's trying to illustrate the complexity of properly testing code. Covering all outcomes of all decision points is costly (which is why he has become a champion for what Michael Feathers calls "characterization tests").MickeyWiki (talk) 00:21, 24 January 2008 (UTC)

Well, if that's some published author's view, then it should go into the article whether I agree or not - obviously! :-)

I'll dig through the textbooks and articles I learned refactoring and unit testing from. Maybe this will shed some light on the proportion question. But the focus was not on "testing lines of code" but on "making sure the program works".

The thing being tested is not code per se, but the functionality of the program. That is why refactoring can work so well, in conjunction with unit testing. We are never testing any particular line of code. Rather, we are testing whether the program gives the right response when you tell it to do something.

We might create an elaborate set of routines with dozens of lines of code, just to pass a one-line test. If that is what it takes. Just this morning, I created an entire new class in Java because of a failing test that was only 4 lines long. That's lopsided in the other direction! --Uncle Ed (talk) 02:53, 24 January 2008 (UTC)

I don't think anybody is always writing tests to target a specific line of code. "3 to 5" is an average. Clearly, you don't need 5 lines to validate a simple assignment. But you might need 30 to test one of those cases that's "lopsided in the other direction".
It's an average - and it doesn't mean you're going to write 3,000 to 5,000 lines of code to test a 1,000-line module. Shooting for 100% code coverage is not cost effective. Some testing is most appropriately deferred to the QA team. Developers get paid big bucks to cut corners - their skill comes in knowing exactly which corners can safely be cut (i.e., what doesn't really need a test case).
Consider this: a simple if-block like "if (i > 10) then...". You need to write two tests - one where the expression is true, and one where it is false. What if the if-block contains another if-block with a simple boolean expression? What about multiple if-blocks? The number of paths through the code quickly multiplies (see the sketch after this comment).
Find a tool that'll measure the McCabe cyclomatic complexity of your code. You'll be surprised how truly complex some simple-looking code segments really are!
This is all true even in the TDD environment you describe. Whether you write tests first or code first, the code coverage challenges are the same.MickeyWiki (talk) 19:06, 24 January 2008 (UTC)
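
A small self-contained Java illustration of the path-multiplication point (the code is hypothetical):

public class PathExample {
    // Two independent if-blocks yield four distinct paths (true/true,
    // true/false, false/true, false/false), so covering every path needs
    // four tests, not two; each added condition doubles the count.
    static int classify(int i, int j) {
        int result = 0;
        if (i > 10) result += 1;
        if (j > 10) result += 2;
        return result;
    }

    public static void main(String[] args) {
        System.out.println(classify(11, 11)); // 3: both branches taken
        System.out.println(classify(11, 0));  // 1: first branch only
        System.out.println(classify(0, 11));  // 2: second branch only
        System.out.println(classify(0, 0));   // 0: neither branch
    }
}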

Yeah, it sounds like you're talking about "code coverage" - which is the angle from which ABC, Inc. was exploring test software. I'm more interested in finding out whether the program does its job correctly. It's a subtle difference. --Uncle Ed (talk) 17:28, 25 January 2008 (UTC)

Definition of Unit

I believe the definition of "unit" in the first paragraph is incorrect. In object-oriented programming a unit should be the method, not the class. Not only is a method the smallest testable part, but from a practical point of view, testing at that level reduces the tendency to do functional black-box testing and encourages making sure each line of code does what it was intended to do. Sspiro 18:07, 12 September 2007 (UTC)


I agree that a "unit" in object-oriented programming is the method/function rather than the class or any larger composite, but it appears that an anonymous revision in October 2009 removed this reference completely rather than making this correction. The result (as of 11/5/2010) is that the article mentions what a "unit" is for procedural programming and says nothing about what a unit is for object-oriented programming (or other styles). Since object-oriented programming is at least a major development paradigm at present (if not the predominant paradigm), this absence is conspicuous. Perhaps procedural, class/object-oriented, and functional development styles should all have representation in a separate section at some point. At present, the opening paragraph seems deficient in explanation. Derekgreer (talk) 16:35, 5 November 2010 (UTC)


The definition of a unit test is inconsistent throughout this article; first it's a method, then a class, then a module. Furthermore, you are incorrect when you say that 'unit testing should encourage someone to make sure that each line of code does what it was intended to do'. This is completely opposite to the point of unit testing. A unit test should be a black-box test of a unit, not a glass-box test of each line of code in just one method; this would make the unit test completely useless as a tool in refactoring and regression testing, and is a grave mistake that I have recently seen on a project.

Also wrong in this article is the claim that a unit test should not cross class boundaries. If this were true then writing a unit test would be impossible, because pretty much every method uses some other class in some way. It would be a better unit test that tested that the collaboration with other objects is also correct; writing mocks or stubs for each and every collaborator would be a complete waste of time; you'd have to mock String, StringBuffer, ArrayList, etc., ad infinitum. All you end up testing is that the unit interacts with the mocks and stubs as expected, which is totally pointless (a sketch of the black-box style follows this comment). Bagaaz 11:21, 22 December 2008 (UTC)
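
To illustrate the collaborator point, here is a sketch of a black-box unit test that lets the unit use its standard-library collaborators (List, StringBuilder) directly instead of mocking them; the join method is a hypothetical unit under test, and JUnit 4 is assumed:

import static org.junit.Assert.assertEquals;
import java.util.Arrays;
import java.util.List;
import org.junit.Test;

public class JoinerTest {
    // Hypothetical unit under test; it collaborates with List and
    // StringBuilder internally, but the test checks only its externally
    // visible input/output behaviour.
    static String join(List<String> parts) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < parts.size(); i++) {
            if (i > 0) sb.append(",");
            sb.append(parts.get(i));
        }
        return sb.toString();
    }

    @Test
    public void joinsWithCommas() {
        assertEquals("a,b,c", join(Arrays.asList("a", "b", "c")));
    }
}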


One more error: the current page states that it's a "verification and validation" method. Instead it is a verification method (it verifies the code against the specification). Validation in software engineering and computer programming means checking that the software fits the real needs of the users and really works in the context and the environment where it has to operate (this also covers things that for any reason are not captured by requirements or expressed in specifications). A unit test definitely is not a validation method. —Preceding unsigned comment added by 213.89.239.157 (talk) 23:42, 3 January 2010 (UTC)

Why not change it then? Si Trew (talk) 14:39, 4 January 2010 (UTC)

Unit tests without a framework

This was written by SangameswaraRao Udatha

Writing test cases using a framework such as JUnit is common. One can also write programs that test other programs without using a framework. However, unit testing does not always mean writing test-case programs. Unit tests can be manual too. (Matt Heusser - I edited the definition to include this possibility, e.g. stepping through code in a debugger.)

Unit tests are tests done by programmers to make sure low-level code works as they intended.

" When basic, low-level code isn't reliable, the requisite fixes don't stay at the low level. You fix the low level problem, but that impacts code at higher levels, which then need fixing, and so on." - Andy Hunt and Dave Thomas

Erm... so what do you want to do? — Edward Z. Yang(Talk) 21:20, 25 April 2006 (UTC)
"UnitTests are programs written to run in batches and test classes" - http://c2.com/cgi/wiki?UnitTest. The manual tests you are referring to are not unit tests, they are Programmer Tests or Developer tests. Unit tests must be repeatable and automated. DF
Unit tests must first be written manually before they can be batched and then automated. What is being described in this wiki entry (wikis are not permitted as sources) is a series of automated regression tests. --Walter Görlitz (talk) 18:15, 9 December 2010 (UTC)

Definition (manual/automated) relating to procedural unit tests

"Unit tests are typically written and run by software developers to ensure that code meets its design and behaves as intended. Its implementation can vary from being very manual (pencil and paper) to being formalized as part of build automation."

Indeed: When writing my own code (in Lisp or PHP), I manually unit-test each line of code, before even writing the next line of code in most cases. (In fact, if a single line of code involves nested expressions, I often test sub-expressions within that line of code before testing the line of code as a whole, and sometimes even before finishing writing the rest of that one line of code.) Only when I have finished writing and unit-testing all the lines of code to complete a function definition do I finally (again manually) unit-test the function as a whole. If I were to *also* do automated unit testing, that would work well only for functions as a whole, not for lines of code within a function definition.

Accordingly, it can be generally said that manual unit testing can be on units as small as a single line of code, or even sub-expressions within a single line of code, whereas automated unit testing is feasible only on units at the level of function/method or larger.

In that context, the following remark is a half-truth, i.e. a lie: "In procedural programming a unit may be an individual function or procedure." It would be better to qualify that to say that for *automated* unit testing a unit may be an individual function or procedure, while for *manual* unit testing a unit may be as small as a single line of code or even a sub-expression within a single line of code. (As an aside, via assertions, Common Lisp provides a "fail-soft" way to unit-test single lines of code and/or sub-expressions for some types of errors, such as wrong datatype: during program operation, if such an assertion fails, an error is signalled. A rough Java analogue follows this comment.)

Unfortunately in this Wiki article, the note about procedural programming comes before the remark that unit testing can be either manual or automated (I reversed the two remarks in my discussion above to make it more sensible, i.e. reader-friendly), so I can't see any graceful way to make this edit to repair the half-truth/lie, so I leave it to some expert to perhaps re-organize the introductory material to make this all truthful and in an appropriate sequence to be reader-friendly. In particular I think it should be said right at the top that unit testing can be either manual or automated, to make that absolutely clear, and *then* during the rest of the article the qualifier "manual" or "automated" or "either manual or automated" can be added to each section where the distinction makes a difference. 198.144.192.45 (talk) 16:22, 8 March 2011 (UTC) Twitter.Com/CalRobert (Robert Maas)
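
For non-Lisp readers, Java's assert statement is a loose analogue of the inline checks described above (the names below are illustrative, and assertions fire only when the JVM is run with -ea):

public class InlineCheckExample {
    static double average(double sum, int count) {
        assert count > 0 : "count must be positive"; // checks one sub-expression inline
        double result = sum / count;
        assert !Double.isNaN(result);                // checks the line's own result
        return result;
    }

    public static void main(String[] args) {
        System.out.println(average(10.0, 4)); // prints 2.5; run with "java -ea" to enable the asserts
    }
}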


I would take issue with a great many statements here - I certainly wouldn't want them to make it into the article.
  • "I manually unit-test each line of code"
That's not unit testing, because the "unit" isn't complete yet and will be further modified before it is. Although this testing might test some algorithms within the unit, it's an unreliable extrapolation to claim that because a test passed on the half-completed unit, then it will also pass for the completed unit.
  • "If I were to *also* do automated unit testing, that would work well only for functions as a whole, not for lines of code within a function definition."
There are two problems with this statement. Firstly, a "unit" is scoped so that externally testing its function is a reasonable test. If it isn't, then (it is frequently claimed) that reason alone can be reason to further decompose the unit until it is. IMHE this is a moot point - "units" that can't be tested well as such always turned out to be badly designed and incoherent anyway.
Secondly, the point about the need to test every LoC is not because lines equate to distinct functions, but merely because we have a distrust of untested lines (which is itself a smokescreen when you can have in-line blocks or branching, the ternary operator, etc.). Although I agree that it's important to be able to test every line, and to demonstrate that you're achieving this, any modern and competent unit testing framework will have the ability to do so. Usually this involves compiling to an object-code representation with line numbers included in the symbol table, then running a coverage evaluation tool separate from the unit testing framework (try Cobertura for a Java example) that examines the set of lines as they're being executed by the unit tests.
The idea that demonstrating line-by-line coverage under automatic testing is impossible is simply false (believe me, this is much of what I do all day). The idea that line-by-line coverage is remotely near possible with manual testing (how many kLoC are you dealing with?) is ludicrous.
Being charitable, your view of the SotA for automated testing is about 10 years behind the curve. But then this is why I work in the Java world, not the PHP world. Andy Dingley (talk) 02:02, 9 March 2011 (UTC)
+1 Andy. --Walter Görlitz (talk) 02:10, 9 March 2011 (UTC)


Limits of unit testing

What are the limits of unit testing? How can you unit test "graphical" output or GUI stuff? -- Hahih (talk) 09:57, 30 July 2008 (UTC)

You can't. See "A Survey of Unit Testing Practices", by Per Runeson, Lund University, in IEEE. 130.92.9.55 (talk) 10:02, 18 April 2011 (UTC)

I've edited the definition to make it clear that unit testing is an activity designed to build confidence that the unit is fit for use - not the verification of correctness. Given the number of possible inputs of even a trivial function (say, input two doubles, output a double), verification of correctness is impossible. To quote Glenford Myers: "The only exhaustive testing occurs when the tester is exhausted." —Preceding unsigned comment added by Mheusser (talkcontribs) 20:48, 3 March 2009 (UTC)

Two aspects of the limits of unit testing: 1. Trying to exhaustively test every possible combination of a set of inputs. This is maybe not the best use of unit testing. Model checking may be a better approach if you are trying to do something like that. 2. Limits imposed by encapsulation. I generally think of unit testing like this: when you run a unit test, the functionality is being expressed on a different platform.

Therefore, the functionality to be unit tested must be encapsulated in a way that defines it as an encapsulation of platform-independent functionality. That imposes various constraints, but it also requires a certain amount of discipline.

However, it is a good kind of discipline. A process imposes discipline in a rather arbitrary manner that either lacks flexibility or is ambiguous. Phillip Armour's second law of software process: "We can only define software processes at two levels: too vague or too confining." But if you have to create functionality that runs in an equivalent manner within two or more different contexts, the discipline comes about naturally because without it, the thing doesn't work.

Getting back to your example of a GUI, the GUI would probably have to be built following something like the Model-View-Controller (MVC) design pattern. In this way, the controller could be tested on its own without having anything to do with buttons, dialog boxes, menus, etc., all the objects typically associated with a GUI. The view element becomes a dumb face plate that sends and receives events. It contains all the GUI objects of whatever platform you're developing on. So testing the view becomes very simple and straightforward (see the sketch after this comment).

The Model portion becomes simpler too. It is simply the repository of data being collected or displayed. It is often more a matter of database design rather than reactive behavior. There isn't really much to test.

The unit tests are written mostly for the Controller portion of the system. (Entropy7 (talk) 18:20, 16 August 2009 (UTC))
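
A compressed Java sketch of the separation described above, assuming JUnit 4 (all names are illustrative; the view is reduced to an interface so the controller can be tested without any GUI objects):

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class ControllerTest {
    // The "dumb face plate": the real implementation would be built from
    // buttons and dialogs; the test substitutes a trivial fake.
    interface View {
        void showTemperature(String text);
    }

    // The controller holds the reactive behaviour we actually want to test.
    static class Controller {
        private final View view;
        Controller(View view) { this.view = view; }
        void onReadingReceived(double celsius) {
            view.showTemperature(celsius + " C");
        }
    }

    @Test
    public void formatsReadingForTheView() {
        final String[] shown = new String[1];
        Controller c = new Controller(text -> shown[0] = text); // fake view
        c.onReadingReceived(21.5);
        assertEquals("21.5 C", shown[0]);
    }
}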

A practical example of this: I worked for a company building a multiprocessor controller for refrigeration equipment. They had a central control unit (one of the processors) and a control console (another processor). There was an asynchronous com-link between the console and the central controller. It was sort of like RS-232 at TTL voltage levels (it was legacy, but I thought this was probably not very noise-resistant). Com packets went back and forth between them with an 8-bit checksum verification. Anyway, the console was being completely redesigned with a spiffy new capacitance touch panel behind a plate of tempered glass. But it would be another 4 weeks before hardware would be ready and the customer wanted to start elaborating requirements NOW!! So I did an SWT Java GUI that looked just like the front of the console. All the different packet commands I converted to functions in another file. The functions could either interface to the Java GUI or the com-link of the hardware, depending on which file you included. The development environment was Eclipse 3 for the Java. The code for the actual product was written in C. There was a bit of messiness in going back and forth between Java and C that I won't go into here. It was actually not that big of a deal because the functionality was mostly static. Had it been more dynamic in C++, things could have been a lot more ugly. However, the upshot is that they had a desktop version of their product 3 weeks before the hardware was even finished. A month later, code that was well tested and reviewed by all stakeholders was programmed into the new console and we had fully verified functionality running in about 15 minutes. The JUnit framework was used with Eclipse to do unit testing. Unit tests were also cross-referenced against a requirement set using a plug-in called JFeature. (Entropy7 (talk) 22:06, 17 August 2009 (UTC))

Unit tests of abstract class METHODS?

Seriously? Do you unit test your variable declarations as well?

- Jacob, San Diego —Preceding unsigned comment added by 76.88.0.180 (talk) 02:38, 16 April 2009 (UTC)

That sounds like a contradiction. A test checks an implementation, but an abstract method by definition does not yet have an implementation. If you do in fact have other, concrete methods in an abstract class, then to test that class you will need to create a concrete instance of that class. To do so, you stub the abstract methods to return known values. — Preceding unsigned comment added by Dvanatta (talkcontribs) 05:15, 9 December 2011 (UTC)

Extreme programming's use of unit tests

"Extreme Programming and most other methods use unit tests to perform white box testing."

Maybe I'm just splitting hairs here. But XP and TDD use unit tests more as black box: from an outside point of view, you figure out how you want to use the code. Then, you write the code that makes it happen. White box implies looking at the code and determining test cases to test it. Maybe this is a gray area (pun partially intended). DRogers 18:39, 14 September 2006 (UTC)

It's more of a terminology issue. From what I've heard, eXtreme Programming founded Test-Driven Development, and advocates under-the-hood testing to make sure everything gets covered. — Edward Z. Yang(Talk) 22:57, 14 September 2006 (UTC)
Well, it's under-the-hood of the project, but each module is still a black box as far as the tests are concerned... right? I guess the text refers to "project white box testing", since the tests "see" the inner modules. --Pedro 23:57, 4 October 2006 (UTC)

It has to be whitebox. If it's blackbox then you basically have to test every possible combination of inputs. For most modules this is not feasible unless you are prepared to wait until the end of the universe. 07:00, 22 February 2012 (UTC) — Preceding unsigned comment added by 203.41.222.1 (talk)

You don't understand black-box testing. --Walter Görlitz (talk) 07:04, 22 February 2012 (UTC)

Honesty

I'm somewhat disheartened to see my changes undone. Not because they were mine, but because I tried to emphasise the limits that are way too often ignored. The one comment I wish could be reinstated is the one that says that unit tests are only as good as the *API* under test, and that designing a good API is hard, thus creating good unit tests is even harder. But who am I, right? Just a software practitioner with only 30 years of experience: I surely cannot match the wit and youthful ardor of a self-proclaimed editor ... Oh my ... — Preceding unsigned comment added by Verec (talkcontribs) 14:32, 28 March 2012 (UTC)

These are the edits you're talking about.
First, guidelines indicate "the initial letter of a title is capitalized (except in rare cases, such as eBay)" so changing "Unit testing limitations" to "Unit Testing Limitations" is not acceptable.
Second, the "Applicability" section is not professionally written. While the advice is great for a blog, it's not encyclopedic nor is it referenced.
Third, "Design Considerations" suffers from the capitalization issue as t he first point and the unencyclopedic nature of the second. And "potentially ill designed APIs" isn't even a good English phrase, although if that was the only issue, someone could have copy-edited the additions to improve it.
Fourth, it's not referenced. This article already suffers from a great deal of unreferenced material and adding more probably wouldn't hurt, but it certainly wouldn't help.
I could go into greater detail of the writing style and other issues with the prose, but let me break down my edit summary in removing the material instead: "removing introduction of WP:OR incorrect formatting". Original research is short-hand for material that is unreferenced and probably just opinion. It's not that the opinion is incorrect, but it's not referenced. Feel free to follow that link. And I've already explained the formatting issue. I hope that clears things up. If you would like help in adding similar material, I could give you a few pointers in improving it.

Go

Can Go_(programming_language) be added to the list of "Language-level unit testing support"? It has a package in its standard library for it, just as Python does. — Preceding unsigned comment added by 174.1.81.63 (talk) 04:00, 25 March 2013 (UTC)

Clarification? Citation? Unit Testing Frameworks

Section "Unit Testing Frameworks": "Unit testing without a framework is valuable in that there is a barrier to entry .. scant unit tests is hardly better than having none at all ...",

This 2nd-paragraph sentence seems to contradict itself. Is the author advocating FOR or AGAINST frameworks? And is this fact or opinion? The citation [10] "Bullseye Testing Technology" does not support this. That source does not discuss "framework", "barrier to entry", "valuable", "easy" or "scant" coverage. Clarification please? 69.248.214.71 (talk) 14:16, 22 May 2013 (UTC)

Regression

Should 'regression' point to the page on 'regression testing'? — Preceding unsigned comment added by Cbyneorne (talkcontribs) 15:17, 20 July 2006 (UTC)

"Language-level unit testing support" section quality

At the moment, the section title and paragraph contents don't correlate well with the list of programming languages.

Among the listed programming languages, only D (http://dlang.org/unittest.html) and Cobra (http://cobra-language.com/docs/quality/) have built-in, language-level support for unit testing. Other languages have it through non-specific language constructs (as Python does through docstrings) or via annotations/other extensions (C# and Java).

So I think that either the title and text should be reworded significantly, or the list of languages should be reduced to two: D and Cobra.

Correct me if I'm wrong. — Preceding unsigned comment added by Zerkms (talkcontribs) 01:20, 13 November 2013 (UTC)

I think that section confused people, because the list is labelled as "Languages that support unit testing include", even though it is in the subsection about languages with built in support. (So it's not completely clear whether that list is supposed to be all languages that support unit testing, or only those that have built-in support for unit testing.) I'll try splitting it into two lists and hope that will help. Aij (talk) 14:59, 21 December 2015 (UTC)

Problem with basic definition in summary

The definition given in the summary of this article includes testing within a single unit and between multiple units, which conflicts with the definitions given by others:

  • ISACA defines unit testing as "A testing technique that is used to test program logic within a particular program or module." ISACA defines integration testing as "evaluat[ing] the connection of two or more components that pass information from one area to another."
  • softwaretesting.com gives a definition of unit testing as "a level of software testing where individual units/ components of a software are tested. The purpose is to validate that each unit of the software performs as designed."
  • NIST provides synonyms of instance testing and conformance testing and provides the definition "testing an artifact ... against the rules defined in the specification. This form of testing does not directly involve a system under test, but rather a testing artifact that was produced by the system under test."

Revision or verifiable supporting citations are needed. Stephen Charles Thompson (talk) 21:13, 17 October 2018 (UTC)

Softwaretesting.com is not a reliable source. What are your actual concerns? The definition here is in-line with dozens of books and the NIST definition. Walter Görlitz (talk) 21:32, 17 October 2018 (UTC)
Cite the NIST definition, please, if you will. I'm looking for it now (e.g. [1]) Stephen Charles Thompson (talk) 21:35, 17 October 2018 (UTC)
I was making reference to what you linked to above. Perhaps the following will help: https://www.google.com/search?q=%22Unit+test%22+site%3Anist.gov Walter Görlitz (talk) 23:43, 17 October 2018 (UTC)

Missing 'Costs'

There is a 'Benefits' section; for balance, the 'Costs' should also be noted. Such costs include:

  • Additional development effort, often exceeding the cost of the feature being developed.
  • Ongoing maintenance liability. Any future changes to the code also require a review and update of the tests.

Given there is a cost to implementing unit tests, there is a risk of a negative return on investment if the test suite doesn't actually identify software defects in less time than it took to write and maintain the unit tests. If the development team are sufficiently adept, it is very rare that defects will exist in a 'unit' in isolation. It is more likely that defects will be introduced at integration or system level where module testing would be of greater value.

Not all software developers agree that unit testing is worthwhile. — Preceding unsigned comment added by Johnpwilson (talkcontribs) 15:50, 11 November 2013 (UTC)

"negative return on investment if the test suite doesn't actually identify software defects in less time than it took to write and maintain the unit tests."
Not true, and a basic fallacy of testing. If code luckily happens to be perfect, there is still value in testing it, as it shows that the code is perfect and would also show if the code was imperfect. There is value to this, and that value does not disappear just because high-quality code isn't making use of it. The value of testing is greater than its obvious value from the bugs that are found; it also brings added value from demonstrating that some parts remain bug-free. Viam Ferream (talk) 10:29, 1 September 2014 (UTC)
If not using unit testing, how exactly does one determine that the feature works, and that any previous, still-desired functionality remains intact?
Unit testing is not solely an additional cost; it is a tradeoff vs. the different sort of testing that would otherwise be required -- or releasing untested code. It generally replaces repetitive manual testing. — Preceding unsigned comment added by 2601:601:9900:11A0:41A8:4C8E:7078:97A0 (talk) 05:19, 3 March 2022 (UTC)
This is not a forum for discussing things like this, but you are missing the option of other forms of testing. Walter Görlitz (talk) 05:26, 3 March 2022 (UTC)