Module talk:UnitTests

Test group order

At the moment, the test groups appear on the page in random order. Is there any way of getting them to appear in the order that they are defined? — Mr. Stradivarius ♪ talk ♪ 15:30, 3 April 2013 (UTC)Reply

As far as I know it's not possible to determine the order in which tests were defined, unless perhaps the test framework reads the module page (Lua source) itself. It's also not clear this is always the desired behaviour. There are a couple of other approaches: always using alphabetical order and naming your tests so they sort in the intended order (e.g. with numeric prefixes), or giving you an option to specify the order explicitly. Dcoetzee 17:09, 3 April 2013 (UTC)Reply
Putting them in alphabetical order would be a good way around the problem. The test cases on Module talk:Delink/testcases don't appear in alphabetical order, though - it's just random, as far as I can tell. I think it's probably the natural table order that pairs() found, although I don't know the code well enough to try and change it. (And I might cause some disruption to other people's testing if I do.) — Mr. Stradivarius ♪ talk ♪ 19:15, 3 April 2013 (UTC)Reply
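
A minimal sketch of the alphabetical approach, assuming the suite's test methods live in a table called tests (names here are illustrative, not the module's actual code):

local names = {}
for name, func in pairs(tests) do
    if type(func) == 'function' and name:find('^test') then
        table.insert(names, name)
    end
end
table.sort(names)  -- alphabetical, so the order is deterministic
for _, name in ipairs(names) do
    tests[name](tests)  -- call each test method with the suite as self
end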

Don't treat the result of a call as Wikitext

Hi UnitTesters. I'm failing to find the proper format for the nesting tests in function test_nesting at Module:Delink/testcases. How do I do that? What I want is for the expanded result to be compared to the non-expanded expected cases. Martijn Hoekstra (talk) 15:50, 3 April 2013 (UTC)Reply

I've fixed the problem using the nowiki option. The option isn't documented yet, so I might go through and add it when I have a second. — Mr. Stradivarius ♪ talk ♪ 19:10, 3 April 2013 (UTC)Reply
Thanks for adding the nowiki option. It was very useful at commons:Module_talk:Coordinates/testcases. --Jarekt (talk) 18:19, 13 December 2013 (UTC)Reply
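
For reference, here is roughly what a test using the option looks like, assuming the usual preprocess_equals(text, expected, options) form; the {{lc:}} example is only illustrative:

local p = require('Module:UnitTests')

function p:test_lowercase()
    -- nowiki = 1 displays the escaped wikitext of each result instead of
    -- letting it render, so the expanded output can be compared against a
    -- non-expanded expected string
    self:preprocess_equals('{{lc:ABC}}', 'abc', {nowiki = 1})
end

return p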

Alternative implementation

This module has several shortcomings:

  • logic and presentation are mixed together, making it impossible to present the tests in different formats
  • as a consequence, it is not possible to run tests in the debug console (which is quite convenient when you need to change the tests)
  • there is no line number for failed assertions
  • errors thrown by tested code are not caught

I created an alternative test module in hu.wikipedia and would welcome any comments or feature requests. The module is at hu:Modul:Homokozó/Tgr/ScribuntoUnit, a usage example is at hu:Modul:Homokozó/Tgr/Set/tests (the documentation is in Hungarian, but comments and variable names are in English, and the code follows xUnit conventions, so understanding it shouldn't be a problem). It throws exceptions from failed assertions, builds a result table based on which tests throw exception/error, and can then present the results in any way; I believe the separation of actual testing and display code makes it more maintainable and reusable. --Tgr (talk) 15:17, 25 May 2013 (UTC)Reply

@Tgr: didn't have time to fully read it, would it be much work on the test cases to convert them to your new module? —TheDJ (talk · contribs) 20:53, 4 June 2013 (UTC)Reply
I created Module:UnitTests/sandbox which right now only mixes logic and presentation in three places. I created Module talk:Citation/CS1/testcases2 to make sure it still works and for comparison purposes. testcases2 uses 6.91 MB of memory and takes 3.964 seconds to process tests, compared to testcases which uses 7.23 MB of memory and takes 4.933 seconds. Maybe with full separation of logic and presentation the memory footprint and processing time can be decreased further. I think another approach to unit testing would be better, but that would require rewriting current tests, which, as in the case of the citation module, could take a bit of effort. --darklama 15:11, 5 June 2013 (UTC)Reply
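
For comparison, a rough sketch of a test in the xUnit style Tgr describes above, using ScribuntoUnit-like names (the exact API may differ; the assertion shown is only illustrative):

local ScribuntoUnit = require('Module:ScribuntoUnit')
local suite = ScribuntoUnit:new()

function suite:testTrim()
    -- a failed assertion throws an error; the runner catches it and records
    -- the failure, keeping the test logic separate from the presentation
    self:assertEquals('bar', mw.text.trim('  bar  '))
end

return suite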

Test a string contains expected text

Please see Module talk:ScribuntoUnit#Test a string contains expected text for an enhancement request. --Derbeth talk 21:41, 1 January 2014 (UTC)Reply

Compare template vs. module

When comparing template vs. module output, comparing with "==" seems wrong. The template and module may differ in ways that don't matter, e.g. in the number and type of HTML whitespace characters, or in HTML representations (&nbsp;, etc.). I think this is a good sample: Module:Sandbox/Dts/testcases. As a first stage, I'd suggest making first_difference a member function; if it returns nil, the strings are considered identical. This would allow tests to define their own comparison method. A second stage would be to offer some pre-created options. Tsahee (talk) 20:44, 19 January 2014 (UTC)Reply
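
A minimal sketch of that first stage (illustrative only, not the module's current code), assuming p is the table Module:UnitTests exports; a testcases page could then override p:first_difference with its own comparison:

function p:first_difference(s1, s2)
    -- default comparison: collapse whitespace runs before comparing;
    -- returning nil means the two strings are treated as identical
    local a = mw.text.trim((s1:gsub('%s+', ' ')))
    local b = mw.text.trim((s2:gsub('%s+', ' ')))
    if a == b then
        return nil
    end
    for i = 1, math.min(#a, #b) do
        if a:sub(i, i) ~= b:sub(i, i) then
            return i
        end
    end
    return math.min(#a, #b) + 1
end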

Add a value to nowiki to show the wikitext only if the actual result does not contain a script error

I suggest making the nowiki option support a string like "if no errors" as a value, so that mw.text.nowiki is not applied to the actual result if a script error can be detected in it. If there's a script error, the wikitext is of no use (it will be the same regardless of the error), while the rendered result can be clicked on to show the error message, making it easier to fix. --Mark Otaris (talk) 16:42, 13 October 2015 (UTC)Reply
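
A hedged sketch of the suggested behaviour; the helper name and the way a script error is detected (searching for the text Scribunto emits) are assumptions, not existing module code:

local function maybe_nowiki(actual, nowiki_option)
    -- leave the output unescaped when it already contains a script error,
    -- so the rendered error stays clickable and its message can be read
    if nowiki_option == 'if no errors' and actual:find('Script error', 1, true) then
        return actual
    end
    return mw.text.nowiki(actual)
end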

{{#invoke:UnitTests/testcases/frame | _test}} gives different results

Why does {{#invoke:UnitTests/testcases/frame | _test}} give a different result when called by a direct #invoke vs. when invoked via the UnitTests module? --Ans (talk) 13:58, 11 October 2017 (UTC)Reply

templatestyles

Module:Citation/CS1 supports some 25 live templates and Module:Citation/CS1/sandbox supports an equal number of sandbox templates. We could have added WP:TemplateStyles markup (<templatestyles src="<name>/styles.css" />) 25× to the live templates and 25× to the sandboxen, but that just seemed dumb, so each of the modules concatenates template styles to the end of the cs1|2 template rendering using this (where styles is the name of the appropriate css page):

frame:extensionTag ('templatestyles', '', {src=styles})

and that works great.

Except in Module:Citation/CS1/testcases.

Where every test fails. 318 failures. There are differences between the live and sandbox modules but not that many.

Because of TemplateStyles. Why? Because TemplateStyles inserts a stripmarker at the end of every cs1|2 template rendering and each stripmarker has a unique id number. So, this always fails:

{{#ifeq:{{cite book |title=Title}}|{{cite book |title=Title}}|ok|FAIL}}
FAIL

even though the two {{cite book}} templates are identically written. Here are two transclusions of identical templates; note the stripmarkers at the ends:

'"`UNIQ--templatestyles-00000006-QINU`"'<cite class="citation book cs1"></cite> <span class="cs1-visible-error citation-comment"><code class="cs1-code">{{[[Template:cite book|cite book]]}}</code>: </span><span class="cs1-visible-error citation-comment">Empty citation ([[Help:CS1 errors#empty_citation|help]])</span>
'"`UNIQ--templatestyles-00000008-QINU`"'<cite class="citation book cs1"></cite> <span class="cs1-visible-error citation-comment"><code class="cs1-code">{{[[Template:cite book|cite book]]}}</code>: </span><span class="cs1-visible-error citation-comment">Empty citation ([[Help:CS1 errors#empty_citation|help]])</span>

To get round this, I have hacked Module:UnitTests function preprocess_equals_preprocess() (called only by preprocess_equals_preprocess_many()) to accept a new option.templatestyles. When that option is set to true, the code looks at the content of expected and extracts the templatestyles stripmarker identifier (an 8-digit hex number). It then overwrites the templatestyles stripmarker identifier in actual so that they both have the same identifier. Only then does preprocess_equals_preprocess() compare actual against expected.
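
In outline, the identifier rewrite works roughly like this (an illustrative sketch, not the exact code added to the module):

local function align_templatestyles_ids(actual, expected)
    -- strip markers look like '"`UNIQ--templatestyles-00000006-QINU`"';
    -- copy the hex id found in expected over whatever id actual has, so
    -- the two strings only differ where the renderings really differ
    local id = expected:match('UNIQ%-%-templatestyles%-(%x+)%-QINU')
    if id then
        actual = actual:gsub('(UNIQ%-%-templatestyles%-)%x+(%-QINU)', '%1' .. id .. '%2')
    end
    return actual
end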

If you are looking to test changes in ~/sandbox/styles.css compared to ~/styles.css, this change won't help you – and Module:UnitTests is probably the wrong tool anyway because stripmarkers are replaced with their actual content after this module has run.

I suppose there might be reasons to expand the capabilities of this functionality, though I'm not sure just what those reasons might be. For example, these possibilities:

  • none – remove templatestyles stripmarkers from both actual and expected; no styling applied to the renderings
  • actual – replace the templatestyles stripmarker identifier in expected with the templatestyles stripmarker identifier from actual; both use ~/sandbox/styles.css for styling

Perhaps there are others.

Trappist the monk (talk) 19:27, 28 March 2019 (UTC)Reply

Good work, life is getting complicated! Johnuniq (talk) 23:14, 28 March 2019 (UTC)Reply

Table of Contents

What about generating a table of contents? Trigenibinion (talk) 14:54, 12 March 2021 (UTC)Reply

Conflicts with 'Module:No globals'

A handful of functions in this module are not marked 'local', but could and (arguably) should be. --86.143.105.15 (talk) 10:36, 27 January 2022 (UTC)Reply
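
For illustration (hypothetical helper, not a function from this module), the pattern at issue is:

-- flagged by Module:No globals: omitting 'local' makes 'helper' a global
-- function helper(s) return s:sub(1, 1) end

-- preferred: module-scoped
local function helper(s)
    return s:sub(1, 1)
end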

Tests not failing when they should

I'm creating testcases for Module:GetShortDescription and Module:Annotated link and due to my own derping, left some copy-pasta typos which should have caused a series of tests to fail, but they did not. I fixed the typos but purposefully altered one test to fail and it sailed through running {{#invoke:AnnotatedLink/testcases|run_tests}}. Am I doing something wrong, or is there a problem with this module? Fred Gandt · talk · contribs 09:42, 27 January 2023 (UTC)Reply

"Test methods like test_hello above must begin with 'test'". I knew that. Fred Gandt · talk · contribs 09:45, 27 January 2023 (UTC)Reply
Might I suggest not outputting "All tests passed." when no tests have been run? Fred Gandt · talk · contribs 11:18, 27 January 2023 (UTC)Reply
  Done. I've also included the total number of tests run in general. Aidan9382 (talk) 11:49, 27 January 2023 (UTC)Reply
Very nifty; thank you 😊 Fred Gandt · talk · contribs 13:01, 27 January 2023 (UTC)Reply
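
For anyone who lands here with the same problem, a minimal illustration of the rule quoted above (the {{lc:}} tests are only illustrative):

local p = require('Module:UnitTests')

-- picked up by run_tests: the method name begins with 'test'
function p:test_lowercase()
    self:preprocess_equals('{{lc:ABC}}', 'abc')
end

-- silently skipped: no 'test' prefix, so a failure here is never reported
function p:check_lowercase()
    self:preprocess_equals('{{lc:ABC}}', 'wrong')
end

return p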

Allow nowiki option to have three states

Currently it appears that nowiki is a boolean option; could we have a third option to display both the <nowiki>...</nowiki> and parsed results? We could of course double all tests where this might be desirable, but 1) they might not end up anywhere near each other, and 2) it'd be inefficient. Suggest: {nowiki = 2} (semantically nice and easy math) for the third state. Fred Gandt · talk · contribs 20:36, 27 January 2023 (UTC)Reply

For the sake of ease in handling the code, and the fact I'd rather keep those options as just "truthy" checks instead of exact == checks (the only reason it's =1 in the doc is probably because it's shorter than typing =true), does something like a separate nowikiplus or combined option sound better? I'll probably have to standardise the module a little so that adding this doesn't mean pasting the same code in 5 different functions, but it should be doable (I'll just have to think about how to lay it out in the output). Aidan9382 (talk) 09:37, 28 January 2023 (UTC)Reply
Sounds good to me! Definitely think of the maintainability of the code ahead of the minor convenience of only having one option to consider. Personally; I think combined is better. Thank you for your consideration 😊 Fred Gandt · talk · contribs 09:46, 28 January 2023 (UTC)Reply
@Fred Gandt: I've managed to get some initial work done on this (currently got the main functions preprocess_equals(_many) and preprocess_equals_preprocess(_many) running under the new system idea in the sandbox) - Does the format I've given seem fine in your testcases? I don't want to start working on the more complicated to convert functions unless it's all working fine. You can test these by changing require('Module:UnitTests') to require('Module:UnitTests/sandbox') and specifying combined instead of nowiki. Aidan9382 (talk) 14:41, 28 January 2023 (UTC)Reply
Really; that looks great! Very neat table organisation. I hope all your effort is appreciated by more than just me, but rest assured I am impressed. Seeing the raw markup is great for technical analysis, but seeing the result is great for rapid detection of issues. Having them side by side (well over and under but same thing) like that is a quality of life improvement. Thank you 😊 Fred Gandt · talk · contribs 16:07, 28 January 2023 (UTC)Reply
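
Based on the thread above, usage would look something like this while the feature lives in the sandbox; treat the option name and its truthy value as provisional:

local p = require('Module:UnitTests/sandbox')

function p:test_lowercase_combined()
    -- 'combined' shows both the escaped wikitext and the parsed rendering
    self:preprocess_equals('{{lc:ABC}}', 'abc', {combined = true})
end

return p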

Present failed tests together at the top

Another suggestion: present all failed tests at the top of the results. This might be achieved multiple ways, and someone with greater familiarity with the code might be best suited to decide exactly what approach is best. As a user, seeing two sections – the uppermost for failed and the next for passed tests – would be ideal. Section depth should be unimportant unless the results are substed (but who would do that?); lvl 3 sections should be fine since the whole lot can be placed under a standard lvl 2 section for posterity. Diverting the results as their condition becomes known to the appropriate section should be trivial (easy for me to say, right? I'm a tad busy right now but will tackle it myself if necessary). Fred Gandt · talk · contribs 09:24, 28 January 2023 (UTC)Reply

I'll try this out in the sandbox too. Aidan9382 (talk) 09:37, 28 January 2023 (UTC)Reply
I will prepare a WikiLove goat for you! Seriously though; thanks again for your consideration ❤ Fred Gandt · talk · contribs 09:46, 28 January 2023 (UTC)Reply
Some wild yellow backgrounds have appeared 👀 😉 Fred Gandt · talk · contribs 16:13, 28 January 2023 (UTC)Reply
Ooooh yes, forgot I implemented that. There was a feature only in preprocess_equals_preprocess where failed tests were highlighted orange/yellow so they were easier to spot - I decided to give it a test run in the sandbox to make it apply the highlighting to more functions and completely forgot I did so. I'll probably make it an opt-in argument of the #invoke: (maybe |highlight=, a bit like |differs_at=).
(changed to an invoke option - Aidan9382 (talk) 16:43, 28 January 2023 (UTC))Reply
Oh, and I'm currently working on doing the split of failed and successful results from above, though I'm gonna have to think about how to do multi tests if they fail in the middle (I'm not sure they split too well right now, but we'll see). Aidan9382 (talk) 16:32, 28 January 2023 (UTC)Reply
I understand and fully appreciate how complex it is; there's no hurry or even need for this nice-to-have feature request. Don't forget to have a good day while you're working on it. Fred Gandt · talk · contribs 16:44, 28 January 2023 (UTC)Reply
Alright, I've decided not to implement the splitting with headers - screwing with the positional layout, especially with functions that do multiple checks in one run, is a bit more complicated and finicky than I think it's worth. Hopefully the highlight feature helps enough with finding the errors in that regard. As for everything else, I'll be moving that from the sandbox version to the live version some point soon when I'm free, and I'll also make sure to update the doc page (it's missing both the new stuff and some already existing stuff). Aidan9382 (talk) 18:56, 28 January 2023 (UTC)Reply
Understood. I hope it didn't trouble you too much trying, and again, I really appreciate the effort. I made a little helper script, moveFailedModuleTestsToTop.js, that shifts all the failures to the top on load. It's dirt simple and could do with extra qualification; would you mind if the tables included something like class="wikitable module-unit-tests-result" so the script can be more particular? I nearly went ahead and stuck it in there myself, but considered that might be a bit rude. I should have the script pick out sets of tables where results of multiple invocations are present... Fred Gandt · talk · contribs 01:51, 29 January 2023 (UTC)Reply
Don't worry about any trouble I had doing this, coding is a big hobby of mine and I enjoy fixing up stuff like this. I've gone ahead and added the class to the table headers and called it unit-tests-result. I don't think it would've been rude to add the class yourself, it's a simple minor improvement and it doesn't screw with the existing layout, so it's completely fine. Aidan9382 (talk) 06:07, 29 January 2023 (UTC)Reply
Many thanks again 😊 I know what you mean; I love coding too. I'm hoping one day to be good at it 😉 Fred Gandt · talk · contribs 07:48, 29 January 2023 (UTC)Reply

ID fix for all strip markers

@Pppery Hello, I noticed you reverted my edit. Could you share an example of a test that should be failing but is passing? Thanks, BrandonXLF (talk) 16:34, 25 May 2023 (UTC)Reply

The case that brought this to my attention was Module talk:YouTubeSubscribers/testcases, but AFAIK any test using preprocess_equals to compare unequal strings should trigger the bug. * Pppery * it has begun... 16:37, 25 May 2023 (UTC)Reply
I've reimplemented the stripping for the expected with the bug hopefully fixed (the expected was accidentally replaced with the actual, causing the comparison to just check whether the actual equals itself). Aidan9382 (talk) 17:04, 25 May 2023 (UTC)Reply
Oh I didn't catch that, thanks! BrandonXLF (talk) 23:17, 25 May 2023 (UTC)Reply
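
For the record, a toy illustration of the bug described above (hypothetical names, not the module's code): normalising the strip-marker ids on the wrong variable reduces the comparison to actual == actual, so every test passes.

local function normalise_ids(s)
    -- replace the variable hex id in a strip marker with a fixed placeholder
    return (s:gsub('(UNIQ%-%-%w+%-)%x+(%-QINU)', '%1xxxxxxxx%2'))
end

local actual   = '\'"`UNIQ--templatestyles-00000006-QINU`"\' rendering A'
local expected = '\'"`UNIQ--templatestyles-00000008-QINU`"\' rendering B'

-- buggy: both sides derived from actual, so the result is always true
local buggy = normalise_ids(actual) == normalise_ids(actual)

-- fixed: each side normalised independently, so real differences still fail
local fixed = normalise_ids(actual) == normalise_ids(expected)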