These are my notes from reading and summarizing Chapter 21 of the TDD book (※). I hope they help anyone who reads the TDD book and gets stuck wondering "what does this part mean?". ※ https://www.amazon.co.jp/dp/B077D2L69C/ref=dp-kindle-redirect?_encoding=UTF8&btkr=1
- [ ] Call tearDown even if the test method fails
- [ ] Run multiple tests
- [ ] Output the collected test results ◀ NEXT
I want tearDown to be called even if the test method fails. However, if we keep the test from failing just so that tearDown can run, the test no longer fails and we can't tell where it broke. So the exception has to be caught at the point of failure, and the failure recorded.
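A rough illustration of the trap, not the book's code (the class names follow the TestCase built up in the earlier xUnit chapters): if run() simply swallows the exception so that tearDown can execute, the failure disappears.

```python
class TestCase:
    def __init__(self, name):
        self.name = name

    def setUp(self):
        pass

    def tearDown(self):
        pass

    def run(self):
        self.setUp()
        try:
            method = getattr(self, self.name)
            method()
        except Exception:
            pass           # the failure is silently lost here; it must be caught *and* recorded
        self.tearDown()    # tearDown now runs even when the test method fails
```

Recording that failure somewhere is exactly the job of the TestResult introduced below, which is why the result reporting gets built first.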
The order in which you write tests matters. When choosing the next test to write, pick one that "teaches you something" or that "gives you confidence when it passes". → I take this to mean: if you feel uneasy about the test you are about to write, break the target down further (TDD is fundamentally about small steps) and first write a test whose purpose and next step are clear. Honestly, I still don't fully understand this part. I will update it if it becomes clearer as I keep reading.
If a test works, but you're stuck writing the next test, take two steps back.
(Two steps back) Think about the test and write it (the test written one step before)
↑
(One step back) The test works (the test written one step before)
↑
(Currently) Thinking about the next test and writing it → my hand has stopped
I think this means going back two steps, to "thinking about the test written one before", and thinking it through again.
First, I want the kind of output a real test tool produces: number of tests run, number of failures, and details of the failures. However, having the framework automatically report all of that at once (which method failed for what reason, etc.) is too big a step.
As a first step, how about having the run of a single test return a TestResult that records its outcome? This can only handle one result, but starting with such a small step is what matters.
[Source Update] Added a test method for TestResult and the code that executes it.
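A sketch of what that new test might look like, assuming the TestCase, WasRun and TestCaseTest classes built in the earlier xUnit chapters (the method name testResult and the expected summary string are my reading of the book's code):

```python
class TestCaseTest(TestCase):
    def testResult(self):
        test = WasRun("testMethod")
        result = test.run()            # run() should now return a TestResult
        assert("1 run, 0 failed" == result.summary())

# execute the new test (relies on the classes defined in earlier chapters)
TestCaseTest("testResult").run()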
Next, a fake ("temporary") implementation to get it passing.
[Source Update] Added a TestResult class with a summary method that returns the result text (as a fake implementation, the return value is a hard-coded string). Updated the run method to return a TestResult.
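Roughly, the fake implementation could look like this (a sketch based on the TestCase skeleton from earlier chapters; the hard-coded string is the whole point of this step):

```python
class TestResult:
    def summary(self):
        return "1 run, 0 failed"       # fake it: hard-coded for now


class TestCase:
    def __init__(self, name):
        self.name = name

    def setUp(self):
        pass

    def tearDown(self):
        pass

    def run(self):
        result = TestResult()          # run now produces a result object...
        self.setUp()
        method = getattr(self, self.name)
        method()
        self.tearDown()
        return result                  # ...and hands it back to the caller
```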
Now that the test passes, bring the temporarily implemented summary method of TestResult closer to the real thing.
[Source Update] Set runCount (the number of tests run) to 1 in the TestResult constructor (small steps first), and have summary display runCount in the "tests run" part of the previously hard-coded return string.
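A sketch of this intermediate step. The constant 1 is deliberate; it becomes a real count in the next step:

```python
class TestResult:
    def __init__(self):
        self.runCount = 1              # still a constant, but now it lives in one place

    def summary(self):
        return "%d run, 0 failed" % self.runCount   # failure count is still hard-coded
```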
runCount is still a constant for now, so change it so that it is incremented on each test execution.
[Source Update] Added a testStarted method; the initial value of runCount is now 0, and it is incremented on each test execution.
Call the newly created testStarted method from the run method.
[Source Update] In the run method, call testStarted on the TestResult instance it creates, so that runCount is incremented.
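Putting those two steps together, the classes end up roughly like this (a sketch assuming the book's Python xUnit from the previous chapters):

```python
class TestResult:
    def __init__(self):
        self.runCount = 0                        # start at zero...

    def testStarted(self):
        self.runCount = self.runCount + 1        # ...and count each test as it starts

    def summary(self):
        return "%d run, 0 failed" % self.runCount


class TestCase:
    def __init__(self, name):
        self.name = name

    def setUp(self):
        pass

    def tearDown(self):
        pass

    def run(self):
        result = TestResult()
        result.testStarted()                     # record that a test has started
        self.setUp()
        method = getattr(self, self.name)
        method()
        self.tearDown()
        return result
```

With this, the earlier testResult test should still pass, but runCount is now genuinely counted rather than hard-coded.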
I want to make the other hard-coded number (the number of failures) real as well. Let's write a test.
[Source Update] Added a test for the failing case (testFailedResult), and a deliberately failing test method (testBrokenMethod, which raises an exception) for it to exercise.
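A sketch of those additions, assuming the WasRun and TestCaseTest classes from before (the names testFailedResult and testBrokenMethod follow my reading of the book's code):

```python
class WasRun(TestCase):
    def __init__(self, name):
        self.wasRun = None
        TestCase.__init__(self, name)

    def setUp(self):
        self.log = "setUp "

    def testMethod(self):
        self.log = self.log + "testMethod "

    def testBrokenMethod(self):
        raise Exception                # a test method that always fails

    def tearDown(self):
        self.log = self.log + "tearDown "


class TestCaseTest(TestCase):
    def testFailedResult(self):
        test = WasRun("testBrokenMethod")
        result = test.run()            # fails for now: the exception is not caught yet
        assert("1 run, 1 failed" == result.summary())
```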
- [ ] Call tearDown even if the test method fails
- [ ] Run multiple tests
- [x] Output the collected test results
- [ ] Output failed tests ◀ TODO added
When I run the test, it fails because the "raise Exception" is not caught. I want to catch and record the exception, but let's shelve that for now (the step is too big) and add a smaller test first.
Go to Chapter 22 >>