
Disclaimer: beginner question!

My project structure, highly simplified for sake of the question, looks like this:

Project/
|-- main.py
|-- test_main.py

After reading Jeff Knupp's blog post on unit testing and writing an assortment of tests, I wanted to see how much of my code was now covered by them. So I installed coverage.py, and the following confuses me:

$ coverage run main.py (shows me the prints/logging from the script)

$ coverage report main.py

Name       Stmts   Miss  Cover
------------------------------
main.py      114     28    75%

The thing is, I don't run the unit tests from within the main script, nor did I think I should. I manually run all tests from test_main.py before a commit and know for a fact they do not cover 75% of my statements. After reading the coverage documentation I am doubting my unit-test setup ... do I need something in main.py that triggers the tests?

So I tried the same on my test script:

$ coverage run test_main.py (shows me an 'OK' test run for all tests)

$ coverage report test_main.py

Name           Stmts   Miss  Cover
----------------------------------
test_main.py       8      0   100%

But this simply shows that I've "hit" 100% of the code in the test script during its execution. So why is coverage listed under "increase test coverage" if it merely displays what code has been run?

I would really like to see how much of my main.py is covered by test_main.py and am pretty sure I am missing some basic concept. Could someone elaborate?
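
For reference, these are the standard coverage.py subcommands involved; a minimal sketch of a run-and-report sequence (the file name is from my project, and "-m" / "html" are just the more detailed report variants):

    $ coverage run test_main.py     # runs the tests; every module the tests import is traced as well
    $ coverage report -m            # reports all measured files, with the missed line numbers
    $ coverage html                 # alternative: writes an annotated HTML report to htmlcov/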


On my Ubuntu machine, running "coverage run test_main.py; coverage report" only gives me a report on test_main.py. On my Windows machine the same commands give:

Name           Stmts   Miss  Cover
----------------------------------
main.py          114     74    35%
test_main.py       8      0   100%
----------------------------------
TOTAL            122     74    39%

The coverage report still doesn't make sense:

  1. test_main.py exercises 9 out of 134 lines of code and 1 out of 10 functions in main.py, so the coverage is not 35%.
  2. Why is it reporting coverage for test_main.py at all? These are the tests, and it would be odd if they weren't at 100%, since I'm running all of them to measure coverage ...
  3. Either I am doing something wrong here, or this way of looking at it is flawed: calculating an average "coverage" while lumping the tests in with the code itself offers no insight and, in my beginner opinion, is misleading (see the small calculation after this list).
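
To make the arithmetic concrete, here is how the TOTAL row above can be reproduced (a small sketch using only the numbers from the report, my own arithmetic rather than coverage.py output; whether lumping tests and code together like this is useful is exactly what I'm asking):

    # Recomputing the TOTAL row from the per-file rows in the report above.
    stmts  = {"main.py": 114, "test_main.py": 8}
    missed = {"main.py": 74,  "test_main.py": 0}

    total_stmts  = sum(stmts.values())            # 122
    total_missed = sum(missed.values())           # 74
    covered      = total_stmts - total_missed     # 48

    print(round(100 * covered / total_stmts))     # 39 -> matches the reported TOTAL
    print(round((35 + 100) / 2))                  # 68 -> what a simple per-file average would give
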
  • It's hard to answer without seeing your code. All coverage.py can do is tell you what code you ran, and what code you could have run, and show you the difference. You've shown two reports, both of which claim to be about main.py, but the first says there are 114 statements, and the second says there are 8 statements. That shouldn't change. Can you double-check the details? Commented Feb 22, 2017 at 18:58
  • @NedBatchelder sorry, small mistake - the bottom report was for test_main.py. I still don't understand how coverage is useful in the test context; it simply looks useful for checking what code might never be run, among other things. I want insight into which parts of my code are covered (hence "coverage", I thought ...) by unit tests, but this just shows me what code was run, in 2 separate reports. So is the only way to accomplish what I want with coverage.py to duplicate all my main.py code in test_main.py and test within that script? Commented Feb 23, 2017 at 9:10
  • You use coverage.py to measure the execution of your test suite. It will then tell you what lines in your product code were run, and which were not. Do this: "coverage run test_main.py; coverage report". Don't put a file name on coverage report if you don't want to limit the report to just one file. Commented Feb 23, 2017 at 15:20
  • @NedBatchelder added the run to the bottom of the question ... if this is "normal" output I am either missing the point of coverage or I am missing some configuration/project setup here. Commented Feb 23, 2017 at 19:59
  • Without seeing your code, I can't comment on why the percentages are what they are. Ditto for why the reports are different on the different machines. Many lines are executed simply by importing the file, which might account for why the numbers are higher than you expect. I prefer to include the tests in the coverage measurements; doing so can catch problems, for example, if the coverage is less than 100%. Finally, try "coverage report -m" or "coverage html" to see details of the specific lines missed. Commented Feb 24, 2017 at 0:22

1 Answer


To answer and close my own question: even though I still don't agree with some of coverage's logic, the 35% is accurate, and thank you @Ned for pointing out that lines are counted when the module is merely imported. The count also includes the top-level file description (docstring), the argparser setup, and the top-level reference to the main function, which is what adds up to 40 of the 114 statements, even though the function I import directly is only 9 lines of code.

I don't really like this way of reporting: I don't use all of the imports in the tests, the argparser is untouched, and still these lines are reported as "covered". It mostly comes down to a semantic discussion - I would say these lines are "seen" or "passed", but not actually "covered by tests".
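
To make that concrete, here is a stripped-down sketch of the situation (hypothetical file contents; names like add() and build_parser() are made up for illustration and are not my actual code), showing which statements in main.py run simply because test_main.py imports it:

main.py:

    """Top-level docstring - executed (and counted) at import time, per the point above."""
    import argparse                          # runs when test_main.py imports main

    def add(a, b):                           # the def line runs at import; the body only runs when called
        return a + b

    def build_parser():
        parser = argparse.ArgumentParser()
        parser.add_argument("x", type=int)
        parser.add_argument("y", type=int)
        return parser

    def main():
        args = build_parser().parse_args()
        print(add(args.x, args.y))

    if __name__ == "__main__":               # the guard itself runs at import, but main() does not
        main()

test_main.py:

    import unittest
    from main import add                     # importing executes every top-level statement in main.py

    class TestAdd(unittest.TestCase):
        def test_add(self):
            self.assertEqual(add(2, 3), 5)

    if __name__ == "__main__":
        unittest.main()

Running coverage over test_main.py therefore marks the docstring, the imports, every def line and the __main__ guard in main.py as executed, even though only add() is exercised by an actual test - which is roughly how 40 of the 114 statements end up "covered".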

I also did another coverage run with a second file, test_main_2.py, testing the same function in exactly the same manner ... which pushed the overall figure up, simply because another fully covered test file adds its own statements to the pool. The TOTAL line is not a per-file average but covered statements divided by total statements across all measured files (which is why the report above shows 39% rather than the 68% a simple average of 35% and 100% would give).

But I do now understand how the total is counted, which lets me interpret the numbers more correctly. Maybe this can help another early beginner interpret his or her own first results.
