[Good First Issue] [NNCF] Cover NNCF code with least coverage percentage with unit tests to at least 80% #2496
Feel free to split this task into multiple tasks, e.g. by implementing it per file. We can help create sub-tasks in that case.
@vshampor @MaximProshin
.take
Thank you for looking into this issue! Please let us know if you have any questions or require any help.
yes sir! Discord
@Saharsh365, thanks for your interest! The description is quite detailed and there are also tips. My recommendation would be to start from an analysis of the current codecov results, selecting one of the weak files and implementing the first simple test. If it works, the rest could be done by analogy. @vshampor will support you in case of questions. Don't hesitate to ask them.
@vshampor The code coverage analysis reports are not working for me anymore. Could you please help me solve this issue?
@Saharsh365 we won't be able to help you unless you describe the problem you are encountering in a more specific fashion. I checked that the reports are visible for non-authenticated viewers.
Oh, I'm not sure what happened, but I believe that from Friday evening to yesterday the reports were not being displayed. I can, however, see them now, and I believe that may have been due to the weekend? Apologies for taking up your time.
I would like to drop this issue, since I am not sure how to proceed with this task. I used multiple resources to research how to write code coverage tests, and I am still unsure how to apply my knowledge to this codebase.
Hello @Saharsh365, I'm unassigning you, but feel free to re-pick the task if you wish to; we're here to answer questions, and perhaps asking them on the Intel DevHub channel would be easier.
.take
Thank you for looking into this issue! Please let us know if you have any questions or require any help.
Hello! I am a first-time contributor and I'm excited to help! I have written unit tests for nncf/common/utils/decorators.py and nncf/common/utils/os.py. May I know if there are other ways to check the code coverage before making the PR, other than coverage.py? Would it be advised for me to open a PR for just these 2 files, to check if I am on the right track? :) I'm also partway through making tests for nncf/common/utils/patcher.py and I'd like to ask some questions on it. Particularly, I'm not clear regarding the For the Could this method possibly be bugged or am I understanding it incorrectly? Thank you for reading, really appreciate any help.
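(As a side note on checking coverage locally before a PR: the standard library's `trace` module can report which lines a test run actually executes. Below is a minimal, self-contained sketch; the `double` function is invented for illustration and is not part of NNCF.)

```python
# Sketch: using the stdlib `trace` module as a rough local alternative
# to coverage.py for seeing which lines a call actually executes.
import trace

def double(x):  # stand-in for a function under test
    return x * 2

tracer = trace.Trace(count=True, trace=False)
tracer.runfunc(double, 21)

# CoverageResults.counts maps (filename, lineno) -> execution count
counts = tracer.results().counts
executed_lines = {line for (_, line), n in counts.items() if n > 0}
assert len(executed_lines) > 0  # the body of `double` was executed
```

For real runs against the repository, `coverage run -m pytest ...` followed by `coverage report` remains the standard workflow.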
Greetings! The implementation may have subtle bugs, in particular because the coverage is missing. I suggest that you add a test case for it. Here's how the "instance method" patching is supposed to start:

```python
class Model:
    def forward(self, x):
        return x

model = Model()
assert model.forward(7) == 7  # `forward` is looked up on Model.__dict__, i.e. the class object

# now someone (not us) patches the forward with their own implementation:
def new_forward(x):
    return 0

model.forward = new_forward  # creates a new *instance* attribute `forward` which masks the original function definition from the class
assert model.forward(7) == 0

# now `model.forward` is what we call here an "instance method", and now we want to patch it:
def our_forward(x):
    return 42

patcher = Patcher()
patcher.patch(model.forward, our_forward)
assert model.forward(7) == 42
```

Try implementing that test case, maybe debugging the flow of the patcher for this case to see if it works as expected (or not). If you have further questions, feel free to post these. I think @nikita-savelyevv might also pitch in with some help on these.
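(A self-contained version of that scenario can be run without NNCF by substituting a minimal stand-in for the patcher; `patch_instance_method` below is invented for illustration and is not NNCF's actual `Patcher` API.)

```python
# Illustrative sketch only: demonstrates how an instance attribute masks
# the class-level method, and how a patcher can overwrite it again.
class Model:
    def forward(self, x):
        return x

def patch_instance_method(obj, name, replacement):
    """Hypothetical minimal patcher: overwrite an instance attribute,
    returning the previous value so it could later be unpatched."""
    original = obj.__dict__.get(name)
    obj.__dict__[name] = replacement
    return original

model = Model()
assert model.forward(7) == 7   # resolved through Model.__dict__

model.forward = lambda x: 0    # instance attribute masks the class method
assert model.forward(7) == 0

patch_instance_method(model, "forward", lambda x: 42)
assert model.forward(7) == 42  # the "instance method" is now patched
```

A real test for NNCF's `Patcher` would exercise the same sequence of states, asserting the return value after each step.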
Hi @Candyzorua. Regarding your question, if you call
for the
@vshampor @nikita-savelyevv Thank you both for your input. Duly noted; I will implement the ideas you have mentioned at my earliest availability.
Hello, I wrote unit tests for the first 5 files and have made the following PR; I am continuing with the rest of the tests :)
### Details

- Added unit tests for the first 5 files (listed in the issue) to reach at least 80% coverage
- Addresses https://github.com/openvinotoolkit/openvino/issues/22194
- Tests for remaining files in progress

---------

Co-authored-by: Vasily Shamporov <[email protected]>
Hello again, I have been writing tests for nncf/quantization recently and encountered an issue while making tests for nncf/quantization/algorithms/accuracy_control/algorithm.py. The function I am testing is I get a synthetic linear model from nncf/tests/openvino/native/models.py and quantize it. However, when I pass this to Not sure if the nodes of the quantized graph's having None for layer attributes is a bug, or if I have done something incorrect in the testing setup. Any help would be appreciated!
In addition, I would like to clarify whether the method through which I am obtaining synthetic models and quantizing them is good practice, or if I should hardcode quantized models.
@andrey-churkin to advise on the accuracy control algorithm requirements
Hello, I would like to follow up on this, please :)
Hi @Candyzorua, sorry for the delay in response. An instance of the

```python
from nncf.openvino.quantization.backend_parameters import BackendParameters

quantized_model = nncf.quantize(
    initial_model,
    dataset,
    advanced_parameters=nncf.AdvancedQuantizationParameters(
        backend_params={BackendParameters.COMPRESS_WEIGHTS: False}
    ),
)
```

Please let me know if it works for you.
Hi @andrey-churkin, thank you very much for your help. With your suggestion, I was able to test I have opened a PR with tests for 5 more files regarding quantization and accuracy control functions.
### Details

- Added unit tests for quantization and accuracy control functions
- Files tested:
  * [nncf/quantization/algorithms/accuracy_control/evaluator.py](https://app.codecov.io/gh/openvinotoolkit/nncf/blob/develop/nncf/quantization/algorithms/accuracy_control/evaluator.py)
  * [nncf/quantization/algorithms/accuracy_control/subset_selection.py](https://app.codecov.io/gh/openvinotoolkit/nncf/blob/develop/nncf/quantization/algorithms/accuracy_control/subset_selection.py)
  * [nncf/quantization/algorithms/accuracy_control/openvino_backend.py](https://app.codecov.io/gh/openvinotoolkit/nncf/blob/develop/nncf/quantization/algorithms/accuracy_control/openvino_backend.py)
  * [nncf/quantization/algorithms/accuracy_control/ranker.py](https://app.codecov.io/gh/openvinotoolkit/nncf/blob/develop/nncf/quantization/algorithms/accuracy_control/ranker.py)
  * [nncf/quantization/algorithms/accuracy_control/algorithm.py](https://app.codecov.io/gh/openvinotoolkit/nncf/blob/develop/nncf/quantization/algorithms/accuracy_control/algorithm.py)
- For [ranker.py](https://github.com/openvinotoolkit/nncf/blob/develop/nncf/quantization/algorithms/accuracy_control/ranker.py), I am not sure how to test the case with multithreading, so I have left that out for now.
- For [algorithm.py](https://github.com/openvinotoolkit/nncf/blob/develop/nncf/quantization/algorithms/accuracy_control/algorithm.py), I am not sure how to test some of the more complex functions like `apply`, so I have left that out for now.
- Addresses #2496
Hello @Candyzorua! Are all tests/tasks covered? If not, can I pick some? @vshampor, I wish to work on this issue :). As @MaximProshin said, we can split it into sub-tasks. Thank you.
@Candyzorua Hello! Thank you for replying :) That's great! I was already looking to pick up the TensorFlow and quantization-related files, so I will pick up working on these files! Thank you!
Hi, so if @RitikaxShakya takes TensorFlow and quantization, I'll cover the remaining ones.
Hi @DaniAffCH, since I have started on nncf/torch/quantization, I think it would be awesome if you could work on nncf/torch/sparsity and nncf/torch/pruning, as well as nncf/telemetry. I think nncf/torch/binarization has been removed. Please let me know what you think, thanks!
@Candyzorua thanks for letting me know! Yes, that's OK, I'll cover the unassigned files :)
@vshampor @Candyzorua Hi, what's the status of the files covered? I would like to contribute as well.
Hello! I've almost covered the TensorFlow-related files, and I'm finishing up the quantization files.
Hi @anzr299, sorry for the late reply. Please feel free to take over nncf/torch/quantization! I will be stopping work on this issue right now due to work responsibilities. Thank you!
Sure, no problem. I will work on it.
Context

NNCF uses `coverage` to run its precommit checks and currently has an admirable coverage rate of 90% across the repository. However, there are still some files that have <80% coverage, some with no coverage at all. Some of these are located in the `experimental` folder and do not have a strict coverage requirement; some are special files such as `__init__.py` or `setup.py` which are not easily unit-tested; but most have actual production code inside, and many of those have exactly 0.00% coverage. We need such files covered by precommit-scope unit tests in order not to let potential bugs get away.

Task
Review the current per-file coverage percentage at Codecov (sort by coverage):
https://app.codecov.io/gh/openvinotoolkit/nncf?search=&displayType=list

For the following files in the list (click to see per-line coverage for each on Codecov), add unit tests (using `pytest` style) under the `tests` directory, in a subdirectory corresponding to the tested backend: e.g. a test for `nncf/torch/foo.py` would go to `tests/torch/test_foo.py`, and one for `nncf/common/bar.py` to `tests/common/test_bar.py`.

Tips
- Start with the files having the least amount of coverage.
- Make sure that the test cases you are adding actually execute the lines that are marked as "missed" in the Codecov per-line display (for instance, in nncf/quantization/algorithms/accuracy_control/ranker.py).
- If at all possible, try to extend existing parametrized tests with parameters that trigger the missed-line logic.
- Try to add new test cases to the existing test files that contain the tests for the tested component; otherwise, create a new test file in a proper subdirectory with a proper name, using existing test files as a guideline.
- If you encounter dead code (for example, helper functions that are only defined and never used anywhere), delete it instead of covering it with tests. This will raise the coverage for the corresponding file better than anything.
- Follow the general coding guidelines in PyGuide.md when preparing the test code.
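To illustrate the tip about extending parametrized tests, here is a hedged sketch; `clamp` and its cases are invented for illustration and do not come from NNCF:

```python
import pytest

# Hypothetical example: `clamp` stands in for some under-covered helper.
def clamp(value, low, high):
    if value < low:
        return low
    if value > high:
        return high
    return value

@pytest.mark.parametrize(
    "value, expected",
    [
        (5, 5),    # pre-existing case: only the in-range branch is hit
        (-1, 0),   # added case: covers the `value < low` branch
        (11, 10),  # added case: covers the `value > high` branch
    ],
)
def test_clamp(value, expected):
    assert clamp(value, 0, 10) == expected
```

Adding parameters to an existing `parametrize` list like this is usually cheaper than writing a new test function, and it keeps related cases grouped together.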