I am trying to apply filter pruning with LeGR to a YOLOv8 model. The pruning itself ran, but I have a few questions about the implementation and the results.

1. During pruning, the NNCFNetwork is trained for a few epochs in `train_step`. However, in the early steps of pruning the model already shows almost 0 mAP even before training starts. Is this normal for LeGR? I know ordinary filter pruning would not behave like that, but I am not sure about LeGR.
2. In the first epoch of `train_step`, I observed that the weights and everything else are the same as in the original model. However, the inference output of the NNCFNetwork is completely different from the original one. The only difference between the two models is the pruning mask inserted into the model. Is the output difference caused by the mask?
3. There are a few examples of LeGR in the branch https://github.com/mkaglins/nncf_pytorch/tree/mkaglins/RL_experiments. Unfortunately those examples are quite old, so the sample code does not fit the current version. Is there any sample code for the current version of LeGR?
4. As mentioned in issues such as #1038 ("parameter count and model size are still the same"), the pruned PyTorch model has the same size as the original one. Will the model size be decreased based on the pruning mask if I export the model to ONNX?
5. Training results during each generation look completely identical. It seems the pruning mask does not affect the output of the network at all, which is strange, since the output itself differs from the original network.
The relevant snippet is from `nncf/nncf/torch/pruning/filter_pruning/global_ranking/evolutionary_optimization.py` (lines 101 to 110 in 35e21e9):

```python
    """
    Predict action for the current episode. Works as described above.

    :return: new generated action
    """
    np.random.seed(self.random_seed)
    episode_num = self.cur_episode
    action = {}
    if episode_num < self.population_size - 1:
```
In line 106, the random seed is reassigned every time `_predict_action` is called. Because of this, all actions within one population always have the same value. Is this intended, or is it a bug? Even after I commented out that line, the training result and reward were always the same.
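To illustrate the suspected effect, here is a minimal standalone sketch (the function name and action shape are made up for illustration; this is not NNCF's API): re-seeding NumPy's global RNG at the top of a method resets its state, so every call reproduces the same "random" draw.

```python
import numpy as np

def sample_action(random_seed):
    # Re-seeding on every call (as in the _predict_action snippet above)
    # resets the RNG state, so each call makes an identical draw.
    np.random.seed(random_seed)
    return np.random.uniform(-1, 1, size=3)

a1 = sample_action(42)
a2 = sample_action(42)
print(np.array_equal(a1, a2))  # True: every episode gets an identical action
```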
Hello, @Hyunseok-Kim0! Sorry for such a long delay; I lost your questions in my mail and only found them now. Let me try to answer some of them:
1-2) Filter pruning applies pruning masks to the model during `create_compressed_model`. The mask operations zero out some channels of convolution and linear operations, so even though all model parameters are identical, the activation computation is modified by the zeroed channels of some conv and linear layers.
3) You can try the Torch classification sample with a LEGR config.
4) Unfortunately, no. The only way to remove parameters from the model is to use the OpenVINO pruning transformation, as described here.
5) Answering this question requires some experiments and investigation, since the LeGR pruning code was last modified quite a long time ago. Please contact me again if you really need an answer to this question.
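The zeroed-channel effect described in 1-2) can be sketched with a small NumPy toy (a stand-in for illustration, not NNCF code): a "convolution" over the channel dimension keeps the same weights in both models, yet multiplying activations by a binary channel mask changes the output.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "conv layer": 4 output channels, 3 input channels, 1x1 kernel,
# modeled as a matrix multiply over the channel dimension.
weights = rng.normal(size=(4, 3))   # identical in both models
x = rng.normal(size=(3,))

# Pruning mask zeroing out two of the four output channels.
mask = np.array([1.0, 0.0, 1.0, 0.0])

original_out = weights @ x
pruned_out = mask * (weights @ x)   # same weights, masked activations

print(np.allclose(original_out, pruned_out))  # False: outputs differ
```

The surviving channels are untouched; only the masked ones are forced to zero, which is why outputs diverge even when every weight matches the original model.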
Hope that helps! Please don't hesitate to contact me directly by mail in case of missed messages.