AttributeError: 'NoneType' object has no attribute 'save', and I don't know why.
Here is my full log:
(sadtalker) iec@iec-Default-string:~/sontung/xtalker$ python inference.py --driven_audio examples/driven_audio/bus_chinese.wav --source_image examples/source_image/art_0.png --result_dir ./result
using safetensor as default
start to generate video... 1712734309.613055
device========= cpu
---------device----------- cpu
0000: Audio2Coeff
0.08464241027832031
No CUDA runtime is found, using CUDA_HOME='/usr'
0001: AnimateFromCoeff
1.8580634593963623
3DMM Extraction for source image
landmark Det:: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:01<00:00, 1.36s/it]
3DMM Extraction In Video:: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 24.55it/s]
0002: preprocess_model generate
4.304378986358643
eyeblick? pose?
None
None
mel:: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 84/84 [00:00<00:00, 41439.84it/s]
audio2exp:: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 9/9 [00:00<00:00, 57.17it/s]
0003: audio_to_coeff generate...
0.6201522350311279
./result/2024_04_10_14.31.49/art_0##bus_chinese.mat
rank, p_num: 0, 1
84
2024-04-10 14:31:58 [INFO] Start auto tuning.
2024-04-10 14:31:58 [INFO] Quantize model without tuning!
2024-04-10 14:31:58 [INFO] Quantize the model with default configuration without evaluating the model. To perform the tuning process, please either provide an eval_func or provide an eval_dataloader an eval_metric.
2024-04-10 14:31:58 [INFO] Adaptor has 5 recipes.
2024-04-10 14:31:58 [INFO] 0 recipes specified by user.
2024-04-10 14:31:58 [INFO] 3 recipes require future tuning.
2024-04-10 14:31:58 [INFO] *** Initialize auto tuning
2024-04-10 14:31:58 [INFO] {
2024-04-10 14:31:58 [INFO] 'PostTrainingQuantConfig': {
2024-04-10 14:31:58 [INFO] 'AccuracyCriterion': {
2024-04-10 14:31:58 [INFO] 'criterion': 'relative',
2024-04-10 14:31:58 [INFO] 'higher_is_better': True,
2024-04-10 14:31:58 [INFO] 'tolerable_loss': 0.01,
2024-04-10 14:31:58 [INFO] 'absolute': None,
2024-04-10 14:31:58 [INFO] 'keys': <bound method AccuracyCriterion.keys of <neural_compressor.config.AccuracyCriterion object at 0x703e32671ca0>>,
2024-04-10 14:31:58 [INFO] 'relative': 0.01
2024-04-10 14:31:58 [INFO] },
2024-04-10 14:31:58 [INFO] 'approach': 'post_training_static_quant',
2024-04-10 14:31:58 [INFO] 'backend': 'default',
2024-04-10 14:31:58 [INFO] 'calibration_sampling_size': [
2024-04-10 14:31:58 [INFO] 100
2024-04-10 14:31:58 [INFO] ],
2024-04-10 14:31:58 [INFO] 'device': 'cpu',
2024-04-10 14:31:58 [INFO] 'diagnosis': False,
2024-04-10 14:31:58 [INFO] 'domain': 'auto',
2024-04-10 14:31:58 [INFO] 'example_inputs': 'Not printed here due to large size tensors...',
2024-04-10 14:31:58 [INFO] 'excluded_precisions': [
2024-04-10 14:31:58 [INFO] ],
2024-04-10 14:31:58 [INFO] 'framework': 'pytorch_fx',
2024-04-10 14:31:58 [INFO] 'inputs': [
2024-04-10 14:31:58 [INFO] ],
2024-04-10 14:31:58 [INFO] 'model_name': '',
2024-04-10 14:31:58 [INFO] 'ni_workload_name': 'quantization',
2024-04-10 14:31:58 [INFO] 'op_name_dict': None,
2024-04-10 14:31:58 [INFO] 'op_type_dict': None,
2024-04-10 14:31:58 [INFO] 'outputs': [
2024-04-10 14:31:58 [INFO] ],
2024-04-10 14:31:58 [INFO] 'quant_format': 'default',
2024-04-10 14:31:58 [INFO] 'quant_level': 'auto',
2024-04-10 14:31:58 [INFO] 'recipes': {
2024-04-10 14:31:58 [INFO] 'smooth_quant': False,
2024-04-10 14:31:58 [INFO] 'smooth_quant_args': {
2024-04-10 14:31:58 [INFO] },
2024-04-10 14:31:58 [INFO] 'layer_wise_quant': False,
2024-04-10 14:31:58 [INFO] 'layer_wise_quant_args': {
2024-04-10 14:31:58 [INFO] },
2024-04-10 14:31:58 [INFO] 'fast_bias_correction': False,
2024-04-10 14:31:58 [INFO] 'weight_correction': False,
2024-04-10 14:31:58 [INFO] 'gemm_to_matmul': True,
2024-04-10 14:31:58 [INFO] 'graph_optimization_level': None,
2024-04-10 14:31:58 [INFO] 'first_conv_or_matmul_quantization': True,
2024-04-10 14:31:58 [INFO] 'last_conv_or_matmul_quantization': True,
2024-04-10 14:31:58 [INFO] 'pre_post_process_quantization': True,
2024-04-10 14:31:58 [INFO] 'add_qdq_pair_to_weight': False,
2024-04-10 14:31:58 [INFO] 'optypes_to_exclude_output_quant': [
2024-04-10 14:31:58 [INFO] ],
2024-04-10 14:31:58 [INFO] 'dedicated_qdq_pair': False,
2024-04-10 14:31:58 [INFO] 'rtn_args': {
2024-04-10 14:31:58 [INFO] },
2024-04-10 14:31:58 [INFO] 'awq_args': {
2024-04-10 14:31:58 [INFO] },
2024-04-10 14:31:58 [INFO] 'gptq_args': {
2024-04-10 14:31:58 [INFO] },
2024-04-10 14:31:58 [INFO] 'teq_args': {
2024-04-10 14:31:58 [INFO] },
2024-04-10 14:31:58 [INFO] 'autoround_args': {
2024-04-10 14:31:58 [INFO] }
2024-04-10 14:31:58 [INFO] },
2024-04-10 14:31:58 [INFO] 'reduce_range': None,
2024-04-10 14:31:58 [INFO] 'TuningCriterion': {
2024-04-10 14:31:58 [INFO] 'max_trials': 100,
2024-04-10 14:31:58 [INFO] 'objective': [
2024-04-10 14:31:58 [INFO] 'performance'
2024-04-10 14:31:58 [INFO] ],
2024-04-10 14:31:58 [INFO] 'strategy': 'basic',
2024-04-10 14:31:58 [INFO] 'strategy_kwargs': None,
2024-04-10 14:31:58 [INFO] 'timeout': 0
2024-04-10 14:31:58 [INFO] },
2024-04-10 14:31:58 [INFO] 'use_bf16': True
2024-04-10 14:31:58 [INFO] }
2024-04-10 14:31:58 [INFO] }
2024-04-10 14:31:58 [WARNING] [Strategy] Please install mpi4py correctly if using distributed tuning; otherwise, ignore this warning.
2024-04-10 14:31:58 [INFO] Attention Blocks: 0
2024-04-10 14:31:58 [INFO] FFN Blocks: 0
2024-04-10 14:31:58 [INFO] Pass query framework capability elapsed time: 182.33 ms
2024-04-10 14:31:58 [INFO] Do not evaluate the baseline and quantize the model with default configuration.
2024-04-10 14:31:58 [INFO] Quantize the model with default config.
2024-04-10 14:31:59 [INFO] Fx trace of the entire model failed, We will conduct auto quantization
Face Renderer:: 0%| | 0/8 [00:01<?, ?it/s]
2024-04-10 14:32:00 [ERROR] Unexpected exception RuntimeError('cannot call get_autograd_meta() on undefined tensor') happened during tuning.
Traceback (most recent call last):
File "/home/iec/anaconda3/envs/sadtalker/lib/python3.8/site-packages/neural_compressor/quantization.py", line 234, in fit
strategy.traverse()
File "/home/iec/anaconda3/envs/sadtalker/lib/python3.8/site-packages/neural_compressor/strategy/auto.py", line 140, in traverse
super().traverse()
File "/home/iec/anaconda3/envs/sadtalker/lib/python3.8/site-packages/neural_compressor/strategy/strategy.py", line 508, in traverse
q_model = self.adaptor.quantize(copy.deepcopy(tune_cfg), self.model, self.calib_dataloader, self.q_func)
File "/home/iec/anaconda3/envs/sadtalker/lib/python3.8/site-packages/neural_compressor/utils/utility.py", line 306, in fi
res = func(*args, **kwargs)
File "/home/iec/anaconda3/envs/sadtalker/lib/python3.8/site-packages/neural_compressor/adaptor/pytorch.py", line 3643, in quantize
q_func(q_model._model)
File "/home/iec/sontung/xtalker/src/facerender/modules/make_animation.py", line 129, in calib_func
out = generator(source_image, kp_source=kp_source, kp_driving=kp_norm)
File "/home/iec/anaconda3/envs/sadtalker/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/home/iec/sontung/xtalker/src/facerender/modules/generator.py", line 251, in forward
out = self.decoder(out)
File "/home/iec/anaconda3/envs/sadtalker/lib/python3.8/site-packages/torch/fx/graph_module.py", line 662, in call_wrapped
return self._wrapped_call(self, *args, **kwargs)
File "/home/iec/anaconda3/envs/sadtalker/lib/python3.8/site-packages/torch/fx/graph_module.py", line 281, in call
raise e
File "/home/iec/anaconda3/envs/sadtalker/lib/python3.8/site-packages/torch/fx/graph_module.py", line 271, in call
return super(self.cls, obj).call(*args, **kwargs) # type: ignore[misc]
File "/home/iec/anaconda3/envs/sadtalker/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "<eval_with_key>.74", line 8, in forward
g_middle_0_norm_0_param_free_norm = self.G_middle_0.norm_0.param_free_norm(activation_post_process_1)
File "/home/iec/anaconda3/envs/sadtalker/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/home/iec/anaconda3/envs/sadtalker/lib/python3.8/site-packages/torch/nn/modules/instancenorm.py", line 74, in forward
return self._apply_instance_norm(input)
File "/home/iec/anaconda3/envs/sadtalker/lib/python3.8/site-packages/torch/nn/modules/instancenorm.py", line 34, in _apply_instance_norm
return F.instance_norm(
File "/home/iec/anaconda3/envs/sadtalker/lib/python3.8/site-packages/torch/nn/functional.py", line 2495, in instance_norm
return torch.instance_norm(
RuntimeError: cannot call get_autograd_meta() on undefined tensor
2024-04-10 14:32:00 [ERROR] Specified timeout or max trials is reached! Not found any quantized model which meet accuracy goal. Exit.
Traceback (most recent call last):
File "inference.py", line 217, in <module>
main(args)
File "inference.py", line 152, in main
result = animate_from_coeff.generate(data, save_dir, pic_path, crop_info,
File "/home/iec/sontung/xtalker/src/facerender/animate.py", line 182, in generate
predictions_video = make_animation(source_image, source_semantics, target_semantics,
File "/home/iec/sontung/xtalker/src/facerender/modules/make_animation.py", line 135, in make_animation
generator.save(f"generator_int8")
AttributeError: 'NoneType' object has no attribute 'save'
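For reference, the final AttributeError looks like a secondary failure: when Intel Neural Compressor's tuning loop aborts (the "Not found any quantized model which meet accuracy goal" error above), `quantization.fit` returns None, and `make_animation` then calls `.save(...)` on that None. Below is a minimal self-contained sketch of that failure mode and a defensive guard; the class and function names here are hypothetical stand-ins, not the actual xtalker or neural_compressor code:

```python
from typing import Optional


class QuantizedModel:
    """Stand-in for the model object a successful quantization fit would return."""

    def save(self, path: str) -> str:
        return f"saved to {path}"


def quantize(succeed: bool) -> Optional[QuantizedModel]:
    # Mimics the observed behavior: when no quantized model meets the
    # accuracy goal, the tuning call yields None instead of a model.
    return QuantizedModel() if succeed else None


def save_generator(generator: Optional[QuantizedModel]) -> str:
    # Checking for None before calling .save() avoids the AttributeError
    # from the traceback; here we return a marker string instead of crashing.
    if generator is None:
        return "quantization failed; falling back to the fp32 generator"
    return generator.save("generator_int8")
```

So the real question to debug is why quantization fails in the first place (the `cannot call get_autograd_meta() on undefined tensor` RuntimeError during the FX-traced forward pass); the None-check only turns the crash into a graceful fallback.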