AttributeError: 'Mask2FormerHead' object has no attribute 'threshold' #3712

Open
Skyninth opened this issue Jun 17, 2024 · 0 comments

06/17 11:25:03 - mmengine - INFO - Iter(train) [ 50/90000] base_lr: 9.9951e-05 lr: 9.9951e-06 eta: 1 day, 1:16:30 time: 0.9964 data_time: 0.0160 memory: 20881 grad_norm: 2.4602 loss: 1.3251 decode.loss_cls: 0.0000 decode.loss_mask: 0.0000 decode.loss_dice: 0.0000 decode.d0.loss_cls: 1.3249 decode.d0.loss_mask: 0.0000 decode.d0.loss_dice: 0.0000 decode.d1.loss_cls: 0.0001 decode.d1.loss_mask: 0.0000 decode.d1.loss_dice: 0.0000 decode.d2.loss_cls: 0.0000 decode.d2.loss_mask: 0.0000 decode.d2.loss_dice: 0.0000 decode.d3.loss_cls: 0.0000 decode.d3.loss_mask: 0.0000 decode.d3.loss_dice: 0.0000 decode.d4.loss_cls: 0.0000 decode.d4.loss_mask: 0.0000 decode.d4.loss_dice: 0.0000 decode.d5.loss_cls: 0.0000 decode.d5.loss_mask: 0.0000 decode.d5.loss_dice: 0.0000 decode.d6.loss_cls: 0.0000 decode.d6.loss_mask: 0.0000 decode.d6.loss_dice: 0.0000 decode.d7.loss_cls: 0.0000 decode.d7.loss_mask: 0.0000 decode.d7.loss_dice: 0.0000 decode.d8.loss_cls: 0.0000 decode.d8.loss_mask: 0.0000 decode.d8.loss_dice: 0.0000
06/17 11:25:52 - mmengine - INFO - Iter(train) [ 100/90000] base_lr: 9.9901e-05 lr: 9.9901e-06 eta: 1 day, 1:03:36 time: 0.9953 data_time: 0.0158 memory: 19056 grad_norm: 2.1069 loss: 1.0589 decode.loss_cls: 0.0000 decode.loss_mask: 0.0000 decode.loss_dice: 0.0000 decode.d0.loss_cls: 1.0588 decode.d0.loss_mask: 0.0000 decode.d0.loss_dice: 0.0000 decode.d1.loss_cls: 0.0001 decode.d1.loss_mask: 0.0000 decode.d1.loss_dice: 0.0000 decode.d2.loss_cls: 0.0000 decode.d2.loss_mask: 0.0000 decode.d2.loss_dice: 0.0000 decode.d3.loss_cls: 0.0000 decode.d3.loss_mask: 0.0000 decode.d3.loss_dice: 0.0000 decode.d4.loss_cls: 0.0000 decode.d4.loss_mask: 0.0000 decode.d4.loss_dice: 0.0000 decode.d5.loss_cls: 0.0000 decode.d5.loss_mask: 0.0000 decode.d5.loss_dice: 0.0000 decode.d6.loss_cls: 0.0000 decode.d6.loss_mask: 0.0000 decode.d6.loss_dice: 0.0000 decode.d7.loss_cls: 0.0000 decode.d7.loss_mask: 0.0000 decode.d7.loss_dice: 0.0000 decode.d8.loss_cls: 0.0000 decode.d8.loss_mask: 0.0000 decode.d8.loss_dice: 0.0000
06/17 11:26:42 - mmengine - INFO - Iter(train) [ 150/90000] base_lr: 9.9851e-05 lr: 9.9851e-06 eta: 1 day, 0:58:56 time: 0.9952 data_time: 0.0152 memory: 19056 grad_norm: 1.7905 loss: 0.8241 decode.loss_cls: 0.0000 decode.loss_mask: 0.0000 decode.loss_dice: 0.0000 decode.d0.loss_cls: 0.8240 decode.d0.loss_mask: 0.0000 decode.d0.loss_dice: 0.0000 decode.d1.loss_cls: 0.0000 decode.d1.loss_mask: 0.0000 decode.d1.loss_dice: 0.0000 decode.d2.loss_cls: 0.0000 decode.d2.loss_mask: 0.0000 decode.d2.loss_dice: 0.0000 decode.d3.loss_cls: 0.0000 decode.d3.loss_mask: 0.0000 decode.d3.loss_dice: 0.0000 decode.d4.loss_cls: 0.0000 decode.d4.loss_mask: 0.0000 decode.d4.loss_dice: 0.0000 decode.d5.loss_cls: 0.0000 decode.d5.loss_mask: 0.0000 decode.d5.loss_dice: 0.0000 decode.d6.loss_cls: 0.0000 decode.d6.loss_mask: 0.0000 decode.d6.loss_dice: 0.0000 decode.d7.loss_cls: 0.0000 decode.d7.loss_mask: 0.0000 decode.d7.loss_dice: 0.0000 decode.d8.loss_cls: 0.0000 decode.d8.loss_mask: 0.0000 decode.d8.loss_dice: 0.0000
06/17 11:27:32 - mmengine - INFO - Iter(train) [ 200/90000] base_lr: 9.9801e-05 lr: 9.9801e-06 eta: 1 day, 0:56:24 time: 0.9968 data_time: 0.0156 memory: 19056 grad_norm: 1.4811 loss: 0.6155 decode.loss_cls: 0.0000 decode.loss_mask: 0.0000 decode.loss_dice: 0.0000 decode.d0.loss_cls: 0.6155 decode.d0.loss_mask: 0.0000 decode.d0.loss_dice: 0.0000 decode.d1.loss_cls: 0.0000 decode.d1.loss_mask: 0.0000 decode.d1.loss_dice: 0.0000 decode.d2.loss_cls: 0.0000 decode.d2.loss_mask: 0.0000 decode.d2.loss_dice: 0.0000 decode.d3.loss_cls: 0.0000 decode.d3.loss_mask: 0.0000 decode.d3.loss_dice: 0.0000 decode.d4.loss_cls: 0.0000 decode.d4.loss_mask: 0.0000 decode.d4.loss_dice: 0.0000 decode.d5.loss_cls: 0.0000 decode.d5.loss_mask: 0.0000 decode.d5.loss_dice: 0.0000 decode.d6.loss_cls: 0.0000 decode.d6.loss_mask: 0.0000 decode.d6.loss_dice: 0.0000 decode.d7.loss_cls: 0.0000 decode.d7.loss_mask: 0.0000 decode.d7.loss_dice: 0.0000 decode.d8.loss_cls: 0.0000 decode.d8.loss_mask: 0.0000 decode.d8.loss_dice: 0.0000
06/17 11:28:24 - mmengine - INFO - Iter(train) [ 250/90000] base_lr: 9.9751e-05 lr: 9.9751e-06 eta: 1 day, 1:10:02 time: 1.0503 data_time: 0.0162 memory: 19056 grad_norm: 1.1710 loss: 0.4363 decode.loss_cls: 0.0000 decode.loss_mask: 0.0000 decode.loss_dice: 0.0000 decode.d0.loss_cls: 0.4363 decode.d0.loss_mask: 0.0000 decode.d0.loss_dice: 0.0000 decode.d1.loss_cls: 0.0000 decode.d1.loss_mask: 0.0000 decode.d1.loss_dice: 0.0000 decode.d2.loss_cls: 0.0000 decode.d2.loss_mask: 0.0000 decode.d2.loss_dice: 0.0000 decode.d3.loss_cls: 0.0000 decode.d3.loss_mask: 0.0000 decode.d3.loss_dice: 0.0000 decode.d4.loss_cls: 0.0000 decode.d4.loss_mask: 0.0000 decode.d4.loss_dice: 0.0000 decode.d5.loss_cls: 0.0000 decode.d5.loss_mask: 0.0000 decode.d5.loss_dice: 0.0000 decode.d6.loss_cls: 0.0000 decode.d6.loss_mask: 0.0000 decode.d6.loss_dice: 0.0000 decode.d7.loss_cls: 0.0000 decode.d7.loss_mask: 0.0000 decode.d7.loss_dice: 0.0000 decode.d8.loss_cls: 0.0000 decode.d8.loss_mask: 0.0000 decode.d8.loss_dice: 0.0000
06/17 11:29:14 - mmengine - INFO - Iter(train) [ 300/90000] base_lr: 9.9701e-05 lr: 9.9701e-06 eta: 1 day, 1:06:42 time: 0.9976 data_time: 0.0153 memory: 19056 grad_norm: 0.8709 loss: 0.2910 decode.loss_cls: 0.0000 decode.loss_mask: 0.0000 decode.loss_dice: 0.0000 decode.d0.loss_cls: 0.2910 decode.d0.loss_mask: 0.0000 decode.d0.loss_dice: 0.0000 decode.d1.loss_cls: 0.0000 decode.d1.loss_mask: 0.0000 decode.d1.loss_dice: 0.0000 decode.d2.loss_cls: 0.0000 decode.d2.loss_mask: 0.0000 decode.d2.loss_dice: 0.0000 decode.d3.loss_cls: 0.0000 decode.d3.loss_mask: 0.0000 decode.d3.loss_dice: 0.0000 decode.d4.loss_cls: 0.0000 decode.d4.loss_mask: 0.0000 decode.d4.loss_dice: 0.0000 decode.d5.loss_cls: 0.0000 decode.d5.loss_mask: 0.0000 decode.d5.loss_dice: 0.0000 decode.d6.loss_cls: 0.0000 decode.d6.loss_mask: 0.0000 decode.d6.loss_dice: 0.0000 decode.d7.loss_cls: 0.0000 decode.d7.loss_mask: 0.0000 decode.d7.loss_dice: 0.0000 decode.d8.loss_cls: 0.0000 decode.d8.loss_mask: 0.0000 decode.d8.loss_dice: 0.0000
06/17 11:29:15 - mmengine - INFO - Saving checkpoint at 300 iterations
/ProjectRoot/openmmlab/lib/python3.8/site-packages/mmdet/models/layers/positional_encoding.py:103: UserWarning: floordiv is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').
dim_t = self.temperature**(2 * (dim_t // 2) / self.num_feats)
Traceback (most recent call last):
File "tools/train.py", line 104, in <module>
main()
File "tools/train.py", line 100, in main
runner.train()
File "/ProjectRoot/openmmlab/lib/python3.8/site-packages/mmengine/runner/runner.py", line 1777, in train
model = self.train_loop.run() # type: ignore
File "/ProjectRoot/openmmlab/lib/python3.8/site-packages/mmengine/runner/loops.py", line 294, in run
self.runner.val_loop.run()
File "/ProjectRoot/openmmlab/lib/python3.8/site-packages/mmengine/runner/loops.py", line 373, in run
self.run_iter(idx, data_batch)
File "/ProjectRoot/openmmlab/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "/ProjectRoot/openmmlab/lib/python3.8/site-packages/mmengine/runner/loops.py", line 393, in run_iter
outputs = self.runner.model.val_step(data_batch)
File "/ProjectRoot/openmmlab/lib/python3.8/site-packages/mmengine/model/base_model/base_model.py", line 133, in val_step
return self._run_forward(data, mode='predict') # type: ignore
File "/ProjectRoot/openmmlab/lib/python3.8/site-packages/mmengine/model/base_model/base_model.py", line 361, in _run_forward
results = self(**data, mode=mode)
File "/ProjectRoot/openmmlab/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/ProjectRoot/painting_seg/mmsegmentation/mmseg/models/segmentors/base.py", line 96, in forward
return self.predict(inputs, data_samples)
File "/ProjectRoot/painting_seg/mmsegmentation/mmseg/models/segmentors/encoder_decoder.py", line 222, in predict
return self.postprocess_result(seg_logits, data_samples)
File "/ProjectRoot/painting_seg/mmsegmentation/mmseg/models/segmentors/base.py", line 192, in postprocess_result
self.decode_head.threshold).to(i_seg_logits)
File "/ProjectRoot/openmmlab/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1185, in __getattr__
raise AttributeError("'{}' object has no attribute '{}'".format(
AttributeError: 'Mask2FormerHead' object has no attribute 'threshold'
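For context, the failing line sits in the single-channel (binary) branch of mmseg's result post-processing, which roughly does the following (a simplified sketch of the idea, not the exact mmseg source): when `out_channels == 1`, the logits go through a sigmoid and are cut at `decode_head.threshold`. `Mask2FormerHead` is borrowed from mmdet and never defines that attribute, hence the `AttributeError`.

```python
import torch

def binary_postprocess(seg_logits: torch.Tensor, threshold: float = 0.3) -> torch.Tensor:
    """Sketch of the out_channels == 1 branch: sigmoid, then threshold.

    mmseg reads the cut-off from `decode_head.threshold` at this point;
    a head without that attribute raises AttributeError here.
    """
    probs = seg_logits.sigmoid()
    # Boolean mask cast back to the logits' dtype, as in the traceback's
    # `.to(i_seg_logits)` call.
    return (probs > threshold).to(seg_logits)

logits = torch.tensor([[-2.0, 0.0, 2.0]])
print(binary_postprocess(logits))  # tensor([[0., 1., 1.]])
```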

my_custom_dataset:

# Copyright (c) OpenMMLab. All rights reserved.

from mmseg.registry import DATASETS
from .basesegdataset import BaseSegDataset

@DATASETS.register_module()
class PaintingDataset(BaseSegDataset):
    """Painting dataset.

    The ``img_suffix`` is fixed to '.jpeg' and ``seg_map_suffix`` is
    fixed to '.png' for this dataset.
    """
    METAINFO = dict(
        classes=('background', 'master'),
        palette=[[0, 0, 0], [255, 255, 255]])

    def __init__(self,
                 img_suffix='.jpeg',
                 seg_map_suffix='.png',
                 reduce_zero_label=True,
                 **kwargs) -> None:
        super().__init__(
            img_suffix=img_suffix,
            seg_map_suffix=seg_map_suffix,
            reduce_zero_label=reduce_zero_label,
            **kwargs)
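For reference, `reduce_zero_label=True` in mmseg's annotation-loading pipeline remaps label 0 to the ignore index and shifts every other label down by one, roughly like this sketch (NumPy stand-in for the real pipeline transform):

```python
import numpy as np

def reduce_zero_label(seg_map: np.ndarray, ignore_index: int = 255) -> np.ndarray:
    """Sketch of reduce_zero_label=True: class 0 becomes ignore_index,
    classes 1..N shift down to 0..N-1."""
    out = seg_map.copy()
    out[out == 0] = ignore_index   # background -> ignored
    out = out - 1                  # shift remaining classes down
    out[out == ignore_index - 1] = ignore_index  # re-pin the ignore value
    return out

seg = np.array([[0, 1], [2, 1]], dtype=np.uint8)
print(reduce_zero_label(seg))
# [[255   0]
#  [  1   0]]
```

Note this means that with only the classes (0=background, 1=master), the background pixels all become ignored, which also thins out the supervision signal.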

I only have two classes, one background and one foreground. Some of my images contain only foreground and no background, so if I set num_classes=2 with reduce_zero_label=False, my validation result is NaN. So I set num_classes=1 with reduce_zero_label=True, but then I ran into this error. The decode_head of Mask2Former does not inherit from mmseg's BaseDecodeHead; it is referenced from mmdet. What should I do? Please give me some suggestions, or tell me how to keep my results from becoming NaN when num_classes=2.
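One possible workaround (my own assumption, not an official fix): the binary post-processing branch only *reads* `decode_head.threshold`, so attaching that attribute to the built model should get past the AttributeError. Sketched below with stand-in objects so it runs standalone; in a real training script the equivalent would be something like `runner.model.decode_head.threshold = 0.3` after `Runner.from_cfg(cfg)` and before training starts (names `runner` and `cfg` follow the standard tools/train.py flow and are assumptions here).

```python
import types

# Stand-ins for the built segmentor and its mmdet-based head, which
# lacks the `threshold` attribute that mmseg's binary branch expects.
model = types.SimpleNamespace(decode_head=types.SimpleNamespace())

if not hasattr(model.decode_head, "threshold"):
    model.decode_head.threshold = 0.3  # probability cut-off in (0, 1)

assert model.decode_head.threshold == 0.3
```

Whether 0.3 (mmseg's usual default) or another cut-off is right for your data is something to validate; this only removes the missing-attribute crash.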
