
Failed to display segmentation results during reconstruction #31

CharlieLeee opened this issue Jun 26, 2020 · 5 comments


@CharlieLeee

@martinruenz Hi Martin, thanks for the great work and the clear instructions; I successfully built the system. However, when I ran the command:
./MaskFusion -run -l /home/charlie/Downloads/teddy-handover.klg

no segmentation results were displayed on the screen. Could you help point out the potential cause of this issue?
[screenshot: MaskFusion GUI without any segmentation results]

  • Output of the command:
Calibration set to resolution: 640x480, [fx: 528 fy: 528, cx: 320 cy: 240]
Reading log file: /home/charlie/Downloads/teddy-handover.klg which has 528 frames. 
Initialised MainController. Frame resolution is set to: 640x480
Exporting results to: /home/charlie/Downloads/teddy-handover.klg-export//
* Initialising MaskRCNN (thread: 140646186743552) ...
 * Loading module...
/home/charlie/python-environment/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:519: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint8 = np.dtype([("qint8", np.int8, 1)])
/home/charlie/python-environment/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:520: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/home/charlie/python-environment/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:521: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint16 = np.dtype([("qint16", np.int16, 1)])
/home/charlie/python-environment/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:522: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/home/charlie/python-environment/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:523: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint32 = np.dtype([("qint32", np.int32, 1)])
/home/charlie/python-environment/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:528: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  np_resource = np.dtype([("resource", np.ubyte, 1)])
Created model with max number of vertices: 9437184
Initialised multi-object fusion (main-thread: 140647066766720)
- The background model can have up to 9437184 surfel (3072x3072)
- Object models can have up to 1048576 surfel (1024x1024)
- Using GPU unspecified for SLAM system and GPU 0 for MaskRCNN
- Using frame-queue of size: 30
2020-06-27 01:31:23.576753: I tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
Using TensorFlow backend.
Your GPU "GeForce RTX 2060" isn't in the ICP Step performance database, please add it
Your GPU "GeForce RTX 2060" isn't in the RGB Step performance database, please add it
Your GPU "GeForce RTX 2060" isn't in the RGB Res performance database, please add it
Your GPU "GeForce RTX 2060" isn't in the SO3 Step performance database, please add it

Configurations:
BACKBONE                       resnet101
BACKBONE_STRIDES               [4, 8, 16, 32, 64]
BATCH_SIZE                     1
BBOX_STD_DEV                   [0.1 0.1 0.2 0.2]
COMPUTE_BACKBONE_SHAPE         None
DETECTION_MAX_INSTANCES        100
DETECTION_MIN_CONFIDENCE       0.7
DETECTION_NMS_THRESHOLD        0.3
FPN_CLASSIF_FC_LAYERS_SIZE     1024
GPU_COUNT                      1
GRADIENT_CLIP_NORM             5.0
IMAGES_PER_GPU                 1
IMAGE_CHANNEL_COUNT            3
IMAGE_MAX_DIM                  1024
IMAGE_META_SIZE                93
IMAGE_MIN_DIM                  800
IMAGE_MIN_SCALE                0
IMAGE_RESIZE_MODE              square
IMAGE_SHAPE                    [1024 1024    3]
LEARNING_MOMENTUM              0.9
LEARNING_RATE                  0.001
LOSS_WEIGHTS                   {'rpn_class_loss': 1.0, 'rpn_bbox_loss': 1.0, 'mrcnn_class_loss': 1.0, 'mrcnn_bbox_loss': 1.0, 'mrcnn_mask_loss': 1.0}
MASK_POOL_SIZE                 14
MASK_SHAPE                     [28, 28]
MAX_GT_INSTANCES               100
MEAN_PIXEL                     [123.7 116.8 103.9]
MINI_MASK_SHAPE                (56, 56)
NAME                           coco
NUM_CLASSES                    81
POOL_SIZE                      7
POST_NMS_ROIS_INFERENCE        1000
POST_NMS_ROIS_TRAINING         2000
PRE_NMS_LIMIT                  6000
ROI_POSITIVE_RATIO             0.33
RPN_ANCHOR_RATIOS              [0.5, 1, 2]
RPN_ANCHOR_SCALES              (32, 64, 128, 256, 512)
RPN_ANCHOR_STRIDE              1
RPN_BBOX_STD_DEV               [0.1 0.1 0.2 0.2]
RPN_NMS_THRESHOLD              0.7
RPN_TRAIN_ANCHORS_PER_IMAGE    256
STEPS_PER_EPOCH                1000
TOP_DOWN_PYRAMID_SIZE          256
TRAIN_BN                       False
TRAIN_ROIS_PER_IMAGE           200
USE_MINI_MASK                  True
USE_RPN_ROIS                   True
VALIDATION_STEPS               50
WEIGHT_DECAY                   0.0001


* Initialised MaskRCNN
* MaskRCNN got first data -- starting loop.
Segmentation fault (core dumped)

I also tried the Core/Segmentation/MaskRCNN/offline_runner.py script, and it worked well, so the issue might not be related to Mask R-CNN or TensorFlow themselves.
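
For reference, here is a minimal sketch of the kind of check I did: blending a mask produced by offline_runner.py over its RGB frame. The file names are placeholders and the mask format (a single-channel label image) is an assumption; adjust both to whatever the script actually writes out:

    # Minimal sketch: blend an offline segmentation mask over its RGB frame so the
    # instances can be inspected visually. Paths and the mask format are assumptions.
    import cv2
    import numpy as np

    rgb = cv2.imread("frames/frame0000.png")                        # RGB frame (placeholder name)
    mask = cv2.imread("masks/frame0000.png", cv2.IMREAD_GRAYSCALE)  # label image (assumed format)

    # Give every instance ID a pseudo-random colour and blend it over the frame.
    rng = np.random.default_rng(0)
    colours = rng.integers(0, 255, size=(int(mask.max()) + 1, 3), dtype=np.uint8)
    overlay = colours[mask]            # HxWx3 colour image, one colour per instance ID
    overlay[mask == 0] = 0             # leave the background black
    blended = cv2.addWeighted(rgb, 0.6, overlay, 0.4, 0)

    cv2.imwrite("overlay0000.png", blended)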

By the way, I would appreciate it if you could tell me how to generate a semantic point cloud like the one in your impressive video.
[screenshot: semantic point cloud from the video]

@martinruenz
Owner

Hi Charlie

  • There is a macro in MfSegmentation.cpp that enables debug visualisations. I recommend enabling them in order to see which data goes into and comes out of the segmentation method. See:
    //#define SHOW_DEBUG_VISUALISATION
  • Your log shows a segfault. Did this happen when shutting down the software, or before? This could be a potential cause of the issue.
  • To run the segmentation online you need 2 GPUs, otherwise the performance will be bad (see readme).
  • Check savePly to export objects (see the viewing sketch after this list):
    void MaskFusion::savePly() {
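
For viewing such an export outside MaskFusion, a minimal sketch along these lines should work, e.g. with Open3D. It assumes the exported .ply carries per-surfel colours, and the file name below is only a placeholder (check the -export directory from your log for the actual names):

    # Minimal sketch: load an exported PLY and display it as a coloured point cloud.
    # Requires `pip install open3d`; the file name is a placeholder.
    import open3d as o3d

    pcd = o3d.io.read_point_cloud("teddy-handover.klg-export/0.ply")  # placeholder path
    print(pcd)  # reports how many points were loaded
    o3d.visualization.draw_geometries([pcd])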

@CharlieLeee
Author

@martinruenz Hi Martin, thank you for replying!

The segmentation fault happened when I closed the GUI window.

I tried to run MaskFusion offline, but it seems the system still runs the online Mask R-CNN, since "MaskRCNN got first data -- starting loop." still shows up in the log.

But I somehow got it working offline by pressing Ctrl+C in the terminal after Mask R-CNN starts running (it does not always work, but it did here).

Sure, thank you for helping! I'll get back to you once I turn on the debug visualization and run again!

@ZhenghaoL

@CharlieLeee Hi, I have the same problem as you. Have you solved it yet?

@CharlieLeee
Author

@ZhenghaoL Sadly, I haven't actually solved the problem. As I said, the way I made it work is by pressing Ctrl+C while the program is running.

@1171257311

@CharlieLeee Hi, I also tried running with the segmentation computed offline beforehand instead of online, but the results are still not ideal; for example, ghosting is left along the path where objects move, as shown in the image below. I don't know how the author achieved the results shown in the video. Do you have any ideas? Thanks.
[screenshot: ghosting left along the object's motion path]
