DATWEP

Source code for the paper “Dynamic Task and Weight Prioritization Curriculum Learning for Multimodal Imagery” (currently under review).

Setup

Create a new Python environment

$ conda create -n datwep python=3.11 -y

Activate created environment

$ conda activate datwep

Install dependencies

$ pip install -r requirements.txt

Downloading & Preparing the Dataset

Download the FloodNet dataset (both Track 1 and Track 2) as follows:

  1. Download Track 1 & 2 from https://github.com/BinaLab/FloodNet-Challenge-EARTHVISION2021
  2. Extract the “Images” and “Questions” folders from the Track 2 files.
  3. Extract the training images from the Track 1 files. Since the validation and test images don’t have annotations, they are ignored.
  4. In the “labeled” folder, combine the mask folders from the “Flooded” and “Non-Flooded” folders into a new directory.
  5. Create three new directories and move the files (a Python sketch of steps 4–5 follows this list):
    1. “track2_vqa/Images/”: move the “Images” folder from the Track 2 files here
    2. “track2_vqa/Questions/”: move the “Questions” folder from the Track 2 files here
    3. “track1_seg/train-label-img”: move the combined mask files here (the mask folders from the “Flooded” and “Non-Flooded” folders)
  6. If you prefer other directories, modify the paths in the hyperparameters dictionary:
    1. hyperparameters['DATASET']['IMAGES_ROOT']
    2. hyperparameters['DATASET']['QUESTIONS_ROOT']
    3. hyperparameters['DATASET']['MASK_IMAGES_PATH']
  7. Note: only the “Train_Image” folder inside the Track 2 “Images” folder is used. This is set automatically.
  8. If you prefer another data root, set hyperparameters['DATASET']['DATA_ROOT'] in the Training.ipynb notebook.
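
The directory shuffling in steps 4–5 can be scripted. Below is a minimal Python sketch; the extraction paths (TRACK1_LABELED, TRACK2_ROOT, DATA_ROOT) and the “mask” subfolder name are assumptions about the archive layout, so adjust them to match what you actually extracted.

import shutil
from pathlib import Path

# Assumed extraction paths; change these to match your local layout.
TRACK1_LABELED = Path('FloodNet_Track1/train/labeled')
TRACK2_ROOT = Path('FloodNet_Track2')
DATA_ROOT = Path('data')

# Step 4: merge the mask folders from "Flooded" and "Non-Flooded" into one directory.
combined_masks = DATA_ROOT / 'track1_seg' / 'train-label-img'
combined_masks.mkdir(parents=True, exist_ok=True)
for split in ('Flooded', 'Non-Flooded'):
    for mask in (TRACK1_LABELED / split / 'mask').glob('*'):  # 'mask' subfolder name is an assumption
        shutil.copy2(mask, combined_masks / mask.name)

# Step 5: move the Track 2 "Images" and "Questions" folders into place.
(DATA_ROOT / 'track2_vqa').mkdir(parents=True, exist_ok=True)
shutil.move(str(TRACK2_ROOT / 'Images'), str(DATA_ROOT / 'track2_vqa' / 'Images'))
shutil.move(str(TRACK2_ROOT / 'Questions'), str(DATA_ROOT / 'track2_vqa' / 'Questions'))

# Step 6 (optional): with a different layout, override the paths in the
# hyperparameters dictionary inside Training.ipynb, e.g.:
# hyperparameters['DATASET']['IMAGES_ROOT'] = str(DATA_ROOT / 'track2_vqa' / 'Images')
# hyperparameters['DATASET']['QUESTIONS_ROOT'] = str(DATA_ROOT / 'track2_vqa' / 'Questions')
# hyperparameters['DATASET']['MASK_IMAGES_PATH'] = str(combined_masks)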

Experiments

  1. After setting up the dataset, start the Jupyter server locally:
$ jupyter lab
  2. Open the Training.ipynb notebook and make sure the correct data folders are set (a quick path check is sketched after this list). This notebook will train the model and save the results.
  3. After training is complete, open the Evaluation.ipynb notebook and run it to see the final results.
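
Before launching a full training run, it can help to confirm the dataset paths the notebook will use. A minimal sanity-check sketch, assuming the hyperparameters dictionary is already defined in Training.ipynb:

import os

# Hypothetical check: confirm every configured dataset folder actually exists.
for key in ('DATA_ROOT', 'IMAGES_ROOT', 'QUESTIONS_ROOT', 'MASK_IMAGES_PATH'):
    path = hyperparameters['DATASET'][key]
    assert os.path.isdir(path), f'Missing dataset folder: {path}'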

Contact

If you have any questions, feel free to contact us:

H. Fuat Alsan (PhD Candidate) [email protected]

Assoc. Prof. Dr. Taner Arsan (Computer Engineering Department Chair) [email protected]

If you use our work, please cite us:

(PAPER STILL UNDER REVIEW, WILL BE UPDATED LATER)

Prediction Examples

VQA Example Prediction

Example VQA Prediction

Segmentation Example 1

Actual Image

Example Segmentation Image 1

Actual Masks

Example Segmentation Masks Actual 1

Predicted Masks

Example Segmentation Masks Prediction 1

Actual Masks (Combined Color)

Colorful Segmentation Masks Actual 1

Predicted Masks (Combined Color)

Colorful Segmentation Masks Prediction 1

Segmentation Example 2

Actual Image

Example Segmentation Image 2

Actual Masks

Example Segmentation Masks Actual 2

Predicted Masks

Example Segmentation Masks Prediction 2

Actual Masks (Combined Color)

Colorful Segmentation Masks Actual 2

Predicted Masks (Combined Color)

Colorful Segmentation Masks Prediction 2
