This repository provides the code for the paper "Unsupervised temporal consistency improvement for video segmentation with siamese networks" by Akhmedkhan Shabanov, Daja Schichler, Constantin Pape, Sara Cuylen-Haering, and Anna Kreshuk.
Paths to the data should be provided in data_config/datasets_config.py (some ready examples are included). The config file should also specify the focus plane indexes; precomputed (or manually labeled) index information should be placed in the focused_frames/ directory.
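For illustration, a dataset entry in data_config/datasets_config.py could look roughly like the sketch below. The field names here are hypothetical; only the overall shape (data paths plus a pointer to the focus plane indexes in focused_frames/) follows the description above. Consult the ready examples in the file for the actual format.

```python
# Hypothetical example entry: the field names below are illustrative,
# not the actual keys -- see the ready examples in
# data_config/datasets_config.py for the real format.
DATASETS = {
    "my_experiment": {
        "raw_path": "/path/to/raw/zstack_videos/",          # input z-stacks
        "labels_path": "/path/to/segmentation/labels/",     # ground-truth masks
        "focus_file": "focused_frames/my_experiment.json",  # focus plane indexes
    },
}
```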
The utils/ directory contains additional helper functions for training, visualization, and logging.
As described in the paper, model training consists of two parts:
- training a model on the segmentation task alone (train_model.py),
- training the same model jointly on the segmentation and temporal consistency tasks (train_seq_model.py); a sketch of this joint objective follows the list.
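For orientation, here is a minimal sketch of what the second stage's joint objective could look like: the same (siamese, shared-weight) network is applied to a window of consecutive frames, a supervised loss is computed on the labeled frame, and an unsupervised consistency term penalizes prediction changes between neighboring frames. This is a simplified illustration, not the repository's exact implementation (the actual script supports several consistency loss types via -TIME_LOSS and separate learning rates for the two objectives).

```python
import torch
import torch.nn.functional as F

def seq_train_step(model, frames, labels, optimizer, w_time=1.0):
    """One joint training step: segmentation + temporal consistency.

    frames: list of TIME_LEN consecutive frames, each (B, C, H, W);
    labels: (B, H, W) ground-truth mask for the first frame only --
    the consistency term is unsupervised, so the remaining frames
    need no annotations.
    """
    optimizer.zero_grad()
    # Siamese setup: the same network (shared weights) is applied
    # to every frame in the temporal window.
    preds = [model(f) for f in frames]  # each (B, K, H, W) logits
    # Supervised segmentation loss on the labeled frame.
    seg_loss = F.cross_entropy(preds[0], labels)
    # Unsupervised temporal consistency: penalize changes between
    # predictions on neighboring frames (L2 on softmax maps here).
    time_loss = sum(
        F.mse_loss(p.softmax(dim=1), q.softmax(dim=1))
        for p, q in zip(preds[:-1], preds[1:])
    ) / (len(preds) - 1)
    (seg_loss + w_time * time_loss).backward()
    optimizer.step()
    return seg_loss.item(), time_loss.item()
```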
Scripts for testing:
- run_test_predictions.py saves predictions for all data specified in datasets_config.py,
- eval_model.py evaluates a trained model.
First training step:
python train_model.py -name <model name> -NUM_CHAN <num channels to use in z-stack, default 7> -cuda <cuda id> -DATA_TYPE <NUCL for nucleoli, TRITC for nuclei>
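For example, to train a nucleoli model on GPU 0 with the default 7 z-stack channels (the model name is arbitrary):

python train_model.py -name unet_nucl -NUM_CHAN 7 -cuda 0 -DATA_TYPE NUCL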
Second training step:
python train_seq_model.py -BASE_MODEL_NAME <simple model name> -TIME_LEN <temporal learning window> -lr_seg <segmentation learning rate> -lr_time <temporal consistency learning rate> -ADD_NAME <additional log comment> -TIME_LOSS <temporal consistency loss type> -DATA_TYPE <NUCL for nucleoli, TRITC for nuclei> -cuda <cuda id>
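For example (the hyperparameter values here are illustrative, and -TIME_LOSS must be set to one of the loss types supported by the script):

python train_seq_model.py -BASE_MODEL_NAME unet_nucl -TIME_LEN 3 -lr_seg 1e-4 -lr_time 1e-5 -ADD_NAME seq -TIME_LOSS <temporal consistency loss type> -DATA_TYPE NUCL -cuda 0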
Saving predictions (-TTA enables test-time augmentation):
python run_test_predictions.py -name <model name> -TTA -DATA_TYPE <NUCL for nucleoli, TRITC for nuclei> -cuda <cuda id>
Evaluating a model:
python eval_model.py -name <model name> -DATA_TYPE <NUCL for nucleoli, TRITC for nuclei>
If you have any questions about the paper or the code, feel free to contact Akhmedkhan Shabanov.