Tricks of semantic segmentation

Spring 2021

Yuanhao Wu, Senior Algorithm Engineer

Yuanhao Wu is a Z by HP Data Science Global Ambassador; his content is sponsored and he has been provided with HP products.

 

Last month I spent some time on the “HuBMAP - Hacking the Kidney” competition on Kaggle. The goal of the competition was to build a successful and robust detector for glomeruli FTUs (functional tissue units), which is a classic binary semantic segmentation problem. This was my second semantic segmentation competition; our team finished in 43rd place and won a silver medal.

 

Although a mislabeled sample completely wrecked the public leaderboard and confused many competitors, it was still a good competition for illustrating semantic segmentation tricks.

 

I would like to begin with the Lovász hinge loss. It comes from the paper “The Lovász Hinge: A Novel Convex Surrogate for Submodular Losses” and has proved very powerful in many competitions. You can find a good implementation in the well-known segmentation_models.pytorch library. Although the loss is powerful, it can be difficult to train with in some cases. A workaround is to combine the Lovász hinge loss with another loss, such as Dice loss or cross-entropy loss. Some competitors also use a two-stage approach: first train the model with Dice loss, then switch to the Lovász hinge loss.
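
Below is a minimal sketch of such a combination using segmentation_models.pytorch (the 0.5/0.5 weighting is purely illustrative, and constructor arguments may vary slightly between library versions):

```python
import torch
import segmentation_models_pytorch as smp

# Both losses accept raw logits for binary segmentation.
dice = smp.losses.DiceLoss(mode="binary")
lovasz = smp.losses.LovaszLoss(mode="binary")

def combined_loss(logits, targets, lovasz_weight=0.5):
    # Weighted sum of Dice and Lovasz hinge; the weight is a tunable hyperparameter.
    return (1 - lovasz_weight) * dice(logits, targets) + lovasz_weight * lovasz(logits, targets)

# Toy usage with random tensors.
logits = torch.randn(4, 1, 256, 256, requires_grad=True)
targets = (torch.rand(4, 1, 256, 256) > 0.5).float()
combined_loss(logits, targets).backward()
```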

 

As for the models, almost all competitors used Unet and its variants. In his solution post, the winner said he combined hypercolumns and an attention mechanism with a classical Unet. I also tried these two tricks; however, they did not improve my results significantly. The hypercolumn idea is very straightforward: it concatenates resized feature maps from different layers to increase model capacity.
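
A rough sketch of the idea (shapes and naming are my own, not the winner's actual code):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Hypercolumn(nn.Module):
    """Upsample feature maps from several decoder stages and concatenate them along channels."""

    def forward(self, features):
        # `features`: list of tensors (N, C_i, H_i, W_i); the first one is the highest-resolution map.
        target_size = features[0].shape[2:]
        upsampled = [
            F.interpolate(f, size=target_size, mode="bilinear", align_corners=False)
            for f in features
        ]
        return torch.cat(upsampled, dim=1)  # (N, sum(C_i), H_0, W_0)

# Toy usage: three feature maps at different resolutions.
feats = [torch.randn(2, 16, 64, 64), torch.randn(2, 32, 32, 32), torch.randn(2, 64, 16, 16)]
hypercolumn = Hypercolumn()(feats)  # shape (2, 112, 64, 64)
```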

Attention Unet comes from the paper “Attention U-Net: Learning Where to Look for the Pancreas”, in which the authors add attention gates (AGs) to the decoder part of the Unet. The attention mechanism commonly used in Kaggle competitions is a little different from the one in the original paper; you can refer to either the winner’s post or the segmentation_models.pytorch repo.
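
A deliberately simplified attention gate in the spirit of the paper (layer names are my own, and the original design also includes batch normalization and a resampling step):

```python
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    """Re-weight an encoder skip connection using a coarser decoder (gating) signal."""

    def __init__(self, skip_channels, gate_channels, inter_channels):
        super().__init__()
        self.theta = nn.Conv2d(skip_channels, inter_channels, kernel_size=1)
        self.phi = nn.Conv2d(gate_channels, inter_channels, kernel_size=1)
        self.psi = nn.Conv2d(inter_channels, 1, kernel_size=1)

    def forward(self, skip, gate):
        # `gate` is assumed to be already upsampled to the spatial size of `skip`.
        attention = torch.sigmoid(self.psi(torch.relu(self.theta(skip) + self.phi(gate))))
        return skip * attention  # gated skip connection, ready to be concatenated in the decoder

# Toy usage.
gate_module = AttentionGate(skip_channels=64, gate_channels=128, inter_channels=32)
out = gate_module(torch.randn(2, 64, 32, 32), torch.randn(2, 128, 32, 32))  # (2, 64, 32, 32)
```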

By the way, there are several common attention structures, such as SCSE and CBAM. The famous competitor Bestfitting has said he prefers CBAM. You can find more detailed explanations of both in their respective papers and write-ups.
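
As one example, an SCSE block can be sketched roughly like this (the channel and spatial gates run in parallel and their outputs are summed; the reduction factor of 16 is just a common default):

```python
import torch
import torch.nn as nn

class SCSE(nn.Module):
    """Concurrent spatial and channel squeeze-and-excitation (rough sketch)."""

    def __init__(self, channels, reduction=16):
        super().__init__()
        # Channel gate: squeeze spatially, then excite each channel.
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Spatial gate: a 1x1 convolution producing a per-pixel weight.
        self.spatial_gate = nn.Sequential(nn.Conv2d(channels, 1, kernel_size=1), nn.Sigmoid())

    def forward(self, x):
        return x * self.channel_gate(x) + x * self.spatial_gate(x)

# Toy usage.
y = SCSE(channels=64)(torch.randn(2, 64, 32, 32))  # same shape as the input
```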

 

I also tried deep supervision in this competition. Basically, deep supervision computes the loss at several scales of the decoder. It is easy to implement, but it did not give me much of a performance gain.
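
A minimal sketch of how such a multi-scale loss could be computed (function and argument names are my own, not from any particular library):

```python
import torch
import torch.nn.functional as F

def deep_supervision_loss(aux_logits, target, criterion, weights=None):
    """Average a base criterion over auxiliary outputs produced at several decoder scales.

    `aux_logits` is a list of logit tensors with different spatial sizes; each one is
    upsampled to the target resolution before the loss is computed.
    """
    weights = weights or [1.0] * len(aux_logits)
    total = 0.0
    for logits, w in zip(aux_logits, weights):
        resized = F.interpolate(logits, size=target.shape[-2:], mode="bilinear", align_corners=False)
        total = total + w * criterion(resized, target)
    return total / sum(weights)

# Toy usage with BCE on two auxiliary heads.
target = (torch.rand(2, 1, 128, 128) > 0.5).float()
aux = [torch.randn(2, 1, 128, 128), torch.randn(2, 1, 64, 64)]
loss = deep_supervision_loss(aux, target, F.binary_cross_entropy_with_logits)
```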

 

In this competition, the original input images are very large. Competitors need to split them into small patches first and then train and run inference on those patches. To avoid information loss, the sliding step is smaller than the patch size, so adjacent patches overlap. However, we found that the model is still weaker near patch boundaries. Thus, we used a weighted scheme to reconstruct the output: when a pixel is predicted several times because it falls in an overlap region, we give a smaller weight to predictions from patches where it lies near the border and a larger weight to those where it lies near the center. This trick gave us a nice boost.
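
A sketch of this kind of weighted stitching (the quadratic ramp toward the border is an assumed weight profile, not necessarily the exact one we used):

```python
import numpy as np

def make_weight_map(patch_size, power=2):
    """Weights that are largest at the patch centre and decay toward the border."""
    ramp = np.minimum(np.arange(patch_size) + 1, np.arange(patch_size)[::-1] + 1)
    weight_1d = (ramp / ramp.max()) ** power
    return np.outer(weight_1d, weight_1d)  # (patch_size, patch_size)

def stitch(patch_preds, coords, image_shape, patch_size):
    """Blend overlapping patch predictions into one probability map via a weighted average."""
    prob = np.zeros(image_shape, dtype=np.float32)
    weight = np.zeros(image_shape, dtype=np.float32)
    w = make_weight_map(patch_size)
    for pred, (y, x) in zip(patch_preds, coords):
        prob[y:y + patch_size, x:x + patch_size] += pred * w
        weight[y:y + patch_size, x:x + patch_size] += w
    return prob / np.clip(weight, 1e-6, None)  # pixels covered by several patches are averaged
```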

Training semantic segmentation models requires a lot of compute. I mainly used Unet with EfficientNet as the encoder. With the 24 GB RTX 6000 GPU in my HP Z4 workstation, I could train models on 640 x 640 images with a batch size of 16. My training set consisted of about 18,000 images, and an epoch took only 17 or 18 minutes. If you lack computing power, you can also try sampling tricks: since there are far more negative patches than positive ones, you can randomly sample a subset of the negatives instead of using all of them. This can both speed up convergence and save a lot of training time.
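
One possible way to implement that kind of negative subsampling (the helper and the ratio are assumptions for illustration, not what we actually used):

```python
import random
from torch.utils.data import Subset

def subsample_negatives(dataset, positive_idx, negative_idx, neg_ratio=1.0, seed=42):
    """Keep every positive patch and only a random subset of the negative ones.

    `positive_idx` / `negative_idx` are lists of dataset indices; `neg_ratio` is the
    number of negatives kept per positive.
    """
    rng = random.Random(seed)
    n_keep = min(len(negative_idx), int(len(positive_idx) * neg_ratio))
    kept_negatives = rng.sample(negative_idx, n_keep)
    return Subset(dataset, positive_idx + kept_negatives)
```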

 

Last but not least, always trust your local cross-validation. Some competitors tried to overfit the mislabeled public test sample and got poor results in the end. We only tried to improve our CV score, and we ended up jumping more than 100 places. Unfortunately, we missed the submission with the highest private score; otherwise, we would have won a gold medal :P
