[R] MimickNet: Matching Clinical Ultrasound Post-Processing via CycleGANs (Code Release)
Not sure how many ultrasound or medical imaging folks are in here, but I thought this might be useful to this group. I’m part of an ultrasound research lab at Duke University, and we’ve recently open-sourced work on ultrasound image post-processing that lets you mimic the proprietary post-processing black boxes found on commercial ultrasound scanners. Here are the: Paper, Github, Colab notebook.
Raw ultrasound images commonly contain speckle noise, Gaussian noise, clutter, reverberation, and other undesirable forms of image degradation. While raw ultrasound images are very familiar to researchers, medical providers typically see only heavily post-processed images in the clinic. Unfortunately, commercial post-processing is generally proprietary and kept secret. This inaccessibility makes apples-to-apples comparisons of novel methods against current clinical practice difficult, and it also complicates translating novel methods into the clinic. Ideally, the post-processing would not be secret, and everyone could always have lovely images to use as a baseline.

We find that it is possible to mimic the post-processing found on commercial scanners with CycleGANs, using only images acquired during regular scanner use. CycleGANs do not require any image registration or image pairing to train, which is very convenient. We are releasing the fully trained models so that any researcher has access to clinical-grade-like post-processing. We refer to our trained models as MimickNet.
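To give a feel for why no image pairing is needed: CycleGAN training adds a cycle-consistency term that only ever compares an image with itself after a round trip through both generators, so the raw and post-processed datasets never have to be matched up. Below is a minimal NumPy sketch of that loss. The two "generators" here are toy invertible transforms standing in for the real convolutional networks, purely for illustration; none of this is MimickNet's actual code.

```python
import numpy as np

# Toy stand-ins for the two CycleGAN generators (hypothetical; the real
# generators are convolutional networks trained adversarially):
#   G maps "raw" ultrasound images toward the clinical post-processed style,
#   F maps post-processed images back toward the raw style.
def G(x):
    # raw -> clinical (toy: a contrast-compressing transform)
    return np.tanh(2.0 * x)

def F(y):
    # clinical -> raw (toy: approximate inverse of G)
    return np.arctanh(np.clip(y, -0.999, 0.999)) / 2.0

def cycle_consistency_loss(x_raw, y_clin):
    """L1 cycle loss: an image translated to the other domain and back
    should reconstruct the original. Note that x_raw and y_clin are
    never compared to each other, so the two datasets can be unpaired."""
    forward = np.mean(np.abs(F(G(x_raw)) - x_raw))     # raw -> clin -> raw
    backward = np.mean(np.abs(G(F(y_clin)) - y_clin))  # clin -> raw -> clin
    return forward + backward

rng = np.random.default_rng(0)
x = rng.uniform(-0.5, 0.5, size=(64, 64))  # unpaired "raw" image
y = rng.uniform(-0.9, 0.9, size=(64, 64))  # unpaired "clinical" image
loss = cycle_consistency_loss(x, y)
```

Because the toy transforms here are exact inverses of each other on these value ranges, the loss comes out near zero; in real training this term is minimized jointly with the adversarial losses on both domains.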
TLDR: Clinical ultrasound post-processing is proprietary and kept secret. However, using data collected through normal ultrasound scanner use, it is possible to mimic the post-processing algorithms found on some of the best ultrasound scanners. We are making these models available to all researchers, so we all have access to clinical-grade post-processing.