I am trying to perform material segmentation (essentially semantic segmentation with respect to materials) on street-view imagery. My dataset only has ground truth for select regions, so not all pixels have a label, and I calculate loss and metrics only within these ground-truth regions. I use Semantic FPN (with a ResNet-50 backbone pre-trained on ImageNet), a learning rate of 0.001, momentum of 0.8, and the learning rate is divided by 4 if there is no validation loss improvement after three epochs. My loss function is a per-pixel multiclass cross-entropy loss.
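In pseudocode, the learning-rate schedule I described amounts to the following (a minimal framework-agnostic sketch with the numbers from above; the class name is made up, and this is not my actual training code):

```python
class PlateauLR:
    """Divide the learning rate by 4 when validation loss has not
    improved for `patience` consecutive epochs."""

    def __init__(self, lr=0.001, factor=0.25, patience=3):
        self.lr, self.factor, self.patience = lr, factor, patience
        self.best = float("inf")
        self.bad_epochs = 0

    def step(self, val_loss):
        if val_loss < self.best:
            # improvement: remember it and reset the counter
            self.best = val_loss
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
            if self.bad_epochs >= self.patience:
                self.lr *= self.factor
                self.bad_epochs = 0
        return self.lr
```

Most frameworks ship an equivalent (e.g. a reduce-on-plateau scheduler), so in practice I use the built-in rather than this sketch.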
My dataset is extremely limited. Not only are many pixels unlabeled, I also have only 700 images and a severe class imbalance. I tried tackling this imbalance through class weighting in the loss (based on the number of ground-truth pixels for each class, i.e. their area), but it barely helps. I also have, for every image, a depth map, which I can supply as a fourth channel to the input layer.
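For reference, the masking and area-based weighting I described look roughly like this (a minimal numpy sketch, illustrative only; the function name and the exact inverse-frequency formula are my simplification, not my actual training code):

```python
import numpy as np

def weighted_masked_ce(logits, labels, ignore_index=-1):
    """Per-pixel cross-entropy, computed only on labeled pixels,
    with inverse-frequency class weights derived from labeled area.

    logits: (C, H, W) raw scores; labels: (H, W) ints, -1 = unlabeled.
    """
    C = logits.shape[0]
    mask = labels != ignore_index
    # softmax over the class axis
    z = logits - logits.max(axis=0, keepdims=True)
    p = np.exp(z) / np.exp(z).sum(axis=0, keepdims=True)
    # inverse-frequency weights from the labeled area of each class
    counts = np.bincount(labels[mask], minlength=C).astype(float)
    weights = counts.sum() / np.maximum(counts, 1) / C
    y = labels[mask]
    # probability assigned to the true class at each labeled pixel
    px = p[:, mask][y, np.arange(y.size)]
    return float(np.mean(weights[y] * -np.log(px + 1e-12)))
```

In the real pipeline this is of course the framework's built-in weighted cross-entropy with an ignore index, not a hand-rolled numpy loop.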
Performance is pretty crappy. What’s more, there is very little difference between the results of my four experiments. Why is this? I would expect the addition of depth information (which encodes surface normals and perhaps texture; pretty discriminative information) to make a noticeable difference. Besides the overall metrics being rather low, the predictions are very messy, and the network rarely, if ever, predicts “small” classes (in terms of area), e.g. plastic or gravel. This is to be expected with such a small amount of data, but I was wondering if there are any “performance hacks” that can boost my network, or if I am missing anything obvious? Or is data likely the only bottleneck here? Any suggestions are greatly appreciated!
PS. I also tried a simple ResNet-50 FCN (I simply upsample ResNet’s output until I reach the input resolution; there aren’t even skip connections), and the results are worse, but at least they are smooth. Why are these smoother?
As the title says: Have you ever worked on a project for months straight, almost abandoned it because e.g. the models would not converge, and then finally gotten “lucky” and achieved decent results in the end?
The above just happened to me: I have been working full-time on something for the past 4 months without getting decent results, and I was running out of options and nearly concluded the whole project had failed. Then, thanks to some ‘luck’, the model finally turned out quite well, with results I’m happy with.
How often does this happen in the field and what are your experiences with such projects?
Is there any advice you would give to others who are stuck on such a problem?
I need some advice on choosing a neural network type which is suitable for the application described below.
I have a data set with 39600 samples/entries, each sample has an image and a corresponding vector of variable length.
I want to create a neural network capable of predicting the vector associated with an image based solely on the image.
So, I need a neural network which accepts a fixed length input (the image) and outputs a vector of variable length.
How can this be achieved?
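One workaround I have considered is to pad every target vector to a fixed maximum length (with a mask or a predicted length to recover the true size), so the network itself can have a fixed-size output. A minimal numpy sketch of that idea (names are made up, just for illustration):

```python
import numpy as np

def pad_targets(vectors, max_len, pad_value=0.0):
    """Pad variable-length target vectors to a fixed max_len and return
    a boolean mask so padded positions can be excluded from the loss."""
    batch = np.full((len(vectors), max_len), pad_value, dtype=float)
    mask = np.zeros((len(vectors), max_len), dtype=bool)
    for i, v in enumerate(vectors):
        batch[i, : len(v)] = v
        mask[i, : len(v)] = True
    return batch, mask
```

But I don’t know whether this is the right framing, or whether a genuinely variable-length output (e.g. a recurrent decoder with a stop token) would be more appropriate here.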
I have been working on the BioASQ challenge, Task A, which is the large-scale semantic indexing of PubMed abstracts. It is supposed to be my Master’s thesis, but I have hit a roadblock.
The current state-of-the-art result, if we concern ourselves with just the micro-F score, is 68.8%, while I can’t seem to get past the 60% mark. I am currently using pre-trained biomedical FastText word vectors with a bidirectional GRU, the output of which branches out into two parts. The first part computes a document vector using an attention mechanism, while the second part applies a CNN and then k-max pooling to get yet another document representation. Both vectors are merged along with some additional hand-crafted features, which are then finally fed to the output layer of size 28,472 (the total number of labels) with sigmoid activation and binary cross-entropy loss. Upon training this architecture on 3 million abstracts, I am getting a micro-F score of 58.2%.
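For clarity, the k-max pooling step I mentioned just keeps the k largest activations per feature map while preserving their temporal order. A minimal numpy sketch of that operation (hypothetical helper, not my actual Keras code):

```python
import numpy as np

def k_max_pooling(x, k):
    """Keep the k largest values along the time axis, in original order.
    x: (timesteps, features) array, e.g. CNN output over token embeddings."""
    # indices of the top-k values per feature column (unordered in time)
    idx = np.argsort(x, axis=0)[-k:, :]
    # restore temporal order before gathering
    idx = np.sort(idx, axis=0)
    return np.take_along_axis(x, idx, axis=0)
```

In Keras I implement this as a custom layer wrapping the equivalent tensor ops.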
I have tried a number of other methods and architectures, but none are working. It is extremely frustrating, since I have made absolutely no progress for the entirety of this month and I am growing anxious with every passing day as my deliverable deadline keeps coming closer. It would be of immense help if anyone could point me in the right direction on how to proceed further. What to read, what to change, etc. I did read about label-wise attention networks but cannot understand how to implement them in Keras. A small hint or some pseudocode would be of great help.
I looked up whether any posts have mentioned the Simons Institute YouTube channel, and only found 3 results. The seminars seem to deliver very valuable knowledge in a more concise manner than reading a whole paper. The presenters are very talented, come from prestigious institutions, and speak very well.
But I wonder why I haven’t seen the discussion on it here.
Finally, after almost two years of development, we are excited to release our toolbox for deep probabilistic inference!
Brancher is designed to make the integration between Bayesian statistics and deep learning easy and intuitive. We have prepared tutorials and examples in Google Colab: https://brancher.org/#examples | https://brancher.org/#tutorials
We are curious about your feedback! Either here or on Twitter: @pybrancher
This is a simple project I started a few months ago to organize machine learning projects. I’ve recently added PyTorch support to it and updated the documentation.
It helps you organize experiment results, checkpoints and summaries, and takes care of console outputs.
I’m working on cleaning up the API a little more.