
[D] Dislike/Disinterest of Machine Learning: rant

This is just my personal opinion/rant about what I inherently dislike about ML and neural networks, after working on a research project involving a CNN for a little under a year and feeling very frustrated with where it is going. I'm not claiming to be an expert or professional in the field.

#1: Machine learning is frustrating to code/improve on:

I find coding a neural network (at least at my level, where you are mostly applying existing layers and activation functions) very unsatisfying to progress through, due to the nature of the work. When you are solving a normal coding problem, you can get creative in dealing with the issues that come up in your algorithm as you work toward the goal (in my case, high classification accuracy). The solution offered by machine learning, though, tends to be a black box that simply works at a certain percentage for whatever rhyme or reason. The time I put into improving my classification accuracy is not proportional to the rate at which it improves, and feels driven more by chance than anything.

For example, I spent weeks changing the shape and setup of the input data, the number of nodes in each layer, and so on, with little to no improvement. Then one day I switched the activation functions on my dense layers from ReLU to SELU, and by literally changing four letters, my accuracy jumped past everything else I had tried over the previous weeks. This could just be my lack of experience in the field; maybe a talented ML engineer could tell from the loss values alone that a SELU activation would lead to improvement. I dislike that when I make a change to the net, I don't really know how it will translate into the result I get. I can predict that an improvement will happen, but I can't say for certain, or know to what degree. This gets more frustrating when training needs a lot of epochs and takes a long time, which stops you from trying the long-shot ideas that might work but, if they don't, just waste a lot of time with nothing gained.
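For concreteness, here is roughly how small that change is. This is a minimal sketch assuming a Keras/TensorFlow setup (the framework, layer sizes, and shapes below are illustrative placeholders, not my actual architecture):

```python
# Minimal sketch in Keras/TensorFlow (framework assumed; the architecture
# is a placeholder, not the real net from the project).
import tensorflow as tf
from tensorflow.keras import layers, models

def build_classifier(input_shape, num_classes, activation="relu"):
    return models.Sequential([
        layers.Conv2D(32, 3, activation=activation, input_shape=input_shape),
        layers.MaxPooling2D(),
        layers.Flatten(),
        # The entire "breakthrough": "relu" -> "selu" on the dense layers.
        # (SELU formally pairs with lecun_normal init and AlphaDropout.)
        layers.Dense(128, activation=activation),
        layers.Dense(num_classes, activation="softmax"),
    ])

model = build_classifier((28, 28, 1), 10, activation="selu")
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

Weeks of restructuring input data versus editing one string argument: that disproportion is exactly what I mean.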

#2: Reliance on the dataset:

This could just be a complaint about data science in general; I don't know enough to say where the distinction between the two fields lies. I dislike how the prerequisite to solving a problem with machine learning is a solid dataset, which may be biased in ways you can't predict. An analogy that comes to mind is studying for a test. The non-ML way is to build an understanding of the material from the ground up that is solid enough to solve any problem thrown at you. The ML way is to gather the old tests the professor gave in previous years and study from those. Sure, you might do better on the test that way, but if a curveball question shows up, the non-ML student is more likely to solve it, and even if they don't, they can see where their logic was flawed. In the ML case, the flaw is simply that the previous tests didn't contain a similar question. Following from this: if your dataset has an inherent bias or is too small, your solution will share that bias and won't be trustworthy in a scenario it hasn't seen before but still needs to handle. I know there are methods that help with smaller datasets and prevent overfitting, such as dropout layers and transfer learning (a sketch of both follows), but I still have an underlying issue with needing to rely on a large dataset to solve your problem, especially in industries where that data isn't public knowledge.
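To make those two mitigations concrete, here is a minimal sketch, again assuming Keras/TensorFlow; the pretrained base, input size, and class count are illustrative choices, not anything from my project:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Transfer learning: reuse features learned on a large dataset (ImageNet),
# so the small dataset only has to train the new classification head.
base = tf.keras.applications.MobileNetV2(
    input_shape=(96, 96, 3), include_top=False, weights="imagenet")
base.trainable = False  # freeze the pretrained weights

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.5),  # dropout: randomly zero units during training to curb overfitting
    layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

These tricks reduce how much data you need, but they don't remove the dependence on data, which is my underlying complaint.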

#3 Disconnect between people suggesting ML solutions to problems and the actual ML engineers who have to implement them:

#4 Constant research needed to keep up with the newest techniques:

Now, some people might like this aspect of ML, but I dislike how you constantly need to be learning the newest trends in order to stay relevant. It seems like the things I learn this year will become almost completely irrelevant next year, e.g., RNNs were thought to be very good for text processing until people found that CNNs were better suited to it. This happens in every industry, obviously, but I feel it is especially true in ML, where you aren't just designing a system to solve a problem, you are also designing a system to find the correct weights for that system. So there is a higher chance that something you learned about and specialized in will one day become completely irrelevant, and you'll have to learn some unrelated new idea that will itself only last so long.

#5 Paywall in ML:

To even consider an ML solution to a problem you need two things: a good dataset and a lot of computing power. Having more computing power alone lets you train and test your implementations much faster than someone without those resources. It ultimately comes down to this: the solution is inherently tied to putting down a large amount of money beforehand, before seeing any real proof of concept (this could be wrong; I'm not familiar with whether there is a proof-of-concept stage somewhere in the ML life cycle). I recognize that in every industry money buys better and quicker development, but my problem arises when you consider someone learning ML from the ground up. If you have more money to spend on a better rig, your shitty MNIST classifier will train significantly faster and let you learn significantly faster than someone with less money, which I think is an inherent problem with the method. Paying for datasets, and how companies take your data to use in datasets for ML, is a completely different rant, so I'm just going to end it here.
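For reference, the kind of beginner MNIST classifier I mean is something like this minimal Keras sketch (framework assumed); the code is identical for everyone, and only the hardware decides how fast each run, and therefore each lesson, comes back:

```python
import tensorflow as tf

# The beginner MNIST classifier in question (minimal illustrative sketch).
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixels to [0, 1]

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# Same code on any machine; a faster rig just finishes each epoch sooner.
model.fit(x_train, y_train, epochs=5, validation_data=(x_test, y_test))
```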

Let me know whether you agree or disagree with my points, and whether the dislike I feel about working with ML will go away with a better understanding of the data science behind it.

submitted by /u/random__0