"No one is harder on a talented person than the person themselves" - Linda Wilkinson ; "Trust your guts and don't follow the herd" ; "Validate direction not destination" ;

February 23, 2020

Interesting Data Science Questions and Answers from Data Science Stack Exchange

Question #1 When is a Model Underfitted?
Answer

Question #2 What makes columnar databases suitable for data science?
Answer

Question #3 Is it necessary to standardize your data before clustering?
Answer

Question #4 The difference between Activation Functions in Neural Networks in general
Answer

Question #5 Why Is Overfitting Bad in Machine Learning?
Answer

Question #6 Why do convolutional neural networks work?
Answer
ConvNets work because they exploit feature locality: each filter looks only at a small neighbourhood of its input. Stacking layers applies this at increasing granularities, so the network can model hierarchically higher-level features. Pooling units make them approximately translation invariant. They are not rotation invariant per se, but they often converge to filters that are rotated versions of one another, which lets them handle rotated inputs. I know of no other neural architecture that profits from feature locality in the same sense that ConvNets do.
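To make the locality and hierarchy point concrete, here is a minimal Keras sketch; the layer sizes and input shape are illustrative choices, not from the original answer:

```python
# A minimal sketch (assuming TensorFlow/Keras) of how stacked
# convolutions exploit feature locality: each Conv2D unit sees only a
# small patch, and pooling makes the response tolerant to small
# translations of the input.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28, 1)),
    # 3x3 filters: each unit looks at a local neighbourhood (feature locality)
    tf.keras.layers.Conv2D(16, 3, activation="relu", padding="same"),
    # max-pooling: small shifts of a feature leave the pooled output unchanged
    tf.keras.layers.MaxPooling2D(2),
    # the second conv sees combinations of first-level features over a
    # larger receptive field, i.e. hierarchically higher-level features
    tf.keras.layers.Conv2D(32, 3, activation="relu", padding="same"),
    tf.keras.layers.MaxPooling2D(2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.summary()  # spatial size shrinks 28 -> 14 -> 7 while channels grow
```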

Question #7 Why are Machine Learning models called black boxes?
Answer

Question #8 How do you visualize neural network architectures?
Answer

Question #9 Is there any domain where Bayesian Networks outperform neural networks?
Answer
One of the areas where Bayesian approaches are often used is where interpretability of the prediction system is needed. You don't want to hand doctors a neural net and just say it's 95% accurate; they need to see the reasoning behind an individual prediction.
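As a toy illustration of that interpretability, here is a sketch assuming the pgmpy library; the Smoking/Cancer structure and all probabilities below are made-up numbers, not real medical data:

```python
# A toy sketch (assuming pgmpy; structure and probabilities are
# illustrative, not real) of why Bayesian networks are easier to
# explain: every parameter is a readable conditional probability
# table rather than an opaque weight matrix.
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

model = BayesianNetwork([("Smoking", "Cancer")])
cpd_smoking = TabularCPD("Smoking", 2, [[0.7], [0.3]])  # P(Smoking)
cpd_cancer = TabularCPD("Cancer", 2,
                        [[0.95, 0.80],   # P(Cancer=0 | Smoking=0/1)
                         [0.05, 0.20]],  # P(Cancer=1 | Smoking=0/1)
                        evidence=["Smoking"], evidence_card=[2])
model.add_cpds(cpd_smoking, cpd_cancer)
assert model.check_model()

# A doctor can inspect the exact probabilities behind a prediction:
infer = VariableElimination(model)
print(infer.query(["Cancer"], evidence={"Smoking": 1}))
```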

Question #10 What are deconvolutional layers?
Answer
Yes, a deconvolution layer also performs a convolution! That is why transposed convolution fits so much better as a name, and why the term deconvolution is actually misleading.
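A short Keras sketch of this point, with illustrative shapes: the "deconvolution" is still an ordinary learned convolution, it just maps a small feature map to a larger one.

```python
# Assuming TensorFlow/Keras; shapes chosen for illustration only.
import tensorflow as tf

x = tf.random.normal((1, 4, 4, 8))  # one 4x4 feature map with 8 channels

down = tf.keras.layers.Conv2D(16, 3, strides=2, padding="same")
up = tf.keras.layers.Conv2DTranspose(8, 3, strides=2, padding="same")

y = down(x)  # (1, 2, 2, 16): convolution downsamples
z = up(y)    # (1, 4, 4, 8):  transposed convolution upsamples back
print(y.shape, z.shape)
```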

Question #11 Does batch_size in Keras have any effect on result quality?
Answer
Batch size impacts learning significantly. When you put a batch through your network, you average the gradients over its samples. The idea is that if your batch size is big enough, this average is a stable enough estimate of what the gradient of the full dataset would be. By taking samples from your dataset, you estimate the gradient while reducing the computational cost significantly. The smaller the batch, the less accurate your estimate will be; however, in some cases these noisy gradients can actually help escape local minima.
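A small numpy sketch of the averaging claim, using made-up linear-regression data: minibatch gradients are sample averages, so larger batches give a lower-variance estimate of the full-dataset gradient.

```python
# Toy data, invented for illustration; not from the original answer.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 5))
y = X @ rng.normal(size=5) + 0.1 * rng.normal(size=10_000)
w = np.zeros(5)  # current weights

def grad(Xb, yb, w):
    # gradient of mean squared error, averaged over the batch
    return 2 * Xb.T @ (Xb @ w - yb) / len(yb)

full = grad(X, y, w)  # "true" full-dataset gradient
for bs in (8, 64, 512):
    batches = [rng.choice(len(X), bs, replace=False) for _ in range(200)]
    errs = [np.linalg.norm(grad(X[i], y[i], w) - full) for i in batches]
    print(f"batch_size={bs:4d}  mean gradient error={np.mean(errs):.3f}")
```

Running this shows the error of the minibatch estimate shrinking as the batch size grows, which is exactly the trade-off the answer describes.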

Question #12 How does Keras calculate accuracy?
Answer

Question #13 What is the significance of model merging in Keras?
Answer

Deep learning basics

Happy Learning!!!
