Welcome to the blog page of Manjeet Dahiya!
Manjeet Dahiya is the Head of Machine Learning and Data Sciences at Airtel Digital (Wynk and Xstream).
Earlier, he was Principal Data Scientist with Delhivery, and before that he
worked with Agilent Technologies and United Online (Juno Online).
Manjeet obtained his PhD in computer science from IIT Delhi, and BTech in electrical engineering from IIT Kanpur.
Manjeet writes articles/notes on
data science, machine learning and AI, probability and
statistics, programming languages, and computer science in general.
Following are a few recent articles. For the complete list, check out all
posts.
The goal of extensions of gradient descent
such as Momentum, RMSProp, and Adam
is to speed up the gradient descent algorithm and address the challenges
it faces. This post presents those challenges
and the optimizations that handle them.
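As a rough illustration of how these extensions combine, here is a minimal sketch of an Adam-style update on a toy one-dimensional objective. The function, hyperparameter values, and variable names (`lr`, `beta1`, `beta2`, `eps`) are illustrative conventions, not code from the post itself.

```python
import math

def adam_step(w, m, v, t, grad, lr=0.1, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update: momentum plus RMSProp-style scaling, with bias correction."""
    m = beta1 * m + (1 - beta1) * grad        # running mean of gradients (Momentum)
    v = beta2 * v + (1 - beta2) * grad**2     # running mean of squared gradients (RMSProp)
    m_hat = m / (1 - beta1**t)                # bias correction for early steps
    v_hat = v / (1 - beta2**t)
    w = w - lr * m_hat / (math.sqrt(v_hat) + eps)
    return w, m, v

# Minimize f(w) = (w - 3)^2, whose gradient is 2 * (w - 3).
w, m, v = 0.0, 0.0, 0.0
for t in range(1, 201):
    grad = 2 * (w - 3)
    w, m, v = adam_step(w, m, v, t, grad)
```

After a couple of hundred steps, `w` settles near the minimizer at 3; the adaptive scaling lets Adam take confident steps early on where plain gradient descent with the same learning rate would move more slowly.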
Given a setup with $X$ and $Y$ as the input and output variables respectively,
the modeling to predict $Y$ from $X$ can be done in multiple ways.
Following are a few common ones: