There are often times when we have large amounts of data with a large number of parameters/features involved. In such cases it becomes very difficult for a classical machine learning algorithm to capture the underlying data distribution, and tasks like classification and regression become hard to perform well. But there are some models that improve their accuracy with more data and more parameters (loosely speaking), and yes, Deep Neural Networks are one such class. As the name implies, Deep Neural Networks consist of many more layers and parameters than other machine learning algorithms, which makes them more effective on certain tasks, with certain trade-offs. Their effectiveness is, however, both task and architecture dependent.
Automation is the new trend! There are so many libraries and frameworks built around particular tasks that one hardly needs to worry about the underlying structure. While this is beneficial for many tasks where one cannot afford to write every single line of code, it isn't always ideal in deep learning. When entering the field of deep learning, one can experiment and play around with different models using high-level frameworks like TensorFlow, PyTorch and Keras, to name a few. Although these frameworks are highly efficient and effective, they tend to take us away from the underlying mechanics. To get a deeper understanding of how the neural network training cycle works, a beginner can always code a neural network without using any of these frameworks!! Yes, you read that right: it can be done using only numpy.
Training a neural network using only numpy is obviously not the most efficient approach, and it is certainly not how you would deploy a deep learning model in production. BUT by implementing your first model this way, you become familiar with every minute step that has to be taken care of when dealing with deep learning models!
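To give a flavour of what this looks like, here is a minimal sketch of such a numpy-only training loop. This is a hypothetical toy example (a tiny 2-4-1 network learning XOR with sigmoid activations and mean squared error), not one of the models trained below, but it shows every step of the cycle: forward pass, loss, hand-written backpropagation, and gradient-descent updates.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: XOR (assumption for illustration, not the author's data)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialised weights and biases for a 2-4-1 architecture
W1 = rng.normal(0, 1, (2, 4))
b1 = np.zeros((1, 4))
W2 = rng.normal(0, 1, (4, 1))
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for epoch in range(10000):
    # Forward pass
    z1 = X @ W1 + b1
    a1 = sigmoid(z1)
    z2 = a1 @ W2 + b2
    a2 = sigmoid(z2)               # predictions

    # Mean squared error loss
    loss = np.mean((a2 - y) ** 2)

    # Backward pass: the chain rule written out by hand
    d_a2 = 2 * (a2 - y) / len(X)
    d_z2 = d_a2 * a2 * (1 - a2)    # derivative of sigmoid
    d_W2 = a1.T @ d_z2
    d_b2 = d_z2.sum(axis=0, keepdims=True)
    d_a1 = d_z2 @ W2.T
    d_z1 = d_a1 * a1 * (1 - a1)
    d_W1 = X.T @ d_z1
    d_b1 = d_z1.sum(axis=0, keepdims=True)

    # Gradient-descent update
    W1 -= lr * d_W1; b1 -= lr * d_b1
    W2 -= lr * d_W2; b2 -= lr * d_b2

print(np.round(a2, 2))  # predictions should approach [0, 1, 1, 0]
```

Every line that a framework would hide behind `model.fit()` is visible here, which is exactly the point of the exercise.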
I have trained two models without using any frameworks: