Knowing the theory behind an algorithm is crucial when building your own: choosing the right activation functions, knowing which layers fit your problem, how data normalization works, how batch normalization and dropout work (and can conflict with each other), etc. Hyperparameter tuning is less important, but it's still necessary to get the best result, and if you don't know what you're doing and set them to ridiculous values it can be very detrimental. Things like knowing that a smaller batch size often generalizes better, all else being equal, what learning rate to start at, and how to initialize weights to reasonable values. With hyperparameters you do a lot more playing around with your values, but it's still important for getting the most out of your model.
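To make the learning-rate and initialization points concrete, here's a minimal pure-Python sketch (the function, step counts, and tolerances are illustrative choices, not from any particular library): He initialization sets the weight standard deviation from the layer's fan-in, and a toy 1-D gradient descent shows how a reasonable learning rate converges while a "ridiculous" one diverges.

```python
import math
import random

def he_std(fan_in):
    # He initialization: std = sqrt(2 / fan_in), a common choice for ReLU layers
    return math.sqrt(2.0 / fan_in)

def init_layer(fan_in, fan_out, rng):
    # Draw a (fan_out x fan_in) weight matrix from N(0, he_std^2)
    std = he_std(fan_in)
    return [[rng.gauss(0.0, std) for _ in range(fan_in)] for _ in range(fan_out)]

def gradient_descent(lr, steps=50, w0=0.0):
    # Minimize f(w) = (w - 3)^2; the gradient is 2 * (w - 3).
    # Each step multiplies the error (w - 3) by (1 - 2 * lr),
    # so |1 - 2 * lr| < 1 converges and |1 - 2 * lr| > 1 diverges.
    w = w0
    for _ in range(steps):
        w -= lr * 2.0 * (w - 3.0)
    return w

rng = random.Random(0)
weights = init_layer(1000, 256, rng)  # toy layer sizes for illustration

sane = gradient_descent(lr=0.1)   # factor 0.8 per step: converges toward w = 3
wild = gradient_descent(lr=1.5)   # factor -2 per step: blows up
```

Running this, `sane` lands very close to the minimum at 3 while `wild` explodes, which is exactly the "detrimental" regime: the same algorithm, only the hyperparameter changed.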