The importance of debugging

Where to start

It has been a while since my previous post, as I have been busy with a host of other things, but I wanted to wrap up that post's themes: the importance of debugging, reading the manual, and incomplete guides. This post focuses on debugging, partly as a reminder to myself and perhaps as a help to others, and specifically on how to debug .NET Core apps deployed to Azure.

Car being fixed

Credit for image goes to Florian Olivo

In my previous post I mentioned all the issues I had setting up Managed Service Identities with Azure databases. Managed identities remove the need for user secrets or username/password combinations inside configuration files, with authentication handled through Azure Active Directory instead. As I mentioned in that post, this came with its own set of challenges, from permissions to user duplication and good old-fashioned lack of knowledge.
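As a sketch of what this buys you, a managed-identity connection string carries no credentials at all. The server and database names below are made up, and the `Authentication` keyword assumes the Microsoft.Data.SqlClient driver (version 3.0 or later); the .NET configuration provider tolerates the comment:

```json
{
  "ConnectionStrings": {
    // No user id, no password - the app service's identity is used instead.
    "Database": "Server=tcp:my-server.database.windows.net,1433;Database=my-db;Authentication=Active Directory Managed Identity;"
  }
}
```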

With all of that out of the way and the application deployed to Azure, I faced an entirely new issue: how do I debug an Azure web app? Debugging in general is a great way to figure out where things go wrong, and it feels very much like being a detective following clues. An error message here, a missing letter or comma there; they all lead you closer to a solution and to getting your application running. Visual Studio is extremely powerful in this regard, allowing users to step through code one line at a time. I believe this skill is taught far too little in schools, where most of the code written during courses is very short and generally free of bugs. A great way to introduce it would be for teachers to provide larger chunks of code with errors scattered throughout, and for the homework to be figuring out where it all goes wrong and how to fix it. But I digress.

After working for a while on the code on my VM, it was time to deploy it to the development server. Now, I should have prefaced almost everything in this post and the previous one with the fact that I had no one I could ask for help. The team was slowly losing people, and we had reached a point where the business analysts, project managers, and testers outnumbered the actual developers on the project. I am fully aware that under ideal circumstances you might not encounter these issues, but there are plenty of places where that is not the case, so you may have to fend for yourself.

With the code deployed, it was time to see how it worked in the development environment, which consisted of a number of app services and databases running on Azure. Because error logging (another undervalued element of software development, in my opinion) wasn't great, I didn't know where to start. The first step was going on a Google spree and searching for "how to debug applications on Azure" and variations thereof. I found that you can attach Visual Studio to Azure and debug your application that way (Remote debugging Azure App Services). While this sounds great, some errors don't show up (or at least they didn't in my case), and the whole exercise can be quite sluggish, especially if you're connecting from a remote VM through a VPN to an Azure service. YMMV.

Next up, as I still hadn't found the source of the error, I spent some time digging through the different resource groups set up on our Azure subscription. I was saved by a resource group dedicated to resources shared across the company. It contained an instance of Application Insights, which let me check for any failures reported by the application. The link entitled 'Failures' shows exactly that: a breakdown of failed operations for the application I had just deployed.
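For reference, a similar breakdown can also be pulled with a short query against the standard Application Insights `exceptions` table in the portal's Logs view (the 24-hour window is arbitrary):

```kusto
exceptions
| where timestamp > ago(24h)
| summarize failures = count() by type, operation_Name
| order by failures desc
```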

The Failures section provides information on response codes, top exceptions, and failed dependencies. Because I knew which operation was failing (a retrieval of data from the Azure database), I could click on it and drill into the failed operations. Further down the rabbit hole I could find the failed requests, the end-to-end transaction details, and the values of the exceptions thrown. If you have read the previous post, you will most likely have figured it out by now: I had not given the development API (an app service running on Azure) access to the database through its Managed Service Identity.
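The fix, sketched below, is the T-SQL that grants a managed identity access to an Azure SQL database. It has to be run by an Azure Active Directory admin against the target database, and `[my-api-dev]` is a hypothetical name standing in for the app service's identity:

```sql
-- Creates a contained database user for the app service's Azure AD identity.
-- The bracketed name must match the identity's display name in Azure AD.
CREATE USER [my-api-dev] FROM EXTERNAL PROVIDER;

-- Grant the minimum roles the API needs; db_datareader covers the failing read.
ALTER ROLE db_datareader ADD MEMBER [my-api-dev];
ALTER ROLE db_datawriter ADD MEMBER [my-api-dev];
```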

Once the permissions were granted to the API, the issue was resolved. But as you can clearly see, without knowledge of what is breaking down, it is hard to figure out where an issue starts. While I would now dive into similar tasks with greater confidence, I can see why newer developers, especially when starting out, are afraid of bigger tasks or projects. They can seem like large, inscrutable behemoths of code that hide traps at every entry point and every database call. However, as time and experience teach us, the complexity slowly melts away the more we drill down.