Debugging Techniques for Public Cloud Applications

Debugging applications deployed in the cloud can be challenging. Microservices have become a popular software architecture: each service runs independently with its own database and its own logs, which creates hurdles for developers trying to trace a bug. In some scenarios the bug is not a logic issue at all but an infrastructure or environment issue that only surfaces after the application is deployed. In this article, we will look at debugging techniques for public cloud applications.

The Challenge

The way microservices are deployed makes debugging them far from straightforward. Unlike in a monolith, the state of a microservice application is spread across multiple services, so capturing that state and tracing a request across services is difficult. In modern public cloud deployments, the application may run on virtual machine instances, managed cloud services, or Kubernetes, each of which scales differently in production.

Techniques to Debug Code in the Public Cloud

  • Application Performance Management
  • Remote Debugging
  • Log Analyzers

Application Performance Management

APM tools help DevOps admins detect problems early, sometimes even before they occur, by parsing web application request logs, error rates, and incoming requests at the application's endpoints. They also identify bottlenecks and slowdowns in the cloud database or queuing/streaming services the application uses (such as AWS RDS, Amazon MQ, or Azure SQL).

The first step in APM is to identify and fix application performance problems. The key metrics to monitor are resource availability (are all the instances or VMs on which your application is deployed still up and running? are the cloud databases and queues reachable?) and application errors: their frequency and their source.
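As a minimal sketch of the kind of metric an APM agent collects automatically, the hypothetical decorator below records call counts, error counts, and cumulative latency per endpoint (the endpoint name and in-memory store are illustrative; real agents instrument your framework and ship metrics to a backend):

```python
import time
from collections import defaultdict

# In-memory metric store; a real APM agent ships these to a backend service.
metrics = defaultdict(lambda: {"calls": 0, "errors": 0, "total_ms": 0.0})

def monitored(endpoint):
    """Record call count, error count, and cumulative latency per endpoint."""
    def wrap(fn):
        def inner(*args, **kwargs):
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            except Exception:
                metrics[endpoint]["errors"] += 1
                raise
            finally:
                metrics[endpoint]["calls"] += 1
                metrics[endpoint]["total_ms"] += (time.perf_counter() - start) * 1000
        return inner
    return wrap

@monitored("/orders")          # hypothetical endpoint name
def get_order(order_id):
    if order_id < 0:
        raise ValueError("bad order id")
    return {"id": order_id}

get_order(1)
try:
    get_order(-1)
except ValueError:
    pass

error_rate = metrics["/orders"]["errors"] / metrics["/orders"]["calls"]
```

With one success and one failure recorded, the error rate for the endpoint comes out to 0.5, exactly the kind of number an APM dashboard would chart over time.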

Another key factor is the traffic level and the response time of the application. APM can reveal whether slowness comes from traffic spikes or from high response times in the services the application depends on, so that you can set up sufficient scalability and tune the high-availability configuration in the cloud.
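One concrete form this takes is a latency percentile checked against a service-level objective. The sketch below computes a nearest-rank p95 from a list of sample response times and flags when it breaches an assumed 300 ms SLO (both the samples and the threshold are made up for illustration):

```python
def percentile(samples, pct):
    """Nearest-rank percentile of a list of latency samples (in ms)."""
    ordered = sorted(samples)
    rank = max(0, round(pct / 100 * len(ordered)) - 1)
    return ordered[rank]

SLO_P95_MS = 300  # hypothetical service-level objective

latencies_ms = [120, 180, 95, 240, 310, 150, 205, 130, 410, 175]
p95 = percentile(latencies_ms, 95)
needs_scaling = p95 > SLO_P95_MS  # signal to scale out or tune the slow dependency
```

An APM tool computes exactly these percentiles continuously and can trigger autoscaling or alerting when the SLO is breached.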

APM tools like New Relic, Datadog, and Splunk help developers and DevOps admins stay on top of their application's performance. Several of these tools offer native cloud support for container monitoring (ECS/EKS), Function-as-a-Service monitoring (AWS Lambda), and Pivotal Cloud Foundry monitoring.

Remote Debugging

In layman’s terms, remote debugging is debugging an application that is not running on your local machine. The core idea is to open a connection to the application server or pod and investigate the problem through its logs or events.

In a microservice deployment, or a serverless architecture where your code runs under a Function-as-a-Service model, remote debugging helps the developer root-cause and fix bugs. Remote debugging tools can be used as standalone applications to analyze the log trace, the flow of execution, and memory dumps, or they can be used from within your favorite IDE.

Traditionally, remote debugging means opening a remote PowerShell or SSH session into the production machine to look at the logs, or adding a breakpoint and re-running the application to inspect variable values and function inputs. There are multiple ways to do remote debugging, but tools like Lightrun simplify the process: they let developers manage and debug multiple nodes, and add new log statements dynamically on the fly, without a code change or redeployment.
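To illustrate the idea of adding a log point without touching the application code, here is a simplified Python sketch using `sys.settrace` to capture a function's arguments at call time. This is only a toy model of the concept: the watched function name is chosen at runtime, the target function itself is unchanged, and production tools achieve the same effect far more efficiently through bytecode instrumentation rather than interpreter tracing.

```python
import sys

WATCHED = {"process_order"}   # function names to instrument, chosen at runtime
captured = []                 # stands in for log output shipped back to the debugger

def tracer(frame, event, arg):
    # Fires on every function call in this thread; record args for watched names.
    if event == "call" and frame.f_code.co_name in WATCHED:
        captured.append((frame.f_code.co_name, dict(frame.f_locals)))
    return None  # returning None: don't trace line-by-line inside the frame

def process_order(order_id, qty):
    return qty * 2  # unchanged application code: no log statement added here

sys.settrace(tracer)
result = process_order("A-1", 3)
sys.settrace(None)
```

After the call, `captured` holds the function name and its arguments, which is effectively a dynamically injected log line: the variable values were observed without editing, rebuilding, or redeploying the code.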


Log Analyzers

Analyzing the logs is one of the simplest ways to debug an application. Developers trace the flow of execution and the variable values written into the log. Logs are generated using the logging frameworks available for every programming language (such as Log4j or log4net), but analyzing them becomes more and more complex with multi-threaded and multi-node deployments.

Plenty of log analyzers, such as the ELK stack and Nagios, can help break the problem down when you have large sets of data. They simplify everything from log collection to data visualization, raise alerts, and can create a ticket in your bug-tracking tool when a certain condition is triggered.
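A toy version of the alert condition a log analyzer evaluates might look like the following: parse structured (JSON-lines) log records, count errors per service, and flag any service over a threshold. The field names, services, and threshold here are assumptions for illustration; real analyzers run equivalent queries over indexed log stores.

```python
import json
from collections import Counter

ERROR_THRESHOLD = 2  # hypothetical alert condition: more than 2 errors

raw_logs = """\
{"service": "orders", "level": "ERROR", "msg": "db timeout"}
{"service": "orders", "level": "INFO", "msg": "retrying"}
{"service": "orders", "level": "ERROR", "msg": "db timeout"}
{"service": "billing", "level": "ERROR", "msg": "card declined"}
{"service": "orders", "level": "ERROR", "msg": "db down"}
"""

# Count ERROR-level records per service.
errors = Counter(
    rec["service"]
    for line in raw_logs.splitlines()
    for rec in [json.loads(line)]
    if rec["level"] == "ERROR"
)

# Services that would trigger an alert (and, say, a bug-tracker ticket).
alerts = [svc for svc, n in errors.items() if n > ERROR_THRESHOLD]
```

Here only the `orders` service crosses the threshold, so it alone would trigger an alert and a ticket.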

If your application runs on pods, Kubernetes will scale the pods according to your requirements, which means your application may be running on multiple pods at once; in an event-driven architecture, any of those pods may pick up a job for execution. For a frictionless log-analysis experience, data ingestion must be streamlined and the logs of all your containers managed in a single pane of glass. There are tools with cloud-native integrations for AWS, Azure, and GCP that streamline log and event shipping.
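The basic operation behind that single-pane-of-glass view is merging per-pod log streams into one timeline. A minimal sketch, with made-up pod names and timestamps (ISO 8601 strings sort correctly as plain text):

```python
import heapq

# Each pod writes its own time-ordered log stream: (timestamp, pod, message).
pod_a = [("2024-01-01T10:00:01Z", "pod-a", "job accepted"),
         ("2024-01-01T10:00:05Z", "pod-a", "job done")]
pod_b = [("2024-01-01T10:00:03Z", "pod-b", "job accepted")]

# heapq.merge interleaves already-sorted streams into one chronological view.
timeline = list(heapq.merge(pod_a, pod_b))
```

The merged timeline shows pod-b's event between pod-a's two events, letting you follow a job across pods as if it were one log, which is what log-shipping pipelines do at scale.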

Takeaway

Debugging is an art, and it takes discipline and time to master. In the modern microservice and cloud deployment model it becomes more and more complex. Using third-party tools for each technique mentioned in this post helps developers create a more observable environment: application metrics and tracing, remote debugging with in-production breakpoints, and log analyzers.

Also published on Medium.
