Application performance optimization
Performance optimization is an essential part of web application development these days. Users don’t want to waste time waiting for pages to load and render. In dynamic, complex web applications we have to deal with calculations, database requests and other heavy operations which may take some time, and their speed has a direct impact on HTTP response times.
As we add features and logic to our app, we have to deal with higher abstraction and complexity, which increases performance requirements and may slow the application down. Eventually, users themselves may feel the impact of our performance troubles while working with the app.
How does it happen?
The app is only as fast as its slowest part. We cannot fix performance issues all at once; we need a continuous, systematic strategy. Even if everything was fast and great at the beginning, as we add more logic and the app sees real use, we may find bottlenecks that slow the whole app down. Generally speaking, bottlenecks can appear on three layers:
- Data layer - poor data manipulation design, where data may be too large (not to be confused with big data), badly formatted or inappropriately stored
- App logic layer - the code that represents the logic of the app (e.g. submitting an order) is inefficient and executes too many (sometimes even pointless) operations
- Server layer - apps have limited allocated resources (CPU, RAM...). If the server is slow (not enough resources) or has too many incoming requests to serve, we face a bottleneck on the server side.
Dealing with performance issues on different layers requires different tools and techniques. Let’s see how we can find bottlenecks on the application layer and where to start with optimization.
Where to find bottlenecks
We can find bottlenecks on multiple stages of the development process:
Code review - first of all, we have a chance to catch an issue (in the form of bad code or design) during code review. At this stage we can identify and fix the problem even before it manifests. Unfortunately, as mentioned, performance issues are usually quite complex, so we cannot rely on this technique alone.
Monitoring - we can monitor apps running in our infrastructure (servers) and built with our technology. Using various tools we can monitor the performance of the app over time and track request times and server load (how busy the server is). We can then analyse the output, react to the issues found and fix them. One representative of such tools is Dynatrace.
Profiling - if there is a performance issue in the app, sooner or later somebody will notice its impact in the form of a slow app. In the better scenario it will be a developer; in the worse one, a user who suffers from the slow application. Either way the problem will show up as slow responses from the system. Once we know there is an issue somewhere, the best way to locate it is with profiling tools, which can tell us exactly where the problem is.
A profiler is a software service or tool which gives us detailed information about what is happening in the app and why. If we use a framework (e.g. Yii2 or Nette), a simple profiler is probably already included, so we can find out how long each plugin takes to initialize or how much memory it consumes. That gives us an interesting overview, but we have to go deeper.
There are a couple of tools for deep performance analysis; in our case we use Blackfire.io.
Once we have found a scenario where handling an HTTP request takes too long (during development or later through monitoring), the real investigation starts. We are looking for the piece of code where the app spends most of its runtime. Blackfire.io shows all the steps (function calls) the app goes through as a directed graph, where each node represents an executed function and each edge represents a call. Each function (node) displays its number of calls and the time and memory it consumed.
In the figure above we can see part of such a graph. It is a real situation we had to deal with: a simple HTTP request rendering a filter form and search results. The problem is that the translation (i18n) function took 42.55% of the time. In the graph we can see suspicious behaviour: the function attributeLabels, called 80 times, calls the function _tF 6320 times.
Why? Because attributeLabels always calls _tF afresh instead of calling it once and keeping the output (the results of _tF don’t change during the runtime). Yes, you are reading correctly: it translates the string, returns it, and when it is called again, it calls _tF again instead of reusing the previous value. Simply caching the value in the object’s memory can save us a lot of time.
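The fix can be sketched as simple memoization. The method and function names below mirror the article (attributeLabels, _tF), but the bodies are illustrative stand-ins, not the real project code:

```php
<?php
// Counter lets us observe how often the "translation" actually runs.
$GLOBALS['tfCalls'] = 0;

// Stand-in for the real _tF translation function; imagine an
// expensive message-catalog lookup here.
function _tF(string $key): string
{
    $GLOBALS['tfCalls']++;
    return strtoupper($key);
}

class SearchForm
{
    /** @var array<string,string>|null labels cache, built on first use */
    private ?array $labelCache = null;

    public function attributeLabels(): array
    {
        // Before the fix, this method re-ran _tF on every call.
        // Now the translated labels are computed once and reused.
        if ($this->labelCache === null) {
            $this->labelCache = [
                'name'  => _tF('name'),
                'email' => _tF('email'),
            ];
        }
        return $this->labelCache;
    }
}
```

However many times attributeLabels() is called during the request, each label is translated only once per object.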
As you can see from the figures below, this change decreased the execution time from 1.03 s to 50.4 ms, and instead of 6923 calls, _tF is now called just 302 times (the call-count mismatch is due to other callers of _tF).
Blackfire can profile a specific request (HTTP from a browser or curl, or even a CLI call), so we don’t have to go line by line trying to identify the problem manually; we just run the request through Blackfire and the problem shows up.
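For example, with the Blackfire CLI companion installed, profiling a single HTTP request or a CLI script looks roughly like this (the URL and script path are placeholders, not from the project):

```shell
# Profile one HTTP request and upload the result to blackfire.io
blackfire curl "http://localhost/search?term=foo"

# Profile a PHP script invoked from the command line
blackfire run php scripts/reindex.php
```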
Just as Blackfire can show time consumption, it can show how the app works with memory or the database. It even provides a feature to compare two calls, so you can see the results “before and after”. There are many more features covering the network, DB queries, third-party calls… Just go to Blackfire.io and try it out, because you have to feel how cool it is :)
Integrating Blackfire into the development process
Blackfire installation isn’t difficult at all. It is a PHP extension which you can install, and it is very well documented. Since all our projects are dockerized, we chose to integrate it into our Docker image, so all developers have it available without any complications.
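A Dockerfile fragment for this follows the pattern from Blackfire’s Docker documentation. This is only a sketch: the agent hostname ("blackfire"), the download URL and the ini paths are assumptions to verify against the current docs for your PHP version:

```dockerfile
# Sketch: install the Blackfire probe into a PHP image (verify details
# against Blackfire's current Docker documentation before using).
RUN version=$(php -r "echo PHP_MAJOR_VERSION.PHP_MINOR_VERSION;") \
 && curl -A "Docker" -L -s -o /tmp/blackfire-probe.tar.gz \
      "https://blackfire.io/api/v1/releases/probe/php/linux/amd64/$version" \
 && mkdir -p /tmp/blackfire \
 && tar zxpf /tmp/blackfire-probe.tar.gz -C /tmp/blackfire \
 && mv /tmp/blackfire/blackfire-*.so "$(php -r "echo ini_get('extension_dir');")/blackfire.so" \
 && printf "extension=blackfire.so\nblackfire.agent_socket=tcp://blackfire:8307\n" \
      > "$PHP_INI_DIR/conf.d/blackfire.ini" \
 && rm -rf /tmp/blackfire /tmp/blackfire-probe.tar.gz
```

Baking the probe into the shared image means every developer can profile locally without touching their own PHP setup.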
I cannot say I use Blackfire on a daily basis, but it is very helpful to know which tool to reach for when it is needed; that depends on the character of the issues we face. It is particularly helpful for refactoring, developing key backend features, or optimizing old or legacy code.