What Is Performance Testing?
Performance testing can generally be summarized in three aspects: application performance testing on the client, on the network, and on the server. Usually, an effective and reasonable combination of the three aspects achieves a comprehensive analysis of system performance and a prediction of bottlenecks. The purpose of performance testing and optimizing is to verify whether the software system can achieve the performance indicators proposed by users, to find the performance bottlenecks in the software system, to optimize the software, and finally to optimize the system. Specifically, the purpose of performance testing mainly includes the following aspects.

One, assessing the capability of the system. The throughput and response times obtained in a test can be used to validate the capability of the planned model and help developers make decisions.

Two, identifying weaknesses in the system. Controlled loads can be increased to an extreme level to break through the system and expose its bottlenecks or weak points.

Three, optimizing the system. Developers can repeat performance testing to verify that the tuning activities have achieved the desired results, so that the performance of the system is steadily improved.

Four, detecting problems in software. Long-term task execution can lead to program failures due to memory leaks, revealing hidden problems or conflicts in the program and providing a reference for improving the application.

Five, confirming the reliability and resilience of the system. The only way to assess whether the stability and reliability of the system meet the requirements is to run the workload for a certain period of time under a production-level load.

Performance testing and optimizing needs to find the bottlenecks first, so what are the bottlenecks in a system? Some common bottlenecks are listed here. Hardware performance bottlenecks include problems with the CPU, memory, and disk I/O. Performance bottlenecks of middleware can come from parameter configuration, the database, the web server, and so on; for example, an unreasonable parameter setting of a middleware platform can become a bottleneck. Performance bottlenecks of applications generally refer to newly developed applications; for example, the architecture of the application is unreasonable, or the program itself has design problems such as processing requests serially without enough threads, so that the system performs poorly when a large number of users connect at the same time. The operating system, whether Windows, Unix, Linux, or another, can also be a performance bottleneck; for example, if during performance testing physical memory is insufficient and the virtual memory settings are unreasonable, the efficiency of swapping to virtual memory drops sharply and response times increase greatly, and in that case there is a performance bottleneck in the operating system. Network performance bottlenecks mainly involve firewalls, dynamic load balancers, switches, and other devices. What we need to do in performance testing is to consider these factors comprehensively and then help developers or operators locate the performance bottlenecks. There are many objects of performance testing and optimizing, from the top of the application stack to the bottom, which makes performance tuning complex and difficult work.
Common Performance Testing Methods
Some main types of performance testing are listed as follows, including load testing, stress testing, concurrent testing, benchmark testing, and so on.

A load test is usually conducted to understand the behavior of the system under a specific expected load. This load can be the expected number of concurrent users on the application performing a specific number of transactions within a set duration. These tests report the response times of all the important business operations. The database, the application server, and the network are also monitored during the test, which helps identify bottlenecks in the application software and hardware.

Stress testing is normally used to understand the upper limits of capacity of the system. This kind of test is done to determine the system's robustness under extreme load and helps application administrators determine whether the system will still perform sufficiently if the current load goes well above the expected maximum. The key word of stress testing is extreme: by putting the system under extreme pressure, we can observe the performance problems of the system, and analyzing those problems serves the purpose of system optimizing. In other words, stress testing is meant to make the system go wrong.

Concurrent testing verifies the concurrent performance of the system. Through a certain number of concurrent connections, this test observes the behavior characteristics of the system under concurrency and determines whether the system meets the concurrency requirements of the design. Concurrent testing takes a system viewpoint; that is, it is executed against the whole system rather than against a single application or function.

Benchmark testing is needed when a new module is added to the software system. The purpose is to determine the impact of the new module on the performance of the entire software system. As the name implies, benchmark testing should have a benchmark for comparison. According to the benchmarking method, the new module needs to be turned off and on at least once: the performance indicators of the system with the module turned off are recorded as the benchmark and then compared with the indicators measured with the module turned on, in order to judge the impact of the module on the system.

Stability testing determines the stability of the system under a long-term load. Specifically, it is usually done to determine whether the system can sustain the continuous expected load. For example, during stability testing we can monitor memory utilization to detect potential leaks. Performance degradation under long-term load is also important but often overlooked: the system should ensure that throughput and response times after a long period of sustained activity are as good as at the beginning of the test, or even better.

Recoverability testing checks whether the system can quickly recover from a faulty state to the normal state. For example, in a load-balanced system, when a host is under heavy load and cannot work properly, the backup machine must take over the load quickly.
Recoverability testing is usually done in conjunction with stress testing; that is, when an application system goes down because of the stress, the system should recover within a period of time, and that recovery time is an important target of recoverability testing.

Full chain testing covers both the upstream and the downstream of the system, testing it through full chain stress testing and stability testing. Some large internet companies simulate online traffic by generating approximately real user traffic and user behavior. It is a very effective safeguard before large-scale activities or before large-scale versions are launched. For example, Alibaba Cloud uses full chain testing to identify system problems so as to ensure the smooth operation of large-scale e-commerce activities such as the biggest online shopping holiday on November 11th, also known as Double 11. However, full chain testing also places high requirements on companies, such as too many systems involved, too many developers involved, inconsistency between simulated test data and real access traffic, and so on. Therefore, a full evaluation and complete planning should be carried out before full chain testing.
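As a rough sketch of how a load test and a stress test differ in practice, a command-line HTTP benchmarking tool such as wrk (used later in the demo) can simply be driven with different parameters. The thread counts, connection counts, durations, and URL below are placeholders, not values from the course:

# Load test: drive the service at the expected concurrency for a fixed duration
wrk -t4 -c100 -d10m http://your-service.example.com/

# Stress test: push the concurrency well beyond the expected maximum to expose limits
wrk -t8 -c2000 -d10m http://your-service.example.com/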
Demo
First, you need to create a Kubernetes cluster and deploy an NGINX application. Here our Kubernetes cluster has already been created; if you are not clear about how to create a Kubernetes cluster, you can refer to the documentation and related courses. Next, deploy NGINX on the cluster. Click on “Application” and then on “Deployment.” You can use either an image or a template to deploy NGINX; click on “Create by Template” here. The template to deploy NGINX is very simple: the main requirement is that you specify the number of copies, the image and its version for the container, and the exposed port.
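The template itself is not shown line by line in the video; a minimal sketch of such an NGINX Deployment, with placeholder names and values, might look like the following. It could be pasted into the console's template box, or applied from the command line as shown here:

# Apply a minimal NGINX Deployment (names and values are placeholders)
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2                 # number of copies
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.21     # image and its version
        ports:
        - containerPort: 80   # exposed container port
EOF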
When the configuration is complete, click on “Deploy.” You can see that a basic containerized application, NGINX, is deployed to the cluster. Next, you should add a load balancer for NGINX to expose the NGINX service to the public network so that other users can access it. Similarly, the load balancer can be deployed with a template. In the template file for deploying a load balancer, you need to specify the application you just deployed through the selector. In addition, you need to specify the container port to be monitored and the port exposed by the load balancer. After setting this up, click on “Deploy” again. In this way, NGINX can be accessed through the public network; you can click on the external endpoint to see the result of the deployment. It works.
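Again, as a minimal sketch with placeholder names and ports, a LoadBalancer Service for the Deployment above might look like this:

# Expose the NGINX Deployment through a cloud load balancer (placeholder values)
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: nginx-lb
spec:
  type: LoadBalancer
  selector:
    app: nginx          # must match the labels of the Deployment just created
  ports:
  - port: 80            # port exposed by the load balancer
    targetPort: 80      # container port to be monitored
EOF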
Next, you can use ctop, the container monitoring tool mentioned earlier, to monitor containers. ctop can be started either as an application or as a Docker container; here we start it as a container. Firstly, you need to log in to the master node remotely over SSH using the IP address of the cluster. Next, you need to pull the ctop image on the master node. You can search for the ctop image, or you can pull one of the ctop images directly. Once the image is pulled successfully, you can run ctop to observe the performance of the containers on the master node. You can start the container directly with the docker run command, but you need to be careful to configure some necessary options and parameters, for example, to specify the name of the container and the path of the volume to mount. After running the ctop container, you can see the main performance metrics of the containers running on the master node.
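As a concrete sketch, assuming the commonly used quay.io/vektorlab/ctop image (the exact image in the video may differ), the pull and run steps might look like this; mounting the Docker socket lets ctop list the containers on the node:

# Pull the ctop image on the master node (image name is an assumption)
docker pull quay.io/vektorlab/ctop:latest

# Run ctop interactively, mounting the Docker socket so it can see local containers
docker run --rm -it \
  --name ctop \
  -v /var/run/docker.sock:/var/run/docker.sock \
  quay.io/vektorlab/ctop:latest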
You may have noticed that there are no NGINX containers in the list. That is because business applications are not deployed on master nodes by default; only containers providing cluster and system functions are installed on master nodes. You also need to log in to the worker node to observe the NGINX containers. You can access the worker node remotely from the master node using the private IP address of the worker node. In the web console, click on “Nodes” and find the worker node. You can see the private IP address of the worker. Then use the ssh command on the master node to remotely log in to the worker node; the login password is the server password set when creating the cluster. On the worker node, you should pull the ctop image again and run the ctop container.
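Assuming a placeholder private IP address and the root user, the remote login step might look like this; the ctop commands themselves are the same as on the master node:

# From the master node, log in to the worker node over its private IP (placeholder address and user)
ssh root@192.168.0.12

# Then repeat the docker pull and docker run commands for ctop shown above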
Now you can see that there are two NGINX deployment containers running, which provide the NGINX service directly. In the interface of ctop, you can set different sort orders by pressing s on your keyboard, for example, sorting by CPU usage. In this way, you can monitor the containerized application deployed in the cluster at a basic level.

Next, the stress testing tool wrk is used to test the performance of NGINX. The principle of wrk is to continuously access NGINX from test machines outside the cluster so as to simulate the scenario in which a large number of users access the NGINX service concurrently. Usually, the test machine can be either a virtual machine or an ECS instance. If you need to simulate a large number of concurrent connections, such as 10,000 or 10 million connections, you may need more test machines. You should keep the window connected to the worker node alive to continually monitor the performance of the containers in the following steps, and open a new window to remotely log in to the master node again, which now acts as the test machine.

Then you start wrk on the test machine. wrk can also be installed as an application or run as a container; in this demo we run it as a container. Similarly, you can search for wrk images, or pull a specific image directly, and then run wrk on the test machine. You can use basic commands to view the usage and the parameter descriptions of wrk; for example, you can specify the number of threads used for connections, the number of concurrent connections, and the duration of the test. You can use docker run to start wrk directly and specify the parameters of the stress test at the same time. For example, you can configure it like this: use two threads, set the number of connections to 20, and set the test time to one minute. It should be noted that, run this way, the wrk container will automatically stop after the command finishes. If you need to run many tests, you can also log in to the container and run them from inside it, so that the container keeps running.

You can set different test parameters to implement stress testing of NGINX, for example, using two threads while the number of connections is 50 and the test time is one minute. After executing this command, return to the ctop window on the worker node; you can see that the CPU utilization of the NGINX deployment containers increases rapidly.
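Taken together, the wrk runs described above might look like the following sketch, assuming the commonly used williamyeh/wrk image and a placeholder public endpoint for the NGINX load balancer (both are assumptions, not values from the video):

# Pull a wrk image on the test machine (image name is an assumption)
docker pull williamyeh/wrk

# First run: 2 threads, 20 connections, 1 minute; the container exits when the test finishes
docker run --rm williamyeh/wrk -t2 -c20 -d60s http://<nginx-public-endpoint>/

# Second run: 2 threads, 50 connections, 1 minute
docker run --rm williamyeh/wrk -t2 -c50 -d60s http://<nginx-public-endpoint>/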
In practical business applications, if these metrics become too high, engineers should consider scaling out the application to ensure the availability and stability of the system. In the results returned by wrk, we can look at several values, for example the standard deviation of latency shown in the thread statistics, which indicates the dispersion of the results: the larger the deviation, the more unstable the system.
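If you decide to scale out, or simply want to repeat the test with more copies as described next, the replica count can be changed in the Deployment template or, as a quick sketch, from the command line (the Deployment name is the placeholder used earlier):

# Scale the NGINX Deployment to three copies, then rerun the wrk tests and compare results
kubectl scale deployment nginx-deployment --replicas=3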
You can analyze the test results obtained with different numbers of connections and different test durations. You can also increase the number of copies of the NGINX pod and then run the stress test again to verify that multiple copies improve the availability and stability of the application. You can practice this, record the data, and then compare the performance figures. Thank you for watching this video. By experimenting with and practicing these performance testing and optimizing tools, you can gain a deeper understanding of performance testing and optimizing for containerized applications.