Planning a performance test

If you're reading this post, you likely understand the importance of performance testing software applications but may need some ideas on how to properly plan out a performance test to ensure you are successful.

There are many things to consider in planning a performance test or a series of performance-related testing projects. The success of a testing project depends on designing and implementing a well thought-out and structured plan. 

Below, I've outlined several important elements to consider and document prior to starting your test project.

Planning a performance test

Like any project or significant endeavor, planning is crucial if you want to be successful.

Some key areas to consider when planning a performance test include:

  • Objectives: Identify and agree upon a clear set of objectives. This allows the proper methodology to be applied to ensure your goals are properly covered. Clear objectives help identify when the project is complete and can be exited.
  • Requirements: Document what is necessary to successfully execute and complete the testing project.
  • Topology: Develop a topology diagram, including hardware and software specifications, to ensure all parties understand the environment to be tested.
  • Test matrix: Outline a matrix of test cases, user loads and configuration changes to ensure everyone involved understands what is being executed and compared (a sketch of one way to capture this follows the list).
  • Methodology: Writing a detailed outline of how the tests will be executed ensures that the correct testing is performed to achieve the expected goals.
  • Observation and results: Designate one place to post test results, caveats, assumptions and observations to facilitate clear communication.
  • Conclusions and takeaways: This section helps draw all interested parties to the conclusion and any required follow-ups.
  • Administrative details: These include things like JIRA objects associated with the project or links to post-project artifacts.
  • Stakeholders: Persons who are contributing in some form, signing off on the project or have vested interests in the project.
  • References: Any documentation or links used to acquire information or details that were used to educate or assist with the project.
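As a concrete illustration of the test matrix element, here is a minimal Python sketch of how the agreed scenarios, user loads and configuration variants might be captured as structured data. The scenario names, loads, configurations and criteria below are hypothetical examples, not recommendations.

```python
# Illustrative sketch only: a test matrix captured as structured data so that
# every stakeholder can see exactly which scenarios, user loads and
# configuration variants will be executed and compared.
# All scenario names, loads and configuration values below are hypothetical.

TEST_MATRIX = [
    {
        "scenario": "order_entry",          # hypothetical business transaction
        "user_loads": [50, 100, 250, 500],  # concurrent virtual users per run
        "configurations": ["baseline", "tuned_db_pool"],
        "success_criteria": "p95 response time <= 2.0 s at every load",
    },
    {
        "scenario": "report_generation",
        "user_loads": [10, 25, 50],
        "configurations": ["baseline"],
        "success_criteria": "no run exceeds 60 s end to end",
    },
]

def enumerate_runs(matrix):
    """Expand the matrix into the individual runs the team has agreed to execute."""
    for entry in matrix:
        for config in entry["configurations"]:
            for load in entry["user_loads"]:
                yield entry["scenario"], config, load

if __name__ == "__main__":
    for scenario, config, load in enumerate_runs(TEST_MATRIX):
        print(f"{scenario}: {config} @ {load} users")
```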

When to plan a performance evaluation

When is the ideal time to plan a performance evaluation? The short answer: as early as possible. I would argue that planning how you want the application to scale and perform, and how robust it should be, is just as essential as identifying the functionality of the application.

Building an enterprise-level application implies that it can scale to an enterprise level of users and data, perform at the speed of thought and be accessible 24/7/365. Having a functionally rich enterprise application that does not have those performance characteristics will result in customer dissatisfaction, loss of sales and ultimately a loss in revenue for the company that built the application.

In addition, development and customer support teams require the ability to support customers in real time. Neglecting to build in the infrastructure early to readily diagnose issues can make that task onerous and can significantly increase the time to find and resolve issues. Early performance planning can also highlight weaknesses such as a lack of logging. A strategic level of logging allows for granular outputs at each API, giving a deeper understanding of bottlenecks.
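To illustrate the kind of granular, per-API logging mentioned above, here is a minimal Python sketch using a timing decorator. The API name and the get_order endpoint are hypothetical, and this is just one of several ways to surface per-API latency in application logs.

```python
import logging
import time
from functools import wraps

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("perf")

def timed(api_name):
    """Log how long each call to the decorated API takes, so per-API
    bottlenecks show up directly in the application logs."""
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return func(*args, **kwargs)
            finally:
                elapsed_ms = (time.perf_counter() - start) * 1000.0
                log.info("api=%s elapsed_ms=%.1f", api_name, elapsed_ms)
        return wrapper
    return decorator

# Hypothetical API endpoint used only to demonstrate the idea.
@timed("get_order")
def get_order(order_id):
    time.sleep(0.05)  # stand-in for real work (database call, downstream service, ...)
    return {"order_id": order_id, "status": "open"}

if __name__ == "__main__":
    get_order(42)
```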

During the planning phase, you can uncover gaps in the test plan, missing requirements, insufficient resources and more. It is better to identify these issues early in the project so you can react and recover in time. To ensure that your plan is clear, accurate and has the correct mandate, it's best to have key stakeholders review and sign off on the plan.

Performance testing should be integrated into all phases of the development cycle. What to test, how to test it and what the success criteria are need to be established and documented in the plan for approval and posterity.

What to evaluate

Performance testing of a complex enterprise application can produce a plethora of metrics to capture and evaluate. Before you commence testing, it is a good idea to identify which metrics you need to monitor and compare; how and where to get these metrics; at what frequency to capture them; and how to evaluate them. Being unsure of which metrics to capture and at what frequency can result in having to rerun tests to capture the required metrics, leading to a loss of time and efficiency.

Often what you want to capture and evaluate and at what frequency is dependent upon the project's objectives. Evaluating different metrics can give multiple insights from a test, so knowing this prior to the tests can increase the test's effectiveness.

It is good practice to identify, in the plan, the various subject matter experts (SMEs) required for the project, either as contributors or consultants, prior to commencing the project. That will improve the communication and efficiency of the project.

Performance test engineers will often need to work with various SMEs to determine which metrics to capture and how to extract the information. Different technical experts will want to see and evaluate different metrics. A system engineer may be very interested in the server's system metrics such as CPU usage, memory consumption, IO activity and other factors, whereas a software engineer may be more interested in locks and thread counts, error messages and information from the application's logs. Therefore, careful documentation of system and application metrics is paramount for future evaluation against different objectives.
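As one illustration of capturing server-side system metrics at an agreed frequency, here is a small Python sketch that assumes the third-party psutil library; the duration, interval and output file are placeholders to be replaced by whatever the test plan specifies.

```python
import csv
import time

import psutil  # third-party library; one common way to read OS-level metrics

def sample_system_metrics(duration_s=60, interval_s=5, outfile="system_metrics.csv"):
    """Capture CPU, memory and disk IO at a fixed interval for later comparison.
    The interval should match the capture frequency agreed in the test plan."""
    with open(outfile, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["timestamp", "cpu_percent", "memory_percent",
                         "disk_read_bytes", "disk_write_bytes"])
        end = time.time() + duration_s
        while time.time() < end:
            io = psutil.disk_io_counters()
            writer.writerow([
                time.time(),
                psutil.cpu_percent(interval=None),
                psutil.virtual_memory().percent,
                io.read_bytes if io else 0,
                io.write_bytes if io else 0,
            ])
            time.sleep(interval_s)

if __name__ == "__main__":
    sample_system_metrics(duration_s=30, interval_s=5)
```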

Even minor information such as browser version, web server type and gateway settings can add valuable perspective to test results and findings -- the more information, the better. 

What does success look like?

One of the most difficult aspects of performance testing to nail down is determining what is good enough. When testing an application or software upgrade, you can often assume the previous application or software is your baseline, so evaluating your outcome against the previous version can be insightful. But how do you figure out what is good enough if the test is against a new product or application?

Assuming the testing is against a new product/application or version of software that has no previous baseline metrics to compare to, here are a few questions to consider (a sketch of turning the answers into explicit checks follows the list):

  • Are there customer expectations already defined?
  • Are there any industry standards to refer to and compare to?
  • Are there logical user expectations to infer, such as "all navigation or dropdown lists should render in <= 1 second"?
  • Ask yourself, "how long would you expect or be comfortable waiting for an action?"
  • Are the scaling characteristics predictable and reasonable (for example, linear)?
  • Does your application scale vertically and horizontally as expected?
  • Do customers lose service in a failover test? If so, how quickly does the service recover?
  • Do end users receive warnings or information for longer-running actions? Without that, the end user may think the action has timed out or been orphaned.
  • Does one slow action impact other actions (a cascading effect)?
  • What happens at the upper limits of the application's threshold?
  • Does the application recover from spikes as expected?
  • Do resources and user response times remain constant at a steady state, or do they increase?
  • Are resources released as expected and when no longer in use?
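As referenced above, one way to remove ambiguity is to turn the answers to these questions into explicit, documented pass/fail checks. The following Python sketch is illustrative only; the thresholds are hypothetical, not recommended values.

```python
# Illustrative only: turning agreed success criteria into explicit pass/fail checks.
# The thresholds below are hypothetical examples, not recommended values.

SUCCESS_CRITERIA = {
    "navigation_p95_s": 1.0,      # e.g. "dropdowns render in <= 1 second"
    "failover_recovery_s": 30.0,  # maximum acceptable loss of service on failover
    "max_error_rate": 0.01,       # no more than 1% failed transactions
}

def evaluate(results):
    """Compare measured results against the documented criteria and
    report every failure, so exit from the project is unambiguous."""
    failures = []
    if results["navigation_p95_s"] > SUCCESS_CRITERIA["navigation_p95_s"]:
        failures.append("navigation p95 exceeded threshold")
    if results["failover_recovery_s"] > SUCCESS_CRITERIA["failover_recovery_s"]:
        failures.append("failover recovery too slow")
    if results["error_rate"] > SUCCESS_CRITERIA["max_error_rate"]:
        failures.append("error rate above agreed limit")
    return failures

if __name__ == "__main__":
    measured = {"navigation_p95_s": 0.8, "failover_recovery_s": 12.0, "error_rate": 0.002}
    print(evaluate(measured) or "all documented success criteria met")
```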

The key is to document the success criteria in the test plan with as much clarity and specificity as possible prior to engaging in the testing. Otherwise, completing and exiting the project can be difficult and filled with uncertainty.

Analysis of results

Analyzing results can often require careful consideration and discussion. Different SMEs look at different metrics or care about different results, so evaluating the results from various mindsets can be useful in order to draw a variety of conclusions.

Comparing various combinations of results can lead to different conclusions and insights. Overlaying result metrics can offer great insight -- for example, plotting transaction response times vs. user load vs. system resource usage shows how they are all related. This type of analysis assists with proper sizing of environments and future resource allocations, and metrics like user loads vs. locks and threads may expose potential runtime issues and bottlenecks.
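As a simple illustration of overlaying result metrics, the following Python sketch plots response time and CPU usage against user load on a shared x-axis using matplotlib; the numbers are synthetic and purely illustrative.

```python
import matplotlib.pyplot as plt

# Synthetic, illustrative numbers only: one point per load level from a test run.
user_loads = [50, 100, 250, 500]
p95_response_s = [0.6, 0.8, 1.4, 2.9]
cpu_percent = [22, 35, 61, 93]

fig, ax_resp = plt.subplots()
ax_cpu = ax_resp.twinx()  # second y-axis so both metrics share the user-load x-axis

ax_resp.plot(user_loads, p95_response_s, marker="o", color="tab:blue", label="p95 response (s)")
ax_cpu.plot(user_loads, cpu_percent, marker="s", color="tab:red", label="CPU (%)")

ax_resp.set_xlabel("concurrent users")
ax_resp.set_ylabel("p95 response time (s)", color="tab:blue")
ax_cpu.set_ylabel("server CPU (%)", color="tab:red")
ax_resp.set_title("Response time vs. user load vs. CPU usage")

fig.tight_layout()
fig.savefig("overlay.png")  # or plt.show() in an interactive session
```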

Proper use of dashboards, charts and reports can facilitate results comparison and allow for easier understanding of the relationships. The creation of those dashboards, charts and reports may need to be factored into the overall project timeline and project test plan.

What to do with the results

Lastly, what to do with all these great results and insights? Running performance tests, analyzing results and drawing conclusions without any follow-up on actions can often be a significant waste of time and resources. In other words, what you do with the results and conclusions is key to the project's success.

Here are a few ideas of what can be done with performance test results:

  • Product defects or action items: Follow up on any anomalies or software deficiencies found.
  • Product enhancements: Potential improvement areas may be uncovered that, if implemented, could result in faster performance, reduced resource usage and/or better scaling characteristics.
  • Internal technical papers: Sharing useful information with various internal teams can be advantageous to those teams. For instance, if you uncover new best practices, share them with SaaS, DevOps and support teams so they know which practices to implement.
  • External technical papers: Sharing useful information with customers and partners can ensure they're also aware of new best practices, tuning parameters, hardware impacts, etc. Sharing externally should always be vetted with the correct experts and signed off on prior to releasing the information.
  • Marketing papers: Great performance and scaling findings may be enticing information for marketing teams to share with customers, partners, industry experts, etc. to garner sales. This type of information should be signed off on prior to release.
  • POC Decision: Sometimes performance-based testing is used as an integral part of decision-making for new products, applications or functionality. Sharing results with the key stakeholders will allow for more informed decision-making.

Summary

Performance testing is a broad discipline of testing methodologies and can be quite complex in nature, but it is absolutely essential to the health and success of enterprise software. It should be considered early in the development cycle and properly planned and sized to be successful.

In an ever more demanding and competitive software industry, a key differentiator can be a product's ability to perform and scale. Doing this successfully requires the proper level of consideration and discipline.
 
