
User Acceptance Testing Strategies for Large Data Volume Scenarios


When you start storing tens of thousands of records in Salesforce, you may encounter a Large Data Volume scenario. By definition, Large Data Volume means one or more of the following:

  • You have more than 5 million records
  • You have thousands of users with concurrent access
  • You have parent objects with more than 10,000 child records
  • You use more than 100 GB of storage space
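
If you want a rough sense of whether your org is approaching these thresholds, record counts and storage consumption can be pulled through the Salesforce REST API. Here is a minimal sketch, assuming the Python simple_salesforce package; the credentials and object names are placeholders you would adapt to your own org.

    # Quick Large Data Volume check using the simple_salesforce package.
    # The credentials and object list below are placeholders; adapt them
    # to your own org.
    from simple_salesforce import Salesforce

    sf = Salesforce(username="admin@example.org", password="...",
                    security_token="...")

    LDV_RECORD_THRESHOLD = 5_000_000
    LDV_STORAGE_MB = 100 * 1024  # 100 GB, expressed in MB

    # COUNT() queries return only a row count, so they are relatively
    # cheap even on large objects.
    for obj in ("Account", "Contact", "Case"):
        total = sf.query(f"SELECT COUNT() FROM {obj}")["totalSize"]
        flag = "  <-- over the LDV threshold" if total > LDV_RECORD_THRESHOLD else ""
        print(f"{obj}: {total:,} records{flag}")

    # The REST "limits" resource reports data storage in MB.
    storage = sf.limits()["DataStorageMB"]
    used_mb = storage["Max"] - storage["Remaining"]
    print(f"Data storage used: {used_mb:,} MB of {storage['Max']:,} MB")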

Today, we’d like to recommend some steps to help ensure the functionality you rely on in Salesforce can perform as expected with your anticipated data volumes. You’ll do this the same way you validate new functionality of a major software release: through User Acceptance Testing.

What is User Acceptance Testing? It’s Part of Performance Testing

User acceptance testing, or UAT, is a form of performance testing that is an integral part of modern software projects. The idea is to test whether what the developers built is actually useful to an organization before it goes live. End users (such as people at a business, nonprofit, or educational institution who are usually not software developers) are invited to test the upcoming functionality in a test environment and say “yes, this works for us!” or “no, this isn’t the functionality we expected.” UAT typically includes a series of test cases that are representative of real-world situations and processes. In this form of performance testing, the anticipated results are compared against the actual results. UAT should be completed and signed off by the business users before the project can go live.

When would you want to do performance testing? The scenarios listed below are good candidates:

  • You are anticipating an event (such as a new enrollment season) and think you may have to deal with a substantially larger volume of data in Salesforce than your normal thresholds
  • You are already dealing with large data volume (as defined by one or more of the criteria above) and are concerned about the ability of Salesforce to perform under a new feature release or functionality update
  • You are going live with your Salesforce platform for the first time and want to make sure that the platform can handle the anticipated data load
  • Because User Acceptance Testing takes time to prepare, plan, and execute, you should consider early in your project planning phase whether you’ll need a period of UAT

Suggestions for User Acceptance Tests

One of the first considerations when planning performance tests is to identify and discuss all anticipated performance concerns, and to define the scope of the tests accordingly. Since performance tests can be run at varying degrees of depth, with correspondingly different levels of complexity, deciding on the scope and getting agreement from all key stakeholders will help set the right expectations up front.

The scope of tests should include:

  • Volume: What data volume should the tests consider? Should the test simulate a full, production-like load and measure performance, or is it sufficient to simulate a sample peak load and see how the platform behaves? For example, if you anticipate handling 100K new requests, should the performance test simulate 100K or 50K? How realistic is the 100K projection? Should the test be conducted at a 125K volume (25% more) in case the original projection is exceeded? (A back-of-the-envelope sketch of these numbers follows this list.)
  • Average vs. peak performance: Are you concerned with peak load, that is, specific periods of the day during which you anticipate higher-than-normal traffic? If so, should the performance test simulate both average and peak load and measure the associated performance? What is the average performance during normal hours, and what is the extent of the load increase anticipated during peak hours? Here’s an example of a nonprofit doing disaster relief work that had to handle a surge in demand for hurricane relief.
  • Load simulation pattern and equipment: How is the performance test going to simulate the required loads and how realistic is the simulation? What are the software and hardware requirements necessary to simulate loads that are closer to production? For example, if your end consumers are going to use mobile devices and tablets, do you have the right software and test environment to test such load patterns?
  • Actual production data vs. test data: Does the performance test cover all the variants of data points that actual production data exhibits? How do you ensure that the test data is close to production? Is the test data derived from actual production, or is it sufficient to test against simulated data?
  • Timing of the tests: Performance tests should be conducted at least three to four weeks before the actual go-live date, so that there is sufficient time for the developers to fix bugs or enhance the code if necessary. You should ensure that all critical feature development activity is complete before the tests, so that there are no surprises at the end.
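
To make the volume and peak-load questions above concrete, here is the back-of-the-envelope sketch referenced in the Volume item. The 100K-per-day figure comes from the running example; the peak window and traffic share are assumptions to replace with your own projections.

    # Back-of-the-envelope load profile for a performance test plan.
    # 100K/day echoes the running example; the peak-window and traffic-share
    # figures are illustrative assumptions.
    DAILY_REQUESTS = 100_000      # projected new requests per day
    HEADROOM = 0.25               # test 25% above the projection
    PEAK_WINDOW_HOURS = 2         # assumed busy window each day
    PEAK_SHARE = 0.40             # assumed share of daily traffic in that window

    test_volume = int(DAILY_REQUESTS * (1 + HEADROOM))              # 125,000
    avg_per_hour = DAILY_REQUESTS / 24
    peak_per_hour = DAILY_REQUESTS * PEAK_SHARE / PEAK_WINDOW_HOURS

    print(f"Records to simulate (with headroom): {test_volume:,}")
    print(f"Average load: {avg_per_hour:,.0f} requests/hour")
    print(f"Peak load:    {peak_per_hour:,.0f} requests/hour "
          f"({peak_per_hour / avg_per_hour:.1f}x average)")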

Tips for User Acceptance Test Preparations

There are several preparatory steps involved in conducting performance tests. Some of the key steps are detailed below:

  • Test environment: It is typical to designate a dedicated sandbox environment for conducting the load tests. If the solution involves additional cloud platforms and software (such as Heroku), then the test environment should also include those platforms. If you plan to reuse an existing UAT or development environment for performance testing, note that the environment will not be available for any other functional testing during the designated load testing period.
  • Static load simulator: In actual production, the platform is going to handle load under certain conditions. Building on the previous example, if you are handling 100K new requests per day, by Day 10 there will be 1M static records on the platform in addition to the new 100K requests coming in for the day. Hence, the performance test should have a way to simulate the static load anticipated on the platform. If data security is not a concern, then actual production data can be used to build the static test bed. Otherwise, tools and scripts will be necessary to generate the static test data on the test bed before the performance test begins (see the seeding sketch after this list).
  • Dynamic request simulator: The methodology and frequency by which the dynamic requests are simulated are important. There are several cloud-based tools available on the market that simulate end-user logins and the submission of different varieties of requests. However, these tools need to be scripted suitably so that the variety of requests anticipated in actual production can be simulated during the testing period (see the request-simulation sketch after this list). Sometimes additional equipment and simulators might be necessary, such as iPads or mobile phones with the associated software running on them.
  • Metrics and measuring performance: It is important to be clear about which metrics are going to be measured during the tests, and to make sure the associated tools and data points are suitably captured. Since performance tests are complex exercises to conduct, any errors or failures in capturing the required metrics could prove to be a costly mistake. Hence, it is important to iterate through the tools and capture methodologies to make sure that they are sufficiently robust to perform under load. Metrics can be measured at the front end as well as the back end. For example, the time it takes for a Lightning page to load (especially complex pages with lots of data) is a classic indicator of how the front end behaves under load. However, there could be more complex measurements, such as the overall time it takes for a request to be processed as it goes through various stages. In such situations, additional logging needs to be done on custom objects so that the time taken by the various components of the system is measured suitably (a timing sketch follows this list).
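
For the static load simulator, when production data cannot be used, one approach is to generate synthetic records through the Bulk API. Here is a minimal seeding sketch, again assuming the simple_salesforce package; the Case object and its field values are placeholders for your actual request object.

    # Seed a static test bed of synthetic records through the Bulk API,
    # using the simple_salesforce package. The Case object and field values
    # are placeholders for your actual request object.
    from simple_salesforce import Salesforce

    sf = Salesforce(username="admin@example.org", password="...",
                    security_token="...", domain="test")  # "test" = sandbox login

    DAYS_OF_BACKLOG = 10          # matches the running example: 1M static records
    REQUESTS_PER_DAY = 100_000

    def synthetic_batch(day, size):
        """Generate one day's worth of synthetic Case records."""
        return [{"Subject": f"Load test day {day} request {i}",
                 "Origin": "Web",
                 "Status": "New"} for i in range(size)]

    # Insert one day at a time so failures are easy to localize.
    for day in range(DAYS_OF_BACKLOG):
        results = sf.bulk.Case.insert(synthetic_batch(day, REQUESTS_PER_DAY),
                                      batch_size=10_000)
        failures = [r for r in results if not r["success"]]
        print(f"Day {day}: {len(failures)} failures out of {REQUESTS_PER_DAY:,}")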
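
For the dynamic request simulator, the scriptable load tools mentioned above include open-source options such as Locust. The sketch below shows the shape of such a script; the endpoint paths and payload are hypothetical stand-ins for your actual request intake service.

    # Minimal dynamic-request simulator using Locust (locust.io), one of the
    # scriptable open-source load tools in this category. The endpoint paths
    # and payload are hypothetical stand-ins for your actual intake service.
    from locust import HttpUser, task, between

    class RequestSubmitter(HttpUser):
        # Simulated users pause 1-5 seconds between actions, roughly
        # mimicking human form-filling behavior.
        wait_time = between(1, 5)

        @task(3)  # weight: submissions run 3x as often as status checks
        def submit_request(self):
            self.client.post("/services/intake/submit",   # hypothetical path
                             json={"type": "enrollment", "priority": "normal"})

        @task(1)
        def check_status(self):
            self.client.get("/services/intake/status")    # hypothetical path

    # Run with, for example:
    #   locust -f load_test.py --host https://your-test-site.example.com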
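
Finally, for the custom-object logging mentioned in the metrics item, here is one way a Python test harness could record stage timings. The Perf_Log__c object and its fields are hypothetical; in a real org, the timestamps would more likely be written by the processing components (such as Apex triggers or flows) themselves.

    # Log per-stage timings to a hypothetical Perf_Log__c custom object so
    # back-end processing time can be analyzed after the run. The object and
    # field names are assumptions; in a real org these timestamps would more
    # likely be written by the processing components themselves.
    import time
    from simple_salesforce import Salesforce

    sf = Salesforce(username="admin@example.org", password="...",
                    security_token="...", domain="test")

    def timed_stage(request_id, stage, fn):
        """Run one processing stage and record its duration in Perf_Log__c."""
        start = time.perf_counter()
        result = fn()
        elapsed_ms = int((time.perf_counter() - start) * 1000)
        sf.Perf_Log__c.create({
            "Request_Id__c": request_id,   # hypothetical custom fields
            "Stage__c": stage,
            "Duration_ms__c": elapsed_ms,
        })
        return result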

Several iterations and some fine-tuning might be necessary before the performance testing methodology is perfected and can simulate and measure an actual production-like load. The results of previous performance tests can be compared against current production metrics to gauge how realistic they are, and suitable improvements can be made as needed.

To learn more about engaging a Salesforce.org Customer Success Architect in your organization, please contact your Account Executive.

Gokul Seshadri, Technical Architect Director

Gokul Seshadri is a Technical Architect Director at Salesforce.org Advisory Services. He helps Higher Ed and Nonprofit customers to succeed in their multi-channel marketing, digital transformation, and customer 360 initiatives using Salesforce.org Nonprofit Cloud, Education Cloud, and other related technologies.
