NashTech Blog

REST API Test Strategy: What aspects of the API should we test?


REST API testing is a crucial process that involves evaluating the functionality, performance, and security of RESTful APIs. It verifies whether the API meets the required specifications, responds correctly to different requests, handles errors gracefully, and interacts seamlessly with other components of the software ecosystem. Several methods and resources help with HOW to test APIs — manual testing, automated testing, test environments, tools, libraries, and frameworks. However, regardless of what you will use — Postman, supertest, pytest, JMeter, mocha, Jasmine, RestAssured, or any other tools of the trade — before coming up with any test method you need to determine what to test.

What aspects of the API should we test?

Whether you’re planning test automation or manual testing, functional test cases consist of the same test actions, fall into the same broad test scenario categories, and belong to one of three kinds of test flows.

1. API Test Actions

Each test comprises a set of test actions: the individual checks the test performs within an API test flow. For each API request, the test should take the following actions:

1.1 Verify correct HTTP status code

The returned status code matches the specification. For example, 200 OK for GET requests; 201 Created for POST or PUT requests that create a new resource; 200, 202, or 204 for a DELETE operation; and so on.
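This check can be sketched as a small helper that compares a response’s status code against the codes the spec allows per method. The `Response` dataclass here is a stand-in; in a real suite the response would come from an HTTP client such as `requests`, and the expected-code table would mirror your own API spec.

```python
from dataclasses import dataclass

# Expected status codes per operation, following the spec described above.
EXPECTED_STATUS = {
    "GET": {200},
    "POST": {201},
    "PUT": {200, 201},
    "DELETE": {200, 202, 204},
}

@dataclass
class Response:
    """Stand-in for a real HTTP response object."""
    status_code: int

def assert_status(method: str, response: Response) -> None:
    allowed = EXPECTED_STATUS[method]
    assert response.status_code in allowed, (
        f"{method} returned {response.status_code}, expected one of {allowed}"
    )

assert_status("GET", Response(200))       # passes
assert_status("DELETE", Response(204))    # passes
```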

1.2 Verify response payload 

  • Check valid JSON body 
  • Check valid response structure according to data model (schema validation: correct field names and field types, including nested objects; correct values; non-nullable fields are not null, etc.) 
  • Verify that error description is correct for this error case and error format is according to specification
  • In addition, check the following parameters: filter, sort, skip and limit for filtering, sorting, and pagination
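The payload checks above can be sketched with a tiny hand-rolled validator; a real suite would more likely use a library such as `jsonschema` or `pydantic`, and the "user" schema below is purely illustrative.

```python
def validate(payload, schema):
    """Check field names, types, and non-nullable fields, including nested objects."""
    errors = []
    for field, expected in schema.items():
        if field not in payload:
            errors.append(f"missing field: {field}")
        elif isinstance(expected, dict):              # nested object: recurse
            errors.extend(validate(payload[field], expected))
        elif payload[field] is None:
            errors.append(f"non-nullable field is null: {field}")
        elif not isinstance(payload[field], expected):
            errors.append(f"wrong type for {field}: {type(payload[field]).__name__}")
    return errors

USER_SCHEMA = {"id": int, "name": str, "address": {"city": str}}

good = {"id": 7, "name": "Ada", "address": {"city": "London"}}
bad = {"id": "7", "name": None, "address": {}}

assert validate(good, USER_SCHEMA) == []
assert len(validate(bad, USER_SCHEMA)) == 3   # wrong type, null field, missing nested field
```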

1.3 Verify response headers. HTTP server headers have implications on both security and performance.

  • HTTP headers should include content-type, connection, cache-control, expires, access-control-allow-origin, keep-alive, HSTS and other standard header fields – according to specification.
  • HTTP headers should NOT include X-Powered-By
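A sketch of these header checks, with a plain dict standing in for a real response’s headers (header names are compared case-insensitively; the required set here is a minimal example, not a full spec):

```python
REQUIRED_HEADERS = {"content-type", "cache-control"}
FORBIDDEN_HEADERS = {"x-powered-by"}    # leaks implementation details

def check_headers(headers: dict) -> list:
    """Return a list of problems: required headers missing, forbidden ones present."""
    names = {name.lower() for name in headers}
    problems = [f"missing: {h}" for h in REQUIRED_HEADERS - names]
    problems += [f"forbidden: {h}" for h in FORBIDDEN_HEADERS & names]
    return problems

ok = {"Content-Type": "application/json", "Cache-Control": "no-store"}
leaky = {"Content-Type": "application/json", "X-Powered-By": "Express"}

assert check_headers(ok) == []
assert "forbidden: x-powered-by" in check_headers(leaky)
```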

1.4 Verify correct application state. This is optional and applies mainly to manual testing, or when a UI or another interface can be easily inspected.

  • For GET requests, there is NO STATE CHANGE in the system (GET is a safe, read-only method)
  • For POST, DELETE, PATCH, PUT operations: Ensure action has been performed correctly in the system by:
    • Performing appropriate GET request and inspecting response
    • Refreshing the UI in the web application and verifying new state (only applicable to manual testing)

1.5 Verify basic performance sanity
Response is received in a timely manner (within reasonable expected time) – as defined in the test plan.
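A minimal sanity check along these lines times the call and compares it to a budget taken from the test plan. Both the 0.5-second threshold and `fake_request` below are illustrative assumptions; a real test would time an actual HTTP call.

```python
import time

RESPONSE_TIME_BUDGET = 0.5    # seconds, as defined in the test plan

def fake_request():
    """Stand-in for a real HTTP call."""
    time.sleep(0.01)
    return 200

start = time.perf_counter()
status = fake_request()
elapsed = time.perf_counter() - start

assert status == 200
assert elapsed < RESPONSE_TIME_BUDGET, f"too slow: {elapsed:.3f}s"
```

Note this is a sanity check only; proper performance testing belongs in dedicated load tools such as JMeter.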

2. Test Scenario Categories

Our test cases fall into the following general test scenario groups:

2.1 Basic positive tests (happy paths)

Basic positive tests, often referred to as happy paths, represent the ideal flow of operations.

  • Essence: Ensuring that the system behaves as expected under perfect conditions.
  • Example: In an e-commerce application, a basic positive test would involve a user successfully adding an item to the cart, entering valid payment details, and completing the purchase.
  • Technical Insight: This involves validating HTTP status codes (expecting 200 OK), checking response payloads for correctness, and ensuring that database entries are created with the correct data.

Best Practice: Start with basic positive tests to validate the fundamental functionality before delving into more complex scenarios. Automate these tests to serve as smoke tests for your continuous integration pipelines.

2.2 Extended positive testing with optional parameters

Extended positive testing involves exploring the variations of success by incorporating optional parameters (filter, sort, skip, and limit for filtering, sorting, and pagination).

  • Essence: Validating that the system gracefully handles additional valid inputs.
  • Example: In a search engine, extended positive testing would involve using various filters, sorting options, and pagination and ensuring they yield the correct results.
  • Technical Insight: This involves sending requests with different combinations of query parameters and verifying that the response data is filtered, sorted, and paginated as expected.

Best Practice: Use extended positive testing to simulate diverse user interactions and validate the system’s versatility. Ensure that boundary values for optional parameters are tested to validate edge cases.
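One way to cover these combinations systematically is to generate the Cartesian product of the optional parameters and send one request per combination. The parameter names follow the filter/sort/skip/limit convention above; the values are illustrative.

```python
from itertools import product

filters = [None, "status:active"]
sorts = [None, "name", "-name"]
pages = [None, {"skip": 0, "limit": 10}]

combos = []
for f, s, p in product(filters, sorts, pages):
    params = {}                      # build the query-parameter set for one request
    if f: params["filter"] = f
    if s: params["sort"] = s
    if p: params.update(p)
    combos.append(params)

assert len(combos) == 12             # 2 x 3 x 2 combinations
assert {} in combos                  # includes the baseline request with no options
```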

2.3 Negative testing with valid input

Negative testing with valid input involves testing the system’s response to valid but out-of-scope inputs.

  • Essence: Ensuring the system appropriately handles and rejects valid inputs that do not meet specific criteria.
  • Example: In a booking system, trying to reserve more seats than are available while still using valid input data.
  • Technical Insight: This involves sending API requests that are technically valid but should be rejected based on business rules. It is essential to check that the system returns the appropriate error codes and messages.

Best Practice: Use this testing to ensure the system’s constraints are adequately enforced. This is crucial for maintaining data integrity and preventing unintended behavior.
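The booking example above can be sketched as follows. The `reserve` function is a hypothetical stand-in for a POST endpoint: the input is a perfectly valid integer, but the request violates a business rule, so it should be rejected with a distinct status and error message rather than a generic validation failure.

```python
AVAILABLE_SEATS = 3

def reserve(seats_requested):
    """Stand-in for a POST /reservations endpoint enforcing a business rule."""
    if not isinstance(seats_requested, int) or seats_requested < 1:
        return 400, {"error": "invalid input"}                 # malformed request
    if seats_requested > AVAILABLE_SEATS:
        return 409, {"error": "not enough seats available"}    # valid but rejected
    return 201, {"reserved": seats_requested}

status, body = reserve(5)    # a valid integer, but more seats than exist
assert status == 409
assert "error" in body
```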

2.4 Negative testing with invalid input

This involves testing the system’s resilience against invalid and unexpected inputs.

  • Essence: Ensuring the system is robust enough to handle erroneous inputs without crashing.
  • Example: In a registration form, entering symbols or numbers in a name field or pasting an image file or a million characters into an address field.
  • Technical Insight: This involves input fuzzing, where random and unexpected data is provided to test the robustness of input validation and error handling mechanisms.

Best Practice: Employ extensive input validation checks to ensure data integrity. Monitor logs for unhandled exceptions and ensure error messages do not expose sensitive system information.
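A tiny input-fuzzing sketch in this spirit: feed empty, oversized, wrongly typed, and random inputs to a validator and assert it rejects each one cleanly instead of crashing. The `validate_name` rule here is an assumption for illustration.

```python
import random
import string

def validate_name(value):
    """Accept only non-empty alphabetic strings (spaces allowed) up to 64 chars."""
    if not isinstance(value, str):
        return False
    return 0 < len(value) <= 64 and value.replace(" ", "").isalpha()

random.seed(0)   # deterministic fuzz inputs for reproducibility
fuzz_inputs = [
    "",                                   # empty string
    "1234", "@@!!",                       # digits and symbols
    "x" * 1_000_000,                      # a million characters
    None, 42, ["list"],                   # wrong types entirely
    "".join(random.choices(string.printable, k=50)),  # random noise
]

for bad in fuzz_inputs:
    assert validate_name(bad) is False, f"validator accepted: {bad!r}"
assert validate_name("Grace Hopper") is True
```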

2.5 Destructive testing

Destructive testing pushes the system to its limits, often to the point of failure.

  • Essence: Understanding the breaking points and ensuring the system fails safely.
  • Example: In a web application, handling extreme loads to observe at what point the system crashes and how it recovers.
  • Technical Insight: This involves stress testing, where the system is subjected to loads and conditions far beyond its operational requirements. The goal is to observe how the system handles extreme conditions and whether it can recover gracefully from failure.

Best Practice: Use destructive testing in a controlled environment to understand the system’s limits. Implement safeguards to ensure that data is not lost in the event of failure and the system can recover quickly.

2.6 Security, authorization, and permission tests

Security, authorization, and permission tests are the vigilant guardians that protect the sanctity of the system.

  • Essence: Ensuring that the system is impervious to unauthorized access and that users can only perform actions they are permitted to.
  • Example: In a document management system, ensuring that a user can only access documents they are authorized to view and cannot perform administrative actions unless they have the appropriate permissions.
  • Technical Insight: This involves testing authentication mechanisms (such as OAuth tokens), authorization checks (such as role-based access control), and input validation to prevent injection attacks.

Best Practice: Regularly update security libraries, perform penetration testing, and educate developers on secure coding practices. Implement fine-grained access controls and regularly audit permissions.
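A role-based access control check like the document-management example can be sketched as a permission table consulted before each action. The roles, actions, and status codes below are hypothetical.

```python
PERMISSIONS = {
    "viewer": {"read_document"},
    "editor": {"read_document", "edit_document"},
    "admin":  {"read_document", "edit_document", "delete_user"},
}

def authorize(role, action):
    """Return an HTTP-style status: 200 if the role may perform the action, else 403."""
    return 200 if action in PERMISSIONS.get(role, set()) else 403

assert authorize("viewer", "read_document") == 200
assert authorize("viewer", "delete_user") == 403      # privilege escalation blocked
assert authorize("unknown", "read_document") == 403   # unknown roles get nothing
```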

3. Test Flows

Let’s distinguish between three kinds of test flows which comprise our test plan:

3.1 Testing requests in isolation

Executing a single API request and checking the response accordingly. Such basic tests are the minimal building blocks we should start with, and there’s no reason to continue testing if these tests fail.

3.2 Multi-step workflow with several requests

Testing a series of requests which are common user actions, since some requests can rely on other ones.

For example, we execute a POST request that creates a resource and returns an auto-generated identifier in its response. We then use this identifier to check that the resource is present in the list of elements returned by a GET request. Then we use a PATCH endpoint to update the resource with new data, and again invoke a GET request to validate the change. Finally, we DELETE that resource and use GET once more to verify it no longer exists.
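The POST, GET, PATCH, GET, DELETE, GET flow just described can be sketched against a minimal in-memory stand-in for the API; a real test would issue the same sequence through an HTTP client.

```python
import itertools

class FakeApi:
    """In-memory stand-in for a REST resource, used only to illustrate the flow."""
    def __init__(self):
        self._items, self._ids = {}, itertools.count(1)

    def post(self, data):
        new_id = next(self._ids)              # auto-generated identifier
        self._items[new_id] = dict(data)
        return 201, {"id": new_id, **data}

    def get(self, item_id=None):
        if item_id is None:
            return 200, list(self._items)     # list of resource ids
        if item_id in self._items:
            return 200, self._items[item_id]
        return 404, None

    def patch(self, item_id, data):
        self._items[item_id].update(data)
        return 200, self._items[item_id]

    def delete(self, item_id):
        del self._items[item_id]
        return 204, None

api = FakeApi()
status, body = api.post({"name": "draft"})    # create; capture the generated id
assert status == 201
item_id = body["id"]
assert item_id in api.get()[1]                # resource appears in the list
api.patch(item_id, {"name": "final"})         # update with new data
assert api.get(item_id)[1]["name"] == "final" # the change is visible
api.delete(item_id)                           # remove the resource
assert api.get(item_id)[0] == 404             # and it is gone
```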

3.3 Combined API and web UI tests 

This is mostly relevant to manual testing, where we want to ensure data integrity and consistency between the UI and API. We execute requests via the API and verify the actions through the web app UI and vice versa.

The purpose of these integrity test flows is to ensure that although the resources are affected via different mechanisms the system still maintains expected integrity and consistent flow.

Conclusion​

REST API testing is a critical part of software development, ensuring the reliability, functionality, performance, and security of web services. This article has outlined the building blocks of a REST API test strategy: the test actions to perform on each request, the scenario categories every test case falls into, and the test flows that make up a test plan. A strategy built on these elements is essential for maintaining high-quality, dependable APIs in today’s dynamic software landscape.

Nguyen Kim Hue
