What are non-acceptance service tests?
Understanding Non-Acceptance Service Testing
Introduction to Non-Acceptance Service Testing
- The discussion begins with an introduction to a specific type of testing known as non-acceptance service testing, which is distinct from acceptance testing.
- This type of test exercises a system's services directly, stimulating the server and its available services rather than going through auxiliary classes or tools.
Characteristics of Non-Acceptance Tests
- The term "non-acceptance" indicates that these tests do not guarantee that the system delivery will be accepted, as they are not derived from explicit requirements gathered through user interviews.
- Instead, these tests focus on implicit requirements that must be satisfied for explicit requirements to be met; however, they are not explicitly requested by clients or stakeholders.
Implicit vs. Explicit Requirements
- The distinction between implicit and explicit requirements is crucial; non-acceptance service tests emphasize implicit needs rather than those clearly defined in scenarios.
- In this context, non-acceptance service tests can also encompass functional tests at various levels: unit, integration, or system level.
Test Execution and Guarantees
- When a test runs with its dependencies replaced (for example, by test doubles), it can be classified as a unit non-acceptance service test; when it runs against the real dependencies, it becomes an integration non-acceptance service test.
- The guarantees provided by these tests can include regression checks (ensuring existing functionalities remain intact after changes) or smoke tests (verifying basic functionalities).
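The unit/integration distinction above can be sketched as the same test body run against a test double versus a real dependency. This is a minimal illustration; all names here (makeService, findAll, the repository objects) are assumptions, not code from the course.

```javascript
// A service whose behavior depends on an injected repository.
function makeService(repository) {
  return { count: () => repository.findAll().length };
}

// Unit-level non-acceptance test: the dependency is a test double.
const fakeRepo = { findAll: () => [] };
const unitResult = makeService(fakeRepo).count();

// Integration-level: the same check, but against the real dependency
// (here just a stand-in; imagine it queries an actual database).
const realRepo = { findAll: () => [] };
const integrationResult = makeService(realRepo).count();

console.log(unitResult, integrationResult);
```

Either way, the assertion itself is identical; only the environment the test runs in changes its classification.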
Example of Non-Acceptance Service Testing
- A concrete example illustrates how such a test operates: sending a request to the server and inspecting the response to verify correctness.
- Unlike acceptance tests where requirements are explicitly stated by stakeholders, here developers check conditions like whether an initial list of students returns empty—an implicit requirement.
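The "empty initial list" check can be sketched roughly as follows. The handler and response shape are assumptions standing in for a real HTTP request to the server, not the course's actual code.

```javascript
// In-memory stand-in for the server's student storage.
const students = [];

// Stand-in for the server's "list students" service endpoint.
function listStudents() {
  return { status: 200, body: students };
}

// Non-acceptance service test: the implicit requirement that the
// initial list of students is empty. No stakeholder asked for this
// check; the developer adds it because later requirements depend on it.
const response = listStudents();
if (response.status !== 200 || response.body.length !== 0) {
  throw new Error("expected an empty initial list of students");
}
console.log("initial student list:", JSON.stringify(response.body));
```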
Key Differences Between Acceptance and Non-Acceptance Tests
- The fundamental difference lies in the origin of the requirement being verified: acceptance tests stem from explicit stakeholder requests while non-acceptance focuses on developer-driven checks.
Implementation Details
- An example code snippet demonstrates sending requests to verify responses against expected outcomes without any specific setup mentioned initially.
Error Handling in Tests
- If a request fails or does not return successfully, error handling mechanisms ensure that the test fails appropriately based on predefined conditions.
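The failure behavior described above can be sketched with a tiny runner: if the request does not return successfully, the test body throws and the test is reported as failed. The sendRequest helper and its routes are hypothetical, simulating a server rather than contacting one.

```javascript
// Simulated request helper: only the /students route exists.
function sendRequest(path) {
  if (path === "/students") return { ok: true, status: 200, body: [] };
  return { ok: false, status: 404, body: null };
}

// Minimal test runner: a thrown error marks the test as failed.
function runTest(name, fn) {
  try {
    fn();
    console.log("PASS", name);
    return true;
  } catch (err) {
    console.log("FAIL", name, "-", err.message);
    return false;
  }
}

const passed = runTest("GET /students returns successfully", () => {
  const res = sendRequest("/students");
  if (!res.ok) throw new Error(`request failed with status ${res.status}`);
});
```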
Conclusion on Testing Dynamics
Understanding Test Suites and Acceptance Testing
Overview of Test Suites
- The test clause declares an individual test within the suite, which groups a set of related tests; the examples distinguish between responses that are empty and responses that carry content.
- The initial test checks that the list of students starts out empty; the test itself documents this intent, even though it does not trace back to any requirements document.
Specific Test Examples
- A specific test checks if the student registration service only accepts valid JSON objects representing students, ensuring server-side validation.
- The request can use various HTTP methods; here, it sends a JSON object with fields such as name and CPF (the Brazilian individual taxpayer ID), which are required for a valid student registration.
Error Handling in Tests
- Sending an invalid JSON object should trigger an error response from the server, demonstrating expected behavior when incorrect data is submitted.
- If a successful response is received instead of an error, it indicates a problem with server validation processes.
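The validation behavior described above can be sketched as follows. The field names (name, cpf) come from the text, but the specific validation rules, status codes, and function name are assumptions for illustration.

```javascript
// Stand-in for the server's student registration service.
// Rejects anything that is not a well-formed student object.
function registerStudent(payload) {
  if (typeof payload !== "object" || payload === null) {
    return { status: 400, error: "body must be a JSON object" };
  }
  if (typeof payload.name !== "string" || payload.name.length === 0) {
    return { status: 400, error: "name is required" };
  }
  if (typeof payload.cpf !== "string" || !/^\d{11}$/.test(payload.cpf)) {
    return { status: 400, error: "cpf must be an 11-digit string" };
  }
  return { status: 201, body: payload };
}

const valid = registerStudent({ name: "Ana", cpf: "12345678901" });
const invalid = registerStudent({ name: "" }); // missing/invalid fields

console.log(valid.status, invalid.status); // 201 and 400 under these assumed rules
```

If the invalid payload came back with a success status instead of an error, the test would fail, signaling a problem in the server's validation.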
Developer Assurance through Testing
- Developers rely on these tests to ensure that servers perform necessary validations without needing explicit checks in their code.
- This approach lets developers trust that the server performs the expected validations, even though those expectations were never stated explicitly by clients or stakeholders.
Setup and Execution of Tests
- The discussion includes clauses like beforeAll, which executes setup actions before any test in the suite runs, such as starting up the server.
- Using beforeAll ensures consistent environment preparation across all tests, without redundant setup in each individual test case.
Cleanup Procedures Post-Test
- After all tests have executed, cleanup procedures can be implemented using afterAll, which may involve shutting down the server or clearing the database to prevent contamination of future tests.
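The suite-level setup and cleanup can be sketched with a tiny hand-rolled runner; the runner below merely stands in for a framework like Jest, whose beforeAll/afterAll hooks behave the same way, and the server flag is an assumption in place of a real server process.

```javascript
// Minimal stand-in for a test framework's hook machinery.
const hooks = { before: [], after: [] };
const tests = [];

const beforeAll = (fn) => hooks.before.push(fn);
const afterAll = (fn) => hooks.after.push(fn);
const test = (name, fn) => tests.push({ name, fn });

let serverRunning = false;

beforeAll(() => { serverRunning = true; });   // e.g. start the server once
afterAll(() => { serverRunning = false; });   // e.g. shut it down / clear the DB

test("server is available to every test", () => {
  if (!serverRunning) throw new Error("server not started");
});

// Execution order: setup once, then every test, then cleanup once.
hooks.before.forEach((fn) => fn());
const results = tests.map((t) => {
  try { t.fn(); return true; } catch { return false; }
});
hooks.after.forEach((fn) => fn());

console.log("results:", results, "server running after suite:", serverRunning);
```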
Understanding Code Execution in Testing
Specific Setup and Code Execution
- Setup that is specific to a single test is written directly in the test body, before the request is sent; likewise, any necessary cleanup actions go at the end of the test, after the expected results have been verified.
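A per-test setup/cleanup layout can be sketched as below. The in-memory database and the scenario (removing a student) are illustrative assumptions, not the course's actual test.

```javascript
// In-memory stand-in for the server's database.
const db = [];

function testRemovesStudent() {
  // Setup specific to this test: insert the record the test needs.
  db.push({ name: "Ana", cpf: "12345678901" });

  // Exercise the service (here simulated by mutating db directly).
  const index = db.findIndex((s) => s.cpf === "12345678901");
  db.splice(index, 1);

  // Verify the expected result.
  if (db.some((s) => s.cpf === "12345678901")) {
    throw new Error("student not removed");
  }

  // Cleanup at the end, so later tests start from a clean state.
  db.length = 0;
}

testRemovesStudent();
console.log("db size after test:", db.length);
```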
Distinction Between Test Types