Web Application and Software Testing Techniques

Testing Web Applications

Types of Web Applications

  • Conventional client-server applications
  • Web service applications
  • Cloud computing applications

Issues in Testing Web Software

  • Web applications are loosely coupled and components are distributed.
  • Data pertaining to one application is scattered among different devices.
  • Heterogeneity of devices, configurations, and software systems.
  • Dynamic adaptation to local settings.

Web Application Testing Techniques

  1. Content Testing (Content Design)
    • Objectives
      • To uncover syntactic errors (e.g., typos, grammar mistakes) in text-based documents, graphical representations, and other media.
      • To uncover semantic errors (i.e., errors in the accuracy or completeness of information) in any content object presented as navigation occurs.
      • To find errors in the organization or structure of content that is presented to the end user.
    • Combines both reviews and the generation of executable test cases.
  2. Interface Testing (Interface Design, Aesthetic Design)
    • Verification and validation of a WebApp user interface.
    • Occurs at:
      • During communication and modeling.
      • During design.
      • During testing.
    • UI Testing Strategy
      • Test each interface feature.
      • Test each individual interface mechanism.
      • Test each interface mechanism within the context of a use case.
      • Test the complete interface against selected use cases.
      • Test the interface within a variety of environments.
    • Usability Testing
      • Determines the degree to which the WebApp interface makes the user’s life easy.
      • Evaluates:
        • The degree to which users can interact effectively with the WebApp.
        • The degree to which the WebApp guides users’ actions, provides meaningful feedback, and enforces a consistent interaction approach.
  3. Navigation Testing (Navigation Design)
  4. Component Testing (Component Design)
  5. Configuration Testing
    • To test a set of probable client-side and server-side configurations to ensure that the user experience will be the same on all of them and to isolate errors that may be specific to a particular configuration.
    • Server-side
      • Configuration test cases are designed to verify that the projected server configuration [i.e., WebApp server, database server, operating system(s), firewall software, concurrent applications] can support the WebApp without error.
    • Client-side
      • Configuration tests focus more heavily on WebApp compatibility with configurations that contain one or more permutations of the following components (Hardware, OS, Browser, UI components, Plug-ins, Connectivity).
  6. Performance Testing
    • Focuses on the operating characteristics of the WebApp and on whether those operating characteristics meet the needs of end users.
    • Security and performance testing address the three distinct elements of the WebApp infrastructure:
      • The server-side environment.
      • The network communication pathway.
      • The client-side environment.
  7. Security Testing
    • Focuses on preventing unauthorized access to WebApp content and functionality, along with other systems that cooperate with the WebApp on the server side.
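As a concrete illustration of configuration testing, the client-side permutations can be enumerated programmatically before test execution. This is a minimal sketch; the component values below are illustrative, not from the source.

```python
from itertools import product

# Hypothetical client-side component values to permute for configuration testing.
browsers = ["Chrome", "Firefox", "Edge"]
operating_systems = ["Windows", "Linux", "macOS"]
connectivity = ["broadband", "mobile"]

def configuration_matrix():
    """Yield every client-side configuration that should be exercised."""
    for browser, os_name, link in product(browsers, operating_systems, connectivity):
        yield {"browser": browser, "os": os_name, "connectivity": link}

configs = list(configuration_matrix())
print(len(configs))  # 3 browsers x 3 OSs x 2 links = 18 configurations
```

In practice the full cross product grows quickly, which is why configuration testing focuses on *probable* configurations and pairwise subsets rather than exhaustive enumeration.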

Gray-Box Testing Approach

  • Gray-Box testing incorporates elements of both black-box and glass-box (white-box) testing.
  • Gray-Box testing considers:
    • Testing outcome on the end user.
    • System-specific technical knowledge.
    • Operating environment.
    • Application design in the context of interoperability of system components.
  • Gray-box testing is the most appropriate testing technique for web application testing.

Service Composition

  • Web services are distributed across the network.
  • A separate composite web service is developed.
  • The composite web process enforces all the coordination and message handling.

Difficulties of Testing Web Services

  • Web services are distributed and are often third-party applications.
  • Web services can communicate peer-to-peer.
  • Web services find and bind to other web services dynamically.

Cloud Computing

The core idea is to provide a framework that facilitates sharing resources (infrastructure, software, applications, and business processes) on an on-demand basis.

Web Applications Characteristics

  • Are distributed applications.
  • Are loosely coupled.
  • Can have dynamic control flow.
  • Store state information in a more restricted/limited manner.
  • Integrate components at runtime dynamically.
  • Give the user the ability to change the control flow of the application.
  • Generating test cases for web applications demands knowledge of the runtime environment, the software architecture, the application logic to a certain extent, and user expectations.
  • A combination of black-box and white-box testing approaches is needed for testing web applications => Gray-box testing.

Database Testing

Tests are defined for each layer:

  1. Client layer – UI
  2. Server layer – WebApp
  3. Server layer – Data transformation
  4. Server layer – Data management
  5. Database layer – Data access
  6. Database
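The data-access layer (layer 5 above) can be tested in isolation by substituting an in-memory database for the real server. A minimal sketch using Python's standard-library sqlite3; the `UserDao` class and its methods are hypothetical stand-ins for a real data-access layer.

```python
import sqlite3

class UserDao:
    """Hypothetical data-access object under test."""
    def __init__(self, conn):
        self.conn = conn
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT)")

    def add_user(self, name):
        cur = self.conn.execute("INSERT INTO users (name) VALUES (?)", (name,))
        return cur.lastrowid

    def find_user(self, user_id):
        row = self.conn.execute(
            "SELECT name FROM users WHERE id = ?", (user_id,)).fetchone()
        return row[0] if row else None

# An in-memory database isolates data-access tests from a production server.
dao = UserDao(sqlite3.connect(":memory:"))
uid = dao.add_user("alice")
print(dao.find_user(uid))  # -> alice
```

This is the same idea DbUnit applies in the Java world: each test runs against a known, disposable database state.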

Tools for Testing Web Applications

  • A unit testing framework to test server-side code (PHPUnit, SimpleTest, etc.).
  • A framework such as DbUnit to test database-related functionality.
  • A tool such as Selenium to test user scenarios through the UI.

Test Automation

  • Entails automating manual testing by following a formalized testing process.
  • Saves money and time.
  • Increases accuracy.
  • Faster.
  • Can do certain things that manual testing can’t do:
    • Simulating heavy load.
    • Generating network traffic, etc.

What is Selenium?

  • Selenium is a toolkit for automating web application testing.
  • Selenium tests can be run on multiple browsers, and Selenium supports several languages, including Java, C#, PHP, and Python.
  • Assertion statements provide an efficient way of comparing expected and actual results, similar to JUnit; here, assertion statements can be used to test web content.
  • Selenium IDE (Firefox only)
  • WebDriver (multiple browsers)
  • RemoteWebDriver (executes test cases on a remote machine)
  • Selenium Grid (runs tests in parallel in different environments)

Locating and Performing Operations on WebElements

  • By.name()
  • By.id()
  • By.tagName()
  • By.className()
  • By.linkText()
  • By.partialLinkText()
  • By.xpath()
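Conceptually, each locator strategy matches elements by a tag name or attribute value. The toy sketch below (standard-library only, not Selenium itself) mimics how `By.id()`, `By.name()`, and `By.tagName()` select elements; the page markup and function names are illustrative.

```python
from html.parser import HTMLParser

class ToyLocator(HTMLParser):
    """Illustrative stand-in for Selenium's By strategies: collects the tags
    of elements whose attribute (or tag name) matches the requested value."""
    def __init__(self, attr, value):
        super().__init__()
        self.attr, self.value = attr, value
        self.matches = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if self.attr == "tag" and tag == self.value:
            self.matches.append(tag)          # analogue of By.tagName()
        elif attrs.get(self.attr) == self.value:
            self.matches.append(tag)          # analogue of By.id() / By.name()

def find_by(html, attr, value):
    parser = ToyLocator(attr, value)
    parser.feed(html)
    return parser.matches

page = '<form><input id="user" name="username"><input id="pwd" name="password"></form>'
print(find_by(page, "id", "user"))        # analogue of By.id("user")
print(find_by(page, "name", "password"))  # analogue of By.name("password")
```

Real WebDriver locators additionally support CSS selectors and XPath expressions, which can address elements by position and ancestry rather than by a single attribute.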

Selenium Verification Commands

  • verifyText
  • verifyTitle
  • verifyElementPresent
  • verifyValue
  • verifyTable

Selenium Assertions

Assertions are the same as verifications; the only difference is that when an assertion fails, the script aborts, whereas the script continues to run when a verification point fails.
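The verify-versus-assert distinction can be sketched as a "soft assertion" pattern: verifications record failures and continue, while assertions raise immediately. The `Verifier` class below is a hypothetical illustration, not Selenium's implementation.

```python
class Verifier:
    """Sketch of Selenium-style verify vs. assert semantics."""
    def __init__(self):
        self.failures = []

    def verify_equal(self, actual, expected, label=""):
        # A failed verification is recorded, and the script keeps running.
        if actual != expected:
            self.failures.append(f"{label}: expected {expected!r}, got {actual!r}")

    def assert_equal(self, actual, expected, label=""):
        # A failed assertion aborts the script immediately.
        if actual != expected:
            raise AssertionError(f"{label}: expected {expected!r}, got {actual!r}")

v = Verifier()
v.verify_equal("Hone", "Home", "page title")  # fails, but execution continues
v.verify_equal(200, 200, "status code")       # passes
print(len(v.failures))  # -> 1
# v.assert_equal("Hone", "Home") would raise here and abort the run
```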

Limitations of Selenium IDE

It is difficult to use Selenium IDE to check complex test cases involving dynamic content.

Main Differences Between Component-Based and SOA Applications

  • Component-based systems are usually tightly-coupled.
  • Web service applications are loosely coupled, and the implementation/design is not available most of the time.
  • Web services can be dynamically bound at runtime.

Levels of Testing in SOA Applications

  • Web service level
    • Availability
    • Response time
    • Correctness of output
    • Other quality factors …
  • Composition level
    • Sequence [FSM, PN]
    • Selection [DT, FSM, PN]
    • Repetition [FSM, PN]
    • Multiple-causes of output [DT, FSM, PN]
    • Mutual Exclusion [PN]
    • Concurrency [PN]
    • Deadlock [PN]

Basic Logical Constructs in Web Service Composition

  • Sequence
  • And split
  • And join
  • Or split
  • Or Join
  • Loop
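These constructs can be illustrated with a toy composite process. In the sketch below, the service functions are hypothetical stand-ins for real web service calls: the AND split invokes two services concurrently, the AND join waits for both, and the selection (OR split) chooses a branch based on the joined result.

```python
from concurrent.futures import ThreadPoolExecutor

def check_inventory(item):       # hypothetical web service 1
    return True

def check_credit(customer):      # hypothetical web service 2
    return True

def place_order(item, customer):
    # AND split: invoke both services concurrently.
    with ThreadPoolExecutor() as pool:
        inv = pool.submit(check_inventory, item)
        cred = pool.submit(check_credit, customer)
        # AND join: wait for both branches before continuing.
        ok = inv.result() and cred.result()
    # OR split (selection): choose one branch based on the joined result.
    return "confirmed" if ok else "rejected"

print(place_order("book", "alice"))  # -> confirmed
```

Test cases at the composition level would exercise each branch of the selection and the interleavings of the concurrent branches, which is what the FSM- and Petri-net-based techniques above model.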

Testing Object-Oriented Applications and Integration Testing

Fundamental Concepts of Object-Oriented Applications

  • Abstraction
    • Information hiding
    • Inheritance and polymorphism
  • Modularity
    • Functional independence
  • Architecture
    • Patterns

Why Information Hiding?

  • Reduces the likelihood of “side effects.”
  • Limits the global impact of local design decisions.
  • Emphasizes communication through controlled interfaces.
  • Discourages the use of global data.
  • Leads to encapsulation—an attribute of high-quality design.
  • Results in higher quality software.

Coupling and Cohesion

  • Cohesion is the degree to which all elements of a component are directed towards a single task, and all elements directed towards that task are contained in a single component.
  • Coupling is the degree to which a module is “connected” to other modules in the system.
  • A well-defined object exhibits high cohesion and low coupling.
  • High coupling makes modifying parts of the system difficult.
  • Low cohesion increases the complexity of software.

Types of Coupling (1:high, 5:low)

  1. Content Coupling: One module directly accesses or modifies the inner workings of another module.
  2. Common Coupling: Two modules share a global data item.
  3. Control Coupling: Data in one module is used to determine the order of execution of another module.
  4. Stamp Coupling: One module passes non-global data structures/objects to another module.
  5. Data Coupling: One module passes elementary data types such as int, float, char as parameters to the other.
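Three of these coupling types can be sketched with small functions; the function names and data are hypothetical illustrations.

```python
# Data coupling: only elementary values cross the module boundary.
def area(width, height):
    return width * height

# Stamp coupling: a whole data structure is passed,
# even though only two of its fields are used.
def describe(rectangle):
    return f"{rectangle['w']}x{rectangle['h']}"

# Control coupling: a flag set by the caller dictates
# which execution path the callee takes.
def render(shape, as_outline):
    if as_outline:
        return "outline"
    return "filled"

print(area(3, 4))                  # -> 12
print(describe({"w": 3, "h": 4}))  # -> 3x4
print(render("rect", True))        # -> outline
```

Data coupling is preferred because the callee's interface states exactly what it needs; stamp and control coupling hide dependencies inside structures and flags.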

Type of Objects and Their Role in the Design

  • Boundary classes: Represent the UI and other communication with external software and hardware.
  • Entity classes: Model data (file access, data structures, database access, etc.).
  • Control classes: Contain the main logic of the system and typically sit between boundary classes and entity classes.

MVC Architecture

  • The main idea behind MVC architecture is separation of concerns.
  • Using a Model-View-Controller design that separates Views and Controllers into separate classes allows automated testing of Controller logic.
  • In General:
    • Model objects which represent the data.
    • View objects which handle the display.
    • Control objects which handle events that modify the View or Model objects.
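A minimal MVC sketch shows why the separation enables automated testing: the controller can be driven and inspected without rendering any view. The class names below are illustrative.

```python
class CounterModel:                  # Model: represents the data
    def __init__(self):
        self.count = 0

class CounterView:                   # View: handles display only
    def render(self, model):
        return f"Count: {model.count}"

class CounterController:             # Controller: handles events that modify the Model
    def __init__(self, model):
        self.model = model

    def on_increment(self):          # an event handler
        self.model.count += 1

model = CounterModel()
controller = CounterController(model)
controller.on_increment()
controller.on_increment()
# Controller logic is testable with no view attached:
print(model.count)                   # -> 2
print(CounterView().render(model))   # -> Count: 2
```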

Object-Oriented Design Metrics (C-K Metrics)

  • Weighted methods per class (WMC)
    • Number of methods weighted by their procedural complexity.
    • Difficult to set exact limits for metric, but WMC should be kept as low as possible.
  • Depth of inheritance (DIT)
    • This is the distance from the class to the root of the inheritance tree.
    • As DIT grows, it is likely that classes on a lower level inherit lots of methods and override some. Thus, predicting behavior for an object of a class becomes difficult.
  • Number of children (NOC)
    • This metric measures the number of direct subclasses of a class.
    • A high value of NOC might indicate misuse of subclassing (=implementation inheritance instead of is-a relationship).
    • A class with a very high NOC might be a candidate for refactoring to create a more maintainable hierarchy.
  • Response for a class (RFC)
    • Number of methods that can be invoked in response to a message/call.
    • High RFC -> there could be a better class subdivision (e.g., merge classes).
  • Coupling between object classes (CBO)
    • Class X is coupled to class Y iff X uses Y’s methods or instance variables (this includes inheritance-related coupling).
    • High coupling between classes means modules depend on each other too much.
    • High coupling makes maintenance more difficult since changes in a class might propagate to other parts of the software.
  • Lack of cohesion in methods (LCOM):
    • Cohesion measures ‘togetherness’ of a class: high cohesion means good class subdivision (encapsulation).
    • LCOM counts the sets of methods that are not related through the sharing of some of the class’s instance variables.
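One common formulation of LCOM counts method pairs that share no instance variables (P) against pairs that share at least one (Q), with LCOM = max(P − Q, 0). A sketch under that assumption; the example class and its methods are hypothetical.

```python
from itertools import combinations

def lcom(method_vars):
    """LCOM sketch: method_vars maps each method name to the set of
    instance variables it uses. LCOM = max(P - Q, 0), where P counts
    method pairs sharing no variables and Q counts pairs sharing one or more."""
    p = q = 0
    for m1, m2 in combinations(method_vars.values(), 2):
        if m1 & m2:
            q += 1
        else:
            p += 1
    return max(p - q, 0)

# Hypothetical class: two methods share `balance`; a third touches only `log`.
methods = {
    "deposit":  {"balance"},
    "withdraw": {"balance"},
    "audit":    {"log"},
}
print(lcom(methods))  # P=2, Q=1 -> LCOM = 1
```

A nonzero LCOM here suggests `audit` belongs in a separate class, which is exactly the refactoring signal the metric is meant to give.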

OO Testing Hierarchy

  • System Level [System Level StateChart, Use-cases, Sequence diagrams, Collaboration diagrams]
    • The main difference between integration testing and system testing is that integration test cases are developed with the knowledge of objects, their implementation, and integration among them.
    • System testing, on the other hand, focuses on the port of inputs, port of outputs, and system-level threads.
    • StateChart Based Testing
      • A state chart is a collection of “blobs” that represent state and arrows that represent transitions.
  • Integration [Top-down, Bottom-up, Sandwich, Mock Objects, Integration Level StateChart]
    • Top-down
      • Modules are integrated by moving downward.
      • Stubs are replaced one at a time.
    • Bottom-up
      • Drivers are replaced one at a time.
    • Sandwich
    • Integrate methods into a class.
    • Integrate classes into other classes.
  • Class Level [Class Level StateChart, Object flattening, Use-cases]
  • Method Level [Method as a unit test, Control and Data path testing]
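Statechart-based testing at the class or integration level can be sketched as driving a state machine along event sequences that cover every transition. The session states and events below are a hypothetical example.

```python
# (state, event) -> next state; a toy statechart for a hypothetical session object.
TRANSITIONS = {
    ("logged_out", "login"):  "logged_in",
    ("logged_in",  "logout"): "logged_out",
    ("logged_in",  "lock"):   "locked",
    ("locked",     "unlock"): "logged_in",
}

def run(events, state="logged_out"):
    """Drive the machine; an event invalid in the current state raises,
    exposing missing or faulty transitions."""
    for event in events:
        state = TRANSITIONS[(state, event)]
    return state

# One test sequence covering all four transitions:
print(run(["login", "lock", "unlock", "logout"]))  # -> logged_out
```

Transition coverage (every arrow taken at least once) is the usual minimum adequacy criterion for this kind of test.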

Static Testing & Analysis

Software Review

  • Management reviews
    • A systematic evaluation of a software acquisition, supply, development, operation, or maintenance process performed by or on behalf of management that monitors progress, determines the status of plans and schedules, confirms requirements and their system allocation, or evaluates the effectiveness of management approaches used to achieve fitness for purpose.
  • Technical reviews
    • Evaluate the product to determine its suitability for its intended use. Identify discrepancies from specification and standards.
  • Inspections
    • A formal evaluation technique in which software requirements, design, or code are examined in detail by a person or group other than the author to detect faults, violations of development standards, and other problems [IEEE94]. Here, documents or code are provided to reviewers beforehand.
  • Walk-throughs
    • Another formal process in which the author of the code formally presents the requirements, design, or code to a small group of reviewers. Reviewers listen and ask questions.
  • Audits
    • An independent examination of a software product, software process, or set of software processes to assess compliance with specifications, standards, contractual agreements, or other criteria.

Review Reference Model

  • Roles
  • Planning & preparation
  • Meeting structure
  • Correction & verification

Formal Technical Reviews (FTR)

  • To uncover errors in function, logic, or implementation for any representation of the software.
  • To verify that the software under review meets its requirements.
  • To ensure that the software has been represented according to predefined standards.
  • To achieve software that is developed in a uniform manner.
  • To make projects more manageable.
  • The FTR is actually a class of reviews that includes walkthroughs and inspections.

What Do We Review?

  • Requirements
  • Specification
  • Design
  • Code
  • User guides
  • Test cases

Code Review Checklist

  • Code writing errors
    • Data declaration errors
    • Computation errors
    • Comparison errors
    • Control flow errors
  • Smelly code symptoms
    • Duplicated Code
    • Long Method
    • Large Class
    • Long Parameter List
    • Temporary Field
  • Code Complexity
    • One of the commonly used criteria is cyclomatic complexity:
      • 1–10: simple module, little risk.
      • 11–20: moderate complexity, moderate risk.
      • 21–50: complex module, high risk.
      • > 50: almost untestable, very high risk.
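Cyclomatic complexity can be approximated as the number of decision points plus one. The rough sketch below counts common branching constructs in Python source via the standard-library `ast` module; production tools are more thorough.

```python
import ast

def cyclomatic_complexity(source):
    """Rough sketch: decision points + 1 for a piece of Python source.
    Counts if/for/while statements, exception handlers, and boolean operators."""
    tree = ast.parse(source)
    decisions = sum(
        isinstance(node, (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.BoolOp))
        for node in ast.walk(tree))
    return decisions + 1

code = """
def classify(n):
    if n < 0:
        return "negative"
    for d in range(2, n):
        if n % d == 0:
            return "composite"
    return "prime-ish"
"""
print(cyclomatic_complexity(code))  # two ifs + one for -> 4
```

A value of 4 sits comfortably in the 1–10 "little risk" band of the checklist above.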

Various Approaches to Software Testing

  • Software Testing
    • Static Testing
      • Reviews
      • Inspections
      • Walkthroughs
    • Dynamic Testing
      • Functional (black box)
        • Specification based
          • Boundary value analysis
          • Equivalent classes
          • Decision table
          • State-based testing
          • Cause and effect
          • Model-based testing
        • Exploratory
      • Structural (white box)
        • Code coverage
        • Data Path
        • Slicing
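Boundary value analysis, one of the specification-based techniques above, can be sketched as generating the values just below, at, and just above each boundary of a valid input range. The function under test is a hypothetical example.

```python
def boundary_values(lo, hi):
    """Classic boundary value analysis for an integer input range [lo, hi]:
    test just below, at, and just above each boundary."""
    return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]

def accepts_age(age):
    """Hypothetical function under test: ages 18..65 are valid."""
    return 18 <= age <= 65

cases = boundary_values(18, 65)
print(cases)                              # -> [17, 18, 19, 64, 65, 66]
print([accepts_age(a) for a in cases])    # -> [False, True, True, True, True, False]
```

Off-by-one faults (e.g., writing `<` instead of `<=`) are caught precisely by the `lo` and `hi` cases, which is why boundaries are tested rather than arbitrary interior values.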

Functional vs. Structural Testing

  • Functional testing uses the specification while the structural approach uses the program code in designing test cases.
  • Functional testing checks whether the software does the right thing; structural testing checks whether the software does things right.

Software Quality Assurance Stages

  • Defect prevention through error blocking.
  • Defect reduction through fault detection and removal.
  • Defect containment through failure prevention and containment.

Quality Assurance Activities in the Waterfall Model

  • Defect prevention [Requirement & specification, Design, Coding]
  • Defect removal [Coding, Testing]
  • Defect containment [Release & support]

Configuration Management

Is concerned with the policies, processes, and tools for managing changing software systems.

CM Activities

  • Change management
    • Keeping track of requests for changes to the software from customers and developers, working out the costs and impact of changes, and deciding which changes should be implemented.
  • Version management
    • Keeping track of the multiple versions of system components and ensuring that changes made to components by different developers do not interfere with each other.
  • System building
    • The process of assembling program components, data, and libraries, then compiling these to create an executable system.
  • Release management
    • Preparing software for external release and keeping track of the system versions that have been released for customer use.

SVN (Subversion)

  • SVN is a centralized version control system.
  • Only one master copy.
  • All clients commit to the master copy once changes are made.

Git

Git supports both centralized version control and distributed version control (DVC).