Basics Of Manual Testing

"Automation is a part, but Manual is Heart of testing"

What is software testing?

Software Testing is the process of executing a program or system with the intent of finding errors.

Software testing is the process used to help identify the correctness, completeness, security, and quality of the developed computer software.


It is the process of evaluating a software application or program to find the difference between the actual results and the expected results.

Software testing has three main purposes:

  1. Verification
  2. Validation and
  3. Defect finding.

The verification process confirms that the software meets its technical specifications and user requirements. It is a process-oriented activity.

The Defect is a variance between the expected and actual result. The defect's ultimate source may be traced to a fault introduced in the specification, design, or development (coding) phases.

Describe the difference between validation and verification

Verification is done by frequent evaluation and meetings to appraise the documents, policy, code, requirements, and specifications. This is done with the checklists, walkthroughs, and inspection meetings.

Validation is done during actual testing, and it takes place after all the verifications are done.

Difference between Test case and Use case?

Use cases are prepared by business analysts from the functional requirement specification (FRS) according to the user requirements.

Test cases are prepared by the test engineer based on the use case. A test case is a set of steps that guides a tester to execute a test.

Testing Methodology?

It means the kind of approach followed while testing, e.g. functional testing, regression testing, retesting, confirmation testing.

Exploratory Testing:

Without knowledge of the requirements, testing is done by exploring the application and giving random inputs.

Ad-Hoc testing:

Testing without a formal test plan or outside of a test plan.

Bug life cycle:

It has the following life cycle such as:

  • New: When the bug is posted for the first time, its state is New.
  • Open: After the tester reports the bug, the lead checks whether it is genuine; if so, the state is Open.
  • Assign: After checking, the lead assigns the bug to a developer; that state is Assign.
  • Test: Before the developer releases the software with the bug fixed, he changes the state of the bug to "Test".
  • Fixed: When the developer has resolved the bug, the status is Fixed.
  • Reopen: If the bug still exists even after it is fixed by the developer, the tester changes the status to Reopen.
  • Closed: If the bug no longer exists, the status is Closed.
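The flow above can be sketched as a small state machine. The transition rules below are an assumption for illustration, since every team configures its own workflow in its bug-tracking tool.

```python
# Bug life cycle as a state machine; the allowed transitions are an
# assumed, simplified version of the states listed above.
ALLOWED = {
    "New": {"Open"},
    "Open": {"Assign"},
    "Assign": {"Test"},
    "Test": {"Fixed"},
    "Fixed": {"Reopen", "Closed"},
    "Reopen": {"Assign"},
    "Closed": set(),
}

def transition(current, target):
    """Move a bug to a new state, rejecting invalid transitions."""
    if target not in ALLOWED[current]:
        raise ValueError(f"cannot move bug from {current} to {target}")
    return target

# Walk one bug through a fix-and-verify path.
state = "New"
for nxt in ["Open", "Assign", "Test", "Fixed", "Closed"]:
    state = transition(state, nxt)
print(state)  # Closed
```

A bug that fails verification would instead go Fixed → Reopen → Assign and loop until it can be Closed.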


The V-model is a model in which verification and validation proceed in parallel. As soon as we get the requirement from the customer, verification is done on the left side of the V and validation on the right side.


For a short-duration project (say, 6 months) the Waterfall model is followed; for a longer-duration project the V-model is followed. The Waterfall model is much simpler than the V-model.

Test plan:

The test plan specifies the process and scheduling of testing an application. The test lead prepares the test plan document based on what to test, how to test, when to test, and who will test. It covers the entire testing activity.


Software Requirement Specification (SRS): It describes what the software will do and how it will be expected to perform.

Requirement Traceability Matrix (RTM):

It is the mapping between customer requirements and prepared test cases. This is used to find whether all the requirements are covered or not.
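A minimal sketch of an RTM as a mapping from requirements to test cases; the requirement and test case IDs below are hypothetical.

```python
# A toy Requirement Traceability Matrix: requirement IDs mapped to the
# test case IDs that cover them (all IDs are made up for illustration).
rtm = {
    "REQ-001": ["TC-01", "TC-02"],
    "REQ-002": ["TC-03"],
    "REQ-003": [],            # no test case yet: a coverage gap
}

# The RTM check: every requirement must map to at least one test case.
uncovered = [req for req, cases in rtm.items() if not cases]
print(uncovered)  # ['REQ-003']
```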

Different Levels of testing?

  • Unit Testing
  • Integrated Testing
  • System Testing
  • Acceptance Testing

Unit Testing = (Testing the individual modules)

The testing done on a unit, the smallest piece of software, to verify that it satisfies its functional specification or its intended design structure.

The tools used in unit testing are debuggers and tracers, and it is done by programmers.
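As a sketch, here is what a programmer-written unit test looks like with Python's unittest module; the discount function is a hypothetical unit under test.

```python
import unittest

def discount(price, percent):
    """Hypothetical unit under test: apply a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (100 - percent) / 100

class DiscountTest(unittest.TestCase):
    def test_typical_value(self):
        self.assertEqual(discount(200, 25), 150)

    def test_boundaries(self):
        # Exercise the edges of the valid range, not just the middle.
        self.assertEqual(discount(100, 0), 100)
        self.assertEqual(discount(100, 100), 0)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            discount(100, 120)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(DiscountTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```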

Integration Testing

Testing the related modules together for its combined functionality.

System Testing

System testing of software or hardware is testing conducted on a complete, integrated system to evaluate the system's compliance with its specified requirements. System testing falls within the scope of black box testing, and as such, should require no knowledge of the inner design of the code or logic.

Testing the software for the required specifications.

System integration testing

System integration testing is the process of verifying the interaction between two or more software systems; it is performed after the individual systems have been integrated.

User Acceptance Testing = (Testing done with the intent of confirming readiness of the product and customer acceptance.)

Testing conducted to determine whether or not a system satisfies its acceptance criteria and to enable the customer to determine whether or not to accept the system. It is done against requirements and is done by actual users.

Acceptance Testing

Formal testing conducted to determine whether or not a system satisfies its acceptance criteria, which enables a customer to determine whether to accept the system or not.

Compatibility testing

Compatibility testing, part of software non-functional tests, is testing conducted on the application to evaluate the application's compatibility with the computing environment. Computing environment may contain some or all of the below mentioned elements:

  • Computing capacity of Hardware Platform (IBM 360, HP 9000, etc.)
  • Bandwidth handling capacity of networking hardware
  • Compatibility of peripherals (Printer, DVD drive, etc.)
  • Operating systems (MVS, UNIX, Windows, etc.)
  • Database (Oracle, Sybase, DB2, etc.)
  • Other System Software (Web server, networking/ messaging tool, etc.)
  • Browser compatibility (Firefox, Netscape, Internet Explorer, Safari, etc.)

Installation Testing

System testing conducted once again according to the hardware configuration requirements. Installation procedures may also be verified.

Functional Testing

It checks that the functional specifications are correctly implemented. It can also check whether non-functional behavior is as per expectations.

Stress testing

To evaluate a system beyond the limits of the specified requirements or system resources (such as disk space, memory, or processor utilization) to ensure the system does not break unexpectedly.

Load Testing

Load Testing, a subset of stress testing, verifies that a web site can handle a particular number of concurrent users while maintaining acceptable response times.

Scalability testing is used to check whether the functionality and performance of a system can scale to meet volume and size changes as per the requirements.

Scalability testing can be done as a load test with various software and hardware configurations changed, while the testing environment settings are kept unchanged.

Regression Testing = (Testing the application to find whether the change in code affects anywhere in the application)

Regression Testing is "selective retesting of a system or component to verify that modifications have not caused unintended effects". It is repetition of tests intended to show that the software's behavior is unchanged. It can be done at each test level.

Performance Testing

To evaluate the time taken or response time of the system to perform its required functions, in comparison with the performance requirements.

ALPHA TESTING: Testing done near the completion of the project.

Testing of a software product or system conducted at the developer's site by the customer

BETA TESTING: Testing done after the completion of the project.

Testing conducted at one or more customer sites by the end user of a delivered software product system.

Usability Testing = (Testing the ease with which users can learn and use a product.)

Usability testing is a technique used to evaluate a product by testing it on users. This can be seen as an irreplaceable usability practice, since it gives direct input on how real users use the system. This is in contrast with usability inspection methods where experts use different methods to evaluate a user interface without involving users.


It evaluates the Human Computer Interface. Verifies for ease of use by end-users. Verifies ease of learning the software, including user documentation. Checks how effectively the software functions in supporting user tasks. Checks the ability to recover from user errors.

Data Flow Testing

Selects test paths according to the locations of definitions and uses of variables.

Loop Testing

Loops are fundamental to many algorithms. Loops can be classified as simple, concatenated, nested, and unstructured.



Note that unstructured loops are not to be tested. Rather, they are redesigned.
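A common loop-testing tactic for a simple loop is to exercise it at zero, one, a typical number, and the maximum number of passes. A sketch, with a hypothetical function:

```python
def sum_first(values, n):
    """Simple loop under test (hypothetical): sum the first n values."""
    total = 0
    for i in range(min(n, len(values))):
        total += values[i]
    return total

data = [10, 20, 30, 40]
# Exercise the loop at its boundaries: zero, one, typical, and more
# passes than there are elements.
checks = [sum_first(data, 0), sum_first(data, 1),
          sum_first(data, 2), sum_first(data, 10)]
print(checks)  # [0, 10, 30, 100]
```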

Configuration Testing

It is used when software is meant for different types of users. It also checks whether the software performs correctly for all of them.

Recovery Testing

It is used to verify the software's restart capabilities after a "disaster".


Recovery testing is the activity of testing how well an application is able to recover from crashes, hardware failures and other similar problems.

Examples of recovery testing:

  1. While an application is running, suddenly restart the computer, and afterwards check the validity of the application's data integrity.
  2. While an application is receiving data from a network, unplug the connecting cable. After some time, plug the cable back in and analyze the application's ability to continue receiving data from the point at which the network connection disappeared.
  3. Restart the system while a browser has a definite number of sessions. Afterwards, check that the browser is able to recover all of them.

Security Testing

Security testing is a process to determine that an information system protects data and maintains functionality as intended.


Security testing is the process that determines that confidential data stays confidential


Testing how well the system protects against unauthorized internal or external access, willful damage, etc.

This process involves functional testing, penetration testing and verification.

  • Test Plan: Test Plan is a document with information on Scope of the project, Approach, Schedule of testing activities, Resources or Manpower required, Risk Issues, Features to be tested and not to be tested, Test Tools and Environment Requirements.
  • Test Strategy: Test Strategy is a document prepared by the Quality Assurance Department with the details of testing approach to reach the Quality standards.
  • Test Scenario: Test Scenario is a high-level description of the functionality to be tested, from which test cases and test scripts are prepared along with their sequence of execution.
  • Test Case: Test case is a document normally prepared by the tester with the sequence of steps to test the behavior of feature/functionality/non-functionality of the application. Test Case document consists of Test case ID, Test Case Name, Conditions (Pre and Post Conditions) or Actions, Environment, Expected Results, Actual Results, Pass/Fail. The Test cases can be broadly classified as User Interface Test cases, Positive Test cases and Negative Test cases.
  • Test Script: Test Script is a program written to test the functionality of the application. It is a set of machine-readable instructions that automate testing, with the advantage of making repeated and regression testing easy.
  • Test Environment: It is the hardware and software environment where the testing is going to be done. It also states whether the software under test interacts with stubs and drivers.
  • Test Procedure: Test Procedure is a document with the detailed instruction for step by step execution of one or more test cases. Test procedure is used in Test Scenario and Test Scripts.
  • Test Log: Test Log contains the details of test case execution and the output information.

What is Fuzz Testing?

Fuzz testing is a Black box testing technique which uses random bad data to attack a program and see what breaks in the application.

Fuzz testing typically proceeds as follows:

  • Create a correct file to input to your program
  • Replace some part of the file with random data
  • Open the file with the program
  • Observe what breaks

Fuzz testing can be automated for maximum effect on large applications. It improves confidence that the application is safe and secure.
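The steps above can be sketched in a few lines. The parser below is a hypothetical toy; real fuzzers are far more sophisticated, but the loop is the same: mutate valid input, feed it in, watch what breaks.

```python
import random

def parse_record(data: bytes):
    """Hypothetical toy parser under test: expects b'name,age'."""
    name, age = data.decode("ascii").split(",")
    return name, int(age)

random.seed(0)  # deterministic for the sketch
valid = b"alice,30"
controlled_failures = 0
for _ in range(200):
    # Replace one random byte of the valid input with random data.
    i = random.randrange(len(valid))
    fuzzed = valid[:i] + bytes([random.randrange(256)]) + valid[i + 1:]
    try:
        parse_record(fuzzed)          # feed the mangled input to the program
    except (UnicodeDecodeError, ValueError):
        controlled_failures += 1      # rejected cleanly: acceptable behavior
    # any other exception escaping here would be a fuzzing finding
print(controlled_failures > 0)  # True
```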

Testing strategy:

  1. Black box testing
  2. White box testing
  3. Gray box testing

Black box testing:

Testing of the application without knowledge of the code. Black box testing (BBT) is also called functional testing.

White box testing:

Testing of the application with knowledge of the code, to examine the internal logic and outputs.

White box testing (WBT) is also called Structural or Glass box testing.

White box testing involves looking at the structure of the code. When you know the internal structure of a product, tests can be conducted to ensure that the internal operations are performed according to the specification and that all internal components have been adequately exercised.

Gray box testing:

Testing with partial knowledge of the internal structure: a combination of black box and white box testing.

Static: Verifying the documents alone.

Dynamic: Testing the functionality.

Software testing lifecycle:

  • Requirements gathering: Collecting the project related information.
  • Analyzing: Discussing the collected information to determine whether the requirements can be met.
  • Test plan preparation: It specifies the entire testing activity.
  • Test case preparation: A document which contains inputs and the corresponding expected results.
  • Test case execution: Executing the test cases to find bugs.
  • Bug tracking: Monitoring a bug until it is closed.
  • Regression testing: Testing the application to find whether a change in the code affects anything elsewhere in the application.

What is meant by designing the application and testing the application?

Designing and Testing are two different phases in a software development process (SDLC).

  1. Information Gathering
  2. Analysis
  3. Designing
  4. Coding
  5. Testing
  6. Implementation and Maintenance

In testing terms (STLC), designing includes preparing the Test Strategy, Test Plan, and Test Case documents, and testing means executing the test cases and generating test reports.

Designing the application as per the requirements is nothing but deriving the functional flow, the alternative flows, how many modules we are handling, the data flow, etc.

Two types of designs are there:

LLD - Low Level Design Document: This level deals with the lower-level modules. The diagram used here is the Data Flow Diagram. Developers handle this level.

In this, the designing team divides the total application into modules and derives the logic for each module.

HLD - High Level Design Document: This level deals with the higher-level modules. The diagram used here is the ER (Entity Relationship) diagram. Both developers and testers handle this level.

In this, the designing team prepares the functional architecture, i.e. the functional flow.

Coding: writing the source code as per the LLD to meet customer requirements.



Smoke Testing:

  • A set of test cases executed on every new build of the application.
  • Smoke testing verifies whether the build is testable or not.
  • If it is not, testers can reject the build.

Sanity Testing:

  • It is also a set of test cases, testing the major and critical functionality of the application.
  • It is a one-time testing process.

What is Smoke Testing?

A quick-and-dirty test that the major functions of a piece of software work. Originated in the hardware testing practice of turning on a new piece of hardware for the first time and considering it a success if it does not catch on fire.

Skim Testing:

A testing technique used to determine the fitness of a new build or release.

Mutation testing (or mutation analysis, or program mutation) is a method of software testing which involves modifying a program's source code or byte code in small ways. A test suite that does not detect and reject the mutated code is considered defective. These so-called mutations are based on well-defined mutation operators that either mimic typical programming errors (such as using the wrong operator or variable name) or force the creation of valuable tests (such as driving each expression to zero). The purpose is to help the tester develop effective tests or locate weaknesses in the test data used for the program, or in sections of the code that are seldom or never accessed during execution.
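A toy illustration of the idea: mutate one operator in the source of a small (hypothetical) function and check whether the test suite kills the mutant.

```python
# Minimal mutation-analysis sketch. The function and its tiny "suite"
# are hypothetical; real tools apply many mutation operators automatically.
SOURCE = "def add(a, b):\n    return a + b\n"

def run_tests(namespace):
    """Tiny test suite: returns True if all assertions pass."""
    add = namespace["add"]
    try:
        assert add(2, 3) == 5
        assert add(-1, 1) == 0
        return True
    except AssertionError:
        return False

# The original code must pass its suite.
ns = {}
exec(SOURCE, ns)
assert run_tests(ns)

# Mutant: replace '+' with '-', a typical mutation operator.
mutant_ns = {}
exec(SOURCE.replace("a + b", "a - b"), mutant_ns)
killed = not run_tests(mutant_ns)
print(killed)  # True: the suite detects (kills) this mutant
```

A surviving mutant would indicate a gap in the test data, which is exactly what mutation analysis is meant to expose.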

What is Branch Testing?

Testing in which all branches in the program source code are tested at least once.
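A sketch: a function with one if statement has two branches, so branch testing needs at least one test per branch, ideally including the boundary (the function and values below are hypothetical).

```python
def classify(age):
    """Function under test with two branches (hypothetical)."""
    if age >= 18:
        return "adult"   # branch 1
    return "minor"       # branch 2

# Branch coverage: one input per branch, plus the boundary value 18.
results = [classify(17), classify(18), classify(30)]
print(results)  # ['minor', 'adult', 'adult']
```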


How do you debug an ASP.NET Web application?

Attach the aspnet_wp.exe process to the DbgClr debugger.

What is Backus-Naur Form?

A metalanguage used to formally describe the syntax of a language.

Difference between Test Efficiency Vs Test Effectiveness

Many test engineers are confused about the distinction between software test efficiency and software test effectiveness. Below is a summary of each.

Software Test Efficiency:

  1. It is internal to the organization: how many resources were consumed and how well those resources were utilized.
  2. Software test efficiency is the number of test cases executed per unit of time (generally per hour).
  3. Test efficiency also measures the amount of code and testing resources required by a program to perform a particular function.

Here are some formulas to calculate Software Test Efficiency (for different factors):

  • Test efficiency = (total number of defects found in unit+integration+system) / (total number of defects found in unit+integration+system+User acceptance testing).
  • Testing Efficiency = (No. of defects Resolved / Total No. of Defects Submitted)* 100
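A worked example of the two formulas, with made-up counts:

```python
# All counts below are hypothetical, for illustration only.

# Formula 1: defects found before UAT vs. all defects found.
defects_before_uat = 45   # found in unit + integration + system testing
defects_in_uat = 5        # found in user acceptance testing
test_efficiency = defects_before_uat / (defects_before_uat + defects_in_uat)
print(round(test_efficiency * 100, 1))  # 90.0

# Formula 2: resolved defects vs. submitted defects, as a percentage.
resolved, submitted = 80, 100
testing_efficiency = resolved / submitted * 100
print(testing_efficiency)  # 80.0
```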

Software Test Effectiveness:

Software Test Effectiveness covers three aspects:

  • How much the customer's requirements are satisfied by the system.
  • How well the customer specifications are achieved by the system.
  • How much effort is put in developing the system.

Software test effectiveness judges the effect of the test environment on the application.

Here are some formulas to calculate Software Test Effectiveness (for different factors):

  • Test effectiveness = Number of defects found divided by number of test cases executed.
  • Test effectiveness = (total number of defects found during testing) / (total number of defects found during testing + total number of defects that escaped) * 100
  • Test Effectiveness = Loss due to problems / Total resources processed by the system
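A worked example with made-up counts, reading the second formula in the defect-removal style (defects found during testing versus found plus escaped):

```python
# All counts below are hypothetical, for illustration only.

# Defects found per test case executed.
defects_found = 18
test_cases_executed = 120
print(round(defects_found / test_cases_executed, 3))  # 0.15

# Defect-removal style ratio: found during testing vs. found + escaped.
escaped = 2
effectiveness_pct = round(defects_found / (defects_found + escaped) * 100, 1)
print(effectiveness_pct)  # 90.0
```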

What is quality assurance?

Software QA involves the entire software development PROCESS - monitoring and improving the process, making sure that any agreed-upon standards and procedures are followed, and ensuring that problems are found and dealt with. It is oriented to 'prevention'.

What is the difference between QA and testing?

Testing involves operation of a system or application under controlled conditions and evaluating the results. It is oriented to 'detection'.

Software QA involves the entire software development PROCESS - monitoring and improving the process, making sure that any agreed-upon standards and procedures are followed, and ensuring that problems are found and dealt with. It is oriented to 'prevention'.


Difference between front-end and back-end testing?

Front-end testing focuses on the GUI and functionality.

Back-end testing focuses on the data stored in the database.

What Is Soak Testing?

Running a system at high load for a prolonged period of time. For example, running several times more transactions in an entire day (or night) than would be expected in a busy day, to identify any performance problems that appear after a large number of transactions have been executed.

Describe the Software Development Life Cycle

It includes aspects such as initial concept, requirements analysis, functional design, internal design, documentation planning, test planning, coding, document preparation, integration, testing, maintenance, updates, retesting, phase-out, and other aspects.

What are SDLC and STLC and the different phases of both?

SDLC phases:
  • Requirement phase
  • Design phase (HLD, DLD (Program spec))
  • Coding
  • Testing
  • Release
  • Maintenance

STLC phases:
  • System Study
  • Test planning
  • Writing Test case or scripts
  • Review the test case
  • Executing test case
  • Bug tracking
  • Report the defect


Every testing project has to follow the waterfall model of the testing process.

The waterfall model is as given below:

  • Test Strategy & Planning
  • Test Design
  • Test Environment setup
  • Test Execution
  • Defect Analysis & Tracking
  • Final Reporting

According to the respective projects, the scope of testing can be tailored, but the process mentioned above is common to any testing activity.

Software Testing has been accepted as a separate discipline to the extent that there is a separate life cycle for the testing activity. Involving software testing in all phases of the software development life cycle has become a necessity as part of the software quality assurance process. Right from the Requirements study till the implementation, there needs to be testing done on every phase. The V-Model of the Software Testing Life Cycle along with the Software Development Life cycle given below indicates the various phases or levels of testing.

Difference between STLC and SDLC?

STLC is software test life cycle it starts with:

  • Preparing the test strategy.
  • Preparing the test plan.
  • Creating the test environment.
  • Writing the test cases.
  • Creating test scripts.
  • Executing the test scripts.
  • Analyzing the results and reporting the bugs.
  • Doing regression testing.
  • Test closure (exit).

SDLC is software or system development life cycle, phases are:

  • Project initiation
  • Requirement gathering and documenting
  • Designing
  • Coding and unit testing
  • Integration testing
  • System testing
  • Installation and acceptance testing
  • Support or maintenance

SCM and SQA will follow throughout the cycle.

Waterfall Model


Requirement Analysis -> Design -> Coding and Unit testing -> Functional testing -> Maintenance

What is a Test bed?

Test Bed is an execution environment configured for software testing. It consists of specific hardware, network topology, Operating System, configuration of the product to be under test, system software and other applications. The Test Plan for a project should be developed from the test beds to be used.

What is a Test data?

Test data is data that is run through a computer program to test the software. Test data can be used to test compliance with effective controls in the software.

Why does software have bugs?

Miscommunication or no communication about the details of what an application should or shouldn't do.

Programming errors: in some cases, programmers make mistakes.

Changing requirements: the end user may not understand the effects of changes, or may understand and request them anyway; redesign, rescheduling of engineers, effects on other projects, and work already completed may have to be redone or thrown out.

Time pressure: scheduling of software projects is difficult at best, often requiring a lot of guesswork. When deadlines loom and the crunch comes, mistakes will be made.

What is the Difference between Bug, Error and Defect?

  • Bug: Found in the development environment before the product is shipped to the customer.
  • Defect: Found in the product itself after it is shipped to the customer.
  • Error: The deviation between the actual and the expected value.

Difference between defect, error, bug, failure and fault

  • Error: A discrepancy between a computed, observed, or measured value or condition and the true, specified, or theoretically correct value or condition. See: anomaly, bug, defect, exception, and fault.
  • Failure: The inability of a system or component to perform its required functions within specified performance requirements. See: bug, crash, exception, fault.
  • Bug: A fault in a program which causes the program to perform in an unintended or unanticipated manner. See: anomaly, defect, error, exception, fault.
  • Fault: An incorrect step, process, or data definition in a computer program which causes the program to perform in an unintended or unanticipated manner. See: bug, defect, error, exception.
  • Defect: A mismatch between the actual behavior and the requirements.

What is the difference between structural and functional testing?

Structural testing is a "white box" testing and it is based on the algorithm or code.

Functional testing is a "black box" (behavioral) testing where the tester verifies the functional specification.

Describe bottom-up and top-down approaches

  • Bottom-up approach: Testing is conducted from the sub modules to the main module. If the main module is not yet developed, a temporary program called a DRIVER is used to simulate the main module.
  • Top-down approach: Testing is conducted from the main module to the sub modules. If a sub module is not yet developed, a temporary program called a STUB is used to simulate the sub module.
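The STUB idea can be sketched as follows; all module names are hypothetical. The DRIVER case is symmetric: a throwaway main routine that calls the finished sub module.

```python
# Top-down integration sketch: the main module is ready, the sub module
# (a tax lookup) is not, so a stub with a canned answer stands in for it.

def get_tax_rate_stub(region):
    """Stub simulating the unfinished sub module: returns a canned rate."""
    return 0.25

def total_price(price, region, tax_lookup):
    """Main module under test: depends on the sub module for the rate."""
    return price * (1 + tax_lookup(region))

# The main module can now be integration-tested before the sub module exists.
print(total_price(100, "EU", get_tax_rate_stub))  # 125.0
```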

What is Re- test? What is Regression Testing?

Re-test: Retesting means testing only a certain part of the application again, without considering how it affects other parts or the whole application.

Regression Testing: Testing the application after a change in a module or part of the application, to check whether the code change affects the rest of the application.

Explain Load, Performance and Stress Testing with an Example.

Load testing and performance testing are commonly regarded as positive testing, whereas stress testing is regarded as negative testing.

Say, for example, there is an application which can handle 25 simultaneous user logins. In load testing we test the application with 25 users and check how it works at that level; in performance testing we concentrate on the time taken to perform the operations. In stress testing we test with more than 25 users, continuing to increase the number until we find where the application breaks.
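The 25-user example can be sketched with a toy server that caps concurrent logins. Everything here is hypothetical and runs sequentially; a real load tool drives genuinely concurrent virtual users.

```python
import threading

class LoginServer:
    """Toy server (hypothetical) that allows at most 25 concurrent logins."""
    LIMIT = 25

    def __init__(self):
        self._lock = threading.Lock()
        self.active = 0
        self.rejected = 0

    def login(self):
        with self._lock:          # guard the counter against concurrent calls
            if self.active >= self.LIMIT:
                self.rejected += 1
                return False
            self.active += 1
            return True

server = LoginServer()

# Load test: exactly 25 users -> all succeed.
load = [server.login() for _ in range(25)]
# Stress test: push past the limit -> every further login is rejected.
stress = [server.login() for _ in range(5)]
print(all(load), any(stress), server.rejected)  # True False 5
```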

What is UAT testing? When it is to be done?

UAT stands for User Acceptance Testing. This testing is carried out from the user's perspective and is usually done before release.

  1. Every system menu should have an Exit/Close option.
  2. OK and Cancel buttons should exist.
  3. All labels should start with a capital letter.
  4. The alignment of all controls must be the same.
  5. All controls must be visible.
  6. Labels must not overlap.

The above six are called the Microsoft six rules standard for user interface testing. They are very important in GUI testing.

What is Vulnerability Testing?

In computer security, the term "vulnerability" refers to "a weakness which allows an attacker to reduce a system's information assurance". A vulnerability is the intersection of three elements: a system susceptibility or flaw, attacker access to the flaw, and attacker capability to exploit the flaw. For a system to be vulnerable, an attacker must have at least one applicable tool or technique that can connect to a system weakness. In this frame, vulnerability is also known as the attack surface.

A security risk may be classified as a vulnerability. A vulnerability with one or more known instances of working, fully implemented attacks is classified as an exploit. The window of vulnerability is the time from when the security hole was introduced or manifested in deployed software to when access was removed, a security fix was made available/deployed, or the attacker was disabled.

Difference between Functional Testing and GUI Functional Testing?

  • Functional Testing: Testing the functionality of the application (for example, clicking the Login button on the login screen should take you to the next page).
  • GUI Functional Testing: Testing the GUI objects along with Functionality.

GUI testing or UI testing is user interface testing. That is, testing how the application and the user interact. This includes how the application handles keyboard and mouse input and how it displays screen text, images, buttons, menus, dialog boxes, icons, toolbars and more.

Functional Testing is done with the intent to identify errors related to the functionality of the Application under test.


It checks whether all the functionalities are working properly or not; in simple terms, GUI testing checks the look and feel.

In security testing, what do you mean by:

  1. Encryption
  2. Authentication
  3. Authorization

Encryption: Encryption is the conversion of data into a form, called a cipher text that cannot be easily understood by unauthorized people.

Authentication: It is the process of establishing the claimed identity of an individual, a device, an object, a system, a component, or a process.

Authorization: It is a process of granting access rights to an individual, a device, an object, a system, a component or a process over finite resources for a specific period of time.
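A minimal sketch of authentication and authorization. The user data is hypothetical, and real systems should use a dedicated password-hashing scheme (bcrypt, scrypt, Argon2) with a per-user random salt rather than a single SHA-256 round; encryption proper is omitted here since the Python standard library has no high-level cipher.

```python
import hashlib

# Hypothetical demo only: a fixed salt and one SHA-256 round are NOT
# acceptable for production password storage.
SALT = b"fixed-demo-salt"

def hash_password(password):
    return hashlib.sha256(SALT + password.encode()).hexdigest()

users = {"alice": hash_password("s3cret")}   # stored credentials
roles = {"alice": {"read", "write"}}         # granted access rights

def authenticate(name, password):
    """Establish the claimed identity of the user."""
    return users.get(name) == hash_password(password)

def authorize(name, right):
    """Grant or deny an access right over a resource."""
    return right in roles.get(name, set())

print(authenticate("alice", "s3cret"))  # True
print(authenticate("alice", "wrong"))   # False
print(authorize("alice", "write"))      # True
print(authorize("alice", "delete"))     # False
```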

Do localization testing and internationalization testing come under black box testing or white box testing?

Black box testing


Penetration Testing

A penetration test is a method of evaluating the security of a computer system or network by simulating an attack by a malicious user, known as a cracker (though often incorrectly referred to as a hacker). The process involves an active analysis of the system for any potential vulnerabilities that may result from poor or improper system configuration, known and/or unknown hardware or software flaws, or operational weaknesses in process or technical countermeasures. This analysis is carried out from the position of a potential attacker and can involve active exploitation of security vulnerabilities. Any security issues that are found are presented to the system owner together with an assessment of their impact, often with a proposal for mitigation or a technical solution. The intent of a penetration test is to determine the feasibility of an attack and the amount of business impact of a successful exploit.

A penetration test typically uncovers:

  • Vulnerabilities and risks in your web applications.
  • Known and unknown (0-day) vulnerabilities, to combat the threat until your security vendor provides the appropriate solution.
  • Technical vulnerabilities: URL manipulation, SQL injection, cross-site scripting, back-end authentication, passwords in memory, session hijacking (cookies should not be stored in the browser, or should be stored in encrypted form), buffer overflow, web server configuration, credential management, etc.
  • Business risks: day-to-day threat analysis, unauthorized logins, personal information modification, price list modification, unauthorized funds transfer, breach of customer trust, etc.

Baseline Testing

SRS is the baseline of testing.

Validating the documents and specifications on which test cases are designed is baseline testing. Requirement specification validation is baseline testing.


A baseline is the point at which some deliverable produced during the software engineering process is put under formal change control.

Volume Testing

Volume Testing belongs to the group of non-functional tests, which are often misunderstood and/or used interchangeably. Volume testing refers to testing a software application for a certain data volume. This volume can in generic terms be the database size or it could also be the size of an interface file that is the subject of volume testing. For example, if you want to volume test your application with a specific database size, you will explode your database to that size and then test the application's performance on it.

Another example could be when there is a requirement for your application to interact with an interface file (could be any file such as .dat, .xml); this interaction could be reading and/or writing on to/from the file. You will create a sample file of the size you want and then test the application's functionality with that file to check performance.


Web applications are popular because they support more clients, require no client-side installation, and are accessible from anywhere.

Types of Web Applications

  1. Web Sites
  2. Web Portals
  3. Web Applications


Web Sites

It is a software application that retrieves and presents information in different file formats such as text, image, and voice. The browser is the viewer of a web site.

Popular Browsers:

  1. Internet Explorer: 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0
  2. Mozilla Firefox: 0.8, 0.9.x, 1.0 to 1.0.8, 1.5, 2.0, 3.0 to 3.0.18, 3.6 to 3.6.16, 4.0 (betas, release candidates, and final)
  3. Chrome: 2, 3, 4, 5, 6, 8, 9, 10.0
  4. Safari (Mac): 1.0, 1.2, 1.3.1, 1.3.2, 2.0.1, 3.1.1, 3.2.3, 4.0 to 4.1.3, 5.0.2, 5.0.3 (across Tiger, Leopard, and Snow Leopard)
  5. Opera: 8.02, 8.51, 8.54, 9.0 to 9.64, 10.0 to 10.52
  6. Maxthon
  7. Netscape Navigator

Web Technologies

  • HTML (HyperText Markup Language) - for displaying web pages
  • XML (Extensible Markup Language) - for transporting data
  • JavaScript - for client-side validations
  • VBScript - for server-side validations
  • IIS, Apache, Tomcat - as web servers
  • JBoss, WebLogic, WebSphere - as application servers
  • Java, C#.NET, VB.NET, VC++.NET - for component development
  • SQL Server, Oracle, MySQL - as database servers
  • HTTP, SOAP - as protocols / web services

Web Testing Checklist

  1. Functionality Testing
  2. Usability testing
  3. Interface testing
  4. Compatibility testing
  5. Performance testing
  6. Cookie Testing
  7. Security Testing

Cookie Testing

What is Cookie?

A cookie is a small piece of information stored in a text file by the web server. This information is later used by the web browser.

Generally a cookie contains personalized user data or information that is used to communicate between different web pages.

Why Cookies are used?

Cookies save the user's identity and are used to track where the user navigated throughout the web site's pages. The communication between web browser and web server is stateless, so cookies are used to maintain state across requests.

Whenever a user visits a site or page, a small piece of code inside that HTML page writes a text file on the user's machine, called a cookie.


The server sets a cookie with an HTTP response header of the form:

Set-Cookie: NAME=VALUE; expires=DATE; path=PATH; domain=DOMAIN_NAME;
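
For instance, Python's standard `http.cookies` module can parse a header of this shape, which is handy when verifying what a server actually set (the cookie name, value, and attribute values below are invented for the sketch):

```python
from http.cookies import SimpleCookie

# Parse a Set-Cookie header of the form shown above
cookie = SimpleCookie()
cookie.load('SESSIONID=abc123; expires=Wed, 01 Jan 2025 00:00:00 GMT; '
            'path=/; domain=example.com')

morsel = cookie["SESSIONID"]
print(morsel.value)        # abc123
print(morsel["path"])      # /
print(morsel["domain"])    # example.com
```

A tester can use this to assert that the expiry, path, and domain attributes match what the requirement specifies.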

Types of Cookies

There are 2 types of cookies

  1. Session cookies: This cookie is active as long as the browser that created it remains open. When we close the browser, the session cookie gets deleted. Sometimes a session timeout of, say, 20 minutes is set to expire the cookie.
  2. Persistent cookies: Cookies that are written permanently on the user's machine and last for months or years.

Where cookies are stored?

The path where the cookies get stored depends on the browser.

Different browsers store cookies in different paths.

E.g., Internet Explorer stores cookies under a path such as C:\Documents and

Where are Cookies Used?

  1. To implement shopping cart
  2. Personalized sites
  3. User tracking
  4. Marketing
  5. User sessions

Test Cases for Cookie Testing

The first obvious test case is to check whether your application is writing cookies properly to disk.

  1. As per cookie privacy policies, make sure that no personal or sensitive data is stored in the cookie.
  2. If you have no option other than saving sensitive data in a cookie, make sure the data is stored in encrypted format.
  3. Make sure there is no overuse of cookies on the site under test. Overuse of cookies will annoy users if the browser prompts for cookies often, and this could result in loss of site traffic and eventually loss of business.
  4. Disable cookies from your browser settings: If your site uses cookies, its major functionality will not work with cookies disabled. Try to access the web site under test and navigate through the site. Check whether appropriate messages are displayed to the user, like "For smooth functioning of this site, make sure that cookies are enabled on your browser." There should not be any page crash due to disabling the cookies.
  5. Accept/reject some cookies: The best way to check web site functionality is not to accept all cookies. If your web application writes 10 cookies, then randomly accept some, say accept 5 and reject 5. For executing this test case you can set browser options to prompt whenever a cookie is being written to disk; on this prompt you can either accept or reject the cookie. Try to access the major functionality of the web site and check whether pages crash or data gets corrupted.
  6. Delete cookies: Allow the site to write its cookies, then close all browsers and manually delete all cookies for the web site under test. Access the web pages and check the behavior of the pages.
  7. Corrupt the cookies: Corrupting a cookie is easy, since you know where cookies are stored. Manually edit the cookie in Notepad and change the parameters to some vague values: alter the cookie content, the name of the cookie, or the expiry date of the cookie, and observe the site's functionality. In some cases corrupted cookies allow their data to be read by another domain. This should not happen with your web site's cookies: cookies written by one domain cannot be accessed by another domain, unless the cookies are corrupted and someone is trying to hack the cookie data.
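
The privacy-related checks (points 1 and 2) can be partly automated. A minimal sketch, assuming the tester has captured the site's Set-Cookie headers; the `SENSITIVE` term list and the sample headers are invented for the example:

```python
from http.cookies import SimpleCookie

SENSITIVE = {"password", "ssn", "credit"}  # example terms for the sketch

def audit_cookies(set_cookie_headers):
    """Flag cookies whose name or value looks like plaintext sensitive data."""
    findings = []
    for header in set_cookie_headers:
        jar = SimpleCookie()
        jar.load(header)
        for name, morsel in jar.items():
            blob = (name + "=" + morsel.value).lower()
            if any(term in blob for term in SENSITIVE):
                findings.append(name)
    return findings

headers = [
    "theme=dark; path=/",
    "password=hunter2; path=/",   # should be flagged
]
print(audit_cookies(headers))  # ['password']
```

A real audit would also check for the Secure and HttpOnly attributes and for values that should be encrypted but are readable in plaintext.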

Checking the deletion of cookies from your web application page:

Sometimes a cookie written by a domain may be deleted by a different page under that same domain. This is the general case when you are testing an "action tracking" web portal. An action tracking or purchase tracking pixel is placed on the action web page, and when any action or purchase occurs, the cookie written to disk is deleted to avoid logging multiple actions from the same cookie. Check that reaching your action or purchase page deletes the cookie properly and that no further invalid actions or purchases get logged from the same user.

  8. Cookie testing on multiple browsers: This is an important case: check that your web application pages write cookies properly on different browsers as intended, and that the site works properly using these cookies. You can test your web application on the major browsers such as Internet Explorer (various versions), Mozilla Firefox, Netscape, Opera, etc.
  9. If your web application uses cookies to maintain a user's logged-in state, log in using some username and password. In many cases you can see the logged-in user ID parameter directly in the browser address bar. Change this parameter to a different value: say, if the previous user ID is 100, make it 101 and press Enter. A proper access-denied message should be displayed, and the user should not be able to see another user's account.

Security Testing

Security testing is the process that determines that confidential data stays confidential.

What is "Vulnerability"?

This is a weakness in the web application. The cause of such a weakness can be bugs in the application, an injection (SQL/script code), or the presence of viruses.

What is "URL manipulation"?

Some web applications communicate additional information between the client (browser) and the server in the URL. Changing some information in the URL may sometimes lead to unintended behavior by the server.
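
A sketch of the defense a tester should expect: the server derives identity from the session and rejects a tampered URL parameter. The `account_for_request` function, the URL, and the parameter names are invented for illustration:

```python
from urllib.parse import urlparse, parse_qs

def account_for_request(url, session_user_id):
    """Ignore any user id smuggled in the URL; authorize from the session."""
    params = parse_qs(urlparse(url).query)
    requested = params.get("user_id", [None])[0]
    if requested is not None and requested != str(session_user_id):
        return "access denied"
    return f"account {session_user_id}"

# Attacker changes user_id=100 to user_id=101 in the address bar
print(account_for_request("https://bank.example/account?user_id=101", 100))
```

This prints "access denied": the server trusts the session, not the URL. A URL-manipulation test case tampers with each such parameter and verifies the server responds this way.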

What is "SQL injection"?

This is the process of inserting SQL statements through the web application user interface into some query that is then executed by the server.
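
The classic illustration, using Python's built-in sqlite3 (the table and payload are invented for the sketch): a concatenated query leaks every row, while a parameterized query does not.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

payload = "' OR '1'='1"  # classic injection string typed into a login form

# Vulnerable: user input is concatenated straight into the SQL text
vulnerable = conn.execute(
    f"SELECT * FROM users WHERE name = '{payload}'").fetchall()
print(len(vulnerable))  # 1 -- the predicate is always true, every row leaks

# Safe: a parameterized query treats the payload as a literal value
safe = conn.execute(
    "SELECT * FROM users WHERE name = ?", (payload,)).fetchall()
print(len(safe))        # 0 -- no user is actually named "' OR '1'='1"
```

A security test case therefore feeds strings like the payload above into every input field and verifies the application behaves like the parameterized version.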

What is "XSS (Cross-Site Scripting)"?

When a user inserts HTML/ client-side script in the user interface of a web application and this insertion is visible to other users, it is called XSS.
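
A minimal illustration of the flaw and the usual fix, escaping user input before rendering (Python's `html.escape`; the input string is invented for the sketch):

```python
from html import escape

user_input = '<script>alert("XSS")</script>'

# Unsafe: reflected verbatim into the page -- the browser would run the script
unsafe_html = f"<p>Hello {user_input}</p>"

# Safe: escape before rendering so the browser shows text, not a script
safe_html = f"<p>Hello {escape(user_input)}</p>"
print(safe_html)
# <p>Hello &lt;script&gt;alert(&quot;XSS&quot;)&lt;/script&gt;</p>
```

An XSS test case submits script fragments like this into input fields and verifies they come back escaped, never executed.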

What is "Spoofing"?

The creation of hoax look-alike websites or emails is called spoofing.

What is Change and configuration management repository?

In general, every company maintains a common server to store all the deliverables from development and testing for future reference; this is called the configuration repository.

The change and configuration management repository can be accessed by the development people to save their development deliverables; a tool like Visual SourceSafe (VSS) is used for version control. The test case database (TCDB) can be accessed by the testing people to store deliverables like the test plan, test case documents, test metrics, and other summary reports.

The defect repository can be accessed by both testers and developers, for the required negotiation between them. Ex: Bugzilla, an MS Excel sheet, a problem reporting tool, etc.


Test log document:

In either manual or automation testing, the test engineer runs test cases batch by batch and, within every batch, test by test. In this level-1 test execution, every test engineer prepares a "test log" document with results. A test log is a document that maintains three types of test results: passed, failed, and blocked.

S no, Test case id, Test case description, Status (passed, failed, blocked)

Test case document:

S no, Test case id, Test case description, input data, actual result, expected result and Status (passed, failed)

How can you do performance testing?

To do performance testing there are tools like LoadRunner and JMeter.

What are the parameters you apply when doing functional testing?

In functional testing:

We validate each and every functionality in terms of changes in object properties, the calculation domain, correctness of output, input domain coverage, and the order of functionalities.

We test the above domains with BVA (boundary value analysis) and negative scenarios.
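
BVA can be sketched as generating values just below, on, and just above each boundary of the input domain. Here `accepts_age` and its 18..60 range are invented for the example:

```python
def boundary_values(lo, hi):
    """Classic BVA points for an input domain [lo, hi]:
    just below, on, and just above each boundary."""
    return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]

def accepts_age(age):
    """Hypothetical rule under test: valid ages are 18..60 inclusive."""
    return 18 <= age <= 60

# The values below the lower bound and above the upper bound are the
# negative scenarios; the rest are positive boundary cases.
for value in boundary_values(18, 60):
    expected = 18 <= value <= 60
    assert accepts_age(value) == expected
print(boundary_values(18, 60))  # [17, 18, 19, 59, 60, 61]
```

Six boundary points per range catch most off-by-one defects without enumerating the whole input domain.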

What is non-functional testing?

After completion of successful functional testing on a software build, the test engineers concentrate on the extra characteristics of the software, such as user interface testing, reliability testing, and configuration testing.

Reliability testing

The purpose of reliability testing is to discover potential problems with the design as early as possible and, ultimately, provide confidence that the system meets its reliability requirements.

If there are a large number of test cases, how can you pick a selective set?

If there are many test cases, we pick test cases by functionality, i.e., by priority (P0, P1, P2): P0 (high) for functional test cases, P1 (medium) for non-functional test cases except usability, and P2 (low) for usability test cases. This is according to the test case format (IEEE 829).

What are the differences between a bug, an error, and a defect?

  • Bug: Discrepancy in the application functionality
  • Error: Mistakes in coding
  • Defect: Deviations from the requirements

Have you been involved in reviews? What types of reviews have you done?

Yes, I have been involved in peer reviews, which are conducted after implementing the test cases for a given module. In these reviews we discuss whether the written test cases are sufficient to validate the functionality of the module.

How Severity and Priority are related to each other?

Severity tells the seriousness/depth of the bug, whereas priority tells which bug should be rectified first.

  • Severity->Application point of view
  • Priority->User point of view

What exact testing types were you involved in when testing web applications and client/server applications? Did you find differences in terms of testing?

Name three types of tests that should be automated.

  1. Performance testing (a must)
  2. Functionality testing
  3. Data-driven testing
  4. Stress testing
  5. Load testing
  6. Regression testing

Tell me the test cases for a search and replace functionality in a Microsoft document (.doc)?

Open an already existing document with some content in it.

The case is replacing the word "testcase" with "TC":

  1. Click on the Edit menu and then click Find.
  2. A dialog window pops up with the tabs Find, Replace, and Go To (the Find tab is selected by default).
  3. In the edit box next to "Find what:", enter "testcase" and click the "Find Next" button.
  4. A pop-up message appears saying the document search has finished. Click the OK button.
  5. Click the Replace tab, type "TC" in the "Replace with" edit box, and click the "Replace" or "Replace All" button depending on your case.
  6. A pop-up message appears with an appropriate message.

Combinations of two or more points should also be considered as test cases and tested. You can use Ctrl+H as a shortcut; it opens the Find and Replace feature directly.

Can automation testing replace manual testing? If so, how?

Automation testing cannot test the entire application; only a part of the application can be automated. Automation is also costly to build and maintain.

Skilled persons are needed to carry out automation.

What is bug life cycle?

  1. Open: a defect is originated by a tester.
  2. Assigned: the raised defect is assigned to a developer.
  3. Resolved: the developer provides a code fix for the bug and marks it resolved.
  4. Closed: the tester re-tests the bug and closes it if it works; otherwise he re-opens the defect and it goes back to stage 1.


Types and Phases of Testing

SDLC Document -> QA Document
Software Requirement Specification -> Requirement Checklist
Design Document -> Design Checklist
Functional Specification -> Functional Checklist
Design Document & Functional Specs -> Unit Test Case Documents
Design Document & Functional Specs -> Integration Test Case Documents
Design Document & Functional Specs -> System Test Case Documents
Unit/System/Integration Test Case Documents -> Regression Test Case Documents
Functional Specs, Performance Criteria -> Performance Test Case Documents
Software Requirement Specification + Unit/System/Integration/Regression/Performance Test Case Documents -> User Acceptance Test Case Documents

Test Driver

Bottom-up approach: In this approach testing is conducted from sub module to main module; if the main module is not developed, a temporary program called a DRIVER is used to simulate the main module.

Top-down approach: In this approach testing is conducted from main module to sub module; if the sub module is not developed, a temporary program called a STUB is used to simulate the sub module.
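
A toy sketch of both ideas in Python; `checkout`, `discount`, and the 10% discount rule are invented for illustration:

```python
# Top-down: the main module is ready, the sub module is not -> use a STUB.
def discount_stub(order_total):
    """Temporary stand-in for the unfinished discount sub module."""
    return 0.0  # canned response

def checkout(order_total, discount_fn=discount_stub):
    """Main module under test; calls the (stubbed) sub module."""
    return order_total - discount_fn(order_total)

# Bottom-up: the sub module is ready, the main module is not -> use a DRIVER.
def discount(order_total):
    """Real sub module: 10% off orders of 100 or more."""
    return order_total * 0.10 if order_total >= 100 else 0.0

def driver():
    """Temporary program that exercises the sub module directly."""
    assert discount(50) == 0.0
    assert discount(200) == 20.0
    return "sub module OK"

print(checkout(200))   # 200.0 -- the stub returns no discount
print(driver())        # sub module OK
```

Once the real modules exist, the stub and driver are thrown away; their only job is to let testing proceed before the other half is built.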

In database testing, which things do you check?

  • Database connectivity
  • Domain Constraints
  • Key constraints
  • Queries using JOINs

What is the way of writing test cases for database testing?

For writing test cases for database testing, one should first define the project name, then the module, bug number, objective, steps/actions undertaken, expected result, actual result, status, priority, and severity.
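
Key and domain constraints can be exercised directly, e.g. with Python's built-in sqlite3 (the `accounts` table and its rules are invented for the sketch):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, "
             "balance REAL CHECK (balance >= 0))")  # domain constraint
conn.execute("INSERT INTO accounts VALUES (1, 100.0)")

results = {}

# Key constraint: a duplicate primary key must be rejected
try:
    conn.execute("INSERT INTO accounts VALUES (1, 50.0)")
    results["key"] = "FAILED"
except sqlite3.IntegrityError:
    results["key"] = "enforced"

# Domain constraint: a negative balance must be rejected
try:
    conn.execute("INSERT INTO accounts VALUES (2, -10.0)")
    results["domain"] = "FAILED"
except sqlite3.IntegrityError:
    results["domain"] = "enforced"

print(results)  # {'key': 'enforced', 'domain': 'enforced'}
```

Each expected constraint violation becomes a negative test case: the insert must fail, and a silent success is a defect.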

What are the types of test strategy?

A test strategy is an outline that describes the testing portion of the software development cycle. It is created to inform project managers, testers, and developers about some key issues of the testing process. This includes the testing objective, methods of testing new functions, total time and resources required for the project, and the testing environment.

The test strategy describes the test level to be performed. There are primarily three levels of testing: unit testing, integration testing, and system testing. In most software development organizations, the developers are responsible for unit testing. Individual testers or test teams are responsible for integration and system testing.

What kinds of testing have you done?

Comprehensive Testing: Testing happens in various levels.

  • Level 0: sanity testing / build verification testing / tester acceptance testing
  • Level 1: comprehensive testing
  • Level 2: regression testing
  • Level 3: acceptance testing

In Level 0, all P0 test cases are executed.
Level 1: all P0, P1, and P2 test cases, as batches.
Level 2: selected P0, P1, and P2 test cases, based on modifications.
Level 3: selected P0, P1, and P2 test cases, based on bug density.

Comprehensive testing: After level 0 testing and selection of possible test cases for automation, test engineers concentrate on the test suite and test sets. Every test batch consists of a set of dependent test cases. During test batch execution, test engineers create a test log document with entries for the number of passed, failed, and blocked tests. During comprehensive test execution, test engineers report mismatches to developers as defects. After resolution of a bug, developers release a modified build to the testers, who re-execute their tests to ensure the bug fix works and that no side effects have appeared.

Is the V model a process model or a technique?

Can the V process/technique, whichever is the answer above, be implemented in the Waterfall model?

The V model is a software development model in which testing is done in parallel with application development, i.e., while development of the application is in progress, test engineers test each outcome document.

For example: Consider a bank application consisting of three modules admin, banker and the customer. The development team has completed the admin module and working with banker module. At this stage the testing team will test the admin module while the developers are in process with the banker module i.e. before completing the whole application. This is the process of v-model. There can be changes in the application with v-model.

The Waterfall model is used when requirements are clear and complete, and for small projects. Here we can't incorporate new changes into the application.

Test Strategy: This is a high-level document which defines the approach for testing the overall product.

Test planning: The test plan defines the specific information about how to drive, track, and record the test efforts, along with entrance/exit criteria, resource planning, risk and contingency plans, etc. Test planning also defines the milestones and schedules needed to effectively manage the effort and performance.

Different testing methodologies

There are three types of testing methodologies;

  • White box testing - testing the structural part of an application.
  • Black box testing - testing the functionality of an application.
  • Grey box testing - a mixture of white box and black box testing.


A test suite (more formally known as a validation suite) is a collection of test cases that are intended to be used as input to a software program to show that it has some specified set of behaviors (i.e., the behaviors listed in its specification).

A test suite often also contains detailed instructions or goals for each collection of test cases and information on the system configuration to be used during testing. A group of test cases may also contain prerequisite states or steps, and descriptions of the following tests.

Collections of test cases are sometimes incorrectly termed a test plan. They may also be called a test script, or even a test scenario.

An executable test suite is a test suite that is ready to be executed. This usually means that there exists a test harness that is integrated with the suite and such that the test suite and the test harness together can work on a sufficiently detailed level to correctly communicate with the system under test (SUT).

What is Bidirectional Traceability and how it is achieved?

When requirements are traced to test cases and vice versa it is called bidirectional traceability.

What is bidirectional traceability?

In the Requirements Management (REQM) process area, specific practice 1.4 states, "Maintain bidirectional traceability among the requirements and work products." Bidirectional traceability is the ability to trace both forward and backward (i.e., from requirements to end products and from end product back to requirements).

Typically, traceability identifies the origin of items (e.g., customer needs) and follows these same items as they travel through the hierarchy of the Work Breakdown Structure to the project teams and eventually to the customer. When the requirements are managed well, bidirectional traceability is achieved from the source requirements to lower-level requirements and selected work products and verifications and then back to their source. Such bidirectional traceability helps determine that all source requirements have been completely addressed and that all lower level requirements and selected work products can be traced to a valid source.
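
A traceability matrix check can be sketched as two set operations over a hypothetical requirement-to-test-case map (the REQ/TC identifiers below are invented for the example):

```python
# Hypothetical requirement -> test case mapping for the sketch
forward = {
    "REQ-1": ["TC-1", "TC-2"],
    "REQ-2": ["TC-3"],
    "REQ-3": [],            # not yet covered
}
all_test_cases = {"TC-1", "TC-2", "TC-3", "TC-4"}  # TC-4 has no requirement

# Forward: every requirement should trace to at least one test case
uncovered = [req for req, tcs in forward.items() if not tcs]

# Backward: every test case should trace back to a valid source requirement
traced = {tc for tcs in forward.values() for tc in tcs}
orphans = sorted(all_test_cases - traced)

print(uncovered)  # ['REQ-3'] -- requirement without coverage
print(orphans)    # ['TC-4']  -- test case without a source requirement
```

The forward pass shows that all source requirements are addressed; the backward pass shows that every work product traces to a valid source, which is exactly the bidirectional property described above.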

Where is system testing covered in CMMI for Development?

Examples of system testing are provided in SP 1.1 of the Verification process area and SP 1.1 of the Validation process area. However, system testing is not a term used in CMMI, since the terms "system" and "testing" can be interpreted in many ways.

The term "system" was not used in CMMI because of its multiple interpretations across disciplines. Instead of "system," the term "product" and "product component" were used for consistency and clarity. The terms "verification" or "validation" were used instead of "testing" since (1) testing can be either part of verification or validation, and (2) testing is only one method used for verification or validation.

What is Automation Test Framework?

A test automation framework is a set of assumptions, concepts, and practices that provide support for automated software testing. This article describes and demonstrates five basic frameworks.

  1. The Test Script Modularity Framework
    The test script modularity framework requires the creation of small, independent scripts that represent modules, sections, and functions of the application-under-test. These small scripts are then used in a hierarchical fashion to construct larger tests, realizing a particular test case.
  2. The Test Library Architecture Framework
    The test library architecture framework is very similar to the test script modularity framework and offers the same advantages, but it divides the application-under-test into procedures and functions instead of scripts. This framework requires the creation of library files (SQABasic libraries, APIs, DLLs, and such) that represent modules, sections, and functions of the application-under-test. These library files are then called directly from the test case script.
  3. The Keyword-Driven or Table-Driven Testing Framework
    Keyword-driven testing and table-driven testing are interchangeable terms that refer to an application-independent automation framework. This framework requires the development of data tables and keywords, independent of the test automation tool used to execute them and the test script code that "drives" the application-under-test and the data. Keyword-driven tests look very similar to manual test cases. In a keyword-driven test, the functionality of the application-under-test is documented in a table as well as in step-by-step instructions for each test.

    If we were to map out the actions we perform with the mouse when we test our Windows Calculator functions by hand, we could create the following table. The "Window" column contains the name of the application window where we're performing the mouse action (in this case, they all happen to be in the Calculator window). The "Control" column names the type of control the mouse is clicking. The "Action" column lists the action taken with the mouse (or by the tester) and the "Arguments" column names a specific control (1, 2, 3, 5, +, -, and so on).
    Window Control Action Arguments
    Calculator Menu   View, Standard
    Calculator Pushbutton Click 1
    Calculator Pushbutton Click +
    Calculator Pushbutton Click 3
    Calculator Pushbutton Click =
    Calculator   Verify Result 4
    Calculator   Clear  
    Calculator Pushbutton Click 6
    Calculator Pushbutton Click -
    Calculator Pushbutton Click 3
    Calculator Pushbutton Click =
    Calculator   Verify Result 3

    This table represents one complete test; more can be made as needed in order to represent a series of tests. Once you've created your data table(s), you simply write a program or a set of scripts that reads in each step, executes the step based on the keyword contained in the Action field, performs error checking, and logs any relevant information.
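
As a sketch of such a driver, the snippet below interprets a table like the one above; an in-memory expression accumulator stands in for the real Calculator GUI (a real harness would dispatch each keyword to GUI-automation calls, and the Menu row is omitted here for brevity):

```python
def run_table(table):
    """Interpret (window, control, action, argument) keyword rows."""
    expr, results = "", []
    for window, control, action, argument in table:
        if action == "Click":
            if argument == "=":
                results.append(("result", eval(expr)))  # simulated display
            else:
                expr += argument                        # press a key
        elif action == "Verify Result":
            assert eval(expr) == int(argument), f"expected {argument}"
            results.append(("verified", int(argument)))
        elif action == "Clear":
            expr = ""
    return results

table = [
    ("Calculator", "Pushbutton", "Click", "1"),
    ("Calculator", "Pushbutton", "Click", "+"),
    ("Calculator", "Pushbutton", "Click", "3"),
    ("Calculator", "Pushbutton", "Click", "="),
    ("Calculator", "", "Verify Result", "4"),
    ("Calculator", "", "Clear", ""),
    ("Calculator", "Pushbutton", "Click", "6"),
    ("Calculator", "Pushbutton", "Click", "-"),
    ("Calculator", "Pushbutton", "Click", "3"),
    ("Calculator", "Pushbutton", "Click", "="),
    ("Calculator", "", "Verify Result", "3"),
]
print(run_table(table))
```

The table stays readable by non-programmers while the interpreter carries all the automation logic, which is the core appeal of the keyword-driven approach.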

  4. The Data-Driven Testing Framework
    Data-driven testing is a framework where test input and output values are read from data files (datapools, ODBC sources, CSV files, Excel files, DAO objects, ADO objects, and such) and are loaded into variables in captured or manually coded scripts. In this framework, variables are used for both input values and output verification values. Navigation through the program, reading of the data files, and logging of test status and information are all coded in the test script.

    This is similar to table-driven testing in that the test case is contained in the data file and not in the script; the script is just a "driver," or delivery mechanism, for the data. Unlike in table-driven testing, though, the navigation data isn't contained in the table structure. In data-driven testing, only test data is contained in the data files.
  5. The Hybrid Test Automation Framework
    The hybrid framework is a combination of the frameworks above, pulling from their strengths and trying to mitigate their weaknesses.
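
The data-driven idea in point 4 can be sketched as follows: the script is only a driver that reads rows from a CSV source (in-memory here), while a hypothetical `login` function stands in for navigating the application under test:

```python
import csv
import io

# Hypothetical data file: only test data lives here, not navigation
data_file = io.StringIO(
    "username,password,expected\n"
    "alice,s3cret,welcome\n"
    "alice,wrong,denied\n"
    ",,denied\n"
)

def login(username, password):
    """Stand-in for driving the application under test."""
    return "welcome" if (username, password) == ("alice", "s3cret") else "denied"

# The script is just the driver: it reads rows and checks outcomes
log = []
for row in csv.DictReader(data_file):
    actual = login(row["username"], row["password"])
    log.append("pass" if actual == row["expected"] else "fail")
print(log)  # ['pass', 'pass', 'pass']
```

Adding a new test is a new data row, not new code, which is what distinguishes this framework from plain scripted automation.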


What Is a Test Strategy?

Why do a test strategy? The test strategy is the plan on how to approach testing. The purpose of a test strategy includes:

  • To obtain consensus of goals and objectives from stakeholders (e.g., management, developers, testers, customers, users)
  • To manage expectations from the beginning
  • To be sure we're "headed in the right direction"
  • To identify the types of tests to be conducted at all test levels

A test strategy provides an overall perspective of testing, and identifies or references:
  • Project plans, risks, and requirements
  • Relevant regulations, policies, or directives
  • Required processes, standards, and templates
  • Supporting guidelines
  • Stakeholders and their test objectives
  • Test resources and estimates
  • Test levels and phases
  • Test environment
  • Completion criteria for each phase
  • Required test documentation and review methods

What is a test strategy?


A test strategy must address the risks and present a process that can reduce those risks.

The two components of Test strategy are:

  1. Test factor: The risk or issue that needs to be addressed as part of the test strategy. Factors that are to be addressed in testing a specific application system form the test factors.
  2. Test phase: The phase of the systems development life cycle in which testing will occur.


How do you decide which test cases to pick when there is not enough time to test everything? Here are some points to be considered when you are in such a situation:

  1. Which functionality is most important to the project?
  2. Which is the high-risk module of the project?
  3. Which functionality is most visible to the user?
  4. Which functionality has the largest safety impact?
  5. Which functionality has the largest financial impact on users?
  6. Which aspects of the application are most important to the customer?
  7. Which parts of the code are most complex, and thus most subject to errors?
  8. Which parts of the application were developed in rush or panic mode?
  9. What do the developers think are the highest-risk aspects of the application?
  10. What kinds of problems would cause the worst publicity?
  11. What kinds of problems would cause the most customer service complaints?
  12. What kinds of tests could easily cover multiple functionalities?

When to stop testing?


  1. When all the requirements are adequately covered by successfully executed test cases
  2. When the bug reporting rate reaches a particular limit
  3. When the test environment no longer exists for conducting testing
  4. When the scheduled time for testing is over
  5. When the budget allocated for testing is exhausted

Your company is about to roll out an E-Commerce application. It is not possible to test the application on all types of browsers on all platforms and operating systems. What steps would you take in the testing environment to reduce the business risks and commercial risks?


Compatibility testing should be done on all browsers (IE, Netscape, Mozilla etc.) across all the operating systems (win 98/2K/NT/XP/ME/Unix etc.)

What's the difference between priority and severity?


"Priority" is associated with scheduling, and "severity" is associated with standards.

"Priority" means something is afforded or deserves prior attention; precedence established by order of importance (or urgency). "Severity" is the state or quality of being severe; severe implies adherence to rigorous standards or high principles and often suggests harshness, e.g. a severe code of behavior.

The words priority and severity come up in bug tracking. A variety of commercial problem-tracking and management software tools are available. These tools, with the detailed input of software test engineers, give the team complete information so developers can understand the bug, get an idea of its severity, reproduce it, and fix it. The fixes are based on project priorities and the severity of bugs. The severity of a problem is defined in accordance with the customer's risk assessment and recorded in their selected tracking tool. Buggy software can severely affect schedules, which in turn can lead to a reassessment and renegotiation of priorities.

Your manager has taken you onboard as a test lead for testing a web-based application. He wants to know what risks you would include in the Test plan. Explain each risk factor that would be a part of your test plan.


Web-Based Application primary risk factors:-

  1. Security :- anything related to the security of the application.
  2. Performance :- The amount of computing resources and code required by the system to perform its stated functions.
  3. Correctness :- Data entered, processed, and outputted in the system is accurate and complete
  4. Access Control :- Assurance that the application system resources will be protected
  5. Continuity of processing :- The ability to sustain processing in the event problem occurs
  6. Audit Trail :- The capability to substantiate the processing that has occurred.
  7. Authorization :- Assurance that the data is processed in accordance with the intents of the management.

General risk or secondary risk's:-

  1. Complex :- anything disproportionately large, intricate or convoluted.
  2. New :- anything that has no history in the product.
  3. Changed :- anything that has been tampered with or "improved".
  4. Upstream Dependency :- anything whose failure will cause cascading failure in the rest of the system.
  5. Downstream Dependency :- anything that is especially sensitive to failures in the rest of the system.
  6. Critical :- anything whose failure could cause substantial damage.
  7. Precise :- anything that must meet its requirements exactly.
  8. Popular :- anything that will be used a lot.
  9. Strategic :- anything that has special importance to your business, such as a feature that sets you apart from the competition.
  10. Third-party :- anything used in the product, but developed outside the project.
  11. Distributed :- anything spread out in time or space, yet whose elements must work together.
  12. Buggy :- anything known to have a lot of problems.
  13. Recent Failure :- anything with a recent history of failure.

What is parallel testing and when do we use parallel testing? Explain with example?


Testing a new or an altered data processing system with the same source data that is used in another system. The other system is considered as the standard of comparison. OR we can say that parallel testing requires the same input data be run through two versions of the same application.

Parallel testing should be used when there is uncertainty regarding the correctness of processing of the new application, and the old and new versions of the application are expected to produce the same results.


Examples:

  1. Operate the old and new versions of the payroll system to determine that the paychecks from both systems are reconcilable.
  2. Run the old version of the application system to ensure that the operational status of the old system has been maintained in the event that problems are encountered in the new application.
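
The payroll example can be sketched as running the same source data through both versions and reconciling the outputs; both pay functions and the overtime rule are invented for illustration:

```python
def gross_pay_old(hours, rate):
    """Legacy payroll calculation: time-and-a-half over 40 hours."""
    overtime = max(0, hours - 40)
    return (hours - overtime) * rate + overtime * rate * 1.5

def gross_pay_new(hours, rate):
    """Rewritten version under test; must reconcile with the old one."""
    if hours <= 40:
        return hours * rate
    return 40 * rate + (hours - 40) * rate * 1.5

# Run the same source data through both versions and reconcile
source_data = [(38, 20.0), (40, 15.0), (45, 10.0)]
mismatches = [
    rec for rec in source_data
    if abs(gross_pay_old(*rec) - gross_pay_new(*rec)) > 0.005
]
print(mismatches)  # [] -- paychecks reconcile
```

Any record landing in `mismatches` is a discrepancy between the systems that must be investigated before the new version goes live.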

What is the difference between testing Techniques and tools? Give examples.


  • Testing technique :- A process for ensuring that some aspect of the application system or unit functions properly. There may be few techniques but many tools.
  • Tools :- Is a vehicle for performing a test process. The tool is a resource to the tester, but itself is insufficient to conduct testing.
  • E.g. :- The swinging of a hammer to drive a nail. The hammer is a tool, and swinging the hammer is a technique. The concept of tools and techniques is important in the testing process: it is the combination of the two that enables the test process to be performed. The tester should first understand the testing techniques and then understand the tools that can be used with each technique.

Differentiate between Transaction flow modeling, Finite state modeling, Data flow modeling and Timing modeling?


  • Transaction Flow modeling :-The nodes represent the steps in transactions. The links represent the logical connection between steps.
  • Finite state modeling :-The nodes represent the different user observable states of the software. The links represent the transitions that occur to move from state to state.
  • Data flow modeling :-The nodes represent the data objects. The links represent the transformations that occur to translate one data object to another.
  • Timing Modeling :-The nodes are Program Objects. The links are sequential connections between the program objects. The link weights are used to specify the required execution times as program executes.


SEI = 'Software Engineering Institute' at Carnegie-Mellon University; initiated by the U.S. Defense Department to help improve software development processes.

CMM = 'Capability Maturity Model', developed by the SEI. It's a model of 5 levels of organizational 'maturity' that determine effectiveness in delivering quality software. It is geared to large organizations such as large U.S. Defense Department contractors. However, many of the QA processes involved are appropriate to any organization, and if reasonably applied can be helpful. Organizations can receive CMM ratings by undergoing assessments by qualified auditors.

Level 1 - characterized by chaos, periodic panics, and heroic efforts required by individuals to successfully complete projects. Few if any processes in place; successes may not be repeatable.

Level 2 - software project tracking, requirements management, realistic planning, and configuration management processes are in place; successful practices can be repeated.

Level 3 - standard software development and maintenance processes are integrated throughout an organization; a Software Engineering Process Group is in place to oversee software processes, and training programs are used to ensure understanding and compliance.

Level 4 - metrics are used to track productivity, processes, and products. Project performance is predictable, and quality is consistently high.

Level 5 - the focus is on continuous process improvement. The impact of new processes and technologies can be predicted and effectively implemented when required.

ISO = 'International Organization for Standards' - The ISO 9001, 9002, and 9003 standards concern quality systems that are assessed by outside auditors, and they apply to many kinds of production and manufacturing organizations, not just software. The most comprehensive is 9001, and this is the one most often used by software development organizations. It covers documentation, design, development, production, testing, installation, servicing, and other processes. ISO 9000-3 (not the same as 9003) is a guideline for applying ISO 9001 to software development organizations. The U.S. version of the ISO 9000 series standards is exactly the same as the international version, and is called the ANSI/ASQ Q9000 series. The U.S. version can be purchased directly from the ASQ (American Society for Quality) or the ANSI organizations. To be ISO 9001 certified, a third-party auditor assesses an organization, and certification is typically good for about 3 years, after which a complete reassessment is required. Note that ISO 9000 certification does not necessarily indicate quality products - it indicates only that documented processes are followed.

IEEE = 'Institute of Electrical and Electronics Engineers' - among other things, creates standards such as 'IEEE Standard for Software Test Documentation' (IEEE/ANSI Standard 829), 'IEEE Standard of Software Unit Testing (IEEE/ANSI Standard 1008), 'IEEE Standard for Software Quality Assurance Plans' (IEEE/ANSI Standard 730), and others.

ANSI = 'American National Standards Institute', the primary industrial standards body in the U.S.; publishes some software-related standards in conjunction with the IEEE and ASQ (American Society for Quality).


  1. Check the limit of the username field, i.e. the data type of this field in the DB and the field size. Try adding more characters to this field than the field size limit. See how the application responds to this.
  2. Repeat above case for number fields. Insert number beyond the field storage capacity. This is typically a boundary test.
  3. For the username field, try adding numbers and special characters in various combinations (characters like !@#$%^&*()_+}{":?><,./;'[]). If these are not allowed, a specific message should be displayed to the user.
  4. Try above special character combination for all the input fields on your sign up page having some validations. Like Email address field, URL field validations etc.
  5. Many applications crash for the input field containing ' (single quote) and " (double quote) examples field like: "Vijay's web". Try it in all the input fields one by one.
  6. Try adding only numbers to input fields having validation to enter only characters and vice versa.
  7. If URL validation is there then see different rules for url validation and add urls not fitting to the rules to observe the system behavior.

    Example URLs like: !@#$%^&*()_+}{":?><,./;'[]web_page. Also add URLs containing http:// and https:// while inserting into the URL input box.
  8. If your sign up page is of some steps like step 1 step 2 etc. then try changing parameter values directly into browser address bar. Many times urls are formatted with some parameters to maintain proper user steps. Try altering all those parameters directly without doing anything actually on the sign up page.
  9. Do some monkey testing manually or automating (i.e. Insert whatever comes in mind or random typing over keyboard) you will come up with some observations.
  10. See if any page is showing JavaScript error either at the browser left bottom corner or enable the browser settings to display popup message to any JavaScript error.
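Several of the checks above (field size limit, special characters, numbers-only input, empty values) can be driven from a small script. The 50-character limit and the `validate_username` function below are illustrative assumptions, not a real application's API:

```python
# Sketch of the sign-up field checks described above, against a
# hypothetical username validator (1..MAX_LEN alphanumeric characters).

MAX_LEN = 50
SPECIAL = "!@#$%^&*()_+}{\":?><,./;'[]"

def validate_username(name):
    """Hypothetical validator standing in for the application's rule."""
    return 0 < len(name) <= MAX_LEN and name.isalnum()

test_inputs = [
    "a" * MAX_LEN,        # exactly at the field size limit -> valid
    "a" * (MAX_LEN + 1),  # one past the limit -> must be rejected
    "vijay's web",        # single quote, a common crash input
    'say "hi"',           # double quote
    SPECIAL,              # the full special-character string
    "12345",              # numbers only
    "",                   # empty value
]

results = {s: validate_username(s) for s in test_inputs}
```

In a real test, each rejected input should also produce a specific, user-visible error message rather than a crash or a silent failure.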

White Box Testing is coverage of the specification in the code.

Code coverage:

An analysis method that determines which parts of the software have been executed (covered) by the test case suite and which parts have not been executed and therefore may require additional attention.
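As a rough illustration of the idea, Python's `sys.settrace` can record which statements of a function actually execute under a given set of test inputs; the `grade` function is a made-up example, not part of any real test suite:

```python
# Minimal statement-coverage sketch: trace which lines of a function
# execute for a given set of inputs.

import sys

def grade(score):
    if score >= 90:
        return "A"
    if score >= 75:
        return "B"
    return "C"

def lines_covered(func, inputs):
    """Run func over inputs; return the set of executed line offsets."""
    executed = set()
    def tracer(frame, event, arg):
        if event == "line" and frame.f_code is func.__code__:
            executed.add(frame.f_lineno - func.__code__.co_firstlineno)
        return tracer
    sys.settrace(tracer)
    try:
        for x in inputs:
            func(x)
    finally:
        sys.settrace(None)
    return executed

partial = lines_covered(grade, [95])          # only the "A" path runs
full = lines_covered(grade, [95, 80, 60])     # every statement executes
```

The statements in `full` but not in `partial` are exactly the parts of the code that "may require additional attention" when only the first test case is run.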

Code Coverage Analysis

1.1 Basis Path Testing

A testing mechanism proposed by McCabe whose aim is to derive a logical complexity measure of a procedural design and use this as a guide for defining a basis set of execution paths. Test cases that exercise the basis set will execute every statement at least once.

1.1.1 Flow Graph Notation

A notation for representing control flow similar to flow charts and UML activity diagrams.

1.1.2 Cyclomatic Complexity

The cyclomatic complexity gives a quantitative measure of the logical complexity. This value gives the number of independent paths in the basis set, and an upper bound for the number of tests required to ensure that each statement is executed at least once. An independent path is any path through the program that introduces at least one new set of processing statements or a new condition (i.e., a new edge). Cyclomatic complexity provides an upper bound for the number of tests required to guarantee coverage of all program statements.
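A minimal sketch of the computation, using the standard formula V(G) = E - N + 2 over a hypothetical flow graph for a module with one if/else decision followed by one loop:

```python
# Cyclomatic complexity from a flow graph: V(G) = E - N + 2, where E is
# the number of edges and N the number of nodes.

def cyclomatic_complexity(edges):
    nodes = {n for edge in edges for n in edge}
    return len(edges) - len(nodes) + 2

# Hypothetical flow graph: 1 -> (2 | 3) -> 4, then a loop 4 -> 5 -> 4,
# and finally 5 -> 6 (exit).
flow = [(1, 2), (1, 3), (2, 4), (3, 4), (4, 5), (5, 4), (5, 6)]

v_g = cyclomatic_complexity(flow)   # 7 edges - 6 nodes + 2 = 3
```

The result, 3, matches the "number of predicate nodes + 1" rule (one if plus one loop condition), and says that three independent paths suffice for the basis set.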

1.2 Control Structure testing

1.2.1 Conditions Testing

Condition testing aims to exercise all logical conditions in a program module. They may define:

  • Relational expression: (E1 op E2), where E1 and E2 are arithmetic expressions.
  • Simple condition: Boolean variable or relational expression, possibly preceded by a NOT operator.
  • Compound condition: composed of two or more simple conditions, Boolean operators and parentheses.
  • Boolean expression : Condition without Relational expressions.
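A small sketch of condition testing for a compound condition; `in_quadrant_one` is a made-up module, and the table below drives each simple condition to both true and false rather than only exercising the compound outcome:

```python
# Condition testing: for the compound condition (a > 0 AND b > 0),
# exercise every true/false combination of the two simple conditions.

def in_quadrant_one(a, b):
    return a > 0 and b > 0

# Each row drives one combination of the two simple conditions.
cases = [
    (1, 1, True),     # T and T
    (1, -1, False),   # T and F
    (-1, 1, False),   # F and T
    (-1, -1, False),  # F and F
]

checks = [in_quadrant_one(a, b) == expected for a, b, expected in cases]
```

Note that testing only the compound outcome would need just two cases; condition testing deliberately covers all four so that a fault in either simple condition is exposed.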

1.2.2 Data Flow Testing

Selects test paths according to the location of definitions and use of variables.

1.2.3 Loop Testing

Loops are fundamental to many algorithms. Loops can be classified as simple, concatenated, nested, and unstructured.


Note that unstructured loops are not to be tested; rather, they are redesigned.

  • Segment coverage: Ensure that each code statement is executed once.
  • Branch Coverage or Node Testing: Coverage of each code branch in all possible ways.
  • Compound Condition Coverage: For multiple condition test each condition with multiple paths and combination of different path to reach that condition.
  • Basis Path Testing: Each independent path in the code is taken for testing.
  • Data Flow Testing (DFT): In this approach you track the specific variables through each possible calculation, thus defining the set of intermediate paths through the code. DFT tends to reflect dependencies, but mainly through sequences of data manipulation. In short, each data variable is tracked and its use is verified.

This approach tends to uncover bugs like variables used but not initialized, or declared but not used, and so on.

  • Path Testing: Path testing is where all possible paths through the code are defined and covered. It's a time consuming task.
  • Loop Testing: These strategies relate to testing single loops, concatenated loops, and nested loops. Independent and dependent code loops and values are tested by this approach.
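The single-loop strategy (skip the loop entirely, one pass, two passes, a typical count, the boundary, and one beyond it) can be sketched as follows; `sum_first` and its iteration limit are illustrative assumptions:

```python
# Loop testing sketch for a simple loop whose bound is the argument n.

def sum_first(n, limit=100):
    """Sum 1..n, with a hypothetical upper bound of `limit` iterations."""
    if n > limit:
        raise ValueError("loop bound exceeded")
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

# Classic loop-test values: skip the loop, one pass, two passes,
# a typical count, and the boundary itself (101 should be rejected).
loop_cases = [0, 1, 2, 50, 100]
outputs = [sum_first(n) for n in loop_cases]
```

The zero-iteration case is the one most often forgotten, and it is exactly where off-by-one initialization bugs tend to surface.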

Why we do White Box Testing?

To ensure:

  • That all independent paths within a module have been exercised at least once.
  • All logical decisions verified on their true and false values.
  • All loops executed at their boundaries and within their operational bounds, and internal data structures validated.

    Need of White Box Testing? To discover the following types of bugs:
  • Logical errors that tend to creep into our work when we design and implement functions, conditions or controls that are outside the mainstream of the program
  • Design errors due to the difference between the logical flow of the program and the actual implementation
  • Typographical and syntax errors

Skills Required:

We need to write test cases that ensure the complete coverage of the program logic.

For this we need to know the program well i.e. we should know the specification and the code to be tested. Knowledge of programming languages and logic.

Limitations of WBT:

It is not possible to test each and every path of the loops in a program. This means exhaustive testing is impossible for large systems.

This does not mean that WBT is not effective. Selecting important logical paths and data structures for testing is practically possible and effective.


Black box testing treats the system as a "black box", so it doesn't explicitly use knowledge of the internal structure or code. In other words, the test engineer need not know the internal workings of the "black box" or application.

Main focus in black box testing is on functionality of the system as a whole. The term 'behavioral testing' is also used for black box testing and white box testing is also sometimes called 'structural testing'. Behavioral test design is slightly different from black-box test design because the use of internal knowledge isn't strictly forbidden, but it's still discouraged.

Each testing method has its own advantages and disadvantages. There are some bugs that cannot be found using only black box or only white box. Majority of the application are tested by black box testing method. We need to cover majority of test cases so that most of the bugs will get discovered by black box testing.

Black box testing occurs throughout the software development and Testing life cycle i.e. in Unit, Integration, System, Acceptance and regression testing stages.

Tools used for Black Box testing:

Black box testing tools are mainly record and playback tools. These tools are used for regression testing, that is, to check whether a new build has created any bug in previously working application functionality. These record and playback tools record test cases in the form of scripts such as TSL, VBScript, JavaScript, or Perl.

Advantages of Black Box Testing

  • Tester can be non-technical.
  • Used to verify contradictions in actual system and the specifications.
  • Test cases can be designed as soon as the functional specifications are complete

Disadvantages of Black Box Testing

  • The test inputs need to be drawn from a large sample space.
  • It is difficult to identify all possible inputs in limited testing time, so writing test cases is slow and difficult.
  • There are chances of having unidentified paths during this testing.

Methods of Black box Testing:

Graph Based Testing Methods:

Each and every application is built up of some objects. All such objects are identified and a graph is prepared. From this object graph each object relationship is identified, and test cases are written accordingly to discover the errors.

Error Guessing:

This is purely based on the previous experience and judgment of the tester. Error guessing is the art of guessing where errors may be hidden. There are no specific tools for this technique; the tester writes test cases that cover the likely error-prone paths of the application.

Boundary Value Analysis:

Many systems have a tendency to fail on boundaries, so testing the boundary values of an application is important. Boundary Value Analysis (BVA) is a functional testing technique where the extreme boundary values are chosen. Boundary values include maximum, minimum, just inside/outside boundaries, typical values, and error values.

  • Extends equivalence partitioning
  • Test both sides of each boundary
  • Look at output boundaries for test cases too
  • Test min, min-1, max, max+1, typical values

BVA techniques:

  1. Number of variables
    For n variables: BVA yields 4n + 1 test cases.
  2. Kinds of ranges

Generalizing ranges depends on the nature or type of variables.

Advantages of Boundary Value Analysis

  • Robustness Testing - Boundary Value Analysis plus values that go beyond the limits
  • Min - 1, Min, Min +1, Nom, Max -1, Max, Max +1
  • Forces attention to exception handling
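Both variants can be sketched as small generators: robustness testing produces the seven values Min-1 through Max+1 per variable, while classic BVA yields 4n + 1 cases for n variables (vary one variable over its boundary values while the others stay nominal). The ranges used below are arbitrary examples:

```python
# Robustness-flavoured BVA: min-1, min, min+1, nominal, max-1, max, max+1
# for a single variable with a [lo, hi] range.

def bva_values(lo, hi):
    nominal = (lo + hi) // 2
    return [lo - 1, lo, lo + 1, nominal, hi - 1, hi, hi + 1]

# Classic (non-robust) BVA: 4n + 1 cases for n variables.
def bva_test_cases(ranges):
    """Vary one variable over min, min+1, max-1, max; others nominal."""
    noms = [(lo + hi) // 2 for lo, hi in ranges]
    cases = [tuple(noms)]                       # the all-nominal case
    for i, (lo, hi) in enumerate(ranges):
        for v in (lo, lo + 1, hi - 1, hi):
            case = list(noms)
            case[i] = v
            cases.append(tuple(case))
    return cases

age_cases = bva_values(18, 60)                  # [17, 18, 19, 39, 59, 60, 61]
cases = bva_test_cases([(1, 10), (100, 200)])   # 4*2 + 1 = 9 cases
```

The hypothetical age field (18..60) yields seven robustness values; two variables yield exactly nine classic BVA cases, matching the 4n + 1 formula above.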

Limitations of Boundary Value Analysis

Boundary value testing is efficient only for variables with fixed boundary values.

Equivalence Partitioning:

Equivalence partitioning is a black box testing method that divides the input domain of a program into classes of data from which test cases can be derived.

How this partitioning is performed while testing:

  1. If an input condition specifies a range, one valid and two invalid classes are defined.
  2. If an input condition requires a specific value, one valid and two invalid equivalence classes are defined.
  3. If an input condition specifies a member of a set, one valid and one invalid equivalence class is defined.
  4. If an input condition is Boolean, one valid and one invalid class is defined.
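Rule 1 above can be sketched for a hypothetical age field with a valid range of 18..60: one valid class and two invalid classes (below and above the range), each represented by a single test value:

```python
# Equivalence partitioning for a range input: one valid class and two
# invalid classes, one representative value drawn from each class.

VALID_RANGE = (18, 60)   # hypothetical requirement

def classify(age):
    lo, hi = VALID_RANGE
    if age < lo:
        return "invalid-low"
    if age > hi:
        return "invalid-high"
    return "valid"

# One test value per equivalence class is enough under this method.
representatives = {"invalid-low": 10, "valid": 35, "invalid-high": 70}
checks = [classify(v) == cls for cls, v in representatives.items()]
```

Three test cases thus stand in for every possible age, which is the point of the technique: values inside the same class are assumed to be treated identically by the program.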

Comparison Testing:

In this method, different independent versions of the same software are compared with each other for testing.

What is Impact analysis? How to do impact analysis in your project?

Impact analysis means that when we are doing regression testing, we check both that the bug fixes are working properly and that, by fixing these bugs, other components still work as per their requirements and have not been disturbed.

Which comes first test strategy or test plan?

Test strategy comes first and this is the high level document. And approach for the testing starts from test strategy and then based on this the test lead prepares the test plan.

What is the difference between web based application and client server application as a tester's point of view?

According to Tester's Point of view

  1. A Web Based Application (WBA) is a 3-tier application: browser, server and back end. A Client Server Application (CSA) is a 2-tier application: front end and back end.
  2. In a WBA the tester tests for script errors (JavaScript errors, VBScript errors, etc.) shown on the page. In a CSA the tester does not test for any script errors.
  3. In a WBA, once a change is made it is reflected on every machine, so the tester has less to test. In a CSA the application needs to be installed on every machine each time, so it is possible that some machines have problems; for that reason hardware testing as well as software testing is needed.

What is the significance of doing Regression testing?

To check the bug fixes, and that these fixes do not disturb other functionality. Regression testing ensures that newly added functionality, modified existing functionality, or a developer's bug fix does not raise any new bug or cause any other side effect, and that already PASSED test cases do not start failing.

What are the diff ways to check a date field in a website?

There are different ways like:–

  1. You can check the field width for minimum and maximum values.
  2. If the field only takes numeric values, check that it accepts only numbers and no other type.
  3. If it takes a date or time, check its behavior with other formats.
  4. In the same way as for numerics, you can check it for character, alphanumeric and other inputs.
  5. Most importantly, if you click and hit the enter key, the page may sometimes give a JavaScript error; that is a big fault on the page.
  6. Check the field for the null value.

The date field can be checked in different ways. Positive testing: first we enter the date in the given format.
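A few of these date-field checks can be expressed as a small positive/negative test. The expected DD/MM/YYYY format and the `is_valid_date` helper are assumptions for illustration, not a real application's validator:

```python
# Positive and negative checks for a hypothetical date field that
# expects DD/MM/YYYY input.

from datetime import datetime

def is_valid_date(text):
    """Accept only a well-formed DD/MM/YYYY date; reject anything else."""
    try:
        datetime.strptime(text, "%d/%m/%Y")
        return True
    except (ValueError, TypeError):
        return False

positive_cases = ["01/01/2020", "29/02/2020"]   # valid, incl. a leap day
negative_cases = ["31/02/2020", "abc", "", "1/13/2020", None]
```

The positive cases cover "test to pass", the negative cases (impossible dates, wrong types, null values) cover "test to fail", matching the definitions in the next two questions.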

What is Positive Testing ?

Testing aimed at showing software works. Also known as "test to pass".

What is Negative Testing?

Testing aimed at showing software does not work. Also known as "test to fail".

In negative testing, we check whether the application or system handles the exception properly or not. It is nothing but "Test to Break" testing.

What is the difference between QC and QA?

Quality assurance is the process where the documents for the product to be tested are verified with actual requirements of the customers. It includes inspection, auditing, code review, meeting etc. Quality control is the process where the product is actually executed and the expected behavior is verified by comparing with the actual behavior of the software under test. All the testing types like black box testing, white box testing comes under quality control. Quality assurance is done before quality control.

What is Gray Box Testing?

A combination of Black Box and White Box testing methodologies, testing a piece of software against its specification but using some knowledge of its internal workings.

Difference between smoke testing and sanity testing

Smoke Testing is non-exhaustive software testing, ascertaining that the most crucial functions of a program work, but not bothering with finer details. Sanity Testing is cursory testing; it is performed whenever cursory testing is sufficient to prove that the application is functioning according to specifications. This level of testing is a subset of regression testing. It normally includes a set of core tests of basic GUI functionality to demonstrate connectivity to the database, application servers, printers, etc.

The difference between smoke and sanity testing is that in smoke testing the tester concentrates on the core functionality of the application, checking whether it works at all without crashes, environmental problems, networking issues, etc.

In sanity testing, basic functionalities are tested, e.g. check boxes, radio buttons, text boxes, list boxes.

What is Ramp Testing?

Continuously raising an input signal until the system breaks down.

What is beta testing

Testing when development and testing are essentially completed and final bugs and problems need to be found before final release. Typically done by end users or others, not by programmers or testers.

What is alpha testing

Testing of an application when development is nearing completion; minor design changes may still be made as a result of such testing. Typically done by end users or others, not by programmers or testers.

What is Test Bed?

An execution environment configured for testing. May consist of specific hardware, OS, network topology, configuration of the product under test, other application or system software, etc. The Test Plan for a project should enumerate the test bed(s) to be used.

What is a scenario?

A scenario defines the events that occur during each testing session. For example, a scenario defines and controls the number of users to emulate, the actions to be performed, and the machines on which the virtual users run their emulations.

What is Gorilla Testing?

Testing one particular module, functionality heavily.

What is the difference between system testing and end to end testing?

System testing is done with respect to the application functionality, considering the system as an individual (internal functional flow).

End to end testing, on the other hand, verifies the application's end-to-end functional flow, considering the functionality of all other integrated applications (including upstream and downstream systems connected to the particular application for which system testing, as described above, has been completed).

What is Code Coverage?

An analysis method that determines which parts of the software have been executed (covered) by the test case suite and which parts have not been executed and therefore may require additional attention.


The goal of globalization testing is to detect potential problems in application design that could inhibit globalization. It makes sure that the code can handle all international support without breaking functionality that would cause either data loss or display problems. Globalization testing checks proper functionality of the product with any of the culture/locale settings using every type of international input possible.

Select a test platform

So, which operating system (OS) should you use for your international testing platform? The first choice should be your local build of Windows 2000 with a language group installed. For example, if you use the U.S. build of Windows 2000, install the East Asian language group.

  • MUI (Multilanguage User Interface) Windows 2000 - especially useful if your code implements multilingual UI and it must adjust to the UI settings of the OS. This approach is an easier implemented alternative to installing multiple localized versions of the OS. To further enhance multilingual support, Microsoft offers a separate Windows 2000 Multilanguage Version, which provides up to 24 localized language versions of the Windows user interface.
  • Localized build of the target OS - German or Japanese are good choices. Remember it might be harder to work with them if you do not know the operating system's UI language. This approach does not have significant advantages over the solutions above.
Execute tests

After the environment has been set for globalization testing, you must pay special attention to potential globalization problems when you run your regular test cases:
  • Put greater importance on test cases that deal with the input/output of strings, directly or indirectly.
  • Test data must contain mixed characters from East Asian languages, German, Complex Script characters (Arabic, Hebrew, Thai), and optionally, English. In some cases, there are limitations, such as the acceptance of characters that only match the culture/locale. It might be difficult to manually enter all of these test inputs if you do not know the languages in which you are preparing your test data. A simple Unicode text generator may be very helpful at this step.
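The simple Unicode text generator mentioned above might look like this; the code-point ranges are small illustrative slices of each script, not complete Unicode blocks:

```python
# A minimal mixed-script test-string generator for globalization test
# data. Each range below is a small, illustrative slice of a script.

import random

SCRIPT_RANGES = {
    "latin":  (0x0041, 0x005A),   # A-Z
    "german": (0x00C4, 0x00DC),   # a Latin-1 slice with umlauts
    "arabic": (0x0627, 0x063A),
    "hebrew": (0x05D0, 0x05EA),
    "thai":   (0x0E01, 0x0E2E),
    "cjk":    (0x4E00, 0x4E50),   # a small slice of CJK ideographs
}

def mixed_script_string(length=24, seed=0):
    """Deterministic string containing at least one char per script."""
    rng = random.Random(seed)
    ranges = list(SCRIPT_RANGES.values())
    chars = [chr(rng.randint(lo, hi)) for lo, hi in ranges]
    while len(chars) < length:
        lo, hi = rng.choice(ranges)
        chars.append(chr(rng.randint(lo, hi)))
    rng.shuffle(chars)
    return "".join(chars[:length])

sample = mixed_script_string()
```

Feeding such strings into every input field exercises Unicode-to-ANSI conversion paths without requiring the tester to know each language; the fixed seed keeps failures reproducible.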

Recognize the problems

The most serious globalization problem is functionality loss, either immediately (when a culture/locale is changed) or later when accessing input data (non-U.S. character input).

Some functionality problems are detectable as display problems:

  • Question marks (?) appearing instead of displayed text indicate problems in Unicode-to-ANSI conversion.
  • Random High ANSI characters (e.g., ¼, †, ‰, ‡, ¶) appearing instead of readable text indicate problems in ANSI code using the wrong code page.
  • The appearance of boxes, vertical bars, or tildes (default glyphs) [□, |, ~] indicates that the selected font cannot display some of the characters.

It might be difficult to find problems in display or print results that require shaping, layout, or script knowledge. This test is language-specific and often cannot be executed without language expertise. On the other hand, your test may be limited to code inspection. If standard text-handling mechanisms are used to form and display output text, you may consider this area safe.

Another area of potential problems is code that fails to follow local conventions as defined by the current culture/locale. Make sure your application displays culture/locale-sensitive data (e.g., numbers, dates, time, currency, and calendars) according to the current regional settings of your computer.


Localization translates the product UI and occasionally changes some initial settings to make it suitable for another region. Localization testing checks the quality of a product's localization for a particular target culture/locale. This test is based on the results of globalization testing, which verifies the functional support for that particular culture/locale. Localization testing can be executed only on the localized version of a product. Localizability testing does not test for localization quality.

The test effort during localization testing focuses on:

  • Areas affected by localization, such as UI and content
  • Culture/locale-specific, language-specific, and region-specific areas

In addition, localization testing should include:

  • Basic functionality tests
  • Setup and upgrade tests run in the localized environment
  • Plan application and hardware compatibility tests according to the product's target region.

You can select any language version of Windows 2000 as a platform for the test. However, you must install the target language support.

The localization testing of the user interface and linguistics should cover items such as:

  • Validation of all application resources
  • Verification of linguistic accuracy and resource attributes
  • Typographical errors
  • Consistency checking of printed documentation, online help, messages, interface resources, command-key sequences, etc.
  • Confirmation of adherence to system, input, and display environment standards
  • User interface usability
  • Assessment of cultural appropriateness
  • Checking for politically sensitive content


In DB testing we need to check for:

  1. The field size validation
  2. Check constraints.
  3. Indexes are done or not (for performance related issues).
  4. Stored procedures.
  5. The field size defined in the application is matching with that in the db.
  6. We can check whether all the data from the application is being inserted into the database properly, and whether the database imposes constraints on the data, i.e. database integrity.
  7. Database testing tests various aspects of the data, such as its functioning, performance and loading. It also checks for and removes data redundancy.
  8. Database testing can be done in two ways: testing the backend database by inserting values through the frontend application and seeing whether they have been inserted correctly or not, and at the same time inserting values into the backend database directly and seeing them in the frontend application.
  9. We can retrieve the data with SELECT statements and at the same time insert data with INSERT statements, and check whether the changes have taken effect or not.
  10. In manual database testing we type the query and check whether the table gives the expected result or not.
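Points 2, 6 and 8 above can be sketched against an in-memory SQLite database; the `users` schema and the `app_register` function standing in for the application's insert path are made-up examples:

```python
# DB testing sketch: insert through the "application" path, verify the
# backend directly, and confirm that a CHECK constraint is enforced.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE users (
        id INTEGER PRIMARY KEY,
        name TEXT NOT NULL,
        age INTEGER CHECK (age BETWEEN 18 AND 60)
    )
""")

def app_register(name, age):
    """Stand-in for the application's insert path (point 8 above)."""
    conn.execute("INSERT INTO users (name, age) VALUES (?, ?)", (name, age))
    conn.commit()

# Insert via the "frontend" and check the backend directly.
app_register("alice", 30)
row = conn.execute(
    "SELECT name, age FROM users WHERE name = 'alice'").fetchone()

# The DB must reject data violating its constraints (points 2 and 6).
try:
    app_register("bob", 99)
    constraint_enforced = False
except sqlite3.IntegrityError:
    constraint_enforced = True
```

The same pattern works in reverse: insert directly with SQL and confirm the row appears correctly in the application's UI.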


Truncate removes all the rows from the table and cannot be rolled back, while delete removes all/specific rows from the table and can be rolled back.
Also truncate resets the high water mark.

A common misconception is that they do the same thing. Not so. In fact, there are many differences between the two. DELETE is a logged operation on a per-row basis. This means that the deletion of each row gets logged and the row physically deleted. You can DELETE any row that will not violate a constraint, while leaving the foreign key or any other constraints in place. TRUNCATE is also a logged operation, but in a different way. TRUNCATE logs the deallocation of the data pages in which the data exists. The deallocation of data pages means that your data rows still actually exist in the data pages, but the extents have been marked as empty for reuse. This is what makes TRUNCATE a faster operation to perform than DELETE. You cannot TRUNCATE a table that has any foreign key constraints. You will have to remove the constraints, TRUNCATE the table, and reapply the constraints.

The difference between the two is that the TRUNCATE command is a DDL operation that just moves the high water mark and produces no rollback. The DELETE command, on the other hand, is a DML operation, which will produce a rollback and thus take longer to complete.
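The rollback behavior of DELETE can be demonstrated with SQLite, chosen here only because it needs no setup; note that SQLite has no TRUNCATE statement, so only the DELETE half of the comparison is shown:

```python
# DELETE is a logged, per-row DML operation, so it can be rolled back
# inside an explicit transaction.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.isolation_level = None                 # manage transactions by hand
conn.execute("CREATE TABLE t (x INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(1,), (2,), (3,)])

conn.execute("BEGIN")
conn.execute("DELETE FROM t")               # logged, row-by-row deletion
assert conn.execute("SELECT COUNT(*) FROM t").fetchone()[0] == 0
conn.execute("ROLLBACK")                    # the deletes are undone

remaining = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
```

After the rollback all three rows are back, which is exactly the behavior a DDL TRUNCATE (in databases that have it) does not offer.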


A desktop application runs on personal computers and workstations, so when you test a desktop application you are focusing on a specific environment. You will test the complete application broadly in categories like GUI, functionality, load, and backend (i.e. DB).

In a client server application you have two different components to test. The application is loaded on the server machine, while the application exe is installed on every client machine. You will test broadly in categories like GUI on both sides, functionality, load, client-server interaction, and backend. This environment is mostly used in intranet networks. You are aware of the number of clients and servers and their locations in the test scenario.

A web application is a bit different and more complex to test, as the tester doesn't have that much control over the application. The application is loaded on a server whose location may or may not be known, and no exe is installed on the client machine; you have to test it on different web browsers. Web applications are supposed to be tested on different browsers and OS platforms, so broadly a web application is tested mainly for browser compatibility and operating system compatibility, error handling, static pages, backend testing and load testing.

Standards for Software Test Plans

Several standards suggest what a test plan should contain, including the IEEE.

The standards are:

IEEE standards:

  • 829-1983 IEEE Standard for Software Test Documentation
  • 1008-1987 IEEE Standard for Software Unit Testing
  • 1012-1986 IEEE Standard for Software Verification & Validation Plans
  • 1059-1993 IEEE Guide for Software Verification & Validation Plans

What is good code?

A code which is:

  1. bug free
  2. reusable
  3. independent
  4. low in complexity
  5. well documented
  6. easy to change is called good code

What type of metrics would you use?

  1. QAM: Quality Assurance Metrics
  2. TMM: Test Management Metrics
  3. PCM: Process Compatibility Metrics

How involved where you with your Team Lead in writing the Test Plan?

As per my knowledge, team members are usually out of scope while preparing the test plan; the test plan is a higher-level document for the testing team. A test plan includes purpose, scope, customer/client scope, schedule, hardware, deliverables, test cases, etc.

The test plan is derived from the PMP (Project Management Plan). Team members just go through the test plan, from which they come to know all their responsibilities and the deliverables for their modules.

The test plan is an input document for the whole testing team as well as the test lead.

What processes/methodologies are you familiar with?


  1. Spiral methodology
  2. Waterfall methodology (these two are older methods)
  3. Rational Unified Process (RUP), from IBM
  4. Rapid Application Development (RAD), from Microsoft

What is globalization testing?

The goal of globalization testing is to detect potential problems in application design that could inhibit globalization. It makes sure that the code can handle all international support without breaking functionality that would cause either data loss or display problems.

What is migration testing?

Changing of an application or changing of their versions and conducting testing is migration testing.

Testing of programs or procedures used to convert data from existing systems for use in replacement systems.

What is UAT testing. When it is to be done?

UAT stands for User Acceptance Testing. This testing is carried out from the user's perspective and is usually done before a release. It is done by the end users along with testers to validate the functionality of the application. It is also called Pre-Production testing.
