Dynamic Testing

Dynamic Testing involves working with the software, giving input values and checking whether the output is as expected. These are the Validation activities. Unit Tests, Integration Tests, System Tests and Acceptance Tests are a few of the Dynamic Testing methodologies.

Dynamic Testing (or dynamic analysis) is a term used in software engineering to describe the testing of the dynamic behavior of code. That is, dynamic analysis refers to the examination of the system's response to variables that are not constant and change with time.

In dynamic testing the software must actually be compiled and run; this is in contrast to static testing. Dynamic testing is the validation portion of Verification and Validation.

Dynamic testing methodologies include: Unit Testing, Integration Testing, System Testing and Acceptance Testing.
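As a minimal sketch of the dynamic approach, the unit test below gives input values to a hypothetical `add_tax` function (invented here for illustration, not from any real library) and checks that the output is as expected, using Python's built-in `unittest`:

```python
import unittest

def add_tax(price, rate=0.08):
    """Hypothetical function under test: price plus sales tax."""
    return round(price * (1 + rate), 2)

class AddTaxUnitTest(unittest.TestCase):
    def test_typical_input(self):
        # Give an input value and check the output is as expected.
        self.assertEqual(add_tax(100.0), 108.0)

    def test_zero_price(self):
        # Boundary input: a zero price should stay zero.
        self.assertEqual(add_tax(0.0), 0.0)

if __name__ == "__main__":
    unittest.main(exit=False)
```

The code must actually be run for this check to happen, which is exactly what distinguishes dynamic from static testing.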


Volume Testing

Volume Testing belongs to the group of non-functional tests, which are often misunderstood and/or used interchangeably. Volume testing refers to testing a software application with a certain volume of data. In generic terms this volume can be the size of the database, or it can be the size of an interface file that is the subject of volume testing. For example, if you want to volume test your application with a specific database size, you will expand your database to that size and then test the application's performance on it.
Another example is when there is a requirement for your application to interact with an interface file (this could be any file, such as .dat or .xml); the interaction could be reading from and/or writing to the file. You will create a sample file of the size you want and then exercise the application's functionality with that file to check performance.

At first glance it might appear that volume testing should be fairly high up on the V model, especially if we take the above definition. Certainly it comes under non-functional requirements testing and Performance testing.

Volume testing can (and should) be used in component testing. This would be closely correlated with the code, so might at this level be connected to Dynamic Analysis.
Volume testing will also be undertaken (normally) as part of the User Acceptance test. Stress testing is closely related, as it seeks to find out how the software will behave beyond its specified limits.
Volume testing needs two things. Firstly clear expected outcomes of how the software is to behave for a given level of data. Secondly, data, and lots of it.
The expected behaviour at various levels should be in the specification documentation. Ideally this will say something like "the customer's details will be returned on the screen within 3 seconds, from a database with 1 million customer records." This gives the tester a benchmark on which to base a test case.
The second requirement, data, needs either real-life data or simulated data. Usually, real-life data will come in the form of a customer database that has had private information, such as names and account numbers, scrambled. Alternatively, records can be created from scratch using automated tools or by adding records directly to the database with SQL.
As with all testing, proper records must be kept showing the inputs, outputs and other information, to aid potential debugging and for audit purposes.
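The two requirements above — simulated data created with SQL, and a specification-style benchmark to test against — can be sketched as follows. The table layout and record count are illustrative assumptions (a scaled-down stand-in for the "1 million records within 3 seconds" style of requirement), using an in-memory SQLite database:

```python
import sqlite3
import time

# Simulated data: records created from scratch with SQL, standing in
# for a scrambled copy of a real customer database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
NUM_RECORDS = 100_000  # scaled down from the 1-million-record example
conn.executemany(
    "INSERT INTO customers VALUES (?, ?)",
    ((i, f"customer-{i}") for i in range(NUM_RECORDS)),
)
conn.commit()

# Benchmark: the expected outcome from the specification gives the
# tester a concrete pass/fail criterion for the test case.
start = time.perf_counter()
row = conn.execute(
    "SELECT name FROM customers WHERE id = ?", (NUM_RECORDS - 1,)
).fetchone()
elapsed = time.perf_counter() - start

assert row == (f"customer-{NUM_RECORDS - 1}",)
assert elapsed < 3.0, f"lookup took {elapsed:.3f}s, benchmark is 3s"
```

Recording `elapsed` alongside the inputs and outputs gives exactly the kind of audit trail the text calls for.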

What is Compatibility Testing?

Computer software has become an integral part of people’s lives although many people are still not really aware of it. Various software have been developed to help people in the way they do their jobs, transact business, study, travel, shop, and in many other important activities. Manufacturing operations are more efficient and less costly, transactions are faster and convenient, medical examinations are more thorough and less painful or invasive and many hazardous activities are now safer because of the various software and hardware installed in the machines used.
The use of software, be it application software or system software, is so widespread that software development has become big business in a competitive environment. A great deal of software is being developed all over the world. Marketing of the product is also worldwide, as long as there is internet access, and delivery is as easy as downloading it to a computer after paying online.
Online purchase of software can be risky for both the buyer and the seller. While the buyer may waste his or her money on defective software, the seller who sells defective software may lose not only business but also reputation and market presence. A very good piece of feedback, testimonial or other endorsement for the product makes everyone with internet access worldwide a potential buyer.
To survive the competitive market, it is a must that products provide buyers with true value for the amount they paid. Various software tests are performed at the different stages of development and on the finished product prior to release, to ensure a competitive edge in terms of quality, compatibility, cost and delivery.
Compatibility Testing
Compatibility testing is one of the software tests that ensure complete customer satisfaction. Compatibility testing is a non-functional test that ensures an application operates or performs as expected for all the intended customers. A non-functional test checks whether software operates according to its specified product characteristics, such as usability, documentation, compliance, reliability and other non-functional characteristics.
As a non-functional test, compatibility testing is not concerned with the resulting output per input. A compatibility test is intended to certify or validate that the software runs or operates properly with many types of computers and peripherals, many versions of operating systems and in different environments. This test is a very efficient way of verifying that the software works under any type of computing environment. After all, the end user wants high-quality software that works on the system without any glitch.
A compatibility test is conducted by using the software under different hardware and software conditions. The computing environment where you conduct this test is important, as the software product must work in a real environment without any errors or bugs. You may want to consider the following computing environments before conducting compatibility tests:
Operating systems (Windows or Linux)
Hardware peripherals (printers, scanners, peripherals, drives, etc)
Browsers (Firefox, Internet Explorer, Opera)
Database content
Computing capacity
Other related system software
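One way to make sure no combination of the environments above is skipped is to enumerate them into a test matrix. The environment names below come from the list above, but the `check_compatibility` stub is an illustrative placeholder for real provisioning and test hooks, not an actual API:

```python
from itertools import product

# Illustrative environment dimensions drawn from the list above.
OPERATING_SYSTEMS = ["Windows", "Linux"]
BROWSERS = ["Firefox", "Internet Explorer", "Opera"]

def check_compatibility(os_name, browser):
    """Placeholder: a real hook would launch the application in the
    given environment and verify it behaves as expected."""
    return True

# Build the full cross-product so every combination is covered.
matrix = list(product(OPERATING_SYSTEMS, BROWSERS))
results = {(o, b): check_compatibility(o, b) for o, b in matrix}

failures = [env for env, ok in results.items() if not ok]
assert not failures, f"incompatible environments: {failures}"
```

Adding a dimension (say, database versions) is just another list fed into `product`, which is why the matrix grows so quickly — and why comprehensive compatibility testing is expensive, as discussed below.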
To conduct effective compatibility testing, software companies should either train and maintain a separate, independent testing group or hire the services of a third party to perform the test on the software just prior to delivery. The advantage of having the testing done outside is that the company need not maintain a group of testers or the wide range of hardware and software used in testing.
Maintaining the many different hardware and software configurations needed for comprehensive compatibility testing is expensive. The disadvantage of not having an in-house compatibility testing facility is scheduling: if there are other clients in the queue ahead of you, you have to include the waiting time in the delivery schedule.
Some companies maintain a limited compatibility testing facility and hire the services of testing companies for the tests they cannot conduct in their own facilities. It depends on the size and business volume of the company: a company developing a large volume of software products can easily maintain a well-equipped testing facility, while a small company will need to weigh the cost of maintaining one.
Need for Compatibility Testing
A software product that is compatible with the different computing environments at the user's end will be the most successful one, both commercially and structurally. Compatibility testing is a great way to check that your software works everywhere, including on users' computer systems. If it works well on a client machine, you will have ascertained that the software has achieved its planned objectives.
The major reasons why software developers conduct various tests during and after software development are to ensure customer satisfaction and to plan product improvement. At any time, current or future competitors could come up with a superior product that would leave yours sitting on the market shelves instead of selling. So, how does compatibility testing ensure customer satisfaction and continual product improvement? Here are some of the reasons that justify the cost of compatibility testing:
To ensure that the product meets the requirements – What the end user or customer expects or desires from the software is critical for the success of a software application. Before design and development begin, the requirements or standards for quality need to be set and agreed upon by both the developer and the end user. Any development activities that follow should comply with these needs and requirements. A compatibility test with a wide range of software and hardware systems will most likely ensure that the software meets the needs and requirements the developers have committed to meet.
Improve sales and marketing – There is no better product endorser than a delighted customer. Give customers all that they expect from the product and you have happy and satisfied customers. Who would not be delighted to learn that the new software functions well, without any problems, and very quickly upon installation on the computer?
Maintain good product or company reputation – A reputation for being consistently compliant with the specified product requirements will propel the company to success.
Reduce cost of quality and lower product cost – The costs of attending to customer complaints, product recalls and repairs are counted as the cost of maintaining quality. An excessive number of customer complaints not only ruins the product's and the company's reputations, it also adds to the product's cost. Unresolved or unattended customer complaints may lead to the filing of legal suits against the developer, which would also bring increased attention to the supposed product failure. Reducing customer complaints to an almost insignificant level results in substantial savings for the company, which can eventually translate to decreased software development cost and, consequently, a lower product cost.
Product improvement – Testing is not only a tool to control product quality. The test results and the records of errors or discrepancies found become good references for software developers, showing how they can develop good products on a continuous basis. Product development should also take into consideration the certification of product compatibility with the various hardware and software specified in the test plan; this certification of compatibility encourages buyers to consider buying the product.
Beat the competitor – Good products with great features and stability will help software developers beat the existing competition.
Shareholder satisfaction – Better products produced at lower production cost, together with increased sales volume, will increase profits and satisfy the company's shareholders as well.
Compatibility testing is costly and time-consuming. Is it necessary to subject developed software to this kind of test? What are the advantages and disadvantages of conducting compatibility testing? Here are some of the significant advantages and disadvantages of compatibility testing.
Compatibility Testing Advantages
An efficient test ensures real compatibility among different computing environments, because testers install and use the application on different hardware systems and under different computing environments. In addition, a truly dynamic test reveals quite a lot about the workability and stability of the software.
This test helps you enhance your firm's image and reputation as a maker and supplier of high-quality, productive software. Building a good reputation is always difficult, and a software product is very difficult to sustain and manage when it is unstable, unworkable or incompatible with differing working environments. Even a single error in the architecture might spoil your reputation.
Compatibility Testing Disadvantages
Compatibility testing raises the cost and time of production. You will need to maintain staff, pay their salaries, keep different types of hardware and software systems up to date, and pay for testing services. If you do not have a testing facility, you will need to hire professional testers to do the job for you.
Test delays are common with compatibility testing, which may eventually result in prolonged delivery schedules. Conducting this test is very time-consuming, because testing has to be conducted with the various kinds of hardware and software that are currently in use by the end users.
One definition of quality is the ability of the product to function in the manner and mode expected by the buyer and the end user. Any testing for quality will add to the cost of production or development. However, incompatibility is a discrepancy or failure that needs immediate action, before any product is delivered to the end user.
Compatibility testing ensures that a gross and costly error, such as the failure to operate due to the issue of incompatibility among different computing environments, is detected and corrected, before the software is delivered to the end users.

What is Beta Testing?

Beta testing is the last important step that software developers conduct before they launch new software. Software systems at the beta stage are not fully developed, so you should not expect them to be completely functional. However, a beta does give you a first look and a brief glimpse of what the software can do for its users before its final launch in the market.
Testing software before it is commercially released into the market is not only vital but also very critical. Before release, testers should ensure that there are no errors and bugs and that the application works without any problems on any supported system. Beta testing enables the end users to test the software to determine whether it will satisfy their actual needs.
You should note that the beta test is part of the normal development of the software system, along with alpha testing, to determine whether the software is acceptable or not. There are some essential differences between alpha and beta testing, and it is important to know them in detail so that you can avoid any confusion. In essence, both have a similar goal: to allow the end user to test the software. However, alpha testing runs in a simulated laboratory environment in preparation for beta testing.
Software developers use alpha testing to check if there are any bugs or errors in the system. Beta testing is broadly similar to alpha testing; the main difference between the two is that beta testing is conducted by a limited number of end users who do not belong to the company that produces the software in question. The users who conduct the beta testing enable the company to find out whether the software under test needs further improvement. In some cases, the software developers may provide the software for testing to anyone who wants to test it; obviously, the developer will be only too happy to get as much feedback as possible from potential users.
Software developers refer to both alpha and beta tests as immersion tests, because they do not use the most basic tenets of testing for these procedures. Instead, both of these tests are conducted in an external environment to see whether outside conditions will have an impact on the efficiency of the software. Alpha and beta tests will also determine whether the software would indeed be useful to its potential clients. In essence, beta testing is a critical procedure that allows software developers to test their products under field conditions and by the people who will actually use them.
How to Use Beta Tests
Beta tests are the most critical tests that help software companies make an objective assessment of their product's ability to perform under variable conditions. If the developed software is reported to have several bugs or issues, then the software developers will need to introduce further modifications. Any bugs, errors or problems noticed during the beta tests will have something to do with the functionality and architecture of the software. Therefore, software developers need to work quickly to collect feedback from the different testers so that they can correct possible errors, eradicate any bugs and make the product viable, stable and robust before its introduction to the global market.
In spite of foolproof testing procedures, developers may still need to make necessary modifications to the software and conduct regression tests. Usually, software developers conduct regression tests at an earlier phase of software development. However, it is still necessary to employ this procedure, because modifying any software may cause regression in any of the previous functionalities. It is possible to conduct a regression test at any time during the developmental phase; in almost all cases, an automated process will allow the tester to conduct the test with least hassles.
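The automated regression testing described above can be sketched as re-running a saved suite of previously verified input/output pairs against the modified code. The `parse_version` function here is a made-up example for illustration, not anything from the text:

```python
# Hypothetical function that was modified after beta feedback.
def parse_version(tag):
    """Parse a tag like 'v1.2.3' into a tuple of ints."""
    return tuple(int(part) for part in tag.lstrip("v").split("."))

# The regression suite: inputs paired with outputs that were verified
# in earlier phases. After any modification, every pair is re-checked
# so the change does not regress previous functionality.
REGRESSION_SUITE = [
    ("v1.0.0", (1, 0, 0)),
    ("2.10.3", (2, 10, 3)),
    ("v0.9", (0, 9)),
]

for tag, expected in REGRESSION_SUITE:
    got = parse_version(tag)
    assert got == expected, f"regression: {tag!r} -> {got}, expected {expected}"
```

Because the suite is just data plus a loop, it can be re-run automatically at any point in the development phase with minimal hassle, which is the point the paragraph above makes.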
In practice, release notes come together with the beta version of the software. They give you much of the necessary information about the software. Some of the most common information that comes with a beta version is the exact version number, the list of changes carried out compared with the previous version, descriptions of known problems, and other related information. The release notes will also contain the technical and system requirements needed to conduct the beta test.
However, you will still need to know some necessary steps before conducting beta-version tests. Initially, you will need to download and install the software on your system and spend some time getting to know and understand the basic features so that you can give accurate feedback. The next essential step is to create a report of all bugs and errors that you encounter while operating the software product. In addition, you should also inform the software developers of any configuration problems in the software system. This will allow them to address configuration concerns and make the product problem-free before releasing it to the market.
However, you should remember that the beta version is not the final version of the product, so no company can provide any guarantee on the security of the product tested. There have been instances of beta-version software disrupting computer systems, with eventual loss of the data and information stored on them. Even so, beta testing is still the only recognized testing process designed to test the software before its release to the market. Beta testing provides many benefits to software developers: it is more practical, and the fact that future users conduct such tests lends an air of authenticity to the efficiency of the product.
Beta Testing – Advantages and Pitfalls
Beta testing is a unique testing process that helps software developers test a particular software product before its final release to the market. Beta testing offers several, unique advantages to both software developers and users. Here are some of the most significant benefits of using beta testing:
For users, the most significant benefit is that they can be among the first to get access to the latest software products before they arrive on the market.
User’s opinions and test reports are invaluable because software developers can use them to improve the quality of software products tested.
In addition, you can expect to get free technical support for any issues that concern your system, when you are using the beta software on your computer.
Communicating with other beta users is also possible, so that you can get more in-depth information about the different software products that are out in the market today.
Some software developers offer free versions of the software tested, in recognition of your investment of time, money and energy.
However, you should know and study the advantages offered by the software developers before conducting the beta tests.
The main purpose of conducting beta testing is to find out whether the software functions in a proper manner. Thus, you should report any problems that you encounter while testing the product. There are many simple steps that you should know so that you can provide the best possible suggestions to the software developer.
Reporting on beta tests is very important. To report any problems, you should create a comprehensive description of all the problems you faced while running the tests. This means that you should know how to describe the errors and bugs that occurred on your computer system. Accurate and specific reporting is mandatory, because the software product is very complex, and software developers spend a lot of time and money creating their products.

Hence, they should know exactly where the problem lies. You may also need to describe the events that caused the problem and the details of the hardware used to test the software. Beta testers can send their reports to the software developers either through email or through the server. You can even discuss any problems in the forum created and hosted by the software developer.
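One minimal way to keep such reports accurate and specific is a structured record whose fields mirror the information asked for above: a description, the events that caused the problem, and the hardware used. This is an illustrative sketch, not a prescribed report format:

```python
from dataclasses import dataclass, field

@dataclass
class BetaBugReport:
    """Structured beta-test report: a description of the problem,
    the steps that caused it, and the hardware used for the test."""
    summary: str
    steps_to_reproduce: list = field(default_factory=list)
    hardware: str = "unspecified"
    software_version: str = "beta"

    def as_email_body(self):
        # Render the report as plain text suitable for an email.
        steps = "\n".join(
            f"  {i + 1}. {s}" for i, s in enumerate(self.steps_to_reproduce)
        )
        return (f"Summary: {self.summary}\n"
                f"Version: {self.software_version}\n"
                f"Hardware: {self.hardware}\n"
                f"Steps:\n{steps}")

report = BetaBugReport(
    summary="Crash when saving a file with a long name",
    steps_to_reproduce=["Open the editor", "Save as a 300-character name"],
    hardware="x86-64 laptop, 8 GB RAM",
)
```

Because every report carries the same fields, the developer can sort and compare reports from many testers instead of parsing free-form prose.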
Why is Beta Test Important?
Beta testing is one of the most important and mandatory tests that one will need to undertake to ensure that the software is safe and completely free from errors. Beta testing offers many benefits to the software developers too.
The most important one relates to the advanced details like understanding user needs, requirements and demands from the software.
It is also possible to know the pulse of the software users: what features they like, why they use the product and how they use it.
Software developers can provide their software products to end users to expose them to real life usage. This will help software developers to determine whether the product can withstand the different configurations of the end-user computers.
Beta testing at the user end will also help software developers test the product under different conditions.
Beta testing will also help the software developers to know and understand end user opinion and experience.
Software developers will also understand whether the end user benefited from the features included in the software.
Software developers may also get a chance to estimate possible profit levels based on the feedback received from the end users.

What is Alpha Testing?

Alpha testing makes use of prototypes to test the software at its alpha stage. However, you cannot expect the software to possess the complete functionality for which it is designed at this stage, because alpha testing is usually conducted to ensure that the software being developed provides all the necessary core functions, accepts all inputs and gives the expected outputs.
Alpha tests are conducted in the software developer's offices, or on designated systems, so that developers can monitor the test and list any errors or bugs that may be present. Thus, some of the most complex code is developed during the alpha testing stage. Furthermore, the project manager handling the testing process will need to talk to the software developers about integrating the results procured from the alpha testing process with future software design plans, so that potential future problems can be avoided.
Alpha tests are recommended when software developers can conduct them in a laboratory, as this allows them to see the software working in a practical setting. However, the number of consumers who test the software should be strictly limited, because the software is not yet ready for a commercial launch at this point.
Before consumers receive the software system, you should ensure that it provides plenty of improvements in terms of stability and robustness. A software system that possesses such advantageous features will bring a positive image to your business. The beta stage is when you can ask consumers to test the software to see whether it is acceptable or not.
Alpha Testing Expectations
The team that handles the testing process will have many expectations of the alpha testing process. Alpha tests are conducted by the software developers inside a laboratory, or somewhere the testing environment can be controlled. At the same time, many developers have also recognized the need to hand testing over to customers, because they are the users who best understand their own expectations of the software being tested. Thus, some customers will agree to test the software during the alpha stage.
When running the software for testing, it is inevitable that testers will encounter a number of minor problems, errors and, eventually, frustrating experiences; the software developers will already have taken care of the most complex and serious problems as they encountered them. Thus, the software at its alpha stage will have minimal functionality and only a basic ability to accept inputs and create the expected outputs. The users who test the software must be aware of this fact so that they can create unbiased and accurate reviews of the software's problems and other design issues.
Software engineers should take responsibility for educating all users about the limited capability of the software at this point. In addition, they should include an advisory letter with the software package that mentions the limited capability of the software. Customers should also understand that they will most likely experience frustrating problems if they run the software at its alpha stage as if it were a finished product.
The real purpose of the alpha test by the consumer is to provide the necessary feedback and the end users' perspective so that software developers can address these issues. Asking for customer feedback about the problems encountered, and the things customers expect once the software is launched in the market, makes a lot of sense.
Advantages of Testing the Software in EPRI
Although alpha software is not yet fully functional, a quality assurance tester at EPRI (Electric Power Research Institute) Corporate Software Quality will still run a test so that the software sent out to a user does not have any serious errors or complications. A failure to address errors or bugs could lead to a number of potential problems for the user. EPRI tests the software first to ensure that the user's system will never falter, whether from an error in the software or due to a virus. Through the testing process, the software testers also ensure that EPRI maintains its reputation before the software is sent out to the user.
One should note that the entire software program and its documentation, at any stage of its lifecycle, must carry the EPRI advisory before being passed on to users for further testing. EPRI uses a series of e-mails to provide progress reports about the software. You can expect well-structured reports that provide the necessary information for further improvement.
In addition, EPRI works well with software developers and managers, because this makes it possible to complete the software project within a specified period. EPRI can even accommodate occasional special reports; for example, it can provide earlier feedback about the usability and functionality of the software when you are in a hurry. You can also discuss specific details such as the navigation scheme and the overall appearance and look of the software.
The entire testing process takes about six to eight days. Occasionally, it may take more time when there are complicated errors in the software code. On the flip side, software testing through EPRI could take an unreasonably long time, especially during the peak season when there is a backlog of tests due to increased demand.
Alpha Testing Advantages
Alpha testing provides better insight into the software's reliability and robustness at its early stages. You can find out whether the software has real potential to work at the alpha stage. You will even be able to detect many of the serious errors quite easily during the alpha test, because other, minor design structures are yet to be integrated. For this reason, alpha testing provides early detection of design issues and errors, preventing potential problems in the future.
Alpha testing is an effective software testing method. You conduct this testing to ensure that the user will get a high-quality product in the form of complete functionality and stability. The features the technical team looks for in software are efficiency, maintainability, usability, compatibility and reliability, among others.
However, the most common requirement is that the software provided to the user should work properly according to the purpose for which it was created. Conducting alpha tests will reveal whether the software has all the necessary features to pass the strict quality standards of EPRI and even ISO. However, the overall quality of the software is never absolute; it just needs to meet the set of specifications expected of it.
Limitations of Alpha Testing
Like any other software testing process, alpha testing has some limitations in its use, just as it offers some specific advantages when compared to other testing procedures. Since alpha testing makes use of undeveloped prototypes, this type of testing is only good for the early stages of the software. It is almost impossible to conduct an in-depth reliability test, an installation test, or even documentation tests at this point, because the software is still undeveloped. Therefore, it is very difficult to address these concerns accurately at the alpha test stage.
The alpha test allows software developers to see the true potential of the software, including its design issues. Software developers can fix potential code and structural problems right during the alpha test process. However, you should be aware of several critical factors about alpha testing before sending the software to users.

Static Testing

The Verification activities fall into the category of Static Testing. During static testing, you have a checklist to check whether the work you are doing is going as per the set standards of the organization. These standards can be for Coding, Integrating and Deployment. Reviews, inspections and walkthroughs are static testing methodologies.
Static testing is a form of software testing where the software isn't actually used. This is in contrast to dynamic testing. It is generally not detailed testing, but checks mainly for the sanity of the code, algorithm, or document. It is primarily syntax checking of the code and manual reading of the code or document to find errors. This type of testing can be used by the developer who wrote the code, in isolation. Code reviews, inspections and walkthroughs are also used.

From the black box testing point of view, static testing involves review of requirements or specifications. This is done with an eye toward completeness or appropriateness for the task at hand. This is the verification portion of Verification and Validation.
Bugs discovered at this stage of development are less expensive to fix than later in the development cycle.
Static code analysis is the analysis of computer software that is performed without actually executing programs built from that software (analysis performed on executing programs is known as dynamic analysis). In most cases the analysis is performed on some version of the source code, and in other cases on some form of the object code. The term is usually applied to the analysis performed by an automated tool, with human analysis being called program understanding or program comprehension.
The sophistication of the analysis performed by tools varies from those that only consider the behavior of individual statements and declarations, to those that include the complete source code of a program in their analysis. Uses of the information obtained from the analysis vary from highlighting possible coding errors (e.g., the lint tool) to formal methods that mathematically prove properties about a given program (e.g., its behavior matches that of its specification).
Some people consider software metrics and reverse engineering to be forms of static analysis.
A growing commercial use of static analysis is in the verification of properties of software used in safety-critical computer systems and locating potentially vulnerable code.
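As an illustration of what such an automated tool does, the following sketch uses Python’s ast module to flag bare `except:` clauses without ever running the code under analysis; the checked source and the rule itself are made up for this example.

```python
import ast

# Hypothetical code under analysis (never executed by the checker).
SOURCE = """
def read_config(path):
    try:
        return open(path).read()
    except:
        return None
"""

def find_bare_excepts(source):
    """Walk the syntax tree and flag bare 'except:' clauses."""
    warnings = []
    tree = ast.parse(source)
    for node in ast.walk(tree):
        # A bare 'except:' has no exception type attached to the handler.
        if isinstance(node, ast.ExceptHandler) and node.type is None:
            warnings.append(f"line {node.lineno}: bare 'except:' hides errors")
    return warnings

for warning in find_bare_excepts(SOURCE):
    print(warning)
```

Real tools such as lint apply many such rules at once, but the principle is the same: the program text is examined, not its runtime behavior.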
Formal methods is the term applied to the analysis of software (and hardware) whose results are obtained purely through the use of rigorous mathematical methods. The mathematical techniques used include denotational semantics, axiomatic semantics, operational semantics, and abstract interpretation.
It has been proven that, barring the hypothesis that the state space of programs is finite and small, finding possible run-time errors, or more generally any kind of violation of a specification of a program’s final result, is undecidable: there is no mechanical method that can always answer truthfully whether a given program may or may not exhibit runtime errors. This result dates from the work of Church, Gödel and Turing in the 1930s (see the halting problem and Rice’s theorem). As with most undecidable questions, one can still attempt to give useful approximate solutions.
Some of the implementation techniques of formal static analysis include:
  • Model checking considers systems that have finite state or may be reduced to finite state by abstraction;
  • Abstract interpretation models the effect that every statement has on the state of an abstract machine (i.e., it ‘executes’ the software based on the mathematical properties of each statement and declaration). This abstract machine overapproximates the behaviours of the system: the abstract system is thus made simpler to analyze, at the expense of incompleteness (not every property true of the original system is true of the abstract system). If properly done, though, abstract interpretation is sound (every property true of the abstract system can be mapped to a true property of the original system).
  • Use of assertions in program code as first suggested by Hoare logic. There is tool support for some programming languages (e.g., the SPARK programming language (a subset of Ada) and the Java Modeling Language — JML — using ESC/Java and ESC/Java2).
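The assertion idea above can be illustrated informally in Python. Unlike SPARK or JML annotations, which are checked statically, plain assert statements are only checked at run time, so this is an analogue of the technique, not an equivalent; the function itself is made up for the example.

```python
def integer_sqrt(n: int) -> int:
    """Largest r with r*r <= n, with pre/postconditions stated as assertions."""
    # Precondition (in SPARK/JML this would be a static contract annotation).
    assert n >= 0, "precondition: n must be non-negative"
    r = 0
    while (r + 1) * (r + 1) <= n:
        r += 1
    # Postcondition: r is indeed the integer square root of n.
    assert r * r <= n < (r + 1) * (r + 1), "postcondition violated"
    return r

print(integer_sqrt(10))  # 3
```

A static checker such as ESC/Java2 tries to prove such pre- and postconditions from the code alone, rather than waiting for a failing run.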

Black Box Testing: Types and techniques

Black box testing treats the system as a “black box”, so it doesn’t explicitly use knowledge of the internal structure or code. In other words, the test engineer need not know the internal workings of the “black box”, or application.

The main focus in black box testing is on the functionality of the system as a whole. The term ‘behavioral testing’ is also used for black box testing, and white box testing is sometimes called ‘structural testing’. Behavioral test design is slightly different from black-box test design because the use of internal knowledge isn’t strictly forbidden, but it’s still discouraged.

Each testing method has its own advantages and disadvantages. There are some bugs that cannot be found using only black box or only white box testing. The majority of applications are tested by the black box method. We need to cover the majority of test cases so that most bugs are discovered through black box testing.

Black box testing occurs throughout the software development and testing life cycle, i.e. in the unit, integration, system, acceptance and regression testing stages.

Tools used for black box testing:
Black box testing tools are mainly record-and-playback tools. These tools are used for regression testing, to check whether a new build has introduced any bugs into previously working application functionality. They record test cases in the form of scripts in languages such as TSL, VBScript, JavaScript or Perl.

Advantages of Black Box Testing
– The tester can be non-technical.
– It can be used to find contradictions between the actual system and the specifications.
– Test cases can be designed as soon as the functional specifications are complete.

Disadvantages of Black Box Testing
– Test inputs need to be drawn from a large sample space.
– It is difficult to identify all possible inputs in limited testing time, so writing test cases is slow and difficult.
– There is a chance of leaving some paths unexplored during this testing.

Methods of Black box Testing:

Graph Based Testing Methods:
Every application is built up of objects. All such objects are identified and a graph is prepared. From this object graph each object relationship is identified, and test cases are written accordingly to discover the errors.

Error Guessing:
This is purely based on the previous experience and judgment of the tester. Error guessing is the art of guessing where errors may be hidden. There are no specific tools for this technique; the tester writes test cases aimed at the paths of the application where errors are most likely to hide.

Boundary Value Analysis:
Many systems have a tendency to fail at boundaries, so testing the boundary values of an application is important. Boundary Value Analysis (BVA) is a functional testing technique in which the extreme boundary values are chosen. Boundary values include maximum, minimum, just inside/outside boundaries, typical values, and error values.

– Extends equivalence partitioning
– Test both sides of each boundary
– Look at output boundaries for test cases too
– Test min, min-1, max, max+1, and typical values

BVA techniques:
1. Number of variables
For n variables, BVA yields 4n + 1 test cases.
2. Kinds of ranges
Generalizing ranges depends on the nature or type of the variables.

Advantages of Boundary Value Analysis
1. Robustness testing – boundary value analysis plus values that go beyond the limits
2. Tests Min - 1, Min, Min + 1, Nominal, Max - 1, Max, Max + 1
3. Forces attention to exception handling

Limitations of Boundary Value Analysis
Boundary value testing is effective only for variables with well-defined ranges, i.e. clear boundaries.
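The robust value selection described above (min - 1, min, min + 1, nominal, max - 1, max, max + 1) can be sketched as a small generator; the age range used here is only an illustration.

```python
def bva_values(minimum, maximum):
    """Robust boundary-value set for one variable: 7 values,
    including the out-of-range probes min-1 and max+1."""
    nominal = (minimum + maximum) // 2  # a typical in-range value
    return [minimum - 1, minimum, minimum + 1, nominal,
            maximum - 1, maximum, maximum + 1]

# A hypothetical input field accepting ages 18..60:
print(bva_values(18, 60))  # [17, 18, 19, 39, 59, 60, 61]
```

For n variables, holding all but one variable at its nominal value while each variable in turn takes its boundary values gives the 4n + 1 test cases mentioned above.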

Equivalence Partitioning:
Equivalence partitioning is a black box testing method that divides the input domain of a program into classes of data from which test cases can be derived.

How this partitioning is performed during testing:
1. If an input condition specifies a range, one valid and two invalid classes are defined.
2. If an input condition requires a specific value, one valid and two invalid equivalence classes are defined.
3. If an input condition specifies a member of a set, one valid and one invalid equivalence class is defined.
4. If an input condition is Boolean, one valid and one invalid class is defined.
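Rule 1 above, where a range yields one valid and two invalid classes, can be sketched as follows; the range 1..100 is illustrative.

```python
def partition_range(low, high):
    """One valid and two invalid equivalence classes for a range input."""
    return {
        "invalid_below": f"x < {low}",
        "valid": f"{low} <= x <= {high}",
        "invalid_above": f"x > {high}",
    }

def representatives(low, high):
    """One representative test value per equivalence class."""
    return [low - 1, (low + high) // 2, high + 1]

print(partition_range(1, 100))
print(representatives(1, 100))  # [0, 50, 101]
```

Testing one representative from each class is assumed to be as effective as testing every value in that class, which is what makes the technique economical.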

Comparison Testing:
In this method, different independent versions of the same software are compared against each other.

Mobile Application Testing Checklist.

Functional: Verify that all functional requirements have been tested as per the functional test cases of the mobile app.
UI/Design: Verify that the design of the mobile app matches the design mockup.
Performance & Stress: Verify the performance of the mobile app. Expected: performance is acceptable as per the defined criteria.

Verify that the mobile app behaves gracefully under heavy load.
Navigation: Verify the navigation of the application.
Content: Verify that the content of the mobile app is correct.

Verify that there are no spelling mistakes.
Messages: Verify that messages are context-specific and correct.

Verify that no required message is missing.
System Crash/Force Close: Verify that the application preserves sufficient state information to cope with a forcible close by the system. Expected: the application neither loses any information it implies would be preserved, nor becomes difficult to use subsequently, as a result of a forcible closure by the system.
Installation: Verify that the application can be installed successfully.

Verify that the version number matches the one specified during submission.

Verify that the application has installed successfully by navigating to the area on the phone where new applications are installed. Expected: the application appears in the installed-applications list.
Uninstallation: Verify that the application can be uninstalled successfully.
Network: Verify the behaviour of the application when there is a network problem while the user is performing a data call. Expected: the user gets a proper error message such as “Network error. Please try after some time”.

Verify that the user can establish a data call once the network is back in action.
Voice Call Handling: Verify that the user can accept a voice call while the application is running and can resume the application from the same point.

Verify that the user can reject a voice call while the application is running and can resume the application from the same point.

Verify that the user can establish a voice call while an application data call is running in the background.
SMS Handling: Verify that the user gets an SMS alert while the application is running.

Verify that the user can resume the application from the same point after reading the SMS.
Unmapped Keys: Verify that unmapped keys do not work on any screen of the application.
Application Logo: Verify that the application logo with the application name is present in the application manager and that the user can select it.
Splash: Verify that a splash screen is displayed when the user selects the application logo in the application manager.

Verify that the splash screen does not remain for more than 3 seconds.
Low Memory: Verify that the application displays a proper error message when device memory is low and exits gracefully from the situation.
Clear Key: Verify that the clear key navigates the user to the previous screen.
End Key: Verify that the end key navigates the user to the native OEM screen.
Visual Feedback: Verify that visual feedback is given when the response to any action takes more than 3 seconds.
Continual Keypad Entry: Verify that continual keypad entry does not cause any problem in the application.
Exit Application: Verify that the user can exit the application through every exit mode (flap, slider, end key, or the exit option in the application) and from any point.
Charger Effect: Verify that, while the application is running, inserting and removing the charger does not cause any problem, and that a proper message is displayed when the charger is inserted.
Low Battery: Verify that a proper message telling the user the battery is low is displayed when the application is running and the battery is low.
Removal of Battery: Verify that removing the battery while an application data call is in progress does not cause interruption, and that the data call completes after the battery is inserted back in the device.
Battery Consumption: Verify that the application does not consume battery excessively.
Application Start/Restart: 1. Find the application icon and select it. 2. Press a button on the device to launch the app. 3. Observe the application launch. Expected: the application takes no more than 25 seconds to start.

Verify that the application appears in the task manager after starting it.

Verify that the application does not appear in the task manager after closing it.

Verify that the application appears in the task manager after restarting it.

Verify that the application notifies the user about a long launch time. Expected: if the application takes longer than 5 seconds to launch, a progress bar or a message is displayed to tell the user what is happening.
Application Side Effects: Verify that the installed application does not interfere with other applications on the device.
OTA Install: Verify that the application installs via OTA.
File System – Memory During Run: Verify that the application correctly handles out-of-memory exceptions during execution. Expected: the application handles out-of-memory exceptions correctly, and a warning advises the user about the lack of memory when a file is being stored.
HTTP – Invalid Network Connection: Verify that the application can handle the network connection being invalid or unusable.
Network Connectivity – Airplane Mode: Verify that, when the application uses network capabilities, it handles the device being in airplane mode. Expected: the application gives a meaningful error message indicating that the device is in airplane mode and the application cannot run successfully.
Timed Event – Expiry During Application Run: Verify that the application behaves correctly on expiry of a timed event while the application is running.
Timed Event – Expiry During Application Suspend: Verify that the application resumes correctly from a suspended state on expiry of a timed event, and then behaves correctly after being resumed.
Timed Event – Expiry During Application Exit: Verify that the application starts correctly from an exited state on expiry of a timed event. Expected: the application starts, or the user is presented with a start option, once the designated time has expired, and the application behaves correctly when started.

Memory Card – Insertion: Verify that the application works correctly following a memory card insertion while the application is suspended and resumed. Expected: the application continues to operate as designed per its specification and is not affected by the memory card insertion or mounting/unmounting.
Memory Card – Screen Behaviour: Verify that screens with memory card functionality work correctly both with the memory card inserted and following its removal.

Readability: Verify that the application content is readable.
UI – Read Time: Verify that each screen is visible for the time necessary to comfortably read all its information.
UI – Consistency: Verify that the application UI is consistent and understandable throughout, e.g. common series of actions, action sequences, terms, layouts, soft-button definitions, and sounds that are clear and understandable.
UI – Differing Screen Sizes: Where the application is designed to work on multiple devices, verify that it displays correctly on differing screen sizes without obvious errors.
UI – Multiple Format Input Handling: Where the device and application accept input in multiple formats (e.g. external touchscreen, external keypad, internal touchscreen, internal keypad, QWERTY layout, 12-key layout and others), verify that the application works correctly with all supported input methods.
Language – Correct Operation: Verify that all text content is rendered in the correct/expected language, and that the application detects the correct language and renders content as appropriate (if applicable).

Language – Supported Formats: Verify that the application supports all date/time/numeric/currency features for supported languages. Expected: all date/time/numeric/currency fields are rendered in the correct/expected language format.
Lifecycle – Resource Sharing – Database: Verify that database resources are properly shared between the application and a competing application. Expected: the application continues from the state it was in prior to being suspended, and sees both the new entry and the deleted entry.

Key Press – Scrolling in Menus: Verify that scrolling in menus works with the keypad or navigation device.
Key Press – Selection Key: Verify that the selection key selects menu items with no adverse effects on the application.
Key Press – Text Field Scrolling: Verify that text fields and About/Help screens scroll vertically and (if applicable) horizontally in the dialog.
Key Press – Pause: Verify that the application supports a pause feature in areas where immediate user interaction is needed. Expected: 1. The user can pause the application, and the pause feature supports an option to resume. 2. All time-specific features of the application are disabled at the time of the pause. 3. There is a clear indication that the application is in a paused state. 4. There is a clear indication of how the user can return from the paused state.

Action – Device Close: Verify that the application handles closing of the device correctly while launching. Expected: the application returns to the same state it was in before the interruption.
Action – Device Open: Verify that the application handles device opening correctly. Expected: the application returns to the same state it was in before the interruption.
Orientation: Verify that the application works as expected in all supported orientations.
Battery Usage of Running Application: Verify the battery usage of the application. On Android devices, the battery usage of the currently running application is displayed in the device settings.
External Source: Verify the behaviour of the application when an external source is unavailable. E.g. if the application fetches data from a web service, stop that web service, run the application, and check that it shows a proper message to the user.
Memory (RAM): Verify that the application does not use excessive memory. On some devices you can check how much memory each application is using.
Navigation Keys: Verify that hardware navigation keys work with the application. Some devices (except the iPhone) provide dedicated navigation keys, and the application should work with them too.
Notifications: Verify that a notification sent while the device is turned off is delivered once the device is turned back on. It obviously cannot be displayed while the device is off, but the user should receive it when the device is switched on again.
Idle Application Behaviour: Verify that an idle application does not increase its memory or battery usage. Keep the application idle (minimised) for a long time, then check the memory and battery occupied by it.
External Incoming Communication – Infrared/Bluetooth: Verify that the application gracefully handles incoming communication via infrared/Bluetooth (e.g. send a file to the device using infrared, if applicable). Expected: when the incoming communication arrives, the application must do at least one of the following: a) go into a paused state, and after the user exits the communication, present a continue option or continue automatically from the point at which it was suspended; b) give a visual or audible notification. The application must not crash or hang.
Camera Interruption: Run the application, start the camera, take a picture, and then resume the app. On some iPhones this crashes the application or corrupts its UI, because the camera uses a lot of memory and the application does not handle it.
Long Launch Time: Launch the application and observe the launch time. Expected: if the application takes longer than 5 seconds to launch, a progress bar or a message is displayed to tell the user what is happening.
Busy Processing/Rendering: Provide user input while the application or handset is busy processing or rendering. Expected: there is no inappropriate reaction by the application.
Database Resource Sharing: 1. Launch the application. 2. Suspend the application. 3. Launch the device Contacts application. 4. Add a new entry into Contacts. 5. Remove an existing entry from Contacts. 6. Resume the application under test. 7. Check the application state. Expected: the application continues from the state it was in prior to being suspended, and sees both the new entry and the deleted entry.
Mute/Sound Setting: Verify that the application has a Mute or Sound On/Off setting. It is sufficient for the application to respect the device volume controls, such that sound can be turned down to zero before the application launches and the application then makes no sound. Use the application and note the effects of muting the device via the volume controls or via a menu setting. The application must provide a means of muting background music and/or sound effects.
Special Characters: Verify that entering special characters such as ', %, &, $, @ and <> does not crash the application. When there is server communication and the user enters such characters, applications often crash, so check this throughout the app.

A simple approach to Software Test Estimation

A very simplistic method for estimating testing time for a typical black box testing assignment is as follows:
Total Time = Number of cycles * (Time for each testing cycle + Time for fixing bugs found in one cycle)
So if the estimated time for a testing cycle is 6 days, you expect the developers to take 4 days to fix the bugs found in one cycle, and the number of testing cycles is estimated at 4, then
Total time = 4 * (6 + 4)
Total time = 40 days

This is a very simplistic model, but it is useful for simpler projects.
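The worked example above, expressed in code:

```python
def total_test_time(cycles, cycle_days, fix_days):
    """Total Time = cycles * (testing time per cycle + bug-fix time per cycle)."""
    return cycles * (cycle_days + fix_days)

# 4 cycles of 6 days' testing plus 4 days' bug fixing each:
print(total_test_time(cycles=4, cycle_days=6, fix_days=4))  # 40 days
```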

A slightly more complex model for estimating software testing time involves:
1. Categorizing software requirements into a number of requirements – these can cover business, technical, functional and performance requirements
2. Using a complexity constant (e.g. 1 for simple and 1.5 for complex)
3. Applying risk factors (e.g. ranging from 1.1 to 1.5)
4. Applying a general contingency factor (e.g. 1.15–1.35)

So one cycle time = Number of requirements * complexity constant * risk factor * contingency factor.
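The one-cycle formula in code; the requirement count and factor values below are illustrative, chosen from the ranges suggested above.

```python
def cycle_time(num_requirements, complexity, risk, contingency):
    """One cycle = requirements * complexity constant * risk factor * contingency factor."""
    return num_requirements * complexity * risk * contingency

# 20 complex requirements, moderate risk, mid-range contingency (illustrative):
print(cycle_time(20, complexity=1.5, risk=1.2, contingency=1.25))  # 45.0
```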

If you are implementing the above methods, you also need to consider some other factors:
When you run test estimation, you can estimate effort for tasks that belong to the test team. You cannot produce a correct estimate for tasks that depend heavily on developers, such as releasing builds for testing and fixing bugs. Builds are often released to testers late, and testers have to change their schedule; in addition, testers do not know or control how bugs are fixed. That is why the hardest part of test estimation is the test execution phase.

The best way to calculate test effort is to size projects with a function point approach. However, that is not an easy task either.

What is Not Software Testing? – Exploring Myths

Software testing is a relatively new field and has changed considerably in the past few years. It is not taught in many universities, and when I moved from development to testing in 2001, I was confused about it. I tried to learn from the internet, books and forums, and was not impressed with the information I found. I even did my certification (CSTE, if you are interested), but that wasn’t very useful either. During that time, I came across many interesting theories and concepts, and after working in the industry, I know they are not true: they are myths. Unfortunately, some of these myths are still in practice and widespread.

Myths in software testing have done this field more harm than good. In this post, I will explore popular software testing myths, why they are myths, and what harm they are doing to our profession.

1. Testers are Gatekeepers Of Quality – Nothing should be released to production before test organization / testers give their approval.

In many organizations, the tester / test team fights for this right. It makes the test team feel empowered and, to be honest, when I started my career I did think this was the right way. In reality, this view is extremely dangerous, for both the team and the product. The test team is an information provider: it provides information to stakeholders, and it is up to them to act on that information. When testers act as gatekeepers they become responsible for the quality of the product. It gives the impression that no one other than the test team is concerned about quality. It also increases pressure, and sometimes creates situations in which testers are afraid to release the product, because there might be one last remaining defect that has not been uncovered.

2. Complete testing is possible – If you plan properly, it is possible to test software completely, identify and fix all the defects.

Many organizations feel that it is possible to test software completely and fix all the defects. Nothing could be further from the truth. No matter how much you test, complete testing is a plain illusion. Applications are becoming more and more complex, and the possibility of testing all the features, under all conditions, is extremely remote. When management is trapped in this belief, the test team becomes responsible for every defect. Also, if the test team attempts complete testing, it will become a bottleneck. In reality, almost all products have defects; the only difference is what kind of defects they have and how frequently they occur. If you try hard, I am sure you can find defects in almost any software you use. Complete testing is not the solution to this.

3. Best practices – Improving quality is simple & straight forward, just follow the best practices.

Best practices, standards and processes are still a big myth. Not all standards, processes and best practices work all the time; sometimes they work and sometimes they don’t. There is nothing wrong with a practice as such; the problem lies in not identifying the context and the problem before applying it. Practices are practices; what makes them good or bad is whether they are applied after considering the context. Applying best practices is like applying a hammer: if you do not consider the size of the nail and try to use the same hammer for all nails, sometimes it will work and sometimes it will not. When a test team starts implementing the industry’s best practices without considering its project, timeline, skills, technology, environment, team structure and many other aspects, it gets frustrated because it does not get the results it expected.

4. Certifications will make you better tester – So go and get CSTE, ISTQB…. etc to become better tester / get promotion.

When I started my career as a tester, I was in the service industry, where certifications were, and are, considered good. There was a valid reason for that: if you need more clients, then boasting about the number of certified test professionals will increase their confidence. But from what I have seen, certification exams are very shallow in nature and do not reflect whether the person getting the certification is a good tester or not. Certifications, in their current format, can be acquired by anyone prepared to study for a couple of weeks, and it is highly unlikely that someone will become a good tester in a couple of weeks. Certifications in their current format have created unnecessary pressure in the testing community to get certified, because of peer pressure and client demand rather than as a benchmark of knowledge.

5. Test Automation is Silver Bullet – If something can be automated and you can automate – automate it.

Now do not get me wrong, I am a big fan of automation, but only where it adds value. I have seen many engineering hours wasted on developing automation, or frameworks for automation, which are hardly used. Automation, without considering its ROI and effectiveness, is just a waste of time. Dr. James in his recent post highlighted this nicely and made the very good point that manual / automated testing should be considered only after good test design. This mentality of treating test automation as a silver bullet, like many other myths, is dangerous for our profession for many reasons. Management can become extremely focused on automation rather than on improving quality. Remember, more automation will not improve quality; the right level of automation, combined with the required exploratory testing and good test design, certainly will.

6. Testing is demonstration of Zero defect – Testing is completed for this product and test team has signed off the product. This product does not have any defect now.

Whoever claims this is obviously wrong. It is impossible to claim that any product is defect free. Even after spending thousands of hours in testing, there will be one more combination that was not tested, one condition that was missed, and for all we know it might surface in the production environment. If, as a tester / manager, you believe that zero defects is a possibility, you will feel responsible for any defect uncovered in production. On the other hand, you can modify the sentence and say that, for the combinations you tried, the environment and data you used and the scenarios you tested, there were no defects according to your understanding of the product. Also, the goal of testing is to uncover defects. As a tester, you can only find defects; you cannot claim that a product is defect free.

7. All measurements are useful – Keep track of the number of test cases, how many of them are executed, how much automation is present, the defect count... and any other numbers you can think of.

When I started my career, we prepared reports along the lines of: how many test cases were written, how many of them were executed, how many of them were automated, how many defects were found, and so on. Week after week, we would send these reports without realizing that if additional context is not provided along with the numbers, they do not convey any meaning. If these numbers become the primary consideration for management, quality will suffer. For example, if the number of defects is important, the test team will start filing each and every issue; if the number of rejected / duplicate defects becomes important, the test team will start spending a lot more time on defects before filing them, or may not file them at all. Any measurement program should be approached with caution and should always provide a clear meaning / summary for all the numbers.

8. Stable requirements and documentation are essential for any project... BTW, the development team is crap.

With Agile development, this myth is slowly going away and we have realized that changes are inevitable; rather than fighting changes, we now embrace them. It was different when I started, and probably still is in many organizations: changes are not welcome, requirements are treated like contractual obligations, and documentation is the first thing the test team asks for. Development and test teams work in their own silos, and communication between them is limited to finger pointing. It is impossible to get quality software out of such an environment. Development and test teams should work together to improve quality.

9. Limited time and resources are the main reasons for missed defects – As a test team, we are always pressed for time and hardly have any resources. We could have caught these defects if only we had more time / resources.

I am sure many of us have heard this, and some of us have even raised it as an issue, including me. It is true that time and resources are limited, but how many times are defects missed because of the unavailability of resources, and how many times because of not utilizing resources properly and a faulty test strategy and design? I have seen it many times: resources spend time creating work which does not add value in any way. It could be in the form of writing detailed test plans and test cases, writing an automation suite which becomes shelf-ware, munching numbers to prepare reports for management, and so on. Availability of time and resources is important, but it is even more important to have a solid test strategy and design, prepared with the application / project under test in mind.

10. Anyone can become a tester; they just need the ability to read and follow instructions, that's it. Testing is not a creative job and does not require special training or skills, and that's why there are not many professional testers around.

This is one of the most damaging myths of all, and to some extent it exists because of practices we have seen in our industry. Manual scripted testing is probably the closest thing to an unskilled job, requiring minimal skill and only very basic training. Everything else, right from test design to test execution to automation, is highly skilled and creative work and can be done effectively only if you are skilled. Not considering testing a skilled profession has done more harm to the testing community than any other myth. This myth is going away with the rise and recognition of testing as a separate skill, exploratory testing practices, Agile, and sensible test automation, but there is still a long way to go.

This was my list of myths, and it is by no means complete. Do leave a comment if you have observed or come across myths not covered here, or if you do not agree with anything I said.