Does Quality Assurance Remove the Need for Quality Control?
"If QA (Quality Assurance) is done, then why do we need to perform QC (Quality Control)?" This thought may come to mind sometimes, and it looks like a valid point too. If we have followed all the pre-defined processes, policies and standards correctly and completely, why do we need a round of QC?
In my opinion, QC is required after QA is done. In QA we define the processes, policies and strategies, establish standards, develop checklists and so on, to be used and followed throughout the life cycle of a project. In QC we follow all those defined processes, standards and policies to make sure that the project has been developed with high quality and at least meets the customer's expectations.
QA does not assure quality; rather, it creates and ensures that the processes for assuring quality are being followed. QC does not control quality; rather, it measures quality. QC measurement results can be used to correct or modify QA processes, and those improved processes can then be applied to new projects as well.
Quality control activities are focused on the deliverable itself. Quality assurance activities are focused on the processes used to create the deliverable. QA and QC are both powerful techniques which can be used to ensure that the deliverables meet the high quality expectations of customers.
Example: suppose we use an issue tracking system to log bugs while testing a web application. QA would include defining the standard for adding a bug and what details should be in a bug report: a summary of the issue, where it was observed, steps to reproduce it, screenshots, etc. This is a process to create the deliverable 'bug report'. When a bug is actually added to the issue tracking system based on these standards, that bug report is our deliverable.
Now suppose that at a later stage of the project we realize that adding a 'probable root cause' to the bug, based on the tester's analysis, would give the Dev team more insight. We would then update our pre-defined process, and the change would eventually be reflected in our bug reports as well. This is how QC gives inputs to QA to further improve QA.
Following is an example of a real-life scenario for QA and QC:
QA Example: Suppose our team has to work with a completely new technology for an upcoming project, and our team members are new to it. We need to create a plan for training the team members in the new technology. Based on our knowledge, we collect prerequisites such as understanding documents and the design of the product along with its documentation, and share them with the team. These will be helpful while working on the new technology and will also be useful for any newcomer to the team. This is QA.
QC Example: Once the training is done, how can we make sure that it was carried out successfully for all the team members? For this purpose we will have to collect statistics, e.g. the marks the trainees got in each subject and the minimum marks expected after completing the training. We can also make sure that everybody took the training in full by verifying the attendance record of the candidates. If the candidates' marks are up to the expectations of the trainer and evaluators, we can say that the training was successful; otherwise we will have to improve our process in order to deliver high-quality training. I hope this explains the difference between QA and QC.
How to Test Application Security – Web and Desktop Application Security Testing Techniques
Need of Security Testing?
The software industry has achieved solid recognition in this age. In the recent decade, however, the cyber-world seems to be an even more dominant and driving force, shaping new forms of almost every business. Web-based ERP systems used today are the best evidence that IT has revolutionized our beloved global village.
These days, websites are not meant only for publicity or marketing; they have evolved into stronger tools that cater to complete business needs. Web-based payroll systems, shopping malls, banking and stock trading applications are not only used by organizations but are also sold as products today. This means that online applications have gained the trust of customers and users regarding their vital feature named SECURITY. No doubt, the security factor is of primary value for desktop applications too, but when we talk about the web, the importance of security increases exponentially. If an online system cannot protect transaction data, no one will ever think of using it. Security is neither a word still in search of its definition, nor a subtle concept; let me illustrate some of its aspects with an example.
Example of a security flaw in an application:
1) A Student Management System is insecure if the 'Admission' branch can edit the data of the 'Exam' branch.
Security Testing Definition – Desktop and Web Security Testing:
I hope this foreword is enough; now let me come to the point, and kindly accept my apology if you thought so far that security itself was the subject of this article. Though I have briefly explained software security and its major concerns, my topic is 'Security Testing'. I will now explain how the features of security are implemented in a software application and how they should be tested. My focus will be on the whats and hows of security testing, not of security itself.
Security Testing Techniques:
1) Access to Application:
Whether it is a desktop application or a website, access security is implemented by 'Roles and Rights Management'. It is often done implicitly while covering functionality. For example, in a Hospital Management System a receptionist is least concerned with laboratory tests, as his job is just to register patients and schedule their appointments with doctors. So all the menus, forms and screens related to lab tests will not be available to the role 'Receptionist'. Hence, the proper implementation of roles and rights guarantees the security of access.
How to Test: To test this, thorough testing of all roles and rights should be performed. The tester should create several user accounts with different, as well as multiple, roles. Then he should use the application through these accounts and verify that every role has access only to its own modules, screens, forms and menus. If the tester finds any conflict, he should log a security issue with complete confidence.
2) Data Protection:
There are three further aspects of data security. The first is that a user should be able to view or use only the data he is supposed to use. This is also ensured by roles and rights; for example, a TSR (telesales representative) of a company can view the data of available stock, but cannot see how much raw material was purchased for production. Testing of this aspect has already been explained above. The second aspect of data protection concerns how the data is stored in the DB.
All sensitive data must be encrypted to make it secure. Encryption should be strong, especially for sensitive data like passwords of user accounts, credit card numbers or other business-critical information. The third and last aspect is an extension of the second: proper security measures must be adopted when sensitive or business-critical data flows. Whether this data moves between different modules of the same application or is transmitted to different applications, it must be encrypted to keep it safe.
How to Test Data Protection: The tester should query the database for passwords of user accounts, billing information of clients and other business-critical and sensitive data, and should verify that all such data is saved in encrypted form in the DB. Similarly, (s)he must verify that data is transmitted between different forms or screens only after proper encryption, and should ensure that the encrypted data is properly decrypted at the destination. Special attention should be paid to the various 'submit' actions. The tester must verify that when information is transmitted between client and server, it is not displayed in the address bar of the web browser in an understandable format. If any of these verifications fail, the application definitely has a security flaw.
3) Brute-Force Attack:
A brute-force attack is mostly performed by software tools. The concept is that, using a valid user ID, the software attempts to guess the associated password by trying to log in again and again. A simple example of protection against such an attack is account suspension for a short period of time, as mail applications like Yahoo and Hotmail do: if a specific number of consecutive login attempts (mostly 3) fail, the account is blocked for some time (30 minutes to 24 hours).
How to Test Brute-Force Attack: The tester must verify that some mechanism of account suspension is available and works accurately. (S)He must attempt to log in with invalid user IDs and passwords alternately to make sure that the software application blocks accounts that continuously attempt to log in with invalid information. If the application does so, it is secure against brute-force attack; otherwise, this security vulnerability must be reported by the tester.
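As a rough illustration, here is a minimal sketch of such a lockout check in Python; the login URL, form field names, test account, correct password and lockout threshold are hypothetical placeholders to adapt to the application under test.

```python
import requests

# Hypothetical login endpoint and test account; adapt to the AUT.
LOGIN_URL = "https://app.example.com/login"
USER = "lockout.test.user"

# Fire more bad attempts than the documented lockout threshold (e.g. 3).
for attempt in range(5):
    r = requests.post(LOGIN_URL, data={"user": USER, "password": f"wrong-{attempt}"})
    print(f"attempt {attempt + 1}: HTTP {r.status_code}")

# Now try the *correct* password: a secure application should refuse the
# login (account suspended) even though the credentials are valid.
r = requests.post(LOGIN_URL, data={"user": USER, "password": "the-correct-password"})
assert "account locked" in r.text.lower(), (
    "No lockout detected: possible brute-force vulnerability"
)
```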
The above three security aspects should be taken into account for both web and desktop applications, while the following points relate to web-based applications only.
4) SQL Injection and XSS (Cross-Site Scripting):
Conceptually speaking, the theme of both these hacking attempts is similar, so they are discussed together. In this approach, a malicious script is used by hackers to manipulate a website. There are several ways to be immune to such attempts. For all input fields of the website, field lengths should be defined small enough to restrict the input of any script; e.g., a Last Name field should have a length of 30 instead of 255. There may be some input fields where large data input is necessary; for such fields, proper validation of the input should be performed before the data is saved, and any HTML or script tag input must be prohibited. To prevent XSS attacks, the application should also discard script redirects from unknown or untrusted applications.
How to Test SQL Injection and XSS: The tester must ensure that maximum lengths for all input fields are defined and implemented. (S)He should also ensure that the defined length of an input field does not accommodate script input or tag input. Both constraints can be easily tested; e.g., if 20 is the maximum length specified for the 'Name' field, the input string "<p>thequickbrownfoxjumpsoverthelazydog" can verify both of them. It should also be verified by the tester that the application does not support anonymous access methods. If any of these vulnerabilities exists, the application is in danger.
5) Service Access Points (Sealed and Secure Open):
Today businesses depend on and collaborate with each other, and the same holds good for applications, especially websites. In such cases, both collaborators should define and publish some access points for each other. So far the scenario seems quite simple and straightforward, but for some web-based products, like stock trading, things are not so simple and easy. When there is a large target audience, the access points should be open enough to facilitate all users, accommodating enough to fulfill all users' requests, and secure enough to cope with any security trial.
How to Test Service Access Points: Let me explain this with the example of a stock trading web application. An investor (who wants to purchase shares) should have access to current and historical data on stock prices, and should be given the facility to download this historical data. This demands that the application be open enough. By accommodating and secure, I mean that the application should let investors trade freely (under the legislative regulations): they may purchase or sell 24/7, and the data of their transactions must be immune to any hacking attack. Moreover, a large number of users will interact with the application simultaneously, so the application should provide enough access points to entertain all of them.
In some cases these access points can be sealed off from unwanted applications or people. This depends on the business domain of the application and its users; e.g., a custom web-based Office Management System may recognize its users by IP address and refuse to establish a connection with any other system (application) that does not lie within the range of valid IPs for that application.
The tester must ensure that all inter-network and intra-network access to the application comes from trusted applications, machines (IPs) and users. To verify that an open access point is secure enough, the tester must try to access it from different machines having both trusted and untrusted IP addresses. Different sorts of real-time transactions should be tried in bulk to gain good confidence in the application's performance; by doing so, the capacity of the application's access points will also be observed clearly. The tester must ensure that the application entertains communication requests only from trusted IPs and applications, while all other requests are rejected. Similarly, if the application has an open access point, the tester should ensure that it allows (if required) the uploading of data by users in a secure way. By secure way I mean a file size limit, file type restrictions and the scanning of uploaded files for viruses or other security threats. This is how a tester can verify the security of an application with respect to its access points.
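To make the 'Name' field check above concrete, here is a minimal sketch assuming a hypothetical form endpoint and field name; a real test would also inspect where the submitted value is stored and echoed back.

```python
import requests

# Hypothetical form endpoint and field; adapt to the application under test.
FORM_URL = "https://app.example.com/profile"
# A single payload that exercises both constraints: it starts with an HTML
# tag and exceeds the specified 20-character maximum for 'Name'.
PAYLOAD = "<p>thequickbrownfoxjumpsoverthelazydog"

r = requests.post(FORM_URL, data={"name": PAYLOAD})

# If the raw, unescaped tag is reflected in the response, the field neither
# enforced its length limit nor sanitized tag input: a likely XSS exposure.
assert "<p>thequick" not in r.text, "Input not truncated/escaped: possible XSS"
print("field length and tag filtering appear to be enforced")
```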
What is difference between Performance Testing, Load Testing and Stress Testing?
1) Performance Testing:
Performance testing is testing performed to ascertain how the components of a system perform in a given situation. Resource usage, scalability and reliability of the product are also validated by this testing. It is a subset of performance engineering, which focuses on addressing performance issues in the design and architecture of a software product.
Performance Testing Goal: The primary goal of performance testing is to establish the benchmark behaviour of the system. There are a number of industry-defined benchmarks that should be met during performance testing. Performance testing does not aim to find defects in the application; it addresses the slightly more critical task of testing against the benchmarks and standards set for the application. Accuracy and close monitoring of the performance and results of the test are the primary characteristics of performance testing.
Example: For instance, you can test the application's network performance on a Connection Speed vs. Latency chart. Latency is the time it takes data to travel from source to destination. A 70kb page would take no more than 15 seconds to load over a worst-case connection of a 28.8kbps modem (latency = 1000 milliseconds), while a page of the same size would appear within 5 seconds over an average connection of 256kbps DSL (latency = 100 milliseconds). For a 1.5mbps T1 connection (latency = 50 milliseconds), the performance benchmark would be set at 1 second.
As another example, the time difference between the generation of a request and the acknowledgement of the response should be in the range of x ms to y ms, where x and y are standard figures. Successful performance testing should surface most of the performance issues, which could be related to the database, network, software, hardware, etc.
2) Load Testing:
Load testing is meant to test the system by constantly and steadily increasing the load on it until it reaches the threshold limit. It is the simplest form of testing that employs automation tools such as LoadRunner or any other good tool available. Load testing is also known by names like volume testing and endurance testing. The sole purpose of load testing is to assign the system the largest job it could possibly handle, to test its endurance and monitor the results. An interesting fact is that sometimes the system is fed an empty task, to determine its behaviour in a zero-load situation.
Load Testing Goal: The goals of load testing are to expose defects in the application related to buffer overflows, memory leaks and memory mismanagement. Another target is to determine the upper limits of all the components of the application (database, hardware, network, etc.) so that it can manage the anticipated load in future. The issues that eventually come out of load testing may include load balancing problems, bandwidth issues, the capacity of the existing system, etc.
Example: To check the email functionality of an application, it could be flooded with 1000 users at a time. Now, 1000 users can fire email transactions (read, send, delete, forward, reply) in many different ways. If we take one transaction per user per hour, that is 1000 transactions per hour.
By simulating 10 transactions per user per hour, we could load test the email server by occupying it with 10,000 transactions per hour.
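As a rough sketch of how such a load could be generated without a commercial tool, the snippet below fires 1,000 concurrent email transactions at a hypothetical endpoint and reports latency; in practice a dedicated tool such as LoadRunner or JMeter would drive and measure this.

```python
import time
from concurrent.futures import ThreadPoolExecutor

import requests

# Hypothetical send-mail endpoint; URL and payload are placeholders.
SEND_URL = "https://mail.example.com/api/send"

def one_transaction(i: int) -> float:
    """Fire a single email-send transaction and return its latency in seconds."""
    start = time.perf_counter()
    requests.post(SEND_URL, json={"to": "user@example.com", "body": f"msg {i}"})
    return time.perf_counter() - start

# 100 simulated concurrent users working through 1,000 transactions.
with ThreadPoolExecutor(max_workers=100) as pool:
    latencies = list(pool.map(one_transaction, range(1000)))

print(f"avg latency: {sum(latencies) / len(latencies):.3f}s, max: {max(latencies):.3f}s")
```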
3) Stress Testing:
Under stress testing, various activities that overload the existing resources with excess jobs are carried out in an attempt to break the system down. Negative testing, which includes removing components from the system, is also done as part of stress testing. Also known as fatigue testing, this testing should capture the stability of the application by testing it beyond its bandwidth capacity. The purpose of stress testing is to ascertain the failure of the system and to monitor how gracefully it recovers. The challenge is to set up a controlled environment before launching the test, so that you can precisely and repeatedly capture the behaviour of the system under the most unpredictable scenarios.
Stress Testing Goal: The goal of stress testing is to analyse post-crash reports to define the behaviour of the application after failure. The biggest issue is to ensure that the system does not compromise the security of sensitive data after the failure. In successful stress testing, the system comes back to normality, along with all its components, after even the most terrible breakdown.
Example: A word processor such as Writer 1.1.0 from OpenOffice.org is used to create letters and other documents. The purpose of our stress testing is to load it with an excess of characters. To do this, we repeatedly paste a line of data until it reaches its threshold for handling a large volume of text. As soon as the size reaches 65,535 characters, the application simply refuses to accept more data. The result of stress testing Writer 1.1.0 is that it does not crash under stress and handles the situation gracefully, which gives confidence that the application works correctly even under rigorous stress conditions.
Database Testing – Practical Tips and Insight on How to Test Database
The database is one of the inevitable parts of a software application these days. It does not matter whether the application is web or desktop, client-server or peer-to-peer, enterprise or individual business: a database is working at the backend. Similarly, whether it is healthcare or finance, leasing or retail, a mailing application or the control of a spaceship, behind the scenes a database is always in action.
Moreover, as the complexity of an application increases, the need for a stronger and more secure database emerges. In the same way, applications with a high frequency of transactions (e.g. banking or finance applications) require a fully featured DB tool. Several database tools are currently available in the market, e.g. MS Access 2010, MS SQL Server 2008 R2, Oracle 10g, Oracle Financials, MySQL, PostgreSQL, DB2, etc. All of these vary in cost, robustness, features and security, and each possesses its own benefits and drawbacks. One thing is certain: a business application must be built using one of these or another DB tool.
Before I start digging into the topic, let me complete the foreword. When the application is under execution, the end user mainly utilizes the 'CRUD' operations facilitated by the DB tool:
C: Create – when the user saves any new transaction, a 'Create' operation is performed.
R: Retrieve – when the user searches for or views any saved record, a 'Retrieve' operation is performed.
U: Update – when the user edits an existing record, an 'Update' operation is performed.
D: Delete – when the user removes any record from the system, a 'Delete' operation is performed.
It does not matter which DB is used or how the operation is performed. The end user has no concern whether a join or a sub-query, a trigger or a stored procedure, a query or a function was used to do what he wanted. The interesting thing is that every DB operation performed by the user from the UI of any application is one of the above four, with the acronym CRUD.
As a database tester, one should focus on the following DB testing activities.
What to test in database testing:
1) Ensure data mapping:
Make sure that the mapping between the different forms or screens of the AUT and the relations of its DB is not only accurate but also according to the design documents. For all CRUD operations, verify that the respective tables and records are updated when the user clicks 'Save', 'Update', 'Search' or 'Delete' from the GUI of the application.
2) Ensure the ACID properties of transactions:
The ACID properties of DB transactions are Atomicity, Consistency, Isolation and Durability. Proper testing of these four properties must be done during the DB testing activity. This area demands more rigorous, thorough and keen testing when the database is distributed.
3) Ensure data integrity:
Consider that different modules (i.e. screens or forms) of the application use the same data in different ways and perform all the CRUD operations on it. In that case, make sure that the latest state of the data is reflected everywhere. The system must show the updated and most recent values, or status, of such shared data on all forms and screens. This is called data integrity.
4) Ensure the accuracy of implemented business rules:
Today, databases are not meant only for storing records. In fact, DBs have evolved into extremely powerful tools that give developers ample support for implementing business logic at the DB level. Simple examples of these powerful features are referential integrity, relational constraints, triggers and stored procedures. Using these and many other features offered by DBs, developers implement business logic at the DB level. The tester must ensure that the implemented business logic is correct and works accurately.
The points above describe the four most important 'what tos' of database testing. Now I will shed some light on the 'how tos' of DB testing. But first, let me explicitly mention an important point: DB testing is a business-critical task, and it should never be assigned to a fresh or inexperienced resource without proper training.
How to Test a Database:
1. Create your own queries
To test the DB properly and accurately, the tester should first have very good knowledge of SQL, especially DML (Data Manipulation Language) statements. Second, the tester should acquire a good understanding of the internal DB structure of the AUT. If these two prerequisites are fulfilled, the tester is ready to test the DB with complete confidence: (s)he performs a CRUD operation from the UI of the application and then verifies the result using an SQL query.
This is the best and most robust way of DB testing, especially for applications of small to medium complexity, but the two prerequisites described are necessary; otherwise this approach cannot be adopted. Moreover, if the application is very complex, it may be hard or impossible for the tester to write all the needed SQL queries himself or herself; for some complex queries, the tester may get help from the developer. I always recommend this method to testers, because it not only gives them confidence in the testing they have performed but also enhances their SQL skills.
2. Observe the data table by table
If the tester is not good at SQL, he or she may verify the result of a CRUD operation performed through the GUI of the application by viewing the tables (relations) of the DB. This approach can be tedious and cumbersome, especially when the DB and tables contain a large amount of data, and it may be extremely difficult if the data to be verified belongs to multiple tables. It also requires at least good knowledge of the table structure of the AUT.
3. Get queries from the developer
This is the simplest way for the tester to test the DB: perform a CRUD operation from the GUI and verify its impact by executing the respective SQL query obtained from the developer. It requires neither good knowledge of SQL nor good knowledge of the application's DB structure, so it seems an easy and good choice. But its drawback is havoc: what if the query given by the developer is semantically wrong or does not fulfill the user's requirement correctly? In the best case, the client will report the issue and demand a fix; in the worst case, the client may refuse to accept the application.
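As a minimal sketch of method 1, the snippet below verifies a 'Save' operation with a hand-written query; the table, columns and in-memory database are illustrative stand-ins for the AUT's test database.

```python
import sqlite3

# Illustrative stand-in for the AUT's test database; in a real test the
# connection would point at the application's actual DB.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, city TEXT)")

# ... here the tester performs the 'Save' operation from the application's
# UI; this INSERT simulates what the application writes behind the scenes.
conn.execute("INSERT INTO customers (name, city) VALUES (?, ?)", ("Jones", "Pune"))
conn.commit()

# Then the tester's own query verifies that the Create operation landed in
# the right relation with exactly the values entered on the screen.
row = conn.execute(
    "SELECT name, city FROM customers WHERE name = ?", ("Jones",)
).fetchone()
assert row == ("Jones", "Pune"), "Saved record missing or mapped incorrectly"
```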
Conclusion:
The database is the core and a critical part of almost every software application, so DB testing demands keen attention, good SQL skills, proper knowledge of the DB structure of the AUT and proper training. To have a confident test report of this activity, the task should be assigned to a resource with all four of these qualities. Otherwise, shipment-time surprises, bug discoveries by the client, improper or unintended application behavior, or even wrong outputs of business-critical tasks are likely to be observed. Get this task done by the most suitable resources and pay it the attention it deserves.
Failover and Recovery Testing
Failover and recovery testing verifies a product's ability to confront, and successfully recover from, possible failures arising from software bugs, hardware failures or communication problems (e.g. network failure). The objective of this testing is to check that the system can restore itself (or fail over to duplicates of its main functional components) and, in the event of failure, ensure the safety and integrity of the data of the product under test.
Testing for failover and recovery is very important for systems that operate 24x7. If you build a product that must run continuously, such as an Internet service, you simply cannot do without this kind of testing, because every minute of downtime or data loss after an equipment failure can cost you money, customers and market reputation.
The technique of this testing is to simulate various fault conditions and then study and evaluate the reaction of the protective systems. These checks establish whether the desired degree of recovery was achieved after the simulated crash.
For clarity, let us consider some variants of this testing and general methods of performing them. The object of testing is, in most cases, a set of highly probable operational problems, such as:
- Power failure on the server
- Power failure on the client machine
- Incomplete data-processing cycles (interruption of data filters, interruption of synchronization)
- Declaration or introduction of unavailable or erroneous elements into data arrays
- Failure of storage media
These situations can be simulated as soon as development reaches the point where all the restore or duplication mechanisms are ready to perform their functions. Technically, the tests can be implemented in the following ways:
- Simulate a sudden power failure on the computer (unplug the machine).
- Simulate loss of network connectivity (unplug the network cable, disable the network device).
- Simulate media failure (disconnect an external storage device).
- Simulate the presence of invalid data (use a special test kit or database).
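As one concrete way to exercise the interrupted data-processing case, here is a minimal sketch that abandons a database transaction mid-way (as if the client lost power) and verifies that the data recovers to a consistent state; the schema and figures are illustrative.

```python
import sqlite3

# Set up an illustrative accounts table in a known, consistent state.
conn = sqlite3.connect("recovery_test.db")
conn.execute("CREATE TABLE IF NOT EXISTS accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.execute("DELETE FROM accounts")
conn.execute("INSERT INTO accounts VALUES (1, 100), (2, 0)")
conn.commit()

# Begin a transfer but 'crash' before committing the second half: the
# uncommitted work must be rolled back, never half-applied.
conn.execute("UPDATE accounts SET balance = balance - 40 WHERE id = 1")
conn.close()  # simulate sudden termination without commit

# On 'restart', the data should be back in its last consistent state.
conn = sqlite3.connect("recovery_test.db")
balances = dict(conn.execute("SELECT id, balance FROM accounts"))
assert balances == {1: 100, 2: 0}, f"Recovery failed, inconsistent data: {balances}"
print("interrupted transaction rolled back cleanly")
```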
Once the appropriate failure conditions have been reached and the recovery systems have done their work, we can evaluate the product from the failover-testing point of view. In all the cases listed above, some desired state of the product's data should be achieved upon completion of recovery:
- Data loss or corruption stays within an acceptable range.
- A report, or the reporting system, indicates the processes or transactions that were not completed because of the failure.
It is worth noting that failover and recovery testing is very product-specific. Test scripts must be developed with all the features of the system under test in mind. Given the rather harsh methods of influence involved, you should also weigh whether this type of testing is worthwhile for a particular software product.
Design-Based Test Case Design – an Effective Software Testing Technique
Software design errors and faults can be discovered, and software designs validated, by two techniques:
1) Requirements-based test case design, the primary technique;
2) Early design-based test case design.
In design-based test case design, the information for deriving the test cases is taken from the software design documentation.
Design-based test cases focus on the data and process paths within the software structures. Internal interfaces, complex paths or processes, worst-case scenarios, design risks and weak areas are all explored by constructing specialized test cases and analyzing how the design should handle them and whether it deals with them properly. In the software testing effort, requirements-based and design-based test cases provide specific examples that can be used in design reviews or walkthroughs. Together they provide a comprehensive and rich resource for design-based software testing.
Design Testing Metrics:
Increasingly, formal design reviews are adopting metrics as a means of quantifying test results and clearly defining expected results.
The metrics (measures that are presumed to predict an aspect of software quality) vary greatly. Some are developed from scored questionnaires or checklists. For example, one group of questions may relate to design integrity and system security.
Typical integrity questions might include the following:
Q.1: Are security features controlled from independent modules?
Q.2: Is an audit trail of accesses maintained for review or investigation?
Q.3: Are passwords and access keywords blanked out?
Q.4: Does it require changes in multiple programs to defeat the access security?
Each reviewer would answer these questions, and their answers would be graded or scored. Over time, minimum scores are established and used as pass/fail criteria for the integrity metric. Designs that score below the minimum are reworked and subjected to additional review testing before being accepted.
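A minimal sketch of how such questionnaire scoring might be mechanized is shown below; the one-point-per-answer scheme and the minimum passing score are hypothetical values that a team would calibrate from past reviews.

```python
# The questions mirror Q.1-Q.4 above; scoring and threshold are illustrative.
QUESTIONS = [
    "Security features controlled from independent modules?",
    "Audit trail of accesses maintained for review or investigation?",
    "Passwords and access keywords blanked out?",
    "Multiple programs must change to defeat access security?",
]
MIN_SCORE = 3  # pass/fail criterion established from past reviews

def integrity_score(answers: list[bool]) -> int:
    """One point per 'yes' answer from a reviewer."""
    return sum(answers)

reviewer_answers = [True, True, False, True]
score = integrity_score(reviewer_answers)
verdict = "pass" if score >= MIN_SCORE else "rework"
print(f"integrity score: {score}/{len(QUESTIONS)} -> {verdict}")
```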
Another example of a metric-based design test that can be used effectively is a test for system maintainability. An important consideration in evaluating the quality of any proposed design is the ease with which it can be maintained or changed once the system becomes operational. Maintainability is largely a function of design. Problems or deficiencies that produce poor maintainability must be discovered during design reviews; it is usually too late to do anything to correct them further along in the cycle of software testing.
To test maintainability, we develop a list of likely or plausible requirements changes (perhaps in conjunction with the requirements review). Essentially, we want to describe in advance what about the system we perceive is most apt to be changed in the future.
During the design review, a sample of these likely changes is selected at random, and the reviewers walk through the system alterations that would be required, to establish estimates for how many programs, files or data elements would be affected and the number of program statements that would have to be added and changed. Metric values for these estimates are again set on the basis of past experience. Passing the test might require that 80 percent of the changes be accomplished by changes to single programs and that the average predicted effort for a change be less than one man-week. Designs that score below these criteria based on the simulated changes are returned, reworked, and re-subjected to the maintainability test before being accepted. This is just one example of an entire class of metrics that can be built around what-if questions and used to test any quality attribute of interest while the system is still being designed.
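The arithmetic of that maintainability test is easy to mechanize; below is a minimal sketch with illustrative reviewer estimates, applying the 80-percent and one-man-week criteria from the text.

```python
# Reviewer estimates for a random sample of simulated requirement changes;
# the figures are illustrative.
simulated_changes = [
    {"programs_affected": 1, "effort_days": 2},
    {"programs_affected": 1, "effort_days": 3},
    {"programs_affected": 3, "effort_days": 8},
    {"programs_affected": 1, "effort_days": 1},
    {"programs_affected": 1, "effort_days": 4},
]

single_program = sum(1 for c in simulated_changes if c["programs_affected"] == 1)
pct_single = 100 * single_program / len(simulated_changes)
avg_effort = sum(c["effort_days"] for c in simulated_changes) / len(simulated_changes)

# Pass criteria from the text: at least 80% single-program changes and an
# average predicted effort under one man-week (5 working days).
passed = pct_single >= 80 and avg_effort < 5
print(f"{pct_single:.0f}% single-program, avg {avg_effort:.1f} days "
      f"-> {'pass' if passed else 'rework'}")
```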
Design for Testing:
In addition to the testing activities we perform to review and test the design, another important consideration is the features in the design that simplify or support testing. Part of good engineering is building something in a way that simplifies the task of verifying that it is built properly. Hardware engineers routinely provide test points or probes to permit electronic circuits to be tested at intermediate stages. In the same way, complex software must be designed with "windows" or hooks to permit the testers to "see" how it operates and verify correct behavior.
Providing such windows and reviewing designs to ensure their testability is part of the overall goal of designing for testability. With complex designs, testing is simply not effective unless the software has been designed for testing. Testers must consider how they are going to test the system and what they will require early enough in the design process so that the test requirements can be met.
Design Testing Tools and Aids:
Automated tools and aids to support design testing play an important role in a number of organizations. As in requirements testing, our major testing technique is the formal review; however there is a greater opportunity to use automated aids in support of design reviews.
Software testing tools in common use include design simulators (such as database and response-time simulators); system charters that diagram or represent system logic; consistency checkers that analyze decision tables representing design logic and determine whether they are complete and consistent; and database dictionaries and analyzers that record data element definitions, analyze each usage of data, and report on where each data element is used and whether the routine inputs, uses, modifies, or outputs it.
None of these software testing tools performs direct testing. Instead, they serve to organize and index information about the system being designed so that it may be reviewed more thoroughly and effectively. In the case of the simulators, they permit simplified models to be represented and experimentation to take place, which may be especially helpful in answering the question of whether the design solution is the right choice. All the tools assist in determining that the design is complete and will fulfill the stated requirements.
Documenting Errors
The purpose of an error report is to get the error fixed. This article tells how to describe an error, what a description of an error consists of, and what it might look like, with an example.
So you have found a bug. Do not shelve it; start writing the bug report straight away (if you procrastinate, you may forget to write the report at all, forget where the error occurred, miss a step, or even misstate the situation).
First, it helps to calm down and avoid any sudden movements: do not press extra buttons, and so on. Recall the sequence of actions you performed and try to reproduce the situation, preferably in a new browser window (if it is a web application). Note down what data you entered, which commands and buttons you used, which menus you navigated to, how the system reacted to these actions, and what error message was displayed.
Now write down your actions. The record should be brief, but clear and understandable; find the middle ground. If you write a memoir, the programmer will either not read it or decide the error is very complicated and defer it until later, while an ultra-short report will be understood by no one. As a consequence, the bug fix will hang in the air, and the report will be sent back to you marked "cannot reproduce" or with requests for clarification, simply wasting both your time and theirs. Also, do not put more than one error into a single report, for the same reason.
The report is written not only for yourself but for others, so it should be written in such a way that everyone understands it without having to guess what you meant or ask you again. Ask yourself: could a person who sees the product for the first time repeat your actions?
If possible, try different wordings to express the problem precisely.
It is also advisable to avoid jargon or expressions that others may find hard to understand.
Never pass on bug reports only orally, by e-mail, ICQ or the like! In most cases such reports are forgotten or not taken seriously, and if the bug remains unfixed, it is you in particular who will be blamed. Do you need that? All errors must be recorded, described and given their own unique number. Then responsibility for uncorrected errors will rest with the programmer.
These records will also be needed by the other bug testers working with you, by managers (so they can see that you work, and work productively), by the testers who come after you, and for writing reports.
Error Description
Now let us turn to the description of the error itself. Depending on which bug-tracking system (error-accounting system) your company uses, there will be different input fields.
Start by opening a new bug report. You may see a lot of fields to fill in, and it is quite possible that not all of them have to be filled. When in doubt, consult other testers, a manager or the head of the testing group. Most likely, though, you will have to fill out the following fields:
- Priority (how serious the error is and how quickly it requires attention: must it be corrected immediately, or can it wait)
- Assignee (who will deal with the error)
- Class (what kind of error it is: serious, minor, typo, ...)
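As an illustration of the kind of record a tracker stores, here is a minimal sketch; the field names and values are hypothetical, since every bug-tracking system defines its own.

```python
from dataclasses import dataclass, field

# Hypothetical bug-report record; real trackers define their own fields
# and allowed values.
@dataclass
class BugReport:
    title: str                 # short, complete summary of the problem
    priority: str              # how urgently it must be fixed
    assignee: str              # who will deal with the error
    bug_class: str             # serious, minor, typo, ...
    steps: list[str] = field(default_factory=list)        # arrow-style reproduction steps
    attachments: list[str] = field(default_factory=list)  # links, screenshots, videos

report = BugReport(
    title="Error saving user profile",
    priority="high",
    assignee="dev.team",
    bug_class="serious",
    steps=["open profile page", "enter name 'Jones'", "click 'Save'", "error #17 shown"],
)
```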
The Error Headline
The headline should describe the problem concisely and completely. We spend a lot of time leafing through the bug database and scanning the headlines; much time is saved if a headline is clear enough that one does not have to open the full description to understand what was meant.
Problem Description
It is often clearer to describe the problem using "arrows". They strip the report of many unnecessary words that obscure its essence.
Example: opened www.aaa.ru -> entered the word 'bbb' into 'ccc' -> clicked 'ddd' -> got the error: 'ddd'
An example from real life
Title: Problem with the 'Forgot password' dialog
Problem description: go to the login page -> click 'Forgot password' -> enter 2389 in the 'Personal Account' field -> enter test@test.com in the 'e-mail' field -> the system says: "Error sending message. (#1)"
If necessary, add data about the operating environment, configuration and logs: note whether there is a dependence on configuration, installation, conditions, options, settings, version, etc.
Attachments
To make the report more detailed and vivid, you can and should resort to:
- links
- screenshots
- video recordings
Links
Here everything is clear: an error pops up, you take the link to the page and insert it into the report, preferably together with screenshots. (This assumes a web application is being tested. – Ed.)
Screenshots
A very useful way to visualize the problem is to take a screenshot of the problematic area. (The simplest method: find the Print Screen key on the keyboard, press it, open Paint (on a Windows system – Ed.), which is installed with Windows by default, press Ctrl-V, crop out the unnecessary parts and save, preferably in JPG format.)
There are also more professional programs better adapted to this kind of task, with many very useful features, such as SnagIt, HyperSnap, HardCopy, RoboScreenCapture, FullShot 9, HyperSnap-DX 5 and TNT 2. Attach the screenshot to the bug report.
Videos
If the error is difficult to describe, this is the most suitable method. Programs: SnagIt, CamStudio.
Top 10 Negative Test Cases
Negative test cases are used to test an application by feeding "incorrect" data into its inputs. Such test cases should always be used during testing. Below are the ten most popular negative test scenarios:
Embedded Single Quote – Most SQL databases have problems when a single quote appears in a query (e.g., Jones's car). Use single quotes when checking every input field that works with the database.
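A minimal sketch of why this check matters: naive query construction breaks on input like Jones's car (and, worse, is injectable), while a parameterized query handles it safely. The table is illustrative.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cars (description TEXT)")  # illustrative table

value = "Jones's car"

# The unsafe pattern this negative test is designed to expose: the embedded
# single quote terminates the SQL string literal early.
try:
    conn.execute(f"INSERT INTO cars VALUES ('{value}')")
except sqlite3.OperationalError as e:
    print("naive query failed on single quote:", e)

# The safe, parameterized version stores the value intact.
conn.execute("INSERT INTO cars VALUES (?)", (value,))
assert conn.execute("SELECT description FROM cars").fetchone()[0] == "Jones's car"
```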
Required Data Entry – The specification of your application should clearly define the fields that require mandatory data entry. Check that forms with fields defined as mandatory cannot be saved when those fields are left empty.
Field Type Test – The specification of your application should clearly define the data type of each field (date/time fields, numeric fields, fields for telephone numbers or postal codes, etc.).
Check that each field accepts or stores only data of the type given in the specification (for example, the application should not allow letters or special characters to be entered or saved in numeric fields).
Field Size Test – The specification of your application should clearly define the maximum number of characters in each field (for example, the number of characters in the user-name field should not exceed 50).
Check that your application cannot accept or store more characters than specified. Remember that these fields must not only behave correctly but also warn the user about the limits, for example with explanatory hint text or error messages.
Numeric Bounds Test – The numeric fields of your application may have limits on their allowable values. These constraints may be stated in the specification or follow from the logic of the program (for example, when testing functionality that accrues interest on an account, it is logical to assume that the accrued interest cannot be negative).
Check that the application displays an error message when a value falls outside the acceptable range (for example, an error message should appear when you enter 9 or 51 in a field whose valid range is 10 to 50, or when you enter a negative value in a field whose values must be positive).
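A minimal sketch of boundary checks for the 10-to-50 example above; validate_amount is a hypothetical stand-in for the application's validation routine.

```python
# Hypothetical stand-in for the application's range validation.
def validate_amount(value: int) -> bool:
    return 10 <= value <= 50

# Boundary and out-of-range values from the example above.
assert validate_amount(10) and validate_amount(50)          # boundaries accepted
assert not validate_amount(9) and not validate_amount(51)   # just outside rejected
assert not validate_amount(-1)                              # negative rejected
print("numeric bounds behave as specified")
```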
Numeric Limits Test – Most databases and programming languages store numeric values in variables of a specific type (e.g., integer or long integer), which in turn bound the allowable values (e.g., an integer must lie between -32,768 and 32,767, a long integer between -2,147,483,648 and 2,147,483,647).
Check the boundary values of the variables used for numeric fields whose limits are not explicitly defined by the specification.
Date Bounds Test – Applications very often place logical limits on fields containing dates and times. For example, for a field containing the user's date of birth it is logical to forbid dates that have not yet occurred (i.e., dates in the future), or dates more than 150 years before today.
Date Validity – Date fields should always be checked for the validity of the entered values (e.g., 02/30/2009 is not a valid date). Also remember to check dates in leap years (a year is a leap year if it is divisible by 4, except that century years must also be divisible by 400).
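A minimal sketch of both checks, using Python's standard library as the oracle for date validity and the leap-year rule:

```python
import calendar
from datetime import datetime

# datetime rejects impossible dates such as 02/30/2009, and calendar.isleap
# applies the full leap-year rule described above.
def is_valid_date(text: str) -> bool:
    try:
        datetime.strptime(text, "%m/%d/%Y")
        return True
    except ValueError:
        return False

assert not is_valid_date("02/30/2009")   # February has no 30th day
assert is_valid_date("02/29/2008")       # 2008 is a leap year
assert not is_valid_date("02/29/1900")   # 1900: divisible by 100 but not 400
assert calendar.isleap(2000) and not calendar.isleap(1900)
```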
Web Session Testing – Many web applications use browser sessions to track whether a user is logged in, to hold user-specific application settings, and so on. At the same time, many functions of the system cannot or should not work without a login. Check that functionality and pages that sit behind a password are not available to an unauthenticated user.
Performance Changes – For each new product release, run a series of performance tests (for example, measuring the speed of adding, deleting or changing various elements on a page). Compare the results with the performance tests of previous versions. This practice lets you identify potential performance problems caused by code changes in new versions of the product in advance.
Localization Testing Tips and Tricks
If you have already encountered localization testing, one of the first questions you probably asked yourself when starting the work sounded something like this: "What am I supposed to test? I don't even know the language(s)." Sound familiar?
In fact, the correctness of the translation is not the only thing you should pay attention to when testing localizations. Yes, of course, it is very important that the text be grammatically, syntactically and logically correct, but that is not enough for a good localization, and that is exactly why testers are brought into this kind of work.
So, a few words about what a tester needs to know and what to pay attention to when testing localizations.
1. Prepare a suitable test environment for testing applications
Depending on the implementation, the language of a web application can be selected manually, based on the language and regional settings of the browser or operating system, or even based on your geographic location. While manual language selection is more or less clear, in the other cases you will have to show a little ingenuity and will most likely need multiple test environments. The ideal option is virtual machines with the installed OS and the other software relevant to the localization. When configuring these machines, try to keep most settings in their original state, because very few users run a configuration different from the standard one. When creating a virtual machine, be guided by the statistical profile of your average end user, so you can imagine what software may be installed on their PC. Why do this? Some programs can seriously affect the final result of testing and, accordingly, lead you to false conclusions. For example, PCs with MS Office 2003 and MS Office 2007 behave differently with a localized product, because the MS Office 2007 installation includes the Arial Unicode font, which contains glyphs for the characters of the overwhelming majority of world languages (including Chinese and Japanese characters), while MS Office 2003 has no such font.
2. Check the correctness of the translation
In my opinion, validation of the translation should always be carried out by a native speaker, a professional translator, or at least someone familiar with the language; anything else is asking for trouble. Still, it is commonly held that a tester should perform such checks too, even with no knowledge of the language. In that case the advice is to use electronic translators and dictionaries, not one but several at once, so you can compare the translation results and draw correct conclusions about the translation's accuracy.
In general, even if you have decided on such an adventure, try not to get carried away: most likely the interface was translated professionally and, believe me, that translation is more adequate than what electronic translators produce.
3. Get to know the application
Before starting to test a web application in a language unknown to you, try to learn the application well enough that you can navigate it almost blindly. Study the basic functionality in a localization whose language you understand. This will save a lot of time, because you will not have to guess where a particular link leads or what the consequences of pressing a particular button will be.
4. Begin testing with static elements
First of all, check the text on the static elements of the site: block headers, explanatory captions, etc.; these are what the user notices first.
While checking these items, remember that the length of the same text can differ materially between languages. This is especially true when you are checking the localization of a product whose "native" language is English because, as is well known, text grows by about 30% when translated from English into other languages. Accordingly, make sure that all the required labels still fit into the layout of your site.
5. Pay attention to controls and error messages
Once the static elements are finished, proceed to the rest of your site's controls: buttons, menus, etc. Remember that, depending on the implementation, the localization of controls can be defined in the code or can depend on the browser or OS settings.
Do not forget about error messages. Plan your testing and compose test cases so that the maximum number of error messages gets exercised. Programmers somehow pay very little attention to such things, which is why a significant portion of error messages may never make it into the localized version, remain untranslated, or be totally unreadable because of encoding problems.
6. Ensure that data entry works in the localized environment
If the web application under test involves any data entry by the user, make sure that users can enter data in the localized environment and that all extended characters entered by the user are processed by the application correctly.
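A minimal sketch of an extended-character round-trip check; the sample strings and table are illustrative, and in a real test the data would travel through the application's own input path rather than straight into a database.

```python
import sqlite3

# Illustrative localized inputs: accented Latin, CJK and Cyrillic text.
samples = ["Müller-Lüdenscheidt", "naïve café", "日本語テスト", "Кириллица"]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE inputs (value TEXT)")
conn.executemany("INSERT INTO inputs VALUES (?)", [(s,) for s in samples])

# Every extended character must come back exactly as it was entered.
stored = [row[0] for row in conn.execute("SELECT value FROM inputs")]
assert stored == samples, "Extended characters corrupted on the way to the DB"
print("all extended characters survived the round trip")
```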
7. Do not forget the national and regional particularities
Another important point to pay attention to during testing is the regional and national particularities of the country the localization targets. These include text direction, date formats, address formats, decimal separators, currency symbols, units of measurement, etc. Always remember that a good localization is not just well-translated text but also an exact match for the cultural characteristics of the people who speak the language.
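A minimal sketch of checking locale-dependent number and date formats; note that locale names are OS-specific ("de_DE.UTF-8" on Linux, "German" on Windows) and must be adapted to the test environment.

```python
import locale
from datetime import date

# Compare formatting under two locales; skip any locale the OS lacks.
for loc in ("en_US.UTF-8", "de_DE.UTF-8"):
    try:
        locale.setlocale(locale.LC_ALL, loc)
    except locale.Error:
        print(f"locale {loc} not installed, skipping")
        continue
    # Decimal/grouping separators differ: 1,234.56 (en_US) vs 1.234,56 (de_DE).
    print(loc, locale.format_string("%.2f", 1234.56, grouping=True))
    # Locale-preferred date representation differs as well.
    print(loc, date(2011, 5, 9).strftime("%x"))
```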
I hope these simple tips will help you make your application more accessible and understandable to the users of our multilingual, multinational World Wide Web.
Requirements Testing
Testing software is an integral part of building a system. However, if the software is based on inaccurate requirements, then despite well written code, the software will be unsatisfactory. Most of the defects in a system can be traced back to wrong, missing, vague or incomplete requirements.
Requirements seem to be ephemeral. They flit in and out of projects, they are capricious, intractable, unpredictable and sometimes invisible. When gathering requirements we are searching for all of the criteria for a system's success. We throw out a net and try to capture all these criteria.
The Quality Gateway
As soon as we have a single requirement in our net we can start testing. The aim is to trap requirements-related defects as early as they can be identified. We prevent incorrect requirements from being incorporated in the design and implementation where they will be more difficult and expensive to find and correct.
To pass through the quality gateway and be included in the requirements specification, a requirement must pass a number of tests. These tests are concerned with ensuring that the requirements are accurate, and do not cause problems by being unsuitable for the design and implementation stages later in the project.
Make The Requirement Measurable
In his work on specifying the requirements for buildings, Christopher Alexander describes setting up a quality measure for each requirement.
"The idea is for each requirement to have a quality measure that makes it possible to divide all solutions to the requirement into two classes: those for which we agree that they fit the requirement and those for which we agree that they do not fit the requirement."
In other words, if we specify a quality measure for a requirement, we mean that any solution that meets this measure will be acceptable. Of course it is also true to say that any solution that does not meet the measure will not be acceptable.
The quality measures will be used to test the new system against the requirements. The remainder of this paper describes how to arrive at a quality measure that is acceptable to all the stakeholders.
Quantifiable Requirements
Consider a requirement that says "The system must respond quickly to customer enquiries". First we need to find a property of this requirement that provides us with a scale for measurement within the context. Let's say that we agree that we will measure the response using minutes. To find the quality measure we ask: "under what circumstances would the system fail to meet this requirement?" The stakeholders review the context of the system and decide that they would consider it a failure if a customer has to wait longer than three minutes for a response to his enquiry. Thus "three minutes" becomes the quality measure for this requirement.
Any solution to the requirement is tested against the quality measure. If the solution makes a customer wait for longer than three minutes then it does not fit the requirement. So far so good: we have defined a quantifiable quality measure. But specifying the quality measure is not always so straightforward. What about requirements that do not have an obvious scale?
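A minimal, pytest-style sketch of such a test; submit_enquiry_and_wait_for_response is a hypothetical driver for the system under test.

```python
import time

MAX_RESPONSE_SECONDS = 3 * 60  # the agreed "three minutes" quality measure

def test_enquiry_response_time(submit_enquiry_and_wait_for_response):
    """Fail any solution that makes a customer wait longer than the measure."""
    start = time.perf_counter()
    submit_enquiry_and_wait_for_response("customer enquiry #42")
    elapsed = time.perf_counter() - start
    assert elapsed <= MAX_RESPONSE_SECONDS, (
        f"Response took {elapsed:.0f}s; fails the 3-minute quality measure"
    )
```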
Non-quantifiable Requirements
Suppose a requirement is "The automated interfaces of the system must be easy to learn". There is no obvious measurement scale for "easy to learn". However if we investigate the meaning of the requirement within the particular context, we can set communicable limits for measuring the requirement.
Again we can make use of the question: "What is considered a failure to meet this requirement?" Perhaps the stakeholders agree that there will often be novice users, and the stakeholders want novices to be productive within half an hour. We can define the quality measure to say "a novice user must be able to learn to successfully complete a customer order transaction within 30 minutes of first using the system". This becomes a quality measure provided a group of experts within this context is able to test whether the solution does or does not meet the requirement.
An attempt to define the quality measure for a requirement helps to rationalise fuzzy requirements. Something like "the system must provide good value" is an example of a requirement that everyone would agree with, but each person has his own meaning. By investigating the scale that must be used to measure "good value" we identify the diverse meanings.
Sometimes by causing the stakeholders to think about the requirement we can define an agreed quality measure. In other cases we discover that there is no agreement on a quality measure. Then we substitute this vague requirement with several requirements, each with its own quality measure.
Requirements Test 1
Does each requirement have a quality measure that can be used to test whether any solution meets the requirement?
By adding a quality measure to each requirement we have made the requirement visible. This is the first step to defining all the criteria for measuring the goodness of the solution. Now let's look at other aspects of the requirement that we can test before deciding to include it in the requirements specification.
Requirements Test 2
Does the specification contain a definition of the meaning of every essential subject matter term within the specification?
When the allowable values for each of the attributes are defined, this provides data that can be used to test the implementation.
Requirements Test 3
Is every reference to a defined term consistent with its definition?
Requirements Test 4
Is the context of the requirements wide enough to cover everything we need to understand?
Requirements Test 5
Have we asked the stakeholders about conscious, unconscious and undreamed of requirements?
Requirements Test 5 (enlarged)
Have we asked the stakeholders about conscious, unconscious and undreamed of requirements? Can you show that a modelling effort has taken place to discover the unconscious requirements? Can you demonstrate that brainstorming or similar efforts have taken place to find the undreamed of requirements?
Requirements Test 6
Is every requirement in the specification relevant to this system?
Requirements Test 7
Does the specification contain solutions posturing as requirements?
Requirements Test 8
Is the stakeholder value defined for each requirement?
Requirements Test 9
Is each requirement uniquely identifiable?
Requirements Test 10
Is each requirement tagged to all parts of the system where it is used? For any change to requirements, can you identify all parts of the system where this change has an effect?
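Tests 9 and 10 lend themselves to simple automation once requirements carry unique identifiers and tags. The sketch below checks that every requirement is tagged to at least one part of the system, so that change impact can be traced; the data is invented for illustration:

requirements = {
    "REQ-001": {"tagged_to": ["order_module", "ui_form"]},
    "REQ-002": {"tagged_to": []},  # untagged: change impact cannot be assessed
}

def untagged(reqs):
    # Return the identifiers of requirements not tagged to any part of the system.
    return [rid for rid, req in reqs.items() if not req["tagged_to"]]

print(untagged(requirements))  # -> ['REQ-002']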
Conclusions
The requirements specification must contain all the requirements that are to be solved by our system. The specification should objectively specify everything our system must do and the conditions under which it must perform. Management of the number and complexity of the requirements is one part of the task.
The most challenging aspect of requirements gathering is communicating with the people who are supplying the requirements. If we have a consistent way of recording requirements we make it possible for the stakeholders to participate in the requirements process. As soon as we make a requirement visible we can start testing it and asking the stakeholders detailed questions. We can apply a variety of tests to ensure that each requirement is relevant, and that everyone has the same understanding of its meaning. We can ask the stakeholders to define the relative value of requirements. We can define a quality measure for each requirement, and we can use that quality measure to test the eventual solutions.
Testing starts at the beginning of the project, not at the end of coding. We apply tests to assure the quality of the requirements. Then the later stages of the project can concentrate on testing for good design and good code. The advantage of this approach is that we minimise expensive rework by minimising requirements-related defects that could have been discovered, or prevented, early in the project's life.
References:
An Early Start to Testing: How to Test Requirements, Suzanne Robertson
The Importance of “Hands-On” Mobile App Testing
On a simulator, you still use a mouse to 'touch' the screen and simulate gestures. You also have a full-sized keyboard for data entry. Of course, this is very different from using a mobile device, wouldn't you say?
First, a mobile device sits in your hand. Each of us likely has slightly different ways of holding and operating the device. For some, it's done with one-hand using your thumb or a finger. For others, it might be two hands using both thumbs.
Second, there's the act of touching various screen elements like buttons and controls. This is much easier to do with a mouse pointer than a pudgy finger.
Based on the prior experience of many mobile testers, this difference is the most critical one for testing application design and function. Using a mouse with the simulator, you do not get the full effect of having to scroll through a large list view of items, or of having to play 'whack-a-mole' on the screen with your thumb because button placement for navigating multiple screens is inconsistent.
Mobile developers are strongly encouraged to ensure that application testing begins early, and happens often, on the mobile device itself rather than on a simulator. The same holds true for tablet devices.
From Web Trends, Mobile Analytics: "Even on the same mobile platform, screen sizes and resolutions can vary based on device type. For instance, the screen size and resolution on the HTC Incredible is different than that on the HTC EVO 4G. Consequently, for an application to have a consistent look and feel across both devices and across a variety of other devices, user interface elements and graphics need to be scalable."
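To see why scalable user interface elements matter, consider how a density-independent dimension maps to physical pixels on screens of different densities. The sketch below uses the Android convention of a 160-dpi baseline (px = dp * dpi / 160); the device densities are approximate figures quoted for illustration:

def dp_to_px(dp, dpi):
    # Android convention: 1 dp equals 1 px at the 160-dpi baseline density.
    return round(dp * dpi / 160)

# The same 48 dp button renders at different pixel sizes on different devices.
for device, dpi in [("HTC Incredible (~252 dpi)", 252), ("HTC EVO 4G (~217 dpi)", 217)]:
    print(device, dp_to_px(48, dpi), "px")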
Top 10 Reasons to Become a Mobile App Tester
There are lots of reasons to become a mobile app tester, which you would know if you read our posts every day. Here are ten of those reasons, in no particular order:
1. High income potential
2. You want to work in the "wild west" of new technology
3. No fancy degrees or certifications needed to get started
4. You want to say "I tested that app!" to your friends and family
5. You're bored with testing the same old web and desktop apps
6. You want to see the latest, greatest apps before everyone else
7. You want to be one of the early experts in a fast-growing field
8. You're curious, with a knack for problem-solving
9. You want to get paid to play with the latest apps and devices
10. You want your wireless bill to be tax deductible
Mobile Functional Testing: Manual or Automated?
Okay, so you know what aspects of your mobile application are in need of functional testing. But before you start crafting test cases or user journeys, you must answer another important question: manual testing or automation?
For established companies, the answer to that question would be a resounding "both". But for startups with limited testing budgets and rapidly-evolving applications, manual testing, although slightly more costly, is the preferred option. Although there are several open-source automated solutions, many of them are made exclusively for one operating system (iOS).
Other advantages of manual testing include:
- Find real bugs: Automation suites will highlight some errors, but most bugs within mobile apps – especially usability and layout issues – are only discovered under true real-world scenarios.
- Adaptability: Manual testing can be altered much more quickly and effectively than an elaborate automated test. If you're working within a startup environment, your testing requirements are likely to change as new features are added.
- Real feedback: Unfortunately, automated tests can't give you an honest (human) opinion about your app's performance, usability and functionality. We'll let you know when this changes. In the meantime, you need to see results from real users with real devices.
- Variable Control: As we've alluded to earlier, there are simply too many outside variables to rely on automation for all of your testing objectives. Until you've isolated and addressed all of these variables, manual testing should be your preferred methodology.
Mobile testing for start-ups is all about discovering new areas of concern.
Software Cost Estimation
This article examines the process of Software Cost Estimation and its impact on the software development process. We also highlight the various challenges involved in Software Cost Estimation and common solutions for navigating them.
Background:
Software Cost Estimation is widely considered to be a weak link in software project management, and it requires significant effort to perform correctly. Errors in Software Cost Estimation can be attributed to a variety of factors. Various studies over the last decade have indicated that 3 out of 4 software projects are not finished on time, are not finished within budget, or both.
Who is responsible for Software Cost Estimation?
The group of people responsible for creating a software cost estimate can vary with each organization. However, the following is true in most scenarios:
- People who are directly involved with the implementation participate in the estimate.
- The Project Manager is responsible for producing realistic cost estimates.
- Project Managers may perform this task on their own or consult with the programmers responsible.
- Various studies indicate that estimates are more accurate when the programmers responsible for the development are involved; programmers are also more motivated to meet targets that they helped to set.
The following scenarios are also possible:
- An independent cost estimation team creates the estimate.
- Independent experts are given the software specification and create a cost estimate; the estimation team reviews this and arrives at a final figure by group consensus.
Factors contributing to inaccurate estimation
· Scope creep and imprecise, drifting requirements
· New software projects pose new challenges, which may be very different from past projects
· Many teams fail to document metrics and lessons learned from past projects
· Estimates are often forced to match the available time and resources by aggressive leaders
· Unrealistic estimates may be created by various 'political undercurrents'
Impact of Under-estimating:
Under-estimating a project can be very damaging:
- It leads to improper project planning.
- It can result in under-staffing, and may produce an overworked and burnt-out team.
- Above all, the quality of deliverables may be directly affected due to insufficient testing and QA.
- Missed deadlines cause loss of credibility and goodwill.
The Estimation Process:
Generally, the Software Cost Estimation process comprises four main steps:
1) Estimate the size of the development product.
This comprises various sub-steps or sub-tasks. These tasks may already have been done during the Requirements Analysis phase; if not, they should be done as part of the estimation process. The important thing is that they are done, to ensure the success of the estimation process and of the software project as a whole.
a) Create a detailed Work Breakdown Structure. This directly impacts the accuracy of the estimate and is one of the most important steps. The Work Breakdown Structure should include any and all tasks that are within the scope of the project being estimated. The most serious handicap is the inability to clearly visualize the steps involved in the project; executing a software project is not just coding.
b) The Work Breakdown Structure will include the size and complexity of each software module, which can be expressed as a number of Lines of Code, Function Points, or any other unit of measure.
c) The Work Breakdown Structure should include tasks other than coding, such as Software Configuration Management, various levels and types of testing, documentation, communication, user interaction, implementation, knowledge transition, support tasks (if any), and so on.
d) Clearly indicate or eliminate any gray areas (vague or unclear specifications, etc.).
e) Also take into account the various risk factors and down times. Many different risk factors are involved: technical aspects such as availability of the environment, server/machine uptime and third-party software or hardware failures, and human aspects such as employee attrition and sick time. Some of these may seem to be 'overkill', but real-world experience shows that they affect the timelines of a project. If ignored, they may adversely impact the project timelines and estimates.
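As a rough illustration of step 1, a Work Breakdown Structure can be captured as simple structured data, with non-coding tasks and a risk buffer represented explicitly. All task names, productivity figures and numbers below are invented for illustration; this is a minimal sketch, not a template:

# Minimal WBS sketch: sizes drive coding effort, fixed hours cover other tasks.
wbs = {
    "coding: order module": {"size_loc": 2000},
    "coding: reporting module": {"size_loc": 1200},
    "configuration management": {"effort_hours": 40},
    "testing (unit and system)": {"effort_hours": 160},
    "documentation": {"effort_hours": 60},
    "knowledge transition": {"effort_hours": 24},
}
RISK_BUFFER = 0.15   # contingency for attrition, downtime, third-party failures (assumed)
LOC_PER_HOUR = 20    # illustrative productivity figure, not a benchmark

def total_effort_hours(wbs, buffer):
    # Convert coding size to hours, add fixed-effort tasks, then apply the buffer.
    hours = sum(t.get("size_loc", 0) / LOC_PER_HOUR + t.get("effort_hours", 0)
                for t in wbs.values())
    return hours * (1 + buffer)

print(round(total_effort_hours(wbs, RISK_BUFFER)))  # illustrative person-hours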
2) Estimate the effort in person-hours.
The result of the various tasks involved in step 1 is an effort estimate in person-hours. The effort for the various project tasks, expressed in person-hours, is also influenced by factors such as:
a) Experience/Capability of the Team members
b) Technical resources
c) Familiarity with the Development Tools and Technology Platform
3) Estimate the schedule in calendar months
The Project Planners work closely with the Technical Leads, Project Manager and other stakeholders and create a Project schedule. Tight Schedules may impact the Cost needed to develop the Application.
4) Estimate the project cost in dollars (or other currency)
Based on the above information, the project effort is expressed in dollars or another currency.
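Putting the four steps together, here is a compact sketch. The effort and schedule formulas follow the basic COCOMO model for 'organic' projects (effort = 2.4 * KLOC^1.05 person-months, schedule = 2.5 * effort^0.38 months); the size figure and the cost rate are illustrative assumptions, not calibrated values:

# Step 1: size estimate in thousands of lines of code (assumed figure).
kloc = 12.0

# Step 2: effort in person-months (basic COCOMO, organic mode).
effort_pm = 2.4 * kloc ** 1.05

# Step 3: schedule in calendar months (basic COCOMO, organic mode).
schedule_months = 2.5 * effort_pm ** 0.38

# Step 4: cost, using an assumed loaded rate per person-month.
rate_per_pm = 10_000  # dollars; purely illustrative
cost = effort_pm * rate_per_pm

print(f"effort: {effort_pm:.1f} person-months, "
      f"schedule: {schedule_months:.1f} months, cost: ${cost:,.0f}")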
Measuring the Size/Complexity of the Software Program:
This is one of the most elusive aspects in the Software Cost Estimation Process.
There are different methodologies for arriving at and expressing the size/complexity of the Software Program. Some of the popular ones are
1) Function Points
2) Lines of Code
3) Feature Points
4) Mk II function points
5) 3D Function Points
6) Benchmarking
We briefly explain each of the above methods in the sections below.
Function Points
The Function Point methodology was developed by Allan Albrecht at IBM. This methodology is based on the belief that the size of a software project can be estimated during the requirements analysis. It takes into account the inputs and outputs of the system. Five classes of items are counted:
1. External Inputs
2. External Outputs
3. Logical Internal Files
4. External Interface Files
5. External Inquiries
The total Function Point count is calculated based on:
a) The counts for each of these items
b) The weighting factors and adjustment factors defined in the methodology
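As a simplified sketch of the arithmetic, the unadjusted count is the weighted sum of the five item counts, and an adjustment factor is then applied. The weights below are the standard 'average' complexity weights from the Albrecht/IFPUG method; the item counts and the adjustment score are invented for illustration:

# Average-complexity weights for the five item classes (IFPUG).
WEIGHTS = {
    "external_inputs": 4,
    "external_outputs": 5,
    "logical_internal_files": 10,
    "external_interface_files": 7,
    "external_inquiries": 4,
}

counts = {  # illustrative counts for a small application
    "external_inputs": 12,
    "external_outputs": 8,
    "logical_internal_files": 5,
    "external_interface_files": 2,
    "external_inquiries": 6,
}

ufp = sum(WEIGHTS[k] * counts[k] for k in WEIGHTS)  # unadjusted function points

# The adjusted count applies a value adjustment factor derived from 14
# general system characteristics: AFP = UFP * (0.65 + 0.01 * total_GSC_score).
vaf = 0.65 + 0.01 * 30  # assumes GSC scores summing to 30, for illustration
print(ufp, round(ufp * vaf, 1))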
What are function points and why count them?
"Function points are a measure of the size of Software applications and the projects that build them. The size is measured from a functional, or user, point of view. It is independent of the computer language, development methodology, technology or capability of the project team used to develop the application."
Function points are not a perfect measure of effort to develop an application or of its business value, although the size in function points is typically an important factor in measuring each. Since the function point count for an application is independent of the technology used to develop the application it can be used for almost all types of applications such as GUI, OOP, Client Server, etc.
Since function points are based on screens, reports and other external objects, this measure takes the users' view. In these days of outsourcing and other confusion regarding the role of IT in an organization, understanding the users' view is of critical importance!
Lines of code:
Counting lines of code measures software from the developers' point of view. The number of lines of code is the traditional way of measuring application size, though many people now consider this method irrelevant. There are technical problems with the lines-of-code measure: it is difficult to compare lines of code when a mix of technologies is used, and there is no standard definition of what a line of code is. A program may contain blank lines, comments, data declarations, and multi-line statements.
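The definitional problem is easy to demonstrate: even a trivial counter must decide what to do with blank lines and comments, and each choice gives a different 'size'. A minimal sketch, assuming Python-style '#' comments:

def count_loc(source):
    physical = source.splitlines()
    # Exclude blank lines and whole-line comments for a "logical" count.
    logical = [ln for ln in physical
               if ln.strip() and not ln.strip().startswith("#")]
    return len(physical), len(logical)

sample = """# compute total
total = 0

for x in [1, 2, 3]:
    total += x  # accumulate
"""
print(count_loc(sample))  # -> (5, 3): which one is the program's size?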
Feature points Methodology:
It was developed by Software Productivity Research (SPR) in 1986. This technique takes into account the number of algorithms used in the application. It is compatible with the Function Points Methodology. The size calculated by the two methods for an ordinary transactional program would be the same. Feature Points Methodology is generally more useful for estimation in real-time process control, mathematical optimization and various embedded systems. The estimates are higher and considered more accurate in these cases.
Mk II function points Methodology:
This was developed by Charles Symons in 1984 at Nolan, Norton & Co., part of KPMG Management Consulting. The original Function Point approach suffers from the following weaknesses:
· It is often difficult to identify the components of an application.
· The original Function Point Methodology assigned weights to function point components based on "debate and trial."
· The original Function Point Methodology did not provide a means of accounting for internal complexity; the 'Feature Points' technique addresses this issue.
· When small systems are combined into larger applications, the Function Point Methodology makes the total function point count less than the sum of the components.
Mk II decomposes the application being counted into a collection of logical transactions. Each transaction consists of an input, a process and an output. For each transaction, the Unadjusted Function Points (UFP) are a function of the number of input data element types, entity types referenced, and output data element types. The UFPs for the entire system are then summed. Mk II is widely used in the UK, India, Singapore, Hong Kong and Europe. Users include governmental organizations, finance, insurance, retail and manufacturing.
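A sketch of the Mk II counting idea: each logical transaction contributes points in proportion to its input data element types, entity types referenced, and output data element types. The weights below are the commonly cited industry-calibrated Mk II values; the transactions themselves are invented for illustration:

# Commonly cited Mk II industry weights for inputs, entities and outputs.
W_IN, W_ENT, W_OUT = 0.58, 1.66, 0.26

transactions = [
    # (name, input data element types, entities referenced, output data element types)
    ("create order", 9, 3, 2),
    ("order status enquiry", 2, 2, 11),
]

ufp = sum(W_IN * ni + W_ENT * ne + W_OUT * no
          for _name, ni, ne, no in transactions)
print(round(ufp, 2))  # total Mk II unadjusted function points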
3D function points:
This methodology was developed by the Boeing Company and published in 1992. The new technique was designed to address two classic problems associated with the Albrecht approach (the original Function Point Methodology):
a) The original Function Point Methodology is not user-friendly.
b) It is inaccurate when measuring complex scientific and real-time systems.
3D Function Points take three dimensions into account: data, function and control. The data dimension is similar to the original Function Point Methodology; the function dimension accounts for transformations or algorithms; and the control dimension accounts for transitions or changes in application state.
Benchmarking:
Over the years, many organizations with significant development experience and mature processes have collected metrics on their software development projects, including the time and effort required to develop applications on various platforms and in various business domains. Benchmarks are created based on this data.
Each new software module to be developed can be categorized using the following:
a) Number of inputs
b) Number of outputs
c) Number of transactions
d) Algorithms
e) Features of the module
Based on the above factors, the module can be categorized, for example, as Simple, Medium or Complex. If it is too complex, you could express it in multiples of these three categories. The baseline effort, in terms of the person-hours each category takes, is predefined based on historical data and metrics for a similar platform, and this figure can be refined over a period of time. The approach is analogous to an algorithm for calculating a car insurance premium from a set of rating factors. The result is used to estimate the size and the effort needed for the software development.
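A rough sketch of this benchmarking idea: score a module on its drivers, map the score to a category, and look up the baseline effort from historical data. All weightings, thresholds and baseline figures below are invented for illustration:

# Baseline person-hours per category, as would be derived from historical metrics.
BASELINE_HOURS = {"Simple": 40, "Medium": 120, "Complex": 320}

def categorize(inputs, outputs, transactions, algorithms):
    # Illustrative weighting of the drivers listed above.
    score = inputs + outputs + 2 * transactions + 3 * algorithms
    if score <= 15:
        return "Simple"
    if score <= 40:
        return "Medium"
    return "Complex"

category = categorize(inputs=4, outputs=3, transactions=5, algorithms=2)
print(category, BASELINE_HOURS[category], "person-hours")  # -> Medium 120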