Software Quality Assurance in Multi-Vendor Environments

Overview of Quality

Quality is generally defined as a measure of excellence in a firm's production process and outcome (Swanson, Esposito & Jester 1999, p.33) or, as Bowen (1985, p.87) puts it, "the state of being free from defects, deficiencies, and significant variations", a state that embraces cost, scope, and time factors. The ISO 8402-1986 standard defines quality as the "totality of features and characteristics of a product or service that bears its ability to satisfy stated or implied needs" (Peyton & Rajwani 2004, p.243). In essence, quality is anchored in the demands and needs of the client, customer, or intended user, and should not be equated with "expensive": even low-priced goods are quality items as long as they meet the needs of the consumer (Juran & Gryna 1970, p.17; Godfrey 1999, p.324). Quality standards are part of the demands placed on manufacturing and service companies, where strict and consistent adherence to standards, observed mostly on measurable variables, ensures uniformity of output and meets the intended user's demands (DTM 2008, p.1).

Quality in the IT environment has proved to be one of the most important aspects of an organization's business process (Jones 1992, p.152), arguably because information technology has taken over virtually all departments in companies. The effectiveness of the integrated information technology system has, however, proved to be the most crucial aspect of any successful business process of a consortium (Jones 1991, p.213). Quality in an IT environment covers software and its applications, computer hardware, computer networks, and, most importantly, the database that stores and manages the organization's information (Swanson, Esposito & Jester 1999, p.4). The ever-changing needs of organizations make IT implementation a challenge, especially when the process involves many organizations working as a consortium. In such cases there is a consistent absence of common quality standards, since every organization has its own way of monitoring its standards: monitoring individual software quality, keeping cost as low as possible without compromising functionality, and delivering the project promptly (Zimmer 1989, p.628). As Kan (2002, p. xv) puts it, "software has been a troublesome discipline for more than 50 years. Software's problems are numerous and include cancellations, litigation, cost overruns, schedule overruns, high maintenance costs, and low level of user satisfaction…and that more than a half of large software projects will encounter some kind of delay, overrun, or failure to perform when deployed."

Quality Assurance

Quality assurance (QA) is a systematically planned production process that provides confidence in the suitability of a product for its intended use (Pyzdek 2003, p.19). In other words, quality assurance entails efforts to ensure that the goods and services provided meet the customer's or intended user's requirements in a systematic and reliable manner (Diaper 1989, p.162). It has, however, proved extremely hard to eliminate all quality shortcomings from the production process, hence the emergence of quality measures, which essentially define the limit at which a product should be considered quality-adherent (Sage 2002, p.285; OpenSTA 2008, p.2; Kimball & Ross 2002, p.51). Accordingly, quality assurance has two key characteristics: (a) the product is declared "fit for purpose or use", and (b) the product is declared "right first time", meaning mistakes are eliminated (Jeng 2006, p.11). It is therefore the work of the quality assurance team to ensure a certified level of quality in the raw materials, the assembly process, the product and its components, the services that relate to production, and, most importantly, the management of the production and inspection processes (Jarrar, Al-Mudimigh & Zairi 2000, p.177; Crosby 1979, p.312).

Historical view

Early efforts to control quality appeared when craftsmen began manufacturing tools and materials purposefully to sell to end users (Juran & Gryna 1970, p.161). In this context, a simple principle of quality applied: "let the buyer beware" (p.166). Early civil engineers used quality measures to ensure their designs and constructions met specified standards; for example, the Great Pyramid of Giza was specified to be perpendicular to within 3.5 arcseconds (Ishikawa 1985, p.239). With the industrial revolution, the individual control of production typical of the agrarian period gave way to monitored standards of quality adherence, because large groups of people were now doing similar work in the factories (Harry & Lawson 1992, p.57; Gauspari 1988, p.149). The systematic approach to quality standards, however, started in the 1930s in industrial manufacturing in the United States, beginning with the cost of scrap and rework (p.15). World War II further necessitated quality monitoring with the advent of Statistical Quality Control (SQC), driven by increased mass production (Feigenbaum 1991, p.171). In the post-war era, United States industries continued with the quality control concept despite pressure to adopt the more comprehensive Quality Assurance criteria (p.174).

Kan (2002, p. xix) traces software engineering and quality issues back to the early 1960s, when organizations began to exploit information technology to meet their respective needs, thereby linking software directly to their daily operations. He states that things changed in the 1970s, when the industry was bogged down by rampant delays and cost overruns, prompting a shift of focus towards planning and controlling software projects (p.3). Subsequent events led to the introduction of phase-based life-cycle models and analyses such as the mythical man-month (p.4). The 1980s saw a decline in hardware cost, prompting virtually every institution to adopt information technology in its operations, and various software engineering cost models were developed (Crosby 1979, p.199; Chen & Norcio 1991, p.47). Although the software quality issue emerged towards the end of the 1980s, it is the 1990s and beyond that have been described as the quality era (Crosby 1979, p.201; Harry & Lawson 1992, p.77). Billing errors, telephone call disruptions, and missile failures in wars showed how heavily society depends on software, and the demand for quality has intensified the need for thorough assessment (Bowen 1985, p.12). In this era, software quality has become more than just an advantage to consumers; it is a necessity for any firm that intends to have a competitive advantage (Card, Moran & Newell 1983, p.562).

Quality Assurance vs Quality Control

Whereas quality control emphasizes product testing aimed at uncovering defects and subsequently reporting them to management for further action, quality assurance works on improving and stabilizing the production process in order to minimize or avoid the issues that led to the defects or shortcomings in the first place (Chen & Norcio 1991, p.21; Basili & Rombach 1988, pp.759-760). Several quality assurance methodologies have been used to ensure that mistakes do not arise, and quality control (QC) is considered an integral part of the overall quality assurance process (Basili 1985, p.7; Card, Moran & Newell 1983, p.229; Humphrey 1989, p.213).

Total quality management

In the modern statistical era, many organizations have applied statistical process control in pursuit of the six-sigma level of quality, that is, confining the probability of an unexpected failure to beyond six standard deviations (DTM 2008, p.5; Harry & Lawson 1992, p.453). Traditional statistical process control in manufacturing operations, however, normally proceeds to the next level by sampling and testing a fraction of the output (Harry & Lawson 1992, p.459). In this process, variance is considered critical and is therefore continuously corrected before production of the product is complete (Godfrey 1999, p.49; Jarrar, Al-Mudimigh & Zairi 2000, p.21).
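
To make the six-sigma figure concrete, the short Python sketch below (not taken from the cited sources) computes the normal-tail defect probabilities implied by a six-standard-deviation limit, both without and with the conventional 1.5-sigma long-term mean shift that yields the familiar figure of roughly 3.4 defects per million opportunities.

    import math

    def upper_tail(z: float) -> float:
        """P(Z > z) for a standard normal variable, via the complementary error function."""
        return 0.5 * math.erfc(z / math.sqrt(2))

    # Pure +/-6 sigma tails (no mean shift): roughly two defects per billion opportunities.
    p_six_sigma = 2 * upper_tail(6.0)

    # The conventional Six Sigma figure assumes a 1.5-sigma long-term mean shift,
    # so the nearer specification limit sits 4.5 sigma away: about 3.4 defects per million.
    p_shifted = upper_tail(4.5)

    print(f"defects per million, no shift:   {p_six_sigma * 1e6:.4f}")
    print(f"defects per million, 1.5 shift:  {p_shifted * 1e6:.1f}")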

Quality Assurance in IT environments

The major challenge for software engineers is to deliver high-quality software on time to increasingly demanding customers. The W3C Working Group (1998, p.18) study of project cost overruns in the United States revealed that 33% of IT projects are overestimated by between 21% and 50%, 18% by between 51% and 100%, and 11% by between 101% and 200%. IT project management has therefore concentrated on minimizing project cost and delivering projects on time (Chen & Norcio 1991, p.1429). The current challenge, however, goes beyond cost and also encompasses areas such as general standards compliance, security, and project scope (TurboData 2008, p.2).

Applying software metrics is one way of determining the reliability of software estimation (Basili & Rombach 1988, p.759). It entails comparing actual project effort against the projected project schedule, together with risk management: identifying potential risks at an early stage, estimating the probability of each risk occurring, assessing its impact if it does occur, reducing or eliminating the risk, and tracking risks throughout the project phases (Jeng 2006, p.421). An example of a serious project failure is the Taurus project at the London Stock Exchange, which was abandoned midway despite the £800 million already spent (Rankin 2002, p.12). Furthermore, the original budget was only £6 million, and by the time of abandonment the project was already 11 years late (13,200% late) (p.11).
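
As a rough illustration of the metrics described above, the hypothetical Python sketch below computes a simple schedule-slip percentage and a risk-exposure figure (probability of occurrence times impact). The risk names and numbers are invented for illustration and are not drawn from the cited sources.

    from dataclasses import dataclass

    @dataclass
    class Risk:
        name: str
        probability: float   # estimated likelihood of occurring (0..1); hypothetical values
        impact_days: float   # estimated cost in person-days if the risk occurs

        @property
        def exposure(self) -> float:
            """Risk exposure = probability of occurrence x impact if it occurs."""
            return self.probability * self.impact_days

    def schedule_slip(planned_days: float, actual_days: float) -> float:
        """Relative schedule overrun, e.g. 0.25 means 25% late."""
        return (actual_days - planned_days) / planned_days

    # Hypothetical project data, purely for illustration.
    risks = [
        Risk("key vendor delivers interface late", 0.4, 60),
        Risk("data migration fails validation", 0.2, 90),
    ]
    for r in sorted(risks, key=lambda r: r.exposure, reverse=True):
        print(f"{r.name}: exposure {r.exposure:.0f} person-days")

    print(f"schedule slip: {schedule_slip(planned_days=120, actual_days=150):.0%}")

Ranking risks by exposure in this way gives the project team a simple, repeatable basis for deciding which risks to mitigate first and for tracking them across project phases.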

Implementing the requirements involves design, coding, and testing activities that call for user manuals, technical documentation, and training materials (Swanson, Esposito & Jester 1999, p.151). During implementation, the project team is likely to face challenges that include technical complexity, communicating changes to the project team, building quality into the software product, verifying that the software conforms to the requirements, ensuring timely delivery, and taking corrective measures where possible and necessary (Jones 1992, p.126; Jones 1991, p.111; Kimball & Ross 2002, p.121).

Quality Assurance methodologies

A Canadian Teaching Hospital: A Case of Integrated IT Quality Assurance

The Canadian teaching hospital in this case has slightly over 10,000 employees and 1,000 physicians. On average, the hospital handles over 100,000 emergency cases and ten million lab tests, and its "data warehouse has over 100 million records, and additionally adds tens of thousands of new records a week" (Forster 2005, p.67).

In this case study, Peyton, Zhan & Steven (2009, p.11) analyze the reliability of the hospital's complex IT process in delivering adequate services to its clients (patients) and stakeholders. They examined the off-the-shelf IT components used by the hospital and used OpenSTA as their testing tool (OpenSTA 2008, p.5). The tool was used to automate test scripts and to simulate up to 50 concurrent requests running reports over extended periods, mimicking the load that must be handled in production; the objective was to test the quality of service that the system provides to the hospital (Peyton, Zhan & Steven 2009, p.13). In short, the test was aimed at ascertaining the following, according to Forster (2005, p.16):

  1. The system's usefulness, i.e. how much the system is used, by which users, and with what frequency;
  2. The system's performance, i.e. the average response time when the system is exposed to different load levels; and
  3. The system's reliability, i.e. monitoring the status of the system and the failures of its components under normal and extreme load over time (p.17).

The test also analyzed the quality of the framework the hospital uses to deliver its IT processes (p.18). In principle, the OpenSTA client simulates user actions in the enterprise PM portal, and the log files containing detailed information from each component of the system are then analyzed (OpenSTA 2008, p.4). This is possible because the OpenSTA log records the results and execution time of each user action, and because there is a database log, accompanied by a server log for each physical box, that monitors file handles and memory usage (pp.5-6).
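
OpenSTA drives such tests with its own scripting and HTTP engine; the Python sketch below is therefore only a generic illustration, under assumed conditions, of the underlying idea: fire a fixed number of concurrent report requests at a hypothetical portal URL and record the response times.

    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor
    from statistics import mean

    REPORT_URL = "http://example.org/pm-portal/report"   # hypothetical endpoint, replace as needed
    CONCURRENT_USERS = 50

    def timed_request(url: str) -> float:
        """Issue one report request and return its response time in seconds."""
        start = time.perf_counter()
        with urllib.request.urlopen(url, timeout=30) as response:
            response.read()
        return time.perf_counter() - start

    def run_load_test(url: str, users: int) -> list:
        """Fire `users` concurrent requests and collect their response times."""
        with ThreadPoolExecutor(max_workers=users) as pool:
            return list(pool.map(timed_request, [url] * users))

    if __name__ == "__main__":
        timings = run_load_test(REPORT_URL, CONCURRENT_USERS)
        print(f"requests: {len(timings)}  mean: {mean(timings):.2f}s  max: {max(timings):.2f}s")

In the hospital study the same idea is driven by OpenSTA's own scripts over much longer periods, with the recorded response times later cross-checked against the component logs.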

The results revealed that members of the quality assurance team were able to perform a deeper and more comprehensive analysis of quality standards (Kan 2002, p.88; Kimball & Ross 2002, p.19). For instance, the OpenSTA log indicated a slower response time than normally expected (Forster 2005, p.21).

Further information revealed that the database was slow and that a particular server was running out of memory (Forster 2005, p.22). However, this approach had several drawbacks: it is highly dependent on the existence and accuracy of the logs that the system's different components provide, and correlating the log entries to individual user actions at the interface level can prove a very challenging task (pp.22-23). Another challenge is that the process requires considerable skill with data warehouse tools, and there is normally a performance and behavioral overhead directly proportional to the amount of logging enabled (p.24).
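
One common way to ease that correlation, sketched below with invented log formats rather than the hospital's actual ones, is to tag every user action with a request identifier that each component writes to its own log, so that entries from different components can later be joined on that identifier.

    import re
    from collections import defaultdict

    # Invented log lines purely for illustration; real component logs will differ.
    portal_log = [
        "2009-03-01T10:00:01 req=abc123 action=run_report user=dr_smith",
        "2009-03-01T10:00:05 req=def456 action=run_report user=nurse_jones",
    ]
    db_log = [
        "2009-03-01T10:00:02 req=abc123 query_ms=3200",
        "2009-03-01T10:00:06 req=def456 query_ms=180",
    ]

    def index_by_request(lines):
        """Group log lines by the shared request identifier."""
        indexed = defaultdict(list)
        for line in lines:
            match = re.search(r"req=(\w+)", line)
            if match:
                indexed[match.group(1)].append(line)
        return indexed

    correlated = index_by_request(portal_log)
    for req_id, lines in index_by_request(db_log).items():
        correlated[req_id].extend(lines)

    for req_id, lines in correlated.items():
        print(req_id)
        for line in lines:
            print("    " + line)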

Consortiums

A consortium, from the Latin for "partnership", is an association of two or more entities (e.g. companies, individuals, organizations, or governments) formed with the specific intention of carrying out an activity together or combining resources for a common goal (Diaper 1989, p.29; Crosby 1979, p.21). Organizations have found it necessary to create an environment favorable to their general operations in terms of cost, efficiency, speed, and scope (Forster 2005, p.71). By combining their resources, organizations can attain a level of data processing that would be unattainable if each organization went its separate way (p.91). Traditionally, consortiums were seen purely as a cost-saving venture in which organizations came together to cut the costs of project implementation (p.89). However, many organizations have realized that sharing and cooperation are about more than cost and lead to improved operation of the project process. The motivations behind collaborating to form a consortium are to acquire a competitive advantage in marketing, ease of access to financial resources, and acquisition of new technology and know-how (Chountas & Kodogiannis 2004, p.14; Norton 2009, p.2). The challenge, however, comes in the implementation process, where areas such as information technology have to be integrated in a way that satisfies every organization's quality criteria, or at least reaches an acceptable level. Why does this become difficult and problematic? In practice, each organization has its own specific criteria for monitoring its IT processes, quality, and industry standards, given the complexity of information technology (Bannister 2004, p.234; Chountas & Kodogiannis 2004, p.214). A barrier to uniform criteria for the consortium is therefore imminent and is likely to jeopardize efforts to reduce cost, increase functionality, deliver promptly, and contain project scope creep (TurboData 2008, p.33; Swanson, Esposito & Jester 1999, p.6).

Quality Assurance in multi-vendor environments (consortiums)

The multi-vendor environment makes IT project integration a tough process, even though the main purpose is to form a powerful team to implement the project, since present-day IT processes grow more complex every day (Bannister 2004, p.321; Basili 1985, p.62). Typically, a consortium comprises the main hardware supplier, a supplier of specialist or peripheral hardware, a package software supplier, a system integrator or project manager, an enabling software (e.g. RDBMS) supplier, an integration or middleware software supplier, a communications technology supplier, and many more (Kimball & Ross 2002, p.222).

What makes the multi-vendor process troublesome and risky? Bannister highlights some of the reasons problems arise:

  1. Mutual blame; when a consortium project runs into a problem, members may shift blame onto one another rather than focusing on fixing it. Experts have described this as an "it's a biscuit, it's a bar" problem: the members of the consortium argue over the cause of the problem instead of focusing on its solution. This may be perpetuated by arguments over the performance of other consortium members, cost overruns, compatibility problems, and delays in delivery;
  2. Poor project management structure; if the consortium is not well organized and planned, the IT project management may be poor, compromising the project timeframe, the budget, and the deliverables. Consortia are typically off-on arrangements, that is, they only come together when there is a project to undertake or a particular contract to bid for, which means that some of the project members have never worked together; and
  3. Culture clash; differences in corporate cultures, styles, and standards can interfere with project delivery. Furthermore, personality clashes have been known to interfere with the successful implementation of projects (pp.327-328).

According to Chountas & Kodogiannis (2004, p.15), the success of IT integration in a multi-vendor environment relies critically on the ability of the organizations to transform into a bigger learning consortium and on the rate at which they learn to develop the product with respect to time, cost, quality, and customer satisfaction; more importantly, many consortia have recognized the importance of project management (PM). However, traditional project management concepts lack an effective mechanism to facilitate both qualitative and quantitative learning processes (Forster 2005, p.61; Andy 1997, p.311). This is why many researchers have identified continuous learning in the project management process as the major step towards successful project implementation in consortia (Jarrar, Al-Mudimigh & Zairi 2000, p.197).

Trends of QA in Consortiums

In August 2000, what was expected to be a standard two-hour information system upgrade in a South-eastern hospital became a 33-hour information technology disaster (Kan 2002, p.213). The 300-bed hospital was a good example of a multi-focus delivery system providing technologically superior healthcare in the city's competitive metropolitan market, and it captured its clinical data electronically at the point of service (p.214). The enterprise-wide information system was configured as a sophisticated network of servers, personal computer workstations, and terminals, and the hospital prided itself on applying a "best of breed" approach to its applications development (pp.214-15). In this arrangement, multiple vendors were represented across the clinical, administration, and decision support systems, a type of information system that required collaborative arrangements and relationships among the vendors to develop interfaces between legacy systems and the emerging data repository (p.215). The primary vendor was designated as responsible for communicating with the other application and hardware vendors during system upgrades or new installs (p.216).

The events leading to the incident began when the primary vendor of the main information system was brought on-site to perform a minor hardware upgrade on a clinical documentation system (Kan 2002, p.216). During this upgrade, the technicians determined that the hardware firmware had to be updated to support the new hardware changes being made; after verifying that a good system backup existed for the server, the vendor identified the required firmware version and downloaded the patches to the system (p.217). From that moment there was a series of unpreventable reactions: the attempt to restart the system failed, and all the information supporting clinical documentation failed to come back up (p.218). Root-cause analysis revealed a lack of sufficient communication among the vendor groups supporting the various elements of the organization's system (Kan 2002, p.220). Instead of direct vendor-to-vendor communication, hospital information system employees (with no knowledge of the bug) talked separately with the vendors and relayed information between them (p.221). Without that important communication channel, something considered a simple upgrade ended up a disaster.

Lessons learned

One important lesson learned here is the importance of system dynamics in a consortium. System dynamics focuses on how performance evolves out of the interactions between managerial decision-making and the development process. Jeng (2006, p.22) and Peyton & Rajwani (2004, p.33) highlight that system dynamics has been applied to various software development processes to improve the situation, with the focus on reducing schedule pressure and increasing productivity through technology and management improvement (Chountas & Kodogiannis 2004, p.16). Despite agreement that these factors greatly influence the failure or success of projects, closer monitoring reveals that project success or failure depends on a broader view of the project process that entails several aspects (pp.17-18). Bannister (2004, p.212) explains that a project runs over time or budget when actual performance does not match estimates and the project is unable to adapt. It is therefore important to apply the system dynamics approach in managing a multi-vendor project whose scope, in practice, keeps changing (Advanced Data Generator 2008, p.4; Pyzdek 2003, p.297).
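
As a purely illustrative toy model (not drawn from the cited sources), the sketch below simulates one feedback loop often discussed in the system-dynamics literature: schedule pressure raises the rate of output but also the fraction of rework, so part of the apparent gain is eaten by defects. All parameters are invented for illustration.

    def simulate(weeks: int, planned_tasks: int, base_rate: float = 10.0) -> None:
        """Toy feedback loop: schedule pressure boosts output but also the rework fraction."""
        done, backlog = 0.0, float(planned_tasks)
        for week in range(1, weeks + 1):
            pressure = backlog / planned_tasks            # crude 0..1 proxy for schedule pressure
            rate = base_rate * (1 + 0.5 * pressure)       # pushing harder raises gross output...
            rework = 0.1 + 0.3 * pressure                 # ...but also the share of defective work
            effective = rate * (1 - rework)
            done += effective
            backlog = max(0.0, planned_tasks - done)
            print(f"week {week:2d}: pressure={pressure:.2f} "
                  f"effective_rate={effective:.1f} remaining={backlog:.0f}")

    simulate(weeks=8, planned_tasks=100)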

In addition, formal disaster recovery plans should be put in place, the policy guiding the process should be made clear to all the vendors, and the manual clinical documentation procedures specified for use in the event of information system failure should be tested and proven adequate (Basili & Rombach 1987, p.41; Basili 1989, p.51).

Finally, software integration should be carried out carefully to ensure compliance with the set standards and procedures (Sage 2002, p.10). The quality assurance team should ensure that all deliverables are monitored for conformance to plans, that all test plans and procedures are followed, and that any non-conformance is appropriately reported and resolved (Bannister 2004, p.187; Rankin 2002, p.1221).

List of References

Advanced Data Generator 2008, Upscene Productions. Web.

Andy S 1997, Human Computer Factors: A study, London, McGraw-Hill.

Bannister F 2004, Purchasing and financial management of information technology, London, Butterworth-Heinemann.

Basili R 1985, “Quantitative Evaluation of Software Engineering Methodology”, Proceedings First Pan Pacific Computer Conference, Melbourne.

Basili R 1989, “Software Development: A paradigm for the Future”, Proceedings 13th International Computer Software and Applications Conference (COMPSAC), Keynote Address, Orlando, FL.

Basili R & Rombach D 1987, “Tailoring the Software Process to Project Goals and Environments”, Proceedings Ninth International Conference on Software Engineering, Monterey, Calif: IEEE Computer Society, pp. 345-357.

Basili R & Rombach D 1988, "The TAME project: Towards improvement oriented software environments", IEEE Transactions on Software Engineering, Vol. SE-14, No. 6, pp.758-773.

Bowen P 1985, “Specification of software quality attributes”, Rome Air Development Centre, Vol. 3, RADC-TR-85-37.

Card S K, Moran T P & Newell A 1983, The Psychology of Human-Computer Interaction, Lawrence Erlbaum Associates.

Chen Q & Norcio F 1991, "A neural network approach to user modelling", Proceedings of 1991 IEEE International Conference on Systems, Man and Cybernetics, pp. 1429-1434.

Chountas P & Kodogiannis V 2004, Development of a Clinical Data Warehouse, Medical Information Systems: The Digital Hospital (IDEAS-DH'04), pp. 14-18.

Crosby B 1979, Quality is free: The art of making quality certain, New York, McGraw-Hill.

Diaper D 1989, Knowledge Elicitation: Principles, Techniques and Applications, England, Ellis Horwood.

DTM 2008, “Soft Data Generator”.

Feigenbaum V 1991, Total quality control, New York, McGraw-Hill.

Forster A J 2005, "The Ottawa Hospital data warehouse: obstacles and opportunities", IT in Healthcare Series.

Gauspari J 1988, The customer connection: Quality for the rest of us, New York, American Management Association.

Godfrey A B 1999, Juran's Quality Handbook, ISBN 0-07-034003-X.

Harry J & Lawson J 1992, Six sigma producibility analysis and process characterization, Reading, Mass: Addison- Wesley.

Humphrey S 1989, Managing the software process, Reading, Mass: Addison-Wesley.

Ishikawa K 1985, What is total quality control? The Japanese way, Englewood Cliffs, N.J.: Prentice-Hall.

Jarrar Y.F, Al-Mudimigh A & Zairi M 2000, “ERP implementation critical success factors –the role and impact of business process management”, IEEE International Conference on Management of Innovation and Technology.

Jeng J 2006, “Service-Oriented Business Performance Management for Real Time Enterprise”, E-Commerce Technology.

Jones C 1986, Programming productivity, New York, McGraw- Hill.

Jones C 1991, Applied software measurement: Assuring productivity and quality, New York, McGraw-Hill.

Jones C 1992, "Critical problems in software measurement", Version 1.0, Burlington, Mass: Software Productivity Research (SPR).

Juran J M & Gryna F M 1970, Quality Planning and analysis: From product development through use, New York, McGraw- Hill.

Kan S 2002, Metrics and Models in Software Quality Engineering (2nd ed.), New York, Wiley-IEEE.

Kimball R & Ross M 2002, The Data Warehouse Toolkit: The Complete Guide to Dimensional Modeling, 2nd ed, Wiley.

OpenSTA 2008, “Open System Testing Architecture”.

Norton P 2009, “A Minnesota consortium finds benefits in sharing”, Government Finance Review, FindArticles.com. Web.

Peyton L & Rajwani A 2004, “A Generative framework for managed services”, International Conference on Generative Programming and Component Engineering.

Pyzdek T 2003, “Quality Engineering Handbook”, ISBN 0-8247-4614-7

Rankin C 2002, “The software testing automation framework”, IBM Systems Journal, Software Testing and Verification, Vol. 41, No.1.

Sage A 2002, “Knowledge, skill and information requirement for system designers,” in System Design: Behavioural Perspectives on Designers, Tools, and Organizations, North Holland, pp.285-303.

Shneiderman B 2000, “Universal usability,” Communications of the ACM, Vol. 43, No. 5, pp. 85-91.

Swanson D, Esposito R & Jester J 1999, “Managing quality for Information Technology”, IBM Global Services. Web.

TurboData 2008, “Canam Software”.

W3C Working Group 2008, Web Services Architecture, Note 11.

Zimmer, B 1989, “Software quality and productivity at Hewlett-Packard”, Proceedings of IEEE Computer Software and Applications Conference, pp. 628-632.