In the context of software engineering, software quality refers to two related but distinct notions that exist wherever quality is defined in a business context:
- The functional quality of software reflects how well it complies with or conforms to a given design, based on functional requirements or specifications. That attribute can also be described as the fitness for purpose of a piece of software or how it compares to competitors in the marketplace as a worthwhile product. It is the degree to which the correct software was produced.
- The structural quality of software refers to how it meets non-functional requirements that support the delivery of the functional requirements, such as robustness or maintainability. It has much more to do with the degree to which the software works as needed.
Many aspects of structural quality can be evaluated only statically, through analysis of the software's inner structure, its source code, at the unit level, the technology level and the system level, which is in effect how its architecture adheres to sound principles of software architecture outlined in a paper on the topic by the OMG. However, some structural qualities, such as usability, can be assessed only dynamically (users or others acting on their behalf interact with the software or, at least, with prototypes or partial implementations; even interaction with a mock version made of cardboard represents a dynamic test because that version can be considered a prototype). Other aspects, such as reliability, may involve not only the software but also the underlying hardware, and can therefore be assessed both statically and dynamically (stress testing).
Functional quality is usually assessed dynamically but it is also possible to use static tests (such as software reviews).
Historically, the structure, classification and terminology of attributes and metrics applicable to software quality management have been derived or extracted from ISO 9126-3 and the subsequent ISO/IEC 25000:2005 quality model, also known as SQuaRE. Based on these models, the Consortium for IT Software Quality (CISQ) has defined five major desirable structural characteristics needed for a piece of software to provide business value: Reliability, Efficiency, Security, Maintainability, and (adequate) Size.
Software quality measurement quantifies to what extent a software program or system rates along each of these five dimensions. An aggregated measure of software quality can be computed through a qualitative or quantitative scoring scheme or a mix of both, and then through a weighting system reflecting the priorities. This view of software quality positioned on a linear continuum is supplemented by the analysis of "critical programming errors" that under specific circumstances can lead to catastrophic outages or performance degradations that make a given system unsuitable for use regardless of its rating based on aggregated measurements. Such programming errors found at the system level represent up to 90% of production issues, whilst at the unit level, even if far more numerous, programming errors account for less than 10% of production issues. As a consequence, code quality without the context of the whole system, as W. Edwards Deming described it, has limited value.
To view, explore, analyze, and communicate software quality measurements, concepts and techniques of information visualization provide useful, interactive visual tools, especially if several software quality measures have to be related to each other or to components of a software or system. For example, software maps represent a specialized approach that "can express and combine information about software development, software quality, and system dynamics".
Motivation
"Science is just as mature as the measuring instrument," (Louis Pasteur in Ebert & Dumke, p.Ã, 91). Measuring software quality is motivated by at least two reasons:
- Risk Management: Software failure has caused more than inconvenience. Software errors have caused human deaths. The causes range from poorly designed user interfaces to redirect programming errors. Examples of programming errors that cause some deaths are discussed in Dr. Leveson. This results in requirements for the development of some types of software, especially and historically for software embedded in medical devices and others that govern critical infrastructures: "[Engineers who write embedded software] see Java programs stop for a third of a second to do garbage collecting and update the user interface, and they imagine the aircraft falling from the sky. ". In the United States, in the Federal Aviation Administration (FAA), the FAA Aircraft Certification Service provides software, policy, guidance and training programs, focusing on Complex Electronics software and Hardware that has an effect on air products ("products" are aircraft, engine, or propeller).
- Cost Management: As in other engineering fields, applications with good quality structural software cost less to maintain and are easier to understand and change in response to urgent business needs. Industry data show that poor application quality in core business applications (such as enterprise resource planning (ERP), customer relationship management (CRM) or large transaction processing systems in financial services) result in swelling of costs and schedules and creating waste in the form of rework ( up to 45% development time in some organizations). In addition, poor structural quality is highly correlated with high-impact business interruptions due to corrupted data, application outages, security breaches, and performance issues.
However, the difference between measuring and improving software quality in embedded systems (with emphasis on risk management) and software quality in business software (with an emphasis on cost management and maintenance) becomes somewhat irrelevant. Current embedded systems often include user interfaces and their designers pay great attention to issues that affect users' usability and productivity as their business-focused partner. The latter in turn see the ERP or CRM system as the company's uptime and performance nervous system is essential for the well-being of the company. This convergence is most visible in mobile computing: users who access ERP applications on their smartphones depend on software quality across all types of software layers.
Both types of software now use a pile of layered technologies and complex architecture so that software quality analysis and measurement must be managed comprehensively and consistently, separated from the ultimate purpose or use of the software. In both cases, engineers and management must be able to make rational decisions based on fact-based measurements and analysis in compliance with the rules. "In God (we) believe all others carry data". ((mis-) linked to W. Edwards Deming and others).
Definition
There are many different definitions of quality. For some it is the "capability of a software product to conform to requirements" (ISO/IEC 9001), while for others it can be synonymous with "customer value" (Highsmith, 2002) or even defect level.
The first definition of quality that history remembers is from Shewhart in the beginning of the 20th century: "There are two common aspects of quality: one of them has to do with the consideration of the quality of a thing as an objective reality independent of the existence of man. The other has to do with what we think, feel or sense as a result of the objective reality. In other words, there is a subjective side of quality." (Shewhart)
Kitchenham, Pfleeger, and Garvin's five perspectives on quality
Kitchenham and Pfleeger, further reporting the teachings of David Garvin, identify five different perspectives on quality:
- The transcendental perspective deals with the metaphysical aspect of quality. In this view of quality, it is "something toward which we strive as an ideal, but may never implement completely". It can hardly be defined, but is similar to what a federal judge once commented about obscenity: "I know it when I see it".
- The user's perspective relates to the suitability of the product for a particular usage context. While the transcendental view is subtle, the user's view is more concrete, based on product characteristics that meet user needs.
- The manufacturing perspective represents quality as conformity to requirements. This quality aspect is emphasized by standards such as ISO 9001, which defines quality as "the extent to which a set of inherent characteristics meets the requirements" (ISO/IEC 9001).
- The product perspective shows that quality can be appreciated by measuring the inherent characteristics of the product.
- The final perspective is value-based. This perspective recognizes that the different perspectives on quality may have different importance, or value, to different stakeholders.
Software quality by Deming
The inherent problem with trying to define the quality of a product, almost any product, was stated by the master Walter A. Shewhart: the difficulty in defining quality is to translate future needs of the user into measurable characteristics, so that a product can be designed and turned out to give satisfaction at a price that the user will pay. This is not easy, and as soon as one feels fairly successful in the endeavor, one finds that the needs of the consumer have changed, competitors have moved in, etc.
Software quality by Feigenbaum
Quality is a customer determination, not an engineer's determination, not a marketing determination, nor a general management determination. It is based upon the customer's actual experience with the product or service, measured against his or her requirements, stated or unstated, conscious or merely sensed, technically operational or entirely subjective, and always representing a moving target in a competitive market.
Software quality by Juran
The word quality has multiple meanings. Two of these meanings dominate the use of the word: 1. Quality consists of those product features which meet the needs of customers and thereby provide product satisfaction. 2. Quality consists of freedom from deficiencies. Nevertheless, in a handbook such as this it is convenient to standardize on a short definition of the word quality as "fitness for use".
CISQ quality model
Although "quality is a perceptual attribute, conditional and somewhat subjective and can be understood differently by different people" (as stated in the article on quality in business), the quality characteristics of software structures have been clearly defined by the Consortium for Software Quality IT (CISQ ). Under the guidance of Bill Curtis, co-author of the first Skills Maturity Model skill and Director of CISQ; and Capers Jones, CISQ Distinguished Advisor, the CISQ has set out five key desirable characteristics of a software required to deliver business value. In the House of Quality model, this is the "Whats" that needs to be achieved:
- Reliability
- An attribute of resiliency and structural solidity. Reliability measures the level of risk and the likelihood of potential application failures. It also measures the defects injected due to modifications made to the software (its "stability", as termed by ISO). The goals of checking and monitoring reliability are to reduce and prevent application downtime, application outages and errors that directly affect users, and to enhance the image of IT and its impact on a company's business performance.
- Efficiency
- The source code and software architecture attributes are the elements that ensure high performance once the application is in run-time mode. Efficiency is especially important for applications in high execution-speed environments such as algorithmic or transactional processing, where performance and scalability are paramount. An analysis of source code efficiency and scalability provides a clear picture of the latent business risks and the harm they can cause to customer satisfaction due to response-time degradation.
- Security
- A measure of the likelihood of potential security breaches due to poor coding practices and architecture. This quantifies the risk of encountering critical vulnerabilities that damage the business.
- Maintainability
- Maintainability includes the notions of adaptability, portability and transferability (from one development team to another). Measuring and monitoring maintainability is a must for mission-critical applications where change is driven by tight time-to-market schedules and where it is important for IT to remain responsive to business-driven changes. It is also essential to keep maintenance costs under control.
- Size
- While not a quality attribute per se, the sizing of source code is a software characteristic that obviously impacts maintainability. Combined with the above quality characteristics, software sizing can be used to assess the amount of work produced and to be done by teams, as well as their productivity through correlation with time-sheet data and other SDLC-related metrics.
Software functional quality is defined as conformance to explicitly stated functional requirements, identified for example using Voice of the Customer analysis (part of the Design for Six Sigma toolkit and/or documented through use cases), and the level of satisfaction experienced by end-users. The latter is referred to as usability and is concerned with how intuitive and responsive the user interface is, how easily simple and complex operations can be performed, and how useful error messages are. Typically, software testing practices and tools ensure that a piece of software behaves in compliance with the original design, planned user experience and desired testability, i.e. a piece of software's disposition to support acceptance criteria.
The dual structural/functional dimension of software quality is consistent with the model proposed in Steve McConnell's Code Complete, which divides software characteristics into two pieces: internal and external quality characteristics. External quality characteristics are those parts of a product that face its users, whereas internal quality characteristics are those that do not.
Alternative approaches
One of the challenges in defining quality is that "everyone feels they understand it" and other definitions of software quality can be based on an expansion of the various descriptions of quality concepts used in business.
Dr. Tom DeMarco has proposed that "a product's quality is a function of how much it changes the world for the better." This can be interpreted as meaning that functional quality and user satisfaction are more important than structural quality in determining software quality.
Another definition, coined by Gerald Weinberg in Quality Software Management: Systems Thinking, is "Quality is value to some person." This definition stresses that quality is inherently subjective: different people will experience the quality of the same software differently. One strength of this definition is the questions it invites software teams to consider, such as "Who are the people we want to value our software?" and "What will be valuable to them?".
Measurement
Although the concepts presented in this section are applicable to both structural and functional software quality, measurement of the latter is essentially performed through testing [see main article: Software testing].
Introduction
Software quality measurement is about quantifying to what extent a system or software possesses desirable characteristics. This can be performed through qualitative or quantitative means, or a mix of both. In both cases, for each desirable characteristic there is a set of measurable attributes whose existence in a piece of software or system tends to be correlated and associated with this characteristic. For example, an attribute associated with portability is the number of target-dependent statements in a program. More precisely, using the Quality Function Deployment approach, these measurable attributes are the "hows" that need to be enforced to enable the "whats" in the software quality definition above.
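To make this concrete, below is a minimal sketch of how such a measurable attribute could be collected automatically: it counts lines that reference platform-specific APIs as a crude proxy for the "target-dependent statements" mentioned above. The marker list is invented for illustration and is not an established standard.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

// Hypothetical portability scan: count "target-dependent" lines, i.e. lines
// referencing platform-specific APIs. The marker list is illustrative only.
public class PortabilityScan {
    private static final List<String> TARGET_DEPENDENT_MARKERS = List.of(
        "Runtime.getRuntime().exec", // shells out to the host OS
        "System.getenv",             // environment-specific configuration
        "sun.misc.",                 // JDK-internal, non-portable API
        "C:\\\\"                     // hard-coded Windows path
    );

    public static void main(String[] args) throws IOException {
        List<String> lines = Files.readAllLines(Path.of(args[0]));
        long dependent = lines.stream()
            .filter(l -> TARGET_DEPENDENT_MARKERS.stream().anyMatch(l::contains))
            .count();
        // Report the raw count and its share of all lines as the attribute value.
        System.out.printf("%d of %d lines are target-dependent (%.1f%%)%n",
            dependent, lines.size(), 100.0 * dependent / Math.max(1, lines.size()));
    }
}
```

A real static analyzer would parse the code rather than match strings, but the principle is the same: the attribute is counted, then correlated with the characteristic (here, portability) that it supports.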
The structure, classification and terminology of attributes and metrics applicable to software quality management have been derived or extracted from ISO 9126-3 and the subsequent ISO/IEC 25000:2005 quality model. The main focus is on internal structural quality. Subcategories have been created to handle specific areas like business application architecture and technical characteristics such as data access and manipulation or the notion of transactions.
The dependence tree between software quality characteristics and their measurable attributes is represented in the diagram on the right, where each of the five characteristics that matter for the user or owner of the business system depends on measurable attributes:
- Application Architecture Practices
- Coding Practices
- Application Complexity
- Documentation
- Portability
- Technical and Functional Volume
The correlation between programming errors and production defects reveals that basic code errors account for 92% of the total errors in the source code. These code-level issues eventually account for only 10% of the defects in production. Bad software engineering practices at the architecture level account for only 8% of total defects, but consume over half the effort spent on fixing problems, and lead to 90% of the serious reliability, security, and efficiency issues in production.
Code-based analysis
Many of the existing software measures count structural elements of the application that result from parsing the source code for individual instructions (Park, 1992), tokens (Halstead, 1977), control structures (McCabe, 1976), and objects (Chidamber & Kemerer, 1994).
Software quality measurement is about quantifying to what extent a system or software rates along these dimensions. The analysis can be performed using a qualitative or quantitative approach, or a mix of both, to provide an aggregate view [using for example weighted average(s) that reflect the relative importance of the factors being measured].
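As an illustration of the aggregation just described, here is a minimal sketch of a weighted-average scoring scheme. The characteristic names follow CISQ; the scores and weights are invented for the example and do not come from any standardized scheme.

```java
import java.util.Map;

// Minimal sketch of a weighted aggregate quality score. Scores are on a
// hypothetical 0-100 scale; weights reflect (invented) business priorities.
public class AggregateQuality {
    static double aggregate(Map<String, Double> scores, Map<String, Double> weights) {
        double weightedSum = 0.0, totalWeight = 0.0;
        for (Map.Entry<String, Double> e : scores.entrySet()) {
            double w = weights.getOrDefault(e.getKey(), 0.0);
            weightedSum += w * e.getValue();
            totalWeight += w;
        }
        return totalWeight == 0 ? 0 : weightedSum / totalWeight;
    }

    public static void main(String[] args) {
        Map<String, Double> scores = Map.of(
            "Reliability", 82.0, "Efficiency", 74.0,
            "Security", 91.0, "Maintainability", 65.0);
        // E.g. a security-sensitive system weights Security most heavily.
        Map<String, Double> weights = Map.of(
            "Reliability", 0.3, "Efficiency", 0.1,
            "Security", 0.4, "Maintainability", 0.2);
        System.out.printf("Aggregate quality score: %.1f%n", aggregate(scores, weights));
    }
}
```

Note that such a linear aggregate is exactly what the critical programming errors discussed next can invalidate: a single severe defect can make a system unusable however good its weighted score.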
This view of software quality on a linear continuum has to be supplemented by the identification of discrete critical programming errors. These vulnerabilities may not fail a test case, but they are the result of bad practices that under specific circumstances can lead to catastrophic outages, performance degradations, security breaches, corrupted data, and a myriad of other problems (Nygard, 2007) that make a given system de facto unsuitable for use regardless of its rating based on aggregated measurements. A well-known example of vulnerability is the Common Weakness Enumeration, a repository of vulnerabilities in source code that make applications exposed to security breaches.
The measurement of critical application characteristics involves measuring structural attributes of the application's architecture, coding, and in-line documentation, as displayed in the picture above. Thus, each characteristic is affected by attributes at numerous levels of abstraction in the application, and all of them have to be included in calculating the characteristic's measure if it is to be a valuable predictor of quality outcomes that affect the business. The layered approach to calculating characteristic measures displayed in the figure above was first proposed by Boehm and his colleagues at TRW (Boehm, 1978) and is the approach taken in the ISO 9126 and 25000 series standards. These attributes can be measured from the parsed results of a static analysis of the application source code. Even dynamic characteristics of applications such as reliability and performance efficiency have their causal roots in the static structure of the application.
Structural quality analysis and measurement are performed through the analysis of the source code, the architecture, the software framework, and the database schema in relation to principles and standards that together define the conceptual and logical architecture of a system. This is distinct from the basic, local, component-level code analysis typically performed by development tools, which is mostly concerned with implementation considerations and is crucial during debugging and testing activities.
Reliability
The root causes of poor reliability are found in a combination of non-compliance with good architectural and coding practices. This non-compliance can be detected by measuring the static quality attributes of an application. Assessing the static attributes underlying an application's reliability provides an estimate of the level of business risk and the likelihood of potential application failures and defects the application will experience when placed in operation.
Assessing reliability requires checking software engineering best practices and technical attributes at both the architecture and coding levels.
In addition, depending on the application architecture and the third-party components used (such as external libraries or frameworks), custom checks should be defined along the lines drawn by these best practices to ensure a better assessment of the reliability of the delivered software.
Efficiency
As with reliability, the causes of performance inefficiency are often found in violations of good architectural and coding practices, which can be detected by measuring the static quality attributes of an application. These static attributes predict potential operational performance bottlenecks and future scalability problems, especially for applications requiring high execution speed for handling complex algorithms or huge volumes of data.
Assessing performance efficiency requires checking at least the following software engineering best practices and technical attributes:
- Application Architecture Practices
- An appropriate interaction with expensive and/or remote resources
- Data access performance and data management
- Memory, network, and disk space management
- The Encoding Practice
- Compliance with Object Oriented Programming and Structured Programming practices (as appropriate)
- Compliance with SQL programming best practices
Security
Most security vulnerabilities result from poor coding and architectural practices such as SQL injection or cross-site scripting. These are well documented in lists maintained by CWE and by the SEI/Computer Emergency Response Team (CERT) at Carnegie Mellon University.
Assessing security requires at least checking the following software engineering best practices and technical attributes:
- Application Architecture Practices
- Multi-layer design compliance
- Best security practices (input validation, SQL injection, cross-site scripting, etc.; see the sketch after this list)
- Programming Practices (code level)
- Error & exception handling
- Best security practices (access to system functions, access control to programs)
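To illustrate the input validation and SQL injection items above, here is a sketch of the classic SQL injection weakness and its remediation using the standard JDBC API. The table and column names are hypothetical.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// Table and column names ("accounts", "owner", ...) are hypothetical.
public class AccountLookup {

    // UNSAFE: concatenating user input into SQL lets an attacker inject
    // clauses, e.g. userName = "x' OR '1'='1" returns every row.
    ResultSet findUnsafe(Connection conn, String userName) throws SQLException {
        String sql = "SELECT id, balance FROM accounts WHERE owner = '" + userName + "'";
        return conn.createStatement().executeQuery(sql);
    }

    // SAFER: a parameterized query keeps user input as data, never as SQL code.
    ResultSet findSafe(Connection conn, String userName) throws SQLException {
        PreparedStatement ps = conn.prepareStatement(
            "SELECT id, balance FROM accounts WHERE owner = ?");
        ps.setString(1, userName);
        return ps.executeQuery();
    }
}
```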
Maintainability
Maintainability includes concepts of modularity, understandability, changeability, testability, reusability, and transferability from one development team to another. These do not take the form of critical issues at the code level. Rather, poor maintainability is typically the result of thousands of minor violations of best practices in documentation, complexity-avoidance strategy, and basic programming practices that make the difference between clean and easy-to-read code and unorganized and difficult-to-read code.
Assessing maintainability requires checking the software engineering best practices and technical attributes just mentioned, from documentation through complexity avoidance to basic programming practices.
Maintainability is closely related to Ward Cunningham's concept of technical debt, which is an expression of the costs resulting from a lack of maintainability. Reasons why maintainability is low can be classified as reckless vs. prudent and deliberate vs. inadvertent, and often have their origin in developers' inability, lack of time and goals, carelessness, and discrepancies between the cost of creating documentation and maintainable source code and the benefits they bring.
Size
Measuring software size requires that the whole source code be correctly gathered, including database structure scripts, data manipulation source code, component headers, configuration files, etc. There are essentially two types of software size to be measured: the technical size (footprint) and the functional size:
- There are several software technical sizing methods that have been widely described. The most common technical sizing method counts the number of lines of code (#LOC) per technology, the number of files, functions, classes, tables, etc., from which backfiring Function Points can be computed;
- The most common measure for functional sizing is function point analysis. Function point analysis measures the size of the software deliverable from a user's perspective. Function point measurement is performed on the basis of user requirements and provides an accurate representation of both size for the developer/estimator and value (functionality to be delivered), and reflects the business functionality being delivered to the customer. The method includes the identification and weighting of user-recognizable inputs, outputs and data stores. The size value is then available for use in conjunction with numerous measures to quantify and evaluate software delivery and performance (development cost per function point; delivered defects per function point; function points per staff month).
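As a worked illustration of the identification-and-weighting step just described, the sketch below computes an unadjusted function point count using the five IFPUG component types with their average-complexity weights. The component counts themselves are invented for the example; a real count would also classify each component as low, average or high complexity.

```java
import java.util.Map;

// Unadjusted function point (UFP) sketch using IFPUG component types with
// average-complexity weights. The counts below are invented for illustration.
public class FunctionPointCount {
    private static final Map<String, Integer> AVERAGE_WEIGHTS = Map.of(
        "ExternalInput", 4,
        "ExternalOutput", 5,
        "ExternalInquiry", 4,
        "InternalLogicalFile", 10,
        "ExternalInterfaceFile", 7);

    public static void main(String[] args) {
        Map<String, Integer> counts = Map.of(
            "ExternalInput", 12,
            "ExternalOutput", 8,
            "ExternalInquiry", 5,
            "InternalLogicalFile", 4,
            "ExternalInterfaceFile", 2);
        int ufp = counts.entrySet().stream()
            .mapToInt(e -> e.getValue() * AVERAGE_WEIGHTS.get(e.getKey()))
            .sum();
        // 12*4 + 8*5 + 5*4 + 4*10 + 2*7 = 162 unadjusted function points
        System.out.println("Unadjusted function points: " + ufp);
    }
}
```

The resulting size value can then be combined with effort and defect data to derive the delivery measures mentioned above, such as development cost per function point.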
The function point measurement standard is supported by the International Function Point Users Group (IFPUG). It can be applied early in the software development life cycle and is not dependent on lines of code like the somewhat inaccurate Backfiring method. The method is technology-agnostic and can be used for comparative analysis across organizations and across industries.
Since the inception of Function Point Analysis, several variations have evolved, and the family of functional sizing techniques has broadened to include sizing measures such as COSMIC, NESMA, Use Case Points, FP Lite, Early and Quick FP, and most recently, Story Points. However, Function Points have a history of statistical accuracy, and have been used as a common unit of work measurement in numerous application development management (ADM) or outsourcing engagements, serving as the "currency" in which services are delivered and performance is measured.
One common limitation of the Function Point methodology is that it is a manual process and can therefore be labor-intensive and costly in large-scale initiatives such as application development or outsourcing engagements. This negative aspect of applying the methodology may be what motivated industry IT leaders to form the Consortium for IT Software Quality, focused on introducing a computable metric standard for automating software size measurement, while the IFPUG keeps promoting a manual approach, as most of its activity relies on FP counters' certifications.
CISQ announced the availability of its first metric standard, Automated Function Points, to the CISQ membership at CISQ Technical. These recommendations were developed in OMG's Request for Comment format and submitted to OMG's process for standardization.
Identifying critical programming errors
Critical programming errors are specific architectural and/or coding bad practices that result in the highest, immediate or near-term, business-disruption risk.
These are quite often technology-related and depend heavily on the context, business objectives and risks. Some may consider respect for naming conventions a minor concern, while others, those preparing the ground for a knowledge transfer for example, will consider it absolutely critical.
Critical programming errors can also be classified per CISQ characteristic. Basic examples below:
- Reliability
- Avoid software patterns that will lead to unexpected behavior (uninitialized variables, null pointers, etc.)
- Methods, procedures and functions doing Insert, Update, Delete, Create Table or Select must include error management
- Multi-threaded functions should be made thread-safe; for instance, servlets or struts action classes must not have instance/non-final static fields (see the sketch after this list)
- Efficiency
- Ensure the centralization of client requests (login and data) to reduce network traffic
- Avoid SQL queries that do not use indexes against large tables in a loop
- Security
- Avoid fields in servlet classes that are not final static
- Avoid data access without including error management
- Check the return code and apply the error handling mechanism
- Ensure input validation to avoid cross-site scripting defects or SQL injection flaws
- Maintainability
- Deep inheritance trees and nesting should be avoided to improve comprehensibility
- Modules should be loosely coupled (fanout, intermediaries) to avoid the propagation of modifications
- Implement a homogeneous naming convention
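As an illustration of the thread-safety error listed under Reliability above, the sketch below shows why a servlet must not hold request data in an instance field: servlet containers share a single servlet instance across concurrent requests, so mutable instance state gets silently overwritten. Class and field names are hypothetical, and the older javax.servlet namespace is assumed.

```java
import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class ReportServlet extends HttpServlet {

    // UNSAFE (kept commented out): one shared instance serves all requests,
    // so concurrent requests would overwrite each other's value here.
    // private String currentUser;

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        // SAFE: request-scoped data lives in a local variable on each
        // request-handling thread's stack.
        String currentUser = req.getParameter("user");
        resp.getWriter().println("Report for " + currentUser);
    }
}
```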
Operationalized quality models
Newer proposals for quality models such as Squale and Quamoco propagate a direct integration of the definition of quality attributes and their measurement. By breaking down quality attributes, or even defining additional layers, complex and abstract quality attributes (such as reliability or maintainability) become more manageable and measurable. These quality models have been applied in industrial contexts but have not received widespread adoption.
Further reading
- International Organization for Standardization. Software Engineering - Product Quality - Part 1: Quality Model . ISO, Geneva, Switzerland, 2001. ISO/IEC 9126-1: 2001 (E).
- Diomidis Spinellis. Code Quality: Open Source Perspective . Addison Wesley, Boston, MA, 2006.
- Ho-Won Jung, Seung-Gweon Kim, and Chang-Sin Chung. Measuring the quality of software products: ISO/IEC 9126 Survey. IEEE Software , 21 (5): 10-13, September/October 2004.
- Stephen H. Kan. Metrics and Models in Software Quality Engineering . Addison-Wesley, Boston, MA, second edition, 2002.
- Omar Alshathry, Helge Janicke, "Optimizing Software Quality Assurance," compsacw, pp. 87-92, 2010 IEEE 34th Annual Computer Software and Applications Conference Workshops, 2010.
- Robert L. Glass. Building Quality Software. Prentice Hall, Upper Saddle River, NJ, 1992.
- Roland Petrasch, "The Definition of 'Software Quality': A Practical Approach", ISSRE, 1999
- Capers Jones and Olivier Bonsignour, The Economics of Software Quality, Addison-Wesley Professional, first edition, December 31, 2011, ISBN 978-0-13-258220-9
- Measuring Software Product Quality: ISO 25000 Series and CMMI (SEI website)
- MSQF - A software quality framework based on measurements Cornell University Library
- Stefan Wagner. Quality Control of Software Products. Springer, 2013.
- Girish Suryanarayana, Software Process versus Design Quality: Tug of War?
External links
- Linux: Fewer Bugs Than Rivals, Wired Magazine, 2004
- Automated Function Points (beta 1) by OMG