< reference $Id: Logiscope.ref,v 1.2 2009/04/06 17:38:58 frmath Exp $ > # # Version: April 2009 by Kalimetrix p/o IBM Rational # #------------------------------------------------------------------ # | # The ISO/IEC 9126 international standard defines software | # quality according to the following six characteristics: | # - FUNCTIONALITY | # - RELIABILITY | # - USABILITY | # - EFFICIENCY | # - MAINTAINABILITY | # - PORTABILITY | # | # MAINTAINABILITY is defined as "a set of attributes relative | # to the effort required to make given modifications". | # It is broken down into the following four evaluation criteria: | # - ANALYZABILITY | # - CHANGEABILITY | # - STABILITY | # - TESTABILITY | # | # These criteria and their associated metrics are taken into | # account by the evaluation model described in this file | # | # | # The results provided by the criteria and the final report | # classify the components according to four levels of quality: | # | # A component whose QUALITY IS SATISFACTORY can be classified: | # | # EXCELLENT : When it respects all the rules set in the | # framework of this model | # GOOD : When it does not deviate much from the model. | # FAIR : When a large number of rule violations have been | # found. | # | # A component whose QUALITY IS UNSATISFACTORY is classified: | # | # POOR : When it is not very likely to guarantee an efficient| # maintenance activity. | # | # | # This file is an example of the LOGISCOPE QUALITY MODEL FILE | # that is used to evaluate MAINTAINABILITY of Ada, C, C++, or Java| # program in compliance with this ISO standard. | # | # C++, C, Java and Ada models are presented in this order: | # /C++ | # /C | # /JAVA | # /ADA | # | #------------------------------------------------------------------ /C++ *MD* # # Definition of the metrics used for the evaluation # ------------------------------------------------- # # Function Metrics: # ----------------- # "Average size of statements" : AVGS = (N1 + N2) / MAX(lc_stat,1) { Definition: ----------- This metric (AVerage Size of statements) corresponds to the average number of operands and operators used by each of the function's executable statements. It is calculated as follows: AVGS = (N1+N2) / (lc_stat); where: N1 is the number of operator occurrences, N2 is the number of operand occurrences, lc_stat is the number of statements in the function. Explanation: ------------ This metric is used to detect functions that, on average, have long statements. Statements that handle a large number of textual elements (operators and operands) require a special effort by the reader in order to understand them. This metric is a good indicator of the program's analyzability. Action: ------- Long statements can be broken down into several shorter statements and/or be commented on in greater detail to provide all the explanations necessary to make the reader's task easier. } "Vocabulary frequency" : VOCF = (N1+N2) / MAX((n1+n2),1) { Definition : ------------ VOCabulary Frequency corresponds to the average number of times the vocabulary is used (sum of distinct operands and operators) in a component. This metric is calculated as follows: VOCF = (N1+N2)/ (n1+n2) where : N1 is the number of operator occurrences, N2 is the number of operand occurrences, n1 is the number of distinct operators, n2 is the number of distinct operands. Explanation : ------------- When the value of this metric is high for a given component, this means that it contains very similar, or even duplicated statements. 
Vocabulary frequency is a good indicator of the effort that will be required to maintain the code and of the program's stability. This is because any corrections that are required will have to be made at each place the code has been duplicated. Furthermore, the risk of not carrying the corrections over (or of carrying them over incorrectly) increases with the number of duplications.
Action :
--------
Factorize the duplicated statements into a single function which will be invoked as many times as there were duplications in the original code.
}

"Comments frequency": COMF = (lc_bcom + lc_bcob) / MAX(lc_stat,1)
{
Definition:
-----------
Number of blocks of comments per statement in the function (COMments Frequency). The following formula is used to calculate this metric:
COMF = (lc_bcom + lc_bcob) / (lc_stat)
where:
lc_bcom is the number of blocks of comments in the function,
lc_bcob is the number of blocks of comments before the function,
lc_stat is the number of statements in the function.
Explanation:
------------
Although this metric cannot evaluate the relevance of the comments written in the code, experience has shown that it is a good indicator of the effort made by the developer to describe the function. To make it easier to understand the code when performing maintenance operations, the code must contain a sufficient number of comments. It is also better to distribute the comments evenly at the level of the statements that need commenting, rather than placing them all at the beginning of the function and leaving all the statements without comments.
Action:
-------
When the number of comments with respect to the number of statements is considered insufficient, it will be necessary to add comments to the code at the level of statements handling macro-instructions or performing complex calculations, or before the function's main control structures, for example.
}

"Number of levels": LEVL = ct_nest + 1
{
Definition:
-----------
Maximum number of control structure nestings in the function plus one (number of LEVeLs).
}

"Fan In": FAN_IN = ic_usedp + ic_varpi
{
Definition:
-----------
FAN_IN is the sum of the uses of the function's parameters and of the uses of class attributes.
Explanation:
------------
This metric is an indicator of the function's input flow. The more significant the flow, the harder the function is to understand and the more its behavior is affected by the outside environment.
}

"Fan Out": FAN_OUT = ic_paradd + ic_varpe
{
Definition:
-----------
FAN_OUT is the sum of the uses of parameters passed by reference and of the uses of attributes which are external to the class.
Explanation:
------------
This metric is an indicator of the function's output flow. The more significant the flow, the harder the function is to analyze and the more it represents a critical point in the system.
Action:
-------
A function with a high FAN_OUT value should be split up into several functions.
}

#
# Class Metrics:
# --------------
#
"Fan in of a class": FAN_INclass = cl_data_prot + cl_data_publ + cl_usedp + cl_data_vari
{
Definition:
-----------
The FAN_IN value of a class is the sum of:
- number of attributes in the protected part of the class,
- number of attributes in the public part of the class,
- number of parameters used by the class methods,
- number of class attributes used by the class methods.
Explanation:
------------
FAN_IN is an indicator of the input flow into the class in terms of parameters and attributes.
The greater the FAN_IN value, the greater the chances of modifying the class's behavior. } "Fan out value of a class": FAN_OUTclass = cl_data_prot + cl_data_publ + cl_usedp + cl_data_vare { Definition: ----------- The FAN_OUT value of a class is the sum of: - number of attributes in the protected part of the class, - number of attributes in the public part of the class, - number of parameters used by the class methods, - number of other class attributes used by the class. Explanation: ------------ FAN_OUT is an indicator of the output flow of the class. The greater the FAN_OUT value, the more the class affects the system. } "Class Comments Frequency": COMFclass = (cl_bcom + cl_bcob) / (cl_func_publ + cl_func_prot + cl_data_prot + cl_data_publ) { Definition: ----------- The Class COMments Frequency value is defined by the ratio of comment blocks over the sum of attributes and methods in the public and protected parts of the class. Explanation: ------------ Each attribute or method which is visible from outside should be commented. Comments facilitate class use and reuse. Action: ------- If the Class Comments Frequency value is too small, provide comments for the attributes and methods in the public and protected parts of the class. } "Encapsulation rules": ENCAP = cl_data_publ + cl_data_vare { Definition: ----------- Defined as the sum of: - number of attributes in the public part of the class, - number of other class attributes used by the class. Explanation: ------------ Encapsulation rules do not encourage direct access to class attributes. Access functions and attribute manipulations are preferable. The greater the ENCAP value, the more the class provides accesses to its attributes and/or accesses attributes belonging to other classes. ENCAP can be deemed too high if it does not define accesses to its own attributes, or if it repeatedly accesses attributes belonging to other classes. } "Usability": USABLE = (2 * cl_data_publ) + cl_func_publ { Definition: ----------- The usability of a class is defined as the sum of: - twice the number of public attributes in the class, - the number of public methods in the class. Explanation: ------------ This metric measures the intellectual effort necessary before using the class. The number of public attributes in the class is multiplied by two as an attribute in the public part should be encapsulated by a function handling its read access and by a function handling its write access. The greater the usability the more difficult to use the class is. } "Specializability": SPECIAL = 2 * (cl_data_publ + cl_data_prot) + (cl_func_publ + cl_func_prot) + 10 * in_bases { Definition: ----------- The specializability of a class is defined as the sum of: - twice the number of public and protected attributes in the class, - the number of public and protected methods in the class, - ten times the number of inherited classes. Explanation: ------------ This metric measures the intellectual effort necessary before specializing the class. The number of attributes in the class is multiplied by two as an attribute should be encapsulated by a function handling its read access and by a function handling its write access. The number of inherited classes is multiplied by 10 as each inherited class defines a set of attributes and methods which should also be analyzed. 10 is an estimate of twice the number of attributes plus the number of methods in the public and protected parts of inherited classes. 
The greater the specializability value, the more difficult the class is to specialize.
}

"Rate of class autonomy ": AUTONOM = 100 * ((cl_func_priv + cl_func_prot + cl_func_publ - cl_dep_meth) + (cl_data_prot + cl_data_publ + cl_data_priv - cl_data_class)) / (cl_func_priv + cl_func_prot + cl_func_publ + cl_data_priv + cl_data_prot + cl_data_publ)
{
Definition:
-----------
The rate of class autonomy is defined by the ratio between:
- the sum of autonomous attributes and methods in the class,
- the sum of attributes and methods in the class.
An attribute is deemed autonomous if:
- it is not of a class-type.
A method is deemed autonomous if:
- it does not call any function which is external to the class,
- it does not use any attribute which is external to the class,
- it has no class-type parameter,
- it does not declare a local variable of a class-type.
Explanation:
------------
The rate of autonomy of a class is an indicator of its stability: the less autonomous a class is, the more sensitive it is to global modifications of the system. The rate of autonomy is also an indicator of the usability and reusability of a class: the less autonomous a class is, the more difficult it will be to reuse.
}

"Testability": TESTAB = cl_fprot_path + cl_fpriv_path + cl_fpubl_path + cl_data_vare + cl_func_calle
{
Definition:
-----------
The testability of a class is the sum of:
- the PATH of the class's methods,
- the number of uses of attributes defined outside the class,
- the number of calls to functions defined outside the class.
Explanation:
------------
The higher the testability of a class is, the more difficult it is to test. The testability of a class is based on the sum of the execution paths and on the dependence of the class on the rest of the system.
}

#
# Application Metrics:
# --------------------
#
"Ratio of repeated inheritances in the application": URI_Ratio = (ap_inhg_uri * 100) / ap_inhg_edge
{
Definition:
-----------
The ratio of repeated inheritances in the application is the number of repeated inheritances times 100, divided by the number of inheritance relations. A repeated inheritance consists of inheriting twice from the same class. The number of repeated inheritances is the number of inherited class couples leading to a repeated inheritance.
Example:

                 class A                 "basic class"
                    |
      |-------------|--------------|
      |             |              |
   class B       class C        class D  "subclasses"
      |             |              |
      |-------------|--------------|
                    |
                 class E

In this example, class E repeatedly inherits from class A through the couples (B,C), (C,D) and (B,D). The number of repeated inheritances is thus 3, and:
URI_Ratio = (3 * 100) / 6 = 50.0
Explanation:
------------
Repeated inheritance is a cause of complexity and of naming conflicts when functions are inherited several times. Nevertheless, in certain cases, repeated inheritance can be useful, but it should not be used excessively.
}

"Percentage of non-member functions": NMM_Ratio = ((ap_func - ap_mdf) / ap_func) * 100
{
Definition:
-----------
Percentage of non-member functions. A non-member function is a function which does not belong to a class.
Explanation:
------------
An application containing a high percentage of non-member functions is an application which does not comply with object-oriented principles.
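For illustration, with purely hypothetical figures: an application containing ap_func = 200 functions of which ap_mdf = 180 are member functions gives
NMM_Ratio = ((200 - 180) / 200) * 100 = 10.0
which is exactly the upper limit allowed for NMM_Ratio in the *ME* section of this model.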
} "Average coupling between objects": AVG_CBO = ap_cbo / ap_clas { Definition: ----------- The average coupling between objects is defined as follows: - ap_cbo / ap_clas where: - ap_cbo is the sum for all classes of the calls to functions which are external to the class plus the number of class-type attributes, - ap_clas is the number of classes in the application. Explanation: ------------ An AVG_CBO value which is too high indicates that a great number of classes in the application have a high coupling rate. A high coupling average indicates that the application includes a great number of interconnections. The system's entropy is therefore high and any modification becomes extremely delicate. } "Average of the VG of the application's functions": AVG_VG = ap_vg / ap_func { Definition: ----------- Average of the VG (Cyclomatic Number) of the application's functions. Explanation: ------------ An AVG_VG value which is too high indicates that a great number of the application's functions have a high VG value. A high AVG_VG value indicates that the application contains a great number of functions which are too complex. The system will thus be difficult to maintain due to function complexity. } "Ratio of recursive edges on the call graph": RECU_Ratio = (ap_cg_cycle * 100) / ap_cg_edge { Definition: ----------- RECU_Ratio is the percentage of recursive edges on the application's call graph. Explanation: ------------ Excessive use of recursivness increases the global complexity of the application and may diminish system performance. } *ME* # Definition of metric limits # --------------------------- # # Function scope metrics # # mnemonic format min max lc_stat I 0 20 ct_vg I 0 10 AVGS F 0.00 9.00 VOCF F 1.00 4.00 COMF F 0.20 +oo FAN_IN I 0 4 ic_varpe I 0 2 FAN_OUT I 0 4 ct_bran I 0 0 LEVL I 1 4 ic_parval I 0 2 ic_paradd I 0 2 ct_exit I 0 1 ct_path I 0 60 dc_calls I 0 5 dc_calling I 0 7 dc_lvars I 0 5 ic_param I 0 5 # #relative call graph metrics # cg_levels I 1 12 cg_hiercpx F 1.00 5.00 cg_strucpx F 0.00 3.00 IND_CALLS I 1 30 cg_testab F 0.00 1.00 # # Class scope metrics # cl_wmc I 0 25 in_bases I 0 3 cl_dep_meth I 0 6 FAN_INclass I 0 15 FAN_OUTclass I 0 20 COMFclass F 0.2 +oo ENCAP I 0 5 USABLE I 0 10 SPECIAL I 0 25 AUTONOM F 30.0 100.0 in_noc I 0 2 cl_cobc I 0 12 cu_cdused I 0 4 cu_cdusers I 0 4 TESTAB I 0 100 # # Application scope metrics # ap_mhf F 0.1 0.4 ap_ahf F 0.7 1.0 ap_mif F 0.6 0.8 ap_aif F 0.3 0.6 ap_pof F 0.3 1.0 ap_cof F 0.03 0.18 AVG_VG F 1.0 5.0 NMM_Ratio F 0.0 10.0 ap_inhg_levl I 1 4 URI_Ratio F 0.0 10.0 AVG_CBO F 0.0 10.0 RECU_Ratio F 0.0 5.0 ap_inhg_cpx F 1.0 2.0 ap_cg_levl I 2 9 *MC* # DEFINITION OF THE QUALITY SUBCHARACTERISTICS # --------------------------------------------- function_TESTABILITY = dc_calls + LEVL + ct_path + ic_param { Function TESTABILITY : Characteristics of the source code which are used to assess the effort necessary to validate the modified software } #|--------------------------------------------------| #| class | limit | weight in | #| designation | min max | the report | #|-----------------|--------|--------|--------------| EXCELLENT 4 4 3 GOOD 3 3 2 FAIR 2 2 1 POOR 0 1 0 function_STABILITY = dc_calling + ic_varpe + ct_exit + dc_calls + ic_param { Function STABILITY : Characteristics of the source code which are used to assess the risk of unexpected effects resulting from modifications. 
} #|--------------------------------------------------| #| class | limit | weight in | #| designation | min max | the report | #|-----------------|--------|--------|--------------| EXCELLENT 4 5 3 GOOD 3 3 2 FAIR 2 2 1 POOR 0 1 0 function_CHANGEABILITY = ic_param + dc_lvars + VOCF + ct_bran { Function CHANGEABILITY : Characteristics of the source code which are used to assess the effort necessary to modify the source code, correct errors, or change environment. } #|--------------------------------------------------| #| class | limit | weight in | #| designation | min max | the report | #|-----------------|--------|--------|--------------| EXCELLENT 4 4 3 GOOD 3 3 2 FAIR 2 2 1 POOR 0 1 0 function_ANALYZABILITY = ct_vg + lc_stat + AVGS + COMF { Function ANALYZABILITY : Characteristics of the source code which are used to assess the effort necessary to diagnose the causes of the deficiencies and/or failures, or used to identify the portions to be modified. } #|--------------------------------------------------| #| class | limit | weight in | #| designation | min max | the report | #|-----------------|--------|--------|--------------| EXCELLENT 4 4 3 GOOD 3 3 2 FAIR 2 2 1 POOR 0 1 0 relativeCall_ANALYZABILITY = cg_strucpx + cg_levels { ANALYZABILITY : Characteristics used to assess the effort necessary to diagnose the causes of the deficiencies and/or failures, or used to identify the portions to be modified. } #|--------------------------------------------------| #| categories | min max | final Report | #|-----------------|--------|--------|--------------| EXCELLENT 2 2 2 GOOD 1 1 1 POOR 0 0 0 relativeCall_STABILITY= IND_CALLS + cg_hiercpx { STABILITY : Characteristics used to assess the risk of unexpected effects resulting from modifications. } #|--------------------------------------------------| #| categories | min max | final Report | #|-----------------|--------|--------|--------------| EXCELLENT 2 2 2 GOOD 1 1 1 POOR 0 0 0 relativeCall_TESTABILITY = cg_testab + IND_CALLS { TESTABILITY : Characteristics used to assess the effort necessary to validate the modified software. } #|--------------------------------------------------| #| categories | min max | final Report | #|-----------------|--------|--------|--------------| EXCELLENT 2 2 2 GOOD 1 1 1 POOR 0 0 0 class_ANALYZABILITY = cl_wmc + in_bases + cl_dep_meth + FAN_INclass + FAN_OUTclass + COMFclass { Class ANALYZABILITY: attribute of a class characterizing the effort necessary to diagnose failures or failure causes or to identify the parts of the source code to be modified. The evaluation of this effort is highly correlated with the intrinsic complexity indicators of the class: - intrinsic complexity of the class's methods (cl_wmc), - number of inherited classes (which must also be studied), - class entry and exit flow. } #|--------------------------------------------------| #| category | limit | weight in | #| designation | min max | the report | #|-----------------|--------|--------|--------------| EXCELLENT 6 6 3 GOOD 4 5 2 FAIR 2 3 1 POOR 0 1 0 class_CHANGEABILITY = ENCAP + USABLE + SPECIAL { Class CHANGEABILITY: attribute of a class characterizing the effort necessary to modify the class or remedy defects. The evaluation of this effort depends on the following three class properties: - encapsulation characterizing accesses to the class, - class usability - the specializability which characterizes the aptitude of a class to be specialized to create a new, less abstract class. 
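As a minimal illustration (hypothetical classes, not part of the model), the ENCAP and USABLE definitions above count a public attribute directly, whereas a private attribute hidden behind accessors is not counted by ENCAP:

    // Hypothetical sketch of the counting rules defined above.
    class CounterExposed {
    public:
        int value;                                // public attribute: +1 ENCAP, +2 USABLE
    };

    class CounterEncapsulated {
    public:
        int  getValue() const { return value; }   // public method: +1 USABLE
        void setValue(int v)  { value = v; }      // public method: +1 USABLE
    private:
        int value;                                // private attribute: not counted by ENCAP or USABLE
    };

Both classes offer the same service, but the encapsulated version keeps ENCAP at 0 and localizes future changes to the attribute's representation.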
} #|--------------------------------------------------| #| category | limit | weight in | #| designation | min max | the report | #|-----------------|--------|--------|--------------| EXCELLENT 3 3 3 GOOD 2 2 2 FAIR 1 1 1 POOR 0 0 0 class_STABILITY = AUTONOM + in_noc + cl_cobc + cu_cdusers { Class STABILITY: attribute of a class characterizing the risk of unexpected consequences of modifications. This evaluation is related to the class's level of autonomy and coupling (high autonomy and low coupling level) as well as to the number of classes which depend on the class studied. } #|--------------------------------------------------| #| category | limit | weight in | #| designation | min max | the report | #|-----------------|--------|--------|--------------| EXCELLENT 4 4 3 GOOD 3 3 2 FAIR 2 2 1 POOR 0 1 0 class_TESTABILITY = in_bases + TESTAB + cu_cdused { Class TESTABILITY: attribute of a class characterizing the test effort necessary to validate the studied class. The evaluation of this effort depends on the unit test effort for the class's methods, the number of times this test effort should be repeated (number of inherited classes) and the number of used classes. } #|--------------------------------------------------| #| category | limit | weight in | #| designation | min max | the report | #|-----------------|--------|--------|--------------| EXCELLENT 3 3 3 GOOD 2 2 2 FAIR 1 1 1 POOR 0 0 0 class_USABILITY = USABLE + ENCAP + AUTONOM { Class USABILITY: attribute of a class characterizing the effort necessary to understand the class prior to using it. The evaluation of this effort depends on: - the encapsulation characterizing the respect of encapsulation principle for the class's data, - class use which characterizes the class's aptitude to be used, - the degree of autonomy of the class. } #|--------------------------------------------------| #| category | limit | weight in | #| designation | min max | the report | #|-----------------|--------|--------|--------------| EXCELLENT 3 3 3 GOOD 2 2 2 FAIR 1 1 1 POOR 0 0 0 class_SPECIALIZABILITY = SPECIAL + ENCAP + AUTONOM { Class SPECIALIZABILITY: attribute of a class characterizing the effort necessary to understand the class prior to specialization. The evaluation of this effort depends on: - the aptitude of the class to be specialized to create a new, less abstract class, - the degree of autonomy of the class. } #|--------------------------------------------------| #| category | limit | weight in | #| designation | min max | the report | #|-----------------|--------|--------|--------------| EXCELLENT 3 3 3 GOOD 2 2 2 FAIR 1 1 1 POOR 0 0 0 application_ANALYZABILITY = ap_inhg_levl + AVG_CBO + ap_aif + ap_mif + ap_cof + RECU_Ratio { Application ANALYZABILITY: attribute of software characterizing the effort necessary to diagnose the deficiencies or causes of failure or to identify the parts of the software to be modified. The evaluation of this effort depends on: - the inheritance graph depth, - the mean value of object interconnections, - the method inheritance factor, - the attribute inheritance factor, - the coupling factor, - the recursive paths rate. 
} #|--------------------------------------------------| #| category | limit | weight in | #| designation | min max | the report | #|-----------------|--------|--------|--------------| EXCELLENT 6 6 3 GOOD 4 5 2 FAIR 2 3 1 POOR 0 1 0 application_CHANGEABILITY = ap_inhg_levl + URI_Ratio + NMM_Ratio + ap_pof + ap_mif { Application CHANGEABILITY: attribute of software characterizing the effort necessary to modify the software or remedy defects. The evaluation of this effort depends on: - inheritance graph depth, - rate of repeated inheritances, - the proportion of non-member functions, - the polymorphism factor, - the method inheritance factor. } #|--------------------------------------------------| #| category | limit | weight in | #| designation | min max | the report | #|-----------------|--------|--------|--------------| EXCELLENT 5 5 3 GOOD 3 4 2 FAIR 1 2 1 POOR 0 0 0 application_STABILITY = AVG_CBO + ap_inhg_cpx + ap_mhf + ap_ahf + ap_cof { Application STABILITY: attribute of software characterizing the risk of unexpected consequences of modifications. The evaluation of this effort depends on: - the mean value of interconnections between objects, - hierarchical complexity of the inheritance graph. - method hiding factor - attribute hiding factor - the coupling factor } #|--------------------------------------------------| #| category | limit | weight in | #| designation | min max | the report | #|-----------------|--------|--------|--------------| EXCELLENT 5 5 3 GOOD 3 4 2 FAIR 1 2 1 POOR 0 0 0 application_TESTABILITY = AVG_VG + NMM_Ratio + ap_mhf + ap_ahf + ap_cg_levl { Application TESTABILITY: attribute of software characterizing the effort necessary to validate modified software. The evaluation of this effort depends on: - the mean value of the functions' cyclomatic number, - the proportion of non-member functions - the method hiding factor - the number of levels in call graph. } #|--------------------------------------------------| #| category | limit | weight in | #| designation | min max | the report | #|-----------------|--------|--------|--------------| EXCELLENT 4 5 3 GOOD 2 3 2 FAIR 1 1 1 POOR 0 0 0 *BQ* function_MAINTAINABILITY: component = function_ANALYZABILITY + function_CHANGEABILITY + function_STABILITY + function_TESTABILITY { Function MAINTAINABILITY: Characteristics used to assess the effort required to make given modifications. } #|---------------------------------------------------------------| #| | sum of the weights obtained at the | #| category | level of the subcharacteristics | #| designation | | #| | minimum | maximum | #|------------------------|--------------------|-----------------| EXCELLENT 12 12 GOOD 8 11 FAIR 4 7 POOR 0 3 class_MAINTAINABILITY: class = class_ANALYZABILITY + class_CHANGEABILITY + class_STABILITY + class_TESTABILITY { Class MAINTAINABILITY: Characteristics used to assess the effort required to make given modifications. } #|---------------------------------------------------------------| #| | sum of the weights obtained at the | #| category | level of the subcharacteristics | #| designation | | #| | minimum | maximum | #|------------------------|--------------------|-----------------| EXCELLENT 12 12 GOOD 8 11 FAIR 4 7 POOR 0 3 class_REUSABILITY: class = class_USABILITY + class_SPECIALIZABILITY + class_ANALYZABILITY { Class REUSABILITY: Characteristics used to assess the class capability to be reused. 
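For example, a class rated GOOD for usability (weight 2), EXCELLENT for specializability (weight 3) and FAIR for analyzability (weight 1) obtains a sum of 2 + 3 + 1 = 6 and is therefore classified GOOD for reusability according to the table below.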
} #|---------------------------------------------------------------| #| | sum of the weights obtained at the | #| category | level of the subcharacteristics | #| designation | | #| | minimum | maximum | #|------------------------|--------------------|-----------------| EXCELLENT 9 9 GOOD 6 8 FAIR 3 5 POOR 0 2 application_MAINTAINABILITY: application = application_ANALYZABILITY + application_CHANGEABILITY + application_STABILITY + application_TESTABILITY { Application MAINTAINABILITY: Characteristics used to assess the effort required to make given modifications. } #|---------------------------------------------------------------| #| | sum of the weights obtained at the | #| category | level of the subcharacteristics | #| designation | | #| | minimum | maximum | #|------------------------|--------------------|-----------------| EXCELLENT 12 12 GOOD 8 11 FAIR 4 7 POOR 0 3 relativeCall_MAINTAINABILITY: component = relativeCall_ANALYZABILITY + relativeCall_STABILITY + relativeCall_TESTABILITY { Relative call graph MAINTAINABILITY: Characteristics used to assess the effort required to make given modifications. } #|---------------------------------------------------------------| #| | sum of subchar. categories weights | #| categories | minimum | maximum | #|------------------------|--------------------|-----------------| EXCELLENT 6 6 GOOD 4 5 FAIR 1 3 POOR 0 0 #-------o-------o-------o-------o-------o-------o-------o-------o /C *MD* # Definition of the metrics used for the evaluation # --------------------------------------------------- "Program length" : N = N1 + N2 { Definition: ------------ HALSTEAD metric which represents the observed length of the program. } "Vocabulary" : n = n1 + n2 { Definition: ------------ HALSTEAD metric which represents the vocabulary of the program. } "Estimated Length" : CN = LOG2(n1) * n1 + LOG2(n2) * n2 { Definition: ------------ HALSTEAD metric which represents the estimated length of the program. } "Volume" : V = N * LOG2(n) { Definition: ------------ HALSTEAD metric which represents the program volume which corresponds to the minimum of bits necessary to represent the program. } "Difficulty" : D = (n1 * N2) / (2 * n2) { Definition: ------------ HALSTEAD metric which represents the program difficulty. It is the product of the relative level of difficulty due to number of distinct operators and the average times an operand is used. } "Level" : L = 1 / D { Definition: ------------ HALSTEAD metric which represents the program level. } "Mental Effort" : E = V / L { Definition: ------------ HALSTEAD metric which represents the mental effort required to develop and understand a program. } "Intelligent Content" : I = V * L { Definition: ------------ HALSTEAD metric which represents the complexity of the algorithm regardless of the language used. } "Vocabulary frequency" : VOCF = (N1+N2) / MAX((n1+n2),1) { Definition : ------------ VOCabulary Frequency corresponds to the average number of times the vocabulary is used (sum of distinct operands and operators) in a component. This metric is calculated as follows: VOCF = (N1+N2) / (n1+n2) where : N1 is the number of operator occurrences, N2 is the number of operand occurrences, n1 is the number of distinct operators, n2 is the number of distinct operands. Explanation : ------------- When the value of this metric is high for a given component, this means that it contains very similar, or even duplicated statements. 
Vocabulary frequency is a good indicator of the effort that will be required to maintain the code and of the program's stability. This is because any corrections that are required will have to be made at each place the code has been duplicated. Furthermore, the risk of not carrying the corrections over (or of carrying them over incorrectly) increases with the number of duplications.
Action :
--------
Factorize the duplicated statements into a single function which will be invoked as many times as there were duplications in the original code.
}

"Number of nestings" : NEST = LEVL - 1
{
Definition :
------------
Maximum number of control structure nested levels in a function (number of NESTed levels). The following formula is used to calculate this metric:
NEST = LEVL - 1
Explanation :
-------------
Source code becomes more and more difficult to read as the number of nested control structures increases. Indeed, in order to follow (visually in the code) program execution, it is necessary to memorize the various starts and ends of the control structures nested in each other. This metric is also a good indicator of the effort required to test the function, since its value corresponds to the number of conditions that must be accumulated in order to execute the most highly nested statements.
Action :
--------
The number of control structure nestings can be decreased by creating a new subroutine with the most highly nested statements.
}

"Number of macro-instructions" : MAC = MACP + MACC
{
Definition :
------------
Total number of times macro-instructions are used in the function. This metric is calculated with the following formula:
MAC = MACP + MACC
where:
MACP is the number of macro-instructions with parameters,
MACC is the number of macro-instructions without parameters.
Explanation :
-------------
This metric is used to detect intensive use of macro-instructions in functions, which is detrimental to program readability and stability. This is because the reader has to skip back and forth between the code he is reading and the places where these macro-instructions are declared. If this part of the code is not available to the maintenance engineer, there is a risk he will interpret the macro-instructions incorrectly and unintentionally introduce new errors into the code if it has to be modified. Furthermore, although the implementation of an algorithm in a macro-instruction rather than in a function may be justified if there are severe performance constraints, this style of writing can generate a very large volume of expanded code (and therefore of the program). It is therefore worthwhile checking that macro-instructions are used reasonably.
Action :
--------
Limit the use of macro-instructions to the strict minimum. For example, it is better to create a function than a macro-instruction if the time constraints are not really important. In any event, it is better to indicate at the beginning of the function the list of macro-instructions it uses and the name of the INCLUDE file in which they are declared.
}

"Average size of statements" : AVGS = (N1+N2) / MAX(STMT,1)
{
Definition :
------------
This metric (AVerage Size of statements) corresponds to the average number of operands and operators used by each of the function's executable statements. It is calculated as follows:
AVGS = (N1+N2) / STMT
where :
N1 is the number of operator occurrences,
N2 is the number of operand occurrences,
STMT is the number of executable statements in the function.
Explanation :
-------------
This metric is used to detect components that, on average, have long statements. Statements that handle a large number of textual elements (operators and operands) require a special effort by the reader in order to understand them. This metric is a good indicator of the program's analyzability.
Action :
--------
Long statements can be broken down into several shorter statements and/or be commented on in greater detail to provide all the explanations necessary to make the reader's task easier.
}

"Comments frequency" : COMF = (BCOM+BCOB) / MAX(STMT,1)
{
Definition :
------------
Number of blocks of comments per statement in the function (COMments Frequency). The following formula is used to calculate this metric:
COMF = (BCOM+BCOB) / STMT
where :
BCOM is the number of blocks of comments in the function,
BCOB is the number of blocks of comments before the function,
STMT is the number of executable statements in the function.
Explanation :
-------------
Although this metric cannot evaluate the relevance of the comments written in the code, experience has shown that it is a good indicator of the effort made by the developer to describe the component. To make it easier to understand the code when performing maintenance operations, the code must contain a sufficient number of comments. It is also better to distribute the comments evenly at the level of the statements that need commenting, rather than placing them all at the beginning of the function and leaving all the statements without comments.
Action :
--------
When the number of comments with respect to the number of statements is considered insufficient, it will be necessary to add comments to the code at the level of statements handling macro-instructions or performing complex calculations, or before the function's main control structures, for example.
}

*ME*
# Definition of metric limits
# ---------------------------
# Function scope metrics
STMT           I       0      50
DRCT_CALLS     I       0      7
VG             I       0      10
GOTO           I       0      0
RETU           I       0      1
LVAR           I       0      5
LEVL           I       0      4
COMF           F       0.2    +oo
VOCF           F       0.00   4.00
AVGS           F       0.00   9.00
NBCALLING      I       0      5
PARA           I       0      5
PATH           I       0      80
# relative call graph metrics
LEVELS         I       1      12
HIER_CPX       F       1.00   5.00
STRU_CPX       F       0.00   3.00
IND_CALLS      I       1      30
TESTBTY        F       0.00   1.00
# application metrics
ap_cg_cycle    I       0      +oo
ap_cg_edge     I       0      +oo
ap_cg_leaf     I       0      +oo
ap_cg_levl     I       0      +oo
ap_cg_maxdeg   I       0      +oo
ap_cg_maxin    I       0      +oo
ap_cg_maxout   I       0      +oo
ap_cg_node     I       0      +oo
ap_cg_root     I       0      +oo
ap_func        I       0      +oo
ap_sline       I       0      +oo
ap_vg          I       0      +oo

*MC*
# DEFINITION OF THE QUALITY SUBCHARACTERISTICS
# ---------------------------------------------
function_TESTABILITY = DRCT_CALLS + LEVL + PATH + PARA
{
TESTABILITY : Characteristics of the source code which are used to assess the effort necessary to validate the modified software
}
#|--------------------------------------------------|
#|      class      |      limit      |  weight in   |
#|   designation   |   min    max    |  the report  |
#|-----------------|--------|--------|--------------|
   EXCELLENT            4        4          3
   GOOD                 3        3          2
   FAIR                 2        2          1
   POOR                 0        1          0

function_STABILITY = NBCALLING + RETU + DRCT_CALLS + PARA
{
STABILITY : Characteristics of the source code which are used to assess the risk of unexpected effects resulting from modifications.
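As a minimal illustration (hypothetical code, not part of the model): RETU, one of the metrics summed above, is limited to 1 in the *ME* section, and a function with several exit points can usually be reworked to a single return, which makes later modifications safer:

    /* Hypothetical sketch: two exit points (RETU = 2). */
    int find_index(const int *t, int n, int key)
    {
        int i;
        for (i = 0; i < n; i++) {
            if (t[i] == key)
                return i;          /* first exit */
        }
        return -1;                 /* second exit */
    }

    /* Reworked with a single exit point (RETU = 1). */
    int find_index_single_exit(const int *t, int n, int key)
    {
        int i;
        int result = -1;
        for (i = 0; i < n && result == -1; i++) {
            if (t[i] == key)
                result = i;
        }
        return result;
    }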
} #|--------------------------------------------------| #| class | limit | weight in | #| designation | min max | the report | #|-----------------|--------|--------|--------------| EXCELLENT 4 4 3 GOOD 3 3 2 FAIR 2 2 1 POOR 0 1 0 function_CHANGEABILITY = PARA + LVAR + VOCF + GOTO { CHANGEABILITY : Characteristics of the source code which are used to assess the effort necessary to modify the source code, correct errors, or change environment. } #|--------------------------------------------------| #| class | limit | weight in | #| designation | min max | the report | #|-----------------|--------|--------|--------------| EXCELLENT 4 4 3 GOOD 3 3 2 FAIR 2 2 1 POOR 0 1 0 function_ANALYZABILITY = VG + STMT + AVGS + COMF { ANALYZABILITY : Characteristics of the source code which are used to assess the effort necessary to diagnose the causes of the deficiencies and/or failures, or used to identify the portions to be modified. } #|--------------------------------------------------| #| class | limit | weight in | #| designation | min max | the report | #|-----------------|--------|--------|--------------| EXCELLENT 4 4 3 GOOD 3 3 2 FAIR 2 2 1 POOR 0 1 0 relativeCall_ANALYZABILITY = STRU_CPX + LEVELS { ANALYZABILITY : Characteristics used to assess the effort necessary to diagnose the causes of the deficiencies and/or failures, or used to identify the portions to be modified. } #|--------------------------------------------------| #| categories | min max | final Report | #|-----------------|--------|--------|--------------| EXCELLENT 2 2 2 GOOD 1 1 1 POOR 0 0 0 relativeCall_STABILITY= IND_CALLS + HIER_CPX { STABILITY : Characteristics used to assess the risk of unexpected effects resulting from modifications. } #|--------------------------------------------------| #| categories | min max | final Report | #|-----------------|--------|--------|--------------| EXCELLENT 2 2 2 GOOD 1 1 1 POOR 0 0 0 relativeCall_TESTABILITY = TESTBTY + IND_CALLS { TESTABILITY : Characteristics used to assess the effort necessary to validate the modified software. } #|--------------------------------------------------| #| categories | min max | final Report | #|-----------------|--------|--------|--------------| EXCELLENT 2 2 2 GOOD 1 1 1 POOR 0 0 0 *BQ* function_MAINTAINABILITY: component = function_ANALYZABILITY + function_CHANGEABILITY + function_STABILITY + function_TESTABILITY { Function MAINTAINABILITY: Characteristics used to assess the effort required to make given modifications. } #|---------------------------------------------------------------| #| | sum of the weights obtained at the | #| category | level of the subcharacteristics | #| designation | | #| | minimum | maximum | #|------------------------|--------------------|-----------------| EXCELLENT 12 12 GOOD 8 11 FAIR 4 7 POOR 0 3 relativeCall_MAINTAINABILITY: component = relativeCall_ANALYZABILITY + relativeCall_STABILITY + relativeCall_TESTABILITY { MAINTAINABILITY: Characteristics used to assess the effort required to make given modifications. } #|---------------------------------------------------------------| #| | sum of subchar. 
categories weights | #| categories | minimum | maximum | #|------------------------|--------------------|-----------------| EXCELLENT 6 6 GOOD 4 5 FAIR 1 3 POOR 0 0 #-------o-------o-------o-------o-------o-------o-------o-------o /JAVA *MD* # # Definition of the metrics used for the evaluation # ------------------------------------------------- # # Function Metrics: # ----------------- # "Comments frequency": COMLF = (lc_comm) / MAX(lc_stat,1) { Definition: ----------- Number of lines of comments per statement in the function (COMments Line Frequency). The following formula is used to calculate this metric: COMLF = (lc_comm) / (lc_stat) where: lc_comm is the number of lines of comments in the function, lc_stat is the number of statements in the function. Explanation: ------------ Although this metric cannot evaluate the relevance of the comments written in the code, experience has shown that it is a good indicator of the effort made by the developer to describe the function. To make it easier to understand the code when performing maintenance operations, the code must contain a sufficient number of comments. It is also better to distribute the comments evenly at the level of the statements that need commenting, rather than placing them all at the beginning of the function and leaving all the statements without comments. Action: ------- When the number of comments with respect to the number of statements is considered insufficient, it will be necessary to add comments to the code at the level of statements handling macro-instructions or performing complex calculations, or before the function's main control structures, for example. } "Number of levels": LEVL = ct_nest + 1 { Definition: ----------- Maximum number of control structure nestings in the function plus one (number of LEVeLs). } # # Class Metrics: # -------------- # "Class Comments Frequency": COMLFclass = (cl_comm) / (cl_func_publ + cl_func_prot + cl_data_prot + cl_data_publ) { Definition: ----------- The Class COMments Lines Frequency value is defined by the ratio of comment lines over the sum of attributes and methods in the public and protected parts of the class. Explanation: ------------ Each attribute or method which is visible from outside should be commented. Comments facilitate class use and reuse. Action: ------- If the Class Comments Frequency value is too small, provide comments for the attributes and methods in the public and protected parts of the class. } "Usability": USABLE = (2 * cl_data_publ) + cl_func_publ { Definition: ----------- The usability of a class is defined as the sum of: - twice the number of public attributes in the class, - the number of public methods in the class. Explanation: ------------ This metric measures the intellectual effort necessary before using the class. The number of public attributes in the class is multiplied by two as an attribute in the public part should be encapsulated by a function handling its read access and by a function handling its write access. The greater the usability the more difficult to use the class is. } "Specializability": SPECIAL = 2 * (cl_data_publ + cl_data_prot) + (cl_func_publ + cl_func_prot) + 10 * in_inherits { Definition: ----------- The specializability of a class is defined as the sum of: - twice the number of public and protected attributes in the class, - the number of public and protected methods in the class, - ten times the number of inherited classes. 
Explanation:
------------
This metric measures the intellectual effort necessary before specializing the class. The number of attributes in the class is multiplied by two as an attribute should be encapsulated by a function handling its read access and by a function handling its write access. The number of inherited classes is multiplied by 10 as each inherited class defines a set of attributes and methods which should also be analyzed; 10 is an estimate of twice the number of attributes plus the number of methods in the public and protected parts of inherited classes. The greater the specializability value, the more difficult the class is to specialize.
}

"Testability": TESTAB = cl_fprot_path + cl_fpriv_path + cl_fpubl_path
{
Definition:
-----------
The testability of a class is the sum of the PATH values of the class's methods (protected, private and public).
Explanation:
------------
The higher the testability of a class is, the more difficult it is to test. The testability of a class is based on the sum of the execution paths of its methods.
}

#
# Application Metrics:
# --------------------
#
"Ratio of repeated inheritances in the application": URI_Ratio = (ap_inhg_uri * 100) / ap_inhg_edge
{
Definition:
-----------
The ratio of repeated inheritances in the application is the number of repeated inheritances times 100, divided by the number of inheritance relations. A repeated inheritance consists of inheriting twice from the same class. The number of repeated inheritances is the number of inherited class couples leading to a repeated inheritance.
Example:

                 class A                 "basic class"
                    |
      |-------------|--------------|
      |             |              |
   class B       class C        class D  "subclasses"
      |             |              |
      |-------------|--------------|
                    |
                 class E

In this example, class E repeatedly inherits from class A through the couples (B,C), (C,D) and (B,D). The number of repeated inheritances is thus 3, and:
URI_Ratio = (3 * 100) / 6 = 50.0
Explanation:
------------
Repeated inheritance is a cause of complexity and of naming conflicts when functions are inherited several times. Nevertheless, in certain cases, repeated inheritance can be useful, but it should not be used excessively.
}

"Average of the VG of the application's functions": AVG_VG = ap_vg / (ap_func - ap_interf_func)
{
Definition:
-----------
Average of the VG (Cyclomatic Number) of the application's functions (interface functions, counted by ap_interf_func, are excluded).
Explanation:
------------
An AVG_VG value which is too high indicates that a great number of the application's functions have a high VG value, i.e. that the application contains a great number of functions which are too complex. The system will thus be difficult to maintain due to function complexity.
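For illustration, with purely hypothetical figures: ap_vg = 300, ap_func = 80 and ap_interf_func = 20 give
AVG_VG = 300 / (80 - 20) = 5.0
which is exactly the upper limit allowed for AVG_VG in the *ME* section of this model.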
} *ME* # Definition of metric limits # --------------------------- # # Function scope metrics # # mnemonic format min max lc_stat I 0 20 ct_vg I 0 10 COMLF F 0.50 +oo LEVL I 1 4 ct_bran I 0 1 ct_exit I 0 1 ct_path I 0 60 ic_param I 0 5 # # Class scope metrics # in_bases I 0 3 COMLFclass F 0.2 +oo USABLE I 0 10 SPECIAL I 0 25 in_noc I 0 2 TESTAB I 0 100 cu_cdused I 0 4 cu_cdusers I 0 4 # # Application scope metrics # AVG_VG F 1.0 5.0 ap_inhg_levl I 1 4 URI_Ratio F 0.0 10.0 ap_inhg_cpx F 1.0 2.0 *MC* # DEFINITION OF THE QUALITY SUBCHARACTERISTICS # --------------------------------------------- function_TESTABILITY = LEVL + ct_path + ic_param + ct_exit { Function TESTABILITY : Characteristics of the source code which are used to assess the effort necessary to validate the modified software } #|--------------------------------------------------| #| class | limit | weight in | #| designation | min max | the report | #|-----------------|--------|--------|--------------| EXCELLENT 4 4 3 GOOD 3 3 2 FAIR 2 2 1 POOR 0 1 0 function_ANALYZABILITY = ct_vg + lc_stat + COMLF + ct_bran { Function ANALYZABILITY : Characteristics of the source code which are used to assess the effort necessary to diagnose the causes of the deficiencies and/or failures, or used to identify the portions to be modified. } #|--------------------------------------------------| #| class | limit | weight in | #| designation | min max | the report | #|-----------------|--------|--------|--------------| EXCELLENT 4 4 3 GOOD 3 3 2 FAIR 2 2 1 POOR 0 1 0 class_ANALYZABILITY = cl_wmc + in_bases + COMLFclass { Class ANALYZABILITY: attribute of a class characterizing the effort necessary to diagnose failures or failure causes or to identify the parts of the source code to be modified. The evaluation of this effort is highly correlated with the intrinsic complexity indicators of the class: - intrinsic complexity of the class's methods (cl_wmc), - number of inherited classes (which must also be studied), - class entry and exit flow. } #|--------------------------------------------------| #| category | limit | weight in | #| designation | min max | the report | #|-----------------|--------|--------|--------------| EXCELLENT 3 3 3 GOOD 2 2 2 FAIR 1 1 1 POOR 0 0 0 class_CHANGEABILITY = USABLE + SPECIAL { Class CHANGEABILITY: attribute of a class characterizing the effort necessary to modify the class or remedy defects. The evaluation of this effort depends on the following three class properties: - encapsulation characterizing accesses to the class, - class usability - the specializability which characterizes the aptitude of a class to be specialized to create a new, less abstract class. } #|--------------------------------------------------| #| category | limit | weight in | #| designation | min max | the report | #|-----------------|--------|--------|--------------| EXCELLENT 2 2 2 GOOD 1 1 1 POOR 0 0 0 class_STABILITY = in_noc + cu_cdusers { Class STABILITY: attribute of a class characterizing the risk of unexpected consequences of modifications. This evaluation is related to the number of classes which depend on the class studied (classes which inherit from or use the current class). 
} #|--------------------------------------------------| #| category | limit | weight in | #| designation | min max | the report | #|-----------------|--------|--------|--------------| EXCELLENT 2 2 2 GOOD 1 1 1 POOR 0 0 0 class_TESTABILITY = in_bases + TESTAB + cu_cdused { Class TESTABILITY: attribute of a class characterizing the test effort necessary to validate the studied class. The evaluation of this effort depends on the unit test effort for the class's methods, the number of times this test effort should be repeated (number of inherited classes) and the number of used classes. } #|--------------------------------------------------| #| category | limit | weight in | #| designation | min max | the report | #|-----------------|--------|--------|--------------| EXCELLENT 3 3 3 GOOD 2 2 2 FAIR 1 1 1 POOR 0 0 0 application_CHANGEABILITY = ap_inhg_levl + URI_Ratio { Application CHANGEABILITY: attribute of software characterizing the effort necessary to modify the software or remedy defects. The evaluation of this effort depends on: - inheritance graph depth, - rate of repeated inheritances, } #|--------------------------------------------------| #| category | limit | weight in | #| designation | min max | the report | #|-----------------|--------|--------|--------------| EXCELLENT 2 2 2 GOOD 1 1 1 POOR 0 0 0 application_TESTABILITY = AVG_VG { Application TESTABILITY: attribute of software characterizing the effort necessary to validate modified software. The evaluation of this effort depends on: - the mean value of the functions' cyclomatic number, } #|--------------------------------------------------| #| category | limit | weight in | #| designation | min max | the report | #|-----------------|--------|--------|--------------| EXCELLENT 1 1 1 POOR 0 0 0 *BQ* function_MAINTAINABILITY: component = function_ANALYZABILITY + function_TESTABILITY { Function MAINTAINABILITY: Characteristics used to assess the effort required to make given modifications. } #|---------------------------------------------------------------| #| | sum of the weights obtained at the | #| category | level of the subcharacteristics | #| designation | | #| | minimum | maximum | #|------------------------|--------------------|-----------------| EXCELLENT 6 6 GOOD 4 5 FAIR 2 3 POOR 0 1 class_MAINTAINABILITY: class = class_ANALYZABILITY + class_CHANGEABILITY + class_STABILITY + class_TESTABILITY { Class MAINTAINABILITY: Characteristics used to assess the effort required to make given modifications. } #|---------------------------------------------------------------| #| | sum of the weights obtained at the | #| category | level of the subcharacteristics | #| designation | | #| | minimum | maximum | #|------------------------|--------------------|-----------------| EXCELLENT 10 10 GOOD 7 9 FAIR 3 6 POOR 0 2 application_MAINTAINABILITY: application = application_CHANGEABILITY + application_TESTABILITY { Application MAINTAINABILITY: Characteristics used to assess the effort required to make given modifications. 
}
#|---------------------------------------------------------------|
#|                        |   sum of the weights obtained at the |
#|       category         |   level of the subcharacteristics    |
#|     designation        |                                      |
#|                        |   minimum          |     maximum     |
#|------------------------|--------------------|-----------------|
   EXCELLENT                       3                   3
   GOOD                            2                   2
   FAIR                            1                   1
   POOR                            0                   0

#-------o-------o-------o-------o-------o-------o-------o-------o
/ADA
*MD*
# Definition of the metrics used for the evaluation
# ---------------------------------------------------
"Vocabulary frequency" : VOCF = (N1+N2) / MAX((n1+n2),1)
{
Definition :
------------
VOCabulary Frequency corresponds to the average number of times the vocabulary is used (sum of distinct operands and operators) in a component. This metric is calculated as follows:
VOCF = (N1+N2) / (n1+n2)
where :
N1 is the number of operator occurrences,
N2 is the number of operand occurrences,
n1 is the number of distinct operators,
n2 is the number of distinct operands.
Explanation :
-------------
When the value of this metric is high for a given component, this means that it contains very similar, or even duplicated statements. Vocabulary frequency is a good indicator of the effort that will be required to maintain the code and of the program's stability. This is because any corrections that are required will have to be made at each place the code has been duplicated. Furthermore, the risk of not carrying the corrections over (or of carrying them over incorrectly) increases with the number of duplications.
Action :
--------
Factorize the duplicated statements into a single function which will be invoked as many times as there were duplications in the original code.
}

"Number of levels": LEVL = ct_nest + 1
{
Definition:
-----------
Maximum number of control structure nestings in the function plus one (number of LEVeLs).
}

"Average size of statements" : AVGS = (N1+N2) / MAX(lc_stat,1)
{
Definition :
------------
This metric (AVerage Size of statements) corresponds to the average number of operands and operators used by each of the function's executable statements. It is calculated as follows:
AVGS = (N1+N2) / lc_stat
where :
N1 is the number of operator occurrences,
N2 is the number of operand occurrences,
lc_stat is the number of executable statements in the function.
Explanation :
-------------
This metric is used to detect components that, on average, have long statements. Statements that handle a large number of textual elements (operators and operands) require a special effort by the reader in order to understand them. This metric is a good indicator of the program's analyzability.
Action :
--------
Long statements can be broken down into several shorter statements and/or be commented on in greater detail to provide all the explanations necessary to make the reader's task easier.
}

"Comments frequency" : COMLF = (lc_comm) / MAX(lc_stat,1)
{
Definition :
------------
Number of lines of comments per statement in the function (COMments Line Frequency). The following formula is used to calculate this metric:
COMLF = (lc_comm) / (lc_stat)
where:
lc_comm is the number of lines of comments in the function,
lc_stat is the number of statements in the function.
Explanation :
-------------
Although this metric cannot evaluate the relevance of the comments written in the code, experience has shown that it is a good indicator of the effort made by the developer to describe the component. To make it easier to understand the code when performing maintenance operations, the code must contain a sufficient number of comments.
It is also better to distribute the comments evenly at the level of the statements that need commenting, rather than placing them all at the beginning of the function and leaving all the statements without comments. Action : -------- When the number of comments with respect to the number of statements is considered insufficient, it will be necessary to add comments to the code at the level of statements handling macro-instructions or performing complex calculations, or before the function's main control structures, for example. } *ME* # Definition of metric limits # --------------------------- # Function scope metrics lc_stat I 0 50 ct_vg I 0 10 ct_goto I 0 0 ct_exit I 0 1 dc_vars I 0 5 LEVL I 1 4 COMLF F 0.2 +oo VOCF F 0.00 4.00 AVGS F 0.00 9.00 dc_calling I 0 5 ic_param I 0 5 ct_path I 0 80 #relative call graph metrics cg_levels I 1 12 cg_hiercpx F 1.00 5.00 cg_strucpx F 0.00 3.00 IND_CALLS I 1 30 cg_testab F 0.00 1.00 *MC* # DEFINITION OF THE QUALITY SUBCHARACTERISTICS # --------------------------------------------- function_TESTABILITY = ct_nest + ct_path + ic_param { TESTABILITY : Characteristics of the source code which are used to assess the effort necessary to validate the modified software } #|--------------------------------------------------| #| class | limit | weight in | #| designation | min max | the report | #|-----------------|--------|--------|--------------| EXCELLENT 3 3 3 GOOD 2 2 2 FAIR 1 1 1 POOR 0 0 0 function_STABILITY = dc_calling + ct_exit + ic_param { STABILITY : Characteristics of the source code which are used to assess the risk of unexpected effects resulting from modifications. } #|--------------------------------------------------| #| class | limit | weight in | #| designation | min max | the report | #|-----------------|--------|--------|--------------| EXCELLENT 3 3 3 GOOD 2 2 2 FAIR 1 1 1 POOR 0 0 0 function_CHANGEABILITY = ic_param + dc_vars + VOCF + ct_goto { CHANGEABILITY : Characteristics of the source code which are used to assess the effort necessary to modify the source code, correct errors, or change environment. } #|--------------------------------------------------| #| class | limit | weight in | #| designation | min max | the report | #|-----------------|--------|--------|--------------| EXCELLENT 4 4 3 GOOD 3 3 2 FAIR 2 2 1 POOR 0 1 0 function_ANALYZABILITY = ct_vg + lc_stat + AVGS + COMLF { ANALYZABILITY : Characteristics of the source code which are used to assess the effort necessary to diagnose the causes of the deficiencies and/or failures, or used to identify the portions to be modified. } #|--------------------------------------------------| #| class | limit | weight in | #| designation | min max | the report | #|-----------------|--------|--------|--------------| EXCELLENT 4 4 3 GOOD 3 3 2 FAIR 2 2 1 POOR 0 1 0 relativeCall_ANALYZABILITY = cg_strucpx + cg_levels { ANALYZABILITY : Characteristics used to assess the effort necessary to diagnose the causes of the deficiencies and/or failures, or used to identify the portions to be modified. } #|--------------------------------------------------| #| categories | min max | final Report | #|-----------------|--------|--------|--------------| EXCELLENT 2 2 2 GOOD 1 1 1 POOR 0 0 0 relativeCall_STABILITY= IND_CALLS + cg_hiercpx { STABILITY : Characteristics used to assess the risk of unexpected effects resulting from modifications. 
} #|--------------------------------------------------| #| categories | min max | final Report | #|-----------------|--------|--------|--------------| EXCELLENT 2 2 2 GOOD 1 1 1 POOR 0 0 0 relativeCall_TESTABILITY = cg_testab + IND_CALLS { TESTABILITY : Characteristics used to assess the effort necessary to validate the modified software. } #|--------------------------------------------------| #| categories | min max | final Report | #|-----------------|--------|--------|--------------| EXCELLENT 2 2 2 GOOD 1 1 1 POOR 0 0 0 *BQ* function_MAINTAINABILITY: component = function_ANALYZABILITY + function_CHANGEABILITY + function_STABILITY + function_TESTABILITY { Function MAINTAINABILITY: Characteristics used to assess the effort required to make given modifications. } #|---------------------------------------------------------------| #| | sum of the weights obtained at the | #| category | level of the subcharacteristics | #| designation | | #| | minimum | maximum | #|------------------------|--------------------|-----------------| EXCELLENT 12 12 GOOD 8 11 FAIR 4 7 POOR 0 3 relativeCall_MAINTAINABILITY: component = relativeCall_ANALYZABILITY + relativeCall_STABILITY + relativeCall_TESTABILITY { MAINTAINABILITY: Characteristics used to assess the effort required to make given modifications. } #|---------------------------------------------------------------| #| | sum of subchar. categories weights | #| categories | minimum | maximum | #|------------------------|--------------------|-----------------| EXCELLENT 6 6 GOOD 4 5 FAIR 1 3 POOR 0 0 #-------o-------o-------o-------o-------o-------o-------o-------o