WPCI 2%.B pz W"S^11>bbu"::Dg1:11bbbbbbbbbb11gggbuuuk1Xubuukuuuk111Rb:bbXbb1bb''X'bbbb:X1bXXXX;.;g:=::m:::mmmmm::::::mm:k1mubububububXubububub11111111bbbbbbbbbuXubbkbuXmmmmumububXXXXbububububbmbbbbbb:k:k::=kmmX:uXb'b:b:b:b'bmbbbb:::uXuXuXuXk:k:k:mbbbmbuXkXkXKQmmmm^b:kbbbbmbA@mmbmmbmmmmmmm:b:mmmbbmmmmmmmmmmmmXXmmmmmmmmmmmmmmmmmmcm`m`mm`m:mmmmmm}}}mjjmmmmmmmmmmmmmmm0mm}mmmmmmmmmmmmmmmmmmmmmmm}Mmmmmmmmmmmmmjmmmtmmmmmmmmm`'mmm`mmjmlWmmmmmmmmmmmmmmmmmmmW`mmmmjmM-lvetica#|x`H1`D4PkCQMS PS Jet Plus /800 II QPJPII.PRSPl`D4PkCg2W_qr|xHelveticaCourier@ ,`H1`D4PkCmQrrr r  @C2 KLG@ ,`H1`D4PkCmQrrr r  @C;,>>> >  @C ` X` hp x (#%'H   x|@  3'3'Standard6'6'StandardC6QMS $=R- NtR  ` `  <  <AP IX43E NNt  ` `  <  <AP IX43E Nc (3184) K (3184)  X Recommendation X.290 <OSI CONFORMANCE TESTING METHODOLOGY AND FRAMEWORK <FOR PROTOCOL RECOMMENDATIONS FOR CCITT APPLICATIONS HThe CCITT, considering  ` X H(a)  that Recommendation X.200 defines the Reference Model of Open Systems for CCITT Applications; H(b)  that the objective of OSI will not be completely achieved until systems dedicated to CCITT applications can be tested to determine whether they conform to the relevant OSI protocol Recommendations; H(c)  that standardized test suites should be developed for each OSI protocol Recommendation as a means to:  X HH©X obtain wide acceptance and confidence in conformance test results produced by different testers,%  `   ` 8 HH©X provide confidence in the interoperability of equipments which passed the standardized conformance tests;Ơ#  8`   ` H(d)  the need for defining an international Recommendation to specify the framework and general principles for the specification of conformance test suites and the testing of protocol implementations, Hunanimously declares the view that 1.Hthe general principles, definition of terms and concepts of OSI protocol conformance testing shall be in accordance with Part 1 of this Recommendation; 2.Hthe test methods, test suites, test notation shall be in accordance with Part 2 of this Recommendation. t%CONTENTSă PART 1 GENERAL CONCEPTS 0. Introduction 1. Scope and Field of Application 2. References Section 1: Terminology 3. Definitions 4. Abbreviations Section 2: Overview N*Ԍ 5. The Meaning of Conformance in OSI* 6. Conformance and testing 7. Test Methods 8. Test Suites 9. Relationships between Parts, Concepts and Roles 10. Compliance PART 2 ABSTRACT TEST SUITE SPECIFICATION 0. Introduction 1. Scope and Field of Application 2. References 3. Definitions 4. Abbreviations 5. Compliance Section 1: Requirements on Protocol Specifiers 6. Conformance Requirements in OSI* Recommendations* 7. PICS Proformas Section 2: Requirements on Abstract Test Suite Specifiers 8. Test Suite Production Process 9. Determining Conformance Requirements and PICS 10. Test Suite Structure 11. Generic Test Case Specification 12. Abstract Test Methods 13. Specification of Abstract Test Suites 14. Use of an Abstract Test Suite Specification 15. Test Suite Maintenance Annex A: Options Annex B: Guidance for Protocol Recommendations* writers Annex C: Incomplete Static Conformance Requirements Annex D: Tree and Tabular Combined Notation N*Ԍ Appendix I: Applicability of the Test Methods to OSI* Protocols Appendix II: Index to Definitions of Terms Appendix III: Examples for guidance for PICS proforma Appendix IV: Example of choice of Abstract Test Methods   PART 1: GENERAL CONCEPTS Introduction HThe objective of OSI will not be completely achieved until systems can be tested to determine whether they conform to the relevant "OSI or related CCITT XSeries or TSeries" (hereafter abbreviated to "OSI*") protocol "standard(s) or Recommendation(s)" (hereafter abbreviated to "Recommendation(s)*"). HStandardized test suites should be developed for each OSI* protocol Recommendation, for use by suppliers or implementors in selftesting, by users of OSI products, by the administrations* or other third party testers. This should lead to comparability and wide acceptance of test results produced by different testers, and thereby minimize the need for repeated conformance testing of the same system. HThe standardization of test suites requires international definition and acceptance of a common testing methodology and appropriate testing methods and procedures. It is the purpose of this Recommendation to define the methodology, to provide a framework for specifying conformance test suites and define the procedures to be followed during testing. HConformance testing involves testing both the capabilities and behaviour of an implementation and checking what is observed against the conformance requirements in the relevant Recommendation(s)* and against what the implementor states the implementation's capabilities are. HConformance testing does not include assessment of the performance nor the robustness or reliability of an implementation. It cannot give judgements on the physical realization of the abstract service primitives, how a system is implemented, how it provides any requested service, nor the environment of the protocol implementation. It cannot, except in an indirect way, prove anything about the logical design of the protocol itself. HThe purpose of conformance testing is to increase the probability that different implementations are able to interwork. This is achieved by verifying them by means of a protocol test suite, thereby increasing the confidence that each implementation conforms to the protocol specification. Confidence in conformance to a protocol specification is particularly important when equipment supplied by different vendors is required to interwork. HHowever, it should be borne in mind that the complexity of most protocols makes exhaustive testing impractical on both technical and economic grounds. Also, testing cannot guarantee conformance to a specification since it detects errors rather than their absence. Thus conformance to a test suite alone cannot guarantee interworking. What it does do is give confidence that an implementation has the required capabilities and that its behaviour conforms consistently in representative instances of communication. HIt should be noted that the OSI reference model for CCITT applications (Recommendation X200) states (in section 4.3): H"Only the external behaviour of Open Systems is retained as the GCstandard of behaviour of real Open Systems". HThis means that although aspects of both internal and external behaviour are described in OSI* Recommendations*, only the requirements on external behaviour have to be met by real open systems. Although some of the methods defined in this Recommendation do impose certain constraints on the implementor, for example that there be some means of realizing control and observation at one or more service access N* points, it should be noted that other methods defined herein do not impose such constraints. HHowever, in the case of partial OSI* endsystems which provide OSI* protocols up to a specific layer boundary, it is desirable to test both the external behaviour of the implemented protocol entities and the potential of those entities for supporting correct external behaviour in higher layers. HDetailed investigation of relative benefits, efficiency and constraints of all methods is addressed in various parts of this Recommendation. However, any organization contemplating the use of test methods defined in this Recommendation in a context such as certification should carefully consider the constraints on applicability and the benefits of the different possible test methods. HTesting is voluntary as far as ISO/CCITT is concerned. Requirements for testing in procurement and other external contracts are not a matter for standardization. 1.HScope and field of application 1.1HThis Recommendation specifies a general methodology for testing the conformance to OSI* protocol Recommendation(s)* of products in which the Recommendation(s)* are claimed to be implemented. The methodology also applies to testing conformance to transfer syntax Recommendation(s)* to the extent that can be determined by testing each in combination with a specific OSI* protocol. 1.2HThis Recommendation is structured into two separate parts: HPart 1 identifies the different phases of conformance testing process, these phases being characterized by four major roles. These roles are: Ha)  the specification of abstract test suites for particular OSI* protocols;& Hb)  the derivation of executable test suites and associated testing tools;& Hc)  the role of a client of a test laboratory, having an implementation of OSI* protocols to be tested;& Hd)  the operation of conformance testing, culminating in the production of a conformance test report which gives the results in terms of the Recommendation(s)* and the test suite(s) used.& HAdditionally, this Part provides tutorial material, together with definition of concepts and terms. HPart 2 defines the requirements and guidance for the specification of abstract test suites for OSI* protocols. 1.3HIn both parts of this Recommendation, the scope is limited to include only such information as is necessary to meet the following objectives: Ha)  to achieve an adequate level of confidence in the tests as a guide to conformance;& Hb)  to achieve comparability between the results of the corresponding tests applied in different places at different times;& Hc)  to facilitate communication between the parties responsible for the roles described above.& N*Ԍ 1.4HOne such aspect of this scope involves the framework for development of OSI* test suites. For example: Ha)  how they should relate to the various types of conformance requirement;& Hb)  the types of test to be standardized and the types not needing standardization;& Hc)  the criteria for selecting tests for inclusion in a conformance test suite;& Hd)  the notation to be used for defining tests;& He)  the structure of a test suite.& 1.5HCertification, an administrative procedure which may follow conformance testing, is outside the scope of this Recommendation. Requirements for procurement and contracts are also outside the scope of this Recommendation. 1.6HThe Physical layer and Media Access Control protocols are outside the field of application of this Recommendation. 2.HReferences HRecommendation X.200, Reference Model of Open Systems Interconnection for CCITT Applications. (See also ISO 7498) HRecommendation X.210, Open Systems Interconnection Layer Service Definition Conventions. (See also ISO TR 8509) HRecommendation X.209, Specification of Basic Encoding Rules for Abstract Syntax Notation One (ASN.1). (See also ISO 8825) Section 1: Terminology 3.HDefinitions 3.1HReference model definitions HThis Recommendation is based upon the concepts developed in Reference Model of Open Systems Interconnection for CCITT Applications (CCITT X.200), and makes use of the following terms defined in that Recommendation: Ha)  (N)entity H  b)(N)service Hc)  (N)layer Hd)  (N)protocol He)  (N)serviceaccesspoint Hf)  (N)relay Hg)  (N)protocoldataunit Hh)  (N)protocolcontrolinformation N*Ԍ Hi)  (N)userdata Hj)  real open system Hk)  subnetwork Hl)  applicationentity Hm)  applicationserviceelement Hn)  transfer syntax Ho)  Physical layer Hp)  Data link layer Hq)  Network layer Hr)  Transport layer H  s)Session layer Ht)  Presentation layer Hu)  Application layer Hv)  systemsmanagement Hw)  applicationmanagement Hx)  layermanagement 3.2HTerms defined in other Recommendations HThis Recommendation uses the following terms defined in the OSI Service Conventions (Recommendation X.210): Ha)  serviceuser Hb)  serviceprovider HThis Recommendation uses the following term defined in the ASN.1 Basic Encoding Rules Recommendation (Recommendation X.209): Hc)  encoding 3.3HConformance testing definitions HFor the purpose of this Recommendation the definitions in 3.4 to 3.8 apply. 3.4HBasic terms 3.4.1HImplementation under test (IUT) HThat part of a real open system which is to be studied by testing, which should be an implementation of one or more OSI* protocols in an adjacent user/provider relationship.  N*Ԍ3.4.2HSystem under test (SUT) HThe real open system in which the IUT resides. 3.4.3HDynamic conformance requirements All those requirements (and options) which determine what observable behaviour is permitted by the relevant OSI* Recommendation(s)* in instances of communication. 3.4.4HStatic conformance requirements HConstraints which are placed in OSI* Recommendations* to facilitate interworking by defining the requirements for the capabilities of an implementation. Note Static conformance requirements may be at a broad level, such as the grouping of functional units and options into protocol classes, or at a detailed level, such as the ranges of values that are to be supported for specific parameters or timers. 3.4.5HCapabilities of an IUT HThat set of functions and options in the relevant protocol(s) and, if appropriate, that set of facilities and options of the relevant service definition which are supported by the IUT. 3.4.6HProtocol implementation conformance statement (PICS) HA statement made by the supplier of an OSI* implementation or system, stating the capabilities and options which have been implemented, and any features which have been omitted. 3.4.7HPICS proforma HA document, in the form of a questionnaire, designed by the protocol specifier or conformance test suite specifier, which when completed for an OSI* implementation or system becomes the PICS. 3.4.8HProtocol implementation extra information for testing (PIXIT) HA statement made by a supplier or implementor of an IUT which contains or references all of the information (in addition to that given in the PICS) related to the IUT and its testing environment, which will enable the test laboratory to run the appropriate test suite against the IUT. 3.4.9HPIXIT proforma HA document, in the form of a questionnaire, provided by the test laboratory, which when completed during the preparation for testing becomes a PIXIT. 3.4.10HConforming implementation HAn IUT which is shown to satisfy both static and dynamic conformance requirements, consistent with the capabilities stated in the PICS. 3.4.11HSystem conformance statement HA document summarizing which OSI* Recommendations* are implemented and to which conformance is claimed.  N*Ԍ3.4.12HClient HThe organization that submits a system or implementation for conformance testing. 3.4.13HTest laboratory HAn organization that carries out conformance testing. This can be a third party, a user organization, an administration*, or an identifiable part of the supplier organization. 3.5HTypes of testing 3.5.1HActive testing HThe application of a test suite to a SUT, under controlled conditions, with the intention of observing the consequent actions of the IUT. 3.5.2HPassive testing HThe observation of PDU activity on a link, and checking whether or not the observed behaviour is allowed by the relevant Recommendation(s)*. 3.5.3HMultilayer testing HTesting the behaviour of a multilayer IUT as a whole, rather than testing it layer by layer. 3.5.4HEmbedded testing HTesting the behaviour of a single layer within a multilayer IUT without accessing the layer boundaries for that layer within the IUT. 3.5.5HBasic interconnection testing HLimited testing of an IUT to determine whether or not there is sufficient conformance to the main features of the relevant protocol(s) for interconnection to be possible, without trying to perform thorough testing. 3.5.6HCapability testing HTesting to determine the capabilities of an IUT. Note This involves checking all mandatory capabilities and those optional ones that are stated in the PICS as being supported, but not checking those optional ones which are stated in the PICS as not supported by the IUT. 3.5.7HStatic conformance review HA review of the extent to which the static conformance requirements are met by the IUT, by comparing the static conformance requirements expressed in the relevant Recommendation(s)* with the PICS and the results of any associated capability testing. 3.5.8HBehaviour testing HTesting the extent to which the dynamic conformance requirements are met by the IUT.  N* Ԍ3.5.9HConformance testing HTesting the extent to which an IUT is a conforming implementation. 3.5.10HConformance assessment process HThe complete process of accomplishing all conformance testing activities necessary to enable the conformance of an implementation or a system to one or more OSI* Recommendations* to be assessed. It includes the production of the PICS and PIXIT documents, preparation of the real tester and the SUT, the execution of one or more test suites, the analysis of the results and the production of the appropriate system and protocol conformance test reports. 3.6HTerminology of test suites 3.6.1HAbstract test method HThe description of how an IUT is to be tested, given at an appropriate level of abstraction to make the description independent of any particular implementation of testing tools, but with enough detail to enable tests to be specified for this method. 3.6.2HAbstract testing methodology HAn approach to describing and categorizing abstract test methods. 3.6.3HAbstract test case HA complete and independent specification of the actions required to achieve a specific test purpose, defined at the level of abstraction of a particular abstract test method. It includes a preamble and a postamble to ensure starting and ending in a stable state (i.e., a state which can be maintained almost indefinitely, such as the "idle" state or "data transfer" state) and involves one or more consecutive or concurrent connections. Note 1 The specification should be complete in the sense that it is sufficient to enable a verdict to be assigned unambiguously to each potentially observable outcome (i.e., sequence of test events). Note 2 The specification should be independent in the sense that it should be possible to execute the derived executable test case in isolation from other such test cases (i.e., the specification should always include the possibility of starting and finishing in the "idle" state that is without any existing connections except permanent ones). For some test cases, there may be pre requisites in the sense that execution might require some specific capabilities of the IUT, which should have been confirmed by results of the test cases executed earlier. 3.6.4HExecutable test case HA realization of an abstract test case. Note In general the use of the word "test" will imply its normal English meaning. Sometimes it may be used as an abbreviation for abstract test case or executable test case. The context should make the meaning clear. 3.6.5HTest purpose HA description of the objective which an abstract test case is designed to achieve. N* Ԍ 3.6.6HGeneric test case HA specification of the actions required to achieve a specific test purpose, defined by a test body together with a description of the initial state in which the test body is to start. 3.6.7HPreamble HThe test steps needed to define the path from the starting stable state of the test case up to the initial state from which the test body will start. 3.6.8HTest body HThe set of test steps that are essential in order to achieve the test purpose and assign verdicts to the possible outcomes. 3.6.9HPostamble HThe test steps needed to define the paths from the end of the test body up to the finishing stable state for the test case. 3.6.10 Test step HA named subdivision of a test case, constructed from test events and/or other test steps, and used to modularize abstract test cases. 3.6.11HTest event HAn indivisible unit of test specification at the level of abstraction of the specification (e.g., sending or receiving a single PDU). 3.6.12HTest suite HA complete set of test cases, possibly combined into nested test groups, that is necessary to perform conformance testing or basic interconnection testing for an IUT or protocol within an IUT. 3.6.13HTest case HA generic, abstract or executable test case. 3.6.14HTest group HA named set of related test cases. 3.6.15HGeneric test suite HA test suite composed of generic test cases, with the same coverage as the complete set of test purposes for the particular protocol, this being the set or a superset of the test purposes of any particular abstract test suite for the same protocol. 3.6.16HAbstract test suite HA test suite composed of abstract test cases. 3.6.17HExecutable test suite N* Ԍ HA test suite composed of executable test cases. 3.6.18HConformance test suite HA test suite for conformance testing of one or more OSI* protocols. Note It should cover both capability testing and behaviour testing. It may be qualified by the adjectives: abstract, generic or executable, as appropriate. Unless stated otherwise, an "abstract test suite" is meant. 3.6.19HBasic interconnection test suite HA test suite for basic interconnection testing of one or more OSI* protocols. 3.6.20HSelected abstract test suite HThe subset of an abstract test suite selected using a specific PICS. 3.6.21HSelected executable test suite HThe subset of an executable test suite selected using a specific PICS and corresponding to a selected abstract test suite. 3.6.22HParameterized abstract test case HAn abstract test case in which all appropriate parameters have been supplied with values in accordance with a specific PICS and PIXIT. 3.6.23HParameterized executable test case HAn executable test case in which all appropriate parameters have been supplied with values in accordance with a specific PICS and PIXIT. 3.6.24HParameterized abstract test suite HA selected abstract test suite in which all test cases have been made parameterized abstract test cases for the appropriate PICS and PIXIT. 3.6.25HParameterized executable test suite H A selected executable test suite in which all test cases have been made parameterized executable test cases for the appropriate PICS and PIXIT, and corresponding to a parameterized abstract test suite. 3.7HTerminology of results 3.7.1HRepeatability (of results) HCharacteristic of a test case, such that repeated executions on the same IUT lead to the same verdict, and by extension a characteristic of a test suite. 3.7.2HComparability (of results) HCharacteristic of conformance assessment processes, such that their execution on the same IUT, in different test environments, leads to the same overall summary. 3.7.3HOutcome N* Ԍ HA sequence of test events together with the associated input/output, either identified by an abstract test case specifier, or observed during test execution. 3.7.4HForeseen outcome HAn outcome identified or categorized in the abstract test case specification. 3.7.5HUnforeseen outcome HAn outcome not identified or categorized in the abstract test case specification. 3.7.6HVerdict HStatement of "pass", "fail" or "inconclusive" concerning conformance of an IUT with respect to a test case that has been executed and which is specified in the abstract test suite. 3.7.7HSystem conformance test report (SCTR) HA document written at the end of the conformance assessment process, giving the overall summary of the conformance of the system to the set of protocols for which conformance testing was carried out. 3.7.8HProtocol conformance test report (PCTR) HA document written at the end of the conformance assessment process, giving the details of the testing carried out for a particular protocol, including the identification of the abstract test cases for which corresponding executable test cases were run and for each test case the test purpose and verdict. 3.7.9HValid test event HA test event which is allowed by the protocol Recommendation*, being both syntactically correct and occurring or arriving in an allowed context in an observed outcome. 3.7.10HSyntactically invalid test event HA test event which syntactically is not allowed by the protocol Recommendation*. Note The use of "invalid test event" is deprecated. 3.7.11HInopportune test event HA test event which, although syntactically correct, occurs or arrives at a point in an observed outcome when not allowed to do so by the protocol Recommendation*. 3.7.12H"Pass" verdict HA verdict given when the observed outcome satisfies the test purpose and is valid with respect to the relevant Recommendation(s)* and with respect to the PICS. 3.7.13H"Fail" verdict HA verdict given when the observed outcome is syntactically invalid or inopportune with respect to the relevant Recommendation(s)* or the PICS. N* Ԍ 3.7.14H"Inconclusive" verdict HA verdict given when the observed outcome is valid with respect to the relevant Recommendation(s)* but prevents the test purpose from being accomplished. 3.7.15HConformance log HA record of sufficient information necessary to verify verdict assignments as a result of conformance testing. 3.8HTerminology of test methods 3.8.1HPoint of control and observation (PCO) HA point at which control and observation is specified in a test case. 3.8.2HLower tester HThe abstraction of the means of providing, during test execution, control and observation at the appropriate PCO either below the IUT or remote from the IUT, as defined by the chosen abstract test method. 3.8.3HUpper tester HThe abstraction of the means of providing, during test execution, control and observation of the upper service boundary of the IUT, plus the control and observation of any relevant abstract local primitive. 3.8.4HAbstract (N)serviceprimitive ((N)ASP) HAn implementation independent description of an interaction between a serviceuser and a serviceprovider at an (N)service boundary, as defined in an OSI* service definition Recommendation*. 3.8.5HAbstract local primitive (ALP) HAn abbreviation for a description of control and/or observation to be performed by the upper tester, which cannot be described in terms of ASPs but which relates to events or states defined within the protocol Recommendation(s)* relevant to the IUT. Note The PIXIT will indicate whether or not a particular ALP can be realized within the SUT. The ability of the SUT to support particular ALPs as specified in the PIXIT will be used as a criterion in the test selection process. 3.8.6HTest coordination procedures HThe rules for cooperation between the lower and upper testers during testing. 3.8.7HTest management protocol HA protocol which is used as a realization of the test coordination procedures for a particular test suite. 3.8.8HLocal test methods HAbstract test methods in which the PCOs are directly at the layer boundaries of the IUT. N*Ԍ 3.8.9HExternal test methods HAbstract test methods in which the lower tester is separate from the SUT and communicates with it via an appropriate lower layer serviceprovider. Note The serviceprovider is immediately beneath the (lowest layer) protocol which is the focus of the testing, and may involve multiple OSI layers. 3.8.10HDistributed test method HAn external test method in which there is a PCO at the layer boundary at the top of the IUT. 3.8.11HCoordinated test method HAn external test method for which a standardized test management protocol is defined as the realization of the test coordination procedures, enabling the control and observation to be specified solely in terms of the lower tester activity, including the control and observation of test management PDUs. 3.8.12HRemote test method HAn external method in which there is neither a PCO above the IUT nor a standardized test management protocol; some requirements for test coordination procedures may be implied or informally expressed in the abstract test suite but no assumption is made regarding their feasibility or realization. 3.8.13HReal tester HThe realization of the lower tester, plus either the definition or the realization of the upper tester, plus the definition of the test coordination procedures, as appropriate to a particular test method. 3.8.14HTest realizer HAn organization which takes responsibility for providing, in a form independent of client and IUT, the means of testing IUTs in conformance with the abstract test suite. 4.HAbbreviations HFor the purposes of this Recommendation the following abbreviations apply.   HHAdministration*: Administration or recognized private operating agency.!H  `  HALP: abstract local primitive HASP: abstract service primitive HDTE: data terminal equipment HIUT: implementation under test HOSI: open systems interconnection  N*Ԍ `  HOSI*: OSI or related CCITT XSeries or TSeries Recommendations HPCO: point of control and observation HPCTR: protocol conformance test report HPDU: protocol data unit HPICS:  protocol implementation conformance statement HPIXIT: protocol implementation extra information for testing HBBSAP: service access point HSCTR:  system conformance test report HHRecommendation*: Standard or RecommendationH!H  `  HSUT: system under test HTMPDU: test management PDU Section 2: Overview 5.HThe meaning of conformance in OSI* 5.1HIntroduction  ` X HIn the context of OSI*, a real system is said to exhibit conformance if it complies with the requirements of applicable OSI* Recommendations* in its communication with other real systems. HApplicable OSI* Recommendations* include protocol Recommendations*, and transfer syntax Recommendations* inasmuch as they are implemented in conjunction with protocols. HOSI* Recommendations* form a set of interrelated Recommendations* which together define behaviour of open systems in their communication. Conformance of a real system will, therefore, be expressed at two levels, conformance to each individual Recommendation*, and conformance to the set. Note If the implementation is based on a predefined set of Recommendations*, often referred to as a functional standard or profile, the concept of conformance can be extended to specific requirements expressed in the functional standard or profile, as long as they do not conflict with the requirements of the base Recommendations*. 5.2HConformance requirements 5.2.1HThe conformance requirements in a Recommendation* can be: Ha)  mandatory requirements: these are to be observed in all cases;ƀ% Hb)  conditional requirements: these are to be observed if the conditions set out in the Recommendation* apply;ƀ% Hc)  options: these can be selected to suit the implementation, provided that any requirements applicable to the option are observed. More information on options is provided in Annex A.ƀ% N*Ԍ HFor example, CCITT essential facilities are mandatory requirements; additional facilities can be either conditional or optional requirements. Note The CCITT terms "essential facilities" and "additional facilities" need to be considered in the context of the scope of the CCITT Recommendation concerned; in many cases, essential facilities are mandatory for networks but not for DTEs. 5.2.2HFurthermore, conformance requirements in a Recommendation* can be stated Ha)  positively: they state what shall be done;ƀ% Hb)  negatively (prohibitions): they state what shall not be done.ƀ% 5.2.3HFinally, conformance requirements fall into two groups: Ha)  static conformance requirements;ƀ% Hb)  dynamic conformance requirements.ƀ% HThese are discussed in 5.3. and 5.5, respectively. 5.3HStatic conformance requirements HStatic conformance requirements are those that define the allowed minimum capabilities of an implementation, in order to facilitate interworking. These requirements may be at a broad level, such as the grouping of functional units and options into protocol classes, or at a detailed level, such as a range of values that have to be supported for specific parameters of timers. HStatic conformance requirements and options in OSI* Recommendations* can be of two varieties: Ha)  those which determine the capabilities to be included in the implementation of the particular protocol;ƀ% Hb)  those which determine multilayer dependencies, e.g., those which place constraints on the capabilities of the underlying layers of the system in which the protocol implementation resides. These are likely to be found in upper layer Recommendations*.ƀ% HAll capabilities not explicitly stated as static conformance requirements are to be regarded as optional. 5.4HProtocol implementation conformance statement (PICS) HTo evaluate the conformance of a particular implementation, it is necessary to have a statement of the capabilities and options which have been implemented, and any features which have been omitted, so that the implementation can be tested for conformance against relevant requirements, and against those requirements only. Such a statement is called a Protocol Implementation Conformance Statement (PICS). HIn a PICS there should be a distinction between the following categories of information which it may contain: Ha)  information related to the mandatory, optional and conditional static conformance requirements of the protocol itself;ƀ%  N*ԌHb)  information related to the mandatory, optional and conditional static conformance requirements for multilayer dependencies.ƀ% HIf a set of interrelated OSI* protocol Recommendations* has been implemented in a system, a PICS is needed for each protocol. A System Conformance Statement will also be necessary, summarizing all protocols in the system for each of which a distinct PICS is provided. 5.5HDynamic conformance requirements HDynamic conformance requirements are all those requirements (and options) which determine what observable behaviour is permitted by the relevant OSI* Recommendation(s)* in instances of communication. They form the bulk of each OSI* protocol Recommendation*. They define the set of allowable behaviours of an implementation or real system. This set defines the maximum capability that a conforming implementation or real system can have within the terms of the OSI* protocol Recommendation*. HA system exhibits dynamic conformance in an instance of communication if its behaviour is a member of the set of all behaviours permitted by the relevant OSI* protocol Recommendation(s)* in a way which is consistent with the PICS. 5.6HA conforming system HA conforming system or implementation is one which is shown to satisfy both static and dynamic conformance requirements, consistent with the capabilities stated in the PICS, for each protocol declared in the System Conformance Statement. 5.7HInterworking and conformance 5.7.1HThe primary purpose of conformance testing is to increase the probability that different implementations are able to interwork. HSuccessful interworking of two or more real open systems is more likely to be achieved if they all conform to the same subset of an OSI* Recommendation*, or to the same selection of OSI* Recommendations*, than if they do not. HIn order to prepare two or more systems to interwork successfully, it is recommended that a comparison be made of the System Conformance Statements and PICSs of these systems. HIf there is more than one version of a relevant OSI* Recommendation* indicated in the PICSs, the differences between the versions need to be identified and their implications for consideration, including their use in combination with other Recommendations*. 5.7.2HWhile conformance is a necessary condition, it is not on its own a sufficient condition to guarantee interworking capability. Even if two implementations conform to the same OSI* protocol Recommendation*, they may fail to interwork because of factors outside the scope of that Recommendation. HTrial interworking is recommended in order to detect these factors. Further information to assist interworking between two systems can be obtained by extending the PICS comparison to other relevant information, including test reports and PIXIT (see clause 6.2). The comparison can focus on: Ha)  additional mechanisms claimed to work around known ambiguities or deficiencies not yet corrected in the Recommendations* or in peer real N* systems, e.g., solution of multilayer problems;ƀ% Hb)  selection of free options which are not taken into account in the static conformance requirements of the Recommendations*;ƀ% Hc)  the existence of timers not specified in the Recommendation* and their associated values.ƀ% Note The comparison can be made between two individual systems, between two or more types of product, or, for the PICS comparison only, between two or more specifications for procurement, permissions to connect, etc. 6.HConformance and testing 6.1HObjectives of conformance testing 6.1.1HIntroduction HConformance testing as discussed in this Recommendation is focused on testing for conformance to OSI* protocol Recommendations*. However, it also applies to testing for conformance to OSI* transfer syntax Recommendations*, to the extent that this can be carried out by testing the transfer syntax in combination with an OSI* protocol. HIn principle, the objective of conformance testing is to establish whether the implementation being tested conforms to the specification in the relevant Recommendation*. Practical limitations make it impossible to be exhaustive, and economic considerations may restrict testing still further. HTherefore, this Recommendation distinguishes four types of testing, according to the extent to which they provide an indication of conformance: Ha)  basic interconnection tests, which provide prima facie evidence that an IUT conforms;ƀ% Hb)  capability tests, which check that the observable capabilities of the IUT are in accordance with the static conformance requirements and the capabilities claimed in the PICS;ƀ% Hc)  behaviour tests, which endeavour to provide testing which is as comprehensive as possible over the full range of dynamic conformance requirements specified by the Recommendation*, within the capabilities of the IUT;ƀ% Hd)  conformance resolution tests, which probe in depth the conformance of an IUT to particular requirements, to provide a definite yes/no answer and diagnostic information in relation to specific conformance issues, such tests are not standardized.ƀ% 6.1.2HBasic interconnection tests 6.1.2.1HBasic interconnection tests provide limited testing of an IUT in relation to the main features in a Recommendation*, to establish that there is sufficient conformance for interconnection to be possible, without trying to perform thorough testing. 6.1.2.2HBasic interconnection tests are appropriate:  N*ԌHa)  for detecting severe cases of nonconformance;ƀ% Hb)  as a preliminary filter before undertaking more costly tests;ƀ% Hc)  to give a prima facie indication that an implementation which has passed full conformance tests in one environment still conforms in a new environment (e.g., before testing an (N)implementation, to check that a tested (N1)implementation has not undergone any severe change due to being linked to the (N)implementation);ƀ% Hd)  for use by users of implementations, to determine whether the implementations appear to be usable for communication with other conforming implementations, e.g., as a preliminary to data interchange.ƀ% 6.1.2.3HBasic interconnection tests are inappropriate: Ha)  as a basis for claims of conformance by the supplier of an implementation;ƀ% Hb)  as a means of arbitration to determine causes for communications failure.ƀ% 6.1.2.4HBasic interconnection tests should be standardized as either a very small test suite or a subset of a conformance test suite (including capability and behaviour tests). They can be used on their own or together with a conformance test suite. The existence and execution of basic interconnection tests are optional. 6.1.3HCapability tests 6.1.3.1HCapability tests provide limited testing of each of the static conformance requirements in a Recommendation*, to ascertain what capabilities of the IUT can be observed and to check that those observable capabilities are valid with respect to the static conformance requirements and the PICS. 6.1.3.2HCapability tests are appropriate: Ha)  to check as far as possible the consistency of the PICS with the IUT;ƀ% HHX HHb)  as a preliminary filter before undertaking more indepth and costly testing;ƀ% Hc)  to check that the capabilities of the IUT are consistent with the static conformance requirements;ƀ% Hd)  to enable efficient selection of behaviour tests to be made for a particular IUT;ƀ% He)  when taken together with behaviour tests, as a basis for claims of conformance.ƀ% 6.1.3.3HCapability tests are inappropriate: Ha)  on their own, as a basis for claims of conformance by the FtCCtCsupplier of an implementation; Hb)  for testing in detail the behaviour associated with each capability which has been implemented or not implemented;ƀ%  N*ԌHc)  for resolution of problems experienced during live usage or where other tests indicate possible nonconformance even though the capability tests have been satisfied.ƀ%  X 6.1.3.4HCapability tests are standardized within a conformance test suite. They can either be separated into their own test group(s) or merged with the behaviour tests. 6.1.4HBehaviour tests 6.1.4.1HBehaviour tests test an implementation as thoroughly as is practical, over the full range of dynamic conformance requirements specified in a Recommendation*. Since the number of possible combinations of events and timing of events is infinite, such testing cannot be exhaustive. There is a further limitation, namely that these tests are designed to be run collectively in a single test environment, so that any faults which are difficult or impossible to detect in that environment are likely to be missed. Therefore, it is possible that a nonconforming implementation passes the conformance test suite; one aim of the test suite design is to minimize the number of times that this occurs. 6.1.4.2HBehaviour tests are appropriate, when taken together with capability tests, as a basis for the conformance assessment process. 6.1.4.3HBehaviour tests are inappropriate for resolution of problems experienced during live usage or where other tests indicate possible non conformance even though the behaviour tests have been satisfied. 6.1.4.4HBehaviour tests are standardized as the bulk of a conformance test suite. Note Behaviour tests include tests for valid behaviour by the IUT in response to valid, inopportune and syntactically invalid protocol behaviour by the real tester. This includes testing the rejection by the IUT of attempts to use features (capabilities) which are stated in the PICS as being not implemented. Thus, capability tests do not need to include tests for capabilities omitted from the PICS. 6.1.5HConformance resolution tests 6.1.5.1HConformance resolution tests provide diagnostic answers, as near to definitive as possible, to the resolution of whether an implementation satisfies particular requirements. Because of the problems of exhaustiveness noted in  6.1.4.1, the definite answers are gained at the expense of confining tests to a narrow field. 6.1.5.2HThe test architecture and test method will normally be chosen specifically for the requirements to be tested, and need not be ones that are generally useful for other requirements. They may even be ones that are regarded as being unacceptable for (standardized) abstract conformance test suites, e.g., involving implementation specific methods using, say, the diagnostic and debugging facilities of the specific operating system. 6.1.5.3HThe distinction between behaviour tests and conformance resolution tests may be illustrated by the case of an event such as a Reset. The behaviour tests may include only a representative selection of conditions under which a Reset might occur, and may fail to detect incorrect behaviour in other circumstances. The conformance resolution tests would be confined to conditions under which incorrect behaviour was already suspected to occur, and would confirm whether or not the suspicions were correct. 6.1.5.4HConformance resolution tests are appropriate: Ha)  for providing a yes/no answer in a strictly confined and previously N* identified situation (e.g., during implementation development, to check whether a particular feature has been correctly implemented, or during operational use, to investigate the cause of problems);' Hb)  as a means for identifying and offering resolutions for deficiencies in a current conformance test suite.' 6.1.5.5HConformance resolution tests are inappropriate as a basis for judging whether or not an implementation conforms overall. 6.1.5.6HConformance resolution tests are not standardized. Note on 6.1 As a byproduct of conformance testing, errors and deficiencies in protocol Recommendations* may be identified. 6.2HProtocol implementation extra information for testing (PIXIT) HIn order to test a protocol implementation, the test laboratory will require information relating to the IUT and its testing environment in addition to that provided by the PICS. This "Protocol Implementation eXtra Information for Testing" (PIXIT) will be provided by the client submitting the implementation for testing, as a result of consultation with the test laboratory. HThe PIXIT may contain the following information: Ha)  information needed by the test laboratory in order to be able to run the appropriate test suite on the specific system (e.g., information related to the test method to be used to run the test cases, addressing information);' Hb)  information already mentioned in the PICS and which needs to be made precise (e.g., a timer value range which is declared as a parameter in the PICS should be specified in the PIXIT);' Hc)  information to help determine which capabilities stated in the PICS as being supported are testable and which are untestable;' Hd)  other administrative matters (e.g., the IUT identifier, reference to the related PICS).' HThe PIXIT should not conflict with the appropriate PICS. HThe abstract test suite specifier, test realizer and test laboratory will all contribute to the development of the PIXIT proforma. 6.3HConformance assessment process outline 6.3.1HThe main feature of the conformance assessment process is a configuration of equipment allowing exchanges of information between the IUT and a real tester. These are controlled and observed by the real tester. 6.3.2HIn conceptual outline, conformance testing should include several steps, involving both static conformance reviews and live testing phases, culminating in the production of a test report which is as thorough as is practical. 6.3.3HThese steps are: Ha)  analysis of the PICS;'  N*ԌHb)  test selection and parameterization; Hc)  basic interconnection testing (optional); Hd)  capability testing; He)  behaviour testing; Hf)  review and analysis of test results; Hg)  synthesis, conclusions and conformance test report production. HThese are illustrated in Figure 1. HPrior to the execution of any of the tests, the IUT's PICS and PIXIT are input to the test case selection and parameterization process. 6.4HAnalysis of results 6.4.1HGeneral 6.4.1.1HOutcomes and verdicts HThe observed outcome (of the test execution) is the series of events which occurred during execution of a test case; it includes all input to and output from the IUT at the points of control and observation. HThe foreseen outcomes are identified and defined by the abstract test case specification taken in conjunction with the protocol Recommendation*. For each test case, there may be one or more foreseen outcome(s). Foreseen outcomes are defined primarily in abstract terms. HA verdict is a statement of pass, fail or inconclusive to be associated with every foreseen outcome in the abstract test suite specification. HThe analysis of results is performed by comparing the observed outcomes with foreseen outcomes. HThe verdict assigned to an observed outcome is that associated with the matching foreseen outcome. If the observed outcome is unforeseen then the abstract test suite specification will state what default verdict shall be assigned. HThe means by which the comparison of the observed outcomes with the foreseen outcomes is made is outside the scope of this Recommendation. Note Amongst the possibilities are: Ha)  manual or automated comparison (or a mixture); Hb)  comparison at or after execution time; Hc)  translating the observed outcomes into abstract terms for comparison with the foreseen outcomes or translating the foreseen outcomes into the terms used to record the observed outcomes.' HThe verdict will be pass, fail or inconclusive: Ha)  pass means that the observed outcome satisfies the test purpose and is valid N* with respect to the relevant Recommendation(s)* and with respect to the PICS;' Hb)  fail means that the observed outcome is syntactically invalid or inopportune with respect to the relevant Recommendation(s)* or the PICS;' Hc)  inconclusive means that the observed outcome is valid with respect to the relevant Recommendation(s)* but prevents the test purpose from being accomplished.' HThe verdict assigned to a particular outcome will depend on the test purpose and the validity of the observed protocol behaviour. HThe verdicts made in respect of individual test cases will be synthesized into an overall summary for the IUT based on the test cases executed. 6.4.1.2HConformance test reports HThe results of conformance testing will be documented in a set of conformance test reports. These reports will be of two types: a System Conformance Test Report (SCTR), and a Protocol Conformance Test Report (PCTR). HThe SCTR, which will always be provided, gives an overall summary of the conformance status of the SUT, with respect to its single or multilayer IUT. A standard proforma for the SCTR is for further study. HThe PCTR, one of which will be issued for each protocol tested in the SUT, documents all of the results of the test cases giving references to the conformance logs which contain the observed outcomes. The PCTR also gives reference to all necessary documents relating to the conduct of the conformance assessment process for that protocol. HA standard proforma for the PCTR is for further study. The ordered list of test cases to be used in the PCTR will be specified in the conformance test suite Recommendation*. 6.4.2HRepeatability of results HIn order to achieve the objective of credible conformance testing, it is clear that the result of executing a test case on an IUT should be the same whenever it is performed. Statistically, it may not be possible to perform a complete conformance test suite and observe outcomes which are completely identical to those obtained on another occasion: unforeseen events do occur, and this is a feature of the environments involved. Nevertheless, at the test case level, it is very important that every effort is made by the test specifiers and test laboratories to minimize the possibility that a test case produces different outcomes on different occasions. 6.4.3HComparability of results HIn order to achieve the ultimate objectives of conformance testing, the overall summary concerning conformance of an IUT has to be independent of the test environment in which the testing takes place. That is to say, the standardization of all of the procedures concerned with conformance testing should result in a comparable overall N* summary being accorded to the IUT, whether the testing is done by the supplier, a user, or by any third party test house. There are a large number of factors to be studied to achieve this, of which some of the more important are: Ha)  careful design of the abstract test case specification to give flexibility where appropriate, but show which requirements have to be met; (which is the subject of this Recommendation);' Hb)  careful specification of the real tester which should be used to run the test suite; again this specification should give flexibility where appropriate, but show which requirements have to be met, including all test coordination procedures (if any);' * Hc)  careful specification of the procedure to be followed in determining how the contents of the PICS are to be used in the analysis of outcomes of test cases; there should be no room for "optimistic" interpretation;' Hd)  careful specification of the procedures to be followed by test laboratories as regards the repetition of a test case before making a final verdict for that test purpose;' He)  a proforma for a conformance test report;' Hf)  careful specification of the procedures necessary when synthesizing an overall summary.' 6.4.4HAuditability of results HFor legal reasons, as well as others, it may be necessary to review the observed outcomes from the execution of a conformance test suite in order to make sure that all procedures have been correctly followed. Whether or not analysis has been carried out in a manual or automatic mode, it is essential that all inputs, outputs, and other test events are careful logged, and the analysis of the results recorded. In some cases this may be the responsibility of the test realizer, who may elect to include the test criteria in the conformance log, as well as all outcomes. In others, it may be the responsibility of the test laboratory, which might be required to follow all standard procedures concerning the recording of results. Note As far as auditability is concerned, some automatic procedures would be preferred, but in the event it should be appreciated that from a legal standpoint such automatic procedures would have to be accredited themselves, if they are to be credible.