A Development Approach for Engineering Modeling and Analysis Environments

David Coppit

June 27, 2001

University of Virginia Department of Computer Science

Thornton Hall, Charlottesville, VA 22903

david@coppit.org

1 Introduction

This addendum elaborates and clarifies aspects of the proposal presented on May 25, 2000. It

presents a reformulation of the three research questions in the context of an overarching approach for the development of engineering environments. It also presents a modified description of the work to be performed and the manner in which the work will be used to evaluate the proposed approaches. Section 2 presents the primary research problem and Section 3 presents the proposed approach. Section 4 describes the experimental design. Sections 5 and 6 describe expected data and contributions. Section 7 provides a timeline, and Section 8 summarizes.

2 Problem

Modeling and analysis environments are critical to engineering. Unfortunately, developing

trustworthy modeling and analysis environments in a cost-effective manner is difficult. In particular, we lack a development approach based on a standard architecture and set of development techniques for the construction of such systems.

Primary research problem: We lack an approach that enables the cost-effective construction of engineering environments having sophisticated modeling capabilities and trustworthy analysis capabilities.

3 Approach and Thesis

3.1 Overview of the Approach

The proposed approach is based on the application of specific development techniques in the

context of an architecture for modeling and analysis environments. The structure of our proposed architecture is illustrated in Figure 1. The boxes with solid lines represent modules. Arrows represent communication between modules, and ovals represent different representations of the model. Central to the architecture is the abstract representation of the model. Interacting with the model is a set of tools. The upper box represents the set of front-end (user-oriented) tools that host concrete representations of the model and are coordinated by a view manager. The front-end tools provide the user interface which allows the user to manipulate one or more concrete representations of the abstract model. The lower box consists of back-end (batch-oriented) modules for the analysis of the model. These tools do not allow the user to manipulate the model. Instead, they perform various analyses and report the results.

There are two key development techniques guiding the development of modeling environments

using this architecture. The first is the use of mass-market applications for the development of the modeling tools. We believe that this approach has the potential to significantly reduce the cost of developing environments while still delivering the levels of functionality that users expect. Secondly, the development of the environment is based on the judicious use of formal methods, in which the syntax and the semantics of the model is specified, validated, and then carefully implemented and verified. We


believe that this approach can improve the trustworthiness of the overall environment, and significantly improve the dependability of the analysis tools.

3.1.1 Package-Oriented Modeling Capability

The modeling capability is constructed using package-oriented programming (POP). POP involves the specialization and tight integration of multiple mass-market packages as components. Each concrete representation is hosted by a package that provides a user interface as well as programmatic methods for manipulating the internal representation.

Package wrappers encapsulate the details of converting the abstract representation to and from the concrete representations, and also encapsulate the implementation of view-specific operations on the concrete representations. The view manager controls the conversion of one concrete representation to another via an intermediate abstract representation, and also provides an abstract representation for analysis by the back-end tools.
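As a rough illustration of this wrapper and view-manager structure, a package wrapper might expose conversions between its concrete representation and the abstract model. This is a minimal sketch; all class and method names here are hypothetical and are not taken from the actual Galileo implementation.

```python
# Hypothetical sketch of the package-wrapper pattern described above.
# None of these names come from the actual Galileo code.

class AbstractModel:
    """The shared abstract representation of the model."""
    def __init__(self, elements):
        self.elements = elements

class PackageWrapper:
    """Encapsulates one mass-market package hosting a concrete view."""
    def to_abstract(self):
        """Read the package's concrete representation into the abstract model."""
        raise NotImplementedError
    def from_abstract(self, model):
        """Render the abstract model into the package's concrete representation."""
        raise NotImplementedError

class ViewManager:
    """Coordinates conversions among views via the abstract representation."""
    def __init__(self, wrappers):
        self.wrappers = wrappers
    def sync_from(self, source):
        """Propagate edits made in one view to all other views."""
        model = source.to_abstract()
        for w in self.wrappers:
            if w is not source:
                w.from_abstract(model)
        return model  # the same abstract model also serves the back-end tools
```

Note that each conversion passes through the abstract representation, so adding a new view requires only one new wrapper rather than pairwise converters.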

[Figure 1: Architectural overview]

POP is an innovative approach to the development of interactive systems such as engineering environments. Component-based software development (CBSD) approaches such as POP promise to increase the productivity of software developers. To date, there have been few successful CBSD models, and none that support the construction of highly interactive software from multiple components. POP is a candidate approach that shows some promise but has not yet been carefully evaluated. Our research provides us the opportunity to contribute to the solution of this more general research problem:

Secondary research problem #1: The feasibility of the package-oriented programming model for component-based software development is still unknown, and we lack an understanding of the conditions under which it can succeed.

3.1.2 Formal Specification of Dynamic Fault Trees

The analysis tools depend on the application domain of the environment. For example, in Figure 1, the overall analysis capability utilizes a dispatcher which routes the abstract representation to one of two analysis engines, depending on the analysis that the user requests and possibly the characteristics of the model. Other analysis tools may implement additional functionality, such as a divide-and-conquer solution approach.
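A dispatcher of the kind described above might be sketched as follows. This is a minimal illustration; the predicate, engine classes, and gate names are invented for this example and are not drawn from the actual tools.

```python
# Minimal sketch of analysis dispatch based on the requested analysis
# and characteristics of the model. All names are hypothetical.

def has_dynamic_gates(model):
    """Model characteristic used to choose an engine (illustrative test)."""
    return any(g in model for g in ("SPARE", "FDEP", "PAND"))

class BDDEngine:
    """Handles static fault trees (no dynamic gates)."""
    def supports(self, analysis, model):
        return analysis == "unreliability" and not has_dynamic_gates(model)
    def solve(self, model):
        return "BDD result"

class MarkovEngine:
    """Handles dynamic fault trees via a Markov-chain solution."""
    def supports(self, analysis, model):
        return analysis == "unreliability"
    def solve(self, model):
        return "Markov result"

def dispatch(model, requested_analysis, engines):
    """Route the abstract model to the first engine able to analyze it."""
    for engine in engines:
        if engine.supports(requested_analysis, model):
            return engine.solve(model)
    raise ValueError("no engine supports this analysis")
```

Ordering the engines from most specialized to most general lets the dispatcher prefer the cheaper static solver whenever the model permits it.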

Engineering modeling and analysis environments are critical to engineering. Incorrect results can

result in less than optimal designs, which could in turn lead to serious failure in the resulting product. Both the fidelity of the model and the validity of analysis results depend critically on the precise


definition of the modeling framework. However, despite the complex and subtle nature of many modeling frameworks, their semantics are often defined without mathematical rigor. A well-known approach to increasing our confidence in software systems is the use of formal methods. However, formal methods are often not applied in practice. One area in which the use of formal methods is needed is the specification of the semantics of modeling frameworks for computational engineering:

Secondary research problem #2: Despite the complexity of modeling frameworks and the potential risk associated with the use of invalid analysis results, few modeling frameworks are specified with the necessary precision.

In general, formal specification of the modeling framework has the potential to significantly

improve the trustworthiness of the overall modeling environment. Specification of the concrete representations and the abstract representation, and the correspondence between the two, can reduce the chance of error in rendering to and from the various representations. Applying formal methods to the specification of the semantics of the abstract model can reduce the risk associated with the analysis tools by establishing a mathematical correspondence between the abstract model and the low-level mathematical representation.

3.2 Characteristics of the Approach

The approach we have proposed has several potential benefits. In the architecture, the abstraction

of the packages via the package wrappers decouples the packages from the environment, easing evolution and experimentation with the packages. The use of POP for the superstructure provides rich functionality at significantly reduced cost due to mass-market pricing [4]. The view manager provides a level of encapsulation for the modeling capability, allowing the rest of the environment to operate independently of the number and type of concrete representations available.

The use of the abstract representation of the model as the internal “lingua franca” of the system

simplifies the system and eases evolution. For example, the back-end tools are loosely coupled with the front-end tools, which allows for independent development of the modeling and analysis capabilities, enabling rapid experimentation and evolution of the language syntax, the intermediate representations, and the overall analysis capability.

The judicious use of formal specification techniques in a denotational style increases the

trustworthiness of the modeling framework and the resulting implementation. Conflicts in the semantics of the language can be discovered and resolved early, instead of leaving the resolution to the arbitrary and/or implicit decision of the developer. Formal specification also reveals regularity and orthogonality issues in the language, and better establishes the correspondence between the multiple concrete representations.

3.3 Thesis

Thesis: The approach we have proposed can contribute significantly to the cost-effective development of sophisticated, trustworthy modeling and analysis environments.

4 Experimental Design

To test our thesis we will use the approach for the end-to-end construction of a representative environment for the modeling and analysis of dynamic fault trees (DFTs). The environment will be representative of engineering modeling and analysis environments, providing functionality commensurate with other modeling environments. It will support the manipulation of multiple concrete representations of the model, and will support multiple analysis techniques. The concrete representations hosted by the modeling views and the DFT analysis tool will be based on a validated formal specification of DFTs.


4.1 User-Driven Requirements

Our experimental evaluation will have several strengths. The first is that the software to be developed is representative of engineering modeling and analysis environments. The environment will not be a “toy system”; it will be based on real requirements from real users, and will likely be used in practice. The development will be executed in an end-to-end fashion from requirements definition to delivered product, including a beta test phase and user feedback via bug reporting and surveys. The environment will not be under-designed in terms of the level of functionality or quality that will be delivered.

The modeling capability of the environment will be developed under the aegis of a NASA

contract by a small group of developers in a university context. As part of the experimental evaluation of our proposed approach, the environment will be evaluated by NASA based on the environment’s fulfillment of the documented requirements. These requirements will represent an independent standard for evaluating the extent to which the environment satisfies the modeling needs of real users. Upon delivery of the software, NASA will use the testing document to verify that each of the requirements is met, thereby fulfilling a condition for final acceptance of the software. The evaluation by NASA will likely also involve more subjective characteristics such as “high usability” which are not well represented in the requirements document. Additional data on these qualitative properties will be collected from NASA and other people through the use of a carefully designed survey.

Due to differing requirements, the version of the software evaluated by NASA (called

Galileo/ASSAP) will not benefit directly from the use of formal methods. Instead, it will contain an informally developed and verified implementation of the analysis tools. Our intent is to use the NASA version as a starting point for development of a version based on the formal specification that we develop. The NASA evaluation will be largely centered on the modeling capabilities, the aspect that NASA's evaluation is best able to address. Internal characteristics such as the trustworthiness of the analysis tools are most influenced by the formal specification effort, and are difficult for users to ascertain. For such characteristics, we will develop reasoned arguments or evidence that suggests that the characteristics are fulfilled, and then present those arguments to domain experts for validation.

4.2 Sophisticated Modeling Capability

The development of the modeling capability is part of an ongoing effort in our research group to

evaluate the POP approach. Based on our experiences to date, we have already reported preliminary results in which we concluded that POP is a high-risk, high-payoff approach [2]. The particular contribution we will make in this research will be the design and development of phased mission modeling capabilities. Phased missions allow engineers to use multiple fault trees to model the system during different phases of operation.

This feature poses significant challenges for POP. The failure modes of each phase are modeled

using separate fault trees, with the basic components sharing the same characteristics. The user interface must be engineered so that the packages can host multiple fault trees in a usable manner. The front-end tools will need to manage multiple representations of the fault trees, allowing the user to convert the fault trees from one view to another automatically. A new level of integration of the packages will be required to prevent inconsistency in corresponding basic components in different phases. Lastly, fault tree editing operations will need to be expanded to handle multiple fault trees.

4.3 Trustworthy Analysis Capability

During the course of our research we will specify the DFT semantics using techniques from

software engineering and programming languages research. We will use formal methods for the development and validation of a specification of the framework’s semantics. The style in which we will develop the specification will be based on denotational semantics techniques in which the domain of DFTs is formalized and its semantics expressed in terms of simpler domains. This specification will be


the basis for the careful implementation of the concrete representations, the parsers, the abstract representation, and most importantly, the analysis core.
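The denotational style described above maps each syntactic construct to a meaning in a simpler mathematical domain. As a much-simplified illustration, a semantic function over fault-tree expressions might look like the following. This sketch covers static gates only and assumes statistically independent basic events with exponential failure distributions; the real DFT semantics being specified must also handle dynamic gates and the ordering of events, so this is a teaching example, not the formal specification.

```python
# Illustrative denotational-style semantics for *static* fault-tree gates.
# The meaning of a tree is its failure probability at time t, assuming
# independent basic events. This is a simplified sketch, not the actual
# DFT semantics developed in the Z specification.

import math

def meaning(tree, rates, t):
    """Map a fault-tree expression to its failure probability at time t."""
    op = tree[0]
    if op == "basic":
        # Basic event with exponential failure distribution.
        return 1.0 - math.exp(-rates[tree[1]] * t)
    children = [meaning(c, rates, t) for c in tree[1:]]
    if op == "and":
        # AND gate fails when all children have failed.
        p = 1.0
        for c in children:
            p *= c
        return p
    if op == "or":
        # OR gate fails when at least one child has failed.
        q = 1.0
        for c in children:
            q *= 1.0 - c
        return 1.0 - q
    raise ValueError(f"unknown gate: {op}")
```

The value of the denotational framing is that each gate's meaning is defined compositionally from the meanings of its children, which is what makes the specification amenable to bottom-up implementation and verification.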

The development of a trustworthy formal specification of DFTs will be done in collaboration with

domain experts using the Z [5] specification language. We will validate the specification with domain experts to ensure that it adequately captures their understanding of the framework, refining it in response to their feedback. Next we will revise the framework in order to improve its regularity, orthogonality, and other characteristics. Lastly, we will apply analysis tools such as type checkers and theorem provers to help check the integrity of the specification by ensuring that certain theorems of the specification are valid.

We have already formalized most of the DFT framework [3]. Once the specification development

and validation is complete, it can be used to drive a careful implementation of solution engines for models expressed in the framework. While we do not intend to attempt formal refinement of the specification to code, we will leverage the investment made in the specification during the development of the implementation. As the implementation of each abstraction is completed, it will be reviewed and tested to verify that it properly implements the corresponding abstraction in the specification. Developing the software in this manner will allow us to reason about the correctness of each abstraction independently and in a bottom-up fashion, thereby increasing the trustworthiness of the overall implementation by carefully constructing its constituent parts.

The verification of the implementation will be based on review of the resulting software

implementation and the arguments for its correctness. A test set consisting of a set of dynamic fault trees and their solutions will also be used to test the correctness of the implementation. During informal refinement of the specification to code, reasoned arguments will be developed to help justify that the transformations are correct.

5 Expected Data

We expect to acquire several types of data during the course of this research. The first is an informal analysis of user requirements for reliability analysis tools. These requirements will be used to collect data on the extent to which the environment we develop is able to meet the requirements of the users. We will also estimate the overall cost of developing a representative environment in this style. The implementation of phased missions along with experiences garnered from past software

development using POP will provide the basis for our evaluation of the feasibility of POP. The development of the modeling capability will also provide insight into the difficulties, pitfalls, risks, and benefits of applying the POP approach. This data will be based on an analysis of three factors of POP development: component specialization, component integration, and component evolution. We will also acquire anecdotal data on the suitability of today’s components for development in this style.

The specification of the syntax of DFTs will help us evaluate the benefits of such an approach to

the development of the syntax and semantics of engineering modeling frameworks. The specification of the notation’s semantics will also provide data on the impact of formalism on the design of the modeling framework and the reliability of the analysis core. Our validation of the specification will provide some data on the use of formal methods in multi-disciplinary collaborations between software engineers and domain experts.

6 Expected Contributions

If successful, our research will provide one data point supporting the feasibility of the proposed approach. It will provide data supporting the feasibility of the application of formal methods for the development of critical software. We will also have data regarding the ability of the proposed approach to produce software that meets the requirements of users.


We expect our evaluation to determine the feasibility of using the POP approach to construct

sophisticated, interactive software. Our development efforts using the POP approach will allow us to evaluate the key pitfalls and difficulties. In a preliminary paper on POP [1] we describe the potential of the approach, as well as difficulties we have encountered thus far. We expect additional insights as a result of implementing features of the environment that require increased specialization and integration of the components. The result will be a better understanding of the POP model, both in the abstract and in terms of a concrete instantiation, with insights into its potential, problems, and shortcomings. The evaluation will also provide much-needed insight into the conditions upon which the success of component-based approaches, in general, is predicated.

Our work on the formal specification of modeling frameworks will have two basic contributions.

First, as a case study, our work will support the claim that the use of formal methods in the design of modeling frameworks for engineering is badly needed, and that it can be both practical and profitable. Second, the case study will result in the formal definition of the DFT framework. This definition will be a significant contribution to the field of reliability engineering. The DFT framework is recognized as important to modeling of fault-tolerant computer-based systems. Yet, the framework has been without a rigorously precise, i.e., scientific, definition. Formalizing the DFT framework will likely reveal a number of significant conceptual errors in design and program errors in implementation. It will also lead to significant insights concerning the underlying science and improvements in framework design.

7 Timeline

7.1 Work Already Performed

1997: Integration of legacy analysis engine into early Galileo prototype

1998: Development and validation of an initial specification of DFTs [1]

1998: Reimplementation of DFT analysis engine informed by initial DFT specification (with Ragavan Manian)

1999: Prototype integration of Monte Carlo analysis engine for DFTs

1999: Characterization of DFT subtypes and their corresponding semantic domains, and of subtleties in the modeling framework

2000: Early evaluation of POP based on experiences to date developing Galileo modeling capability [2]

2000-2001: Development of improved and complete specification of DFT syntax and semantics

2001 (Feb): Designed survey for evaluation of Galileo by users

7.2 Work to be Done

2001 (Feb-Aug): Design and implement phased mission capability (in cooperation with other members of the research group)

2001 (Aug): Deliver Galileo/ASSAP to NASA for evaluation

2001 (Mar, Nov): Survey beta testers and users of Galileo

2001 (Mar-Apr): Validate specification with domain experts and through proof of key theorems

2001 (Apr-Oct): Implement new fault tree code and DFT analysis engine based on specification

2001 (Oct-Nov): Integrate new fault tree and DFT analysis engine into Galileo


7.3 Work Not to be Done

Formal specification and implementation of modularization of DFTs

Formal specification and implementation of phased mission capability

Formal specification and implementation of transfer gates and view-dependent information in the concrete views

Formal specification and implementation of DFT semantics in terms of other low-level domains such as BDDs or Monte Carlo simulation

NASA will provide an evaluation of the capabilities of POP through their evaluation of Galileo/ASSAP. However, they will not be involved in the evaluation of the formally-based DFT analysis engine implementation, as it is not part of their requirements and will not be integrated until after delivery of Galileo/ASSAP.

8 Summary

In this document I have more clearly established the relationship between the three overall

research problems. I have also described the activities to be performed in the context of the development of an environment for dynamic fault tree modeling and analysis, and indicated how those activities will allow me to gain insight into the research questions. The work, if successful, could have significant impact on many engineering domains. The ability to build sophisticated, high-assurance environments at low cost could further progress in several disciplines by making previously promising but impractical environments feasible. The use of formalism in the development of modeling environments can provide a level of assurance that will help increase the model's acceptance in the academic community, and will give practitioners confidence in the model and analysis implementations.

References

[1] David Coppit and Kevin J. Sullivan. Formal specification in collaborative design of critical software tools. In Proceedings of the Third IEEE International High-Assurance Systems Engineering Symposium, pages 13-20, Washington, D.C., 13-14 November 1998. IEEE.

[2] David Coppit and Kevin J. Sullivan. Multiple mass-market applications as components. In Proceedings of the 22nd International Conference on Software Engineering, pages 273-282, Limerick, Ireland, 4-11 June 2000. IEEE.

[3] David Coppit, Kevin J. Sullivan, and Joanne Bechta Dugan. Formal semantics of models for computational engineering: A case study on dynamic fault trees. In Proceedings of the International Symposium on Software Reliability Engineering, pages 270-282, San Jose, California, 8-11 October 2000. IEEE.

[4] Joanne Bechta Dugan, Kevin J. Sullivan, and David Coppit. Developing a low-cost high-quality software tool for dynamic fault tree analysis. Transactions on Reliability, pages 49-59, December 1999.

[5] J.M. Spivey. The Z Notation: A Reference Manual. Prentice Hall International Series in Computer Science, 2nd edition, 1992.
