Information gathering and assembly
The first two weeks consisted mainly of gathering and assembling information about existing products related to our work. Several articles were ordered from the library at Luleå tekniska högskola, and some were found on Internet news; most, though, are articles from different magazines. The following sections describe the findings so far and summarize the important issues in these articles. We have concentrated on articles that discuss requirements engineering, requirements traceability, graphical user interfaces, prototypes, and tools for developing graphical user interfaces.
Participants in this search were Mikael Nyström and Kåre Synnes, final-year students in the Master's programme in computer science and engineering, currently doing their practicum at Erisoft.
Written by Ramón D. Acosta et al. (1994)
Rapid prototyping techniques have been recognized as an important technology for requirements engineering. By developing and exercising executable prototypes as part of the requirements specification process, it is possible to address the well-known problems of ambiguity, incompleteness, and inconsistency in capturing requirements for complex software systems. The Requirements Engineering Environment (REE), under development at Rome Laboratory since 1985, provides an integrated toolset for rapidly presenting, building, and executing models of critical aspects of complex systems. This article presents an overview of the REE toolset. Modelling aspects covered in the study include designing user interfaces and iteratively modifying functional prototypes.
Ideally, requirements describe external, user-visible characteristics rather than internal system structure. They also specify the constraints placed on what is needed. Requirements engineering is the activity of forming a model based on the requirements and then validating that the model accurately represents what is needed. Development of evolutionary prototypes, with continuous user involvement and review, can reduce the risk, cost, and time associated with requirements specification and the construction of software systems.
Progress in developing rapid prototyping environments, however, has been slowed by the lack of unifying models and technology capable of representing the complex data relationships associated with requirements. Rome Laboratory (RL) has been conducting a research and development program in requirements engineering since 1985. One result has been the evolutionary development of the REE, an integrated set of tools that allows systems analysts to rapidly build functional, user interface, and performance prototype models of system components. The major components of REE are Proto, the Rapid Interface Prototyping System (RIP), and an interface routine package that integrates Proto and RIP. Proto is a rapid prototyping computer-aided software engineering (CASE) system that supports specification and design of systems incorporating both sequential and parallel processing elements. RIP is a collection of tools that support building, executing, and analyzing user interface prototypes. Access to all of the RIP capabilities is provided through graphic-, menu-, and template-driven interfaces, allowing requirements engineers who are not programmers to readily utilize the system.
The RIP component of REE contains a set of tools to prototype user interfaces. A user interface prototype can model a system's screen contents and layout as well as execute its associated functions. The REE technology developed under the RL program provides a toolset for requirements engineering through rapid prototyping of functional, user interface, and performance aspects of critical system components. Incorporating this technology helps eliminate ambiguities, incompleteness, and inconsistencies in requirements. Increased precision of requirements, in turn, leads to improved quality in delivered software products, as well as reduced cost and improved predictability of schedules.
Authors: Orlena C. Z. Gotel & Anthony C. W. Finkelstein (1994)
This paper investigates and discusses the underlying nature of the requirements traceability problem. It introduces the distinction between pre-requirements specification (pre-RS) traceability and post-requirements specification (post-RS) traceability, to demonstrate why an all-encompassing solution to the problem is unlikely, and to provide a framework through which to understand its multifaceted nature. It also reports how the majority of the problems attributed to poor requirements traceability are due to inadequate pre-RS traceability, and shows the fundamental need for improvements here.
Despite many advances, RT remains a problem area widely reported by industry. This article attributes this to inadequate problem analysis. Definitions of "requirements traceability" are discussed in detail later, though the following are provided for orientation:
- Requirements traceability refers to the ability to describe and follow the life of a requirement, in both a forwards and backwards direction.
- Pre-RS traceability refers to those aspects of a requirement's life prior to inclusion in the RS.
- Post-RS traceability refers to those aspects of a requirement's life that result from inclusion in the RS.

Numerous techniques have been used for providing RT, including: cross-referencing schemes; keyphrase dependencies; templates; RT matrices; matrix sequences; hypertext; integration documents; assumption-based truth maintenance networks; and constraint networks. These differ in the quantity and diversity of information they can trace between, in the number of interconnections between information they can control, and in the extent to which they maintain RT when faced with ongoing changes to requirements.
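One of the traceability techniques mentioned above, the RT matrix, can be sketched as a simple structure relating requirements to the artifacts that trace to them. This is a hypothetical illustration of the idea, not code from the article; all identifiers are invented:

```python
# Hypothetical sketch of a requirements traceability (RT) matrix:
# each recorded pair says that an artifact can be traced back to a
# requirement, supporting both forward and backward traces.

class TraceMatrix:
    def __init__(self):
        self.links = set()  # (requirement_id, artifact_id) pairs

    def link(self, req, artifact):
        self.links.add((req, artifact))

    def forward(self, req):
        """Forward trace: artifacts spawned by a requirement."""
        return sorted(a for r, a in self.links if r == req)

    def backward(self, artifact):
        """Backward trace: requirements an artifact derives from."""
        return sorted(r for r, a in self.links if a == artifact)

m = TraceMatrix()
m.link("R1", "design-3")
m.link("R1", "test-7")
m.link("R2", "design-3")

print(m.forward("R1"))         # ['design-3', 'test-7']
print(m.backward("design-3"))  # ['R1', 'R2']
```

The sketch also shows why such techniques degrade under change: every ongoing modification to a requirement must be re-propagated through the recorded pairs.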
Many commercial tools and research products support RT, primarily because they embody manual or automated forms of the above techniques.
- General-purpose tools include hypertext editors, word processors, spreadsheets, database systems, etc.
- Special-purpose tools support dedicated activities related to RE, and some achieve restricted RT.
- Workbenches contain a collection of the above to support coherent sets of activities. Less restricted RT can be achieved, but the quality depends on the focal workbench activity. They are typically centered around a database management system, and have tools to document, parse, organize, edit, interlink, change, and manage requirements.
- Environments, which integrate tools for all aspects of development, can enable RT throughout a project's life.

The traceability problem still exists because, to date, techniques have been thrown at the RT problem without any thorough investigation of what the problem is. One problem is the lack of a common definition of "requirements traceability", either among practitioners or in the literature. The definitions currently in use were found to be either: purpose-driven (defined in terms of what it should do); solution-driven (defined in terms of how it should do it); information-driven (emphasizing traceable information); or direction-driven (emphasizing traceability direction). No single definition covers all concerns. How, then, can RT be coherently and consistently provided if each individual has his or her own understanding of what RT is?
Each practitioner also had his or her own understanding as to the main cause of the RT problem. It was found that the phrase "RT problem" is commonly used as an umbrella for many problems, and that RT improvements are expected to yield solutions to further (and even ambitious or conflicting) problems.
The definition of requirements traceability most commonly found in literature is:
"A software requirements specification is traceable if (i) the origin of each of its requirements is clear and if (ii) it facilitates the referencing of each requirement in future development or enhancement documentation" (ANSI/IEEE Standard 830-1984).
This definition specifically recommends backward traceability to all previous documents and forward traceability to all spawned documents. Together with a definition of the word "trace" found in the Oxford English Dictionary, the following definition of RT was derived:
"Requirements traceability refers to the ability to describe and follow the life of a requirement, in both a forwards and backwards direction."
The investigations of this article further suggest that RT is of two basic types:
"Pre-RS traceability, which is concerned with those aspects of a requirement's life prior to its inclusion in the RS."
"Post-RS traceability, which is concerned with those aspects of a requirement's life that result from its inclusion in the RS."
The authors of the article emphasize the pre-RS and post-RS separation, because RT problems in practice were found to centre around a current lack of distinction here. Although both these types of RT are needed, it is crucial to understand their subtle differences, as each type imposes its own distinct requirements on potential support. Post-RS traceability depends on the ability to trace requirements from, and back to, a baseline (the RS), through a succession of artifacts in which they are distributed. Changes to the baseline need to be re-propagated through this chain. Pre-RS traceability depends on the ability to trace requirements from, and back to, their originating statement(s), through the process of requirements production and refinement, in which statements from diverse sources are eventually integrated into a single requirement in the RS.
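The two directions of tracing can be pictured with a small hypothetical sketch: a requirement in the RS links backwards to the originating statements it was refined from (pre-RS) and forwards to the artifacts it is distributed into (post-RS). All names below are invented for illustration and do not come from the article:

```python
# Hypothetical sketch of the pre-RS / post-RS distinction. The RS is the
# baseline; pre-RS traceability runs back to diverse source statements,
# post-RS traceability runs forward through a succession of artifacts.

requirement = {
    "id": "R1",
    "text": "The system shall log every failed login attempt.",
    # pre-RS: source statements integrated into this single requirement
    "origins": ["interview-with-admin-04", "security-policy-draft-2"],
    # post-RS: artifacts in which the requirement is distributed
    "artifacts": ["design-spec-9", "module-auth.c", "test-case-17"],
}

def trace_pre_rs(req):
    """Trace a requirement back to its originating statements."""
    return req["origins"]

def trace_post_rs(req):
    """Trace a requirement forward through its spawned artifacts."""
    return req["artifacts"]

print(trace_pre_rs(requirement))
print(trace_post_rs(requirement))
```

The sketch makes the asymmetry concrete: post-RS links start from a fixed baseline, while pre-RS links must survive the messier process of requirements production and refinement.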
Existing support mainly provides post-RS traceability. This is not sufficient, because it generally treats an RS as a black box, with little to show that the requirements are in fact the end product of a complex and ongoing process. Most of the problems attributed to poor RT were found to be due to lacking or inadequate pre-RS traceability.
Having identified insufficient pre-RS traceability as the main contributor to continuing RT problems, and shown how it is likely to be the only contributor in formal development settings, the investigations were re-focused to determine: what improvements in pre-RS traceability would involve; and how these could be realized.
The challenge then lies in satisfying both the end-users and the providers. For end-users, pre-RS traceability must be sensitive to contextual needs, but they cannot predefine their anticipated requirements for it. The providers must identify and document relevant information, in a (re)usable form (either as a by-product of other work or through more explicit support), but they cannot foresee and address all possible needs.
This article focuses the solution to the pre-RS traceability problem on those basic requirements for which some solutions already exist, and it also makes recommendations for additional research. An increasing awareness of information is necessary. Much progress has been made in the ability to obtain and record diverse types of RE information. Also, to support iterative development, information requires flexibility of contents and structure. Relevant work includes, among others, the use of hypertext to provide explicit visibility of structure and maintain relations. RT is predominantly hardwired, predefining what can be traced and how it is presented. Developments in areas such as information retrieval, artificial intelligence, and human-computer interaction are often pertinent.
Surprisingly, the inability to locate and access the sources of requirements and pre-RS work was the most commonly cited problem across all the practitioners in their investigations. This problem was also reported to be a major contributor to others: An out of date RS; Poor collaboration, as the invisibility of changing work structures and responsibilities makes it difficult to transfer information amongst parties, integrate work, and assign work to those with relevant knowledge and experience; and other problems.
In projects consisting of individuals split into a number of teams, the location and access of sources was found to be either impossible, time consuming, or unreliable. This was due to: a lack of shared or project-wide commitment; information loss; inability to assess the overall state of the work or knowledge; little cross-involvement; poor communication; minimal distribution of information; and changing notions of ownership, accountability, responsibility, and working structure. Characteristics that reduced its occurrence were found in projects consisting of few individuals, due to: a clear visibility of responsibilities and knowledge areas; clarity of working structures; team commitment and ownership; and individuals who acted as common threads of involvement (also contributors to success).
Notions like ownership and responsibility are often transient. The ability to locate relevant individuals therefore deteriorates as the volume and complexity of communication paths grow over time. RT problems will persist when accurate responsibility cannot be located and these individuals cannot be accessed for the informal communication often necessary to deal with them.
In conclusion, to achieve any order-of-magnitude improvement in the RT problem, there is a need to re-focus research efforts on pre-RS traceability. Of particular concern is the intrinsic need for the on-going ability to rapidly locate and access those involved in specifying and refining requirements, to facilitate their informal communication. Continuous and explicit modelling of the social infrastructure in which requirements are produced, specified, maintained, and used (reflecting all changes) is fundamental to this re-orientation.
Authors: Jawed Siddiqi et al. (1994)
Requirements engineering is fraught with possibilities for misunderstanding and mistakes, and it is well known that the earlier such errors occur in the lifecycle, the more costly the consequences. Formal specifications provide, from a developer's perspective, a clear, concise, and unambiguous statement of the system requirements. Prototyping enables effective user participation in the validation of requirements. This article reports on work towards a system that judiciously combines the strengths of formal specification and prototyping to assist in the construction, negotiation, clarification, discovery, and formalisation of requirements, which could make the crucial activity of requirements engineering less problematic.
The first stage in the software lifecycle consists of arriving at a set of requirements, which are traditionally expressed informally in a requirements definition document. The formal methods approach of "build it right the first time", whilst having much to commend it, is as yet not sufficiently mature for widespread industrial use.
More pertinent in this context is the lack of recognition of the role of iteration in software development, and the lack of early feedback on whether we are building the right product. Prototyping provides one way of resolving these two problems. The purpose of prototyping is primarily to aid the task of analyzing and specifying user requirements, although it can also be used to study the feasibility and appropriateness of a system design, enabling the developer to contrast and compare the merits of alternative designs. The prototype becomes an effective communication medium which enables the developer and customer to learn about each other, without requiring them to have an in-depth understanding of each other's fields.
A more precise distinction between throw-away and evolutionary prototyping is made. The arguments are that a throw-away prototype should be built as quickly (and cheaply) as possible, and should implement only requirements that are poorly understood. Experimental use of the prototype will reveal which of the alleged requirements are real and which are not. The prototype is then discarded, and the developer incorporates what has been learned into a new requirements definition, and this forms the basis of a full-scale implementation. In contrast, an evolutionary prototype is built rigorously (i.e. subject to the usual constraints which would normally be imposed on software construction), and should implement only already confirmed requirements. Experimental use of this prototype will uncover unknown requirements - those that have not yet been thought of. To summarize:
"A throw-away prototype implements a poorly understood requirement and migrates it to the well-understood class. An evolutionary prototype implements a well-understood requirement, expecting users and developers to uncover previously unknown requirements."
The figure above depicts the software development model for SCAZ, a system for Specification, Construction and Animation in Z, which can be incorporated in any software development model that has early activities corresponding to requirements capture and formalisation. It maximizes the strengths of formal specification and of prototyping (i.e. providing the developer with early feedback from users), while minimizing the weaknesses of both. SCAZ supports both throw-away and evolutionary prototyping based on Z specifications. SCAZ has essentially two components: ZED, a full-screen editor for constructing and syntax-analyzing Z specifications, and ZAL, a tool for transforming Z specifications into an executable version in LISP.
Author: Wade Guthrie (1994)
Source: Internet news
This is a summary from Wade Guthrie's FAQ concerning PIGUI - Platform Independent Graphical User Interface.
A PIGUI toolkit is a software library that a programmer uses to produce GUI code for multiple computer systems. A PIGUI will probably slow down the execution of the code and limit the programmer's options to the feature set specified by the PIGUI. It only deals with the GUI aspects of a program.
There are three approaches to providing platform independence. The two most common are the "layered" and the "emulated" user interface, but an up-and-coming approach is the "API emulated" interface.
Products using a layered interface access native, third party, GUI-building toolkits to provide the look-and-feel compliance for a particular GUI. They have the advantage that, since they depend on other products which concentrate on a single GUI, they have to provide less software. They are also most likely to get the native look-and-feel correct on all platforms. Most of the PIGUI products in this document fit in this category.
In an emulated user interface, the PIGUI's resultant code produces low-level calls, and all the look-and-feel compliance is handled by the PIGUI software itself. This has the advantage that someone on a Motif workstation, for example, can see how the Macintosh-style UI will look. Emulated interfaces provide a faster GUI than layered interfaces do, and they do not require you to purchase (or learn how to use) other kits to build GUI software.
The third approach is to emulate one of the supported target's APIs to target other GUIs.
All products mentioned here are pretty similar in their basic functionality; they each provide function calls or classes that allow the user to build windows, buttons, menus, menu bars, and the like.
Explanation of the following table:
(a) This product is free for non-commercial use. If you make a profit, you'll have to check with the vendor for pricing and availability.
- If information for a cell is unknown, a period (`.') is placed there.
- If a platform is not supported, a hyphen (`-') is placed in the cell.
- `soon' means that this platform will be supported soon.
- If a price is known, that price is inserted, otherwise a `yes' means that the platform is supported.
Table 1: Platform, price, and features (only the relevant tools and platforms are shown)

Product, Vendor               Win     Win/NT  Motif   Mac     Type     Eval  Lang
Aspect, Open Inc.             $1495   soon    $2495   yes     .        30    C
Don's Class (DCLAP)           (a)     .       (a)     (a)     .        free  C/C++
Guild, Guild                  $895    $895    soon    -       .        .     C
JAM, JYACC                    yes     .       yes     .       layered  .     C
ObjectViews                   yes     yes     yes     yes     layered  .     C++
StarView                      $499    soon    soon    soon    layered  30    C++
SUIT, University of Virginia  (a)     .       (a)     (a)     .        free  C
wxWindows                     free    soon    free    free    .        free  C++
zApp, Inmark                  $495    $495    soon    soon    layered  60    C++
Zinc, Zinc                    $299    $299    $1499   $299    layered  60    C++
(b) Zinc requires a one-time purchase of the Zinc GUI engine ($499); the individual GUIs to be supported are then added on.
The lower-priced group is usually C++-based, is a more recent introduction to the market, is almost always a layered package, and concentrates on PC-based operating systems. The higher-priced group usually offers a more stable platform with both greater breadth and depth than the previous group. In either case, the cost premium for UNIX support is usually a factor of 3.
Source: UNIX Review, October 1993, pp. 65-72
- Aspect, V1.2
This is a C library, though the calls are type-safe for C++ compatibility. It is missing a help system and error-handling support.
- Don's Class Application (DCLAP) library
This is a (free-of-charge) barebones C++ application framework with no detailed information available. It may be obtained by anonymous FTP.
- Guild

This is a C-language library, but the calls are type-safe for C++ compatibility. The package includes a GUI builder and an event-occurrence monitor. The vendor gives three months of free phone tech support and is working on a Unix/Motif version.
- JAM/Pi 5.03 (6.0 is reportedly on the way)
This is a C-language library. The package includes a GUI builder.
- ObjectViews C++
This is a full C++ class library. It is a superset of a non-proprietary API based on "InterViews".
- StarView 2.1
This is a full C++ class library that comes with their DesignEditor which creates resource files. It also comes with several general-purpose C++ classes including Strings and a very complete complement of container classes (e.g., Queues, Lists, and Tables). Lots of users really like StarView, although many have complained about their technical support.
- Simple User Interface Toolkit (SUIT), v2.3
SUIT is a (free-of-charge, with strings attached) C-language library. It comes with source, a 10-page tutorial, and a 160-page reference manual. SUIT's prime directive is ease of learning. The software has the unusual trait that its user interface is editable even while a SUIT application program is running.
- wxWindows, V1.5
This is a (free-of-charge) C++ library with source. It includes hypertext help, printer support, and more.
- zApp, v2.0
This is a full C++ class library. It comes with the zApp Programmer's Guide (330 pages) and the zApp Programmer's Reference (890 pages). They have a GUI builder, called Object/Designer, which is offered as an aftermarket product for $499. Free support is given forever. It is apparently MS-Windows-oriented.
- Zinc, v3.6
This is a full C++ class library that comes with the Zinc Designer (a WYSIWYG GUI builder). It also comes with 4 manuals.
The same graphical user interface (GUI) that makes it easy for a customer to use an application makes it hard for a programmer to write one. GUIs require thousands of lines of code and may need to be rewritten when an application is ported to a different architecture. Many developers of multi-platform applications find that portable GUI builders speed software development by providing tools to graphically lay out screens and then automatically generating a compliant interface for each target architecture.
This article examines five GUI builders that run under Motif and Microsoft Windows. The five products reviewed are Open Interface 2.0.1 from Neuron Data Inc., Aspect 1.2 from Open Inc., Opus 2.03 from WNDX Software Inc., XVT Design 2.0 from XVT Software Inc., and Zinc Application Framework 3.5 from Zinc Software Inc.
Portable GUI builders let you concentrate on better functionality and the portability issues of the application internals. GUI builders also give immediate feedback on the usability of an application, allowing you to prototype an interface before fixing the design specifications.
All GUI builders work in the same basic manner: They include a mechanism for placing objects on your window, menu, or dialog. You can edit the attributes that affect the behavior and appearance of the objects. In addition, the libraries in these GUI builders provide a higher level of functionality than native GUI libraries and some GUI builders provide the ability to hook up subordinate objects (pull-down and pop-up menus, dialogs, and so forth).
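The common working model described in the paragraph above can be sketched as a minimal object hierarchy: widgets are placed on a container, their attributes are edited, and subordinate objects such as menus are hooked up. This is a hypothetical illustration, not the API of any product reviewed in the article:

```python
# Hypothetical sketch of the GUI-builder model: place objects, edit
# their attributes, and hook up subordinate objects (menus, dialogs).

class Widget:
    def __init__(self, kind, **attributes):
        self.kind = kind
        self.attributes = attributes   # behavior and appearance
        self.children = []             # subordinate objects

    def set_attribute(self, name, value):
        """Edit an attribute, as the builder's attribute editor would."""
        self.attributes[name] = value

    def attach(self, child):
        """Hook up a subordinate object, e.g. a pull-down menu."""
        self.children.append(child)
        return child

window = Widget("window", title="Main")
menubar = window.attach(Widget("menubar"))
file_menu = menubar.attach(Widget("menu", label="File"))
file_menu.attach(Widget("item", label="Save"))

print(window.children[0].children[0].attributes["label"])  # File
```

A builder then serializes such a hierarchy into generated code or a resource file for each target platform, which is what makes the layout portable.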
These portable GUI builders were evaluated on the scope and type of widgets; the capabilities of the screen widgets, including the features shown in the section "Best-buy features"; the extent of the library's functionality; ease of start-up, including installation, documentation, learning curve, and ability to work with current code and development practices; and the usability of the generated code (C, C++, resource files) and its compatibility with third-party compilers.
The evaluation began by learning how to use the software. Each package was then used to build a test GUI, which exercised all the widgets: radio buttons, fields with scrollbars, buttons with bitmaps, horizontal and vertical scroll boxes, progress bars, and pull-down and pop-up menus.
The test GUI was created on a 486-based PC with 16MB of RAM and a SuperVGA card. The PC was running DOS 5.0, Windows 3.1, and both Borland and Microsoft C/C++. GUIs for all packages but Zinc were compiled as C; Zinc was used with C++. The portability of each package was then tested by moving the GUI to a Sun SPARCstation 1+ under SunOS 4.1.2. Quest's Motif builder was also installed and used on the Sun; each package operated similarly to its MS Windows version.
Overall, the best low-priced product is Opus from WNDX. At the high end it's a close call between Zinc Application Framework, XVT Design and Open Interface (OI). OI wins on features, but applications built with Zinc and XVT are royalty-free.
Neuron Data's Open Interface and its GUI designer, Open Editor, come with three manuals of daunting size - the programs are loaded with features. Installing OI takes more effort than the other packages. The look of your interface defaults to an appropriate value at start-up, but it can be switched at run time.
A graphical palette of widgets provides easy access without having to memorize widget names. OI includes a resource browser, making it easy to get to each visual component of your application or to the resource library.
Of the GUI builders reviewed here, OI gives you the greatest control over the look and feel of your GUI by letting you pick the equivalent fonts to use for each platform and define the actions of the keyboard keys for each platform. You can even define combinations of keyboard keys and mouse keys. Open Editor also has the easiest mechanism for designing menus.
Open Interface does not provide help-system support for your applications. You can test the whole GUI, but this general test mode is of limited use in that it does not allow activation of submenus and other widgets. A scripting language in the next release, scheduled for fall of 1993, addresses this testing limitation. That release also includes new built-in widgets, C++ support, and the ability to link to relational databases from within Open Interface.
In creating the test GUI, we discovered the enormous flexibility of OI. OI also saves you the extra effort of creating makefiles by generating them automatically.
Open Interface/Open Editor is a powerful product with a larger-than-average learning curve. Be aware that Neuron Data charges run-time royalties. If you need to create complex applications and control their look exactly on all platforms, OI will fill your needs.
Open Inc.'s Aspect makes it easy to lay out a GUI's windows and dialogs, but that is all it does. This release will not generate C code, and it does not provide image editing for bitmaps and icons.
It's a good thing Aspect is easy to learn because it doesn't have on-line help, and its documentation is poor, lacking such basics as page numbers and an index.
On the positive side, Aspect is particularly flexible in some of its layout options.
On the negative side, sliders and progress bars were not available in the builder, nor were the basic graphical widgets.
The creation of the test GUI took a few hours, with most of the time spent inventing graphical widgets and replacements for the progress bar and slider.
Aspect needs additional features before it is ready to be used for serious GUI development. A new version scheduled for fall of 1993 has a code generator, as well as built-in standard dialogs, icon and image support, and a C++ class library. However, this next release does not include help-system or error-handling support.
Developers already using a GUI builder will have a small problem learning WNDX's Opus because it follows a different paradigm than other packages. Rather than attribute editing tailored to the widget type, Opus presents the same list of attributes for every widget. Some attributes aren't appropriate, but you won't be warned; they just won't do anything. If you get confused when using Opus, consult the on-line help: it is sensitive to the current screen and, during attribute editing, to the currently selected attribute.
Example programs worked flawlessly, but building the test GUI revealed a few problems with WNDX's generalized attribute-editing approach. This method gives no direct way of putting images on buttons.
XVT has some features that greatly ease GUI development. You can group radio buttons logically in a box and make them exclusive to each other, which gives you complete freedom in the layout of buttons. The text edit field can provide automatic vertical and horizontal scrollbars, and you can set whether users are allowed to cut, copy, and paste to and from this field.
XVT's ability to generate makefiles automatically for an interface's target platforms made it simple to port the test GUI to the Sun. We simply created the makefile, transferred all the code, and recompiled.
XVT Design is well suited for high-end application development. Plus, it handles the drudge work with its abilities to accept direct code entry in the builder, generate makefiles, and automatically test an interface's logic.
Zinc is the easiest GUI builder to learn. It comes with thorough on-line context-sensitive help and three compact, comprehensive manuals. Zinc even bundles training videos with some configurations. And, to help you make your application as easy to learn as Zinc, the builder comes bundled with an excellent help editor that allows you to create context-sensitive help for your GUI.
Zinc's most distinctive feature is the wide variety of tools supplied for creating bitmaps and icons. A third-party paint program is still superior for creating large bitmaps with more colors, but this built-in image editor is the best of those that came with the GUI builders reviewed in this article.
These are the features of the best GUI builders.
Additional portability functions help you write your application internals in a portable manner, with the GUI library working out the platform differences; Open Interface and XVT provide the most such functions.
Automatic scroll bars appear only when there are more items or lines than can be displayed in the current field, as opposed to regular scroll bars, which appear on every field regardless of the length of its contents. Automatic scroll bars help the user know that a field is empty; with a regular scroll bar, the user must operate the scroll bar to check whether a field that appears empty really has no contents.
Bitmap editor: see image or icon editor.
Callbacks are functions that are called when a widget is manipulated by the user, such as displaying a dialog to choose a file name when a Save widget is selected. All GUI builders have the ability to hook up callbacks, but some make it more convenient and flexible than others.
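The callback mechanism just described can be sketched as follows. This is a hypothetical illustration of the hookup pattern, not a particular builder's API:

```python
# Hypothetical sketch of callback hookup: a function registered against
# a widget is invoked whenever the user manipulates that widget.

class Button:
    def __init__(self, label):
        self.label = label
        self._callbacks = []

    def on_click(self, fn):
        """Hook up a callback; builders differ mainly in how convenient
        and flexible this registration step is."""
        self._callbacks.append(fn)

    def click(self):
        """Simulate the user manipulating the widget."""
        for fn in self._callbacks:
            fn(self)

events = []
save = Button("Save")
# e.g. a callback that would display a file-name dialog for Save
save.on_click(lambda w: events.append(f"choose file name for {w.label}"))
save.click()
print(events)  # ['choose file name for Save']
```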
Error-handling support supplies built-in routines that provide error messages and logging. Ideally, error-handling support adheres to the current platform's standard style (Motif, MS Windows, and so forth). This saves the programmer the effort of developing error handling and helps the user by presenting error messages in a familiar style.
Graphical palettes give you the ability to pick widgets for your interface by clicking on a graphical representation and then placing the widget on your layout window graphically. This lets the programmer work more efficiently: you recognize icons faster than words, and you do not need to remember a window system's particular name for a particular widget.
Help-system support makes it easy for you to develop on-line context-sensitive help for your applications. The name of the game in GUIs is user friendliness and a quick end-user learning time. A good GUI builder's help-system support handles the portability issues.
Image or icon editors let you create graphical images that identify buttons or other widgets on your GUI's screen. A built-in editor for icons and bitmaps is a necessity when the GUI builder uses its own graphical format; if a decent image editor is provided with the GUI builder, the programmer does not need to learn a new paint program for each platform.
Internationalization support is the ability to present interfaces in languages and formats that are specific to different countries and locales. Applications built with GUI builders are inherently easier to internationalize than hard-coded applications because builders store all strings in a separate resource file.
Progress bars show how complete a task is while it is running. An important part of user friendliness is telling the user how much longer there is to wait. A progress bar provided by the GUI library makes this feature easier to include in your application.
Sliders let a user set a value by sliding a pointer on a scale.
Test mode lets you try out your interface. All GUI builders provide some form of test mode. At the lowest level, a test mode only checks that a widget's appearance changes appropriately when it is selected; the better test modes keep logs of the underlying actions that occur when a widget is selected. However, all test modes are useful for proving that radio buttons are grouped correctly.
Author: Bengt Asker (1990)
Source: Ericsson Review, 3, 138-146
A modern telecommunications network is a complex system. Automation is increasing rapidly, but so are the number of functions and services that are available. Because of this, the need for human surveillance and guidance is increasing too. It is obvious that the interaction between man and machine has a very prominent role, and that a clear and precise dialogue, one that as far as possible protects the operator from making expensive mistakes, is an important demand on products.
Today, monitors with high resolution, and often rich color palettes, are used. Memory and processing power are also inexpensive. Because of this, the dialogue may take on greater importance when implementing various systems. This brings the real problem into focus: how should a good dialogue be constructed, and how do you achieve it? Before this question is answered, the possibilities offered by today's technology are presented.
A terminal consists of three parts, which are well known today: monitor, keyboard, and mouse. The second important component is the computer itself, where the application is executed and where the user interface is built up and controlled. It is most often a standardized UNIX computer. Graphical components refer to all graphical objects presented on the screen. The architecture of the user interface is another issue that is discussed. One of the foremost requirements on a good user interface is fast response to input from the operator. UNIX has by tradition been a multi-user system, where several users connect to one server by means of simple text terminals. Efforts to achieve the same with graphical user interfaces were therefore begun early, and this has been accomplished today. The most common solution is X windows, a high-level protocol developed at MIT (Massachusetts Institute of Technology). This protocol is now a stable de facto standard in industry, and it is only a matter of time before it in fact becomes an ISO standard.
X windows is a high-level protocol, but not so high that it dictates the look-and-feel of the user interface. This is called policy-free. It is of course an advantage to have a standard at a low level that allows for variations higher up; this is the same principle as that of the ISO protocol stack. However, for those who develop applications, this low level has some drawbacks:

- It is hard and takes a lot of time to implement the user interface.
- The end user does not get a uniform interface if every application programmer decides how menus and other components shall look and behave.

Therefore, programming packages exist on a higher level that accept commands to generate complete menus, panels, scrollbars, etc. These packages thereby impose a certain style on the user interface, which makes it consistent at this level. The problem here is that no single standard exists. In the UNIX community there are two camps: one supports what is called Open Look, and the other Motif. The differences between these two user interfaces are not dramatic, but big enough to cause problems for both developers and end users.
Ericsson uses Open Look, and all examples in this article are gathered from it. Even with toolkits such as Open Look and Motif, a lot of effort is required to achieve a good user interface. The next higher level of tool is called a User Interface Management System (UIMS). These enable the developer to interactively create menus, panels, and the like, insert texts, and to some extent evaluate and test the interface. UIMS programs exist on the market today, among them a Swedish one called TeleUse, which was initially developed in the BaseOpen project. It is stated, however, that the technique is not yet mature.
A good user interface should be:

- Effective;
- Intuitive;
- Consistent; and
- Respond quickly.

The last two properties can probably be handed to a developer with the expectation that they are understood. The former two, which are perhaps the most important, are more difficult to handle as specifications.
A user interface is effective if it demands the least possible interaction from an experienced user to achieve a certain result.
Intuitiveness in a program means that, in a wide sense, the program should behave as expected. But the person who writes the specifications and the person who implements the program probably do not have the same expectations as the one who will eventually use it. It is the user that controls the machine, and not the other way around.
A new profession is needed - and is beginning to evolve - with education in psychology, to specify the user interface. These specialists foremost use their knowledge and experience, and complement them with prototyping. This means that a more or less complete version of the system is developed relatively fast, so that people representative of the end users can evaluate it. By observing and interviewing the test crew and adjusting the user interface in an iterative fashion, the expert can arrive at a solution that will probably be used in the final product.
A necessary but far from sufficient condition for a consistent user interface is that a tool is used that enables the creation of a certain standard look, e.g. Open Look or Motif. Several of the demands are contradictory: an effective interface is not necessarily the easiest one to learn.
Driveability means that a subset of the complete user interface is created, and if it is possible to standardize this subset and agree upon it, it becomes easy for the operator to work with the system. The important issue here is the behavior of the system, not its appearance. Standardization is therefore limited to the functions that are important and where misuse may have great consequences.
The term driveability is borrowed from the automobile industry. Even though the driver's seat differs between cars, it is usually possible to drive a car without having to read the owner's manual. You might have some trouble with the windshield wipers, but you can be assured that the brake is situated to the left of the accelerator, and so on.
User interfaces will continue to demand a lot of computer resources; the need for memory and processing power is far from satisfied. Graphics, different fonts, extensive texts, and usage examples demand a lot of storage capacity.
Another development that is coming into focus is the tailoring of interfaces to a specific user. This may be achieved more or less automatically with the aid of so-called artificial intelligence; the computer learns the specific preferences of each user without having to be explicitly informed about them.
As the technical possibilities increase, so do the risks of creating a hard-to-understand, difficult-to-use, and perhaps over-worked user interface. It will therefore become more and more important to involve professionals educated in psychology at an early stage of the specification process. The user interface is, and always will be, what gives the first impression of a product.
Author: Stephen J Andriole, Drexel University
Editor: Alan Davis
Source: IEEE Software, March 1994
In this report from the trenches, Steve Andriole, who has run his own requirements-modeling and prototyping business since 1980, explains why he believes prototyping is a prerequisite to specifying requirements.
Over the years, with the help of colleagues from industry and academia, Andriole has identified what he thinks is a requirements-modeling and prototyping process that is fast, powerful, cost-effective, sane, and objective. Time after time, they have been accused of

- impeding design and development,
- accelerating design and development on the basis of false assurances about the accuracy of their requirements,
- spending too much money bothering users,
- writing too much down,
- not writing enough down, and (the unkindest cut)
- doing stuff that is boring and fundamentally unimportant.

Partly in response to this criticism and partly out of plain defensiveness, they developed a requirements-modeling and prototyping process that seems to work very well. The main lesson they learned is that throwaway prototyping (sometimes called exploratory prototyping) is always cost-effective and always improves specifications.
Figure 2 diagrams their process, which is designed to be consistent with widely accepted software-engineering objectives.
Those who want more serious attention paid to requirements must also come to grips with the realities of the requirements-analysis and prototyping process, which are listed at the end of this section. The steps of the process are:
- Elicit initial requirements. If you can't find some knowledgeable, articulate users, then rely on your team's domain knowledge - and get to prototyping fast so users can react to something tangible.
- Model requirements. After talking with several users, designers, and managers, you will be confused. You must convert this raw requirements data into a more understandable, manipulable form. You need a model. Modeling and prototyping is an inherently iterative process that is never complete, mainly because users are often unavailable. Why? Because the whole process is imprecise and ambiguous. You are always at the mercy of your last model or prototype.
- Identify constraints. Truth be told, constraints are more of a factor on any project than developer creativity. You are far more likely to hear what you cannot do than what you can.
- Prioritize initial requirements. As a requirements analyst, you must stay objective: prioritize requirements with respect to each other and with respect to constraints. What falls out constitutes a reasonable design target - no more, no less. Then you should show the prioritized list around, and be prepared to modify it. It was learned - hold on to your seats - not to take the modeling and prioritizing process too seriously. This is not to say you don't need models or priorities. In fact, to get to prototyping, you must know something about the application and its developmental constraints. And therein lies the rub: analyses, models, priorities, and designs are necessary inputs to prototyping, but by no means do they guarantee the production of a killer prototype everyone will swoon over.
- Design. Try to develop a detailed description of the system's external behavior - how it will look and feel to its users - and its internal organization: its data structures, architecture, and algorithms.
- Evaluate designs. Here, it's especially important to identify key drivers: What features will make or break the system? Make sure all stakeholders have a say in identifying key drivers.
- Specification. The "winning" system design now moves on to specification, in which you add more details about the system's internal and external behavior. It is at this point that they ask what is needed to transition to a prototype. If a lot of knowledge about the requirements exists and there have been relatively few conflicts about priorities and trade-offs, one moves to evolutionary prototyping. If either one of those isn't true (which is usually the case), one moves to throwaway prototyping.
- Interactive prototyping. Input during design and development is always over-shadowed by feedback from prototypes. Detailed requirements should never emerge without prototyping feedback. This is obviously true when throwaway prototypes are built, but it also holds during the initial evolutionary iterations.
- Requirements validation. Two important ingredients are process templates and experienced professionals, focused squarely on the objectives. A template is a step-by-step guide for moving toward a relative consensus about what the system must do and how it should do it. Finally, it is very important to appreciate the fluid, changing nature of requirements and the role prototyping plays in the discovery process. In spite of your best efforts, key requirements may elude you.
These are the realities of requirements analysis and prototyping:

- It takes time. Requirements are vague, ambiguous, and buried in business processes we barely understand and have little time to investigate.
- It costs money. Anecdotes and testimonials aside, we still cannot predict the savings likely to result from enhanced analysis, modeling, and prototyping.
- It takes talent. The skills needed to validate requirements and build prototypes are fundamentally different from those needed to write good algorithms.
- It must be justified. Distinguishing between throwaway, evolutionary, and operational prototypes is an intellectual exercise. The fact is that we must be able to convince management that there is no reason to skip a step in the design process that is fast, cheap, and will always improve the product.
- It is a wild ride. Requirements are discovered, evolve, change, metamorphose, and disappear without warning. Success is measured by how well you can adapt to uncertainty and unpredictability, not by your mastery of a CASE tool.

Author: Carl Gustaf Rumenius
Source: Datornytt, 1991, 9, 22
It is possible to complete the work 10-100 times faster by using prototyping in system development than with traditional design. And by testing the system even before the specification is finished, the chances of creating a final result that satisfies the demands of the final application increase greatly.
Prototyping has a special meaning in specification processes and ADP development. Prototyping can be done at several levels, depending on what features exist in the toolbox you are using while creating the prototype.
When starting at the lowest level, your toolbox has to contain a 4GL tool. This means that you must have access to an editor, a programming language, a database handler, and a compiler. When you start at higher levels, your toolbox will in addition contain programming modules and complete databases.
In traditional system development, experience has shown that the final product often does not meet the demands of the end user. The most common reason for these problems is that it is hard to define constraints in the specifications. Among other things, this is because new ideas often evolve during the later stages, and because all facts and possibilities are not known in the beginning but surface during the evolution of the project.
The immediate advantages of prototyping are that users and developers can sketch their system fast, without having to conform to a specification developed beforehand. Another important advantage of prototyping is that the development group may reuse programs that were written at an earlier stage or that are available in the toolbox. The specification phase is often quite long compared to the time needed to implement the interface when prototyping is used, but it often gives rise to a better product. The time from the beginning of the project until it is complete is shortened by a factor of 10-100 if prototyping is used in all stages. Prototyping is, however, most often used only in specification and ADP revision.
Author: Johan Rönn
Source: Corporate Computing (1992), December, 69-71
With Open Interface it is possible to quickly and elegantly create a user interface for different platforms in one environment. This article describes the testing of Open Interface on the MS Windows, Macintosh, and Sun/Open Windows platforms.
Today, there are a number of different GUIs to choose between. When you want to port a program to a different platform, you usually have to rewrite the whole interface manually - all calls to routines that create windows, dialog boxes, and menus have their own specific syntax on the different platforms. With Open Interface this can be abstracted away, since you may define an interface that functions on all the above platforms while working on one single platform.
The idea behind Open Interface is to reuse previous work: once you have defined the interface for windows, dialog boxes, buttons, menus, and so on, you should already have done the hard work. This works for two reasons. First and foremost is the fact that all GUIs today have a similar structure. Programs using GUIs are mainly event driven, i.e. they repeatedly wait for some action from the user, and basically all GUIs have the same set of graphical objects with similar functions. The second reason it is possible to define a GUI only once, and then choose another GUI on the fly, lies in the design of Open Interface. Open Interface is built around a module called the Virtual Graphics Machine (VGM). By making all system calls go through this module instead of directly to the actual platform's interface functions, one can guarantee that all calls will work on every platform you are developing for. If, for example, you are working on a Macintosh, you may choose from a menu that your GUI should be shown with Motif or Windows look.
Another advantage with this architecture is that you may actually use graphical objects that are not present in the basic setup for a particular platform.
A graphical object that is not normally present as standard on any platform is a browser. With the aid of a browser, you may easily navigate through structured information sets and look in detail at the level you are interested in.
Open Editor is the program you use when developing interfaces in Open Interface. Open Editor itself consists only of a number of calls to routines in Open Interface, which among other things implies that you may work on a Macintosh and show your environment in Motif or Windows look.
Open Editor is built on the simple idea of symmetry which has given us several graphical development platforms by now: if a call to the operating system with certain parameters results in a graphical object, then why not turn the idea around? Draw the object and generate the code!
When using Open Editor, you create a project and are then shown a browser, where you choose the graphical objects that should be instantiated. You may choose the order of tagging for check buttons, define shortcut keys for buttons and menus, and create default values in fields and list boxes. It is also possible to create your own widgets if you need a special graphical object.
When you are satisfied with your GUI (it is possible to test it), you only have to generate the code for the interface. Open Editor generates four different files: two resource files (one text file and one binary file), a C header file, and finally a makefile tailored for the platform you want to run your application on.
When all of this is completed, you may open the application and run it. Unfortunately, you need to have Open Interface installed on your system. If you want to distribute your applications, you have to buy runtime modules, which after compilation contain as much code as is needed from the VGM. These modules cost about $100 each.
Open Interface is an impressive program with large potential in several different areas. The author of the article believes that, in spite of the rather high cost, purchasers of this product will benefit from it rather quickly, mainly because the development of nice, platform-specific, and labor-saving interfaces is very fast.
Authors: M A Stephens and P E Bates
Source: Information and software technology (1990), vol 32, 4, 253-257
Rapid software prototyping is used in a complex costing and estimating project within the metal-finishing industry. Structured design methods proved unsuccessful in defining the nature of the problem. Interface and functional prototypes are used to gain a better understanding and to help define the boundaries of the system. The importance of user evaluation and feedback is acknowledged as a valuable means of increasing accuracy and improving validation. The investigation reinforces much of the literature but raises further questions of prototyping management, maintenance, and installation.
A specialist metal-plating company identified the need for a more rigorous method of pricing and estimating. Their established pricing and estimating techniques, and hence the quotations produced, relied heavily on the experience of particular specialists. Inconsistencies in pricing occurred due to different interpretations of the policy and the complexity of the processes involved. Pricing and estimating are further complicated by constant changes in trading conditions, particularly metal price fluctuations.
The open-ended brief was to reduce the incidence of erroneous and inconsistent estimates. Current practices in pricing and estimating were to be analyzed, the causes of the problems reported, and an automated system to streamline processing specified. It was discovered that such open-ended briefs are indicative of systems that are particularly suited to prototyping.
Initially, the problem was approached using a traditional structured design method. This proved difficult because the users were uncertain of the details of the present and proposed systems. Hence rapid software prototyping was used to formulate (engineer) system requirements by stimulating a joint learning process between users and developers.
Prototypes may be built for a variety of purposes. These include identifying, learning about, and clarifying user requirements (requirements engineering). They involve the building of a working `mock-up' version of part of, or the whole of, the system. These are demonstrated to users with the intention of maximizing product quality, the assumption being that it is easier to focus on a working version than on paper documents. For this to be successful, developers must encourage users to be constructively critical.
While it is difficult to formulate a universal definition of prototypes, they do exhibit some common characteristics in that they:

- involve a high degree of user evaluation, which substantially affects requirements, specification, or design
- are analogous to experiments built to test hypotheses
- stimulate a joint learning process for users and developers

There are many approaches to prototyping. Hekmatpour and Ince, and Ince, classify prototyping into three categories:
Throw-away prototyping is often used for requirements identification and clarification. As the name suggests the prototype built is `discarded' once the developers and users have evaluated it.
With incremental prototyping, a complete system design is specified, subtasks identified, and prototypes built, evaluated, and modified according to user feedback. The overall design, however, remains unaltered.
Evolutionary prototyping allows a system to evolve, according to user feedback, so that dynamic change is allowed. At the onset of prototyping no system specification need exist; the prototyping process produces the specification.
An alternative classification can be used, dependent on the purpose of the prototype. If it is concerned with system function, it is a functional prototype. If it concentrates on the user interface, it is an interface prototype. However, interface prototyping can also reveal much about function.
This article outlines the use of prototypes that conform to both of the above classifications. Evolutionary and throw-away prototypes are used for both interface design and the discovery of functional requirements.
This work has identified a close correspondence between interface and function, thus reinforcing the view that the major purpose of interface prototyping is to gain insight into users' functional requirements.
During the early stages of the project it was anticipated that analysis of the existing pricing procedures and practices (including documentation) using structured analysis would identify the areas causing difficulty.
During interviews with estimators it was found that some subjects had difficulty in explaining the factors they were considering when calculating the price: attempts to decompose parts of the system often ended with a series of examples of individual circumstances cited, with no easily identifiable underlying procedures. Whatever the reason, it meant that there was inadequate information for the creation of a complete specification that modelled the existing system.
The analysis did, however, provide a partial view of current procedures. It increased both the users' and developers' understanding, and in particular it identified the main determinants for estimating.
This work formed a basis for discussion with management and helped identify the main problems:

- Quotations were inconsistent (most significant).
- Cross-referencing of previous quotations was too difficult, because of insufficient indexing and the lack of a standard layout.
- Manual quotations were expensive to produce and modify.

Furthermore, the relationship between costs and prices remained unclear, and no rules could be identified for the transformation of determinants into quotations.
Following these discussions, management, who were now more aware of the extent of the problem, wished to reformulate the requirements of the proposed system. However, they were not fully aware of what they actually wanted. This phenomenon is recognized by Yourdon, who states that one of the good candidates for prototyping is where:
"The user is unable to articulate (or `prespecify') his or her requirements in any form and can only determine the requirements through a process of trial and error."
At this stage it was agreed to build prototypes to solve some of the identified problems. As prototyping was relatively novel to the authors, planning and control were limited. Evolutionary interface prototyping was selected to help define the boundaries of the specification and to provide easy access to quotations. The prototypes were designed to be used as the interfaces of the final complete system.
On completion of the interface prototypes it was hoped that the further information gained from them would enable functional prototypes of the pricing process to be built.
An evolutionary approach was adopted, with interfaces developed on the target machine. It was intended to achieve several objectives:

- To produce user-acceptable input screens.
- To clarify the information on quotations and identify the key fields used in historical cross-references.
- To identify management information reports.
- To consolidate understanding of determinants.
- To help identify the boundaries of the system.

This last aim requires explanation. The users were not clear about the level of expertise that the system would exhibit in automating pricing.
Prototypes were designed and implemented on the company's IBM minicomputer using RPG2 and the screen-painter facilities available. Screen designs and quotation and report layouts were changed directly as a result of criticism from both estimators and management. The approach brought a usable system to completion within a reasonable time-scale, undergoing its first trials within three months of the start of the exercise. This would have been faster if the developers had been more familiar with RPG2 at the outset. The difficulties of interface prototyping, and the need for designers to be skilled programmers as well as to have expertise in human factors, are a current research area.
The `interim' system built as a result of the interface prototyping was given a trial of several months in the factory. This system is now being used by the estimators for the production of all quotations, facilitating a more streamlined operation. The data captured will be used in future research for comparison with prices produced by prototypes.
There is no doubt that the function of the final system was developed from an exercise that was only considered to be interface prototyping.
Post-delivery modifications to the system have been few (mainly cosmetic changes to some of the printed output), and it has met with enthusiastic user response despite the lack of a well defined specification.
However, the exercise raised as many questions about prototyping as it answered. The developers felt that the users, rather than the developers, were tending to control the process.
Analysis of the outcome of the interface prototyping and a review with senior management led to a change of system boundary. The project was no longer trying to produce a fully automated pricing system. Instead, an automated costing system that would provide a consistent base for manual pricing was adopted.
Three components were identified as essential to a consistent costing and pricing process:

- For costing, identify the `route' through the factory;
- apply an up-to-date cost to a `route'; and
- to extend into the area of pricing, manually adjust a cost to yield a price.

As a result of the analysis of the interface prototyping, it was decided to proceed with further prototyping. This pointed to the need for functional prototyping.
The `route' through a factory is particularly significant because costs associated with apparently similar components can vary by a factor of up to 10.
Functional prototypes for the prediction of route have been built with the aim of discovering the relationship between the determinants of process and route. Due to the complexity of this problem, current work has been restricted to silver finishes to evaluate feasibility. The prototypes were constructed using a `throw-away' approach, the intention being to enhance the system already installed on the IBM minisystem once the requirements could be specified.
Prototypes to tackle the route problem were built using a commercially available package: the expert-system shell Crystal was chosen for this task.
With the hindsight of the earlier prototyping exercise, the developers wished to maximize the control and monitoring of the exercise. This was to be improved in three ways:
- Extensive records were to be kept of prototyping sessions.
- Some prototyping sessions were to be attended by more than one developer to enhance critical analysis.
- A strategy was to be developed to control change.
Using the latter strategy, all changes to prototypes suggested by users are recorded and classified into three categories (cosmetic, local and global) according to their impact on the system. It is recommended that both cosmetic and local changes are made immediately, with the local changes being subject to later scrutiny. Global changes that have an impact beyond the parts of the system currently under review are recorded, but change is delayed until all the implications have been researched. Hence the developers retain control over changes that have wide-reaching effects, while still leaving users with the impression that it is their system.
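The change-control strategy can be sketched as a small record-keeping routine. The Python below is an illustrative sketch, not part of the system described; the field names are invented:

```python
# Every suggested change is recorded and classified as cosmetic, local,
# or global; only the first two are acted on immediately, while global
# changes are deferred until their implications have been researched.
IMMEDIATE = {"cosmetic", "local"}

change_log = []

def record_change(description, impact):
    """Record a suggested change and decide when it should be applied."""
    assert impact in {"cosmetic", "local", "global"}
    action = ("apply now" if impact in IMMEDIATE
              else "defer until implications researched")
    entry = {"description": description, "impact": impact, "action": action}
    change_log.append(entry)
    return entry

record_change("rename a field label", "cosmetic")
record_change("change discount rule for silver parts", "global")
```

Keeping such a log is also what makes the later review sessions possible, since every suggestion survives the session it was raised in.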
The prototyping sessions did not produce any global changes. Cosmetic changes were few, again not surprising as it was throw-away functional prototyping, rather than interface prototyping, that was carried out. It was not possible to make functional changes during sessions; alterations to one rule in an expert system can have a ripple effect on other rules.
The results of the exercise can be provisionally described as a success. After about six sessions with users, plus three review sessions and non-prototype meetings, the prototype was accepted.
Management were impressed with the outcome, to the extent that they have requested that it be developed into a usable system. They would like it to be used as a sales support system as well as by the estimating staff, in spite of the fact that it was not intended to be an evolutionary prototype. The developers paid particular attention to educating the users about the dangers of adopting rapidly built throw-away prototypes as fully implemented systems, yet this is leading to problems that were not anticipated. Because the prototyping tool was not chosen with the intention of building a finished system, installation and maintenance were not considered relevant at the time. It is difficult to deny access to a prototype, even though it was never intended for everyday use.
There are three areas that require further work as a result of the investigation carried out so far:
- Can the methods used in developing the limited domain system be extended so that the whole problem can be tackled?
- Will the use of prototype systems as installed products cause maintenance problems?
- Can the control of the prototyping be enhanced by the possible development of a metric?
If the decision is made to remain with the rule-based system, the use of rapid prototyping may lead to future maintenance problems. Diaper states:
"such expert systems are likely to be expensive and difficult, if not impossible, to maintain and up-date."
Yourdon also draws attention to this problem and writes:
"There is significant danger that either the user or the developers team may try to turn the prototype into a production system. This usually turns out to be a disaster..."
Further work on the installation, maintenance, and restructuring or re-specifying of the installed system will thus be required.
The paper has attempted to show how interface and functional prototyping have contributed to the success, so far, of the development of a costing system.
The user-interface prototyping allowed a payback early on in the project by evolution of a usable system, albeit with limited function. This was despite the extra time required by the developers to learn about the software tools and how to conduct prototype evaluation. For prototyping to be adopted, it is necessary for project managers to recognize the potential payback and also be prepared to budget for prototyping activities. A dynamic process such as prototyping requires careful planning and control. The developers found it highly iterative and difficult to estimate the time and effort involved in the different activities. Identifying milestones was also found to be a challenging problem. This reinforces the comments that the management of prototyping requires more research, development, and publication.
Interface prototyping provided communication about functional requirements as well as inspiring major user management decisions. There was also a strong user reluctance to throw away a prototype that was designed to be discarded: the users wished, for good reasons, to evolve the prototype. The developers could accept this provided doubts about maintainability could be resolved.
The authors are encouraged by the results of the use of prototyping in the project. It was felt that progress was more effective than with a conventional structured design method, where user and developer perceptions were uncertain. More extensive testing is planned for functional prototypes currently under construction. It is felt that more work is required to develop this type of prototyping into a controllable design methodology.
Author: Hermann Kaindl, Siemens AG Austria, Program and System Eng.
Source: ACM SIGSOFT Software Engineering Notes, vol. 18, no. 2, 1993, pp. 30-39
The early phase of requirements engineering in particular is one of the most important and least supported parts of the software life cycle. Since pure natural language has its disadvantages, and directly arriving at a formal representation is very difficult, a mediating representation is missing; hypertext is used for this purpose, also providing links among requirements statements and the representation of objects in a domain model. This explicit representation of links allows users and analysts to make relationships and dependencies explicit and helps them stay aware of them. This approach and the tool supporting it use a combination of various technologies, including object-oriented approaches and a grain of artificial intelligence (in particular, frames). Therefore, inheritance is provided by the tool already in the early phase of requirements engineering. In particular, it was found very useful to view requirements as objects. A key idea is to support the ordering of ideas, especially through classification, already in the early stages. While this approach is not intended to substitute for useful existing techniques emphasizing more formal representations, it can be combined with them.
Requirements engineering is one of the most important parts of the life cycle of any project, as it defines what has to be done later on. However, hardly any support is available.
While from a theoretical point of view it would be desirable to have formal representations of requirements, in practice often just unstructured natural language is used informally. The approach presented here attempts to bridge the gap between these extremes in providing semiformal hypertext representations. Therefore, this approach and the tool supporting it are named RETH (Requirements Engineering Through Hypertext). Actually RETH uses a combination of various technologies.
A key idea of RETH is to help already at the very beginning with the process of organizing one's ideas. It is possible to view this as support for brainstorming. The mediating hypertext representation is then used in the transition between informal and formal.
Unfortunately, the literature about requirements engineering is not completely consistent in its terminology. Here, the task is decomposed into the subtasks problem analysis, requirements definition, and requirements analysis. Problem analysis can be further decomposed into domain analysis and modeling, and requirements formation. Generally, it should be noted that these activities are not (necessarily) performed in a strict sequence.
First, the approach used here for tightly integrating hypertext into a frame system is sketched. Based on this, the support RETH provides for activities in the course of requirements engineering is described. In particular, the use of inheritance already in the early stages of development is emphasized.
The authors of this article propose an intermediary representation that contains informal and formal parts intertwined, and they call such a representation semiformal. They further propose hypertext technology for this, additionally using object-oriented approaches as supported in frame(1) systems. The frame system of PROKAPPA(TM) was selected as the basis of the tool RETH. The key difference between object-oriented languages and frame systems, as related to this approach, is that attributes (slots) in classes can contain and inherit values.
A hypertext node is represented as a frame; every link and every partition is also represented by a frame of its own. A frame representing a partition contains the text and the outgoing links from that partition (more precisely, the frames representing the links). The partitioning of a hypertext node is explicitly represented by slots, which contain the frames representing partitions as values.
In order to support convenient navigation, a kind of bi-directional link is used. Generally, this representation using frames offers all the possibilities needed to easily implement more sophisticated hypertext concepts.
The user interface of RETH handles hypertext links as follows: if the underlined string representing the link is clicked with the mouse, the window of the target node is displayed by the tool. The display of partitions of hypertext nodes, in contrast, is implemented in the tool like expand buttons: when the name of a partition (inverted in the display) is clicked, the content is expanded or collapsed (implemented as a toggle). In contrast to some hypertext systems on the market, this approach allows the user to mix browsing and editing of nodes deliberately, though one node can only be edited or browsed at any one point in time.
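A rough Python analogue of this data model (not the actual PROKAPPA implementation; all class and slot names here are illustrative) might look like:

```python
# Nodes, links, and partitions are all "frames": plain objects whose
# data lives in named slots. A node holds its partitions in slots, a
# partition holds its text and outgoing links, and links are registered
# in both directions to support convenient navigation.
class Frame:
    def __init__(self, name):
        self.name = name
        self.slots = {}

class Partition(Frame):
    def __init__(self, name, text=""):
        super().__init__(name)
        self.slots["text"] = text
        self.slots["links"] = []

class Node(Frame):
    def __init__(self, name):
        super().__init__(name)
        self.expanded = set()          # which partitions are shown

    def add_partition(self, partition):
        self.slots[partition.name] = partition

    def toggle(self, partition_name):
        """Expand or collapse a partition, as with the expand buttons."""
        if partition_name in self.expanded:
            self.expanded.discard(partition_name)
        else:
            self.expanded.add(partition_name)

def link(source_partition, target_node):
    """Create a link frame and register it in both directions."""
    ln = Frame(source_partition.name + "->" + target_node.name)
    ln.slots["target"] = target_node
    source_partition.slots["links"].append(ln)
    target_node.slots.setdefault("incoming", []).append(ln)
    return ln
```

Representing links and partitions as frames of their own is what makes them first-class: they can carry types, annotations, and inherited slots just like nodes.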
In particular, the authors attempt to support the activities of problem analysis and requirements definition in the course of requirements engineering. Requirements analysis can only be marginally supported, since the formality achieved is insufficient for automatic consistency and completeness checks. However, hypertext provides an excellent opportunity for explicitly linking annotations to requirements during review.
It is further explicitly suggested that the sub-activities of problem analysis - domain analysis and modeling and requirements formation - be performed concurrently.
While the approach presented here should be generally useful, the authors especially want to support requirements engineering in the context of object-oriented approaches; support for arriving at an OOA model is therefore described. Before or concurrently with developing OOA diagrams, it is suggested that mediating hypertext representations be used that include textual representations in natural language. Hypertext nodes represent (potential) domain objects and their classes semiformally.
The internal structure (attributes) and services can be described in partitions of these nodes. Associations between classes can be represented via (typed) hypertext links. For the representation of taxonomic relationships between classes, the frame system of PROKAPPA can be used directly, providing for inheritance. The grouping of objects into subjects or modules can be realized via PROKAPPA modules. As a side effect of building a hypertext representation as described above, a data dictionary arises which, in contrast to conventional ones, is interlinked.
They also propose to represent the semantic content of OOA diagrams in an appropriate internal representation based on frames. The hypertext nodes and the representations of these objects can be linked to each other, which is facilitated by the tight integration of hypertext in the frame system. The more tightly the tools are integrated, the better the traceability of requirements can be supported.
One scenario is that the users themselves develop and form their requirements statement. Since this may be too difficult for inexperienced users, another possibility is to get help from an analyst (a requirements engineer). In the latter case this task can be viewed as one of eliciting the requirements from the users.
To help with the gathering and structuring of requirements, support for brainstorming through hypertext is attempted. It is proposed to represent each requirement in a separate hypertext node. The representation of a requirement should include links to the nodes representing domain objects. Moreover, the representation of a requirement may contain links to other nodes representing requirements, making dependencies between these requirements explicit. Relationships between functional and non-functional requirements are important, since the latter can be viewed as constraints on the former. Further, it is also proposed that requirements be viewed as objects in this approach.
Whenever the editing of a node (or partition) is finished, a parser should scan the text searching for object names (even using a thesaurus).
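Such a scanning step might be sketched as follows; the object names and the dictionary-based thesaurus below are invented for illustration:

```python
import re

# Known domain-object names, plus a tiny thesaurus mapping synonyms to
# canonical object names (a real tool would consult a richer thesaurus).
domain_objects = {"customer", "invoice", "order"}
thesaurus = {"client": "customer", "bill": "invoice"}

def find_object_references(text):
    """Return the set of domain objects mentioned in the text."""
    found = set()
    for word in re.findall(r"[a-zA-Z]+", text.lower()):
        if word in domain_objects:
            found.add(word)
        elif word in thesaurus:
            found.add(thesaurus[word])
    return found

refs = find_object_references("The client receives a bill for each order.")
```

Each name found this way would then be offered to the user as a candidate hypertext link from the edited node to the corresponding domain-object node.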
While the result of brainstorming may already represent the majority of requirements, the overall requirements statement will have to be made more precise. While some structuring of the requirements may already be done in the course of their formation, it is suggested to put more emphasis on this aspect once the majority of requirements is already sketched somehow.
Taking an object-oriented view of requirements, a particularly useful approach to structuring is the classification of requirements. The tight integration of hypertext into a frame system presented here provides special support for this, allowing the user to define classes of requirements. It even supports inheritance: each requirement is an instance of a class of requirements.
Thus, in the course of requirements definition, only adaptations will be necessary.
Due to the tight integration of hypertext in a frame system, it is possible to utilize inheritance already in the semiformal representation. Since classes (of the domain model as well as of requirements) are described in hypertext nodes, and since these are represented as frames, the text contained in them can be inherited.
In particular, it is proposed to use inheritance in RETH as follows:
- Together with the concept of partitions of nodes, inheritance supports templates, e.g., for requirements to be filled in. Whenever a node for a requirement is created as an instance of a class of requirements, the appropriate structure is already given initially through inheriting a template.
- When requirements are organized into classes, all the requirements of a specific class can have a special attribute in common - represented as a partition. For instance, functional requirements are likely to be described differently from non-functional requirements. An important point is that inheritance allows one to define special attributes (including a value or not) once in the definition of the class, without the necessity to copy.
- In contrast to many current OOA tools, this approach implements OOA inheritance already in the semiformal hypertext representation.
Generally, an (analysis) method has two ingredients: a set of notations, and a set of rules and heuristics that guide the process of using these notations for the (analysis) task. Below are sketched the proposed steps of RETH; these steps are of course not just performed sequentially:
- Create and edit hypertext nodes for the representation of requirements and (potential) domain objects.
- Order these nodes with special emphasis on taxonomies. Whenever appropriate, move nodes in the taxonomy.
- Use also non-taxonomic links: between objects, between requirements, and between requirements and objects.
- Structure the nodes internally using partitions, e.g., for attributes and services.
Actually, these guidelines may be insufficient for an inexperienced user. Unfortunately, however, there is no general agreement on the OOA process, and even less so on the analysis process in general.
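The template inheritance described above can be approximated in plain Python (an illustrative sketch, not the frame-based RETH implementation; the class and partition names are examples):

```python
# A class of requirements defines its partitions (reason, priority, ...)
# once; a subclass inherits them and adds its own; and every requirement
# created as an instance receives the full, inherited template to fill in.
class RequirementClass:
    def __init__(self, name, template, parent=None):
        self.name = name
        self.parent = parent
        self.template = template        # partition names defined here

    def partitions(self):
        """Inherited partitions: the parent's template plus our own."""
        inherited = self.parent.partitions() if self.parent else []
        return inherited + [p for p in self.template if p not in inherited]

    def instantiate(self, description):
        # A new requirement starts with empty partitions from the template.
        return {"class": self.name, "description": description,
                **{p: "" for p in self.partitions()}}

requirement = RequirementClass("Requirement", ["reason", "priority"])
functional = RequirementClass("FunctionalRequirement", ["inputs", "outputs"],
                              parent=requirement)
req = functional.instantiate("Compute a price from a route cost.")
```

Because `reason` and `priority` are defined once in the root class, every kind of requirement inherits them without copying, which is exactly the redundancy-avoidance argued for above.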
The approach presented here, named RETH, intends to support several activities in the course of requirements engineering, especially focusing on its early phase. While it uses and partially supports object-oriented approaches, much of the support is also applicable to conventional system development.
RETH provides a mediating representation between the completely informal ideas of the user at the very beginning and the more formal representation of domain models and requirements using, for instance, graphics. Its hypertext links also allow users and analysts to make relationships and dependencies explicit and help them stay aware of them.
Viewing requirements as objects helps in structuring them via classification. Generally, a key idea of this approach is to support the ordering of ideas already in the early stages. Since the implementation of this approach is based on frames, inheritance is provided by the tool already in the early phase of requirements engineering, avoiding redundant representation of information. In particular, it automatically provides users with templates of the internal structure of requirements, which depends on the kind of requirement. This way, the users are guided to fill in important information like the reason and priority of each requirement.
Authors: William Rzepka, Rome Air Development Center
Yutaka Ohno, Kyoto University, Japan
Source: Computer, IEEE Computer Society, vol. 18, April 1985, pp. 9-12
Requirements are precise statements of need intended to convey understanding about a desired result. They describe the external characteristics, or user-visible behavior, of the result, as well as constraints such as performance, reliability, safety, and cost.
Requirements engineering is a systematic approach to the development of requirements through an iterative process of analyzing the problem, documenting the resulting requirement insights, and checking the accuracy of the understanding so gained. A requirements engineering environment must provide the requirements engineer with appropriate mechanisms to facilitate the analysis, documentation, and checking activities. What this means and how it is accomplished, now and in the future, is the subject of this article.
Analysis is the systematic process of reasoning about a problem and its constituent parts to understand what is needed or what must be done. Analysis also involves communicating with many people.
The requirements engineering environment must support analysis in several ways. First, the reasoning process must be guided by an analysis methodology, and the environment must facilitate its application through an appropriate supporting workplace. The workplace must provide the requirements engineer with the software tools needed to gather the information necessary to reason about and understand the problem domain. Such tools must include rigorous, but natural, ways to describe models of real-world problem domains.
Second, information must be organized to permit quick location and easy access to pertinent facts.
Third, communications between the requirements engineer and those in the community of interest will take place at scheduled, face-to-face meetings, as well as at spontaneous meetings or in phone conversations as the need arises.
While analyzing user needs, the requirements engineer documents several types of information because the requirements must capture and convey the overall scope of a problem, the semantics of its important objects and activities, their relationships and their connections with the problem domain.
The requirements engineer's environment must make several appropriate and convenient forms of representation available.
Checking is the process of ensuring that the documented requirements are an accurate representation of the problem (verification) and also that what is represented is indeed what is desired (validation). The checking activity must ensure that the problem statement is syntactically accurate, internally consistent, and as complete as the current understanding allows.
The requirements engineer's environment must support the checking process by providing tools for understanding and communicating. The end result of the checking process will be a refined understanding of the problem, an understanding that will be the basis for subsequent analysis and eventually a completed statement of the problem.
Author: John K. Ousterhout
Source: Internet news
This article summarizes what happened in the two sessions that Ousterhout led at the Tcl/Tk workshop in New Orleans in June 1994. The first session was divided into two talks, the first of which gave an update on Tk 4.0, and the second of which covered Ousterhout's plans at Sun.
So far, Ousterhout has already made many of the most-wanted changes for Tk 4.0, including the following:
- A major overhaul of text widgets (embedded windows, horizontal scrolling, better vertical scrolling, more display options such as vertical spacing, margins, justification, and baseline offsets).
- An overhaul of bindings, including the binding tags discussed at last year's workshop and a change in the evaluation mechanism to make bindings more composable.
- A few other changes to event handling, including a new "fileevent" command (similar to Mark Diekhans' "addinput") and the ability to cancel "after" handlers.
- A general-purpose and user-extensible mechanism for images, intended to handle things like color icons, full-color images, and video.
- Improvements to colormap and visual handling.
- A solution to the X resource id wrap-around problem that tends to cause errors in long-running applications.
- A bunch of improvements to widgets, including justification in entries and the ability to have multi-line text in widgets like buttons and labels.
Ousterhout is currently in the middle of an all-out attack on Motif compatibility, which will add support for Motif keyboard traversal and highlight rings and completely overhaul the widget bindings to bring them into better Motif compliance. Emacs-like bindings will also be provided for entries and texts where they don't conflict with Motif bindings.
There are several more things Ousterhout hopes to get in before freezing 4.0 for release:
- A table geometry manager.
- A new font naming scheme.
- An overhaul of the option database.
- Rotation of canvas items.
There are several things that were on the list of possibilities that he posted at the beginning of this year, but which he decided not to try to do in 4.0. They include:
- Application embedding (like Sven Delmas' "tksteal" stuff, I think).
- Overhauls of the send command, the selection, and the input focus mechanism.
Ousterhout's current hope (not a promise!) is to make the first beta release of Tk 4.0 in August and to make the final 4.0 release by the end of 1994.
He is putting together a group at Sun to make Tcl and Tk into a uniform platform for programming the Internet. They hope to make it possible to write scripts that will run on virtually any machine on the Internet (workstations, PCs, and Macs) and to use Tcl/Tk scripts as the "coin of the realm" for neat new applications such as active mail, active documents, and agents.
The short-term plans include four things:
- Ports of Tcl and Tk to both the PC and Mac, so scripts written on one platform will run on any of the others, presenting their UI in the look and feel of the platform on which they run.
- A commercial-quality graphical interface designer like Visual Basic or NextStep.
- Dynamic loading of C code in Tcl.
- Incorporating Nathaniel Borenstein's Safe-Tcl back into the Tcl core, so that there is a safe mechanism for executing untrusted scripts that arrive via the Internet.
It is expected that all of these things will happen in 6-12 months (and he hopes it will be more like 6 months). Everything except the graphical designer will be released freely like the current Tcl and Tk; Sun will impose no restrictions on them. The graphical designer will probably be free in the early releases and include source code, but eventually he hopes to see it turn into a Sun product, at which point sources will no longer be available and you'll have to pay for it (pricing is likely to be cheap, like Visual Basic).
Over the longer term (1-2 years) he hopes to improve the internationalization of Tcl and Tk to support Asian fonts, build an on-the-fly compiler for Tcl to get a 5-20 times improvement in performance, and perhaps build some neat network applications like active documents.
Overall, he hopes that his move to Sun will not change the basic process by which Tcl and Tk evolve (changes will still be discussed openly on comp.lang.tcl and input solicited), but he hopes that it will provide a better support structure and allow Tcl and Tk to evolve more rapidly. He also hopes that this will make Tcl and Tk appear more legitimate, so that it's easier to get them accepted by organizations and customers. It's very important to him that everyone in the current Tcl/Tk community continues to be happy with the systems.
The second session that Ousterhout ran was a relatively open discussion based on the concerns and suggestions of the audience. Six people, Hank Walker, Gerald Lester, Douglas Pan, Alberto Biancardi, Lindsay Marshall, and Andy Payne, presented short informal position statements, and there was a brief discussion after each presentation. Here is a list of some of the things said; Ousterhout states that he doesn't necessarily agree with all of the comments, although he does agree with many of them.
- It's important to avoid UNIX-isms in Tcl/Tk, so that programs will be easier to port to PCs and Macs. There was a heated discussion about the degree to which Tk should take advantage of multi-button mice.
- It is necessary to standardize the extensions to make it easier to install the systems and to distribute scripts that depend on the extensions.
- There needs to be a mechanism for opaque scripts, where a company can distribute a script to customers without the customers being able to see the source for the script.
- Tcl needs mechanisms to access the protocols for various distributed systems such as DCE, Sun RPC, DOE, Cairo, etc. Ideally there should be a uniform access mechanism that works across all of these protocols.
- It's hard to translate Tcl code into C (if that's needed for performance improvement) because the basic interfaces look very different in C than in Tcl.
- Some form of compilation is needed, even if it's only for simple things like expressions. When Ousterhout asked, about 60% of the people present said that performance has been a problem for them in at least one situation over the last year. 10% of the people said that their performance problems seemed to be getting worse, and about 10-20% said that they had written unclean code in at least one situation in order to get around performance problems. He also asked if a 10 times performance improvement is enough (this is about what a decent compiler will get, he writes). About 10% of the people said that even 10 times isn't enough.
- Tcl commands should be first-class C objects, complete with reference counts.
- It should be possible to invoke commands at C level with pre-parsed argument lists. It was also suggested that it should be possible to invoke command procedures directly from C, without going through the Tcl parser.
- Extensions cause portability problems.
- There needs to be a framework for making binary distributions both of Tcl/Tk and extensions, so that it becomes easier for people to install the systems.
- Visual Basic is taking over the PC world and it will be very difficult for Tcl/Tk to make headway against it. At the very least, Tk on the PC needs to support VBX's so that Tcl scripts can use existing VBX's. They discussed what advantages Tcl/Tk might have over Visual Basic today, and came up with the following:
- Scripts are first-class objects in Tcl: there is nothing equivalent to Tcl_Eval in Visual Basic, so you can't send a VB script over the network and execute it at the other side.
- Tcl and Tk may be more portable (Tk's portability hasn't been proven yet).
- Tcl/Tk has Safe-Tcl for evaluating untrusted scripts. It's not clear how to achieve a similar level of security in VB.
- Tcl and Tk are free.
At the end of the second session, he asked how many of the new features in Tk 4.0 would make a big difference to people in the audience. Three people said none of the new features mattered much, four said that one feature mattered, fifteen said that they would get two or three major benefits from Tk 4.0, and ten said that they would get more than three major benefits from Tk 4.0. Several people didn't vote.
He also asked people how many major needs of theirs were not met by Tk 4.0. He told people not to count the ports, a compiler, and dynamic loading, since those were already near the top of his list for future improvements. Seven people said that they had no other major needs; eight people said they had one additional major need, eight people said they had two additional major needs, and one person said he/she had three or more additional needs. Unfortunately, the session ended before he had a chance to find out what the additional needs are.
Overall, three issues came up over and over again at the workshop, much more than any other issues:
- It's hard to manage extensions. Dynamic loading is absolutely needed, but additional mechanisms are needed too, such as binary distributions to avoid compilation at every site.
- Performance. Everyone would like to see a compiler, or even half-way short-term solutions like special-purpose compilation of expressions or procedure bodies.
- Ports. The demand for a PC port of Tk seems to be increasing rapidly.
Fortunately, he plans to address all of these issues at Sun. Unfortunately, none of these issues will be addressed in Tk 4.0, and the compiler issue won't be addressed for another year or so.
(1) In this context, a frame can be viewed as a data structure that combines data stored in slots.