A local area hospital has read great success stories about project management being used in industry, as well as recent stories about its use in hospital settings. For many years, the hospital has struggled to secure patients’ information under current HIPAA guidelines, and its IT department has had many issues with new software rollouts within the hospital and at off-site emergency care and surgical centers. The CEO of the hospital has also discovered through her research that IT projects historically have a lower success rate compared with other types of projects.
She has tasked you to draft your findings on whether or not the hospital should consider opening a “Continuous Improvement Department” to help cut costs, improve quality, and better streamline its IT services.
> Draft a response detailing the pros and cons of opening such a department within a medical setting that will prepare the CEO for an upcoming meeting that she has with the Board of Directors.
The response should be 4–6 pages in APA format with an introduction and conclusion, and include a minimum of 8 peer-reviewed citations.
CHAPTER 12

System build, implementation and maintenance: change management
LEARNING OUTCOMES
After reading this chapter, you will be able to:
■ state the purpose of the build phase, and its difference from changeover and implementation;
■ specify the different types of testing required for a system;
■ select the best alternatives for changing from an old system to a new system;
■ recognise the importance of managing software, IS and organisational change associated with the introduction of a new BIS.
MANAGEMENT ISSUES
Effective systems implementation is required for a quality system to be installed with minimal disruption to the business. From a managerial perspective, this chapter addresses the following questions:
■ How should the system be tested?
■ How should data be migrated from the old system to the new system?
■ How should the changeover between old and new systems be managed?
■ How can the change to a process-oriented system be managed?
CHAPTER AT A GLANCE
MAIN TOPICS
■ System build and implementation 440
■ Maintenance 448
■ Change management 450
CASE STUDIES
12.1 Business-process management (BPM) 458
12.2 Play pick-and-mix to innovate with SOA 463
M12_BOCI6455_05_SE_C12.indd 439 10/13/14 5:56 PM
Part 2 BUSINESS INFORMATION SYSTEMS DEVELOPMENT440
INTRODUCTION

System build occurs after the system has been designed. It refers to the creation of software using programming or incorporation of building blocks such as existing software components or libraries. The main concern of managers in the system build phase is that the system be adequately tested to ensure it meets the requirements and design specifications developed as part of the analysis and design phases. They will also want to closely monitor errors generated or identified in the build phase in order to control on-time delivery of the system. System implementation follows the build stage. It involves setting up the right environment in which the test and finished system can be used. Once a test version of the software has been produced, this will be tested by the users and corrections made to the software, followed by further testing and fixing until the software is suitable for use throughout the company.

Maintenance deals with reviewing the IS project and recording and acting on problems with the system.

Change management in this chapter is considered at the level of software, information systems and the organisation. Software change management deals with meeting change requests or variations to requirements that arise during the systems development project from business managers, users, designers and programmers. IS change management deals with the migration from an old to a new IS system. Organisational change management deals with managing changes to organisational processes, structures and their impact on organisational staff and culture. Business process management (BPM) provides an approach to this challenge.

System build
The creation of software by programmers involving programming, building release versions of the software and testing by programmers and end-users. Writing of documentation and training may also occur at this stage.

System implementation
Involves the transition or changeover from the old system to the new and the preparation for this, such as making sure the hardware and network infrastructure for a new system are in place, testing of the system and also human issues of how best to educate and train staff who will be using or affected by the new system.

Maintenance
This deals with reviewing the IS project and recording and acting on problems with the system.

Change management
The management of change, which can be considered at the software, information system and organisational levels.
SYSTEM BUILD AND IMPLEMENTATION
System development, which includes programming and testing, is the main activity that occurs at the system build phase.
The coverage of programming in this book will necessarily be brief, since the technical details of programming are not relevant to business people. A brief coverage of the techniques used by programmers is given since a knowledge of these techniques can be helpful in managing technical staff. Business users also often become involved in end-user development, which requires an appreciation of programming principles.
Software consists of program code written by programmers that is compiled or built into files known as ‘executables’ from different modules, each with a particular function. Executables are run by users as interactive programs. You may have noticed application or executable files in directories on your hard disk with a file type of ‘.exe’, such as winword.exe for Microsoft Word, or ‘.dll’ library files.
There are a number of system development tools available to programmers and business users to help in writing software. Software development tools include:
■ Third-generation languages (3GLs) include Basic, Pascal, C, COBOL and Fortran. These involve writing programming code. Traditionally this was achieved in a text editor with limited support from other tools, since these languages date back to the 1960s. These languages are normally used to produce text-based programs rather than interactive graphical user interface programs that run under Microsoft Windows. They are, however, still used extensively in legacy systems, in which there exist millions of lines of COBOL code that must be maintained.
Chapter 12 SYSTEM BUILD, IMPLEMENTATION AND MAINTENANCE: CHANGE MANAGEMENT
■ Fourth-generation languages (4GLs) were developed in response to the difficulty of using 3GLs, particularly for business users. They are intended to avoid the need for programming. Since they often lack the flexibility required for building a complex system, however, they are often ignored.
■ Visual development tools such as Microsoft Visual Studio, Visual Basic and Visual C++ use an ‘interactive development environment’ that makes it easy to define the user interface of a product and write code to process the events generated when a user selects an option from a menu or button. They are widely used for prototyping and some tools such as Visual Basic for Applications are used by end-users for extending spreadsheet models. These tools share some similarities with 4GLs, but are not true application generators since programming is needed to make the applications function. Since they are relatively easy to use, they are frequently used by business users.
■ CASE or computer-aided software engineering tools (see Chapter 11 for coverage of CASE tools) are primarily used by professional IS developers and are intended to assist in managing the process of capturing requirements, and converting these into design and program code.
Computer-aided software engineering (CASE) tools
Primarily used by professional IS developers to assist in managing the process of capturing requirements, and converting these into design and program code.

Assessing software quality

Software metrics are used by businesses developing information systems to establish the quality of programs in an attempt to improve customer satisfaction through reducing errors by better programming and testing practices. Software or systems quality is measured according to its suitability for the job intended. This is governed by whether it can do the job required (Does it meet the business requirements?) and the number of bugs it contains (Does it work reliably?). The quality of software is dependent on two key factors:

1. the number of errors or bugs in the software;
2. the suitability of the software to its intended purpose, i.e. does it have the features identified by users which are in the requirements specification?

It follows that good-quality software must meet the needs of the business users and contain few errors. We are trying to answer questions such as:

■ Does the product work?
■ Does it crash?
■ Does the product function according to specifications?
■ Does the user interface meet product specifications and is it easy to use?
■ Are there any unexplained or undesirable side-effects to using the product which may stop other software working?

The number of errors is quite easily measured, although errors may not be apparent until they are encountered by end-users. Suitability to purpose is much more difficult to quantify, since it is dependent on a number of factors. These factors were referred to in detail earlier (in Chapters 8 and 11), which described the criteria that are relevant to deciding on a suitable information system. These quality criteria include correct functions, speed and ease of use.
Software or systems quality
Measures software quality according to its suitability for the job intended. This is governed by whether it can do the job required (Does it meet the business requirements?) and the number of bugs it contains (Does it work reliably?).
What is a bug? Problems, errors or defects in software are collectively known as ‘bugs’, since they are often small and annoying! Software bugs are defects in a program which are caused by human error during programming or earlier in the lifecycle. They may result in major faults or may remain unidentified.
Software bug
Software bugs are defects in a program which are caused by human error during programming or earlier in the lifecycle. They may result in major faults or may remain unidentified.
Software quality also involves an additional factor which is not concerned with the functionality or number of bugs in the software. Instead, it considers how well the software operates in its environment. For example, in a multitasking environment such as Microsoft Windows, it assesses how well a piece of software coexists with other programs. Are resources shared evenly? Will a crash of the software cause other software to fail also? This type of interaction testing is known as ‘behaviour testing’.
Software metrics
Software metrics have much in common with measures involved with assessing the quality of a product in other industries. For example, in engineering or construction, designers want to know how long it will take a component to fail or the number of errors in a batch of products. Most measures are defect-based, measuring the number and type of errors. The source of the error and when it was introduced into the system are also important. Some errors are the result of faulty analysis or design and many are the result of a programming error. By identifying and analysing the source of the error, improvements can be made to the relevant part of the software lifecycle. An example of a comparison of three projects in terms of errors is shown in Table 12.1. It can be seen that in Project 3, the majority of errors are introduced during the coding (programming) stage, so corrective action is necessary here.
While the approach of many companies to testing has been that bugs are inevitable and must be tested for to remove them, more enlightened companies look at the reasons for the errors and attempt to stop them being introduced by the software developers. This implies that more time should be spent on the analysis and design phases of a project. Johnston (2003) suggests that the balance between the phases of a project should be divided as shown in Table 12.2, with a large proportion of the time being spent on analysis and design.
In software code the number of errors or ‘defect density’ is measured in terms of errors per 1000 lines of code (or KLOC for short). The long-term aim of a business is to reduce the defect rate towards the elusive goal of ‘zero defects’.
Errors per KLOC is the basic defect measure used in systems development. Care must be taken when calculating defect density or productivity of programmers using KLOC, since this will vary from one programming language to another and according to the style of the programmer and the number of comment statements used. KLOC must be used consistently between programs, and this is usually achieved by only counting executable statements, not comments, or by counting function points (function point analysis is covered in Chapter 9).
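As a small illustration of the defect-density measure (a sketch written for this discussion, not taken from any particular metrics tool), the Python fragment below counts only executable statements, excluding comments and blank lines as recommended, before computing errors per KLOC:

```python
def executable_lines(source: str, comment_prefix: str = "#") -> int:
    """Count non-blank lines that are not pure comment lines."""
    count = 0
    for line in source.splitlines():
        stripped = line.strip()
        if stripped and not stripped.startswith(comment_prefix):
            count += 1
    return count

def defects_per_kloc(defect_count: int, source: str) -> float:
    """Defect density = errors per 1000 executable lines of code."""
    loc = executable_lines(source)
    if loc == 0:
        raise ValueError("no executable lines to measure against")
    return defect_count / loc * 1000

sample = """# module header comment
x = 1
y = 2

# compute
print(x + y)
"""
print(executable_lines(sample))     # 3 executable lines
print(defects_per_kloc(6, sample))  # 6 defects in 3 LOC -> 2000.0 per KLOC
```

Counting executable statements rather than raw lines is what makes the measure comparable between programs with different commenting styles, as the paragraph above notes.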
A major problem in a software system can be caused by one wrong character in a program of tens of thousands of lines. So it is often the source of the problem that is small, not its consequences.
Computing folklore recalls that one of the first recorded computer ‘bugs’ was a moth found trapped in a relay of an early computer, an incident logged by the team of Grace Hopper, the pioneer behind COBOL, one of the first commercial programming languages.
Software metrics
Measures which indicate the quality of software.
Table 12.1 Table comparing the source of errors in three different software projects
Project 1 Project 2 Project 3
Analysis 20% 30% 15%
Design 25% 40% 20%
Coding 35% 20% 45%
Testing 20% 10% 20%
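The analysis behind Table 12.1 can be sketched in a few lines (the figures are those from the table; the code is illustrative only): for each project, find the lifecycle stage contributing the largest share of errors, which is where corrective action should be targeted.

```python
# Error-source percentages from Table 12.1.
projects = {
    "Project 1": {"Analysis": 20, "Design": 25, "Coding": 35, "Testing": 20},
    "Project 2": {"Analysis": 30, "Design": 40, "Coding": 20, "Testing": 10},
    "Project 3": {"Analysis": 15, "Design": 20, "Coding": 45, "Testing": 20},
}

def worst_stage(error_shares: dict) -> str:
    """Return the lifecycle stage contributing the highest share of errors."""
    return max(error_shares, key=error_shares.get)

for name, shares in projects.items():
    print(name, "->", worst_stage(shares))
# Project 1 -> Coding, Project 2 -> Design, Project 3 -> Coding
```

For Project 3 this reproduces the conclusion stated above: the majority of errors are introduced during the coding stage, so that is where improvement effort belongs.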
Errors per KLOC
Errors per KLOC (thousand lines of code) is the basic defect measure used in systems development.
A significant activity of the build phase is to transfer the data from the old system to the new system. Data migration is the transfer of data from the old system to the new system. When data are added to a database, this is known as ‘populating the database’. One method of transferring data is to rekey manually into the new system. This is impractical for most systems since the volume of data is too large. Instead, special data conversion programs are written to convert the data from the data file format of the old system into the data file format of the new system. Conversion may involve changing data formats, for example a date may be converted from two digits for the year into four digits. It may also involve combining or aggregating fields or records. The conversion programs also have to be well tested because of the danger of corrupting existing data. Data migration is an extra task which needs to be remembered as part of the project manager’s project plan. During data migration data can be ‘exported’ from an old system and then ‘imported’ into a new system.
When using databases or off-the-shelf software, there are usually tools provided to make it easier to import data from other systems.
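A minimal sketch of the conversion step in data migration is given below. The record layout, field names and the pivot-year rule are all invented for illustration; the point is the pattern: export rows from the old system, convert formats (here, two-digit years to four digits), and produce rows ready for import.

```python
import csv
import io

def convert_year(two_digit: str, pivot: int = 50) -> str:
    """Expand a two-digit year: 00-49 -> 20xx, 50-99 -> 19xx (assumed pivot)."""
    year = int(two_digit)
    return str(2000 + year if year < pivot else 1900 + year)

def migrate(exported_csv: str) -> list:
    """Read exported rows, convert the date field, return rows ready to import."""
    rows = []
    for row in csv.DictReader(io.StringIO(exported_csv)):
        day, month, year = row["order_date"].split("/")
        row["order_date"] = f"{day}/{month}/{convert_year(year)}"
        rows.append(row)
    return rows

# Data 'exported' from the old system, with two-digit years.
exported = "customer,order_date\nSmith,01/03/97\nJones,15/06/04\n"
for row in migrate(exported):
    print(row["customer"], row["order_date"])
# Smith 01/03/1997
# Jones 15/06/2004
```

Because a wrong pivot would silently corrupt every date, a conversion program like this needs the thorough testing the text insists on before it touches live data.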
The technical quality of software can also be assessed by measures other than the number of errors. Its complexity, which is often a function of the number of branches it contains, is commonly used.
Another metric, more commonly used for engineered products, is the mean time between failures. This is less appropriate to software since outright failure is rare, but small errors or bugs in the software are quite common. It is, however, used as part of outsourcing contracts or as part of the service-level agreement for network performance.
A more useful measure for software is to look at the customer satisfaction rating of the software, since its quality is dependent on many other factors such as usability and speed as well as the number of errors.
Table 12.2 Ideal proportions of time to be spent on different phases of a systems development project, focusing on details of build phase
Project activities Suggested proportion
Definition, design and planning 20%
Coding 15%
Component test and early system test 15%
Full system test, user testing and operational trials 20%
Documentation, training and implementation support 20%
Overall project management 10%
Data migration
Data migration is the transfer of data from the old system to the new system. When data are added to a database, this is known as populating the database.
Import and export
Data can be ‘exported’ from an old system and then ‘imported’ into a new system.
Testing information systems

Testing is a vital aspect of implementation, since this will identify errors that can be fixed before the system is live. The types of test that occur in implementation tend to be more structured than the ad hoc testing that occurs with prototyping earlier in systems development.

Note that testing is often seen not as an essential part of the lifecycle, but as a chore that must be done. If its importance is not recognised, insufficient testing will occur. Johnston (2003) refers to the ‘testing trap’, when companies spend too long writing the software without changing the overall project deadline. This results in the amount of time for testing being ‘squeezed’ until it is no longer sufficient.
During prototyping, the purpose of testing is to identify missing features or define different ways of performing functions. Testing is more structured during the implementation phase in order to identify as many bugs as possible. It has two main purposes: the first is to check that the requirements agreed earlier in the project have been implemented; the second is to identify errors or bugs. To achieve both of these objectives, testing must be conducted in a structured way by using a test specification which details tests in different areas. This avoids users performing a general usability test of the system where they only use common functions at random. While this is valid, and is necessary since it mirrors real use of the software, it does not give a good coverage of all the areas of the system. Systematic tests should be performed using a test script which covers, in detail, the functions to be tested.
Test specification
A detailed description of the tests that will be performed to check the software works correctly.
Test plan
Plan describing the type and sequence of testing and who will conduct it.
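One way to picture a test specification is as data rather than prose: each entry names the area under test, the input and the expected result, so that testers cover the functions systematically instead of at random. The sketch below is illustrative only; the function under test (validate_quantity) and its rule are hypothetical.

```python
def validate_quantity(qty: int) -> str:
    """Hypothetical order-entry rule: quantities must be between 1 and 999."""
    if 1 <= qty <= 999:
        return "accepted"
    return "error: quantity out of range"

# Test specification: (test name, input, expected result).
test_specification = [
    ("Order entry: typical value", 10, "accepted"),
    ("Order entry: lower boundary", 1, "accepted"),
    ("Order entry: below boundary", 0, "error: quantity out of range"),
    ("Order entry: upper boundary", 999, "accepted"),
    ("Order entry: above boundary", 1000, "error: quantity out of range"),
]

def run_tests(spec) -> int:
    """Run every test in the specification; return the number of failures."""
    failures = 0
    for name, given, expected in spec:
        actual = validate_quantity(given)
        if actual != expected:
            failures += 1
            print(f"FAIL {name}: expected {expected!r}, got {actual!r}")
    return failures

print("failures:", run_tests(test_specification))  # failures: 0
```

Holding the specification as data also makes it easy to report coverage: every function should appear in the list at least once.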
Mini case study
Jim Goodnight: crunching the numbers
By Michael Dempsey
Addressing a recent business intelligence conference in London, Jim Goodnight’s considered responses and soft Southern drawl left the impression of a thoughtful figure who just happens to be chief executive of a $1.34bn business.
His taciturn aspect changed when the absolute quality of his company’s software was raised. ‘SAS is still quicker and better’, he states.
Despite the waves of re-labelling that have allowed his business to surf through management information systems and data warehousing to reach today’s focus on business intelligence and performance management, Mr Goodnight defines SAS in the light of a very old-fashioned customer grouse. ‘When we ship software, it’s almost bug-free. We learnt about doing that the hard way, many years ago.’
During the 1980s, SAS released some software before it was fully tested and provoked a vocal reaction from the users. ‘They let us know what was wrong with it.’ He jokes about the number of bugs that are still found in other large commercial systems and then generously redeems his competitors with the remark ‘but then we do so much more testing’.
Source: Dempsey, M. (2005) Jim Goodnight: crunching the numbers. Financial Times. 23 March. © The Financial Times Limited 2005. All Rights Reserved.
Given the variety of tests that need to be performed, large implementations will also use a test plan, a specialised project plan describing what testing will be performed when, and by whom. Testing is always a compromise between the number of tests that can be performed and the time available.
The different types of testing that occur throughout the software lifecycle should be related to the earlier stages in the lifecycle against which we are testing. This approach to development (Figure 12.1) is sometimes referred to as the ‘V-model of systems development’, for obvious reasons. The diagram shows that different types of testing are used to test different aspects of the analysis and design of the system: to test the requirements specification a user acceptance test is performed, and to test the detailed design unit testing occurs.
We will now consider in more detail the different types of testing that need to be conducted during implementation. This review is structured according to who performs the tests.
Developer tests
There are a variety of techniques that can be used for testing systems. Jones (2008) identifies 18 types of testing, of which the most commonly used are subroutine, unit, new function, regression, integration and systems testing. Many of the techniques available are not used due to lack of time, money or commitment. Some of the more common techniques are summarised here.
■ Module or unit tests. These are performed on individual modules of the system. The module is treated as a ‘black box’ (ignoring its internal method of working) as developers check that expected outputs are generated for given inputs. When you drive a car this can be thought of as black box testing – you are aware of the inputs to the car and their effect as outputs, but you will probably not have a detailed knowledge of the mechanical aspects of the car and whether they are functioning correctly. Module testing involves considering a range of inputs or test cases, as follows:
(a) Random test data can be automatically generated by a spreadsheet for module testing.
(b) Structured or logical test data will cover a range of values expected in normal use of the module and also values beyond designed limits to check that appropriate error messages are given. This is also known as ‘boundary value testing’ and is important, since many bugs occur because designed boundaries are crossed. This type of data is used for regression testing, explained below.
(c) Scenario or typical test data use realistic example data, possibly from a previous system, to simulate day-to-day use of the system.
These different types of test data can also be applied to system testing.
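The three kinds of test data above can be sketched against a single black-box module. The module here (a discount calculator with a threshold at 100) is hypothetical; only its inputs and outputs are examined, as in the car analogy.

```python
import random

def discount(order_value: float) -> float:
    """Hypothetical module: 10% discount on orders of 100 or more, else none."""
    if order_value < 0:
        raise ValueError("order value cannot be negative")
    return order_value * 0.9 if order_value >= 100 else order_value

# (a) Random test data: generated values should never produce a negative total.
for value in [random.uniform(0, 1000) for _ in range(100)]:
    assert discount(value) >= 0

# (b) Structured/boundary test data: values at and around the designed limits.
assert discount(99.99) == 99.99      # just below the boundary
assert discount(100.0) == 90.0       # on the boundary
assert discount(0.0) == 0.0          # lowest legal value
try:
    discount(-1.0)                   # beyond the designed limit
    raise AssertionError("expected an error for invalid input")
except ValueError:
    pass                             # appropriate error message was given

# (c) Scenario test data: a realistic example order from day-to-day use.
assert discount(250.0) == 225.0

print("all module tests passed")
```

Note that the boundary cases (0, just below 100, exactly 100, negative input) are precisely where the paragraph above warns that most bugs appear.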
■ Integration or module interaction testing (black box testing). Expected interactions such as messaging and data exchange between a limited number of modules are assessed. This can be performed in a structured way, using a top-down method where a module calls other module functions as stubs (partially completed functions which should return expected values) or using a bottom-up approach where a driver module is used to call complete functions.
■ New function testing. This commonly used type of testing refers to testing the operation of a new function when it is implemented, perhaps during prototyping. If testing is limited to this, problems may be missed since the introduction of the new function may cause bugs elsewhere in the system.

Figure 12.1 The V-model of systems development relating analysis and design activities to testing activities. Each stage on the left-hand (analysis and design) arm of the V is checked by the corresponding testing activity on the right-hand arm:
Initiation – Implementation review
Requirements specification – User acceptance test
Overall design – System test
Detailed design – Unit test
Code (at the point of the V)

Module or unit testing
Individual modules are tested to ensure they function correctly for given inputs.
■ System testing. When all modules have been completed and their interactions assessed for validity, links between all modules are assessed in the system test. In system testing, interactions between all relevant modules are tested systematically. System testing will highlight different errors to module testing, for example when unexpected data dependencies exist between modules as a result of poor design.
■ Database connectivity testing. This is a simple test that the connectivity between the application and the database is correct. Can a user log in to the database? Can a record be inserted, deleted or updated, i.e. are transactions executing? Can transactions be rolled back (undone) if required?
■ Database volume testing. This is linked to capacity planning of databases. Simulation tools can be used to assess how the system will react to different levels of usage anticipated from the requirements and design specifications. Methods of indexing may need to be improved or queries optimised if the software fails this test.
■ Performance testing. This will involve timing how long different functions or transactions take to occur. These delays are important, since they govern the amount of wasted time users or customers have to wait for information to be retrieved or screens refreshed. Maximum waiting times may be specified in a contract, for example.
■ Confidence test script. This is a short script which may take a few hours to run through and which tests all the main functions of the software. It should be run before all releases to users to ensure that their time is not wasted on a prototype that has major failings which mean the test will have to be aborted and a new release made.
■ Automated tests. Automated tools simulate user inputs through the mouse or keyboard and can be used to check for the correct action when a certain combination of buttons is pressed or data entered. Scripts can be set up to allow these tests to be repeated. This is particularly useful for performing regression tests.
■ Regression testing. This testing should be performed before a release to ensure that the software performance is consistent with previous test results, i.e. that the outputs produced are consistent with previous releases of the software. This is necessary, as in fixing a problem a programmer may introduce a new error that can be identified through the regression test. Regression testing is usually performed with automated tools.
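A regression test can be sketched very simply (names and figures below are invented for illustration): outputs of the current release are compared against results recorded from a previous, passing release, so that a fix in one place does not silently change behaviour elsewhere.

```python
def price_with_vat(net: float) -> float:
    """Function under test in the current release (hypothetical, 20% VAT)."""
    return round(net * 1.2, 2)

# Baseline results captured when the previous release passed its tests.
baseline = {10.0: 12.0, 19.99: 23.99, 250.0: 300.0}

def regression_test(func, expected: dict) -> list:
    """Return the inputs whose outputs no longer match the previous release."""
    return [x for x, want in expected.items() if func(x) != want]

failures = regression_test(price_with_vat, baseline)
print("regressions:", failures)  # regressions: []
```

Automated testing tools follow the same pattern at a larger scale, which is why the two techniques are usually combined.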
End-user tests
The purpose of these is twofold: first, to check that the software does what is required; and second, to identify bugs, particularly those that may only be caused by novice users.
For ease of assessing the results, the users should be asked to write down for each bug or omission found:

1. module affected;
2. description of problem (any error messages to be written in full);
3. relevant data – for example, which particular customer or order record in the database caused the problem;
4. severity of problem on a three-point scale.
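The four items above lend themselves to a fixed record structure, so every problem found by an end-user tester is captured consistently. The field names below are illustrative, not prescribed by the text.

```python
from dataclasses import dataclass

SEVERITY_SCALE = {1: "minor", 2: "serious", 3: "critical"}

@dataclass
class BugReport:
    module: str         # 1. module affected
    description: str    # 2. description of problem, error messages in full
    relevant_data: str  # 3. e.g. which customer or order record was involved
    severity: int       # 4. severity on a three-point scale

    def __post_init__(self):
        if self.severity not in SEVERITY_SCALE:
            raise ValueError("severity must be 1, 2 or 3")

report = BugReport(
    module="Order entry",
    description="Crash with message 'Invalid date format' on saving",
    relevant_data="Order 10342 for customer Smith & Co",
    severity=3,
)
print(report.module, "-", SEVERITY_SCALE[report.severity])
```

Validating the severity value at the point of entry enforces the three-point scale, so reports can later be sorted and triaged reliably.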
Different types of end-user tests that can be adopted include:
■ Scenario testing. In an order processing system this would involve processing example orders of different types, such as new customers, existing customers without credit and customers with a credit agreement.
■ Functional testing. Users are told to concentrate on testing particular functions or modules such as the order entry module in detail, either following a test script or working through the module systematically.
Volume testing
Testing assesses how system performance will change at different levels of usage.
Regression testing
Testing performed before a release to ensure that the software performance is consistent with previous test results, i.e. that the outputs produced are consistent with previous releases of the software.
Functional testing
Testing of particular functions or modules either following a test script or working through the module systematically.
System testing
When all modules have been completed and their interactions assessed for validity, links between all modules are assessed in the system test. In system testing, interactions between all relevant modules are tested systematically.
■ General testing. Here, users are given free rein to depart from the test specification and test according to their random preferences. Sometimes this is the only type of testing used, which results in poor coverage of the functions in the software!
■ Multi-user testing. The effect of different users accessing the same customer or stock record is tested. Software should not permit two users to modify the same data at the same time. Tests should also be made to ensure that users with different permissions and rules are treated as they should be, e.g. that junior staff are locked out of company financial information.
■ Inexperienced user testing. Staff who are inexperienced in the use of software often make good ‘guinea pigs’ for testing software, since they may choose an illogical combination of options that the developers have not tested. This is surprisingly effective and is a recommended method of software testing. The staff involved often also like the power of being able to ‘break’ the software.
■ User acceptance testing. This is the final stage of testing which occurs before the software is signed off as fit for purpose and the system can go live. Since the customer will want to be sure the software works correctly, this may take a week or more.
■ Alpha and beta testing. These terms apply to user tests which occur before a packaged software product is released. They are described in the section on configuration management later in this chapter.
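The multi-user case above can be made concrete with a small sketch of one common safeguard, a simple optimistic-locking scheme (the record structure and rule are illustrative, not a prescription): two users read the same stock record, both try to save changes, and the software must reject the second save rather than silently overwrite the first.

```python
class StaleRecordError(Exception):
    """Raised when a save is attempted from an out-of-date copy."""

class StockRecord:
    def __init__(self, quantity: int):
        self.quantity = quantity
        self.version = 0

    def read(self) -> dict:
        """Each user takes a copy of the data plus the version they saw."""
        return {"quantity": self.quantity, "version": self.version}

    def save(self, copy: dict, new_quantity: int) -> None:
        """Reject the update if another user saved since this copy was read."""
        if copy["version"] != self.version:
            raise StaleRecordError("record changed by another user")
        self.quantity = new_quantity
        self.version += 1

record = StockRecord(quantity=40)
user_a = record.read()
user_b = record.read()

record.save(user_a, 35)          # user A's update succeeds
try:
    record.save(user_b, 38)      # user B's copy is now stale
except StaleRecordError:
    print("second update correctly rejected")
```

A multi-user test script would exercise exactly this sequence, checking that the second save is refused and the first user's data survives.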
Benefits-based testing
An alternative approach to testing is not to focus only on the errors when reviewing a system, but rather to test against the business benefits that the system confers. A system could be error-free, but if it is not delivering benefits then its features may not have been implemented correctly. This approach can be used with prototyping, so that if a system is not delivering the correct features it can be modified. When undertaking structured testing, the software will be tested against the requirements specification to check that the desired features are present.
Testing environments
Testing occurs in different environments during the project. At an early stage prototypes may be tested on a single standalone machine or laptop. In the build phase, testing will be conducted in a development environment, which involves programmers’ testing data across a network on a shared server. This is mainly used for module testing. In the implementation phase, a special test environment will be set up which simulates the final operating environment for the system. This could be a room with three or more networked machines accessing data from a central server. This test environment will be used for early user training and testing and for system testing. Finally, the production or live environment is that in which the system will be used operationally. This will be used for user acceptance testing and when the system becomes live. When a system goes live, it is worth noting that there may still be major problems despite extensive testing.
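The separation between environments described above is often enforced in configuration. The sketch below is purely illustrative (all names and addresses are invented): each environment gets its own settings, and a guard function stops destructive tests, such as volume tests, from ever running against live data.

```python
ENVIRONMENTS = {
    "development": {"db_server": "dev-db.example.internal", "live_data": False},
    "test":        {"db_server": "test-db.example.internal", "live_data": False},
    "production":  {"db_server": "db.example.internal", "live_data": True},
}

def get_config(environment: str) -> dict:
    """Look up the settings for the named environment."""
    try:
        return ENVIRONMENTS[environment]
    except KeyError:
        raise ValueError(f"unknown environment: {environment}")

def safe_for_destructive_tests(environment: str) -> bool:
    """Destructive tests (e.g. volume tests) must never touch live data."""
    return not get_config(environment)["live_data"]

print(safe_for_destructive_tests("test"))        # True
print(safe_for_destructive_tests("production"))  # False
```

Keeping the list of environments in one place also documents, for the whole team, which server belongs to which stage of the project.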
Multi-user testing
The effect of different users accessing the same customer or stock record is tested. Software should not permit two users to modify the same data at the same time.
User acceptance testing
This is the final stage of testing which occurs before the software is signed off as fit for purpose and the system can go live.
Test environment
A specially configured environment (hardware, software and office environment) used to test the software before its release.
Live (production) environment
The term used to describe the setup of the system (hardware, software and office environment) where the software will be used in the business.
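A practical consequence of separating development, test and live environments is that the same software must be pointed at different servers and data depending on where it is running. The following is a minimal sketch of one way to keep those settings apart; the environment names and server addresses are invented for illustration.

```python
# Hypothetical sketch: keeping development, test and live (production)
# environment settings separate so the same software can be pointed at
# any of them. Server names here are invented for illustration.

ENVIRONMENTS = {
    "development": {"db_server": "dev-db.local",  "debug": True},
    "test":        {"db_server": "test-db.local", "debug": True},
    "live":        {"db_server": "prod-db.local", "debug": False},
}

def get_config(env_name):
    # Fail loudly if an unknown environment is requested, rather than
    # risk running tests against live business data.
    if env_name not in ENVIRONMENTS:
        raise ValueError(f"unknown environment: {env_name}")
    return ENVIRONMENTS[env_name]

config = get_config("test")
print(config["db_server"])  # test-db.local
```

Keeping such settings in one place makes it harder for user acceptance testing to be run accidentally against the production environment.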
Producing documentation occurs throughout the software lifecycle, such as when requirements are specified at the analysis stage, but it becomes particularly important at the implementation and maintenance stages of a project. At this stage user guides will be used as part of user acceptance testing and system developers will refer to design documents when updating the system. The main types of documentation required through the project are referred to in Figure 12.1. The important documentation used at the testing stage includes:
■ the requirements specification produced at the analysis stage; this is used in the user acceptance test, to check that the correct features have been implemented;
Documentation
Software documentation refers to end-user guidance such as the user guide and technical maintenance documentation such as design and test specifications.
M12_BOCI6455_05_SE_C12.indd 447 10/13/14 5:56 PM
Part 2 BUSINESS INFORMATION SYSTEMS DEVELOPMENT448
■ the user manual, which will be used during testing and operational use of the system by business users;
■ the design specification, which will be used during system testing and during maintenance by developers;
■ the detailed design, which will be used in module testing and during maintenance;
■ the data dictionary or database design, which will be used in testing and maintenance by database administrators and developers;
■ detailed test plans and test specifications, which will be used as part of developer and user testing;
■ quality assurance documents such as software change request forms, which will be used to manage the change during the build and implementation phases.
The writing of documentation is often neglected, since it tends to be less interesting than developing the software. To ensure that it is produced, strong project management is necessary and the presence of a software quality plan will make sure that time is spent on documentation, since a company’s quality standard is assessed on whether the correct documentation is produced.
User guides

The user guide has become a less important aspect of systems documentation with the advent of online help such as the help facility available with Windows applications and website-based help. Online help can give general guidance on the software, or it can give more specific advice on a particular screen or function – when it is known as ‘context-sensitive’. It is often a good idea to ask business users to develop the user guide, since if programmers write the guide it will tend to be too technical and not relevant to the needs of users. Since business users are sometimes charged with producing a user guide, approaches to structuring these are covered in a little more detail.

Example of a user guide structure

User guides are normally structured to give a gradual introduction to the system, and there may be several guides for a single system. A common structure is:

1. A brief introductory/overview guide, often known as ‘Getting started’. The aim of this is to help users operate the software productively with the minimum of reading. The introductory section will also explain the purpose of the system for the business.
2. Tutorial guide. This will provide lessons, often with example data to guide the user through using the package. These are now often combined with online ‘guided tours’.
3. Detailed documentation. This is often structured according to the different screens in an application. However, it is usually better to structure such guides according to the different functions or applications a business user will need. Chapter titles in such an approach might include ‘How to enter a new sales order’ or ‘How to print a report’. This guide should also incorporate information on troubleshooting when problems are encountered.
4. Quick reference guide, glossary and appendix. These will contain explanations of error messages and a summary of all functions and how to access them.
MAINTENANCE
The maintenance phase of a project starts when the users sign off the system during testing and it becomes a live production system. After a system is live, there are liable to be some errors that were not identified during testing and need to be remedied. When
problems are encountered, this presents a dilemma to the system manager, since they will have to balance the need for a new release of the system against the severity of an error. It is not practical or cost-effective to introduce a new release of the software for every bug found, since each release needs to be tested and installed and fresh problems may exist in the new system. Most systems managers would aim not to make frequent, immediate releases to correct problems because of the cost and disruption this causes. Instead, faults will be recorded and then fixed in a release that solves several major problems. This is known as a maintenance release and it might occur at monthly, six-monthly or yearly intervals according to the stability of the system. This is usually a function of the age of the system – new systems will have more errors and will need more frequent maintenance releases.
With the advent of customer-facing e-commerce systems that need to be available 24 hours a day, 7 days a week, 365 days a year, periodic maintenance releases are not appropriate. Significant problems must be rectified immediately with the minimum of disruption. In 2001 Barclays Bank was censured by the UK Advertising Standards Authority for suggesting in its television adverts that its systems were continuously available 24 hours per day. In fact, some users of the system complained that it was not available for a short period after midnight each night due to maintenance. Consequently Barclays had to change the advert, and may eventually change its approach to maintenance.
Maintenance releases will not only fix problems, but may also include enhancements or new features requested by users.
Major and minor releases are denoted by the release or version number. If a system changes from version 1.1 to 2.0, this will be a major release. When moving from version 2.0 to 2.1, some new features might be involved. From version 2.1 to 2.1.1 might represent a patch or interim release to correct problems.
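The release-numbering convention above can be made concrete with a short sketch. This is an illustrative example, not part of the original text: it parses ‘major.minor.patch’ numbers into tuples (which Python compares element by element, matching how releases are ordered) and classifies the step between two releases.

```python
# Hypothetical sketch: classifying the step between two release numbers
# of the form 'major.minor.patch', as described above.

def parse_version(version):
    # '2.1.1' -> (2, 1, 1); missing parts default to 0, so '2.1' -> (2, 1, 0)
    parts = [int(p) for p in version.split(".")]
    while len(parts) < 3:
        parts.append(0)
    return tuple(parts)

def release_type(old, new):
    old_v, new_v = parse_version(old), parse_version(new)
    if new_v[0] > old_v[0]:
        return "major release"
    if new_v[1] > old_v[1]:
        return "minor release"
    if new_v[2] > old_v[2]:
        return "patch (interim release)"
    return "no change"

print(release_type("1.1", "2.0"))    # major release
print(release_type("2.0", "2.1"))    # minor release
print(release_type("2.1", "2.1.1"))  # patch (interim release)
```

The tuple comparison also gives a natural ordering, so a list of installed versions can be sorted to find the most recent build.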
To help decide whether a new release should be installed to correct a problem, companies use a scale of fault severity to govern what action is required. Such a scale may form part of the contract if a company has outsourced its systems development to a third party. An example of such a scale is shown in Table 12.3.
Most systems now have a modular design such that it is not necessary to reinstall the complete system if an error is encountered – rather, the module where the error lies can be replaced. This is described in a rather primitive way as applying a patch to the system. Patches to off-the-shelf systems are now available for download over the Internet. Because of the competitive pressures of releasing software as soon as possible, a large number of off-the-shelf packages require some sort of patch. For example, web browser software such as Netscape Navigator and Microsoft Internet Explorer has required frequent patches to correct errors in the security of the browser which permit unauthorised access to the computer on which the browser is running.

Software patch
This is an interim release of part of an information system that is intended to address deficiencies in a previous release.

Maintenance
Maintenance occurs after the system has been signed off as suitable for users. It involves reviewing the project and recording and acting on problems with the system.

Table 12.3 Fault taxonomy described in Jorgenson (1995)

Category        Example                                                    Action
Mild            Misspelt word                                              Ignore or defer to next major release
Moderate        Misleading or redundant information                        Ignore or defer to next major release
Annoying        Truncated text                                             Defer to next major release
Disturbing      Some transactions not processed correctly,                 Defer to next maintenance release
                intermittent crashes in one module
Serious         Lost transactions                                          Defer to next maintenance release;
                                                                           may need immediate fix and release
Very serious    Crash occurs regularly in one module                       Immediate solution needed
Extreme         Frequent, very serious errors                              Immediate solution needed
Intolerable     Database corruption                                        Immediate solution needed
Catastrophic    System crashes, cannot be restarted –                      Immediate solution needed
                system unusable
Infectious      Catastrophic problem also causes failure                   Immediate solution needed
                of other systems

Post-implementation review

A post-implementation review or project closedown review occurs several months after the system has gone live. Its purpose is to assess the success of the new system and decide on any necessary corrective action. The review could include the following:

■ faults and suggested enhancements, with agreement on which need to be implemented in a future release;
■ success of the system in meeting its budget and timescale targets;
■ success of the system in meeting its business requirements – has it delivered the anticipated benefits described in the feasibility study?
■ development practices that worked well and poorly during the project.

An additional reason for performing a post-implementation review is so that lessons can be learnt from the project. Good practices can be applied to future projects and attempts made to avoid techniques which failed.

Post-implementation review
A meeting that occurs after a system is operational to review the success of the project.

CHANGE MANAGEMENT

The main activities undertaken by a manager of systems development projects are essentially concerned with managing change. Managing change takes different forms. First, we will look at managing technical changes to the software requirements as the system is developed through prototyping and testing. We will then look at how organisations can manage the transition or changeover to a new information system from an old system. Another important aspect of change we will review is how the introduction of a new system can affect the business users and action that can be taken to manage this organisational change. The role of organisational culture in influencing this will also be considered.

Software change management

Change (modification) requests
A modification to the software thought to be necessary by the business users or developers.

At each stage of a systems development project, change (modification) requests or variations to requirements will arise from business managers, users, designers and programmers. These requests include reports of bugs and of features that are missing from the system, as well as ideas for future versions of the software.

These requests will occur as soon as users start evaluating prototypes of a system and will continue through to the maintenance phase of the project when the system has gone live. As the users start testing the system in earnest in the implementation phase, these requests will become more frequent and tens or possibly hundreds will be generated each week. This process of change needs to be carefully managed, since otherwise it can develop into requirements creep, a problem on many information systems projects. As the number of requirements grows, more developer time will be required to fix the problems and the project can soon spiral out of control. What is needed is a mechanism to ensure, first, that all the changes are recorded and dealt with, and second, that they are reviewed in such a way that the number of changes does not become unmanageable.
The main steps in managing changed requirements are:

1. Record the change requests, indicating level of importance and module affected.
2. Prioritise them with the internal or external customer as ‘must have’, ‘nice to have’ or ‘later release’ (Priority 1, 2 or 3). This will be done with reference to the project constraints of system quality, cost and timescale.
3. Identify responsibility for fixing the problem, since it may lie with a software house, internal IS staff, systems integrator or hardware vendor.
4. Implement changes that are recorded as high-priority.
5. Maintain a check of which high-priority errors have been fixed.
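The steps above amount to keeping a structured log of change requests. The sketch below is a minimal, hypothetical illustration of such a log; the request descriptions, module names and responsibilities are invented for the example.

```python
# Hypothetical sketch of a change-request log: each request is recorded
# with a priority (1 = 'must have', 2 = 'nice to have', 3 = 'later
# release'), a responsible party, and is ticked off when fixed.

from dataclasses import dataclass

@dataclass
class ChangeRequest:
    description: str
    module: str
    priority: int          # 1, 2 or 3
    responsibility: str    # e.g. internal IS staff, systems integrator
    fixed: bool = False

log = [
    ChangeRequest("Order total miscalculated", "orders", 1, "internal IS staff"),
    ChangeRequest("Report layout untidy", "reports", 3, "software house"),
    ChangeRequest("Customer search slow", "customers", 2, "systems integrator"),
]

# Step 4: the priority-1 fault is implemented first...
log[0].fixed = True

# Step 5: ...and a check is kept of which high-priority requests remain open.
open_must_haves = [r for r in log if r.priority == 1 and not r.fixed]
print(f"{len(open_must_haves)} priority-1 requests still open")
```

Even a simple structure like this makes it possible to report, at any point in the implementation phase, how many ‘must have’ changes still stand between the project and sign-off.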
When a system is being implemented, it is useful to have a three-way classification of errors to be fixed, since this highlights the errors or missing features that must be implemented and avoids long discussions of the merits of each solution.
When the system is live, a more complex classification is often used to help in deciding how to ‘escalate’ problems up the hierarchy according to their severity. This could be structured as follows:

1. Critical problem, system not operational. This may occur due to power or server failure. Level 1 problems need to be resolved immediately, since business users cannot access the system at all. With customer-facing applications such as e-commerce systems, this type of problem needs to be corrected as soon as possible, since every minute the system is not working transaction revenue is lost.
2. Critical problem, making part of the system unusable or causing data corruption. These would normally need to be resolved within 12 to 24 hours, depending on the nature of the problem.
3. Problem causing intermittent system failure or data corruption. Resolve within 48 hours.
4. Non-severe problem not requiring modification to software until next release.
5. Trivial problem or suggestion which can be considered for future releases.
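A scale like the one above is straightforward to encode so that support staff can look up the target response for a reported problem. The sketch below is illustrative only; the exact resolution targets would normally come from the service-level agreement, not from code.

```python
# Hypothetical sketch of the five-level escalation scale above, mapping
# each severity level to a target resolution time. The targets shown
# mirror the text; in practice they would be set out in an SLA.

from datetime import timedelta

RESOLUTION_TARGETS = {
    1: timedelta(hours=0),    # system down: resolve immediately
    2: timedelta(hours=24),   # part of system unusable / data corruption
    3: timedelta(hours=48),   # intermittent failure or data corruption
    4: None,                  # wait for the next release
    5: None,                  # consider for future releases
}

def escalate(level):
    target = RESOLUTION_TARGETS[level]
    if target is None:
        return "defer to a future release"
    if target == timedelta(0):
        return "resolve immediately"
    return f"resolve within {int(target.total_seconds() // 3600)} hours"

print(escalate(1))  # resolve immediately
print(escalate(3))  # resolve within 48 hours
```

Attaching a deadline to each severity level is what allows unresolved problems to be escalated automatically when their target time passes.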
If the system has been tailored by a systems integrator, such problems will be their responsibility to fix and this will be specified in the contract or service-level agreement (SLA), together with the time that will be taken for the change to be made. If the system has been developed or tailored internally by the IS department, or even within a business department, an SLA is still a good idea. If the problem arises from packaged software, you will have to hope that an update release that solves the problem is available; if not, you will have to lobby the supplier for one.
Software quality assurance
As we have seen, procedures should be followed throughout the software lifecycle to try to produce good-quality systems. These quality assurance (QA) procedures have been formalised in the British Standard BS 5750 Part 1 and its international equivalent ISO 9001 (TickIT). These procedures do not guarantee a quality information system, but their purpose is to ensure that all relevant parts of the software lifecycle, such as requirements capture, design and testing, are carried out consistently. Business users can ask whether suppliers have quality accreditation as a means of distinguishing between them. QA procedures would not specify a particular method for design or testing, but they would specify how the change was managed by ensuring that all changes to requirements are noted and that review mechanisms are in place to check that changes are agreed and acted on accordingly.
If a business buys software services from a company that has achieved the quality standards, then there is less risk of the services’ being inadequate. For a company to achieve a quality standard it has to be assessed by independent auditors and if successful it will be audited regularly.
Configuration management: builds and release versions

Configuration management
Procedures that define the process of building a version of the software from its constituent program files and data files.

Configuration management is control of the different versions of software and program source code used during the build, implementation and maintenance phases of a project.

Throughout the implementation phase, updated versions of the software are released to users for testing. Before software can be used by users it needs to be released as an executable, built up from compiled versions of all the program code modules that make up the system. The process of joining all the modules is technically known as the linking or build process. The sequence can be summarised as:

1. programmers write different code modules;
2. completed code modules are compiled to form object modules;
3. object modules are linked to form executables;
4. executables are installed on machines;
5. executables are loaded and run by end-users testing the software.

Each updated release of the software is therefore usually known as a new ‘build’. With large software systems there will be hundreds of program files written by different developers that need to be compiled and then linked. If these files are not carefully tracked, then the wrong versions of files may be used, with earlier versions causing bugs. This process of version control is part of an overall process known as configuration management, which ensures that programming and new releases occur in a systematic way. One of the problems with solving the millennium bug was that in some companies configuration management was so poor that the original program code had been lost!

During the build phase, updated software versions will become more suitable for release as new functions are incorporated and the number of bugs is reduced. Some companies, such as Microsoft, call these different versions ‘release candidates’; others use the terminology alpha, beta and gold to distinguish between versions. These terms are often applied to packaged software, but can also be applied to bespoke business applications.

■ Alpha releases and alpha testing. Alpha releases are preliminary versions of the software released early in the build phase. They usually have the majority of the functionality of the complete system in place, but may suffer from extensive bugs. The purpose of alpha testing is to identify these bugs and any major problems with the functionality and usability of the software. Alpha testing is usually conducted by staff inside the organisation developing the software or by favoured customers.
■ Beta releases and beta testing. Beta releases occur after alpha testing and have almost complete functionality and relatively few bugs. Beta testing will be conducted by a range of customers who are interested in evaluating the new software. The aim of beta testing is to identify bugs in the software before it is shipped to all customers.
■ Gold release. This is a term for the final release of the software which will be shipped to all customers.

Alpha release
Alpha releases are preliminary versions of the software released early in the build process. They usually have the majority of the functionality of the system in place, but may suffer from extensive bugs.

Alpha testing
The purpose of alpha testing is to identify bugs and any major problems with the functionality and usability of the software. Alpha testing is usually conducted by staff inside the organisation developing the software or by favoured customers.

Beta release
Beta releases occur after alpha testing and have almost complete functionality and relatively few bugs.

Beta testing
Beta testing will be conducted by a range of customers who are interested in evaluating the new software. The aim of beta testing is to identify bugs in the software before it is shipped to a range of customers.

IS change management

Changeover
The term used to describe moving from the old information system to the new information system.

Choosing the method to be used for migrating or changing from the old system to the new system is one of the most important decisions that the project management team must make during the implementation phase. Changeover can be defined as moving from the old information system to the new information system. Note that this changeover is required whether the previous information system is computer- or paper-based. Before considering the alternatives, we will briefly discuss the main factors that a manager will consider when evaluating them. The factors are:

■ Cost. This is, of course, an important consideration, but the quality of the new system is often more important.
■ Time. There will be a balance between the time available and the desired quality of the system which will need to be evaluated.
■ Quality of new system after changeover. This will be dependent on the number of bugs and suitability for purpose.
■ Impact on customers. What will be the effect on customer service if the changeover overruns or if the new system has bugs?
■ Impact on employees. How much extra work will be required by employees during the changeover? Will they be remunerated for this?
■ Technical issues. Some of the options listed below may not be possible if the system does not have a modular design.

There are four main alternatives for moving from a previous system to a new system. The options are shown in Figure 12.2 and described in more detail below.

Figure 12.2 Alternative changeover methods for system implementation (the figure plots the four methods against time, from the old system to the new system’s live date: direct cutover, parallel running, phased introduction of modules, and pilot regions followed by national rollout)
Immediate cutover or big-bang method
The immediate cutover method involves moving directly from the original system to the new system at a particular point in time. On a designated date, the old system is switched off and all staff move to using the new system. Clearly, this is a high-risk strategy since there is no fallback position if serious bugs are encountered. However, this approach is adopted by many large companies since it may be impractical and costly to run different systems in parallel. Before cutover occurs, the company will design the system carefully and conduct extensive testing to make sure that it is reliable and so reduce the risk of failure. The case study shows a relatively successful example of the cutover method and indicates why this is necessary for the implementation of large systems. The success factors of this project are described.
Parallel running
With parallel running the old and new systems are operated together for a period until the company is convinced that the new system performs adequately. This presents a lower risk than the immediate cutover method, since if the new system fails, the company can revert to the old system and customers will not be greatly affected. Parallel running sometimes also involves using a manual or paper-based system as backup in case the new system fails.

Immediate cutover (big-bang changeover)
Immediate cutover is when a new system becomes operational and operations transfer immediately from the previous system.

Parallel running
This changeover method involves the old and new systems operating together at the same time until the company is certain the new system works.
The cost of running two systems in parallel is high, not only in terms of maintaining two sets of software and possibly hardware, but also in the costs of the human operators repeating operations such as keying in customer orders twice. Indeed, the increase in workload may be such that overtime or additional staff may be required. The parallel method is only appropriate when the old and new systems perform similar functions and use similar software and hardware combinations. This makes it unsuitable for business re-engineering projects where completely new ways of working are being introduced that involve staff in working on different tasks or in different locations.
Phased implementation
A phased implementation involves delivering different parts of the system at different times. These modules do not all become live simultaneously, but rather in sequence. As such, this alternative is part-way between the big-bang and parallel running approaches. Each module can be introduced as either immediate cutover or in parallel. In a modular accounting system, for example, the core accounting functions, such as accounts payable, accounts receivable and general ledger, could be introduced first, with a sales order processing and then inventory control module introduced later. This gives staff the opportunity to learn about the new system more gradually and problems encountered on each module can be fixed as they are introduced.
Although this may appear to be an attractive approach, since if a new module fails the other modules will still be available, it is difficult to implement in practice. To achieve a phased implementation requires that the architecture of the new system and old system be designed in a modular way, and that the modules can operate independently without a high degree of coupling. For all systems, however, data exchange will be required between the different modules and this implies that common data exchange formats exist between the old and the new systems. This is often not the case, particularly if the software is sourced from different suppliers. Designers of systems are using techniques such as object-oriented design to produce modules with fewer and clearer dependencies between them. This should help in making phased implementations more practical. In the example given for the modular accounting system, modules in the old and new systems would have to have facilities to transfer data.
Pilot system
In a pilot implementation, the system will be trialled in a limited area before it is deployed more extensively. This could include deploying the system in one operating region of the company, possibly a single country, or in a limited number of offices. This approach is common in multinational or national companies with several offices. Such a pilot system usually acts as a trial before more extensive deployment in a big-bang implementation.
Using combinations of changeover methods
The different changeover methods are often used in conjunction for different stages of an implementation. For example, in a national or international implementation it is customary to trial the project in a single region or country using a pilot of the system. If a pilot system is considered successful there is then a choice of one of the following:
■ immediately implementing the system elsewhere using the big-bang approach;
■ running the new and old systems in parallel until it is certain that the new system is stable enough;
■ if the new system is modular in construction, it is possible for the implementation to be phased, with new modules gradually being introduced as they are completed and the users become familiar with the new system;
■ parallel running will probably also occur in this instance, in case there is a need to revert to the old system in the event of failure of the new system.

Pilot implementation
The system is trialled in a limited area before it is deployed more extensively across the business.

Phased implementation
This changeover method involves introducing different modules of the new system sequentially.
Once the system is proved in the first area, then further rollout will probably occur through the big-bang approach.
The advantages and disadvantages of each of these changeover methods are summarised in Table 12.4.
Deployment planning
A deployment plan is necessary to get all ‘kit’ or hardware in place in time for user acceptance testing. A deployment plan is a schedule that defines all the tasks that need to occur in order for changeover to occur successfully. This includes putting in place all the infrastructure such as cabling and hardware. This is not a trivial task, because often a range of equipment will be required from a variety of manufacturers. A deployment plan should list every software deliverable and hardware item required, when it needs to arrive and when it needs to be connected. The deployment plan will be part of the overall project plan or Gantt chart. A deployment plan is particularly important for large implementations involving many offices, such as the Barclays system referred to earlier in the chapter. Several people may be responsible for this task on large projects.
When planning deployment, advanced planning is required due to possible delays in purchasing and delivery. The burden of purchasing will often be taken by a systems integrator, but it may be shared by the purchasing department of the company buying the new system. This needs careful liaison between the two groups.
With installation of new hardware, a particular problem is where changes to infrastructure are required – for example upgrading cabling to a higher bandwidth or installing a new router. This can take a considerable time and cause a great deal of disruption to users of existing systems.
Deployment plan
A deployment plan is a schedule that defines all the tasks that need to occur in order for changeover to occur successfully. This includes putting in place all the infrastructure such as cabling and hardware.
Table 12.4 Advantages and disadvantages of the different methods of implementation

Method                   Main advantages                                Main disadvantages
Immediate cutover        Rapid, lowest cost                             High risk if serious errors in system
Parallel running         Lower risk than immediate cutover              Slower and higher-cost than immediate cutover
Phased implementation    Good compromise between immediate              Difficult to achieve technically due to
                         cutover and parallel running                   interdependencies between modules
Pilot system             Essential for multinational or                 Has to be used in combination with the
                         national rollouts                              other methods
Organisational change management

Business process re-engineering (BPR)
Identifying radical, new ways of carrying out business operations, often enabled by new IT capabilities.

This section deals with managing changes to organisational processes and structures and their impact on organisational staff and culture.

In the early-to-mid 1990s organisation-wide transformational change was advocated under the label of business process re-engineering (BPR). It was popularised through the pronouncements of Hammer and Champy (1993) and Davenport (1993). The essence of BPR is the assertion that business processes, organisational structures, team structures and employee responsibilities can be fundamentally altered to improve business performance. Hammer and Champy (1993) defined BPR as:
the fundamental rethinking and radical redesign of business processes to achieve dramatic improvements in critical, contemporary measures of performance, such as cost, quality, service, and speed.
The key words from this definition that encapsulate the BPR concept are:
■ fundamental rethinking – re-engineering usually refers to changing of significant business processes such as customer service, sales order processing or manufacturing;
■ radical redesign – re-engineering is not involved with minor, incremental change or automation of existing ways of working. It involves a complete rethinking about the way business processes operate;
■ dramatic improvements – the aim of BPR is to achieve improvements measured in tens or hundreds of per cent. With automation of existing processes only single-figure improvements may be possible;
■ critical contemporary measures of performance – this point refers to the importance of measuring how well the processes operate in terms of the four important measures of cost, quality, service and speed.
Willcocks and Smith (1995) characterise the typical changes that arise in an organisation with process innovation as:
■ work units change from functional departments to process teams;
■ jobs change from simple tasks to multidimensional work;
■ people’s roles change from controlled to empowered;
■ the focus of performance changes from activities to results;
■ values change from protective to productive.
In Re-engineering the Corporation Hammer and Champy have a chapter giving examples of how IS can act as a catalyst for change (disruptive technologies). These technologies are familiar from those described earlier (in Chapter 6) and include tracking technology, decision support tools, telecommunications networks, teleconferencing and shared databases. Hammer and Champy label these as ‘disruptive technologies’ which can force companies to reconsider their processes and find new ways of operating.
Many re-engineering projects were launched in the 1990s and failed due to their ambitious scale and the problems of managing large information systems projects. Furthermore, BPR was also often linked to downsizing in many organisations, leading to an outflow of staff and knowledge from businesses. As a result BPR as a concept has fallen out of favour and more caution in achieving change is advocated.
Less radical approaches to organisational transformation are referred to as business process improvement (BPI) or by Davenport (1993) as ‘business process innovation’. Taking the example of a major e-business initiative for supply chain management, an organisation would have to decide on the scope of change. For instance, do all supply chain activities need to be revised simultaneously or can certain activities such as procurement or outbound logistics be targeted initially? Modern thinking would suggest that the latter approach is preferable.
If a less radical approach is adopted, care should be taken not to fall into the trap of simply using technology to automate existing processes which are sub-optimal – in plain words, using information technology ‘to do bad things faster’. This approach of using technology to support existing procedures and practices is known as business process automation (BPA). Although benefits can be achieved through this approach, the improvements may not be sufficient to generate a return on investment. These alternative terms for business process change are summarised in Table 12.5.
Business process improvement (BPI)
Optimising existing processes typically coupled with enhancements in information technology.
Business process automation (BPA)
Automating existing ways of working manually through information technology.
A staged approach to the introduction of BPR has been suggested by Davenport (1993), and this can also be applied to e-business change. He suggests the following stages:
■ Identify the process for innovation – these are the major business processes from the organisation’s value chain which add most to the value for the customer or achieve the largest efficiency benefits for the company. Examples include customer relationship management, logistics and procurement.
■ Identify the change levers – these can encourage and help achieve change. The main change levers are innovative technology and, as we have seen, the organisation’s culture and structure.
■ Develop the process vision – this involves communication of the reasons for changes and what can be achieved in order to help achieve buy-in throughout the organisation.
■ Understand the existing processes – current business processes are documented. This allows the performance of existing business processes to be benchmarked and so provides a means for measuring the extent to which a re-engineered process has improved business performance.
■ Design and prototype the new process – the vision is translated into practical new processes which the organisation is able to operate. Peppard and Rowland (1995) provide a number of areas for the potential design of processes under the headings of Eliminate, Simplify, Integrate and Automate (ESIA) (see Table 12.6). Prototyping the new process operates on two levels. First, simulation and modelling tools can be used to check the logical operation of the process. Second, assuming that the simulation model shows no significant problems, the new process can be given a full operational trial. Needless to say, the implementation must be handled sensitively if it is to be accepted by all parties.
Table 12.5 Alternative terms for using IS to enhance company performance

Term | Involves | Intention | Risk of failure
Business process re-engineering | Fundamental redesign of all main company processes through organisation-wide initiatives | Large gains in performance (>100%?) | Highest
Business process improvement | Targets key processes in sequence for redesign | (<50%) | Medium
Business process automation | Automating existing processes, often using workflow software | (<20%) | Lowest
Table 12.6 ESIA areas for potential redesign

Eliminate: over-production, waiting time, transport, processing, inventory, defects/failures, duplication, reformatting, inspection, reconciling
Simplify: forms, procedures, communication, technology, problem areas, flows, processes
Integrate: jobs, teams, customers, suppliers
Automate: dirty, difficult, dangerous and boring tasks; data capture, data transfer, data analysis

Source: Peppard and Rowland, 1995.
Business process management (BPM)
Business process management (BPM) is an important approach to process management that can be considered both in terms of a philosophy towards process change and as a supporting technology to process change in the form of tools for process design.
The philosophy of BPM recognises that business processes, and the way they are managed, are the key mechanisms that allow the organisation to deliver value to its customers. The approach thus entails an analysis of the structure of the organisation, the way people work together and the way technology is utilised. The focus of business process change will be provided by performance objectives for business processes that are derived from an analysis of how the company achieves its competitive advantage. Due to the far-reaching nature of the BPM approach, it is likely that in most organisations a significant degree of organisational change, including a change of culture, will be required. These aspects are covered in the later organisational culture section in this chapter.
Underpinning the philosophy of BPM are a number of process design tools that allow the approach to be put into operation. These tools include process maps, business process simulation, business activity monitoring and service-oriented architecture.
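Of the tools listed above, business activity monitoring is the simplest to illustrate in code. The sketch below is an illustration only — the event fields and the threshold rule are invented for this example, not the API of any real BAM product:

```python
# Minimal sketch of business activity monitoring (BAM): watch process
# events and raise an alert when a step exceeds a preset threshold.
# Event fields and the 30-minute limit are illustrative assumptions.
def monitor(events, max_duration):
    """Return alert messages for steps that ran longer than max_duration."""
    alerts = []
    for event in events:
        duration = event["finished"] - event["started"]  # minutes
        if duration > max_duration:
            alerts.append(
                f"{event['step']} took {duration} min (limit {max_duration})"
            )
    return alerts

# Two events from a hypothetical order process
events = [
    {"step": "credit check", "started": 0, "finished": 4},
    {"step": "pick and pack", "started": 4, "finished": 35},
]
alerts = monitor(events, max_duration=30)  # flags 'pick and pack' (31 min)
```

A real BAM product would consume a live event stream rather than a list, but the principle — compare observed process behaviour against preset thresholds and alert in near real time — is the same.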
Business process management (BPM)
Both a philosophy towards process change and a collection of supporting technologies for process change.

CASE STUDY 12.1
Business process management (BPM)

When image document circulation first appeared in the 1990s, the idea of applying computer technology to this kind of labour-intensive business process was considered cutting edge. Everyone in IT understood the potential of centralised computing for numerical computation and transaction processing, but few envisioned that this type of application would fit a broader set of distributed business processes.

Since the 1990s, leading companies have found more innovative ways to automate their business processes. E-forms, process modelling, simulation, EAI, integration services, rules engines, event services, real-time monitoring and process analytics are among the systems being applied to processes that include order management, billing, financial reporting, credit-card issuance, product returns and dispute resolution.

IT and operations executives have now understood that their current technology has done little to link the processes that run their companies to the transactions that result from those processes – transactions at the heart of corporate growth and profitability. This disconnect is rooted in a basic misunderstanding of the purpose of enterprise resource planning (ERP) and the role of business-process management – relationships that are now examined more carefully.

It is no secret that work gets done by people through business processes and that technology only supports those processes. Whether distributing goods to customers, collaborating with suppliers or co-ordinating employee efforts, business processes add value to products and brands.

Yet most ERP systems have a functional focus and lack process models that explain business operations. As a result, when managers try to innovate or solve business problems – customer satisfaction is one of the most widespread – the ‘fix’ is often myopic and transaction-centric. What is missing is good business-process management.

Business-process management provides methods to automate and/or improve activities and tasks for particular business purposes. Its goal is not only efficiency and productivity, but also control, responsiveness and improvement. Control assures that company resources are aligned to execute strategies. Responsiveness and improvement support the competitive differentiation that enables a company to excel.

IT executives can assert control by basing the direction and flow of transactions on a predefined set of rules and workflows – for example, determining how a purchase order is acknowledged or merchandise is returned. Responsiveness enables individuals to react quickly to business events and maximise interactions, as when expediting a critical customer order across the customer-service and warehouse teams. As for improvement, you want to systematically measure and monitor processes; doing so will lead to innovation and optimal performance.

An integral part of business-process management is performance management, which is intended to steer the organisation and its partners toward corporate goals. Performance management focuses on the collaboration and empowerment of all individuals in the business network or value chain. It enables them to work across strategic, tactical and operational levels to align actions that produce rapid and effective responses to business challenges.

In the same way you define and document processes, you need to detail performance-management objectives. These objectives are the analytics used to measure all process-improvement projects. The metrics will provide the basis for an ongoing cycle of measurement, evaluation and improvement. It is also critical that your company tie these process-improvement metrics to high-level performance-improvement goals, not to low-level or transaction-oriented metrics.

Done right, performance management should shed light on why some processes do not function well and how to go about improving them. During analysis, the tools will provide the project teams with data to assess the productivity impact of proposed solutions. That data should also help business and IT departments arrive at a common understanding of particular business needs and their solutions. Collaboration between these groups is particularly important in good business-process management.

Once you have addressed the overall issue of performance management and built the framework for a sustainable business-process management practice, you can begin to assess requirements for the IT systems that will support it. For this part of the project, five components are necessary: assessing current systems, building a business case, developing and communicating the plan, evaluating software and architecture options and, lastly, deploying the initiative. Here is further detail on each of the steps.

First, conduct an independent assessment of the process you want to innovate and the systems that currently support it. Establish a benchmark for the current levels of efficiency and effectiveness, and then identify areas for improvement. Of course, you will need to evaluate financial and operational requirements in this approach, including ROI and total-cost-of-ownership calculations.

Next, build a business case to demonstrate the value and results that the project will deliver, citing clear definitions of the value and cost of your programme, as well as compelling productivity and financial reasons for going ahead. Address the cultural, business and technology barriers to ensure you have support for your initiative.

Third, create a well-defined plan and communicate it to the process owners and participants. This will have to be articulated at different levels of the organisation, so make clear to all stakeholders what is in it for them. It should also show how the effectiveness of operations will improve through this process innovation.

After that, architecture and software needs should be identified in several ways. First, evaluate solutions with appropriate criteria to ensure that the programme is timely and responsive to the organisation. Consolidation in the business software market has changed the landscape of business-process management systems significantly. It has been recommended that you evaluate all viable options, including service-oriented architectures. Also, be wary of promises from vendors of single solutions that do only BAM or only process modelling – these products may not be functionally rich enough.

Also keep performance management capabilities in mind when making vendor evaluations, as good business-process management requires both simulation and BAM tools. Simulation aids process design and modelling by letting designers preview how a process flows and look at how the logic, events and rules work together – before the process is rolled out into a production environment. Using such a tool should help discover and remove bottlenecks and accurately predict process performance. The best process models allow multiple simulation scenarios to be performed across sub-processes. The engine should be able to track resource usage, including cost and time analysis, and monitor usage that exceeds preset thresholds.

Other things to consider: a robust simulation tool will allow you to deploy new versions of processes without interrupting those already in use, and the best solutions allow controlled migration from old processes to new ones. This capability is critical not only for safety reasons but also for benchmarking and measuring results.

Business-activity monitoring aggregates, analyses and presents relevant and timely internal information as well as data involving your customers and partners. BAM solutions can alert individuals to changes in the business that may require action from them. Once again, the purpose is to produce rapid insight into process innovations by identifying issues in real time, improving process performance and reducing operating costs.

Most BAM solutions provide post-process metrics, such as when and how many times the process was executed and which user performed which tasks. Some go further to provide visual representations of business activity with maps, technical drawings, charts, blueprints or graphs.

However, keep in mind that real-time process monitoring requires considerable development and integration work. Consider tying these activities to a company’s performance management objectives, measuring and tracking them in an active, balanced scorecard. It is well worth the effort. After all, how will you know if you are aligning current process activities to your performance objectives if you don’t properly score the results?

Once you have evaluated all software and architecture options, the fifth and final step is to roll out your solution and ensure widespread adoption of your business-process management initiative. To succeed, you must understand how to minimise interruptions to your current business processes, culture and technology usage.

Make sure you do not skip any of these steps – especially the second, where you benchmark your current process performance and build a business case. Doing so will enable you to showcase the value of your innovation after adoption. Following these steps will increase efficiency and effectiveness and improve the alignment of your operational processes. Through a simple organisation-wide approach, you can transform your staff, processes and systems in an efficient manner that ultimately will be reflected on the company’s bottom line.

Source: Based on Optimize, September 2005, Issue 47

QUESTIONS
1. How does the article suggest that business thinking and practice have evolved since the exhortations for business process re-engineering in the 1990s?
2. Summarise the benefits for BPM discussed in the article.
3. Discuss the need for a concept such as BPM when all new information systems and information management initiatives are ultimately driven by process improvement.

Tools for business process management

Tools and techniques that are used to assist in the implementation of the business process management approach include:
■ diagramming techniques such as process mapping;
■ modelling techniques such as business process simulation (BPS);
■ improvement approaches such as business process re-engineering (BPR);
■ implementation of information technologies such as workflow systems;
■ use of performance management technologies such as business activity monitoring (BAM);
■ use of the service-oriented architecture (SOA) and web services.

BPR was considered earlier in this section. (Workflow systems are covered in Chapter 6 and BAM is covered in Chapter 4.) This section now considers process mapping, business process simulation, service-oriented architecture and web services.

Process mapping

Process mapping
The use of a flowchart to document the process incorporating process activities and decision points.

Documenting the process can be undertaken by the construction of a process map, also called a flowchart. This is a useful way of understanding any business process and showing the interrelationships between activities in a process. For larger projects it may be necessary to represent a given process at several levels of detail. Thus a single activity may be shown as a series of sub-activities on a separate diagram. Table 12.7 shows the representations used in a simple process mapping diagram.

Table 12.7 Symbols used for a process map

Meaning | Symbol
Process/activity | Rectangle
Decision point | Diamond
Start/end point | Rounded rectangle
Direction of flow | Arrow (▶)

Figure 12.3 shows a process map of activities undertaken by traffic police in response to a road traffic accident (RTA) incident in the UK. The process map shows that following the notification of a road traffic incident to the police by the public, a decision is made to attend
the scene of the incident. If it is necessary to attend the RTA scene the officer travels to the location of the incident. After an assessment is made of the incident the officer returns to the station to complete and submit the appropriate paperwork. If a court case is scheduled and a not guilty plea has been entered then the officer will be required to attend the court proceedings in person. Otherwise this is the end of the involvement of the officer.
Process maps are useful in a number of ways. For example, the actual procedure of building a process map helps people define roles and see who else does what. This can be particularly relevant to public-sector organisations in which modelling existing processes can be used to build consensus on what currently happens. The process map can also serve as a first step in using business process simulation as it identifies the processes and decision points required to build the model.
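A process map can also be held as a simple data structure and traversed in code, which is a useful first step towards building a simulation model from it. The sketch below models a simplified fragment of the road traffic accident process; the node names loosely follow Figure 12.3, but the structure is illustrative rather than the full map:

```python
# A simplified process map held as a dictionary: each node either flows
# straight on ("next") or branches on a yes/no decision. Illustrative only.
process_map = {
    "RTA reported":       {"next": "Attend RTA?"},
    "Attend RTA?":        {"yes": "Travel to RTA", "no": "Out"},
    "Travel to RTA":      {"next": "Complete paperwork"},
    "Complete paperwork": {"next": "Court case?"},
    "Court case?":        {"yes": "Attend court", "no": "Out"},
    "Attend court":       {"next": "Out"},
    "Out":                {},  # end point
}

def walk(process, decisions):
    """Trace one path through the map given an answer for each decision point."""
    node, path = "RTA reported", []
    while node != "Out":
        path.append(node)
        step = process[node]
        node = step["next"] if "next" in step else step[decisions[node]]
    return path

# The officer attends the scene but no court case follows:
path = walk(process_map, {"Attend RTA?": "yes", "Court case?": "no"})
```

Enumerating paths like this makes the decision points explicit — exactly the information business process simulation needs as input.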
Business process simulation
The use of a simulation model on a computer to mimic the operation of a business means that the performance of the business over an extended time period can be observed quickly and under a number of different scenarios. Business process simulation (BPS) is usually implemented using discrete-event simulation systems which move through time in (discrete) steps. BPS software is implemented using graphical user interfaces employing objects or icons that are placed on the screen to produce a model (see Figure 12.4).
Although BPS requires a significant investment in time and skills, it is able to provide a more realistic assessment of the behaviour of organisational processes than most other process design tools. This is due to its ability to incorporate the dynamic (i.e. time-dependent) behaviour of organisational systems. The two aspects of dynamic systems which need to be addressed are variability and interdependence. Most business systems contain variability in both the demand on the system (e.g. customer arrivals) and the durations (e.g. customer service times) of activities within the system. The use of fixed (e.g. average) values will provide some indication of performance, but simulation permits the incorporation of statistical distributions and thus provides an indication of both the range and variability of the performance of the system. Most organisational systems also contain a number of decision points that affect the overall performance of the system. The simulation technique can also incorporate the ‘knock-on’ effect of these many interdependent decisions over time.

Figure 12.3 A road traffic accident reporting process map

Business process simulation
The use of computer software, in the context of a process-based change, that allows the operation of a business to be simulated.

Figure 12.4 Simulation of a textile plant using the ARENA™ Visual Interactive Modelling system
Source: courtesy of Oracle Corporation
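The effect of variability can be demonstrated with a small hand-rolled discrete-event sketch of a single-server process (a toy model, not a commercial BPS tool such as ARENA): with fixed average times no customer ever waits, while exponentially distributed times with the same averages produce substantial queues:

```python
import random

def simulate_queue(n_customers, mean_interarrival, mean_service,
                   variable=True, seed=42):
    """Single-server queue: return the average customer waiting time.

    With variable=True, interarrival and service times are exponentially
    distributed; otherwise the fixed mean values are used every time.
    """
    rng = random.Random(seed)
    draw = ((lambda mean: rng.expovariate(1.0 / mean)) if variable
            else (lambda mean: mean))
    arrival = server_free_at = total_wait = 0.0
    for _ in range(n_customers):
        arrival += draw(mean_interarrival)
        start = max(arrival, server_free_at)   # wait if the server is busy
        total_wait += start - arrival
        server_free_at = start + draw(mean_service)
    return total_wait / n_customers

# Same averages in both runs: arrivals every 10 minutes, 8 minutes' service
fixed_wait = simulate_queue(10_000, 10.0, 8.0, variable=False)    # 0.0
variable_wait = simulate_queue(10_000, 10.0, 8.0, variable=True)  # substantial
```

With fixed values the server always finishes before the next arrival, so average values alone suggest the process has no queueing problem at all; introducing realistic variability at the same averages reveals significant waiting time, which is precisely the insight BPS tools are designed to provide.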
Service-oriented architecture (SOA)
The concept of SOA is to develop a number of reusable business-aligned IT services that span multiple applications across the organisation. SOA defines the services in such a way that they can be utilised independently of the underlying application and technology platforms. A collection of standardised services forms the basis of a service inventory or service catalogue. Individual services from the service inventory can be deployed in multiple business processes. Each collection of services used in a particular business process is termed a service composition. The advantage of this approach for business process management is that a business process can link with the business services it activates without needing to know about the underlying application and technology platforms. The relationship between the business process, services, application and technology layers is shown in Figure 12.5. The use of SOA provides interoperability (the ability to allow computer systems from different manufacturers to work together) and loose coupling (the capability of services to be joined together on demand to create composite services). These capabilities are particularly useful in increasing the flexibility of enterprise systems covered earlier (in Chapter 6). Read Case Study 12.2 ‘Service-oriented architecture’ for more details of SOA.
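As a rough illustration of a service inventory and a service composition — with invented service names and no real web-services platform underneath — reusable services can be registered once and then chained into different business processes:

```python
# Sketch of a service inventory and a service composition.
# The service names and the order process are illustrative assumptions.
service_inventory = {}

def service(name):
    """Register a reusable business service in the service inventory."""
    def register(fn):
        service_inventory[name] = fn
        return fn
    return register

@service("check_credit")
def check_credit(order):
    return {**order, "credit_ok": order["amount"] <= 1000}

@service("reserve_stock")
def reserve_stock(order):
    return {**order, "reserved": True}

@service("raise_invoice")
def raise_invoice(order):
    return {**order, "invoiced": True}

def compose(*names):
    """Build a service composition: inventory services chained into a process."""
    def process(order):
        for name in names:
            order = service_inventory[name](order)
        return order
    return process

# The same inventory services could be reused in other compositions
order_processing = compose("check_credit", "reserve_stock", "raise_invoice")
result = order_processing({"id": 1, "amount": 250})
```

The business process (`order_processing`) knows only the names of the services it uses, not how they are implemented — a small-scale analogue of the loose coupling SOA aims for.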
Service-oriented architecture (SOA)
An approach that incorporates reusable business-aligned IT services that can be utilised in a manner that is independent of the underlying application and technology platforms.
Service inventory/ service catalogue
A collection of standardised services that are designed to be used in a number of business processes.
Service composition
A selection of services from the service inventory that are allocated to a particular business process.
Interoperability
The ability to allow computer systems from different manufacturers to work together.
Loose coupling
The capability of services to be joined together on demand to create composite services.
Figure 12.5 Relationship between the business process, services, application and technology layers in the organisation

Layer | Examples
Business process layer | Order processing, Materials management, Financial reporting
Services layer | Planning services, HR services, Finance services
Web services platform | –
Application layer | ERP, Custom applications, Legacy applications
Technology layer | Windows, Java, C++, Unix, .NET
Web services
SOA will most often be implemented on the web platform and the term web services is used for the technology which is derived from the convergence of service-oriented architecture and internet technologies. The web services platform is defined through a number of industry standards including WSDL (web services description language) which provides a means to define the functionality of a web service in terms of the XML schema definition language (Chapter 11). Another standard SOAP (simple object access protocol) consists of a framework for XML format messages sent between distributed information systems.
In practical terms a web service is really just reusable software code that can be combined with other web services to develop new applications. In order to use a web service a consumer searches for existing services in a web services registry either inside the organisation (private registry) or outside the organisation (public registry). Once the service is found it is retrieved and a fee is paid if appropriate. The service provider is then able to provide the web service to the service consumer.
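As a sketch of what a web-service message looks like in practice, the fragment below builds a minimal SOAP 1.1 envelope using Python's standard `xml.etree` module. The SOAP envelope namespace is the standard one; the service namespace and the `GetQuote` operation are invented for illustration:

```python
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"  # standard SOAP 1.1
SVC_NS = "http://example.com/stockquote"               # hypothetical service

def build_request(symbol):
    """Build a minimal SOAP request message and return it as XML text."""
    ET.register_namespace("soap", SOAP_NS)
    envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")
    operation = ET.SubElement(body, f"{{{SVC_NS}}}GetQuote")
    ET.SubElement(operation, f"{{{SVC_NS}}}Symbol").text = symbol
    return ET.tostring(envelope, encoding="unicode")

msg = build_request("ABC")
# A provider (or consumer) parses the message back out of the envelope:
echoed = ET.fromstring(msg).find(
    f"{{{SOAP_NS}}}Body/{{{SVC_NS}}}GetQuote/{{{SVC_NS}}}Symbol"
).text
```

In a real deployment the message would be posted over HTTP to an endpoint described by the service's WSDL document; the point here is simply that the SOAP framework is XML-format messages with a standard envelope structure.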
Web services
A collection of industry standards which represents the most likely technology connecting services together to form a service-oriented architecture.
WSDL (web services description language)
Provides a means to define the functionality of a web service in terms of the XML schema definition language.

SOAP (simple object access protocol)

A framework for XML-format messages sent between distributed information systems.
CASE STUDY 12.2
Play pick-and-mix to innovate with SOA
By George Ravich, Executive Vice-President and Chief Marketing Officer of Fundtech

Business opportunities abound for financial institutions with the flexibility and agility to respond to rapidly changing market conditions and regulatory pressures. But when it requires a lengthy IT project to create new products or services, those business opportunities remain tantalisingly out of reach.

Services-Oriented Architecture offers the ability for non-technical business users to build new products and services by picking and choosing from existing processes contained within an ‘SOA services catalogue’.

An SOA services catalogue promises to have the same impact on enterprise computing as the MP3 playlist has had on listening to music.

Before MP3 players, people listened to songs on a vinyl record or a CD in the order that the publisher determined. If you wanted to play several songs from different albums, it was a complicated and time-consuming activity. Now an MP3 player can take individual songs and create an endless number of playlists. Each song is reusable in different settings and situations, under the full control of the listener.

Before SOA, enterprise applications placed business processes within inflexible workflows. Without extensive IT development, reuse of any single business process was not feasible within these systems, leading to multiple versions of the same process being developed separately for different applications and channels.

Now, with SOA, individual business processes can be discovered, modified and recombined dynamically without having to involve the IT department. Business users can create new composite services and reuse services outside of their original context.

In the current environment, it is virtually impossible to define in advance the workflow that best fits the needs of the business. With services-oriented payments architecture (Sopa), managers can respond to the evolving needs of the business by tapping into a complete set of reusable, SOA-enabled business assets.

An example would be a fee calculation for a foreign exchange transaction. Given any currency pair, an amount, a transaction date and a customer type, the fee calculation service would determine the applicable fee to the customer. Usually, a function such as this would be part of a point solution for foreign exchange capabilities, and you would only be able to use it in a prescribed set of circumstances.

Tapping into the SOA services catalogue, you would be able to embed the same fee calculator into other SOA-ready systems, including web applications, mobile applications, branch applications and ATM systems.

Using the fee calculator, product specialists could model the revenue impact of a price change, marketers could craft special offers for preferred customers, and service agents could help customers to choose the most appropriate service plan.

All of these business users would be able to use the same fee calculation service, with the knowledge that it’s the most up-to-date version available, meeting all pertinent regulatory standards and company policies.

Sopa can transform an enterprise from a reactive consumer of pre-built systems into an active creator of innovative new services. Whether it’s modifying terms and conditions, updating service-level agreements for key clients or creating new products based on parts of existing services, business users will have unprecedented access to powerful capabilities without having to embark on costly and time-consuming IT projects.

Instead, they will have a nimble response to external regulatory pressures, and be able to build products faster, and deploy new capabilities at a cost advantage to competitors.

The role of IT doesn’t go away with the adoption of SOA. When it’s time to improve the underlying services by making them faster or more efficient, or when expanded capabilities are called for, IT developers can create new versions of existing services or build entirely new ones.

Once completed, these new and improved services are immediately available through the SOA services catalogue for enterprise use.

QUESTIONS
Explain the advantages and disadvantages of the SOA approach.

Source: Ravich, G. (2009) Play pick-and-mix to innovate with SOA. Financial Times. 13 July. © The Financial Times Limited 2009. All Rights Reserved.

Achieving organisational change

Approaches such as business process management are concerned with the implementation of change involving both IS systems and employees. Implementation of processes that are performed by employees requires consideration of organisational change management, including factors such as managing a change in culture.

An essential part of managing change associated with IS introduction is education to communicate the purpose of the system to the staff – in other words, to sell the system to them. It is not sufficient to simply provide training in the use of the software. This
M12_BOCI6455_05_SE_C12.indd 464 10/13/14 5:56 PM
Chapter 12 System build, implementation and maintenance: change management
Culture
This concept includes shared values, unwritten rules and assumptions within the organisation, as well as the practices that all groups share. Corporate cultures are created when a group of employees interact over time and are relatively successful in what they undertake.
Education should target all employees in the organisation who will be affected by the change. It involves:

■ explaining why the system is being implemented;
■ explaining how staff will be affected;
■ treating users as customers by involving them in specification, testing and review;
■ training users in use of the software;
■ above all, listening to users and acting on what they say.
Kurt Lewin and Edgar Schein suggested a model for achieving organisational change that involves three stages:
1. Unfreeze the present position by creating a climate of change through education, training and motivation of future participants.
2. Quickly move from the present position by developing and implementing the new system.
3. Refreeze by making the system an accepted part of the way the organisation works.
Note that Lewin and Schein did not collaborate on developing this model of personal and organisational change. Lewin developed the model in unpublished work and this was then extended by Edgar Schein who undertook research into psychology based on Lewin’s ideas (Schein, 1956). Later, Kurt Lewin summarised some of his ideas (Lewin, 1972). More recently, Schein (1992) concluded that three variables are critical to the success of any organisational change:
1. the degree to which the leaders can break from previous ways of working;
2. the significance and comprehensiveness of the change;
3. the extent to which the head of the organisation is actively involved in the change process.
‘Change’ was defined by Kurt Lewin as a transition from an existing quasi-equilibrium to a new quasi-equilibrium. This model was updated and put into an organisational context by Kolb and Frohman (1970). Although this is now an old model, it remains relevant to the implementation of information systems today.
Organisational culture
Understanding social relationships within an organisation, which are part of its culture, is also an important aspect of change management. The efficiency of any organisation is dependent on the complex formal and informal relationships that exist within it. Formal relationships include the hierarchical work relationships within and between functional business areas. Informal relationships are created through people working and socialising with each other on a regular basis and will cut across functional boundaries. Major change, such as the move to e-business, has the capacity to alter both types of relationships as it brings about change within and between functional business areas.
Schein (1992) also claims that the notion of organisational culture provides useful guidance on what must be changed within a corporate culture, if organisational change is to be successfully accomplished. He provides a threefold classification of culture that helps to identify what needs to be done:
■ Assumptions are the invisible core elements of an organisation’s culture such as a shared collective vision within the organisation. One of the challenges in change management is to question core assumptions where appropriate, especially if they are seen to be obstructing organisational change.
■ Values are preferences that guide behaviour such as attitudes towards dress codes and punctuality within an organisation or ethics within a society. Often such values
Part 2 Business information systems development
are transmitted by word of mouth rather than being enshrined in written documents or policy statements. As with organisational assumptions, values are hard to change, especially when the views that embody them are firmly held.
■ Artefacts are the tangible, material elements of a culture. These will be identifiable from the language used in the policies, procedures and acronyms of the organisation, and the spoken word and dialects of the society. In some ways they are also the easiest to change. Policies can be created or rewritten, but it is the organisation’s values and assumptions that will determine how they are perceived and acted upon.
The implications of organisational culture for information systems implementation are important. While the ‘artefacts’ associated with information systems developments may be clear, it is the ‘assumptions’ and ‘values’ that will ultimately determine the success of the implementation and it is to these that the change management process must be largely directed.
Boddy et al. (2005) summarise four different types of cultural orientation that may be identified in different companies. These vary according to the extent to which the company is inward-looking or outward-looking, in other words to what extent it is affected by its environment. They also reflect whether the company is structured and formal or has a more flexible, dynamic, informal character. The four types of cultural orientation are:
1. Survival (outward-looking, flexible) – the external environment plays a significant role (an open system) in governing company strategy. The company is likely to be driven by customer demands and will be an innovator. It may have a relatively flat structure.
2. Productivity (outward-looking, ordered) – interfaces with the external environment are well structured and the company is typically sales-driven and is likely to have a hierarchical structure.
3. Human relations (inward-looking, flexible) – this is the organisation as family, with interpersonal relations more important than reporting channels, a flatter structure, and staff development and empowerment thought important by managers.
4. Stability (inward-looking, ordered) – the environment is essentially ignored with managers concentrating on internal efficiency and again managed through a hierarchical structure.
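The two dimensions described by Boddy et al. – inward- versus outward-looking, and flexible versus ordered – form a simple 2×2 classification, which can be captured as a lookup table. The sketch below is only an illustration of that matrix; the string labels are assumptions made for the example.

```python
# Boddy et al.'s four cultural orientations as a 2x2 lookup:
# (focus, structure) -> orientation. Labels are illustrative.
CULTURE = {
    ("outward", "flexible"): "survival",
    ("outward", "ordered"):  "productivity",
    ("inward",  "flexible"): "human relations",
    ("inward",  "ordered"):  "stability",
}

def classify(focus: str, structure: str) -> str:
    """Return the cultural orientation for a company's focus
    ('inward'/'outward') and structure ('flexible'/'ordered')."""
    return CULTURE[(focus, structure)]
```

So a customer-driven innovator with a flat structure – outward-looking and flexible – maps to the survival orientation, while an internally focused hierarchy maps to stability.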
Different approaches to change management that may be required according to the type of culture are explored in the activity.
Activity 12.1 Changing the culture for e-business

The purpose of this activity is to identify appropriate cultural changes that may be necessary for e-business success.

Review the four general categories of organisational cultural orientation summarised by Boddy et al. (2005), take each as characterising a different company, and then suggest which will most readily respond to the change required for a move to e-business. State whether you think each culture is most likely to occur in a small organisation or a larger organisation.
Achieving user involvement
Efforts should be made to involve as many staff as possible in the development. The following types of involvement (summarised by Regan and O’Connor, 2001) can occur in a systems development project:
1. Non-involvement – here, users are unwilling to participate or are not invited to.
2. Involvement by advice – user advice is solicited through interviews or questionnaires during analysis.
3. Involvement by sign-off – users approve the results produced by the project team, such as requirements specifications.
4. Involvement by design team membership – active participation occurs in analysis and design activities (including interviews of other users, creation of functional specifications and prototyping).
5. Involvement by project team membership – user participation occurs throughout the project since the user manages and owns the project.
While it will not be practical to involve everyone, representatives of all job functions should be polled for their requirements for the system at the analysis stage. As many user and manager representatives as possible should be involved in the active analysis and design involved in prototyping.
Promotion of the system can also be achieved by appointing particular managers to champion the new system:
■ Senior managers or board members are used as system sponsors. Sponsors are keen that the system should work and will fire up staff with their enthusiasm and stress why introducing the system is important to the business and its workers.
■ System owners are managers in the organisation who will use the system to create the business benefits envisaged.
■ Stakeholders should be identified at every location in which the system will be used. These people should be respected by their co-workers and will again act as a source of enthusiasm for the system. The user representatives used in specification and testing can also fill this role.
■ Legitimisers protect the norms and values of the system; they are experienced in their job and regarded as the experts by fellow workers; they may initially be resistant to change and therefore need to be involved early.
■ Opinion leaders are people whom others watch to see whether they accept new ideas and changes. They usually have little formal power, but are regarded as good ‘ideas’ people who are receptive to change, and again need to be involved early in the project.
Resistance to change
Some resistance to change is inevitable, but this is particularly true with the introduction of systems associated with business process re-engineering, because the way work is performed and people’s job functions will change. If the rationale behind the change is not explained, then all the classic symptoms of resistance to change will be apparent. Resistance to change usually follows a set pattern. For example, Hopson and Scully (1997) have used the transition curve in Figure 12.6 to describe the change from when staff first hear about a system to when the change becomes accepted.
While outright hostility manifesting itself as sabotage of the system is not unheard of, what is more common is that users will try to project blame on to the system and will identify major faults where only minor bugs exist. This will obviously damage the reputation of the system and senior managers will want to know what went wrong with the project. Another problem that can occur if the system has not been introduced well is avoidance of the system, with users working around the system to continue their previous ways of working. Careful management is necessary to ensure that this does not happen. To summarise the way in which resistance to change may manifest itself, the following may be evident:
■ aggression – in which there may be physical sabotage of the system, deliberate entry of erroneous data or abuse of systems staff;
■ projection – where the system is wrongly blamed for difficulties encountered while using it;
System sponsors
System sponsors are senior managers or board members who are responsible for a system at a senior level in a company.
System owners
These are managers who are directly responsible for the operational use of a system.
Stakeholders
All staff who have a direct interest in the system.
Figure 12.6 Transition curve showing the reaction of staff through time from when change is first suggested. The curve plots sense of well-being and performance against time, moving through the stages: shock; denial; emotional turmoil (fear, anger, guilt, grief); acceptance – letting go; search for meaning; new ideas and strategies; and finally integration.
■ avoidance – withdrawal from or avoidance of interaction with the system, non-input of data, reports and enquiries ignored, or use of manual substitutes for the system.
There are many understandable reasons for people to resist the technological change that comes from the development of new information systems. These include:
■ social uncertainty;
■ limited perspectives and lack of understanding;
■ threats to the power and influence of managers (loss of control);
■ perception that the costs of the new system outweigh the benefits;
■ fear of failure, inadequacy or redundancy.
It is evident that training and education can be used to counter many of these issues. Additionally, other steps can be taken to reduce resistance to change, namely:
■ ensure early participation and involvement of users;
■ set realistic goals and raise realistic expectations of benefits;
■ build user-friendliness into the new system;
■ don’t promise too much, and deliver what was promised;
■ develop a reliable system that is easy to maintain;
■ ensure support of the various stakeholders;
■ bring about agreement through negotiation.
Training
Appropriate education and training are important in implementation. Many companies make the mistake of not training staff sufficiently for a new system. This is often because of the cost of training or of taking staff away from their daily work for several days. If companies do provide training, it is often the wrong sort. Practical, operational training in how to use the software, such as which menu options are available and which buttons to press, is common. What is sometimes missing is ideological training: an explanation of why the system is being brought in – why are the staff’s existing ways of working being overturned? This educational part of training is very important. Previous projects or examples of how systems have improved the business of competitors may be used here.
SUMMARY

Stage summary: systems build
Purpose: To produce a working system
Key activities: Programming (coding), system and user documentation, testing
Input: Design specification and requirements specification
Output: Preliminary working system which can be tested by end-users

Stage summary: systems implementation
Purpose: To install the system in the live environment
Key activities: Install computers and software, user acceptance test, changeover, sign-off
Input: Preliminary versions of software
Output: Tested, release version of software

Stage summary: systems maintenance
Purpose: To ensure the system remains available to end-users
Key activities: Monitoring errors, reviewing and fixing problems, releasing patches
Input: Tested, release version of software
Output: Revised version of software
1. The build stage of systems development involves programming, testing and transferring data from the old system to the new system.
2. The main types of testing are unit testing of individual modules, system testing of the whole system by developers and user acceptance testing by the business. Sufficient time for testing must be built in using a quality assurance system to ensure that the delivered system is of the right quality.
3. The implementation stage involves managing the changeover from the old system to the new system. There are several alternative changeover approaches that can be used together if required:
■ run the old and new systems in parallel;
■ a phased approach where different modules are gradually introduced;
■ cut over immediately to the new system;
■ pilot the system in one area or office before ‘rolling out’ on a larger scale.
4. Some of the main reasons that information systems projects may fail at the build or implementation stage include:
■ Forgetting the human issues. New systems are usually accompanied by a new way of working, so managers need to explain through training why the change is occurring and then train people adequately in the use of the system.
■ Cutting corners through using RAD. Some corners cannot be cut, especially in design, optimising system performance and testing. If insufficient time is spent on these activities, the system may fail. Documentation may also be omitted, which is serious during maintenance.
■ Computer resources are inadequate. The project managers need staff to check, for example, that the server can handle the load at critical times of the day, such as when scanning is occurring or at peak times in a call centre. Checks will also be made to ensure that system performance does not degrade as the number of users or customer records held increases.
■ Poor management of the change process. Staff who are involved with the new system should be trained so that they can use the software easily and understand the reasons for its introduction.
■ Lack of support from the top or from stakeholders. Top management and appropriate stakeholders must support the cultural changes necessary to introduce the new system.
■ Using a big-bang method of changeover. Using this approach is high-risk unless there has been extensive testing and methodical design.
5. The maintenance phase is concerned with managing the system once it is live. This will involve responding to errors as they are found. If serious, the problems will have to be solved immediately through issuing a ‘patch’ release to the system; otherwise they will be recorded for a later release.
6. A post-implementation review will occur to assess the success of the systems development project so that lessons are recorded for future projects.
7. Change management can be considered at the software, IS and organisational levels.
8. Software change management involves managing the process of modification to software thought to be necessary by business users or developers.
9. IS change management involves managing the change from the old to the new information system. The four main alternative methods of changeover are immediate cutover, parallel running, phased implementation and pilot system.
10. Organisational change management deals with managing changes to organisational processes, structures and their impact on organisational staff and culture. Business process management (BPM) provides a methodology for change management in the organisation.
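The unit testing referred to in point 2 can be illustrated with a short example. The module under test (`order_total`) is hypothetical, invented purely for this sketch; the point is that a unit test exercises one module in isolation, before system testing checks the modules working together and user acceptance testing checks the system against business requirements.

```python
import unittest

def order_total(prices, discount=0.0):
    """Hypothetical module under test: sums line prices and applies
    a fractional discount."""
    if not 0.0 <= discount < 1.0:
        raise ValueError("discount must be in [0, 1)")
    return round(sum(prices) * (1 - discount), 2)

class OrderTotalUnitTest(unittest.TestCase):
    """Unit tests: exercise the order_total module in isolation."""

    def test_no_discount(self):
        self.assertEqual(order_total([10.0, 5.5]), 15.5)

    def test_discount_applied(self):
        self.assertEqual(order_total([100.0], discount=0.2), 80.0)

    def test_invalid_discount_rejected(self):
        with self.assertRaises(ValueError):
            order_total([10.0], discount=1.5)
```

A developer would run such tests (for example with `python -m unittest`) repeatedly during the build stage, so that defects are caught before system and acceptance testing begin.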
EXERCISES

Self-assessment exercises

1. What are the main activities that occur in the build and implementation phases of a systems development project?
2. What is the difference between unit and system testing?
3. How can resistance to change among staff affect a new information system?
4. What are the most important factors in reducing resistance to change?
5. Why is it important to manage software change requests carefully?
6. What is the difference between the direct changeover method and the parallel changeover method?
7. What is the best option for an end-user to program a system?
8. What is the purpose of a post-implementation review?
9. What is the purpose of the concept of SOA?
Discussion questions
1. ‘All the different project changeover methods are likely to be used on any large project.’ Discuss.
2. ‘The most important aspect of software quality assurance is to make sure that bugs are identified during the testing phase.’ Discuss.
3. ‘Companies should aim to minimise the number of patch releases, provided that no serious system errors occur.’ Discuss.
4. ‘The combination of BPM and SOA is more powerful than either is alone.’ Discuss.
Essay questions

1. You are a business manager responsible for the successful implementation of a new information system. What problems would you anticipate from staff when the new system is introduced? What measures could you take to minimise these?
References

Boddy, D., Boonstra, A. and Kennedy, G. (2009) Managing Information Systems: Strategy and Organisation, 3rd edition, Financial Times Prentice Hall, Harlow
Davenport, T.H. (1993) Process Innovation: Re-engineering Work through Information Technology, Harvard Business School Press, Boston
Hammer, M. and Champy, J. (1993) Re-engineering the Corporation: A Manifesto for Business Revolution, HarperCollins, New York
Hopson, B. and Scully, M. (1997) Transitions: Positive Change in Your Life and Work, Prentice Hall, Harlow
Johnston, A.K. (2003) A Hacker’s Guide to Project Management, 2nd edition, Butterworth-Heinemann, Oxford
Jones, C. (2008) Applied Software Measurement: Global Analysis of Productivity and Quality, 3rd edition, McGraw-Hill, New York
Jorgenson, P. (1995) Software Testing: A Craftsman’s Approach, CRC Press, Boca Raton, FL
Kolb, D.A. and Frohman, A.L. (1970) ‘An organizational development approach to consulting’, Sloan Management Review, 12, 51–65
Lewin, K. (1972) ‘Quasi-stationary social equilibria and the problems of permanent change’, in N. Margulies and A. Raia (eds), Organizational Development: Values, Process and Technology, McGraw-Hill, New York, pp. 65–72
2. Discuss the advantages and disadvantages of the different methods of changeover from an old system to a new one. Which is the optimal method?

3. Discuss the philosophy and describe the tools of BPM.

Examination questions
1. Describe the direct changeover method. How does this differ from phased implementation?
2. What different classes of fault will a user be aiming to identify in a user acceptance test?
3. What are the three classical signs of resistance to change by end-users?
4. Distinguish between system testing and unit testing.
5. What different types of documentation will be used during the implementation phase of a project?
6. What elements of training should staff receive for a new system?
7. What is the purpose of volume testing?
8. Which criteria should be used to measure the successful outcome of a systems development project?
9. In the maintenance phase of the systems development lifecycle, why might an information system need to be maintained?
10. Briefly outline the considerations that a company needs to take into account in deciding between the two main methods of changeover to a new information system: direct and parallel running.
11. Evaluate the concept of BPM.
12. How could BPS and BAM work together?
Peppard, J. and Rowland, P. (1995) The Essence of Business Process Re-engineering, Prentice Hall, Hemel Hempstead
Regan, E.A. and O’Connor, B.N. (2001) End-user Information Systems: Implementing Individual and Work Group Technologies, 2nd edition, Prentice-Hall, Upper Saddle River, NJ
Schein, E. (1956) ‘The Chinese indoctrination program for prisoners of war’, Psychiatry, 19, 149–72
Schein, E. (1992) Organizational Culture and Leadership, Jossey-Bass, San Francisco
Willcocks, L. and Smith, G. (1995) ‘IT enabled business process reengineering: organisational and human resource dimension’, Strategic Information Systems, 4, 3, 279–301
Further reading
Erl, T. (2007) Service Oriented Architecture: Principles of Service Design, Prentice-Hall, Upper Saddle River, NJ.
Greasley, A. (2008) Enabling a Simulation Capability in the Organisation, Springer Verlag.
Hallows, J. (2005) Information Systems Project Management: How to Deliver Function and Value in Information Technology Projects, 2nd edition, Amacom, New York.
Kerzner, H. (2013) Project Management: A Systems Approach to Planning Scheduling and Controlling, 11th edition, John Wiley, New York.
Newcomer, E. and Lomow, G. (2005) Understanding SOA with Web Services, Addison Wesley, Upper Saddle River, NJ.
Smith, H. and Fingar, P. (2006) Business Process Management: The Third Wave, Meghan Kiffer, Tampa, FL.
Weske, M. (2012) Business Process Management: Concepts, Languages, Architectures, 2nd edition, Springer, Berlin, New York.
Web links
www.bitpipe.com A repository for white papers on many IT topics including systems testing, change management and business process management. Many of these are sponsored by vendors so research is not independent.
www.bpm.com Provides news and in-depth articles about both the business and technology perspectives of business process management.
www.bptrends.com Provides a source of news and information relating to all aspects of business process change, focused on trends, directions and best practices.
www.bpmi.org Business Process Management Institute. An introduction to the concept and specifications for modelling business processes.
www.computerweekly.com This online trade paper for the IT industry has many case studies of the problems that can occur if the build process is not managed adequately.
www.cio.com CIO.com for chief information officers and IS staff has many articles related to analysis and design topics in different research centres such as security.
www.service-architecture.com Consultancy website containing information on SOA.
www.scs.org The Society for Modeling and Simulation International. Conference details and links to journals and publications.
www.scs-europe.net The Society for Modeling and Simulation: European Council. European conference details and links to journals and publications.
www.stickyminds.com Portal with articles on software test, measurement and defect removal techniques.
CHAPTER
9 BIS project management
LEARNING OUTCOMES
After reading this chapter, you will be able to:
■ understand the main elements of the project management approach;
■ relate the concept of project management to the creation of BIS;
■ assess the significance of the different tasks of the project manager;
■ outline different techniques for project management.
MANAGEMENT ISSUES
Managers need to ensure that their BIS projects will be completed satisfactorily, whether they are directly responsible, or if the project management is delegated to another person in the organisation, or an external contractor. From a managerial perspective, this chapter addresses the following questions:
■ What are the success criteria for a BIS project?
■ What are the attributes of a successful project manager?
■ Which project management activities and techniques should be performed by the project manager for a successful outcome?
CHAPTER AT A GLANCE
MAIN TOPICS
■ The project management process
■ Steps in project management
■ A project management tool: network analysis
FOCUS ON . . .
■ A project management methodology: PRINCE2
CASE STUDIES
9.1 Putting an all-inclusive price tag on successful IT
9.2 Project management: lessons can be learned from successful delivery
M09_BOCI6455_05_SE_C09.indd 319 10/13/14 4:49 PM
Part 2 BUSINESS INFORMATION SYSTEMS DEVELOPMENT320
INTRODUCTION

Projects are unique, one-time operations designed to accomplish a specific set of objectives in a limited timeframe. Examples of projects include a building construction or introducing a new service or product to the market. In this chapter we focus on providing the technical knowledge that is necessary to manage information systems projects. Large information systems projects, like construction projects, may consist of many activities and must therefore be carefully planned and coordinated if a project is to meet its objectives.
The three key objectives of project management are shown in Figure 9.1. The job of project managers is difficult since they are under pressure to increase the quality of the information system within the constraints of fixed timescales, budgets and resources. Often it is necessary to make a compromise between the features that are implemented and the time and resources available – if the business user wants a particular new feature, then the cost and duration will increase or other features will have to be omitted.
A major issue in IT project management is the determination of a realistic assessment of the costs and benefits of an IT project. This information is required when deciding whether to proceed with the project and for making a reasonable assessment of project success. This issue is discussed in Case Study 9.1.
While it is difficult to control and plan all aspects of a BIS development project, the chance of success can be increased by anticipating potential problems and by applying corrective strategies. The PRINCE2 methodology is reviewed since it is used to assist in the delivery of BIS projects to time, cost and quality objectives. Network analysis techniques are also reviewed in this chapter, since they can be used to assist project planning and control activities by enabling the effects of project changes to be analysed.
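As a preview of the network analysis technique, the sketch below computes a project duration from a small activity network using a forward pass. The activity names and durations are invented for illustration; they are not taken from the chapter's worked examples.

```python
# Minimal forward-pass (critical path) calculation over a hypothetical
# activity network: name -> (duration in weeks, list of predecessors).
ACTIVITIES = {
    "analysis":  (3, []),
    "design":    (4, ["analysis"]),
    "build":     (6, ["design"]),
    "test_prep": (2, ["design"]),
    "testing":   (3, ["build", "test_prep"]),
}

def earliest_finish(name, acts, memo=None):
    """Earliest finish = duration + latest earliest-finish of all
    predecessors (0 if the activity has none)."""
    if memo is None:
        memo = {}
    if name not in memo:
        duration, preds = acts[name]
        start = max((earliest_finish(p, acts, memo) for p in preds), default=0)
        memo[name] = start + duration
    return memo[name]

def project_duration(acts):
    """Project duration is the longest path through the network."""
    return max(earliest_finish(a, acts) for a in acts)
```

Here the longest path runs through analysis, design, build and testing, giving a 16-week duration; re-running the calculation after changing one duration shows immediately how a slippage affects the whole project, which is exactly what network analysis is used for.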
Projects
Projects are unique, one-time operations designed to accomplish a specific set of objectives in a limited timeframe.
Figure 9.1 Three key elements of project management: time, cost and quality/features. The project manager must negotiate for more time, more people or fewer features.
CASE STUDY 9.1
Putting an all-inclusive price tag on successful IT
By Ron Barker

Failure to derive the expected benefits from IT systems is legendary. Yet organisations still fail to recognise or accept why this occurs and generally do little to address the root causes in any meaningful way.

The first place to look is the application of the Return on Investment (ROI) tool as the arbiter for benefits delivery and the subsequent plans for implementing the systems. An ROI is required by most organisations, but the tool is often applied without fully understanding all of the cost components (full disclosure).

By definition, IT projects tend to focus on dealing with the technical issues. It is these that get measured as the cost side of the change – usually the cost
of hardware and software with some allowance for training. Typically, costs are grossly underestimated (often by 40 per cent or more) by failing to consider precisely those factors that are needed to deliver the return.
ROI is a technical measure, taking expected returns and expected costs to determine the worth of the investment. The key word is ‘expected’. The reality, of course, is that the ROI calculation is no more than a forecast, based upon someone’s view of the costs and benefits. Realising the benefits forecast is where the hard work arises: there is often a drastic underestimate of the effort required to ‘make it happen’. The underestimates are generally in:
■ ensuring compliance with the business strategy;
■ aligning the people with the processes the business is changing to;
■ ensuring that behaviours are commensurate with the required new ways of working.
This assumes processes are being changed – otherwise where are the benefits coming from? This means there is an implicit assumption that people somewhere will be doing something differently. It is the need to ensure and facilitate this change that generates a high proportion of the total project costs. By including these costs some projects start to appear unprofitable. This, of course, is generally not in the interests of any systems suppliers. It may, however, stop some projects from getting off the ground and avoid some of the overspending we have seen in the past. If the way things are done in the business is being changed, then there is a need to understand what that change means. There is a range of implementation approaches taken by companies, including:
■ simple ROI and the ‘stuff it at ‘em’ approach that follows the principles of ‘if we tell them what to do and give them a bit of training then they’ll make it work’;
■ a considered approach that defines real business need and vision but then fails to communicate this through to the ‘what’s in it for me’ messages and thereby does not connect with the users;
■ development of a system that involves some users early and is well communicated to staff, but is not properly aligned to the organisation’s strategy and owned by specific, accountable people in the business.
Quite often, once the decision to invest is made, technology projects are devolved to the IT department, who are then responsible for overseeing delivery and implementation. Often these technically focused people are poorly qualified to understand the business nuances and may not have the required communications skills. Over and above this, who looks at the changes required in human behaviour? Who is addressing the motivational issues that will get the right people doing the right things?
A framework can be proposed to improve chances of success. This is based around the simple model of People, Process and Technology (PPT) with the added element of environment or context (PPTE). Context is the first parameter to get right. How does the development proposed relate to the business strategy? What is the desired outcome for the development, in business benefit terms, so that we know what is to be delivered and why? After thinking through the application needs and functions, the next useful question is how is it to be delivered? This should be viewed as a problem that the business deals with rather than abdicating it to the IT group.
Costs can then be assessed in outline for the whole PPTE model. This may include some scenario planning work fully to appreciate the different ways that the system may work, and identify the best options, prior to getting the technologists involved. A full disclosure ROI can then be calculated that takes all benefits and PPTE costs into account. This should include all of the people costs for effective change, from ownership and visions through stakeholder buy-in, to positive, user-led adoption. Decisions to proceed are now likely to be better informed and can be done on all fronts of process, technology and people readiness, perhaps with the ‘go’ decision requiring people readiness to be assured.
Truer costs will be understood and the full implications of benefits will emerge. The business’s responsible project owner will now have a budget that allows them to plan from concept to execution with holistic consideration of all PPTE elements. This will give positive adoption of systems that are pulled through by users who expect what they get and get what they expect. They will ‘pull’ the system through rather than having it shoved at them.
Source: Barker, R. (2007) Putting an all-inclusive price tag on successful IT. Financial Times. 30 May. © The Financial Times Limited 2012. All Rights Reserved.
QUESTION
Discuss the difficulties in estimating the costs and benefits of an IT project.
M09_BOCI6455_05_SE_C09.indd 321 10/13/14 4:49 PM
Part 2 Business information systems development 322
Project managers need to control projects, and to achieve this they tend to use frameworks based on previous projects they have managed. The systems development lifecycle (SDLC) or waterfall model (introduced in Chapter 7) provides such a framework. The majority of project plans will divide the project plan according to the SDLC phases.
Context: where in the SDLC does project planning occur?
By John Plummer
Projects equal pounds. Ever wondered what all those programme managers and project leaders do? Earn cash, it seems. According to project management training providers APM Group (www.apmgroup.co.uk) a quarter of the UK’s GDP comes from projects.

A full-time job. Many companies reorganised over the past decade to chase the project pound, which has had a profound impact on staff. ‘Projects are no longer “something extra”,’ says the website www.chiefprojectofficer.com, ‘they are the way work gets done at an increasing number of companies, from small start-ups to the likes of Hewlett Packard.’

Get trained. As income from projects has grown, so too has the market in accredited project management qualifications. More companies are sending staff on courses such as Prince2 (www.prince2.org.uk), a project management methodology owned by the Office of Government Commerce.

Define your objectives. Every project begins with a plan. When will we start? What do we need? Can we do it alone, or do we need help? How long will it take? What will it cost? ‘These are typical questions asked at the start of any project and the answers are the building blocks of project management,’ says the Prince2 website.

Expect to change. Projects that don’t evolve are the likeliest to wither so no matter how good your initial plan is, expect it to change. If you’re running a project that has been outsourced to your company, consider inviting a customer on to the project team to keep them informed, involved in decisions and better motivated.

The advantages. ‘Project management’ may sound as sexy as the words ‘Charles Kennedy lap-dancing’, but don’t be fooled. ‘One of the advantages of working in projects is that you never know what you will be doing in six months,’ says Andrew Delo at the project management advisers Provek (www.provek.co.uk). ‘If you like uncertainty, it is an exciting environment.’
Source: Plummer, J. (2005) The key to … project planning. The Times, 26 May.
Mini case study: The key to . . . project planning
THE PROJECT MANAGEMENT PROCESS
When undertaking a BIS project, the project manager will be held responsible for delivery of the project to the traditional objectives of time, cost and quality. Many BIS have the attributes of a large-scale project in that they consume a relatively large amount of resources, take a long time to complete and involve interactions between different parts of the organisation. To manage a project of this size and complexity requires a good overview of the status of the project in order to keep track of progress and anticipate problems. The use of a structured project management process can greatly improve the performance of IS projects, which, as stated earlier, have become well known for their tendency to run over budget or be late. The ubiquity of projects and the challenge of project management are outlined in the mini case study ‘The key to … project planning’.
Chapter 9 BIS project management 323
An initial project plan will usually be developed at the initiation phase (Chapter 7). This will normally be a high-level analysis that does not involve the detailed identification of the tasks that need to occur as part of the project. It may produce estimates for the number of weeks involved in each phase, such as analysis and design, and for the modules of the system, such as data-entry and reporting modules. If the project receives the go-ahead, a more detailed project plan will be produced before or as the project starts. This will involve a much more detailed identification of all the tasks that need to occur. These will usually be measured to the nearest day or hour and can be used as the basis for controlling and managing the project. The detailed project plan will not be produced until after the project has commenced, for two reasons:

1. It is not practical to produce the detailed project plan before the project starts, since the cost of producing a detailed project plan may be too high for it to be discarded if the project is infeasible.
2. A detailed project plan cannot be produced until the analysis phase has started, since estimates are usually based on the amount of work needed at the design and build phases of the project. This estimate can only be produced once the requirements for the system have been established at the analysis phase.
These points are often not appreciated and, we believe, are a significant reason for the failure of projects. Project managers are often asked to produce an estimate of the amount of time required to finish a project before the analysis phase, when insufficient information is at their disposal. Their answer should be:
I can give you an initial estimate and project plan based on similar projects of this scale at the initiation phase. I cannot give you a detailed, accurate project plan until the analysis is complete and the needs of the users and the business have been assessed. A detailed estimate can then be produced according to the amount of time it is likely to take to implement the users’ requirements.
Why do projects fail?
There have been a number of high-profile IT project failures in the UK public sector which underline the difficulties of IT project management. Despite these failures there are also a number of successes, which generally receive less publicity. One reason for public-sector IT failures may be the sheer size, and thus complexity, of the projects. It is also difficult to compare performance with private-sector IT project performance, as private companies are generally reluctant to disseminate knowledge regarding IT failures in order not to tarnish their reputation. Read Case Study 9.2 for more information on the success and failure of IT project management.
In general terms, Lyytinen and Hirschheim (1987) researched the reasons for information systems projects failing. They identified five broad areas which still hold true today:
■ Technical failure stemming from poor technical quality – this is the responsibility of the organisation’s IS function.
■ Data failure due to (a) poor data design, processing errors and poor data management; and (b) poor user procedures and poor data quality control at the input stage. Responsibility for the former lies with the IS function, while that for the latter lies with the end-users themselves.
■ User failure to use the system to its maximum capability – may be due to an unwillingness to train staff or user management failure to allow their staff full involvement in the systems development process.
■ Organisational failure, where an individual system may work in its own right but fails to meet organisational needs as a whole (e.g. while a system might offer satisfactory operational information, it fails to provide usable management information). This results from senior management’s failure to align IS to overall organisational needs.
■ Failure in the business environment can stem from systems that are inappropriate to the market environment, failure in IS not being adaptable to a changing business environment (often rapid change occurs), or a system not coping with the volume and speed of the underlying business transactions.
It is apparent that a diverse range of problems can cause projects to fail, ranging from technical problems to people management problems.
It is the responsibility of the project manager to ensure that these types of problems do not occur, by anticipating them and then taking the necessary actions to resolve them. This will involve risk management techniques, described in Chapter 8. Case Study 9.2 shows the type of problems that occur, the reasons behind them and advice for new project managers on how to manage projects successfully.
CASE STUDY 9.2
Project management: lessons can be learned from successful delivery
By Vanessa Kortekaas

The team behind Britain’s most high profile infrastructure project in recent times says there was no ‘magic ingredient’ in its successful delivery, but having £9.3bn available is likely to have helped.
The construction of the Olympic park in east London was widely hailed as a success long before the first athlete set foot in it last month.
While the fate of a few key venues is still unclear, there is a strong consensus on the park’s delivery. ‘On time and under budget’ is the most common appraisal batted around by politicians and Olympic organisers – though the latter description is not entirely accurate.
The success of the build has prompted some soul-searching about lessons that can be applied to future developments, partly to avoid repeats of projects that went wrong, such as Wembley Stadium.
‘In reputation terms [the Olympic project] was an opportunity, clearly,’ says Sir John Armitt, the man in charge of the body that built the park.
He says that the UK’s reputation for major construction and infrastructure developments has always been high, but admits that Wembley ‘didn’t go so well’.
The fact that the world was watching and judging as the Olympic park was erected on top of former industrial wasteland added more pressure to get it right. ‘The Olympic project is the most high profile project that you could imagine,’ he says.
Sir John says successful programme management starts from the client, in this case the Olympic Delivery Authority, which he led. He says the ODA knew what it valued, balancing cost and quality, and made that clear to its suppliers.
‘If you talk to the suppliers on the Olympics what they will say is that the ODA was an intelligent client, and a consistent client in contractual terms,’ says Sir John. Consistency, he says, reinforced to suppliers what was expected of them.
The ODA oversaw the procurement of more than £6bn worth of contracts to deliver the Olympic park, and arguably its most important contract was with its delivery partner – CLM, a consortium that includes CH2M Hill.
But the creation and structure of the ODA itself was also key to the success of the project.
‘One of the weakest points of London’s bid originally was the sense that the UK, and London in particular, had such a range of agencies and bodies that would need to be corralled together to make anything happen,’ says Tim Jones, a partner at Freshfields law firm who was heavily involved in the negotiations that spawned the ODA.
The ODA served as a ‘single governmental interface’ with planning authority, he says, removing the need for time-consuming negotiations with various local bodies.
It also assumed power for some aspects of Olympic transport and security. ‘The ODA was where all those functions were really brought together,’ says Mr Jones. Having a delivery partner enabled the relatively small ODA to function efficiently, he adds.
Project organisation

In order that a project is clearly defined and meets its objectives, it is important to define the roles of the staff involved and how those roles are organised within a particular project. The principal roles encountered in a project are outlined below. Note that these roles may be known by other names, be undertaken by more than one person, or be combined and allocated to a single person, depending on the organisational context and the size of the project.
Project sponsor
The project sponsor role is to provide a justification of the project to senior management. The role includes defining project objectives and time, cost and quality performance measures. The role also involves obtaining finance and appointing a project manager. The project sponsor is accountable for the success or failure of the project in meeting its business objectives.
Project manager
Appointed by the project sponsor, the project manager provides day-to-day management and ensures project objectives are met. This involves selection and management of the project team, monitoring of the time, cost and quality performance measures and informing the project sponsor and senior management of progress. In larger projects the project manager may delegate certain areas of the project (e.g. programming) to team leaders for day-to-day management.
That sentiment is echoed in an ODA report on the project, which says having a delivery partner gave it ‘flexibility and agility’ and their partnership underpinned its success.
But not everything went to plan. Sir John says the impact of the financial crisis on delivering the athletes’ village was the ‘biggest challenge’ the ODA faced during the construction period.
Both the village and the media centre had to be bailed out by the government.
‘You couldn’t get a sensible financial package out of the banks, so the decision was made to use contingency money to fund [the village] and then sell the asset as soon as we could after… to recover the money,’ says Sir John.
Some £557m was recouped last year, when a Qatari-backed consortium bought half of the homes in the village and several plots of land in the Olympic park. Triathlon Homes had already paid £268m for 1,379 units, which are earmarked for affordable housing.
The contingency money that was drawn on for the village was available only because the Olympics budget was revised to £9.3bn in 2007, from an original estimate of £2.4bn. The ODA spent £6.8bn delivering the Olympic park.
The immovable deadline also drove the project. The opening ceremony was always going to happen on July 27, and frequent missions from the International Olympic Committee served as a potent reminder.
Cross-party support was another unique factor from which the build benefited. ‘You don’t always get that [support],’ admits Sir John.
It is unclear whether local authorities would be willing to yield some elements of power again, under different circumstances.
Mr Jones is hopeful but unsure. ‘They only agreed to do it that time because it was the Olympics and they all wanted this to happen,’ he says. ‘Maybe it is rather optimistic to suspect that you would get such support for another project, that people would actually surrender their power.’
Source: Kortekaas, V. (2012) Project management: lessons can be learned from successful delivery. Financial Times. 19 August. © The Financial Times Limited 2012. All Rights Reserved.
QUESTION
Discuss the delivery of the Olympic Park in terms of time, cost and quality performance objectives.
Project user
The project user is the person or group of people who will be utilising the outcome of the information systems project. The user(s) should be involved in the definition and implementation of the system to ensure successful ongoing usage.
Other major roles that may be defined in the project include the following.
Quality manager
This role involves defining a plan containing procedures that ensure quality targets are met. Quality can be defined as ‘conformance to customer requirements’. Total quality management (TQM) attempts to establish a culture that supports quality. The European Foundation for Quality Management (EFQM) has provided a model that allows an organisation to quantify its progress towards a total quality business. For more information on quality management in relation to IS projects see Cadle and Yeates (2007).
Risk manager
All projects contain some risk that the investment made will not achieve the required business objectives. Risk management has become increasingly important in providing processes that attempt to reduce risk in complex and uncertain projects (see Chapter 8 for more details on risk management).
In many situations the project is organised by the main roles of project sponsor, project manager and project user. However, in complex or larger projects other organisational bodies may be encountered. A steering committee brings together a variety of interested people such as users, functional staff (e.g. finance, purchasing) and project managers in order that all stakeholder views are taken into consideration. At a lower level user groups may be instituted to represent the views of multiple potential users.
STEPS IN PROJECT MANAGEMENT
Before the planning process can commence, the project manager will need to determine not only the business aims of the project but also the constraints under which they must be achieved. Major constraints include the overall budget for project development, the timescale for project completion, staffing availability, and hardware and software requirements for system development and for running the live system. These constraints form the framework for the project and it is important that they be addressed at the beginning of the project planning process. It is usual, however, to prepare detailed plans of only the early stages of the project at this point.
The project management process includes the following main elements:
■ estimate;
■ schedule/plan;
■ monitoring and control;
■ documentation.
Estimation
Estimation allows the project manager to plan for the resources required for project execution through establishing the number and size of tasks that need to be completed in the project. This is achieved by breaking the project down repeatedly into smaller tasks until a manageable chunk of one to two days’ work is defined. Each task is given its own cost, time and quality objectives. It is then essential that responsibility be assigned for achieving these objectives for each particular task. This procedure should produce a work breakdown structure (WBS) that shows the hierarchical relationship between the project tasks. It is an important part of estimation. Figure 9.2 shows how the work on producing a new accounting system might be broken down into different tasks. Work on systems projects is usually broken down according to the different modules of the system. In this example, three levels of the WBS are shown for the accounts receivable module down to its printing function. All the other five modules of the system would also have similar tasks.
At the start of the project in the initiation or startup phase, an overview project plan is drawn up estimating the resources required to carry out the project. It is then possible to compare overall project requirements with available resources.
Projects can be resource-constrained (limited by the type of people or hardware resources available) or time-constrained (limited by the deadline).
The next step, after the project has been given the go-ahead, is a more detailed estimate of the resources needed to undertake the tasks identified in the work break-down structure. If highly specialised resources are required (e.g. skilled analysts), then the project completion date may have to be set to ensure that these resources are not overloaded. This is a resource-constrained approach. Alternatively, there may be a need to complete a project in a specific timeframe (e.g. due date specified by customer). In this case, alternative resources (e.g. subcontractors) may have to be utilised to ensure timely project completion. This is a time-constrained approach. This information can then be used to plan what resources are required and what activities should be undertaken over the lifecycle of the project.
Effort time and elapsed time
When estimating the amount of time a task will take, it is important to distinguish between two different types of time that need to be estimated. Effort time is the total amount of work that needs to occur to complete a task. Elapsed time indicates how long (e.g. in calendar days) the task will take – its duration. Estimating starts by considering the amount of effort time that needs to be put in to complete each task. Effort time is then converted into elapsed time.
Work breakdown structure (WBS)
This is a breakdown of the project or a piece of work into its component parts (tasks).
Project constraints
Projects can be resource-constrained (limited by the type of people, monetary or hardware resources available) or time- constrained (limited by the deadline).
Figure 9.2 Work breakdown structure (WBS) for an accounting system

Accounting system
├── Control program
├── General ledger
├── Sales order processing
├── Accounts payable
└── Accounts receivable
    ├── Data entry
    ├── View all receipts
    └── Print receipts
        ├── Print option dialog
        ├── Send to printer
        └── Print review
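The rollup that a WBS enables, summing leaf-task estimates up through the module level, can be sketched as a nested structure. This is a minimal illustration: the module names follow Figure 9.2, but the effort figures and the `rollup` helper are invented for this sketch.

```python
# Hypothetical fragment of the Figure 9.2 WBS. Leaf values are invented
# effort estimates in days; a dict node groups its child tasks.
wbs = {
    "Accounts receivable": {
        "Print receipts": {
            "Print option dialog": 1.5,
            "Send to printer": 1.0,
            "Print review": 2.0,
        },
        "View all receipts": 2.0,
        "Data entry": 1.5,
    },
}

def rollup(node) -> float:
    """Sum leaf-task effort up through the hierarchy."""
    if isinstance(node, dict):
        return sum(rollup(child) for child in node.values())
    return node

print(rollup(wbs))  # 8.0 days for the accounts receivable module
```

Summing upwards like this is what lets the overview plan at initiation be compared with the detailed estimates produced later.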
Effort and elapsed time
Effort time is the total amount of work that needs to occur to complete a task. The elapsed time indicates how long in time (such as calendar days) the task will take (duration).
Elapsed time indicates how long the task will take in real-time measures such as months or days. Effort time does not usually equal elapsed time, since if a task has more than one worker the elapsed time will be less than the effort time. Conversely, if workers on a task are also working on other projects, then they will not be available all the time and the elapsed time will be longer than the effort time. An additional factor is that different workers may work at different speeds: a productive worker will need less elapsed time than an inexperienced worker. These constraints on elapsed time can be formalised in a simple equation:
Elapsed time = Effort time ÷ ((Availability % / 100) × (Work rate % / 100))
The equation indicates that if the availability or work rate of a worker is less than 100 per cent, the elapsed time will increase proportionally, since availability and work rate are the denominators on the right-hand side of the equation. The equation will need to be applied for each worker, who may have different availabilities and work rates. These factors can be entered into a project management package, but to understand the principles of estimation better the activity on project planning should be attempted (see Activity 9.1 below).
From the example in the activity, it can be seen that several stages are involved in estimation:
1. estimate effort time for average person to undertake task;
2. estimate different work rates and availability of staff;
3. allocate resources (staff) to task;
4. calculate elapsed time based on number of staff, availability and work rate;
5. schedule task in relation to other tasks.
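Stages 2–4 above can be sketched in code. This is a minimal illustration of the elapsed-time equation extended to a team, assuming each worker's effective contribution simply adds to the others' (an assumption of this sketch, not something the chapter states); `Worker` and `elapsed_days` are invented names.

```python
from dataclasses import dataclass

@dataclass
class Worker:
    availability: float  # fraction of working time spent on this project (0-1)
    work_rate: float     # productivity relative to an average worker

def elapsed_days(effort_days: float, workers: list) -> float:
    """Convert effort time into elapsed (calendar) time.

    Each worker contributes availability * work_rate 'effective days'
    per elapsed day; contributions are assumed to be additive.
    """
    effective_rate = sum(w.availability * w.work_rate for w in workers)
    return effort_days / effective_rate

# A 10-day task given to one half-time worker at full productivity:
print(elapsed_days(10, [Worker(availability=0.5, work_rate=1.0)]))  # 20.0

# The same task shared with a full-time but slower colleague:
print(elapsed_days(10, [Worker(0.5, 1.0), Worker(1.0, 0.8)]))  # ~7.7
```

With one worker this reduces exactly to the equation above; with several it shows why adding staff shortens elapsed time less than proportionally when their availability or work rate is below 100 per cent.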
Cadle and Yeates (2007) provide the following techniques for estimating the human resource and capacity requirements for the different stages of an IS project:
1. Estimating the feasibility study. This stage will not usually be estimated in detail, since it will occur at the same time as or before a detailed project estimate is produced. The feasibility stage consists of tasks such as interviewing, writing up interview information and report writing in order to assess the financial, technical and organisational acceptability of the project. The estimate will depend greatly on the nature of the project, but also on the skills and experience of the staff involved. Thus it is important to keep records of previous performance of personnel for this activity in order to improve the accuracy of future estimates.
2. Estimating analysis and design phases. The analysis phase will typically involve collection of information about the operation of current systems and the specification of requirements for the new system. This will lead to the functional requirements specification, defining the new system in terms of its business specification. The design phase will specify the new computer-based system in terms of its technical content. This will need to take into account organisational policies on design methodologies and hardware and software platforms. In order to produce an accurate estimate of the analysis and design phases, it is necessary to produce a detailed description of each task involved. As in the feasibility stage, time estimates will be improved if timings are available for previous projects undertaken.
3. Estimating build and implementation. This stage covers the time and resources needed for the coding, testing and installation of the application. The time taken to produce a program will depend mainly on the number of coding statements required and the complexity of the program. The complexity of the coding will generally increase with the size of the program and will also differ for the type of application. A lookup table can be derived from experience to give the estimated coding rate dependent on the complexity of the project for a particular development environment. This is discussed in more detail below.
Estimating tools
Statistical methods can be used when a project is large (and therefore complex) or novel. This allows the project team to replace a single estimate of duration with a range within which they are confident the real duration will lie. This is particularly useful for the early stage of the project when uncertainty is greatest. The PERT approach described later in this chapter allows optimistic, pessimistic and most likely times to be specified for each task – from these a probabilistic estimate of project completion time can be computed.
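The three-point calculation behind this approach is commonly computed with the beta-distribution approximation below; the (O + 4M + P)/6 weighting is the standard PERT convention rather than something derived in this chapter.

```python
def pert_estimate(optimistic: float, most_likely: float, pessimistic: float):
    """Return the PERT expected duration and its standard deviation.

    Uses the conventional beta-distribution approximation:
    expected = (O + 4M + P) / 6, standard deviation = (P - O) / 6.
    """
    expected = (optimistic + 4 * most_likely + pessimistic) / 6
    std_dev = (pessimistic - optimistic) / 6
    return expected, std_dev

# A task estimated at 4 days optimistic, 6 likely, 14 pessimistic:
print(pert_estimate(4, 6, 14))  # (7.0, ~1.67)
```

The standard deviation is what allows a range, rather than a single figure, to be reported for early-stage estimates, as the paragraph above suggests.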
The most widely used economic model is the constructive cost model (COCOMO), described by Boehm (1981) and first proposed by staff working at US consultancy Doty Associates. The constructive cost model is used to estimate the amount of effort required to complete a project on the basis of the estimated number of lines of program code. Based on an analysis of software projects, the model attempts to predict the effort required to deliver a project based on input factors such as the skill level of staff. A simplified version of the model is:
WM = C × (KDSI)^K × EAF
where WM = number of person-months, C = one of three constant values dependent on development mode, KDSI = thousands of delivered source instructions (delivered source lines of code divided by 1,000), K = one of three constant values dependent on development mode, EAF = effort adjustment factor.
The three development modes or project types are categorised as organic (small development teams working in a familiar environment), embedded (where constraints are made by existing hardware or software) and semi-detached, which lies somewhere between the two extremes of organic and embedded. In order to increase the accuracy of the model, more detailed versions of COCOMO incorporate cost drivers such as the attributes of the end product and the project environment. The detailed version of the model calculates the cost drivers for the product design, detailed design, coding and unit test, and integration and test phases separately.
These techniques may take a considerable amount of time to arrive at a reasonably accurate estimate of personnel time required. However, since the build phase will be a major part of the development budget, it is important to allocate time to undertake detailed estimation.
The COCOMO method derives the time estimates it produces from an estimate of the number of lines of programming code to be written. A method of estimating the number of lines of code was developed by Alan Albrecht of IBM (Albrecht and Gaffney, 1983). Function point analysis is based on counting the number of user functions the application will have. It is possible to do this in detail after the requirements for the application have been defined. The five user function categories are:
1. number of external input types;
2. number of external output types;
3. number of logical internal file types;
4. number of external interface file types;
5. number of external enquiry types.
Each of these types of input and output is then weighted according to its complexity and additional factors applied according to the complexity of processing. The function point estimate can be compared to the function point count of previous completed information systems to give an idea of the number of lines of code and length of time that are expected.
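The counting step can be sketched as follows. The weights used here are the 'average complexity' values commonly cited for Albrecht's method, and the counts are invented; a real count grades each item's complexity individually and then applies the processing-complexity adjustment.

```python
# Commonly cited 'average complexity' weights for the five user
# function categories (illustrative; verify against the method's
# published tables before use).
WEIGHTS = {
    "external_inputs": 4,
    "external_outputs": 5,
    "internal_files": 10,
    "interface_files": 7,
    "external_enquiries": 4,
}

def unadjusted_function_points(counts: dict) -> int:
    """Weight and sum the counts for each user function category."""
    return sum(WEIGHTS[category] * n for category, n in counts.items())

# Invented counts for a small data-entry module:
counts = {
    "external_inputs": 6,
    "external_outputs": 4,
    "internal_files": 2,
    "interface_files": 1,
    "external_enquiries": 3,
}
print(unadjusted_function_points(counts))  # 83
```

The resulting figure is then compared against the function point counts of completed systems, as the text describes, to infer likely code size and duration.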
Constructive cost model (COCOMO)
A model used to estimate the amount of effort required to complete a project on the basis of the estimated number of lines of program code.
Function point analysis
A method of estimating the time it will take to build a system by counting up the number of functions and data inputs and outputs and then comparing to completed projects.
Note that both the COCOMO and function point analysis techniques were developed before the widespread use of applications with graphical user interfaces, interactive development environments for ‘graphical programming’, rapid applications development (RAD) and client/server databases to store information. These new techniques have made it faster to develop applications and the original data sets and principles on which these models are based have been updated to account for this. In order to take account of developments in software and software development methodologies COCOMO II has been developed (Boehm et al., 2001).
Scheduling and planning
Scheduling is determining when project activities should be executed. The finished schedule is termed the project plan.
Resource allocation is part of scheduling. It involves assigning resources to each task. Once the activities have been identified and their resource requirements estimated, it is necessary to define their relationship to one another. There are some activities that can only begin when other activities have been completed. This is termed a serial relationship and is shown graphically in Figure 9.3.
The execution of other activities may be totally independent and thus they have a parallel relationship, as shown graphically in Figure 9.4. Here, after the design phase, three activities must occur in parallel before implementation can occur.
For most significant projects there will be a range of alternative schedules which may meet the project objectives.
For commercial projects, computer software will be used to assist in diagramming the relationship between activities and calculating network durations. From a critical path network and with the appropriate information, it is usually possible for the software automatically to generate Gantt charts, resource loading graphs and cost graphs, which are discussed later in the chapter. Project management software, such as Microsoft Project, can be used to assist in choosing the most feasible schedule by recalculating resource requirements and timings for each operation. The network analysis section of this chapter provides more information on project scheduling techniques.
Scheduling
Scheduling involves determining when project activities should be executed. The finished schedule is termed the project plan.
Resource allocation
This activity involves assigning a resource to each task.
Figure 9.3 Serial relationship of activities
[Design → Code → Test]
Figure 9.4 Parallel relationship of activities
[Design → Code, Write documentation and Procure hardware (in parallel) → Test]
Chapter 9 BIS Project Management 331
The scenario
You are required to construct a project plan for the following BIS development project. Your objective is to schedule the project to run in the shortest time possible. The plan should include all activities, the estimated effort and elapsed time, and who is to perform each activity. In addition, it is necessary to indicate the sequence in which all the tasks will take place. The programs can be scheduled in any order, but for each program the design stage must come first, followed by the programming and finally the documentation.
Within the context of the exercise, you can assume that the detailed systems analysis has already been carried out and that it is now necessary to perform the design, programming and documentation activities. For the purposes of this exercise, we will not include the testing and implementation phases.
Present your project plan in the form of a Gantt chart (see Figure 9.10 later) showing each task, the sequence in which tasks will be performed, the estimated effort and elapsed time and the resource allocated to each task.
The activities
There are five programs in the system. Each has a different level of difficulty:

■ Program 1: Difficult
■ Program 2: Easy
■ Program 3: Moderate
■ Program 4: Moderate
■ Program 5: Difficult
For each level of difficulty, the design, programming and documentation tasks take different amounts of effort time:
Design
■ Easy programs: 1 day
■ Moderate programs: 2 days
■ Difficult programs: 4 days

Programming
■ Easy programs: 1 day
■ Moderate programs: 3 days
■ Difficult programs: 6 days

Documentation
■ Easy programs: 1 day
■ Moderate programs: 2 days
■ Difficult programs: 3 days
Resources
In order to complete the project plan, you need to know what resources you have available. For each resource, there are two variables:
■ Work rate. This describes the speed at which the resource works (i.e. a work rate of 1.0 means that a task scheduled to take one day will take one day to complete satisfactorily; a work rate of 1.5 means that a task scheduled for three days should only take two days, etc.).
■ Availability. Each resource will be available for certain amounts of time during the week. 100% availability = 5 days per week, 50% availability = 2.5 days per week, etc.
In planning your project, work to units of half a day. For simplicity, any task which requires a fraction of half a day should be rounded up (e.g. 1.6 days should be rounded up to 2 days). Also, a resource can only be scheduled for one task at any one time!
Activity 9.1 Project planning exercise
Resource availability
Systems designer 1 (SD1) ■ Work rate 1.0 ■ Availability 100%
Systems designer 2 (SD2) ■ Work rate 1.5 ■ Availability 40%
Systems designer 3 (SD3) ■ Work rate 0.5 ■ Availability 50%
Programmer 1 (P1) ■ Work rate 2.0 ■ Availability 40%
Programmer 2 (P2) ■ Work rate 1.0 ■ Availability 100%
Programmer 3 (P3) ■ Work rate 0.5 ■ Availability 60%
Technical author 1 (TA1) (to do the documentation) ■ Work rate 1.0 ■ Availability 60%
Technical author 2 (TA2) ■ Work rate 0.5 ■ Availability 100%
Technical author 3 (TA3) ■ Work rate 2.0 ■ Availability 40%
Tips
1. This exercise will be easier if you structure the information well. You could do this by producing three matrices for the design, programming and documentation tasks. Each of them should show across the columns three different tasks for easy, moderate and difficult programs. Each row should indicate how long the different types of workers will take to complete the task.
2. To calculate the length of elapsed time for each cell in the matrix, it is easiest to use this relationship:

   Elapsed time = Effort time ÷ (Work rate × Availability % / 100)
3. A calculator may help!
4. When drawing the Gantt chart, you may want to put your best people on the most difficult tasks, as you would on a real project.
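The relationship in tip 2 can be sketched in a few lines of Python. This assumes the relationship reads as effort divided by the product of work rate and fractional availability, which matches the work-rate and availability definitions above; the rounding rule is the half-day rule from the exercise, and the example resource/task pairings are illustrative.

```python
import math

def elapsed_days(effort_days, work_rate, availability):
    """Elapsed calendar time for a task, rounded UP to the nearest half day.

    work_rate: 1.0 = normal speed; availability: 1.0 = full-time (5 days/week).
    """
    raw = effort_days / (work_rate * availability)
    return math.ceil(raw * 2) / 2  # round up to the next half day

# Programmer 1 (work rate 2.0, availability 40%) on a difficult program (6 days' effort):
print(elapsed_days(6, 2.0, 0.40))  # 7.5

# Systems designer 2 (work rate 1.5, availability 40%) designing a difficult program (4 days):
print(elapsed_days(4, 1.5, 0.40))  # 7.0
```

Building the three matrices in tip 1 is then a matter of calling this function for each resource and task type.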
When a project is under way, its objectives of cost, time and quality in meeting targets must be closely monitored. Monitoring involves ensuring that the project is working to plan once it has started. This should occur daily for small-scale tasks or weekly for combined activities. Control or corrective action will occur if the performance measures deviate from plan. It is important to monitor and assess performance as the project progresses, in order that corrective action can be taken before it deviates from plan to any great extent. Milestones (events that need to happen on a particular date) are defined so that performance against objectives can be measured (e.g. end of analysis, production of first prototype).
Computer project management packages can be used to automate the collection of project progress data and production of progress reports.
Achieving time, cost and quality objectives
As stated earlier, the project should be managed to achieve the defined objectives of time, cost and quality. The time objective is met by ensuring that the project is monitored in terms of execution of tasks within time limits. Corrective action is taken if a variance between actual and planned time is observed. The cost objective is achieved by the use of human resource and computing resource budgets and, again, variation between estimated and actual expenditure is noted and necessary corrective action taken. To ensure that quality objectives are met it is necessary to develop a quality plan which contains a list of items deliverable to the customer. Each of these will have an associated quality standard and procedure for dealing with a variance from the required quality level defined in the quality plan.
Project structure and size
The type of project structure required will be dependent on the size of the team undertaking the project. Projects with up to six team members can simply report directly to a project leader at appropriate intervals during project execution. For larger projects requiring up to 20 team members, it is usual to implement an additional tier of management in the form of team leaders. The team leader could be responsible for either a phase of the development (e.g. analysis, design) or a type of work (e.g. applications development, systems development). For any structure it is important that the project leader ensures consistency across development phases or development areas as appropriate. For projects with more than 20 members, it is likely that additional management layers will be needed in order to ensure that no one person is involved in too much supervision.
Reporting project progress
The two main methods of reporting the progress of a project are by written reports and verbal reports at meetings of the project team. It is important that a formal statement of progress is made in written form, preferably in a standard report format, to ensure that everyone is aware of the current project situation. This is particularly important when changes to specifications are made during the project. In order to facilitate two-way communication between team members and team management, regular meetings should be arranged by the project manager. These meetings can increase the commitment of team members by allowing discussion of points of interest and dissemination of information on how each team’s effort is contributing to the overall progression of the project.
Monitoring and control
Monitoring and control
Monitoring involves ensuring the project is working to plan once it is started. Control is taking corrective action if the project deviates from the plan.
Ensuring adequate project documentation is a key aspect of the role of the project manager. Software development is a team effort and documentation is necessary to disseminate design information throughout the team. Good documentation reduces the expense of maintenance after project delivery. Also, when members of the team leave the department or organisation, the coding they have produced must be understandable to new project members. Often a development methodology will require documentation at stages during the project in a specific format. Thus documentation must be an identified task in the development effort and a standard document format should be used throughout the project (this may be a standard such as BS 5750 or ISO 9001).
Documents that may be required include the following:
■ Workplan/task list. For each team member a specified activity with start and finish dates and relevant coding standard should be defined.
■ Requirements specification. This should clearly specify the objectives and functions of the software.
■ Purchase requisition forms. Required if new software and hardware resources are needed from outside the organisation.
■ Staffing budget. A running total of personnel costs, including expenses and subsistence payments. These should show actual against predicted expenditure for control purposes.
■ Change control documents. To document any changes to the project specification during the project. A document is needed to highlight the effect on budgets and timescales of a change in software specifications.
Documentation
Project documentation
Documentation is essential to disseminate information during project execution and for reference during software maintenance.
FOCUS ON… A PROJECT MANAGEMENT METHODOLOGY: PRINCE2
PRINCE2 (published in 1996) is a process-based project management methodology based on PRINCE (published in 1989), which stands for Projects in Controlled Environments. The development of PRINCE2 involved a consortium of 150 European organisations and is a de facto standard used by the UK government and widely used by the private sector, both in the UK and internationally. The PRINCE2 method for managing projects is designed to help you work out who should be involved and what they are responsible for. It also provides a set of processes to work through and explains what information you should be gathering along the way.
The key features of PRINCE2 are:
■ its focus on business justification;
■ a defined organisation structure for the project management team;
■ its product-based planning approach;
■ its emphasis on dividing the project into manageable and controllable stages;
■ its flexibility to be applied at a level appropriate to the project.
Thus the PRINCE2 methodology means managing the project in a logical organised way, following defined steps. The PRINCE2 methodology says that a project should have an:
■ organised and controlled start, i.e. organise and plan things properly before leaping in;
■ organised and controlled middle, i.e. when the project has started, make sure it continues to be organised and controlled;
■ organised and controlled end, i.e. when you’ve got what you want and the project has finished, tidy up the loose ends.
PRINCE2
A process-based methodology for effective IS project management.
The PRINCE2 Process Model
In order to describe what a project should do when, PRINCE2 has a series of processes which cover all the activities needed on a project from starting up to closing down. The PRINCE2 Process Model (Figure 9.5) defines each process with its key inputs and outputs together with the specific objectives to be achieved and activities to be carried out.
Each element in the process model will now be described.
Directing a project
This process is aimed at the project board and involves the management and monitoring of the project via reports and controls from the startup of the project until its closure. Key processes for the project board are:
■ initiation (starting the project off on the right foot);
■ stage boundaries (commitment of more resources after checking results so far);
■ ad hoc direction (monitoring progress, providing advice and guidance, reacting to exception situations);
■ project closure (confirming the project outcome and controlled close).
Starting up a project
This is a pre-project process designed to ensure that the prerequisites for initiating the project are in place. The work of the process is built around the production of three elements:
■ ensuring that the information required for the project team is available;
■ designing and appointing the project management team;
■ creating the initiation stage plan.
Initiating a project
The objectives of initiating a project include:
■ agree whether or not there is sufficient justification to proceed with the project;
■ establish a stable management basis on which to proceed;
■ document and confirm that an acceptable business case exists for the project;
■ agree to the commitment of resources for the first stage of the project.
Figure 9.5 PRINCE2 Process Model
[Processes: Starting up a project; Initiating a project; Directing a project; Controlling a stage; Managing product delivery; Managing stage boundaries; Planning; Closing a project]
Managing stage boundaries
This process provides the project board with key decision points on whether to continue with the project or not. The objectives of the process include:
■ assure the project board that all deliverables planned in the current stage plan have been completed as defined;
■ provide the information needed for the project board to assess the continuing viability of the project;
■ provide the project board with information needed to approve the current stage’s completion and authorise the start of the next stage, together with its delegated tolerance level;
■ record any measurements or lessons which can help later stages of this project and/or other projects.
Controlling a stage
This process describes the monitoring and control activities of the project manager involved in ensuring that a stage stays on course and reacts to unexpected events. Throughout a stage there will be a cycle consisting of:
■ authorising work to be done;
■ gathering progress information about that work;
■ watching for changes;
■ reviewing the situation;
■ reporting;
■ taking any necessary corrective action.
Managing product delivery
The objective of this process is to ensure that planned products are created and delivered by:
■ making certain that work on products allocated to the team is effectively authorised and agreed, accepting and checking work packages;
■ ensuring that the work is done;
■ assessing work progress and forecasts regularly;
■ ensuring that completed products meet quality criteria.
Closing a project
The purpose of this process is to execute a controlled close to the project. The process covers the project manager’s work to wrap up the project either at its end or at premature close. Most of the work is to prepare input to the project board to obtain its confirmation that the project may close. The objectives of closing a project therefore include:
■ check the extent to which the objectives or aims set out in the project initiation document (PID) have been met;
■ confirm the extent of the fulfilment of the PID and the customer’s satisfaction with the deliverables;
■ make any recommendations for follow-on actions;
■ prepare an end project report;
■ notify the host organisation of the intention to disband the project organisation and resources.
Planning
Planning is a repeatable process and plays an important role in other processes, the main ones being:
■ planning an initiation stage;
■ planning a project;
■ planning a stage;
■ producing an exception plan.
PRINCE2 organisation
The following are the main project management roles in PRINCE2.
Project manager
The project manager is responsible for organising and controlling the project. The project manager will select people to do the work on the project and will be responsible for making sure that the work is done properly and on time. The project manager also draws up the project plans that describe what the project team will actually be doing and when they expect to finish.
Customer, user and supplier
The person who is paying for the project is called the customer or executive. The person who is going to use the results or outcome of the project is called the user. On some projects the customer and user may be the same person. The person who provides the expertise to do the actual work on the project is called the supplier or specialist. All these people need to be organised and coordinated so that the project delivers the required outcome within budget, on time and to the appropriate quality.
Project board
Each PRINCE2 project will have a project board made up of the customer (or executive), someone who can represent the user side and someone to represent the supplier or specialist input. In PRINCE2 the people are called customer, senior user and senior supplier respectively. The project manager reports regularly to the project board, keeping them informed of progress and highlighting any problems they can foresee. The project board is responsible for providing the project manager with the necessary decisions for the project to proceed and to overcome any problems.
Project assurance
Providing an independent view of how the project is progressing is the job of project assurance. In PRINCE2, there are three views of assurance: business, user and specialist. Each view reflects the interests of the three project board members. Assurance is about checking that the project remains viable in terms of costs and benefits (business assurance), checking that the users’ requirements are being met (user assurance), and that the project is delivering a suitable solution (specialist or technical assurance). On some projects, the assurance is done by a separate team of people called the project assurance team, but the assurance job can be done by the individual members of the project board themselves.
Project management methodologies compared
In addition to PRINCE2 many other project management methodologies (as opposed to development process methodologies such as SSADM, JSD and STRADIS) exist, such as BPMM (www.bates.ca) and IDEAL (www.sei.cmu.edu/ideal). In addition, methodologies have been developed ‘in-house’ by companies for their own use or have been developed commercially and require a licence fee before more information is released.
Critical path diagrams are used extensively during scheduling and monitoring to show the planned activities of a project and the dependencies between these activities. For example, network analysis will show that activity C can only take place when activity A and activity B have been completed. Once a network diagram has been constructed, it is possible to follow a sequence of activities, called a path, through the network from start to end. The length of time it takes to follow the path is the sum of the durations of all the activities on that path. The path with the longest duration gives the project completion time. This is called the critical path, because any change in the duration of any activity on this path will cause the whole project duration to become shorter or longer.

Activities not on the critical path have a certain amount of slack time, within which an activity can be delayed, or its duration lengthened, without affecting the overall project duration. The amount of slack time is the difference between the duration of the path the activity is on and the critical path duration. By definition, all activities on the critical path have zero slack. Note that there must be at least one critical path for each network and there may be several critical paths. The significance of the critical path is that if any task on the path finishes later than its earliest finish time, the overall network time will increase by the same amount, putting the project behind schedule. Thus planning and control activities should focus on ensuring that tasks on the critical path remain on schedule.
Critical path network diagrams are sometimes called ‘PERT charts’, but the correct technical meaning of this term is detailed in a later section.
An important function of a company’s information systems manager is to review which methodologies should be employed to improve the quality of its systems development processes. Some methodologies may add a structure to a company process which improves its efficiency. Others may enforce restrictions which reduce the efficiency of the process and increase the cost and duration of the project.
You are a project manager in a company of 400 people. The company has a history of developing systems that meet the needs of the end-users well, but can sometimes be over six months late. The managing director has decided that the project will be conducted by internal IS development staff. Your role as the owner of the system in which the project will be implemented is to manage the project using other resources, such as the IS department, as you see fit.
QUESTION
From the information given in the preceding section and using any relevant books, decide whether to use a formal project methodology such as PRINCE2 or IDEAL or a different approach. Justify your answer, giving a brief evaluation of what you perceive as the advantages and disadvantages of the methodology.
Activity 9.2 An assessment of PRINCE2
A PROJECT MANAGEMENT TOOL: NETWORK ANALYSIS
Critical path
Activities on the critical path are termed critical activities. Any delay in these activities will cause a delay in the project completion time.
The critical path method (CPM)
Once the estimation stage has been completed, the project activities should have been identified, activity durations and resource requirement estimated and activity relationships identified. Based on this information, the critical path diagrams can be constructed using either the activity-on-arrow (AOA) approach or the activity-on-node (AON) approach. The issues involved in deciding which one to utilise will be discussed later. The following description of critical path analysis will use the AON method.
The critical path method (CPM) uses critical path diagrams to show the relationships between activities in a project.
The activity-on-node (AON) method
In an activity-on-node network, the diagramming notation shown in Figure 9.6 is used. Each activity task is represented by a node with the format shown in the figure. Thus a completed network will consist of a number of nodes connected by lines, one for each task, between a start and an end node, as shown in Figure 9.7.
The diagram illustrates sequential activities such as from activity B to activity C and parallel activities such as activities D, F and G. Once the network diagram has been drawn using the activity relationships, the node information can be calculated, starting with the earliest start and finish times. These are calculated by working from left to right through the network, in the ‘forward pass’. Once the forward pass has been completed, it is possible to calculate the latest start and finish times for each task. This is achieved by moving right to left along the network, backward through time, in the ‘backward pass’. Finally, the slack or float value can be calculated for each node by taking the difference between the earliest start and latest start (or earliest finish and latest finish) times for each task. There should be at least one critical path (there may be more than one) running through the network where each task has a slack value of 0. In Figure 9.7 the critical path runs through the activities B, C, E, F and H. Any delay to these critical activities will increase the current project duration of 83. The critical path represents the sequence of activities that takes the longest time to complete and thus defines the shortest time in which the project can be completed. Activities not on the critical path have slack time; for example, activity A has a slack time of 32, which represents how late the activity can be without affecting the overall project duration.
Critical path method (CPM)
Critical path diagrams show the relationship between activities in a project.
Figure 9.6 Activity-on-node notation
[Node layout – top: activity number/letter and activity description; middle row: earliest start, duration, earliest finish; bottom row: latest start, slack/float, latest finish]
Figure 9.7 Activity-on-node network diagram
[Network of activities A–H between START and END nodes. Durations: A 30, B 15, C 30, D 15, E 10, F 14, G 7, H 14. The critical path B–C–E–F–H has zero slack and gives a project duration of 83; A has slack 32, D slack 9 and G slack 17.]
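The forward and backward passes described above can be sketched in a short Python program. The durations and dependency relationships below are an interpretation of the Figure 9.7 network, so treat them as illustrative rather than definitive.

```python
# Forward and backward passes over an activity-on-node (AON) network.
# Durations and dependencies are read from Figure 9.7 (illustrative).
durations = {"A": 30, "B": 15, "C": 30, "D": 15, "E": 10, "F": 14, "G": 7, "H": 14}
successors = {"A": ["G"], "B": ["C"], "C": ["D", "E", "G"],
              "D": ["H"], "E": ["F"], "F": ["H"], "G": ["H"], "H": []}
predecessors = {t: [p for p, succ in successors.items() if t in succ] for t in durations}

# Forward pass: earliest start (es) and earliest finish (ef), in topological order.
es, ef, order = {}, {}, []
remaining = set(durations)
while remaining:
    ready = sorted(t for t in remaining if all(p in ef for p in predecessors[t]))
    for t in ready:
        es[t] = max((ef[p] for p in predecessors[t]), default=0)
        ef[t] = es[t] + durations[t]
        order.append(t)
        remaining.remove(t)

project_duration = max(ef.values())  # length of the longest (critical) path

# Backward pass: latest finish (lf) and latest start (ls), in reverse order.
lf, ls = {}, {}
for t in reversed(order):
    lf[t] = min((ls[s] for s in successors[t]), default=project_duration)
    ls[t] = lf[t] - durations[t]

# Slack = latest start - earliest start; zero-slack tasks form the critical path.
slack = {t: ls[t] - es[t] for t in durations}
critical_path = [t for t in order if slack[t] == 0]
print(project_duration, critical_path)  # 83 ['B', 'C', 'E', 'F', 'H']
```

Running this reproduces the values quoted in the text: a project duration of 83, a critical path of B, C, E, F, H, and a slack of 32 for activity A.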
The activity-on-arrow (AOA) method
The format for the activity-on-arrow method will now be described. The symbols used in this method are as shown in Figure 9.8.
Rather than considering the earliest and latest start and finish times of the activities directly, this method uses the earliest and latest event times, as below:
■ Earliest event time – determined by the earliest time at which any subsequent activity can start.
■ Latest event time – the latest time at which the event can occur without delaying the overall project completion time.
Thus for a single activity the format would be as shown in Figure 9.9.

There has historically been a greater use of the activity-on-arrow (AOA) method, but the activity-on-node (AON) method is now recognised as having a number of advantages, including the following:

■ Most project management computer software uses the AON approach.
■ AON diagrams do not need dummy activities to maintain the relationship logic.
■ AON diagrams have all the information on timings and identification within the node box, leading to clearer diagrams.
Figure 9.8 Activity-on-arrow notation
[Each event node shows an event label with its earliest and latest event times; each activity arrow carries an activity label and an activity duration]
Figure 9.9 Calculating event times for an activity-on-arrow network
[A single activity A of duration 25 between two events, showing each event’s earliest and latest event times]
Gantt charts
Show the duration of parallel and sequential activities in a project as horizontal bars on a chart.
Gantt charts
Although network diagrams are ideal for showing the relationship between project tasks, they do not provide a clear view of which tasks are being undertaken over time and particularly of how many tasks may be undertaken in parallel at any one time. Gantt charts are used to summarise the project plan by showing the duration of parallel and sequential activities in a project as horizontal ‘time bars’ on a chart. The Gantt chart provides an overview for the project managers to allow them to monitor project progress against planned progress and so provides a valuable information source for project control.
Figure 9.10 shows a typical Gantt chart produced using Microsoft Project. Note that some phases such as ‘Phase 1 – software evaluation’ have subactivities such as ‘consult and set criteria’ and ‘evaluate alternatives – report’. Each of these subactivities has a certain number of days and a corresponding cost assigned to it. Milestones are activities that are planned to occur by a particular day, such as ‘Purchase hardware by 17/06’. These are shown as triangles. They are significant events in the life of the project, such as completion of a prototype.
To draw a Gantt chart manually or using a spreadsheet or drawing package, follow these steps:
1. Draw a grid with the tasks along the vertical axis and the timescale (for the whole project duration) along the horizontal axis.
2. Draw a horizontal bar across from the task identifier along the left of the chart, starting at the earliest start time and ending at the earliest finish time.
3. Indicate the slack amount by drawing a line from the earliest finish time to the latest finish time.
4. Repeat steps 2 and 3 for each task.
If the network analysis is being conducted using project management software, then the Gantt chart is automatically generated from information in the network analysis.
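The manual steps above can be illustrated with a minimal text-based sketch: a '#' bar runs from each task's earliest start to its earliest finish (step 2), and dots mark the slack through to the latest finish (step 3). The schedule values are taken from the Figure 9.7 network; a real chart would of course use a drawing tool or project management software.

```python
# Text Gantt chart: '#' = scheduled bar, '.' = slack up to the latest finish.
schedule = {  # task: (earliest_start, earliest_finish, latest_finish)
    "A": (0, 30, 62), "B": (0, 15, 15), "C": (15, 45, 45), "D": (45, 60, 69),
    "E": (45, 55, 55), "F": (55, 69, 69), "G": (45, 52, 69), "H": (69, 83, 83),
}

def gantt_row(es, ef, lf):
    """One chart row: leading space to the start, task bar, then slack."""
    return " " * es + "#" * (ef - es) + "." * (lf - ef)

for task, (es, ef, lf) in schedule.items():
    print(f"{task} |{gantt_row(es, ef, lf)}")
```

Tasks with no trailing dots (B, C, E, F, H) are the critical activities, which makes the chart a quick visual check on the network analysis.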
Milestone
This denotes a significant event in the project such as completion of a prototype.
Figure 9.10 Gantt chart showing activities and milestones
Source: Screenshot frame reprinted by permission from Microsoft Corporation

Capacity loading graphs

The basic network diagram assumes that all tasks can be undertaken when required by the earliest start times calculated from the node dependency relationships. However, resources required to undertake tasks are usually limited and the duration of an individual task or the number of parallel tasks may be limited. In order to calculate the capacity requirements of a project over time, the capacity requirements associated with each task are indicated on the Gantt chart. From this, a capacity loading graph can be developed by projecting the loading figures on a time graph. The capacity loading graphs show the resources required to undertake activities in a project. If the network analysis is being conducted using project management software, then the capacity loading graph is automatically generated from information in the network analysis.
Capacity loading graphs
Capacity loading graphs show the resources required to undertake activities in a project.
Project cost graphs
Show the financial cost of undertaking the project.
Project crashing
Refers to reducing the project duration by increasing spending on critical activities.
Project costs
The previous discussion has concentrated on the need to schedule and control activities in order to complete the entire project within a minimum timespan. However, there are situations in which the project cost is an important factor. If the costs of each project are known, then it is possible to produce a project cost graph which will show the amount of cost incurred over the life of the project. This is useful in showing any periods when a number of parallel tasks are incurring significant costs, leading to the need for additional cashflow at key times. In large projects it may be necessary to aggregate the costs of a number of activities, particularly if they are the responsibility of one department or subcontractor. As a control mechanism, the project manager can collect information on cost to date and percentage completion to date for each task to identify any cost above budget and take appropriate action without delay.
Trading time and cost: project crashing
Within any project there will be a number of time–cost trade-offs to consider. Most projects will have tasks that can be completed with an injection of additional resources, such as equipment or people. Reasons to reduce project completion time include:
■ to reduce high indirect costs associated with equipment;
■ to reduce new product development time to market;
■ to avoid penalties for late completion;
■ to gain incentives for early completion;
■ to release resources for other projects.
The use of additional resources to reduce project completion time is termed crashing the project. The idea is to reduce overall indirect project costs by increasing direct costs on a particular task. One of the most obvious ways of decreasing task duration is to allocate additional labour to a task, either by adding a team member or through overtime working. To enable a decision to be made on the potential benefits of crashing a task, the following information is required:
■ the normal task duration;
■ the crash task duration;
■ the cost per unit time of crashing the task towards the crash duration.
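Given these three items, the task to crash is the critical-path task offering the lowest crash cost per unit time. A minimal sketch of that selection rule, with invented task names and figures:

```python
# Sketch: choose which critical-path task to crash first.
# Task data (normal/crash durations, crash cost per unit time) is illustrative.

def cheapest_crash(tasks, critical_path):
    """Return the critical task that can be shortened at the lowest cost
    per unit time, or None if nothing on the critical path can be crashed.

    tasks: dict of name -> (normal_duration, crash_duration, cost_per_unit).
    """
    candidates = [(cost, name) for name, (normal, crash, cost) in tasks.items()
                  if name in critical_path and normal > crash]
    return min(candidates)[1] if candidates else None

tasks = {'A': (6, 4, 300), 'B': (5, 5, 0), 'C': (8, 6, 150)}
print(cheapest_crash(tasks, {'A', 'C'}))  # 'C'
```

After each crashing step the network must be re-analysed, since the critical path may shift to another path, as the text below explains.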
A task is chosen for crashing by identifying which task can be shortened by the required amount at the lowest cost. As stated before, the overall project completion time is the sum of the task durations on the critical path, so it is always necessary to crash a task that is on the critical path. As the duration of tasks on the critical path is reduced, however, other paths in the network will also become critical. If this happens, the crashing process must be undertaken on all the paths that are critical at any one time.

Project evaluation and review technique (PERT)

PERT
PERT replaces the fixed activity duration used in the CPM method with a statistical distribution which uses optimistic, pessimistic and most likely duration estimates.

The critical path method (CPM) described above was developed by the company Du Pont during the 1950s to manage plant construction. The PERT approach was formulated by the US Navy during the development of the Polaris submarine-launched ballistic missile system in the same decade (Sapolsky, 1972). The main difference between the approaches is the ability of PERT to take into consideration uncertainty in activity durations.

The PERT approach attempts to take into account the fact that most task durations are not fixed, by using a beta probability distribution to describe the variability inherent in the processes. The probabilistic approach involves three time estimates for each activity:

■ optimistic time – the task duration under the most optimistic conditions;
■ pessimistic time – the task duration under the most pessimistic conditions;
■ most likely time – the most likely task duration.

As stated, the beta distribution is used to describe the task duration variability. To derive the average or expected time for a task duration, the following equation is used:

Expected duration = (Optimistic + (4 × Most likely) + Pessimistic) / 6

The combination of the expected time and the standard deviation (commonly estimated as (Pessimistic − Optimistic) / 6) for the network path allows managers to compute probabilistic estimates of project completion times. A point to bear in mind with these estimates is that they only take into consideration the tasks on the critical path and discount the fact that slack on tasks on a non-critical path could delay the project. Therefore the probability that the project will be completed by a specified date is the probability that all paths will be completed by that date, which is the product of the probabilities for all the paths.

Project network simulation

In order to use the PERT approach, it must be assumed that the paths of a project are independent and that the same tasks are not on more than one path. If a task is on more than one path and its actual completion time is much later than its expected time, the paths are clearly not independent. If the network consists of such paths and they are near the critical path time, then the results will be invalid.

Simulation can be used to develop estimates of a project’s completion time by taking into account all the network paths. Probability distributions are constructed for each task, derived from estimates provided by such data collection methods as observation and historical data. A simulation then generates a random number within the probability distribution for each task. The critical path is determined and the project duration calculated. This procedure is repeated a number of times (possibly more than 100) until there are sufficient data to construct a frequency distribution of project times. This distribution can be used to make a probabilistic assessment of the actual project duration. If greater accuracy is required, the process can be repeated to generate additional project completion estimates which can be added to the frequency distribution.
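The expected-duration formula and the simulation procedure described above can be sketched as follows. The three-point estimates and the two-path network are invented, and Python's triangular distribution stands in for PERT's beta distribution as a simple approximation.

```python
# Sketch of the PERT estimate and a project network simulation.
# The three-point estimates and the illustrative paths are made up;
# a triangular distribution approximates the beta distribution here.
import random

def pert_expected(optimistic, most_likely, pessimistic):
    """Beta-distribution approximation of the expected task duration."""
    return (optimistic + 4 * most_likely + pessimistic) / 6

def simulate(paths, runs=1000, seed=1):
    """Estimate project duration by sampling every task on every path and
    taking, per run, the longest (critical) path.

    paths: list of dicts, each mapping task name -> (o, m, p) estimates.
    """
    rng = random.Random(seed)
    durations = []
    for _ in range(runs):
        sampled = {}
        longest = 0.0
        for path in paths:
            length = 0.0
            for task, (o, m, p) in path.items():
                if task not in sampled:      # a task may sit on several paths
                    sampled[task] = rng.triangular(o, p, m)
                length += sampled[task]
            longest = max(longest, length)
        durations.append(longest)
    return durations

print(pert_expected(4, 6, 14))  # 7.0
```

The returned list of durations is the frequency distribution the text refers to; sorting it and reading off percentiles gives a probabilistic completion estimate that respects shared tasks across paths.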
Benefits and limitations of the network analysis approach

The main benefit of using the network analysis approach is that it requires a structured analysis of the number and sequence of tasks contained within a project, so aiding understanding of the resource requirements for project completion. It also provides:

■ useful graphical displays that assist understanding of factors such as project dependencies and resource loading;
■ a reasonable estimate of the project duration, and of the tasks that must be completed on time to meet this duration (i.e. the critical path);
■ a control mechanism for monitoring actual progress against planned progress on the Gantt chart;
■ a means of estimating the decrease in overall project time achievable by providing extra resources at any stage;
■ cost estimates for different project scenarios.

Limitations to consider when using network analysis include the following:

■ Its use is no substitute for good management judgement in areas such as prioritising and selecting suppliers and personnel for the project.
■ Errors in the network, such as incorrect dependency relationships or the omission of tasks, may invalidate the results.
■ Task times are forecasts and are thus estimates subject to error. PERT and simulation techniques may reduce time estimation errors, but at the cost of greater complexity, which may divert management time from more important issues.
■ Time estimates for tasks may be greater than necessary, to provide managers with slack and ensure that they meet deadlines.
■ Slack time that does exist may be ‘wasted’ by not starting activities until the last possible moment, thus delaying the project if they are not completed on time.

SUMMARY

1. Projects are unique, one-time operations designed to accomplish a specific set of objectives in a limited timeframe with a limited budget and resources.
2. Major roles in project organisation include the project sponsor, the project manager and the project user. The project sponsor provides a justification of the project to senior management. The project manager’s role is to provide clearly defined goals and ensure that adequate resources are employed on the project. The project user, who will be utilising the system, should be involved in the definition and implementation of the system.
3. The main elements in the project management process are estimation, scheduling and planning, monitoring and control, and documentation.
4. A work breakdown structure splits the overall project task into a number of more detailed activities in order to facilitate detailed estimation of the resources required.
5. Projects can be resource-constrained (limited by resources) or time-constrained (limited by the deadline).
6. Scheduling involves producing a project plan which determines when activities should be executed.
7. Once under way, a project can be monitored against the defined objectives of time, cost and quality.
8. Documentation is essential in reducing the expense of project maintenance.
9. PRINCE2 is an example of a project management methodology. An example of a systems development methodology is RAD.
10. Critical path analysis shows the activities undertaken during a project and the dependencies between them. The critical path is identified by making a forward and then a reverse pass through the network, calculating the earliest and latest activity start/finish times respectively.
11. Gantt charts provide an overview of what tasks are being undertaken over time. This allows the project manager to monitor project progress against planned progress.
12. Capacity loading graphs provide an indication of the amount of resource needed for the project over time.
13. Cost graphs provide an indication of monetary expenditure over the project period.
14. Project crashing consists of reducing overall indirect project costs (e.g. by reducing the project duration) by increasing expenditure on a particular task.
15. To reduce the length of a project we need to know the critical path of the project and the cost of reducing individual activity times.
16. The PERT approach provides a method of integrating the variability of task durations into the network analysis.
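The forward and reverse pass mentioned in point 10 can be sketched as follows. The four-task network and its durations are invented for illustration; tasks with zero slack form the critical path.

```python
# Sketch: critical path via a forward pass (earliest finish times) and a
# reverse pass (latest finish times). The small network is illustrative.

def critical_path(tasks):
    """tasks: dict of name -> (duration, [predecessors]), listed in
    topological order. Returns (project_duration, zero-slack tasks)."""
    earliest = {}                          # earliest finish per task (forward pass)
    for name, (dur, preds) in tasks.items():
        earliest[name] = dur + max((earliest[p] for p in preds), default=0)
    project = max(earliest.values())
    latest = {name: project for name in tasks}   # latest finish (reverse pass)
    for name in reversed(list(tasks)):
        dur, preds = tasks[name]
        for p in preds:
            latest[p] = min(latest[p], latest[name] - dur)
    return project, [n for n in tasks if latest[n] - earliest[n] == 0]

tasks = {'A': (3, []), 'B': (2, []), 'C': (4, ['A']), 'D': (1, ['B', 'C'])}
print(critical_path(tasks))  # (8, ['A', 'C', 'D'])
```

Here task B carries 5 units of slack (latest finish 7 minus earliest finish 2), so only A, C and D must start on time to meet the 8-period duration.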
EXERCISES

Self-assessment exercises

1. What are the main elements of the project management process?
2. What are the main project aims of the PRINCE2 methodology?
3. What information is required for the construction of a critical path diagram?
4. What information do the Gantt chart and the PERT chart convey?
5. Define the term ‘critical path’.
6. What is the difference between effort time and elapsed time?

Discussion questions
1. Draw a Gantt chart for the following AON network (Figure 9.11).
2. ‘One of the most difficult parts of project management is getting the estimates right.’ Discuss.
Figure 9.11 Activity-on-node network
Essay questions

1. Explore the features of a project management computer package such as Microsoft Project. Evaluate its use in the project management process.
2. Compare the different alternatives that are available for the critical path method of network analysis.
3. What is the most effective method of estimating the duration of an information systems development project?

Examination questions

1. Evaluate the roles undertaken by people in a project organisation.
2. What are the main elements in the project management process?
3. Evaluate the use of the PRINCE2 project management methodology.
4. Explain the difference between portraying a project plan as a Gantt chart and as a PERT chart.
5. What is the importance of conducting monitoring and control when managing a project?
6. Why is it difficult and often impossible for a software project manager to balance the three constraints of time, budget and quality? You should relate your answer to two different aspects of the quality of the delivered information system.
7. What is the difference between elapsed time and effort time? How are the two factors related in terms of the availability and work rate of different staff? Describe this in words, or using an equation or an example.

References

Albrecht, A.J. and Gaffney, J. (1983) ‘Software function, source lines of code and development effort prediction’, IEEE Transactions on Software Engineering, SE-9, 639–48.
Boehm, B.W. (1981) Software Engineering Economics, Prentice-Hall, Englewood Cliffs, NJ.
Boehm, B.W., Abts, C., Winsor Brown, A., Chulani, S., Clark, B.K., Horowitz, E., Madachy, R., Reifer, D. and Steece, B. (2000) Software Cost Estimation with COCOMO II, Prentice Hall, Upper Saddle River, NJ.
Cadle, J. and Yeates, D. (2007) Project Management for Information Systems, 5th edition, Financial Times Prentice Hall, Harlow.
Lyytinen, K. and Hirschheim, R. (1987) ‘Information systems failures: a survey and classification of the empirical literature’, Oxford Surveys in IT, 4, 257–309.
Sapolsky, H.M. (1972) The Polaris System Development: Bureaucratic and Programmatic Success in Government, Harvard University Press, Boston, MA.

Further reading

Brooks, F.P. (1995) The Mythical Man-Month: Essays on Software Engineering – Anniversary Edition, Addison-Wesley, Reading, MA.
Fenton, N.E. and Bieman, J. (2014) Software Metrics: A Rigorous and Practical Approach, 3rd edition, PWS Publishers, London.
Garmus, D. and Herron, D. (2000) Function Point Analysis: Measurement Practices for Successful Software Projects, Addison-Wesley, Upper Saddle River, NJ.
Greasley, A. (2013) Operations Management, 3rd edition, John Wiley, Chichester.
Hughes, B. and Cotterell, M. (2009) Software Project Management, 5th edition, McGraw-Hill, Maidenhead.
Kerzner, H. (2013) Project Management: A Systems Approach to Planning, Scheduling and Controlling, 11th edition, John Wiley, New York.
Kerzner, H. (2013) Project Management Metrics, KPIs, and Dashboards: A Guide to Measuring and Monitoring Project Performance, 2nd edition, John Wiley, Chichester.
Lock, D. (2013) Project Management, 10th edition, Gower, Aldershot.
Maylor, H. (2010) Project Management, 4th edition, Financial Times Prentice Hall, Harlow.
Persse, J.R. (2007) Project Management Success with CMMI: Seven CMMI Process Areas, Prentice-Hall, Upper Saddle River, NJ.
Selby, R.W. (2007) Software Engineering: Barry W. Boehm’s Lifetime Contributions to Software Development, Management, and Research, Wiley-Interscience, Hoboken, NJ.

Web links

Web sites with further information on project management methodologies are as follows:

http://www.prince-officialsite.com/ PRINCE2 official site.
www.prince2.com Website of ILX Group, who offer PRINCE2 training.
www.bates.ca BPMM.
www.sei.cmu.edu/ideal IDEAL.
www.pmi.org Project Management Institute.
https://at-web1.comp.glam.ac.uk/staff/dwfarthi/projman.htm Dave Farthing’s software project management web page has many links to project management resources.
CHAPTER
10 Systems analysis
LEARNING OUTCOMES
After reading this chapter, you will be able to:
■ define the importance of conducting the analysis phase to the overall success of the system;
■ choose appropriate techniques for analysing users’ requirements for an information system;
■ construct appropriate textual descriptions and diagrams to assist in summarising the requirements as an input to the design phase.
MANAGEMENT ISSUES
Careful systems analysis must be conducted on each BIS project to ensure that the system meets the needs of the business and its users. From a managerial perspective, this chapter addresses the following questions:
■ Which different aspects of the system must be summarised in the requirements document?
■ Which diagramming tools are appropriate to summarise the operation of the existing and proposed systems?
CHAPTER AT A GLANCE
MAIN TOPICS
■ Identifying the requirements
■ Documenting the findings
■ Systems analysis – an evaluation
■ Software tools for systems analysis
FOCUS ON . . .
■ Requirements determination in a lean or agile environment
■ Soft systems methodology
CASE STUDIES
10.1 IFD drawing – a student records system
10.2 ABC case study
M10_BOCI6455_05_SE_C10.indd 349 30/09/14 7:21 AM
INTRODUCTION

Once it has been determined that it is desirable to proceed with the acquisition of a new BIS, it is necessary to determine the system requirements before any design or development work takes place. Systems analysis is about finding out what the new system is to do, rather than how. There are two basic components to the analysis process:

■ Fact-finding. An exercise needs to take place in which all prospective users of the new system contribute to determining requirements.
■ Documentation. Detailed systems design follows the analysis stage and needs to be based on unambiguous documentation and diagrams from the analysis stage.

Systems analysis involves the investigation of the business and user requirements of an information system. Fact-finding techniques are used to ascertain the users’ needs, and these are summarised using a range of diagramming methods.

Systems analysis
The investigation of the business and user requirements of an information system. Fact-finding techniques are used to ascertain the user’s needs and these are summarised using a requirements specification and a range of diagramming methods.

Factors that will influence the use of fact-finding techniques and documentation tools include:

■ The result of the ‘make-or-buy’ decision. Made during the feasibility stage, a ‘make’ decision, where bespoke software is developed, will need more detailed analysis than a ‘buy’ decision, where packaged software is purchased off-the-shelf, especially when the results of the analysis process are fed into the design stage.
■ Application complexity. A very complex system, or one with linkages to other systems, will need very careful analysis to define system and subsystem boundaries, and this will lead to the use of more formal techniques than for a simple or standalone application.
■ User versus corporate development. User development does not lend itself to extensive use of formal analysis tools. However, basic analysis is still required, and there are certain analysis tools that user developers can use to increase the probability of success. Similarly, where application development is carried out by IS/IT professionals there will be a need for a more formal approach, especially where systems cut across functional boundaries.

Any errors in systems development that occur during the analysis phase will cost far more to correct than errors that occur in subsequent stages. It is therefore essential that maximum thought and effort be put into the analysis process if unanticipated costs are not to arise in the later stages of development.
The emphasis in this section will be on those methods typically used during the traditional systems development lifecycle approach to software development. However, it is recognised that lean and agile approaches to software development will focus on techniques that are particularly relevant to those methods. Therefore, these will be commented on later in the chapter in a ‘Focus on’ section.
IDENTIFYING THE REQUIREMENTS

The main purpose of the requirements determination phase of a systems development project is to identify the user requirements that need to be incorporated into the design of the new information system, and to ensure that the requirements identified ‘really’ meet the users’ needs. Therefore, the first task in analysis is to conduct a fact-finding exercise so that the information systems requirements can be determined. Unfortunately, as identified by Shi et al. (1996) and by Browne and Rogich (2001), there are a number of reasons why this is very difficult for many organisations:

■ user limitations in terms of their ability to express correct requirements;
■ lack of user awareness of what can be achieved with an information system (in terms of both under- and over-estimating an information system’s capabilities);
■ different interpretations of software requirements by different users;
■ the existence of biases amongst users, so that requirements are identified on the basis of attitude, personality or environment rather than real business needs;
■ requirements may overlap organisational boundaries (e.g. between different functional areas of the business), so that conflicts occur when identifying requirements;
■ information requirements are varied and complex, which can lead to difficulties in structuring requirements so that they can be properly analysed;
■ communication issues can result from the complex web of interactions that exists between different users.
Nonetheless, while the task of requirements determination may be difficult, it must still be undertaken if the developed system is to have those features that the users and the organisation actually need. The methods an organisation uses in the analysis phase will depend, at least in part, on two factors:
■ Levels of decision making involved. A new information system will be under consideration either to resolve a problem or to create an opportunity. In either case, the objective is to improve the quality of information available to allow better decision making. The type of system under consideration may include a transaction processing system, a management information system, a decision support system, a combination of these or some other categorisation of system (Chapter 6). So, for example, an information system that is purely geared towards the needs of management will require a different approach to fact-finding (for example, using one-to-one interviews with senior managers) from one that mainly involves transaction processing (for example, using observation of the existing process).
■ Scope of functional area. A new information system may serve the needs of one functional business area (e.g. the HRM function), or it may cut across many functional areas. An information system that is restricted in scope may be faced with fewer of the problems that can affect new systems designed to meet the needs of many different areas. As before, the techniques of fact-finding may be similar, but how they are used and the findings presented may be radically different. Organisational culture, structure and decision-making processes will all have a part to play in selling the systems solution to all the affected parties.
Regardless of the scope and organisational levels involved, the objective of the fact- finding task is to gather sufficient information about the business processes under consideration so that a design can be constructed which will then provide the blueprint for the system build phase. We will now turn to a consideration of a number of fact- finding methods.
Although it might be thought that finding out the requirements for a system is straightforward, this is far from the case. Dissatisfaction with information systems is often due to the requirements for the information system being wrongly interpreted. Figure 10.1 shows an oft-quoted example of how a user’s requirements for a swing might be interpreted, not only at the requirements analysis stage but throughout the project.
Figure 10.1 Varying interpretations of a user’s requirements at different stages in a project (panels: what the users’ manager specified; the requirements specification; the design; first delivery; final delivery after ‘fixing’; what the users really wanted)

Interviewing

As noted by Browne and Rogich (2001), the most popular strategy likely to be adopted by an analyst is to use structured interviews with the people who will use the new system, to identify the procedures they follow in performing their tasks and the information they need to perform them. A successful requirements determination exercise will require the analyst to elicit from the users both their understanding of the current business environment, information needs and flows, and a visualisation of the preferred future organisational environment and information needs. The difficulties associated with this have already been indicated above.
During interviewing, a range of staff are interviewed using structured techniques to identify features and problems of the current system and required features of the future system.
Success with this method involves careful planning, proper conduct of the interviews themselves and, finally, accurate recording of the interview findings. We can expand each of these to provide more detail.
Planning
■ Clear objectives need to be set to identify what needs to be achieved at the end of the interviewing process.
■ Interview subjects must also be carefully selected so that the information gained will be relevant to the system being developed. For example, there may be little use in interviewing all the shopfloor workers in a manufacturing company if the system being developed is an executive information system (EIS) to assist with decision making at senior levels within the business. There may still be some merit in interviewing certain key personnel involved in operational decision making, since data produced may be useful in the proposed EIS.
■ Customers should be involved in analysis if the use of a system affects them directly. For example, a customer of a phone-based ordering system or a telephone bank may well give an insight into problems of an existing system.
■ The topics the interview is to cover need to be clearly identified and the place where interviews are to take place must be determined.
■ Finally, it is necessary to plan how the interviews are to be conducted and the types of questions to be used.
Analysis technique – interviewing
Recommended practice: a range of staff are interviewed using structured techniques to identify features and problems of the current system and required features of the future system.
Conduct
■ The interviewer must establish a control framework for the interview. This will include the use of summarising to check the points being made and appropriate verbal and non- verbal signals to assist the flow of the interview.
■ Interviewers must be good listeners. This is especially important when dealing with complex business processes which are the object of the systems development project.
■ The interviewer must select a mix of open and closed questions which will elicit maximum information retrieval.
■ Finally, the interview must be structured in an organised way. There are three main approaches to structuring an interview. The first is the ‘pyramid structure’, where the interview begins with a series of specific questions and during the course of the interview moves towards general ones. The second is the ‘funnel structure’, where the interviewer begins with general questions and during the course of the interview concentrates increasingly on specific ones. The third approach is the ‘diamond structure’, where the interview begins with specific questions, moves towards general questions in the middle of the interview and back towards specific questions at the end.
Regardless of which approach is taken, it will still be necessary to document carefully the findings of the interview.
Interviews should use a mixture of open and closed questions. Open questions are not restricted to a limited range of answers such as Yes/No (closed questions). They are asked to elicit opinions or ideas for the new system or identify commonly held views among staff. Open questions are not typically used for quantitative analysis, but can be used to identify a common problem.
Closed questions have a restricted choice of answers such as Yes/No or a range of opinions on a scale from ‘strongly agree’ to ‘strongly disagree’ (Likert scale). This approach is useful for quantitative analysis of results.
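Tallying closed Likert-scale responses is the kind of quantitative analysis referred to above. A minimal sketch, with invented response data:

```python
# Sketch: quantitative analysis of closed Likert-scale responses.
# The response data below is invented for illustration.
from collections import Counter

SCALE = ['strongly agree', 'agree', 'neutral', 'disagree', 'strongly disagree']

def summarise(responses):
    """Return the count and percentage for each point on the scale."""
    counts = Counter(responses)
    total = len(responses)
    return {option: (counts[option], round(100 * counts[option] / total, 1))
            for option in SCALE}

responses = ['agree', 'agree', 'neutral', 'strongly agree', 'disagree']
print(summarise(responses))
```

Open-question answers, by contrast, need qualitative coding before any such tally is meaningful, which is why the text reserves them for eliciting opinions rather than measurement.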
Recording
During the course of the interview, the interviewer will need to make notes to record the findings. It may also be useful to draw diagrams to illustrate the processes being discussed. Some interviewers like to use a tape recorder to be sure that no points are missed. Whichever methods are used, the requirement is to record three main attributes of the system under consideration:
■ Business processes. A business process exists when an input of some kind (raw materials, for example) is transformed in some way so that an output is produced for use elsewhere in the business.
■ Data. Data will be acquired and processed, and information produced, as a consequence of carrying out business processes. Data must be analysed so that data acquisition, processing needs and information requirements can be encapsulated in the new information system.
■ Information flows. Functional business areas do not exist in isolation from each other and neither do different business processes within the same business function. It is necessary, therefore, to identify how data and information within one business process are necessary for other business processes to operate effectively.
We will look at some relevant tools and techniques which help to record the findings later in this chapter.
As an information-gathering tool, interviews have a number of advantages and disadvantages. On the positive side they include:

■ the ability to gather detailed information through a two-way dialogue;
■ the ability for candid, honest responses to be made;
■ an open, spontaneous process which can lead to valuable insights, especially when open questions are used;
■ responses that can easily be quantified, especially when closed questions are used;
■ being one of the best methods for gathering qualitative data, such as opinions and subjective descriptions of activities and problems.

Open questions
Not restricted to a limited range of answers such as Yes/No (closed questions). Asked to elicit opinions or ideas for the new system or to identify commonly held views amongst staff. Open questions are not typically used for quantitative analysis, but can be used to identify a common problem.

Closed questions
Closed questions have a restricted choice of answers, such as Yes/No or a range of opinions on a scale from ‘strongly agree’ to ‘strongly disagree’ (Likert scale). This approach is useful for quantitative analysis of results.
On the negative side, however, the following points can be made:
■ The analyst’s findings may be coloured by his or her perceptions of how other, similar, business operations work. Interviewers need to be especially skilled if this is to be avoided.
■ The development of a new information system may represent a threat through the risk of deskilling, redundancy or perceived inability to cope with change. Interviewees may, therefore, not cooperate with the interview process, either by not taking part or by giving vague and incomplete replies.
■ The interviewee may tell the analyst what he or she thinks should happen rather than what actually happens.
■ An interview at lower organisational levels may not yield as much information as some other methods if staff in this area are not capable of articulating with sufficient clarity.
On balance, interviewing is an essential part of the information-gathering process. For maximum benefit, interviewing should be used in conjunction with other techniques, and we will turn to these now.
Questionnaires

Analysis technique – questionnaires
Used to obtain a range of opinion on requirements by targeting a range of staff. Questionnaires are open to misinterpretation unless carefully designed. They should consist of open and closed questions.

Questionnaires are used to obtain a range of opinion on requirements by targeting a range of staff. They are open to misinterpretation unless carefully designed. They should consist of both open and closed questions.
Successful questionnaires have a number of characteristics:
■ The questions will be framed by the analyst with a clear view of the information that is to be obtained from the completed questionnaires.
■ The target audience must be carefully considered – a questionnaire designed for clerical or operational personnel should not contain questions that are not relevant to their level of work.
■ The questionnaire should only contain branching (e.g. ‘if the answer to Question 3 was ‘No’, then go to Question 8’) if it is absolutely necessary – multiple branches create confusion and may lead to unusable responses.
■ Questions should be simple and unambiguous so that the respondent does not have to guess what the analyst means.
■ Multiple-choice, Likert-scale-type questions make the questionnaire easier to fill in and allow the results to be analysed more efficiently.
■ The questionnaire should contain the required return date and name of the person to whom the questionnaire should be returned.
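The point above about multiple-choice and Likert-scale questions lends itself to a small illustration. The sketch below (a minimal example; the response data are invented) tallies closed-question answers on a 1–5 scale and computes a mean score, the kind of efficient analysis the bullet refers to:

```python
from collections import Counter

# Hypothetical responses to one closed question on a 1-5 Likert scale
# (1 = strongly disagree, 5 = strongly agree).
responses = [4, 5, 3, 4, 2, 5, 4, 3, 4, 5]

def summarise(scores):
    """Return the frequency of each scale point and the mean score."""
    freq = Counter(scores)
    mean = sum(scores) / len(scores)
    return freq, mean

freq, mean = summarise(responses)
print(f"Mean score: {mean:.1f}")
print(f"Agreement (4-5): {freq[4] + freq[5]} of {len(responses)}")
```

Open-ended answers, by contrast, resist this kind of mechanical collation, which is exactly the difficulty the next section raises.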
M10_BOCI6455_05_SE_C10.indd 354 30/09/14 7:21 AM
Chapter 10 Systems Analysis
Difficulties that can be encountered with questionnaires include:
■ the inability of respondents to go back to the analyst to seek clarification about what a question means;
■ difficulty in collating qualitative information, especially if the questionnaire contains open-ended questions;
■ the inability to use verbal and non-verbal signals from the respondent as a sign to ask other or different questions;
■ low response rates – these can be lower than 20 to 25 per cent when sent to other organisations or customers, which means that a large sample size is needed if the results are to carry any weight. Response rate is not such a problem with internal staff.
By contrast, the questionnaire process also has a number of benefits:
■ When large numbers of people such as customers or suppliers need to be consulted, a carefully worded questionnaire is more efficient and less expensive than carrying out large numbers of interviews.
■ Questionnaires can be used to check results found by using other fact-finding methods.
■ The use of standardised questions can help codify the findings more succinctly than other tools.
In summary, questionnaires can have a useful role to play in certain circumstances, but they should not be used as the sole data-gathering method.
Analysis technique – documentation review
Documentation reviews target information about existing systems, such as user guides or requirements specifications, together with paper or on-screen forms used to collect information, such as sales order forms. They are vital for collecting detail about data and processes that may not be recalled in questionnaires and interviews.
All organisations have at least some kind of documentation that relates to some or all of the business operations carried out. A documentation review can be carried out at a number of different stages in the analysis process. If carried out at the beginning of a requirements analysis exercise, it will help provide the analyst with some background information relating to the area under consideration. It may also help the analyst construct a framework for the remainder of the exercise, and enable interviews to be conducted in a more effective way since the analyst has some idea of current business practices and procedures. If document review is carried out later, it can be used to cross-check the actual business operations with what is supposed to happen. The kinds of documentation and records that can be reviewed include the following:
■ instruction manuals and procedure manuals which show how specific tasks are supposed to be performed;
■ requirements specifications and user guides from previous systems;
■ job descriptions relating to particular staff functions which may help identify who should be doing what;
■ strategic plans both for the organisation as a whole and the functional areas in particular, which can provide valuable background data for establishing broad functional objectives.
While documentation review can provide a very useful underpinning for other fact-finding tasks, there are still a number of problems:
■ There can be a large quantity of data for an analyst to process. This is especially true in large organisations and it may take the analyst a long time to identify the documentation that is useful and that which can be ignored.
Part 2 Business Information Systems Development
■ Documentation is often out of date. If there is an old computerised system, it is quite possible that the documentation has not been changed for years, even though the system may have changed considerably over that period. The same can be said for the documentation of activities and procedures.
Observation
Observation is useful for identifying inefficiencies in an existing way of working, with either a computer-based or a manual information system. It involves timing how long particular operations take and observing the method used to perform them. It can be time-consuming and the staff who are observed may not behave normally.
This fact-finding method involves the analyst in directly observing business activities taking place so that they can see what is actually taking place rather than looking at documentation which states what should be taking place. One of the benefits of observation is that the analyst can see directly how something is done, rather than relying on verbal or written communication which may colour the facts or be the subject of misinterpretation by the analyst. Other benefits include:
■ the ability to see how documents and records are actually handled and processed;
■ observation may give a greater insight into actual business operations than simple paper documentation;
■ identification of particular operations that take a long time;
■ the opportunity to see how different processes interact with each other, thus giving the analyst a dynamic rather than a static view of the business situation under investigation.
On the downside, there are a number of difficulties associated with the observation technique:
■ It is an extremely time-consuming exercise and therefore needs to be done as a supplementary rather than a principal fact-finding method.
■ While observation allows an organisation to be dynamically assessed, it still does not allow attitudes and belief systems to be assessed. This can be a very important issue if the proposed information system is likely to encounter resistance to change among the workforce.
■ Finally, there is the issue of the ‘Hawthorne effect’, where people tend to behave differently when they are being observed, thus reducing the value of the information being obtained. Of course, for the analyst, the problem is in determining whether those being observed are behaving differently or not!
This last effect was first noticed at the Hawthorne plant of Western Electric in the United States. Here, it was noted that production increased, not as a consequence of actual changes in working conditions introduced by the plant's management, but because management demonstrated an interest in improving staff working conditions.
Despite these difficulties, it is desirable for the analyst to conduct at least some observation to ensure that no aspect of the system being investigated is overlooked.
Brainstorming
Brainstorming uses interaction within a group of staff to generate new ideas and discuss existing problems. It is the least structured of the fact-finding techniques.
This is the final fact-finding technique we will consider. The methods we have looked at so far are either passive or conducted on a one-to-one basis, or both. The brainstorming
method involves a number of participants and is an active approach to information gathering. While the other methods allow for many different views to be expressed, those methods do not allow different persons’ perceptions of the business processes and systems needs to be considered simultaneously. Brainstorming allows multiple views and opinions to be brought forward at the same time. If the proposed system’s user community participates actively, it is more likely that an accurate view of current business processes and information systems needs will be reached.
Brainstorming sessions require careful planning by the analyst. Factors to consider include:
■ which persons to involve and from which functional business areas;
■ how many people to involve in the session – too few and insufficient data may be gathered; too many and the session may be too difficult to handle;
■ terms of reference for the session – there may need to be more than one session to identify clearly areas of agreement and those that need further discussion;
■ management involvement – a session for shopfloor workers, for example, may be far less successful if management personnel are involved than if they are not. It would be appropriate, however, for management groups to have their own brainstorming session so that tactical and strategic issues can be tackled rather than simply operational ones.
The main benefit of the brainstorming approach is that, through the dynamics of group interaction, progress is more likely to be made than from a simple static approach to information gathering. Brainstorming sessions, if they are handled properly, can result in the productive sharing of ideas and perceptions, while at the same time cultural factors, attitudes and belief systems can be more readily assessed. Also, when the outcomes are positive ones, a momentum for change is built among those who will be direct users of the new information system. Change management is therefore more easily facilitated.
The main danger of the approach is that in the hands of an inexperienced analyst, there is a risk that the sessions may descend into chaos because of poor structure, bad planning, poor control or a combination of all three.
However, if used properly, this fact-finding method can generate the desired results more quickly than any other information-gathering method. Even so, it still needs to be supplemented by one or more of the other methods discussed above.
Yeates and Wakefield (2004) explain how structured brainstorming can be used to identify different options for a new system. This technique involves the following stages:
■ invite ideas which are written by individuals on separate sheets of paper or called out spontaneously and then noted on a whiteboard;
■ identify similarities between ideas and rationalise the options by choosing those which are most popular;
■ analyse the remaining options in detail, by evaluating their strengths and weaknesses.
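The rationalisation stage described above – identifying similar ideas and keeping the most popular – can be sketched mechanically. In the fragment below (the idea texts are invented for illustration), raw suggestions are normalised for case and spacing before duplicates are counted:

```python
from collections import Counter

def rationalise(ideas, keep=3):
    """Normalise raw brainstorm ideas (case/whitespace) and
    return the most popular, most frequent first."""
    normalised = [" ".join(idea.lower().split()) for idea in ideas]
    counts = Counter(normalised)
    return [idea for idea, _ in counts.most_common(keep)]

raw = ["Online ordering", "online  ordering", "Faster invoicing",
       "Online Ordering", "Stock alerts", "faster invoicing"]
print(rationalise(raw, keep=2))  # ['online ordering', 'faster invoicing']
```

In practice, of course, judging that two ideas are "similar" usually needs a human facilitator; simple text matching only catches restatements of the same wording.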
It is important when brainstorming is undertaken that a facilitator be used to explain that a range of ideas is sought with input from everyone. Each participant should be able to contribute without fear of judgement by other members. When such an atmosphere is created, this can lead to 'out-of-the-box' or free thinking which may generate ideas of new ways of working.
Brainstorming and more structured group techniques can be used throughout the development lifecycle. Brainstorming is an important technique in re-engineering a business, since it can identify new ways of approaching processes. Taylor (1995) suggests that once new business processes have been established through analysis, they should be sanity-checked by performing a ‘talk-through, walk-through and run-through’. Here, the design team will describe the proposed business process and in the talk-through stage will elaborate on different business scenarios using cards to describe the process objects and the services they provide to other process objects. Once the model has been adjusted, the
walk-through stage involves more detail in the scenario and the design team will role-play the services the processes provide. The final run-through stage is a quality check in which no on-the-spot debugging occurs – just the interactions between the objects are described.
Once the analyst has completed the requirements investigation, it will be necessary to document the findings so that a proposal can be put forward for the next stage of the project. Some of the documentation tools discussed below may be used at the same time as the fact-finding process. For example, information flow diagrams may be used by the analyst to check with the end-user that points have been properly understood.
Pictures and brainstorming
Research has shown that new ideas and recall are improved by the use of pictures, which tend to prompt thought better than text. This point is well made by Buzan and Buzan (2010), who describe a technique known as ‘mindmapping’ to record information and promote brainstorming. Mindmaps are a spontaneous means of recording information which are ideally applied to systems analysis, since they can be used to record information direct from user dialogues or summarise information after collection.
Another graphical technique which is useful to the system analyst is the ‘rich picture’. The rich picture is an element of the soft systems methodology described later in this chapter.
FOCUS ON… REQUIREMENTS DETERMINATION IN A LEAN OR AGILE ENVIRONMENT
As indicated earlier (in Chapter 7), the traditional waterfall approach to software development is by no means the only valid approach to software development, as the increasing adoption of lean and agile approaches bears witness. In the traditional waterfall model, requirements analysis results in a systems specification and a catalogue of user requirements which is then converted into a design specification. However, the emphasis in an agile development environment is on the frequent delivery of software products and the daily involvement of end-users in the software development process. Agile methodologies place an emphasis on delivering working code and downplay the importance of formal processes. It is suggested, therefore, that the software development process can adapt and react promptly to changes that occur in user requirements.
Lindstrom and Jeffries (2004) identify a number of reasons for failed information systems projects. Those relating to requirements determination include:
■ requirements that are not clearly communicated;
■ requirements that do not solve business problems;
■ requirements that change prior to completion of the project.
There is also a tendency for stakeholders to ask for everything to be included in the software, regardless of how much it might be used, thus increasing development costs and maintenance budgets. They also claim that by trying to identify all the requirements up front, the opportunity to develop and implement the most valuable and high-priority requirements is forgone, thus increasing the payback period for the system.
The requirements determination emphasis with agile methods is, therefore, on the frequent delivery of rapidly implementable software products, where a requirement can be built in a single product release iteration of two to four weeks (depending on the chosen methodology). Critics will claim that this approach results in impossible-to-estimate project costs since the entire project cannot be costed up front (the assumption being that requirements will evolve in response to frequent delivery of software products). However, proponents of agile methods point out that it is better for the customer to be able to call
a halt once they have enough of what they need, rather than to embark on a lengthy development project only to discover that user requirements have changed in such a way that the delivered system is no longer fit for purpose.
DOCUMENTING THE FINDINGS
In this section we will concentrate on three main diagramming tools: information flow diagrams (IFDs), dataflow diagrams (DFDs) and entity relationship diagrams (ERDs). These techniques are used by professional IS/IT personnel, partly as documentation tools and partly as checking tools with the user community. It is important, therefore, for non-IS/IT personnel to understand the fundamentals behind these diagramming tools so that communication between functional personnel and IS/IT experts is enhanced. Furthermore, tools such as ERDs can be applied by end-users to assist them in developing their own personal or departmental applications. As well as these tools, the requirements specification will also contain a text description of what the functions of the software will be. We will consider this first, and then consider the documentation tools.
Requirements specification
The requirements specification is the main output from the systems analysis stage. Its main focus is a description of what all the functions of the software will be. These must be defined in great detail to ensure that when the specification is passed on to the designers, the system is what the users require. This will help prevent the problem referred to in Figure 10.1.
The scope of the requirements specification will include:
■ Data capture – when, where and how often. The detailed data requirements will be specified using entity relationship diagrams and stored in a data dictionary. Dataflow diagrams will indicate the data stores required.
■ Preferred data capture methods – this may include use of keyboard entry, bar codes, OCR, etc. (it could be argued that this is a design point, but it may be a key user requirement that a particular capture method be used).
■ Functional requirements – what operations the software must be able to perform. For example, for the maps in a geographic information system, the functional requirements would specify: the ability to zoom in and out, pan using scroll-bars and the facility to change the features and labels overlaid on the map.
■ User interface layout – users will want access to particular functions in a single screen, so the requirements specification will define the main screens of an application. Detailed layout will be decided as part of prototyping and detailed design.
■ Output requirements – this will include such things as enquiry screens, regular standard and ad hoc reports and interfaces to other systems.
One approach to documenting requirements is illustrated by the ‘requirements catalogue’ specified in SSADM (discussed in Chapter 7). Figure 10.2 illustrates a typical requirements catalogue entry.
The purpose of the requirements catalogue is to act as the repository of all requirements information. It can be used from the initiation stage when early thoughts are being gathered about the possible requirements, through to the design stage when user requirements may still be emerging (especially in such areas as system navigation and performance requirements).
There are three main aspects that need to be documented, usually from a user perspective:
■ Functional requirements – consist of requirements that perform the activities that run the business. Examples include updating master files, enquiring against data on file, producing reports and communicating with other systems.
■ Non-functional requirements – define the performance levels of the business functions to be supported. Examples include online response times, turn-round time for batch processing, security, backup and recovery.
■ Quantification of requirements – refers to the need for a measure of quality if the benefits are to be properly evaluated. Examples might include reducing customer complaints by 75 per cent, reducing the value of unsold stock by 85 per cent, or increasing online sales by 25 per cent.
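A quantified non-functional requirement can be checked directly against measurements. The sketch below uses the 10-second target and 20-second acceptable response-time values from the example catalogue entry in Figure 10.2; the thresholds are parameters of the check, not fixed rules:

```python
def rate_response_time(seconds, target=10.0, acceptable=20.0):
    """Classify a measured response time against a requirement's
    target value and acceptable range (10 s / 20 s here mirror the
    Figure 10.2 example)."""
    if seconds <= target:
        return "meets target"
    if seconds <= acceptable:
        return "within acceptable range"
    return "fails requirement"

print(rate_response_time(8))    # meets target
print(rate_response_time(15))   # within acceptable range
print(rate_response_time(25))   # fails requirement
```

Writing the requirement this way makes the later evaluation of benefits unambiguous: either a measurement falls in the acceptable range or it does not.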
Figure 10.2 Example of a requirements catalogue entry

REQUIREMENTS CATALOGUE ENTRY
Requirement ID: 5.9
Priority: High
Source: Credit Control Clerk
Owner: Credit Control Manager
Functional requirements: Link Sales Order Processing system in with accounting package so that online credit checking is an automatic process when new orders are being processed.
Non-functional requirements (description / target value / acceptable range):
■ Response time / within 10 seconds / within 20 seconds
■ Service hours / 08:30 to 18:00 Monday to Friday / –
■ Availability / 97.5% / above 92.5%
Benefits: Will speed up order processing and enable account handlers to spend more time collecting cash rather than continually switching between computer systems when processing orders.
Comments / suggested solutions: Either provide a function key to perform the credit check function, or make it an automatic process when an order is entered. Do not allow order to be confirmed if credit check is failed.
Related documents: Required System DFD, process box 5.9
Related requirements: 3.2 Improve cash collection process – more accurate sales ledger data; 4.9 Reduce number of bad debts – link to improved aged debtors report
Resolution: –
Each entry in the requirements catalogue would typically consist of an A4 sheet that contains the details outlined above. Other elements such as requirements originator, date and links to other formal documentation would be included.
When reviewing the contents of a requirements catalogue, it is desirable to prioritise requirements so that the development effort concentrates on the most important features of the new system. For example, it is possible to categorise user requirements into three: the A list, the B list and the C list (or Priority 1 to 3). The A list should comprise all those requirements that the proposed system must support and without which it would not function. For example, an accounting system that does not produce customer statements may be seriously deficient. The B list would contain those requirements that are very desirable but are not vital to the successful operation of the system. For example, it may be very desirable for a sales order processing system to produce a list of all customers who have not placed an order for the last six months, but it is not essential. The C list would contain those things that are nice to have (the ‘bells and whistles’) but are neither essential nor very desirable. It might be nice in a stock control system, for example, if a screen ‘buzzed’ at the user if a certain combination of factors were present. However, this would not be classified as essential.
The requirements catalogue can be used to prioritise the ‘very desirables’ and the ‘bells and whistles’ so that at the design stage most attention can be paid to those items that are perceived as having the highest priority. It may be, however, that if a low-priority item is seen to be very easy to implement, and a higher-priority item less so, the lower-priority item would be included in the development in preference.
It may be that in the case of a ‘very desirable but hard to implement’ feature, a simpler item might be included as an imperfect substitute. This would be more readily apparent at the design stage and it may be necessary to revisit the requirements catalogue at this point and consult the functional personnel again.
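The A/B/C prioritisation above, together with the point that an easy low-priority item may be built before a hard higher-priority one, can be sketched as a simple sort. Requirements 5.9 and 3.2 are taken from Figure 10.2; requirement "C.1" and all the effort scores are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Requirement:
    req_id: str
    description: str
    priority: int   # 1 = essential (A list), 2 = very desirable (B), 3 = nice to have (C)
    effort: int     # rough implementation effort, 1 (easy) .. 5 (hard)

def build_order(requirements):
    """Essentials always come first; among B- and C-list items,
    favour low-effort wins, as the chapter suggests."""
    return sorted(requirements,
                  key=lambda r: (r.priority != 1, r.effort, r.priority))

reqs = [
    Requirement("3.2", "Improve cash collection process", 2, 4),
    Requirement("5.9", "Online credit checking", 1, 3),
    Requirement("C.1", "Audible stock warning", 3, 1),   # a 'bells and whistles' item
]
print([r.req_id for r in build_order(reqs)])  # ['5.9', 'C.1', '3.2']
```

Note how the easy C-list item jumps ahead of the harder B-list one without ever displacing an essential requirement; any real weighting scheme would be agreed with the users, not hard-coded.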
Information flow diagrams
The information flow diagram (IFD) is one of the simplest tools used to document findings from the requirements determination process. It is used for a number of purposes:
■ to document the main flows of information around the organisation;
■ for the analyst to check that they have understood those flows and that none has been omitted;
■ for the analyst to use during the fact-finding process itself as an accurate and efficient way to document findings as they are identified;
■ as a high-level (not detailed) tool to document information flows within the organisation as a whole or a lower-level tool to document an individual functional area of the business.
The information flow diagram is a simple diagram showing how information is routed between different parts of an organisation. It has an information focus rather than a process focus.
An information flow diagram has three components, shown in Figure 10.3. The ellipse in the diagram represents a source of information, which then flows to a destination location. In a high-level diagram, the source or destination would be a department or specific functional area of the business such as sales, accounting or manufacturing. In a lower-level (more detailed) diagram, one might refer to subfunctions such as accounts receivable, credit control or payroll (as you would normally find in an accounts department). The name of the source or destination should appear inside the ellipse. The source or destination is sometimes referred to as an ‘internal’ or ‘external entity’ according to whether it lies inside or outside the system boundary. The term ‘entity’ is used frequently when constructing entity relationship diagrams, and entities are described more fully later.
The information flow, as represented by the arrowhead line, shows a flow of information from a source location to a destination. In an IFD the line should always be annotated with a brief description of the information flow. So, for example, if a sales department sends a customer’s order details to the accounts department for credit checking, the resulting flow might look like Figure 10.4.
Sources or destinations lying within the system’s boundary imply that this information will be used directly by the system. The concept of the system boundary is explained further in Case Study 10.1. This detailed example illustrates how an IFD could be used in practice.
Figure 10.3 Information flow diagrams – the basic building blocks: the source or destination, the information flow, and the systems boundary
Figure 10.4 An illustration of a simple information flow
Sales --- Customer order details ---> Accounts
CASE STUDY 10.1
IFD drawing – a student records system
Suppose that a university wished to move from a manual, paper-based student records system to one that was computerised. The analyst would need to create a clear picture of the required information flows to help the system designer with the blueprint for the proposed system. We include some sample narrative to demonstrate the possible result of an interview between the analyst and the head of admissions.
When a student enrols for the first time, they are required to fill in a form which has the following details:
■ forename;
■ surname;
■ date;
■ local authority;
■ home address;
■ term-time address;
■ home telephone number;
■ term-time telephone number;
■ sex;
■ course code;
■ course description;
■ module code (for each module being studied);
■ module description (as above).
When the forms have been completed, they are passed to the student information centre. A series of actions follows:
■ The student information centre (SIC) allocates the student a unique code number which stays with the student until they complete their studies.
■ The SIC creates a card index of the student’s details down to and including course description, plus the new student code number.
■ The SIC also creates a list of all students belonging to each local education authority (LEA).
■ The SIC sends the LEA list to the finance department, which then invoices the LEAs for the tuition fees relating to the students from their area.
■ The SIC creates a study record card (SRC), giving the student details and the modules being studied.
■ The SIC groups the SRCs by course and for each course sorts the cards into student name order; the SRCs for each course are then sent to the department that runs that course.
■ Each department will take the SRCs for its courses and produce a number of class lists, based around the modules that the student is studying, which are then passed to the relevant module leaders.
■ The SIC will issue the student with an enrolment form which the student can use to obtain a library card.
■ Finally, the SIC will pass a list of all new students to the library and the students’ union so that the library can issue students with library cards and the students’ union can issue students with their NUS cards.
It is necessary to translate the above into a series of information flows and also define the systems boundary (i.e. the line that separates what is in the system under consideration from what is outside it).
In order to be successful in drawing IFDs, it is helpful to follow a few simple steps, since an attempt to draw a diagram from scratch may prove a little tricky:
Step 1 List all the sources of information for the system under consideration (in other words, places where information is generated).
Step 2 List all the destinations (receivers) of information for the system under consideration.
Step 3 Make a simple list of all the information flows.
Step 4 For each of the information flows identified in Step 3, add the source and destination that relate to it.
Step 5 Draw the IFD from the list that you produced from Steps 3 and 4.
Tips
1. When you have gained experience in doing this, Steps 1 and 2 can be ignored and Steps 3 and 4 can be combined.
2. An information source/destination can appear more than once on an IFD – it can help to eliminate lots of crossed lines (and crossed lines are best avoided since the annotations can look rather jumbled).
3. Use A4 paper, or larger, in landscape mode.
The result of your efforts should look something like this:
Step 1 (information generators)
■ STUDENT ■ SIC ■ FINANCE ■ DEPARTMENT
Step 2 (information destinations)
■ STUDENT ■ SIC ■ LEA ■ TUTOR ■ LIBRARY ■ STUDENTS’ UNION
Step 3 (information flows)
■ Student’s personal and course information ■ LEA list ■ Invoices ■ Students on course ■ Class list ■ Enrolment form ■ List of all new students (times two)
Step 4 (adding sources and destinations to the information flows)
Generator         Flow                                        Destination
STUDENT           Student’s personal and course information   SIC
SIC               LEA list                                    FINANCE
FINANCE           Invoices                                    LEA
SIC               Students on course                          DEPARTMENT
DEPARTMENT        Class list                                  TUTOR
SIC               Enrolment form                              STUDENT
SIC               List of all new students (1)                LIBRARY
SIC               List of all new students (2)                STUDENTS’ UNION
STUDENTS’ UNION   NUS card                                    STUDENT
LIBRARY           Library card                                STUDENT
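The Step 4 table above is, in effect, a small dataset, and the Step 1 and Step 2 lists can be derived from it automatically. A minimal sketch (using a subset of the case-study flows as illustrative data):

```python
# Each flow is (generator, description, destination), mirroring the
# Step 4 table from the student records case study.
flows = [
    ("STUDENT", "Student's personal and course information", "SIC"),
    ("SIC", "LEA list", "FINANCE"),
    ("FINANCE", "Invoices", "LEA"),
    ("SIC", "Students on course", "DEPARTMENT"),
    ("DEPARTMENT", "Class list", "TUTOR"),
    ("SIC", "Enrolment form", "STUDENT"),
    ("SIC", "List of all new students", "LIBRARY"),
    ("SIC", "List of all new students", "STUDENTS' UNION"),
]

generators = {src for src, _, _ in flows}      # Step 1: information generators
destinations = {dst for _, _, dst in flows}    # Step 2: information destinations
print(sorted(generators))
print(sorted(destinations))
```

This is why the tip above says Steps 1 and 2 can be skipped once you are experienced: they fall out of the flow list for free.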
Step 5 (the completed diagram). It is almost certain that if you were to attempt this diagram your results would not be exactly the same. However, provided that all the flows are represented correctly and there are no crossed lines, the result will be perfectly acceptable. Also, note that the student appears twice on the diagram. This is not just because the student is important (which of course they are!), but because it helps avoid crossed lines (Figure 10.5).
M10_BOCI6455_05_SE_C10.indd 363 30/09/14 7:21 AM
364 Part 2 BUSINESS INFORMATION SYSTEMS DEVELOPMENT
What remains now is to consider the systems boundary. If this manual information system were to be replaced by a new computer-based information system, it would be necessary to identify what would be within the systems boundary and what would be external to the system and, hence, outside the system boundary. For the purposes of this example, we will make some assumptions:
■ Students are external to the system – they provide information as an input to the system and receive outputs from the system but are not themselves part of it – students will, therefore, be outside the system boundary.
■ The student information centre is clearly central to the whole system and, therefore, is an integral part of the system under consideration – the SIC will lie inside the system boundary.
■ The finance area needs a further assumption to be made. Let us assume that the finance area operates a computer-based information system for its accounting records and that the proposed system is to interface directly with it; in this case, it would make sense to include the finance area inside the system boundary.
■ Similarly, suppose that the library operates its own computerised lending system. In the new system, it may wish to use an interface between the student records system and its own system for setting up new students' details. Since the library system is a separate one and does not require development itself, we will place the library outside the system boundary.
■ As with the library, we need to make an assumption about the students' union information systems. The students' union may be able to use an interface file from the student record system to generate NUS cards automatically; but, as with the library, that system would lie outside the scope of the area under consideration. Therefore, we will place the students' union outside the system boundary.
■ It is reasonable to assume that the tutor is only to receive outputs from the system rather than carry out any processing of the data; it is reasonable, then, for the tutor to lie outside the system boundary.
■ Finally, the local education authority is physically external to the university as well as not being part of the university itself; the LEA should, therefore, lie outside the system boundary.
We can see the result of this analysis in the final IFD, with the system boundary included (Figure 10.6).
You will observe that there are three different types of information flow:
■ the first crosses the system boundary from outside with its destination inside the boundary – it is thus an input to the system from the external environment;
■ the second lies entirely within the system boundary and is, therefore, an output from one area in the system which then forms the input to another;
■ the third begins inside the system boundary and its destination lies outside – it is, therefore, an output from the system into its external environment.
What we have now is a diagram that clearly identifies the context for the systems development under consideration. The diagram can be used by the analyst to check with the prospective system users that all areas have been covered. It also helps the user community build a picture of how a
Figure 10.5 A simple, high-level IFD, excluding the system boundary
[Figure: the Student, SIC, Finance, Department, LEA, Tutor, Library and Students' union areas connected by labelled flows – student and course details, students by LEA, invoices, class lists, students on course, enrolment form, lists of new students, library card and NUS card]
Chapter 10 Systems analysis
new computer system should help to make the processes more efficient. Two separate IFDs are often drawn:
1. System ‘as-is’ to identify inefficiencies in the existing system.
2. New proposed system to rectify these problems.
Further work is now required to identify the business processes and data needs for the proposed system; this is where the following tools come in.
Source: Simon Hickie, course notes
Figure 10.6 The completed IFD, including the system boundary
[Figure: the same flows as Figure 10.5, with a systems boundary drawn around the SIC, Finance and Department areas; Student, LEA, Tutor, Library and Students' union lie outside the boundary]
Context diagrams
Context diagrams are simplified diagrams that are useful for specifying the boundaries and scope of the system. They can be readily produced after the information flow diagram since they are a simplified version of the IFD showing the external entities. They show these types of flow:
1. Flow crosses the system boundary from outside with its destination inside the boundary – it is thus an input to the system from the external environment.
2. Flow begins inside the system boundary and its destination lies outside – it is, therefore, an output from the system into its external environment.
The internal flows which lie entirely within the system boundary are not shown. Context diagrams provide a useful summary for embarking on dataflow diagrams and entity relationship diagrams, since they show the main entities. The main elements of a context diagram are:
■ a circle representing the system to be investigated;
■ ellipses (or boxes) representing external entities;
■ information flows.
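Since a context diagram is just the IFD with internal flows hidden, it can be derived mechanically. The sketch below is illustrative only (the flow list and the choice of which areas lie inside the boundary are assumptions taken from the worked example): a flow belongs on the context diagram exactly when one end, and only one end, is inside the system boundary.

```python
# Areas assumed to lie inside the system boundary in the worked example.
inside = {"SIC", "FINANCE"}

flows = [
    ("STUDENT", "Student and course details", "SIC"),  # input: crosses in
    ("SIC", "LEA list", "FINANCE"),                    # internal: hidden
    ("FINANCE", "Invoices", "LEA"),                    # output: crosses out
    ("SIC", "Enrolment form", "STUDENT"),              # output: crosses out
]

# Keep only flows with exactly one end inside the boundary.
context_flows = [
    (src, label, dst)
    for src, label, dst in flows
    if (src in inside) != (dst in inside)
]
```

The inequality test captures both flow types the text lists: inputs (destination inside) and outputs (source inside), while discarding purely internal flows.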
Figure 10.7 shows a context diagram for the student records system described in Case Study 10.1.
Figure 10.7 Context diagram for the student records system described in Case Study 10.1
[Figure: the student records system as a central circle, surrounded by the external entities Student, Tutor, LEA, Library and Students' union; the flows crossing the system boundary – student and course details, enrolment form, class lists, invoices, list of new students, and library and NUS cards – are shown as labelled arrows]
Dataflow diagrams (DFDs)
Dataflow diagrams (DFDs) define the different processes in a system and the information that forms the input and output to the processes. They provide a process focus to a system. They may be drawn at different levels: level 0 provides an overview of the system with levels 1 and 2 providing increasing detail.
Dataflow diagrams of different types are one of the mainstays of many systems analysis and design methodologies. SSADM, for example, makes extensive use of DFDs, not only to document things as they are at the moment but also to document the required system. Whether the latter is really of any value is debatable. However, as a tool to document processes, data or information flows and the relationships between them for an existing system (computerised or paper-based), they are extremely valuable.
Dataflow diagrams build on IFDs by adding two new symbols as well as subtly redefining others.
The diagram conventions in Figure 10.8 are those that are in most common use in Europe. Differing methodologies adopt different symbols for some items (such as a circle for a process), as you will see in some of the supplementary texts for this chapter.
Explanations of symbols
■ Sources and sinks – an information source is one which provides data for a process and is outside the system boundary. A sink lies outside the system boundary and is a receiver of information. There is a clear distinction between the use of this symbol in the DFD
and in the IFD that we looked at before, in that the symbol should not appear inside the system boundary.
■ Processes – convert data into either usable information or data in a different form for use in another process. The data that enter a process can come either from a datastore (see below) or from an external source.
■ Datastores – a datastore can either provide data as input to a process or receive data that have been output from a process. The amount of time that data would spend in a datastore can vary from a very short time (e.g. fractions of seconds in the case of some work files) to much longer periods (e.g. months or years in the case of master files).
■ Dataflows – a dataflow describes the exchange of information and data between datastores and processes and between processes and sources or sinks. Note that in this context we are using ‘data’ in a broad sense (to include information) rather than in the narrow sense used earlier (in Part 1 of the book).
■ Systems boundary – remains the same as for an IFD. It indicates the boundary between what lies inside the system under consideration and what lies outside.
Drawing dataflow diagrams
It is unfortunate that many texts actually contain errors in the DFD examples used. This is mainly through having ‘illegal’ information flows. In a well-constructed diagram, you will note the following:
■ Data do not flow directly between processes – the data that enter a process will come either from a source or from a datastore, they cannot exist in a vacuum!
■ Data do not flow directly between datastores – there must be an intervening process that takes the input data and converts them into a new form and outputs them to either a datastore or a sink.
■ Data do not flow directly from a datastore to a sink, or from a source to a datastore – there must be an intervening process.
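The three rules above are mechanical enough to check automatically. The following is a small illustrative checker, not from the text; the node names are hypothetical and each node is tagged with its kind (a source and a sink are both tagged "external").

```python
# Tag each DFD node with its kind; names are illustrative only.
kind = {
    "Student": "external",               # source or sink
    "Allocate student code": "process",
    "Updated forms": "datastore",
    "Card index file": "datastore",
}

def illegal(src: str, dst: str) -> bool:
    """Return True if a direct dataflow src -> dst breaks a construction rule."""
    a, b = kind[src], kind[dst]
    if a == "process" and b == "process":
        return True   # rule 1: no direct process-to-process flow
    if a == "datastore" and b == "datastore":
        return True   # rule 2: no direct datastore-to-datastore flow
    if "datastore" in (a, b) and "external" in (a, b):
        return True   # rule 3: a datastore never talks directly to a source/sink
    return False
```

Run over every dataflow in a draft diagram, a check like this catches exactly the 'illegal' flows that the text notes appear in many published examples.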
To draw a basic high-level DFD, there are five steps required:
1. Identify and list all processes which take place in the system under consideration. A process is an event where an input of some kind, from either a source or a datastore, is transformed into an output (the output being either to a sink or to a datastore).
Figure 10.8 Symbols used in data flow diagrams
[Figure: the symbol used for each element – an external source or sink, an information flow, a process, a datastore and the information system boundary]
2. Identify all the datastores which you think exist in the system under consideration. A datastore will exist wherever a set of facts needs to be stored about persons, places, things or events.
3. For each process identified in Step 1, identify where the information used in the process comes from (this can be from a source or a datastore or both) and identify the output(s) from that process (which can be an information flow to a sink or to a datastore or to both).
4. Draw a ‘mini-DFD’ for each single process, showing the process box and any relevant sources, sinks or datastores.
5. Link the mini-DFDs to form a single diagram, using the datastores to link the processes together.
To help you to construct a diagram, the following tips are useful:
■ use A4 paper in landscape orientation;
■ aim to have no more than about six or seven processes on a page (ten maximum);
■ include the same datastores, sources and sinks more than once if required (to eliminate crossed lines or to make the flows clearer).
Before working through the student records example introduced in the previous section, it is necessary to introduce the concept of ‘levelling’ in DFDs. For anything other than a very small system with a handful of processes, it would be almost impossible to draw a single diagram with all the processes on it. It is necessary, therefore, to begin with a high-level diagram with just the broadest processes defined. Examples of high-level processes might be ‘process customer orders’, ‘pay suppliers’ or ‘manufacture products’. Needless to say, each of the processes described can be broken down further until all the fundamental processes which make up the system are identified. It is usual to allow up to three or four levels of increasing detail to be identified. If there are any more levels of detail than this, it suggests that the system is too large to consider in one development and that it should be split into smaller, discrete subsystems capable of separate development.
To illustrate the levelling concept and also to demonstrate how process boxes should be used, we will take the simple example of checking a customer order. At Level 1, the process box will appear as in Figure 10.9.
It is desirable to split this process up into smaller components. As an example, suppose the following are identified:
■ check customer credit limit – can the customer pay for the goods?
■ perform stock check – to see whether the desired goods are in stock;
■ create sales order – this may be a special order form that is needed for each order;
Figure 10.9 An example of a Level 1 process in a DFD
[Figure: a process box labelled 'Process customer order'; annotations point out the process number (1), the place where the process is performed (SALES) and the process description]
■ send order to warehouse – the warehouse will need to pick the stock ready for delivery;
■ dispatch customer order;
■ invoice customer.
This will give us six new processes to record at the next level. The process box for the first Level 2 process would be similar to this (Figure 10.10).
Note that the process number is 1.1. This indicates that the process has been decomposed from the higher-level process numbered 1. Subsequent processes would be numbered 1.2, 1.3, 1.4, and so on. Also note that the process name begins with a verb. The choice of verb helps indicate more clearly the type of process that is being performed.
Suppose now that we still need to decompose the new process 1.1 further. For example, the credit check process may involve these steps:
■ calculate order value; ■ identify current balance; ■ produce credit check result.
We need to present the new processes as Level 3 processes, since they have been decomposed from the higher Level 2 process. The first of these would be represented as in Figure 10.11.
The new processes would be numbered 1.1.1, 1.1.2 and 1.1.3. This approach to numbering allows each of the low-level processes to be easily associated with the higher- level process that generated it. Thus, for example, processes 3.2.1, 3.2.2 and 3.2.3 could be tracked back to process 3.2, and thence to process 3.
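The dotted numbering scheme makes tracing a low-level process back to its parents a simple string operation. A minimal sketch (the function name is illustrative): repeatedly dropping the last component of a process number yields its chain of higher-level processes.

```python
def ancestors(process_number: str) -> list[str]:
    """Return the higher-level processes that a process was decomposed from,
    nearest parent first, e.g. '3.2.1' -> ['3.2', '3']."""
    parts = process_number.split(".")
    return [".".join(parts[:i]) for i in range(len(parts) - 1, 0, -1)]

print(ancestors("3.2.1"))  # ['3.2', '3']
```

A top-level process such as '3' has no ancestors, which is consistent with the convention that three or four levels of decomposition is the practical maximum.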
We will now return to the student enrolment example. We will concentrate on producing a Level 1 diagram for this procedure, although it will be clear that the example is a somewhat simplified one.
The first task is to identify all the processes which exist. Looking back to Figure 10.6, we can identify the following:
1. allocate unique student code;
2. create student card index;
3. create LEA list;
Figure 10.10 An example of a Level 2 process in a DFD
[Figure: a process box numbered 1.1, performed in SALES, described 'Check customer credit limit']
Figure 10.11 An example of a Level 3 process in a DFD
[Figure: a process box numbered 1.1.1, performed in SALES, described 'Calculate order value']
4. invoice LEA;
5. create student record card;
6. create class list;
7. issue enrolment form;
8. issue new students list.
Step 2 requires us to identify all the datastores which might exist. Our example reveals the following:
■ student card index;
■ LEA list;
■ student record card;
■ class list;
■ new students list.
Step 3 requires us to construct a ‘mini-DFD’ for each of the eight processes identified above. We will restrict ourselves to the first three (see Figures 10.12, 10.13 and 10.14).
You will see from these figures that each of the processes we have considered has generated an output which forms an input to the next process. In the full diagram in Figure 10.15 you will see the complete picture, including all processes, datastores, sources and sinks. In this diagram, you will notice that the datastore 'card index file' appears more than once. This does not mean that there are two separate datastores with the same name, but that we have included it for a second time to make the diagram easier to draw. If we did not do this, there would have been either crossed lines or at least very tortuous ones. A system boundary is also included and you will note that sources and sinks lie outside the system boundary, while processes and datastores are inside. Many of the dataflows are inside the boundary, but you can see where flows also cross the system boundary.
The final point to note is that a dataflow diagram is time-independent. This means that we are not trying to show the sequence in which things happen, but rather to show all the things that happen.
Figure 10.12 Mini-DFD for process 1
[Figure: the Student source supplies student details to process 1, 'Allocate unique student code' (SIC), which outputs student details to datastore 1, 'Updated forms']
Figure 10.13 Mini-DFD for process 2
[Figure: process 2, 'Create student card index' (SIC), takes student details from datastore 1, 'Updated forms', and outputs a new student card to datastore 2, 'Card index file']
Figure 10.14 Mini-DFD for process 3
[Figure: process 3, 'Create LEA lists' (SIC), takes student details from datastore 2, 'Card index file', and outputs student and LEA details to datastore 3, 'LEA lists']
The benefits to an organisation of constructing a dataflow diagram can be summed up in the following ‘three Cs’:
■ Communication. A picture paints a thousand words and DFDs are no exception. A diagram can be used by an analyst to communicate to end-users the analyst’s understanding of the area under consideration. This is likely to be more successful than what Ed Yourdon describes as the ‘Victorian novel’ approach to writing specification reports.
■ Completeness. A DFD can be scrutinised by functional area personnel to check that the analyst has gained a complete picture of the business area being investigated. If anything is missing or the analyst has misinterpreted anything, this will be clearer to the user if there is a diagram than if purely textual tools are used.
Figure 10.15 Completed DFD for the student record system
[Figure: the full Level 1 DFD. Processes 1–8 (allocate unique student code; create student card index; create LEA lists; invoice LEAs; create study record card; create class lists; issue enrolment form; issue new student lists) are linked by the datastores Updated forms, Card index file, LEA lists and Study record cards; Student, LEA, Tutor, Library and Students' union appear as sources and sinks outside the system boundary]
■ Consistency. A DFD will represent the results of the fact-finding exercise conducted by the analyst. For the DFD to be constructed at all, the analyst will need to compare the fact-finding results from all the areas investigated and look for linkages between them. If the same processes are portrayed differently by different people, then the DFD will be hard to construct. In such an event, this will be a catalyst for the analyst to return to the fact-finding task, perhaps using brainstorming to get to the real facts.
Entity relationship diagrams (ERDs)
Entity relationship diagrams (ERDs) provide a data-focused view of the main data objects or entities within a system such as a person, place or object and the relationships between them. It is a high-level view and does not consider the detailed attributes or characteristics of an object such as a person’s name or address.
In dealing with entity relationship diagrams, we must bear in mind that we are beginning to move away from the analysis stage of the systems development lifecycle towards the design stage. This is because we are beginning to think about how data are represented and how different sets of data relate to each other. For this chapter, we will concentrate on the fundamentals of entity relationships as they exist within a particular business situation, rather than on the detail of database design which follows directly from using this tool. Database design will be covered in much more detail later (in Chapter 11) where a technique called data normalisation will also be covered.
In any business situation, data (whether paper-based or computerised) are processed to produce information to assist in the decision-making processes within that business area. Processes may change over time and new ones be created to provide new or different information, but very often the types of data that underpin this remain relatively unchanged.
Sometimes, data requirements change to allow new processes to be created. For example, a supermarket that moves to an electronic system from a manual one will generate new data in the form of sales of specific products at specific times and in specific quantities. The data can then be linked to automated stock ordering systems and the like.
In order to produce good-quality information, two things are needed above all others. These are:
■ accurate data; ■ correct processing.
If data are inaccurate, correct processing will only result in the production of incorrect information. If data are accurate, but faults exist in the processing, the information will still be incorrect. However, in the second case, the capability exists for producing correct information if the processing is adjusted. With faulty data, it may not be so easy to rectify the situation.
In the analysis context, we need to engage in fact-finding activities that reveal the data that underlie all the relevant business processes, so that they can be captured and stored correctly and then processed to produce the required information. This process will reveal details of certain entities which exist within the business. One of the most useful methods that can be used here is the review of records and documentation (for example, order forms, stock control cards, customer files and so on).
An entity can be defined as facts about a person, place, thing or event about which we need to capture and store data. To take the example of a sales department, it would need to know facts about customers, orders, products and stock availability.
The essential symbols used in ERDs are very straightforward (Figure 10.16). Note that additional symbols are used in some notations, but they are not necessary for the detail of analysis conducted in this chapter.
There are a number of possible relationships between entities.
One-to-one relationships
For each occurrence of entity A there is one and only one occurrence of entity B. For example, let us assume that a lecturer may teach on only one module, and that
module may be taught by only one lecturer (an unlikely situation) (Figure 10.17).
Figure 10.16 Essential symbols in an entity relationship diagram
[Figure: a named box (e.g. 'Customer') represents an entity – the name is always in the box; a line between boxes represents a relationship between entities (one-to-many illustrated)]
Figure 10.17 A one-to-one relationship
[Figure: Lecturer 'teaches' Module; Module 'is taught by' Lecturer]
In Figure 10.17, we have added some additional information. This shows the nature of the relationship between the two entities. This information on the relation-ship is added to the line between the two entities. The relationship can be described in two ways according to which entity we refer to first. The relationships are:
■ lecturer teaches module; ■ module is taught by lecturer.
The practice of describing the relationship on the line is recommended since it helps others interpret the ERD more readily. However, the nature of the relationship is omitted on some subsequent diagrams for the sake of clarity.
One-to-many relationships
For each occurrence of entity A, there may be zero, one or many occurrences of entity B. For example, a lecturer belongs to a single division, but that division may contain many lecturers (it may, of course, have no staff at all if it has only just been created or if all the staff decided to leave) (Figure 10.18).
Many-to-many relationships
For each occurrence of entity A, there may be zero, one or many occurrences of entity B, and for each occurrence of entity B there may be zero, one or many occurrences of entity A.
For example, a course module may be taken by zero, one or many students and a student may take zero, one or many course modules (Figure 10.19).
Unfortunately, especially in database design, many-to-many relationships can cause certain difficulties. Therefore, they are usually ‘resolved’ into two one-to-many relationships through the creation of a ‘linking’ entity. The decomposition is shown in Figure 10.20. The linking entity will contain an item of data from each of the other entities which allows the link to be made.
The following example shows a simple ERD which illustrates each of the above possibilities in more detail.
Suppose that a nation has a professional hockey league, comprising 16 clubs. Each club may only play in this one league. Each club may employ a number of professional players (although it is also possible for a team to consist completely of amateurs). Each professional player may only be contracted to one club at a time and may also experience periods of unemployment between contracts. Professional players are also eligible to play for their
Figure 10.18 A one-to-many relationship
[Figure: Division 'contains' Lecturer; Lecturer 'belongs to' Division]
Figure 10.19 A many-to-many relationship
[Figure: Module 'is taken by' Student; Student 'takes' Module]
national team, but any one player may only ever play for one national team. Finally, suppose that professionals may have a number of sponsors and that each sponsor may sponsor a number of players.
If we inspect the previous paragraph, we can identify the following entities:
league; club; professional; national team; sponsor.
Our first-cut ERD is shown in Figure 10.21. The only obvious difficulty here is the many-to-many relationship between professional
player and sponsor. We can resolve this by introducing a linking entity which contains something common to both an individual player and their sponsor. This can be seen in the next ERD of Figure 10.22.
Figure 10.20 A many-to-many relationship decomposed into two one-to-many relationships
[Figure: Module and Student each have a one-to-many relationship with the linking entity Student/module]
Figure 10.21 First ERD for the professional hockey example
[Figure: the entities League, Club, Professional player, National team and Sponsor, with a many-to-many relationship between Professional player and Sponsor]
Figure 10.22 Final ERD for the professional hockey example
[Figure: as Figure 10.21, but with the Professional player–Sponsor many-to-many relationship resolved through the linking entity Sponsorship agreement]
We have introduced the linking entity 'sponsorship agreement' to resolve the many-to-many relationship. Thus, any one player may have many sponsorship agreements, but any one sponsorship agreement will belong to one player and to one sponsor.
This example was pretty straightforward. Others will be less so and it is therefore time to go back to our student records example from earlier in the chapter. In fact, we have already started the process of thinking about entities because the earlier DFD section required us to think about datastores – somewhere we store data, in other words a possible entity! Faced with a more complex set of possible relationships, it is useful to adopt a more structured approach to constructing ERDs.
There are six steps that can be helpful in producing an ERD, especially when one lacks experience in drawing them:
1. Identify all those things about which it is necessary to store data, such as customers and orders.
2. For each entity, identify specific data that need to be stored. In the case of a customer, for example, name, address and telephone number are all necessary.
3. Construct a cross-reference matrix of all possible relationships between pairs of entities and identify where a relationship actually exists. To do this, it is very helpful to identify some item of data which is common to the pair of entities under consideration.
4. Draw a basic ERD showing all the possible relationships, but not yet the degree of the relationship.
5. On the basic ERD, inspect each relationship and amend it to show whether it is a one- to-one, one-to-many or many-to-many relationship.
6. Resolve any many-to-many relationships by introducing an appropriate linking entity.
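Step 3's heuristic – a probable relationship exists where a pair of entities shares a common data attribute – can be sketched in a few lines. This is an illustrative toy only; the attribute lists are trimmed-down assumptions from the worked example, and real analysis still needs human judgement about whether a shared attribute reflects a genuine relationship.

```python
from itertools import combinations

# Trimmed attribute lists per entity (assumed for illustration).
attributes = {
    "Student": {"name", "home address", "LEA code", "course code"},
    "Course": {"course code", "course description", "department"},
    "LEA": {"LEA code", "address", "contact name"},
}

# Flag a probable relationship wherever two entities share an attribute.
probable = {
    frozenset((a, b))
    for a, b in combinations(attributes, 2)
    if attributes[a] & attributes[b]
}
```

Here Student–Course link via 'course code' and Student–LEA via 'LEA code', while Course–LEA share nothing, matching the Y/N pattern a cross-reference matrix records.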
Step 1: Identify the entities
By going back to the student record example and the DFD in Figure 10.15, it is possible to identify some possible candidate entities. The difficulty here is that the kind of documentation generated from the process obscures what we really need to hold data about. For example, there is a datastore called card index file. This is hardly helpful! What really needs to be done is to ask the question: ‘What things do we need to store data about?’ This may yield something rather different from the entities we thought we had before. As a starting point, we will begin with the following entities:
students; courses; LEAs; departments; modules.
Step 2: Identify specific data for each entity
Each entity will be taken in turn, and a number of data attributes suggested.
STUDENTS name; home address; sex; local education authority name; local education authority code; course code;
term-time address; date of birth; next of kin; modules taken.
COURSES course code; course description; department; course leader.
LEAs name; LEA code; address; contact name; telephone number; fax number.
DEPARTMENT department name; department location; office number; head of department.
MODULES module code; module leader; department; semester run; owning department.
Step 3: Construct cross-reference matrix
This part of the process helps novice analysts identify where relationships exist between entities. It is necessary to identify where there is a common data attribute between pairs of entities, so indicating that a probable relationship exists between them. This is the hardest part of the whole exercise. The essence is to ask the question: ‘For any occurrence of entity A, are there (now or likely to be in the future) any occurrences of B that relate to it?’ For example, is it likely that for a customer some orders exist that relate to it?
The cross-reference matrix in Figure 10.23 allows each pair of possible relationships to be examined for a link. In the cross-reference diagram, it is only necessary to identify each possible pair of relationships once. Also, there is no need to examine a relationship that an entity might have to itself. As a result, we are only interested in examining ten possible pairs of relationships for this small, five-entity example.
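The 'each pair once, no self-pairs' rule is exactly what unordered combinations give. A quick sketch (assumed, not from the text) confirming the count of ten for five entities:

```python
from itertools import combinations

entities = ["Module", "Student", "Course", "LEA", "Department"]

# Unordered pairs: no entity against itself, each pair examined once.
pairs = list(combinations(entities, 2))
print(len(pairs))  # C(5, 2) = 10
```

This is why the matrix in Figure 10.23 is triangular: only the cells above (or below) the diagonal need filling in.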
Steps 4 and 5: Construct first-cut ERD and add degree of relationship
Steps 4 and 5 will be combined, since there is nothing to be gained here from making separate diagrams. However, when drawing the diagram for Case Study 10.2, it would be wise to split the tasks as suggested.
The diagram in Figure 10.24 is almost correct, but there is still the question of the many- to-many relationship to resolve, so we must move to the final step.
Step 6: Resolve any many-to-many relationships
The many-to-many relationship about which we should be concerned is the one between students and modules. A student may enrol for many modules and any module may be taken by many students. However, what we need to represent is the ability of students to enrol for as many or as few modules as required without causing complications in either the student entity or the module entity. The many-to-many relationship is therefore resolved by introducing a linking entity which has one occurrence for each module taken by each student, across the whole student population. So if there were 100 students each studying 8 modules, the new linking entity would contain 800 records. The final diagram is in Figure 10.25.
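As an illustration of how the linking entity works in practice, the sketch below (our own, using Python's built-in sqlite3 module; the table and column names are invented for the example) builds the student–module enrolment as a junction table and confirms the 100 × 8 = 800 record count from the text.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# The two original entities keep one row per student / per module.
cur.execute("CREATE TABLE student (student_id INTEGER PRIMARY KEY, name TEXT)")
cur.execute("CREATE TABLE module (module_code TEXT PRIMARY KEY, title TEXT)")

# The linking entity: one row per (student, module) enrolment.
cur.execute("""
    CREATE TABLE student_module (
        student_id  INTEGER REFERENCES student(student_id),
        module_code TEXT    REFERENCES module(module_code),
        PRIMARY KEY (student_id, module_code)
    )
""")

cur.executemany("INSERT INTO module VALUES (?, ?)",
                [(f"M{m}", f"Module {m}") for m in range(8)])

for s in range(100):
    cur.execute("INSERT INTO student VALUES (?, ?)", (s, f"Student {s}"))
    # In this illustration every student enrols on all eight modules.
    cur.executemany("INSERT INTO student_module VALUES (?, ?)",
                    [(s, f"M{m}") for m in range(8)])

cur.execute("SELECT COUNT(*) FROM student_module")
print(cur.fetchone()[0])  # 800 enrolment records, as in the text
```

Neither original entity needs repeating groups of attributes: adding or dropping an enrolment only inserts or deletes a row in the linking table.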
By working through the student record system example, we have moved from the process of identifying what the data requirements are for the system under consideration (the analysis part) and have made substantial progress on how a database might be constructed to hold the required data (which is a design task). This exercise is far from complete, however, as database design involves more than just looking at entity relationships. The detailed database design aspects will therefore be covered later (in Chapter 11) where all aspects of system design are considered.
Figure 10.23 Cross-reference matrix for student records system
[Matrix: for each pair of the entities Module, Student, Course, LEA and Department, a Y or N records whether a relationship exists between them.]
Figure 10.24 Student record system ERD – with many-to-many relationship
[Diagram: the entities LEA, Department, Student, Module and Course connected by relationships, including the as-yet-unresolved many-to-many relationship between Student and Module.]
Figure 10.25 Final student record system ERD – with many-to-many relationships decomposed
[Diagram: the ERD with the many-to-many relationships replaced by the linking entities Student/Module and Course/Module, alongside the LEA, Student, Department, Module and Course entities.]
Soft systems methodology
A methodology that emphasises the human involvement in systems and models their behaviour as part of systems analysis in a way that is understandable by non-technical experts.
Human activity system
Human activity systems are non-tangible systems where human beings undertake activities that achieve some purpose.
FOCUS ON… SOFT SYSTEMS METHODOLOGY
Soft systems methodology is a methodology that emphasises the human involvement in systems and models their behaviour as part of systems analysis in a way that is understandable by non-technical experts.
This methodology has its origins in Peter Checkland’s attempt to adapt systems theory into a methodology which can be applied to any particular problem situation (Checkland, 1999). From an information systems development perspective, it is argued that systems analysts often apply their tools and techniques to problems that are not well defined. In addition, it is also argued that since human beings form an integral part of the world of systems development, a systems development methodology must embrace all the people who have a part to play in the development process (users, IS/IT professionals, managers, etc.). Since these people may have conflicting objectives, perceptions and attitudes, we are essentially dealing with the problems caused by the unpredictability of human activity systems.
Human activity systems are non-tangible systems where human beings are undertaking some activities that achieve some purpose.
Proponents of soft systems methodology (SSM) claim, therefore, that true understanding of complex problem situations (and in our case this means information systems development) is more likely if ‘soft systems’ methods are used rather than formal ‘hard systems’ techniques. This is not to say that ‘hard’ methods do not have a place. Rather, it is to suggest that the more traditional tools and techniques will have a greater chance of being used effectively if they are placed within a soft systems perspective.
Soft systems methodology has seven stages. They should be regarded as a framework rather than a prescription of a series of steps that should be followed slavishly.
Stage 1: The problem situation: unstructured
This stage is concerned with finding out as much as possible about the problem situation from as many different affected people as possible. Many different views about the problem
will surface and it is important to bring out as many of them as possible. The structure of the problem in terms of physical layout, reporting structure, and formal and informal communication channels will also be explored.
A soft systems investigator will often find that there is a vagueness about the problem situation being investigated and what needs to be done. There can also be a lack of structure to the problem and the situation that surrounds it.
Stage 2: The problem situation: expressed
The previous stage was concerned with gathering an informal picture of the problem situation. This stage documents these findings. While there is no prescribed method for doing this, a technique that is commonly used is the drawing of ‘rich pictures’. A rich picture can show the processes involved in the problem under consideration and how they relate to each other. The elements which can be included are the clients of the system (internal and external), the tasks being performed, the environment within which the system operates, the owners of the ‘problem’ and areas of conflict that are known to exist.
Rich pictures can act as an aid to discussion, between problem owner and problem solver or between analysts and users, or both. From a rich picture it then becomes possible to extract problem themes, which in turn provide a basis for further understanding of the problem situation. An example of a rich picture is shown in Figure 10.26. Such a diagram can be used in systems analysis to indicate the flows of information, the needs of staff and how the physical environment – in this case the office layout – affects the current way of working. This summary of the existing situation provides a valuable context for systems analysis and design.
Stage 3: Root definitions of relevant systems
Checkland (1999) describes a root definition as a ‘concise, tightly constructed description of a human activity system which states what the system is’.
A root definition is created using the CATWOE checklist technique. CATWOE is an acronym that contains the following elements:
■ Clients or customers – the person(s) who benefit from, are affected by or suffer from the outputs of the system and its activities under consideration.
■ Actors – those who carry out the activities within the system.
■ Transformation – the changes that take place either within or because of the system (this lies at the heart of the root definition).
■ Weltanschauung or Worldview – how the system is viewed from an explicit viewpoint; sometimes this term is described as the assumptions made about the system.
■ Owner – the person(s) to whom the system is answerable: the sponsor, controller or someone who could cause the system to cease.
■ Environment – that which surrounds and influences the operation of the system but over which the system has no control.
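One way to make the checklist concrete is to hold the six CATWOE elements in a simple record structure. The sketch below is our own illustration; the example entries paraphrase the university root definition quoted later in the text and are not taken from Checkland.

```python
from dataclasses import dataclass

@dataclass
class Catwoe:
    clients: str         # who benefits from or is affected by the system's outputs
    actors: str          # who carries out the activities within the system
    transformation: str  # the change the system brings about (the heart of the root definition)
    worldview: str       # the viewpoint/assumptions under which the system makes sense
    owner: str           # who the system answers to and who could cause it to cease
    environment: str     # what surrounds the system but lies outside its control

# Illustrative entries for a university root definition (our own wording).
university = Catwoe(
    clients="students",
    actors="academic and administrative staff",
    transformation="applicants become self-developed graduates",
    worldview="education is worthwhile and academic standards must be safeguarded",
    owner="the university's governing body",
    environment="budgetary constraints and external regulation",
)

print(university.transformation)
```

Recording each viewpoint as a separate `Catwoe` instance makes it easy to see where stakeholders' entries diverge, which is where competing root definitions will arise.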
The main use of the root definition is to clarify the situation so that it can be summed up in a clear, concise statement. An example of a root definition for a university might be:
To provide students with the maximum opportunity for self-development, while at the same time safeguarding academic standards and allowing the university to operate within its budgetary constraints.
An alternative root definition might be:
A system to maximise revenue and the prestige of academic staff!
If there are many different viewpoints to be represented, it is possible that a number of different root definitions may be constructed. These in turn will provide a basis for further discussion, so that a single agreed root definition can be produced. If a single root definition proves hard to produce, this indicates sharp divisions between the viewpoints represented in the CATWOE elements. From
an information systems development perspective, if it is not possible to agree on a single root definition, then the systems development process is likely to be fraught with difficulties.
Stage 4: Building conceptual models
A conceptual model is a logical model of the key activities and processes that must be carried out in order to satisfy the root definition produced in Stage 3. It is, therefore, a representation of what must be done rather than what is currently done.
Conceptual models can be shown on a simple diagram where activities and the links between them can be shown. Figure 10.27 shows a simple conceptual model of a student records system.
Where several alternative root definitions have been produced, it is usual to draw a conceptual model for each one. Successive iterations through the alternative models can then lead to an agreed root definition and conceptual model. When this has happened, it is possible to move on to the next stage.
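A conceptual model is essentially a small graph of activities and the links between them, so it can be captured in a plain data structure. The sketch below is our own; the specific links between the Figure 10.27 activities are assumptions for illustration, since the model only records which activities must exist and how they feed one another.

```python
# Activities from the student-records conceptual model (Figure 10.27).
# The dependency links here are illustrative assumptions.
links = {
    "enrol student": ["create student record", "create enrolment form"],
    "create student record": ["create LEA invoice", "create class list"],
    "update module details": ["create class list"],
}

# Collect every activity that appears as a source or a target of a link.
activities = set(links) | {a for targets in links.values() for a in targets}
print(sorted(activities))
```

Holding the model in this form makes Stage 5 straightforward: each activity and link can be compared against the rich picture to ask whether it exists in reality.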
Figure 10.26 An example of a rich picture for an estate agency showing the needs and responsibilities of different staff
[Rich picture: an estate agency branch divided into a front office and a back office. The manager watches the big picture of branch profitability ('Are targets achieved? What are our conversion rates?'). The negotiator is busy, with customer details at their fingertips, spends all their time on the phone, and is driven by 'Sales = commission'. The financial adviser sells insurance and mortgages and needs customer and property details. The branch administrator works in the back office with the printer. Customers seek a quick sale and view property details in the window display.]
Stage 5: Comparing conceptual models with reality
Different alternative conceptual models that represent what should happen can be compared with the reality of what actually happens, as represented by the rich picture produced in Stage 2.
The purpose of this step is not to alter the conceptual models so that they fit more closely with what happens in reality. Instead, it is to enable the participants in the problem situation to gain an insight into the situation and the possible ways in which the change to reality can take place.
Stage 6: Assessing feasible and desirable changes
From the output of Stage 5, an analysis of the proposed changes can be made and proposals for change drawn up for those that are considered feasible and desirable. Such changes may relate to information systems, but there is no restriction on the type or scope of the change.
Stage 7: Action to improve the problem situation
It is perhaps here that the application of the model is most evident. SSM does not describe methods for implementing solutions – that lies outside the scope of the methodology. What it does do is to provide a framework through which problem situations can be understood. In fact, there is no reason that SSM should not be used as a tool for assisting the implementation of the required solution – the steps can be repeated, but this time the problem situation under consideration is the implementation of the required solution. This in turn may throw up alternative methods such as SSADM or rapid applications development (in Chapter 7) as the best approach to information systems development. Indeed, SSM has often been used as a ‘front end’ to more traditional structured development methodologies.
Figure 10.27 Conceptual models – a simple example
[Diagram: the activities 'Enrol student', 'Create student record', 'Create enrolment form', 'Create class list', 'Create LEA invoice' and 'Update module details', linked to the external parties Student, Module leader and Local education authority.]
SYSTEMS ANALYSIS – AN EVALUATION

Any systems development project will be confronted by issues such as system size, complexity and acquisition method. These factors affect the choice of fact-finding and documentation tools. It is appropriate, therefore, to consider three alternative acquisition methods and review fact-finding and documentation needs for each.
Bespoke development

Bespoke software, which can be developed either internally or by a third party, presents the greatest scope for using the full range of analysis tools. Complex systems will require that the analyst gain a very clear and precise understanding of the business processes that take place, and all the tools at the analyst's disposal may need to be used. A combination of interviewing, documentation review and observation will yield much of the information that is needed, but if the system is a large one with many users, questionnaires may also need to be used. Brainstorming will be valuable, especially when linkages between different processes and subsystems are being investigated.

Complex projects will also require the use of all of the documentation tools we have discussed. Needless to say, the resulting diagrams will be more detailed and extensive than the ones given as examples in this chapter.
Purchasing packages off-the-shelf

Even though there is no requirement to produce something from which the system designer can produce a blueprint for the build stage, it is still necessary to gain a clear understanding of user requirements before a package is considered. Therefore, the fact-finding process will still be undertaken, but will be geared towards gaining an understanding of the features a package must support and those that are only desirable.

One benefit of deciding to purchase a package is that a number of candidate packages can be initially selected and used by the analyst as a means of identifying real user needs. It is possible, for example, for a selected group of users to review the features of a small number of packages with a view to compiling an appropriate requirements catalogue. Also, when users actually have an opportunity to experiment with a package, the analyst can gain a much greater insight into what the users' real requirements are.

For the analyst, it may still be useful to construct information flow and dataflow diagrams to help ensure that the package that is finally selected will support the required linkages, both between processes in the business area under consideration and to other business areas (from sales to accounts, for example). It will also be useful for the analyst to construct an entity relationship diagram to be sure that the packaged software will support the data requirements of the organisation.
User applications development

The situation here is somewhat different from the previous two acquisition methods. The end-user will have a clear idea of what the system is required to do. Also, it is less likely that the system will need to have linkages to other applications. The emphasis for the user, therefore, should be on identifying the data and processing requirements clearly so that they can be reviewed by others in the organisation, and an application can be produced which delivers good-quality information. Of the techniques discussed, the most relevant is the entity relationship diagram. By concentrating on data and how they are to be captured and represented, the user increases the probability that the data will be correct, while the use of fourth-generation language tools will help maximise the probability that the processing will also be correct.

Many user-developed applications suffer from poor database design and, as a consequence, the processing requirements are much more complex and prone to error. By taking care to consider carefully the relationships between the relevant data items, the probability of obtaining successful user-developed applications is increased.

SOFTWARE TOOLS FOR SYSTEMS ANALYSIS

Software tools are available to assist in the analysis phase. These usually focus on the diagramming rather than the enquiry stage, so much of the skill remains with the analyst in interpreting the users' requirements before producing meaningful diagrams showing the information flows and processes.

An important issue in using software tools to help the analyst is the degree to which the diagrams used to summarise processes can be converted easily into the system design and then into the final system. Traditionally, there have been separate tools for the analyst, designer and programmer. Since there is a strong overlap with the design phase, we will defer the examination of these tools until later (see Chapter 11, which has a section on computer-aided software engineering or CASE tools). Integrated CASE tools are intended to bridge the gap between analysis, design and programming.

CASE STUDY 10.2

ABC case study

Background
The following scenario is typical of many companies in the retail/wholesale business. A number of information flows exist both internally within the organisation and also with people outside. This case study is used for exercises on information flow diagrams (IFDs), dataflow diagrams (DFDs) and entity relationship diagrams (ERDs).

The exercise continues in Chapter 11 when the reader is asked to produce a detailed database design based on the entity relationship diagram produced and the paper form examples.

ABC case study information
Andy's Bonsai Company (ABC) specialises in selling bonsai kits by mail order. The kits are made up of a number of elements, including soil, plant pots and seeds. Other products such as mini-garden tools are also sold.

Customers place orders by telephone or by mailing an order slip which is printed as part of an ABC advert. Customers pay by cheque, credit card or debit card.

When an order is received by ABC, it is directed to a sales clerk. Each sales clerk has responsibility for a particular geographic region. The sales clerk will enter the details of the order onto a preprinted three-part order form. One part is retained by the sales clerk, one copy together with the payment is sent to the accounts department and the other is sent to the warehouse (on confirmation of the customer's creditworthiness).

On receipt of the customer orders and payment details, the accounts department ensures that the customer's payment is valid. If the payment is satisfactory, the department will inform the sales department and the order may proceed. An unsatisfactory payment situation is also communicated to the sales department, which then informs the customer of the problem.
The warehouse keeps a manual card-index system of stock and raw materials held, together with copies of the customer orders. When an order is dispatched to the customer, the relevant order form is marked as having been dispatched. The warehouse also needs to keep track of the amount of product in stock and, when stock levels are low, it sends a manufacturing order to the manufacturing department.
CUSTOMER ORDER FORM
ORDER NO.: 4214    DATE ORDERED: 29 March 1999
CUSTOMER NO.: C234792    TELEPHONE NO.: 01482 7374
CUSTOMER ADDRESS: 26 Vicarage Drive, Thorndyke, West Yorkshire WF24 7PL

CODE  DESCRIPTION                      QTY  PRICE  VALUE
1983  MINI-OAK                         2    19.95  39.90
0184  MINI-MAPLE                       2    24.50  49.00
2984  MINI GARDEN TOOLS (STAINLESS)    1    29.95  29.95
3775  MINI WATERING CAN (COPPER)       1    17.50  17.50
PAYMENT TYPE: Cheque                   ORDER VALUE: 136.35
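The order value on the sample customer order form is simply the sum of quantity × price over the line items, which a few lines of Python can verify. This is an illustration of ours; the figures are taken from the sample form.

```python
# Line items from the sample customer order form: (code, qty, unit price).
order_lines = [
    ("1983", 2, 19.95),  # MINI-OAK
    ("0184", 2, 24.50),  # MINI-MAPLE
    ("2984", 1, 29.95),  # MINI GARDEN TOOLS (STAINLESS)
    ("3775", 1, 17.50),  # MINI WATERING CAN (COPPER)
]

# Each line value is qty * price; the order value is their total.
line_values = [qty * price for _, qty, price in order_lines]
order_value = round(sum(line_values), 2)
print(order_value)  # 136.35, matching the ORDER VALUE on the form
```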
WAREHOUSE CARD INDEX                                   CARD NO.: 19
LOCATION: J82    PRODUCT CODE: 4151    PRODUCT DESCRIPTION: MINI-ASH

DATE      SIGNATURE
2/6/99    RON
4/6/99    JEFF
9/6/99    LUCY
17/6/99   ERIC

[START QTY / TRANSACTION QTY column entries, alignment not recoverable from this extract: 37, 25, 28, 23, 25, 215, 10, 150, 60]
MANUFACTURING ORDER FORM
MANUFACTURING ORDER NO.: 7210    PRODUCT CODE: 4151    PRODUCT DESCRIPTION: MINI-ASH
QUANTITY ORDERED: 50    DATE ORDERED: 3/6/99    DATE REQUIRED: 13/6/99
DATE DELIVERED: 17/6/99    SIGNATURE: BERYL
PURCHASE ORDER FORM
PURCHASE ORDER NO.: 214    DATE ORDERED: 29 March 1999
SUPPLIER NO.: S165    TELEPHONE NO.: 01637 7346
SUPPLIER ADDRESS: 14 Wyke Trading Estate, Heckwhistle, West Yorkshire WF9 5JJ

CODE  DESCRIPTION                        QTY   PRICE   VALUE
23    OAK CHIPPINGS                      25    30.00   750.00
69    2' POTS                            1000  0.30    300.00
84    SILVER SAND                        10    1.77    17.70
75    MINI WATERING CAN (STAINLESS)      20    4.56    91.20
                                         ORDER VALUE:  1158.90
The manufacturing department is responsible for ordering materials from various suppliers and then packaging them into products for sale to the customer. A three-part purchase order is made out: one part is sent to the supplier, one part is retained by the manufacturing department and the third part is sent to the accounts department. The accounts department holds copies of purchase orders for future matching with delivery notes and invoices. When the supplier delivers the ordered items, together with a delivery note, a check is made to ensure that the delivery matches the order. The supplier will send an invoice to the accounts department on confirmation that the delivery is correct so that payment can be made.
QUESTIONS
1. Using the ABC case study, produce an information flow diagram for the company by following the steps given earlier in the chapter. Does the diagram tell you anything about ABC's operations which may need some attention (such as missing or superfluous information flows)?
2. Using the ABC case study and the information flow diagram that you drew in answer to Question 1, produce a simple Level 1 dataflow diagram for the company by following the steps given earlier in the chapter. Compare your answer with that by one of your colleagues. Are the diagrams the same? If not, is it possible to say which is correct? If not, why not?
3. Using the ABC case study, including the sample forms included below the main text, construct an entity relationship diagram for the company. Make sure that you do a cross-reference matrix before attempting to draw the diagram. When you have drawn your first-cut diagram, check for many-to-many relationships and eliminate any that you find by using the appropriate technique described earlier in the chapter.
Stage summary: systems analysis
Purpose: Define the features and other requirements of the information system
Key activities: Requirements capture (interviews, questionnaires, etc.), diagramming
Inputs: Users' opinions, system documentation, observation
Outputs: Requirements specification
SUMMARY

1. The analysis phase of systems development is aimed at identifying what the new system will do.
2. Analysis will identify the business processes which will be assisted by the software, the functions of the software and the data requirements.
3. The results of the analysis phase are summarised as a requirements specification which forms the input to the design phase, which will define how the system will operate.
4. Fact-finding techniques used at the analysis stage include:
■ questionnaires;
■ interviews;
■ observation;
■ documentation review;
■ brainstorming.
5. The results from the fact-finding exercise are summarised in a requirements specification and using different diagrams such as:
■ information flow diagrams which provide a simple view of the way information is moved around an organisation;
■ dataflow diagrams which show the processes performed by a system and their data inputs and outputs;
■ entity relationship diagrams which summarise the main objects about which data need to be stored and the relationship between them.
6. The depth of analysis will be dependent on the existing knowledge of requirements. A user development may have limited analysis since the user will have a good understanding of their needs. A software house will need to conduct a detailed analysis which will form the basis for a contract with the company for which it is developing software.
EXERCISES

Self-assessment exercises

1. What is the difference between the 'funnel' and 'pyramid' approaches to structuring an interview?
2. Why can closed questions still be useful in an interview?
3. Assess the relative effectiveness of interviews versus questionnaires when attempting to establish user requirements.
4. In an information flow diagram, why should we not record information flows that lie completely outside the system boundary?
5. What are the main differences between an information flow diagram and a dataflow diagram?
6. What is meant by the term 'levelling' in dataflow diagrams?
7. In a sales order processing system, which of the following are not entities? Customer, colour, size, product, telephone number, sales order, salesperson, order date.
8. Why might the construction of an ERD still be useful even if an off-the-shelf package was going to be purchased?

Discussion questions
1. Use a simple example with no more than five processes or ten information flows to examine the differences between the information flow diagram and the dataflow diagram. Which would be more effective for explaining deficiencies with an existing system to:
(a) a business manager;
(b) a systems designer?
Justify your reasoning.
2. Compare the effectiveness of ‘soft’ methods of acquiring information such as interviews and questionnaires and ‘hard’ methods of gathering information such as document analysis and observation of staff. In which order do you think these analysis activities should be conducted and on which do you think most time should be spent?
3. ‘For producing a database, the only type of diagram from the analysis phase that needs to be produced is the entity relationship diagram. Dataflow diagrams are not relevant.’ Discuss.
Essay questions
1. Compare and contrast alternative fact-finding methods and analysis documentation tools as they might relate to bespoke software development and the purchase of off-the-shelf packages.
2. Errors in the analysis stage of a systems development project are far more costly to fix than those that occur later in the systems development lifecycle. Why do some organisations seem to devalue the analysis process by seeking to get to the system build as quickly as possible?
3. Compare and contrast the relative effectiveness of the use of information flow diagrams, dataflow diagrams and entity relationship diagrams by a business analyst to demonstrate inefficiency in a company’s existing information management processes. Use examples to illustrate your answer.
Examination questions
1. Briefly review the arguments for and against using interviewing as a means of determining system requirements.
2. Explain the relationship between the initiation and analysis phases of the systems development lifecycle.
3. Briefly explain (in one or two sentences) the purpose of each of the following diagramming methods:
(a) information flow diagram;
(b) context diagram;
(c) dataflow diagram;
(d) entity relationship diagram.
4. Draw a diagram showing each of the following relationships on an ERD:
(a) The customer places many orders. Each order is received from one customer.
(b) The customer order may contain many requests for products. Each product will feature on many customer orders.
(c) Each customer has a single customer representative who is responsible for them. Each customer representative is responsible for many customers.
5. The final examination question is based on a detailed case study for Megatoys and is to be found on the companion web site.
References

Browne, G.J. and Rogich, M.B. (2001) 'An empirical investigation of user requirements elicitation: comparing the effectiveness of prompting techniques', Journal of Management Information Systems, 17, 4, 223–49
Buzan, T. and Buzan, B. (2010) The Mind Map Book, BBC Active, London
Checkland, P.B. (1999) Systems Thinking, Systems Practice, John Wiley, Chichester
Lindstrom, L. and Jeffries, R. (2004) 'Extreme programming and agile software development methodologies', Information Systems Management, 21, 3, 41–52
Shi, Y., Specht, P. and Stolen, J. (1996) 'A consensus ranking of information systems requirements', Information Management and Computer Security, 4, 1, 10–18
Taylor, D. (1995) Business Engineering with Object Technology, John Wiley, New York
Yeates, T. and Wakefield, T. (2004) Systems Analysis and Design, 2nd edition, Financial Times Prentice Hall, Harlow
Further reading

Ambler, S. and Lines, M. (2012) Disciplined Agile Delivery: A Practitioner's Guide to Agile Software Delivery in the Enterprise, IBM Press
Avison, D.E. and Fitzgerald, G. (2006) Information Systems Development: Methodologies, Techniques and Tools, 4th edition, Blackwell, Oxford
Kendall, K.E. and Kendall, J.E. (2013) Systems Analysis and Design, 9th edition, Prentice-Hall, Englewood Cliffs, NJ
Lejk, M. and Deeks, D. (2004) An Introduction to Systems Analysis Techniques and UML Distilled: A Brief Guide to the Standard Object Modelling Language, 2nd edition, Prentice Hall, Hemel Hempstead

Web links

www.cio.com CIO.com for chief information officers and IS staff has many articles related to analysis and design topics.
www.computerweekly.com Computer Weekly is an IS professional trade paper with UK/Europe focus which has many case studies on practical problems of analysis, design and implementation.
www.research.ibm.com/journal IBM Systems Journal and the Journal of Research and Development have many cases and articles on analysis and design related to e-business concepts such as knowledge management and security.
CHAPTER 11

Systems design

LEARNING OUTCOMES

After reading this chapter, you will be able to:
■ define the difference between analysis and design and the overlap between them;
■ synthesise the relationship between good design and good-quality information systems;
■ define the way relational databases are designed;
■ evaluate the importance of the different elements of design for different applications.

MANAGEMENT ISSUES

Design is also a critical phase of BIS development since errors at this stage can lead to a system that is unsatisfactory for the user. From a managerial perspective, this chapter addresses the following questions:
■ What different types of design need to be conducted for a quality BIS to be developed?
■ What are the key aspects of design for an e-business system?
■ How do we create an effective information architecture for our organisation?

CHAPTER AT A GLANCE

MAIN TOPICS
■ Aims of system design 392
■ Constraints on system design 394
■ The relationship between analysis and design 395
■ Elements of design 395
■ System or outline design 397
■ Detailed design (module design) 405
■ Design of input and output 421
■ User interface design 423
■ Input design 426
■ Output design 428
■ Designing interfaces between systems 428
■ Defining the structure of program modules 428
■ Security design 429
■ Design tools: CASE (computer-aided software engineering) tools 430
■ Error handling and exceptions 430
■ Help and documentation 430

FOCUS ON…
■ Relational database design and normalisation 405
■ Web-site design for B2C e-commerce 424
■ Object-oriented design (OOD) 431

CASE STUDY
11.1 Beaverbrooks the Jewellers 393
11.2 Systems management: driving innovation should be the main objective 401
M11_BOCI6455_05_SE_C11.indd 391 30/09/14 7:11 AM
Part 2 BUSINESS INFORMATION SYSTEMS DEVELOPMENT392
The design phase of information systems development involves producing a specification or ‘blueprint’ of how the system will work. This forms the input specification for the final stage of building the system by programmers and database administrators. The design phase is also closely linked to the previous analysis phase, since the users’ requirements directly determine the characteristics of the system to be designed.
The systems design is given in a design specification defining the best structure for the application and the best methods of data input, output and user interaction via the user interface. The design specification is based on the requirements collected at the analysis stage.
Design is important, since it will govern how well the information system works for the end-users in the key areas of performance, usability and security. It also determines whether the system will meet business requirements – whether it will deliver the return on investment. The design specification will include the architecture of the system, how security will be implemented, and methods for entry, storage, retrieval and display of data.
Before the widespread adoption of Internet technologies from the mid-1990s onwards, system design for BIS tended to focus on the design of applications for the different functional areas of the business (as described in Chapter 6). In the era of e-business, design of such applications for purposes such as electronic procurement, supply chain management and customer relationship management is still required. However, the adoption of standard packaged applications from vendors such as SAP and Oracle for enterprise applications has meant that system design has changed in its nature. Many of the challenges of design now involve tailoring the user interfaces and data storage and transfer for these standard applications. It is now less common for systems to be designed without the use of pre-existing software applications or components.
A further change in the emphasis of design has been caused by the increasing volume of unstructured information that is made available to businesses and consumers via the Internet and the World Wide Web. A design challenge faced by all organisations, large and small, is managing this content in order to deliver relevant, timely information to their stakeholders, whether they be employees, customers, suppliers, partners or government agencies. So designing an effective information architecture that enables an organisation to deliver content via the intranet, extranet and Internet networks introduced earlier (in Chapters 5 and 6) has become a major challenge.
In this chapter, we explore the elements both of traditional application design and of delivery of web-based content. We start by introducing the concepts of effective design which apply to all types of information system. We then look at how input design, output design and interfacing with other systems occurs for traditional applications and for those delivered via web browsers. Finally, we look at approaches to building information architecture. Throughout the chapter, we will refer to the example of information systems required by a bank to illustrate today’s challenges of system design.
INTRODUCTION
Systems design
The design phase of the lifecycle defines how the finished information system will operate. This is defined in a design specification of the best structure for the application and the best methods of data input, output and user interaction via the user interface. The design specification is based on the requirements collected at the analysis stage.
AIMS OF SYSTEM DESIGN
In systems design we are concerned with producing an appropriate design that results in a good-quality information system that:
■ is easy to use;
■ provides the correct functions for end-users;
■ is rapid in retrieving data and moving between different screen views of the data;
■ is reliable;
■ is secure;
■ is well integrated with other systems.
These factors are clearly all important to delivering a satisfactory experience to end-users and a satisfactory return on investment to the business. Consider an online banking service for customers which may also be accessed by staff – all these factors are vital to the success of the system, and so the design is also vital to its success.
Beaverbrooks the Jewellers is a family business established a century ago. With staff dispersed between branches and head office locations, increased paperwork, administration and information demands had become a significant issue. Much of Beaverbrooks' business concerns providing special order items: wedding and engagement rings, necklaces, engraved silverware and watches. While each shop stocks the full range of wedding and engagement rings, it is rare that they will have all sizes in stock at every store.
Patrick Walker, head of management information systems, says that most couples choose the design of the ring together and then the ring will be ordered either from a central warehouse, another local branch or directly from a supplier. ‘The result is that much of Beaverbrooks’ staff time is taken up following the progression of the order’, he says. ‘We needed a system that would manage this for us. In addition, we discovered that there was a mass of email going round and round the organisation and not always reaching the appropriate person.’ Collaboration between head office and the branches, a central repository for documentation and a framework to enable company information and knowledge to be exchanged were the desired outcomes that arose from a cross-company focus group.
All company data, regardless of its format or origin, is now held in one place on a central server where it can be easily shared, searched, retrieved, backed up and managed. KnowledgeWorker collaboration, search and
workflow tools sit over the top of the data, which is accessed locally or remotely through a web browser interface. Branch staff now use the central data system extensively for stock enquiries, placing special orders, sharing company information and making sure the merchandising in each branch conforms to the current company branding and directives.
For Beaverbrooks, integrating its information and processes into a central collaborative system has led to improved productivity throughout the organisation. ‘Improving our methods of storing information and then sharing it between employees has halved our administration time’, says Walker.
‘We keep finding more activities the system can help us with, so we are spending that extra time doing new things with our information to make us even more effective as a business.’
Source: Linda More, Computing, 25 October 2007, http://www.computing.co.uk/ctg/analysis/1821325/case-study-beaverbrooks-jewellers
Beaverbrooks the Jewellers
Beaverbrooks the Jewellers created a system that collates data from all stores and places it in one location
CASE STUDY 11.1
QUESTIONS
1. Read the case study and identify the main design elements that needed to be considered.
2. Identify any design features that can be directly linked to specific business benefits.
It is also important to think forward to future releases of the software. When the software is updated in the maintenance phase, it is important to have a system that can be easily modified. Good documentation is important to this, but equally important is that the design be flexible enough to accommodate changes to its structure. To achieve flexibility, simplicity in design is a requirement. Many designers and developers adopt the maxim 'KISS' or 'Keep It Simple, Stupid!'.
Whitten and Bentley (2006) point out that design does not simply involve producing an architectural and detailed design, but is also an evaluation of different implementation methods. For example, an end-user designing an application will consider whether to implement a system within an application such as Microsoft Access or develop a separate Visual Basic application. However, it is usually possible to take the ‘make-or-buy’ decision earlier in the software lifecycle, even when the detailed design constraints are unknown. The acquisition method is described in more detail earlier (in Chapters 7 and 8) on the start-up phases of a project.
CONSTRAINTS ON SYSTEM DESIGN
The system design is directly constrained by the user requirements specification, which has been produced as a result of systems analysis (as described in Chapter 10). This will describe the functions that are required by the user and must be implemented as part of the design. As well as the requirements mentioned in the previous section, there are environmental constraints on design which are a result of the hardware and software environment of implementation. These include:
■ hardware platform (PC, Apple or Unix workstation);
■ operating system (Windows XP, Apple or Unix/Linux);
■ web browsers to be supported (different versions of Microsoft Internet Explorer and open-source rivals such as Opera, Mozilla Firefox, etc.);
■ data links required between the application and other programs or a particular relational database such as Oracle or Microsoft SQL Server;
■ design tools such as CASE tools;
■ methodologies or standards adopted by the organisation, such as SSADM;
■ industry standards such as data exchange using XML;
■ system development tools or development environments for programming, such as open-source technology or proprietary tools such as Microsoft Visual Studio;
■ number of users to be supported concurrently and the performance required.
Hoffer et al. (2010) refer to a design strategy as the high-level statement defining how the development of an information system should proceed which addresses all the issues described above. They identify three different aspects of design:
1. Dividing requirements (from the analysis phase discussed in the previous chapter) into sets of essential requirements and optional requirements which may be built into future versions.
2. Enumerating different potential implementation environments (hardware, system software and network platforms: discussed in Chapters 3, 4 and 5).
3. Proposing different ways to source or acquire the various sets of capabilities, for example outsourcing, purchase of pre-existing applications software or development of new capabilities (as discussed in Chapters 7 and 8 of this text).
Design strategy
A high-level statement about the approach to developing an information system. It includes statements on the system’s functionality, hardware and system software platform, and the method of acquisition.
While this is a useful way of breaking down decisions that need to be taken about design of an information system, the reality is that by the time a systems development project enters the main design phase, all three of these areas will have been agreed. They are part of the feasibility analysis described earlier (in Chapter 8). So, in this chapter we focus on the approaches to detailed design needed to implement the system requirements. These include design of the user interface, database and security of a system within the technical environment and the acquisition method that has already been selected.
THE RELATIONSHIP BETWEEN ANALYSIS AND DESIGN
As Yeates and Wakefield (2003) point out, there is considerable overlap between analysis and design. To help ensure completion of the project on time, preliminary design of the architecture of the system will start while the analysis phase is progressing. Furthermore, the design phase may raise issues on requirements that may require further analysis with the end-users, particularly with the prototyping approach.
The distinction is often made between the logical representation of data or processes during the analysis stage and the physical representation at the design stage. Consider, for example, data analysis: here the entity relationship diagram of the analysis phase described earlier (in Chapter 10) will be transformed into a physical database table definition at the design stage as described later in this chapter. A logical entity ‘customer’ will be specified as a physical database table ‘Customer’ in which customer records are stored. Similarly, the dataflow diagram will be transformed into a structure chart indicating how the different submodules of the software will interact at the design stage.
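This move from logical to physical data design can be sketched in code. The snippet below maps a logical 'customer' entity onto a physical Customer table using Python's built-in sqlite3 module; the attribute names (customer_id, name, email) and the sample record are invented for the illustration rather than taken from any particular requirements specification.

```python
import sqlite3

# Hypothetical physical design for a logical 'customer' entity:
# the entity becomes a table, its attributes become columns.
conn = sqlite3.connect(":memory:")   # throwaway in-memory database
conn.execute("""
    CREATE TABLE Customer (
        customer_id INTEGER PRIMARY KEY,  -- unique key for each record
        name        TEXT NOT NULL,
        email       TEXT
    )
""")

# Store and retrieve a customer record via the physical table.
conn.execute("INSERT INTO Customer (name, email) VALUES (?, ?)",
             ("A. Smith", "a.smith@example.com"))
row = conn.execute("SELECT name FROM Customer WHERE customer_id = 1").fetchone()
print(row[0])  # → A. Smith
```

In the same way, each entity in the ERD would typically become a table, each attribute a column and each relationship a foreign key.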
ELEMENTS OF DESIGN
The different activities that occur during the design phase of an information systems project can be broken down in a variety of ways. In this section we consider different ways of approaching system design. These alternatives are often used in a complementary fashion rather than exclusively.
A common approach to design is to consider different levels of detail. In the next main section we start by considering an overall design for the architecture of the system. This is referred to as 'system design'. Once this is established, we then design the individual modules and the interactions between them. This is known as 'module design'. Through using this approach we are tackling design by using a functional decomposition or top-down approach, similar to that referred to earlier (in Chapter 9) on project management as the 'work breakdown structure'. Major modules for an online banking system will be those for capturing and displaying data and interacting with the user, data access modules which interface to the bank's legacy customer database, and security or user access modules.
Top-down or bottom-up?
Since many systems are made from existing modules or pre-built components that need to be combined, the design approach that is most commonly employed is a top-down strategy. In this approach, it is best to consider the overall architecture first and then perform the detailed design on the individual functional modules of the system. The 'divide and conquer' approach can then be used to assign the design and implementation tasks for each module to different development team members. The description in this chapter will follow this approach by looking at the overall design first and then at the detailed module design.
Top-down design
The top-down approach to design involves specifying the overall control architecture of the application before designing the individual modules.
The bottom-up approach to design starts with the design of individual modules such as the security module, establishing their inputs and outputs, and then builds an overall design from these modules.
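A minimal sketch of the top-down idea, using the online banking example from the text: the overall control flow is written first, while each lower-level module starts as a stub to be designed in detail later. All names and placeholder values here are invented for illustration.

```python
# Top-down sketch for a hypothetical online banking function: the overall
# control flow is fixed first; each module is a stub to be designed later.

def authenticate(user_id: str) -> bool:
    """Security/user-access module (detailed design to follow)."""
    return user_id == "demo"          # placeholder rule for the sketch

def fetch_balance(user_id: str) -> float:
    """Data-access module interfacing to the customer database."""
    return 125.50                     # placeholder value

def display(message: str) -> str:
    """Presentation module for capturing and displaying data."""
    return f"Balance: {message}"

def show_balance(user_id: str) -> str:
    """Overall control architecture: designed before the modules above."""
    if not authenticate(user_id):
        return "Access denied"
    return display(f"{fetch_balance(user_id):.2f}")

print(show_balance("demo"))   # → Balance: 125.50
```

A bottom-up designer would instead start by fully specifying modules such as authenticate and fetch_balance, and only then compose them into an overall structure.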
Bottom-up design
The bottom-up approach to design starts with the design of individual modules, establishing their inputs and outputs, and then builds an overall design from these modules.
An aspect of the design which is quite easy to overlook is testing that the design we produce is the right one. Checking the design involves validation and verification.
In validation we will check against the requirements specification and ask ‘Are we building the right product?’ In other words, we test whether the system meets the needs of the end-users identified during analysis such as functions required and speed of response. Validation will occur during testing of the system by the end-users; it highlights the value of prototyping in giving immediate feedback of whether a design is appropriate.
When undertaking verification we will ‘walk through’ the design and ask ‘Are we building the product right?’ Since there are a number of design alternatives, designers need to consult to ensure they are choosing the optimal solution. Verification is a test of the design to ensure that the one chosen is the best available and that it is error-free.
The two questions should be considered throughout the design process and also form the basis for producing a test specification to be used at the implementation stage.
Validation and verification
Validation
This is a test of the design where we check that the design fulfils the requirements of the business users which are defined in the requirements specification.
Verification
This is a test of the design to ensure that the design chosen is the best available and that it is error-free.
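Validation lends itself to executable acceptance checks derived from the requirements specification; verification, being a walkthrough of design alternatives, remains a human review. The sketch below expresses two hypothetical requirements (a correct balance and a made-up 0.5-second response limit) as validation checks:

```python
import time

# Hypothetical requirements from analysis: a balance lookup must return the
# correct value ('are we building the right product?') within 0.5 seconds.
# The function, data and limits are invented for this sketch.

def lookup_balance(account: str) -> float:
    balances = {"ACC-1": 200.0}       # stand-in for the real data store
    return balances[account]

start = time.perf_counter()
result = lookup_balance("ACC-1")
elapsed = time.perf_counter() - start

assert result == 200.0                # functional requirement met
assert elapsed < 0.5                  # performance requirement met
print("validation checks passed")
```

Running such checks against a prototype gives the immediate end-user feedback the text describes; no equivalent script can confirm that the chosen design is the best available, which is why verification walkthroughs are still needed.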
Scalability is the potential of an information system or piece of software or hardware to move from supporting a small number of users to supporting a large number of users without a marked decrease in reliability or performance.
When designing information systems, the design target must always be for the maximum anticipated number of users. Many implementations have failed, or have had to be redesigned at considerable cost, because the system used in the development and test environment with a small number of users does not scale to the live system with many more users.
If the system does not scale, there may be major problems with performance which make the system unusable. Volume or capacity planning (Chapter 12), in which the anticipated workload of the live environment is simulated, can help us foresee problems of scalability.
Scalability
Scalability
The potential of an information system or piece of software or hardware to move from supporting a small number of users to a large number of users without a marked decrease in reliability or performance.
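The kind of capacity planning mentioned above can be approximated even with a simple simulation. The sketch below times a stand-in request handler for one user and then for fifty concurrent users; the workload, user count and threading model are illustrative assumptions only, not a substitute for testing against the live environment.

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Rough capacity-planning sketch: time one simulated request, then the same
# request issued by many concurrent 'users', to see how response time holds up.

def handle_request() -> None:
    time.sleep(0.01)                  # stand-in for real processing work

def measure(users: int) -> float:
    """Elapsed wall-clock time for `users` concurrent requests."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=users) as pool:
        for _ in range(users):
            pool.submit(handle_request)
    return time.perf_counter() - start

single = measure(1)
many = measure(50)
print(f"1 user: {single:.3f}s, 50 users: {many:.3f}s")
```

If the fifty-user figure were, say, fifty times the single-user figure, that would signal that the design does not scale and needs rework before going live.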
Another common approach to design is to consider data modelling and process modelling separately. The design of the data structures required to support the system, such as input and output files or database tables, is considered in relation to information collected at the analysis stage as the entity relationship diagram (ERD) and data requirements. In SSADM a separate stage is identified for data design, which is followed by process design, although the two are often combined.
Process modelling is the design of the different modules of the system, each of which is a process with clearly defined inputs and outputs and a transformation process. (Note that this term is also used as an approach to design in business process re-engineering.) Dataflow diagrams are often used to define system processes.
Data modelling and process modelling
Process modelling
Involves the design of the different modules of the system, each of which is a process with clearly defined inputs and outputs and a transformation process. Dataflow diagrams are often used to define processes in the system.
Data modelling considers how to represent data objects within a system, both logically and physically. The entity relationship diagram is used to model the data and a data dictionary is used to store details about the characteristics of the data, which is sometimes referred to as ‘metadata’.
The processes or program modules which will manipulate these data are designed based on information gathered at the analysis stage in the form of functional requirements and dataflow diagrams. This approach is used, for example, by Curtis (2008). While this is a natural division, there is a growing realisation that for a more efficient design these two aspects cannot be considered in isolation. Object-oriented techniques, which are increasing in popularity, consider the design of process and associated data as unified software objects. These are considered in more detail at the end of this chapter.
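The object-oriented view can be sketched as follows: the data describing an account and the process that manipulates it are designed together as one software object. The class, attribute and method names are invented for the example.

```python
# Illustration of the object-oriented view: data and the processes that
# manipulate it are designed as a single unit rather than in isolation.

class Account:
    def __init__(self, owner: str, balance: float = 0.0):
        self.owner = owner            # data modelled with the object
        self.balance = balance

    def deposit(self, amount: float) -> float:
        """Process designed alongside the data it manipulates."""
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self.balance += amount
        return self.balance

acct = Account("A. Smith")
print(acct.deposit(100.0))  # → 100.0
```

Contrast this with the separate data-design and process-design stages described above, where the table definition and the program module that updates it would be produced by different activities.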
Other elements of design are required by the constraints on the system. To ensure that the system is easy to use we must design the user interface carefully.
To ensure that the system is reliable and secure, these capabilities must be designed into the system. User interface and security design are elements of design that will be considered at both the overall or system design phase and the detailed design phase.
Data modelling
Data modelling involves considering how to represent data objects within a system, both logically and physically. The entity relationship diagram is used to model the data.
In this chapter we will review the following major elements of systems design:
1. Overall design or system design. What are the best architecture and client/server infrastructure? The overall design defines how the system will be broken down into different modules and how the user will navigate between different functions and different views of the data.
2. Detailed design of modules and user interface components. This defines the details of how the system will operate. It will be reviewed by looking at user interface and input/output design.
3. Database design. How to design the most efficient structure using normalisation.
4. User interface design. How to design the interface to make it easy to learn and use. For web-based systems this includes the information architecture.
5. Security design. Measures for restricting access to data and safeguarding data against deletion.
What needs to be designed?
System or outline design
A high-level definition of the different components that make up the architecture of a system and how they interact.
SYSTEM OR OUTLINE DESIGN
System or outline design involves specifying an overall structure or systems architecture for all the different components that will make up the system. It is a high-level overview of the different components that make up the architecture of a system and how they interact. The components include software modules that have a particular function such as a print module, the data they access, and the hardware components that may be part of the system. Hardware will include specifying the characteristics of the client PC and servers, plus any additional hardware such as an image scanner or specialised printer.
Designing the overall architecture involves specification of how the different hardware and software components of the system fit together. To produce this design, a good starting point is to consider the business process definition that will indicate which high-level tasks will be performed using the different components of the system. Flow process charts or process maps such as Figure 11.1 can be used to inform the architectural design directly since they help to identify the different components needed and how they link. Figure 11.2 concentrates on hardware, but also describes location of data and applications.
Systems architecture
The design relationship between software applications, hardware, process and data for an information system.
Figure 11.1 Flow process chart for a workflow processing system
[Figure: a flow process chart for inbound mail handling. Incoming mail awaits sorting and is categorised; a new application is keyed in, otherwise the item is associated with an existing case and its documentation scanned in; tasks are assigned a priority according to date and actioned from a queue; where authorisation or further information is needed, it is requested by phone or in writing; completed items are marked complete and, when the last task for a case is done, a manager checks the case and it is closed. Key: inbound goods, delay, inspection/measurement, process, decision, transportation.]
Process modelling
Process modelling is used to identify the different activities required from a system, as explained in Chapter 10. These functions can be summarised using a flow process chart as shown in Figure 11.1.
The overall architecture description will also include details of the navigation between the main screens or views of data in the application which can be based on this type of diagram.
Screen functions needed in this software are to categorise the type of mail received, associate it with a particular ‘case’ or customer and review items of work in the workflow queue, marking them as complete where appropriate. Table 11.1 summarises what is achieved during the different types of design.
Figure 11.2 System architecture for a workflow processing system
[Figure: a document scanner attached to a scanner control PC (scanning client, image client, network and database connectivity) and processing PCs (workflow client, image client, network and database connectivity, terminal emulation software) connect over an Ethernet LAN to a workflow server holding the workflow and image index database, an optical jukebox storing images, a mainframe, a printer for customer letters, a hub and a tape backup unit.]
Table 11.1 Comparison between the coverage of system and detailed design

Design function | System design | Detailed design
Architecture | Specification of different modules and communication between them; specification of hardware components and software tools | Internal design of modules
User interface | Flow of control between different views of data | Detailed specification of input forms and dialogues
Database | Data modelling of tables | Normalisation
File structure | Main file types and contents | Detailed 'record and field structure'
Security | Define constraints | Design security method
Designing enterprise applications
Management of a business applications infrastructure involves delivering appropriate applications and levels of service to all users of information systems services. The objective of the designer, at the behest of the IS manager, is to deliver access to effective, integrated applications and data that are available across the whole company. Traditionally, businesses have tended to develop applications silos or islands of information, as depicted in Figure 11.3(a) – these correspond to the functional parts of the organisation. Figure 11.3 shows that these silos have three different levels of applications:
1. there may be different technology architectures or hardware used in different functional areas;
2. there may also be different applications and separate databases in different areas;
3. processes or activities followed in the different functional areas may also be different.
Figure 11.3 (a) Fragmented applications infrastructure, (b) integrated applications infrastructure
Source: Adapted from Hasselbring (2000)
[Figure: in (a), the procurement and logistics, finance and marketing functions each have their own business process architecture, application/data architecture and technology architecture, separated by functional barriers; in (b), a single business process architecture, application/data architecture and technology architecture span all three functions, giving functional integration.]
These applications silos are often a result of decentralisation or poorly controlled investment in information systems, with different departmental managers selecting different systems from different vendors. An operational example of the problems this may cause is if a customer phones a B2B company for the status of a bespoke item they have ordered, the person in customer support may have access to their personal details, but not the status of their job which is stored on a separate information system in the manufacturing unit.
To avoid the problems of a fragmented applications infrastructure, companies have been attempting, since the early 1990s, to achieve the more integrated position shown in Figure 11.3(b). Here the technology architecture, applications and data architecture and process architecture are uniform and integrated across the organisation. To achieve this, many companies turned to enterprise systems vendors such as SAP and Oracle. Here, they are effectively using a pre-existing design from the off-the-shelf package, and the design involves selecting appropriate modules and tailoring them for the revised business process. Enterprise systems software is discussed in more detail earlier (in Chapter 6).
A typical IT department is like an old house that has been mended and extended so that the original design and infrastructure have almost disappeared, says Peter Chadha, chief executive of DrPete, an IT strategy consultancy.
Nobody remembers the location of gas pipes and electricity cables or how the plumbing works, making the building difficult to adapt to a new purpose.
‘The ‘house’ is so difficult to update and ill-suited to modern living, it is now easier to put a Portakabin in the garden than continue using it,’ says Mr Chadha.
Similarly, meeting a business's technology needs can often now be better accomplished using additional products and services – such as smartphones, tablet computers, software as a service (SaaS) and outsourcing – rather than upgrading legacy IT systems.
For example, Mr Chadha helped to implement an iPad-based electronic reception logbook for The Office Group, the meetings and events organiser, using Google Apps with Google Scripting. ‘It gave reception a modern feel and meant anyone in the building could instantly know who was in,’ he says.
Business executives have seen how quickly apps can be implemented on their personal mobile devices, and they expect IT departments to be equally responsive to their business needs.
But most applications need to be integrated with existing systems – a difficult, expensive, time-consuming process, as IT departments point out.
Executives often think of the IT department as blocking innovation, says Roop Singh, managing partner at Bangalore-based Wipro Consulting Services.
‘This is a bit unfair, because IT departments have to worry about complexity, security, risk and support,’ says Mr Singh.
‘Moreover, since 2009 they have been forced to focus on minimising costs, keeping the lights on rather than making a difference to the business.’
This showed in a recent UK survey of 1,000 senior IT decision makers, commissioned by KCom, a communications services provider.
Some 72.5 per cent of respondents said they had no plans to invest in IT systems during the coming year, and 26 per cent cited an inability to demonstrate that IT will help meet strategic objectives and provide a return on investment as the reason for holding back.
This lack of focus on business objectives is crucial, says Stephen Pratt, managing partner of worldwide consulting at Infosys in California. ‘It is very common for the business to say it wants something done in two years and for the IT department to say it will take four,’ says Mr Pratt.
‘Most executives say technology infrastructure is limiting their ability to achieve business goals,’ he adds. ‘It ought to be doing the opposite: driving the innovation that pushes the business faster than it’s comfortable with.’
Mr Pratt says IT should be split into two parts, infrastructure and innovation. Infrastructure should provide a basic service, as do the office's air conditioning and coffee machines.
Innovation should focus on ways to help the business achieve its strategic goals. You need two distinct personalities to lead these functions, Mr Pratt says.
Systems management: driving innovation should be the main objective
By Jane Bird
CASE STUDY 11.2
‘People focused on innovation are more likely to look for progressive ways to deploy technology – they are more likely to embrace change than resist it.’
Outsourcing applications, such as enterprise resource planning, finance and supply chain, is a good way to focus on innovation, says Brendan O'Rourke, chief information officer of Telefónica Digital, the telecommunications company, which outsources many applications to IT services consultancy Cognizant.
‘An outsourcer can get on with the management, operation and maintenance of IT, and is best placed to optimise costs,’ says Mr O’Rourke. He involves Cognizant closely in developing applications. ‘This means we can hand over application maintenance to Cognizant at an early stage, and eventually send it offshore.’
Mr Chadha cites applications that make it possible to implement customer relationship systems in ‘days or weeks rather than the months or years it might if organisations implemented it themselves’.
Cloud-based SaaS systems are also easier to try out, says Mr Chadha. This frees time to concentrate on improving processes and training people.
Wipro’s Mr Singh says: ‘IT needs to engage with the business and explain how difficult it is simply keeping the lights on.’
But he notes that IT departments are recruiting more people who understand the business side. Banks, for example, are hiring regulatory experts who can take proactive steps to ensure compliance.
In retail, communications experts are joining IT teams. This helps them not only respond better to business needs but also lets them articulate the benefits and problems of technological developments more clearly. Such approaches will help teams communicate more effectively and so drive the business forward rather than hold it back.
QUESTION

Evaluate the approach to systems management discussed in the case study.

Source: Bird, J. (2012) Systems management: driving innovation should be the main objective. Financial Times, 6 November. © The Financial Times Limited 2012. All Rights Reserved.

The client/server model of computing

Client/server model
This describes a system architecture in which end-user computers access data from more powerful server computers. Processing can be split in various ways between the server and client.

The majority of modern information systems are designed with a client/server architecture. In the client/server model, the clients are typically desktop PCs which give the ‘front-end’ access point to business applications. The clients are connected to a ‘back-end’ server computer via a local- or wide-area network. As explained earlier (in Chapter 5), applications accessed through a web browser across the Internet are also client/server applications. These include e-commerce applications for online purchase and application service provider solutions such as remote e-mail management.

When it was introduced, the client/server model represented a radically new architecture compared with the traditional centralised processing method of a mainframe with character-based ‘dumb terminals’.

Client/server is popular since it provides the opportunity for processing tasks to be shared between one or more servers and the desktop clients. This gives the potential for faster execution, as processing is shared between many clients and the server(s), rather than all occurring on a single server or mainframe. Client/server also makes it easier for end-users to customise their applications. Centralised control of user administration, data security and archiving can still be retained. Alongside these advantages there are also system management problems, which have led to an evolution in client/server architecture from two- to three-tier, as described below. The advantages and disadvantages of client/server are discussed earlier (in Chapter 5).

When designing an information system for the client/server architecture, the designer has to decide how to divide tasks between the server and the client. These tasks include:

■ data storage;
■ query processing;
■ display;
■ application logic including the business rules.
Client/server design generally follows two main approaches: two-tier and three-tier. In two-tier client/server, sometimes referred to as the ‘fat client’ approach, the application running on the PC is a large program containing all the application logic and display code; the client handles the display and local processing, while a separate database server holds the data and processes queries on the back end. In three-tier client/server, the client is mainly used for display, with the application logic and business rules partitioned onto a second-tier server and the database server forming the third tier; here the client is referred to as a ‘thin client’, because the size of the application’s executable program is smaller. It is important to understand the distinction between these, since the two approaches have quite different implications for application performance and scalability. The two-tier model is still widely used, but more recently three-tier client/server has become widespread because of problems with unreliability and lack of scalability in two-tier systems.
Figure 11.4(a) shows a simple two-tier client/server arrangement. In this, a client application directly accesses the server to retrieve information requested by the user, such as a report of ‘aged debtors’ in an accounting system. In this two-tier model, the client handles all application logic such as control flow, the display of dialogues and formatting of views.
In a three-tier client/server model (Figure 11.4(b)), the GUI or ‘thin client’ forms the first tier, with the application and function logic separated out as a second tier and the data source forming the third tier. In this model there may be separate application and database servers, although these could reside on the same machine. Two-tier client/server may be quicker to develop in a RAD project, but it will not be as efficient at run time or as easy to update. By separating the display code and the business application into three tiers, it is much easier to update the application as business rules change (which will happen frequently). The three-tier model also offers better security, since access can be fine-tuned according to the service required.
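To make the tier split concrete, it can be sketched in a few lines of Python (an illustrative sketch only: the function names, the report and the in-memory ‘database’ are invented for this example, and in a real system each tier would run on a separate machine):

```python
# A minimal sketch of the three-tier split (all names invented for
# illustration): the client tier handles display only, the application
# tier holds the business rules, and the data tier answers queries.

DATABASE = {  # data tier: stands in for the database server
    "customers": {1: {"name": "Poole", "days_overdue": 45}},
}

def query_customer(cust_id):
    """Data tier: retrieve a customer record (an SQL query in practice)."""
    return DATABASE["customers"][cust_id]

def aged_debtor_report(cust_id, threshold_days=30):
    """Application tier: business rule - a debtor is 'aged' beyond the threshold."""
    record = query_customer(cust_id)
    return {"name": record["name"], "aged": record["days_overdue"] > threshold_days}

def display_report(report):
    """Client tier: presentation only - formats what the application tier returns."""
    status = "AGED" if report["aged"] else "current"
    return f"{report['name']}: {status}"

print(display_report(aged_debtor_report(1)))  # Poole: AGED
```

Because the business rule (the 30-day threshold) lives only in the middle tier, changing it requires no change to the display code or the data tier, which is the maintainability argument made above.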
Two-tier client/server or fat client
Application running on the PC being a large program containing all the application logic and display code. It retrieves data from a separate database server.
Three-tier client/server or thin client
The client is mainly used for display with application logic and the business rules partitioned on a second- tier server and a third- tier database server.
Figure 11.4 (a) Two-tier and (b) three-tier client/server architecture compared
[In (a), a client GUI application containing the business logic issues SQL queries/retrievals directly against a database server. In (b), a client handling GUI presentation only calls an application server holding the function/logic (via RPCs or object methods), which in turn issues SQL queries/retrievals against the database server.]
Program and module structure

The module and program structure will also be outlined at the system design stage. There are various notations used by programmers to indicate the structure that will be used. An example is the structure chart, which is used in the design methodology JSD (Jackson system development – Jackson, 1983). An example of a structure chart is illustrated in Figure 11.5. A structure chart shows how the software will be broken down into different modules and gives an indication of how they will interact. Here the main control module calls a variety of other modules with different functions. The interaction or exchange of data items between procedures is also shown. For example, the ‘edit customer’ module is passed the name (or customer code) of the customer to edit and, if the user changes the data, a ‘flag’ (True or False) parameter is passed back to the control module, indicating that the data were updated. Similarly, the credit check module is passed the name of the customer and a flag indicates whether the customer is creditworthy or not.

The interactions between modules will normally be defined at this stage rather than at the detailed design stage. For example, there may be a function to produce a customer report of credit history. Here, the function will need to know the customer and the time period for which a report is required. Thus the system design will specify the function with three parameters, as shown in Figure 11.6:

Function: Print_Credit_history
Parameters: Cust_id, Period_start, Period_end
Return value: Print_successful
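As a hedged illustration, the interface specified above can be sketched as a function whose signature mirrors the three parameters and the returned flag (the parameter types and the validation check are assumptions for the example, not part of the original specification):

```python
from datetime import date

def print_credit_history(cust_id, period_start, period_end):
    """Sketch of the Print_Credit_history module: given a customer and a
    reporting period, return a Print_successful flag (True/False)."""
    if period_start > period_end:   # basic check on the parameters (assumed)
        return False                # Print_successful = False
    # ... retrieve the customer's transactions and print the report here ...
    return True                     # Print_successful = True

print_successful = print_credit_history("03574", date(1999, 4, 1), date(1999, 4, 30))
print(print_successful)  # True
```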
Figure 11.5 Example of a program structure chart
[A control program calls modules including ‘Display all customers’, ‘Credit check’, ‘Permit mortgage’, ‘Display customer history’ and ‘Edit customer’. Data-item parameters such as Name and Date range are passed down to the modules, and flag parameters such as ‘Creditworthy?’ and ‘Data updated?’ are passed back to the control program.]
Figure 11.6 Part of a structure chart showing how parameters are passed from a control module to a module to print a credit history
[The control module passes Cust_id (e.g. 03574), Period_start (e.g. 01/04/99) and Period_end (e.g. 30/04/99) to the Print_credit_history module, which passes back the flag Print_successful (e.g. True).]
DETAILED DESIGN (MODULE DESIGN)
Detailed design involves considering how individual modules will function and how information will be transferred between them. For this reason, it is sometimes referred to as module design. A modular design offers the benefit of breaking the system down into different units which will be easier to work on by the team developing the system. It will also be easier to modify modules when changes are required in the future.
Module design includes:
■ how the user interface will function at the level of individual user dialogues;
■ how data will be input and output from the system;
■ how information will be stored by the system using files or a database.
Detailed design is sometimes divided further into external and internal design. The external design refers to how the system will interact with users, while the internal design describes the detailed workings of the modules.
Detailed or module design
Detailed design involves the specification of how an individual component of a system will function in terms of its data input and output, user interface and security.
FOCUS ON… RELATIONAL DATABASE DESIGN AND NORMALISATION
Business users are often involved in the design of relational databases, either in an advisory capacity (specifying what data they should contain) or when building a small personal database, perhaps of customer contacts. For this reason, the terminology used when working with databases and the process of producing a well-designed database are described in some detail.
Relational database terminology was introduced earlier (in Chapter 4), but it is restated here since understanding the terms is important to understanding the design process. In the previous chapter we saw how entity-relationship modelling is used to analyse the conceptual design of a database. In this section we look at the next stage, which is the creation of a logical data model and then a physical database where tables and fields are created and then populated with data in records. The example used is a sales order processing database for a clothing manufacturer, ‘Clothez’, and we illustrate the creation of tables within a database using Microsoft Access.
Databases are used for the management of information and data within organisations. The functions of a database, whether it is an address book on a phone or a corporate database supporting an entire organisation, are to enter, modify, retrieve and report information.
The terms defining the structure of a relational database can be considered as a hierarchy or tree structure. A single database is typically made up of several tables. Each table contains many records. Each record contains several fields. These terms can be related to the Clothez example as follows:
1. Database – all information for one business application (normally made up of many tables). Example: sales order database.
2. Table – a collection of records for a similar entity. Example: all customers of the company within the sales order database. Other tables in the database are product and order.
3. Record – information relating to a single instance of an entity (comprising many fields). Example: single customer such as Poole.
4. Field – an attribute of the entity. Example: customer name or address for a particular customer such as Poole.
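The hierarchy can be made concrete using Python’s built-in sqlite3 module (a sketch only: the field names follow the Clothez example, while the SQL types and the in-memory database are illustrative choices):

```python
import sqlite3

# One database (sales order processing) containing a table (customer),
# which contains records (e.g. Poole), each made up of several fields.
conn = sqlite3.connect(":memory:")           # the database
conn.execute("""
    CREATE TABLE customer (                  -- a table: records for one entity
        cust_id    INTEGER PRIMARY KEY,      -- key field
        first_name TEXT,
        last_name  TEXT,
        address    TEXT
    )""")
# A record: a single instance of the entity, comprising several fields
conn.execute("INSERT INTO customer VALUES (1, 'Mary', 'Poole', '1, Kedleston Rd')")

row = conn.execute(
    "SELECT last_name, address FROM customer WHERE cust_id = 1").fetchone()
print(row)  # ('Poole', '1, Kedleston Rd')
```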
Databases – fundamental terms
Database
All information for one business application (normally made up of many tables).
Table
Collection of records for a similar entity.
Record
Information relating to a single entity (comprising many fields).
Field
An attribute of the entity.
This structure is represented as a diagram in Figure 11.7 for the Clothez database. It can be seen that the sales order processing database for Clothez could be designed and implemented as three tables: customer, order and product. Each table such as customer is made up of several records for different customers and then each record is divided down further into fields or attributes which describe the characteristics of the customers such as name and address. Note that this example database is simplified and this structure only permits one product to be ordered when each order is placed. The reason for this restriction is that the database has not been fully normalised by breaking down the order table into separate order-header and order-line tables which then allow more than one product to be placed per order. The normalisation process is described in a later section.
If the data were entered into a database such as Microsoft Access, the tables and their records and fields would appear as in Figure 11.8. All three tables are shown. Fields and records for the product table are shown in Figure 11.9.
Figure 11.7 Diagram illustrating the tree-like structure used to structure data within a relational database. This example refers to the Clothez database; the fields are only shown for the first record in each table
[The sales order processing database contains three tables. The customer table holds records for Poole, Smith, Legg, Judd and Brown; the first record shows the fields Cust_id, First name (Mary), Last name (Poole), Address (1, Kedleston Rd) and Num (01332 622 222). The order table holds records 1–5; the first record shows the fields Order_id, Cust_id, Date (1/3/99), Quantity (3) and Order fulfilled (Yes). The product table holds records for Jeans, Shirt, Suit, Wedding dress and Denim Jeans; the first record shows the fields Product_id, Description (Jeans) and Cost (£45).]
A further term that needs to be introduced is the key field. This is the field by which each record is referred to, such as customer number. The key field provides a unique code such as ‘001’ or ‘993AXR’, comprising numbers, letters or both. It is required to refer to each record and helps distinguish between different customers (perhaps three different customers called Smith). Key fields are also used to link different tables, as explained in the next section.
Figure 11.8 Clothez database in Microsoft Access
Source: Screenshot frame reprinted by permission from Microsoft Corporation

Figure 11.9 Product table showing records and fields
Source: Screenshot frame reprinted by permission from Microsoft Corporation
Key field
This is a field with a unique code for each record. It is used to refer to each record and link different tables.
The term relational is used to describe the way the different tables in a database are linked to one another. Key fields are vital to this. In recognition of the importance of key fields, Microsoft uses the key as the logo or brand icon for the Access database.
What makes an Access database relational?
In the Clothez databases, the key fields are: Customer_id, Product_id and Order_id (id is short for identifier; reference (ref) or code number (num) could also be used for these field names). These fields are used to relate the three tables, as shown in Figure 11.10.
Figure 11.10 shows how the highlighted record in the order table (Order_id = 4) uses key fields to refer to the customer, Mary Poole, who has placed the order (Cust_id = 1) and the product (Shirt) she has ordered (Prod_id = 2).
To understand how the key fields are used to link different tables, two different types of fields need to be distinguished: primary and foreign keys.
Primary keys provide a unique identifier for each record in a table and refer directly to the entity represented in the table. For example, in the product table, the primary key is Prod_id. There is only one primary key per table, as follows:

Customer table: Customer_id
Order table: Order_id
Product table: Prod_id
Foreign keys are used to link tables by referring to the primary key of another table. For example, in the order table, the foreign key Cust_id is used to indicate which customer has placed the order. The order table also contains Prod_id as a foreign key, but neither of the other tables has foreign keys. There may be zero, one or more foreign key fields per table.
Figure 11.11 shows how the primary key fields in the customer and product tables are used to link to their corresponding foreign keys (Cust_id and Prod_id) in the order table when constructing a query in Microsoft Access. This is a summary query which summarises the details of orders by taking data from each table. The result of the query is shown in Figure 11.12. The highlighted record in Figure 11.12 is the example relationship which was used to illustrate the links between tables in Figure 11.10.
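A similar summary query can be expressed in SQL using Python’s sqlite3 module (a simplified sketch of the Clothez tables, not the Access query itself; only enough fields and records are included to show the key-based joins):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customer (cust_id INTEGER PRIMARY KEY, last_name TEXT);
    CREATE TABLE product  (prod_id INTEGER PRIMARY KEY, description TEXT);
    -- "order" is a reserved word in SQL, so the table name is quoted;
    -- the table carries two foreign keys linking to the other tables
    CREATE TABLE "order" (
        order_id INTEGER PRIMARY KEY,
        cust_id  INTEGER REFERENCES customer(cust_id),
        prod_id  INTEGER REFERENCES product(prod_id)
    );
    INSERT INTO customer VALUES (1, 'Poole');
    INSERT INTO product  VALUES (2, 'Shirt');
    INSERT INTO "order"  VALUES (4, 1, 2);
""")

# The summary query joins each order to its customer and product by
# matching the foreign keys to the primary keys of the other tables
summary = conn.execute("""
    SELECT o.order_id, c.last_name, p.description
    FROM "order" o
    JOIN customer c ON o.cust_id = c.cust_id
    JOIN product  p ON o.prod_id = p.prod_id
""").fetchall()
print(summary)  # [(4, 'Poole', 'Shirt')]
```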
Primary key fields
These fields are used to uniquely identify each record in a table and link to similar foreign key fields (usually of the same name) in other tables.
Figure 11.10 Clothez database in Microsoft Access, showing how the Order table is related to Customer and Product
Source: Screenshot frame reprinted by permission from Microsoft Corporation
Figure 11.11 Query design screen for the summary query in the Clothez database
Source: Screenshot frame reprinted by permission from Microsoft Corporation
Figure 11.12 Summary query for orders placed from Clothez database
Source: Screenshot frame reprinted by permission from Microsoft Corporation
Rules for identifying primary and foreign keys
1. Primary keys
■ The primary key provides a unique identifier for each record.
■ There is usually one primary key per table (unless a compound key of several fields is used).
■ The name of the field is usually the name of the entity or table followed by code, reference, identifier or id.
2. Foreign key
■ The foreign key always links to a primary key in another table(s).
■ There may be 0, 1 or several foreign keys in each table.
A relatively straightforward aspect of database design is deciding on the field definitions. Fields need to be defined in terms of:
■ field name;
■ field data type;
■ field data size;
■ field validation rules.
These are defined when the database is created, since storage space for each field is pre-allocated in a database. During analysis and design, the field characteristics are managed in a data dictionary, often referred to as the metadata or ‘data about data’, particularly with reference to data warehouses (Chapter 4).
Let us now consider each of the characteristics of a field in more detail:
1. Field name. Field names should clearly indicate the content of the field. It is conventional in some databases to use underscores rather than spaces to define the name, since some databases may not recognise spaces (e.g. Order_fulfilled rather than Order fulfilled). In some databases the number of characters is restricted to eight, but this is now rare.
2. Field data type. Data types define whether the field is a number, a word, a date or a specialised data type. The main data types used in a database such as Microsoft Access are:
■ Number. Whole number or decimal. (Most databases recognise a range of numeric data types such as integer, real, double, byte, etc.)
■ Currency. This data type is not supported for all databases.
■ Text. Often referred to as character, string or alphanumeric. Phone numbers are of this data type, since they may need to include spaces or brackets for the area code.
■ Date. Should include four digits for the year! Can also include time.
■ Yes/No. Referred to as Boolean or true/false in other databases.
Key fields can be defined as either number or text.
3. Field data size. Field data sizes need to be pre-allocated in many databases. This is to help minimise the space requirements. Field size is defined in terms of the number of digits or characters which the designer thinks is required. For example, a user may define 20 characters for a first name and 40 characters for an address. It is better to overestimate than to risk having to modify the field later.
4. Field validation rule. Validation rules are necessary to check whether the user has entered valid data. Basic types of validation are:
■ Is field essential? For example, postcodes are usually mandatory to help identify a customer’s address.
■ Is field format correct? For example, postcodes or ZIP codes usually follow a set format.
■ Is value within range? For example, an applicant for a mortgage would have to be more than 18 years of age.
■ Does field match a restricted list? An entry for marital status might need to be ‘married’, ‘divorced’ or ‘single’. Restricted list choices can be defined in separate ‘lookup tables’.
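The four types of validation above can be sketched as simple checks in Python (illustrative rules only: the record layout, error messages and UK-style postcode pattern are assumptions for the example):

```python
import re

MARITAL_STATUSES = {"married", "divorced", "single"}   # a restricted 'lookup' list

def validate(record):
    """Apply the four basic validation types to a hypothetical input record."""
    errors = []
    if not record.get("postcode"):                         # 1. is field essential?
        errors.append("postcode is mandatory")
    elif not re.fullmatch(r"[A-Z]{1,2}\d{1,2}[A-Z]? \d[A-Z]{2}",
                          record["postcode"]):             # 2. is format correct? (UK-style)
        errors.append("postcode format invalid")
    if not 18 <= record.get("age", 0) <= 120:              # 3. is value within range?
        errors.append("age must be between 18 and 120")
    if record.get("marital_status") not in MARITAL_STATUSES:   # 4. restricted list?
        errors.append("marital status not recognised")
    return errors

print(validate({"postcode": "DE22 1GB", "age": 34, "marital_status": "married"}))  # []
```

In a real database these rules would normally be declared in the schema (mandatory fields, check constraints, lookup tables) rather than coded by hand.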
Defining field data types and sizes
Data dictionary
A repository that is used to store the details of the entities of the database. It will define tables, relations and field details which are sometimes referred to as metadata or ‘data about data’.
To maintain data quality, validation is an important, but sometimes neglected, aspect of detailed design. It is covered in more detail in the section on input design below.
Table 11.2 shows how the field definitions for a table can be summarised. Note that setting the key fields to a field size of six allows a maximum number of customers of 999,999.
What is normalisation?

Normalisation is a design activity that is used to optimise the logical storage of data within a database. It involves simplification of entities and removal of duplication of data.

It is one of the most important activities that occur during database design. The main purpose of data normalisation is to group data items together into database structures of tables and records which are simple to understand, accommodate change, contain a minimum of redundant data and are free of insertion, deletion and update anomalies. These anomalies can occur when a database is modified, resulting in erroneous and/or duplicate data; they are explained in the next section. Since this activity should be conducted whenever a database is designed, and since databases are so widely used in business applications, we consider the process of normalisation in some detail.

Normalisation is essentially a simplifying process that takes complex ‘user views’ of data (such as end-user, customer and supplier views) and converts them into a well-structured logical representation of the data.

Normalisation has its origins in the relational data model developed by Dr E.F. Codd from 1970 onwards and is based on the mathematics of set theory. In this section we present a brief, straightforward explanation of the steps involved in normalising data, which can be applied to simple and complex data structures alike. Normalisation proceeds through a series of stages which convert unnormalised data into normalised data; the intermediate stages are referred to as first, second, third and fourth normal forms.
Normalisation
This design activity is a procedure that is used to optimise the logical storage of data within a database. It involves simplification of entities and minimisation of duplication of data.
Table 11.2 Definition of field details for the order table in the Clothez database (with fields added to show range)
Field name Field type Field size Validation rule Key field
Order_id Number 6 Mandatory Primary
Cust_id Number 6 Mandatory Foreign
Prod_id Number 6 Mandatory Foreign
Date_placed Date 10 Mandatory, must be valid date
Order_fulfilled Yes/No 3 Restricted, must be Yes/No
Special_instructions Text 120 Not mandatory
Total_order_value Currency 10 Not mandatory
Some definitions

Before commencing the steps of normalisation, it is worth providing some key definitions in order to simplify the flow of the following sections. These definitions are summarised in Table 11.3.
Table 11.3 Summary of terms used to describe databases and normalisation
Term Definition
Normalisation The process of grouping attributes into well-structured relations between records linked with those in other tables
Table Used to store multiple records of different instances of the same type of entity such as customer or employee
Relation A named, two-dimensional table of data. An equivalent term for ‘table’ used in normalisation
Attribute The smallest named unit in a database; other names include ‘data item’ and ‘field’
Update anomaly The inability to change a single occurrence of a data item in a relation without having to change others in order to maintain data
Insertion anomaly The inability to insert a new occurrence (record) into a relation without having to insert one into another relation first
Deletion anomaly The inability to delete some information from a relation without also losing some other information that might be required
Functional dependency A functional dependency is a relationship between two attributes and concerns determining which attributes are dependent on which other attributes: ‘attribute B is fully functionally dependent on attribute A if, at any given point in time, the value of A determines the value of B’ – this can be diagrammed as A ➝ B
Determinant An attribute whose value determines the value of another attribute
Primary key An attribute or group of attributes that uniquely identifies other non-key attributes in a single occurrence of a relation
Foreign key An attribute or group of attributes that can be used to link different tables; the foreign key will link to the primary key in another table
Composite key A key made up of more than one key within a relation
Candidate key A candidate key is a determinant that can be used for a relation; a relation may have one or more determinants; determinants can be either single attributes or a composite key
Unnormalised data

Unnormalised data are characterised by having one or more repeating groups of attributes. Many user views of data contain repeating groups. Consider a customer order form for the Clothez company (Figure 11.13): there might be such information as customer name, customer address and order date recorded at the top of the form; there might also be a section in the main body of the form that allows multiple items to be ordered.

It is possible to represent the user view described above in diagrammatic form which is equivalent to a physical database table. Note that the example in Figure 11.14 uses a subset of the information shown in the order form example.

The possibility of entering multiple lines into a single order form is clearly a repeating group, i.e. order no. is being used to identify multiple order lines within the view and is therefore not a unique determinant of each order line and its details.

It might also be argued that address represents a repeating group, because there are two address lines. However, in practice a set number of address lines would be given a unique data name for each line and could be identified by a customer number. (Address is an example of a non-repeating ‘data aggregate’, whereas the line details are an example of a repeating data aggregate.)
By constructing such a diagram, it becomes much easier to identify repeating groups of data and thus pave the way to progressing to first normal form (1NF).
Figure 11.13 Customer order form for the Clothez company
[The order form shows: Name: Mary Poole; Address: 1 Kedleston Road, Derby; Post code: DE22 1GB; Order date: 5/3/99; Tel no: 01332 622 222; Order no: 4; Cust no: 1; and a single order line with Line no 1, Product no 2, Product description Shirt, Quantity 1, Price £12.00.]
Figure 11.14 Repeating groups for the Clothez database
[A single unnormalised record containing the fields Cust name, Cust no, Cust addr, Tel no, Order date and Order no, followed by a repeating group of the fields Prod no, Prod des, Prod qty and Price.]
At this stage it is not obvious why repeating groups of data are a bad thing! If Figure 11.14 is transformed into a table, however, updating it could result in errors or inconsistencies. Each of the three different types of anomalies is now explained in turn with reference to Table 11.4.
Insertion/update/deletion anomalies

Insertion anomaly

If it were desired to enter a new customer into the table, it would not be possible without having an order to enter at the same time.
Table 11.4 Table with example data for the structure shown in Figure 11.14

Customer no.  Customer name  Customer address  Tel no.  Order date  Order no.  Product no.  Product des  Product qty  Price
1             Poole          1, Ked            01332    5/03/99     4          2            Shirt        1            12
2             Smith          2, The            01773    2/03/99     6          5            Denim        3            60
3             Legg           3, The            01929    2/03/99     2          4            Wedding      2            199
3             Poole          1, Ked            01332    3/03/99     5          3            Suit         1            115
Insertion anomaly
It is not possible to insert a new occurrence record into a relation (table) without having to also insert one into another relation first.
Update anomaly
An update anomaly indicates that it is not possible to change a single occurrence of a data item (a field) in a relation (table) without having to make changes in other tables in order to maintain the correctness of data.
If a customer such as ‘Poole’ had several orders in the table and that customer moved to a new address, all the entries in the table where that customer appeared would have to be updated if inconsistencies were not to appear.
Deletion anomaly
A deletion anomaly indicates it is not possible to delete a record from a relation without also losing some other information which might still be required.
If a customer such as ‘Smith’ had only one order in the table and that table entry were deleted, information about the customer would also be deleted.
The way to get round some of these problems is by normalising the data. Stage one of this process is the removal of repeating groups of data, i.e. proceeding to first normal form (1NF).
Update anomaly
It is not possible to change a single occurrence of a data item (a field) in a relation (table) without having to change others in order to maintain the correctness of data.
Deletion anomaly
It is not possible to delete a record from a relation without also losing some other information which might still be required.
First normal form (1NF)
Transforming unnormalised data into its first normal form state involves the removal of repeating groups of data.
In the example above, the repeating group comprises product number, product quantity and price. Removing these attributes into a separate table will not suffice, however. For example, how could each entry in the newly created table be related to the order to which it is attached? The answer lies in including a linking attribute (also known as a ‘foreign key’, as described earlier in the chapter) which is present in both the modified table and the new table. In this case, a sensible attribute to use would be order number. The first step in normalisation has thus resulted in the transformation of one table into two new ones. The two new tables are shown in Figure 11.15. The example shows the relationship between fields at the top and example records below.
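The step can also be sketched in code: each unnormalised row is split into a customer/order record and one order/product record per order line, with the order number carried into the new relation as the linking attribute (a Python sketch using data from Table 11.4; the dictionary representation is an illustrative stand-in for database tables):

```python
# Unnormalised rows: customer/order details plus a repeating group of
# order-line details (data from Table 11.4, abbreviated to two orders).
unnormalised = [
    {"cust_no": 1, "cust_name": "Poole", "order_no": 4,
     "lines": [{"prod_no": 2, "prod_des": "Shirt", "qty": 1, "price": 12}]},
    {"cust_no": 2, "cust_name": "Smith", "order_no": 6,
     "lines": [{"prod_no": 5, "prod_des": "Denim", "qty": 3, "price": 60}]},
]

def to_first_normal_form(rows):
    """Split out the repeating group into its own relation, carrying
    order_no into it as the linking attribute (foreign key)."""
    customer_order, order_product = [], []
    for row in rows:
        customer_order.append({k: v for k, v in row.items() if k != "lines"})
        for line in row["lines"]:                 # one record per order line
            order_product.append({"order_no": row["order_no"], **line})
    return customer_order, order_product

customer_order, order_product = to_first_normal_form(unnormalised)
print(order_product[0]["order_no"], order_product[0]["prod_des"])  # 4 Shirt
```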
Removing insertion/update/deletion anomalies
Even though repeating groups have been removed by splitting the unnormalised data into two tables (relations), anomalies of all three types still exist.
Insert anomaly
■ In the customer/order relation, an order cannot be entered without also entering the customer’s name and address details, even though they may already exist on another order; a customer cannot be added if there is no order to be placed.
■ In the order/product relation, an item cannot be added without also adding an order for that item.
Update anomaly
■ In the customer/order relation, a customer’s name and address details cannot be amended without needing to amend all occurrences (where the customer has more than one order).
■ In the order/product relation, an item description could appear on many order lines for many different customers – if the description of the item were to change, all occurrences where that item appeared would have to be changed if database inconsistencies were not to appear.

M11_BOCI6455_05_SE_C11.indd 414 30/09/14 7:12 AM
415 Chapter 11 SYSTEMS DESIGN
Deletion anomaly
■ In the customer/order relation, an order cannot be deleted without also deleting the customer’s details.
■ In the order/product relation, an order line cannot be deleted without also deleting the item number and description.
Figure 11.15 The revised table structure and example data for two tables

Customer/order relation – fields: Cust name, Cust no, Cust addr, Tel no, Order date, Order no (primary key). Example records:
Poole | 1 | Ked | 01332 | 5/03/99 | 4
Smith | 2 | The | 01773 | 2/03/99 | 6
Legg | 3 | The | 01929 | 2/03/99 | 2
Poole | 1 | Ked | 01332 | 3/03/99 | 5

Order/product relation – fields: Order no (foreign key), Prod no, Prod des, Prod qty, Price. Example records include Shirt (prod no 24, qty 1, price 12), Denim (56, qty 3, price 60), Weddin (42, qty 2, price 199) and Suit (35, qty 1, price 115).
Activity 11.1 Identification and removal of insertion, deletion and update anomalies

This activity shows a prototype database that has been produced by an employee of a toy manufacturer relating to its customers and sales activities. The designer, a business user, is not aware of the need for normalisation and has stored all the data in a single table. This has resulted in some fields, such as customer number and customer address, repeating unnecessarily.
Customer no. | Customer name | Customer address | Order no. | Product code | Product description | Quantity ordered | Price per item | Total cost | Order date | Salesperson no.
100 | Fred's Toys | 7 High Street | 10001 | 324 | Action Ma | 3 | 13.46 | 40.38 | 7/10/99 | 007
100 | Fred's Toys | 7 High Street | 10001 | 567 | Silly Dog | 6 | 5.15 | 30.90 | 7/10/99 | 007
100 | Fred's Toys | 7 High Street | 10001 | 425 | Slimy Hand | 12 | 1.39 | 16.68 | 7/10/99 | 007
100 | Fred's Toys | 7 High Street | 10001 | 869 | Kiddy Doh | 4 | 0.68 | 2.72 | 7/10/99 | 007
200 | Super Toys | 25 West Mall | 13001 | 869 | Kiddy Doh | 12 | 0.68 | 8.16 | 7/17/99 | 021
200 | Super Toys | 25 West Mall | 13001 | 637 | Risky | 3 | 17.42 | 52.26 | 7/17/99 | 021
200 | Super Toys | 25 West Mall | 13001 | 567 | Silly Dog | 2 | 32.76 | 43.52 | 7/17/99 | 021
300 | Cheapo Toys | 61 The Arcade | 23201 | 751 | Diplomat | 24 | 5.15 | 123.60 | 6/21/99 | 007
QUESTIONS
1. Identify an insertion anomaly which might cause a problem when adding a new product to the range.
2. Identify two deletion anomalies which would occur if Cheapo Toys cancelled its order and a record was removed.
3. Identify an update anomaly if the product Silly Dog was renamed Fancy Dog.
4. How could the table be split up to remove the anomalies? Define the fields which would be placed in each table and define the foreign keys which would be used to link the tables.
It is anomalies of this kind which indicate that the normalisation process needs to be taken a step further – that is, we must now proceed to second normal form (2NF).

Second normal form (2NF)
Second normal form (2NF) states that ‘each attribute in a record (relation) must be functionally dependent on the whole key of that record’. To continue the normalisation process to second normal form, it is necessary to explore further some of the terms defined in the introductory section.

Functional dependencies
Within each of the relations produced above, a set of functional dependencies exists. These dependencies will be governed by the relationships that exist between different data items, which in turn will depend on the ‘business rules’, i.e. the purposes for which data are held and how they are used. Once the functional dependencies have been established, it is then possible to select a candidate key for the relation.

Candidate keys
The process of analysing the functional dependencies within a relation will reveal one or more possible candidate keys – a candidate key is the minimum number of determinants (key fields) which uniquely determines all the non-key attributes. Consider the following record:
An example
Consider the following record. Note that this example is different from that given in first normal form, since it illustrates the principles better.
The functional dependencies are as follows:
Part no and Supplier no → Price
Supplier no → Supplier name
Supplier no → Supplier details
A possible candidate key might be thought to be supplier number. However, supplier number alone cannot be a determinant of price, since a supplier may supply many items.
Similarly, part number alone cannot be a determinant of price, because a part may be supplied by many different suppliers at different prices.
The candidate key is, therefore, a composite key comprising part number and supplier number.
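A functional dependency of this kind can be checked mechanically against sample data. The sketch below (in Python; the attribute names and values are illustrative, not taken from the chapter) tests whether one set of attributes determines another:

```python
def functionally_determines(rows, lhs, rhs):
    """Return True if, in this sample of rows (dicts), every distinct
    combination of the lhs attributes maps to exactly one rhs value."""
    seen = {}
    for row in rows:
        key = tuple(row[a] for a in lhs)
        if key in seen and seen[key] != row[rhs]:
            return False  # same determinant, two different values
        seen[key] = row[rhs]
    return True

# Illustrative part/supplier rows (values are made up)
rows = [
    {"part_no": 1, "supplier_no": 10, "supplier_name": "Acme", "price": 5.0},
    {"part_no": 1, "supplier_no": 20, "supplier_name": "Bolt", "price": 6.0},
    {"part_no": 2, "supplier_no": 10, "supplier_name": "Acme", "price": 9.0},
]

# Supplier no alone determines supplier name...
print(functionally_determines(rows, ["supplier_no"], "supplier_name"))     # True
# ...but not price: the same supplier sells two parts at different prices.
print(functionally_determines(rows, ["supplier_no"], "price"))             # False
# The composite key (part no, supplier no) does determine price.
print(functionally_determines(rows, ["part_no", "supplier_no"], "price"))  # True
```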
We can express this more clearly by employing a dependency diagram (Figure 11.16). Two additional properties relating to candidate keys can now be introduced:
1. For every record occurrence, the key must uniquely identify the relation.
2. No data item in the key can be discarded without destroying the property of unique identification.
The dependency diagram in Figure 11.16 indicates a number of problems:
■ If supplier number is discarded, it will no longer be possible to identify the remaining attributes uniquely, even though part number remains.
■ Details of a supplier cannot be added until there is a part to supply; if a supplier does not supply a part, there is no key.
■ If supplier details are to be updated, all records which contain that supplier as part of the key must be accessed – i.e. there are redundant data.
This situation is known as a partial key dependency and is resolved by splitting the record into two or more smaller records (Figure 11.17).
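Under these assumptions, the split that removes the partial key dependency can be illustrated as follows; the Python structures and sample values are invented for the sketch:

```python
# Unnormalised rows keyed by (part_no, supplier_no); the supplier details
# depend only on supplier_no - a partial key dependency.
rows = [
    {"part_no": 1, "supplier_no": 10, "supplier_name": "Acme",
     "supplier_details": "Derby", "price": 5.0},
    {"part_no": 2, "supplier_no": 10, "supplier_name": "Acme",
     "supplier_details": "Derby", "price": 9.0},
    {"part_no": 1, "supplier_no": 20, "supplier_name": "Bolt",
     "supplier_details": "Leeds", "price": 6.0},
]

# Split into a supplier relation (key: supplier_no) ...
suppliers = {r["supplier_no"]: (r["supplier_name"], r["supplier_details"])
             for r in rows}
# ... and a part/supplier relation (key: part_no + supplier_no).
prices = {(r["part_no"], r["supplier_no"]): r["price"] for r in rows}

print(suppliers)  # each supplier is now stored exactly once
print(prices)
```

A supplier can now be added before it supplies any part, and its details can be updated in a single place.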
Figure 11.16 Example of a dependency diagram for supplier example
Record: Part no, Supplier no, Supplier name, Supplier details, Price

Figure 11.17 Revised dependency diagram for supplier example
Supplier record: Supplier no, Supplier name, Supplier details
Part/supplier record: Supplier no, Part no, Price
A record is, therefore, in at least second normal form when any partial key dependencies have been removed.
Removing insertion/update/deletion anomalies
Consider the record structure shown in Figure 11.18. If it is assumed that an employee only works on one project at a time, then employee number is a suitable candidate key, in that all other attributes can reasonably be said to be fully functionally dependent on it.
Note: the record is already in second normal form because there is only one key attribute (therefore partial key dependencies cannot exist). However, some problems still exist:
■ Insertion anomaly: before any employees are recruited for a project, the completion date for the project cannot be recorded because there is no employee record.
■ Update anomaly: if a project completion date is changed, it will be necessary to search all employee records and change those where an employee works on that project.
■ Deletion anomaly: if all employees are deleted for a project, all records containing a project completion date would be deleted also.
To resolve these anomalies, a record in second normal form must be converted into a number of third normal form records.
Figure 11.18 Example of a structure diagram – employee details
Record: Employee no, Employee name, Salary, Project no, Completion date
Transitive dependency
A data item that is not a key (or part of a key) but which itself identifies other data items is a transitive dependency.

Third normal form (3NF)
A record is in third normal form if each non-key attribute ‘depends on the key, the whole key and nothing but the key’.

An example
Consider the previous example. To convert the record into two third normal form records, any transitive dependencies must be removed. When this is done the result is the two records in Figure 11.19.
Figure 11.19 Dependency diagram for employee example and revised structure
Original record: Employee no, Employee name, Salary, Project no, Completion date
Employee record: Employee no, Employee name, Salary, Project no
Project record: Project no, Completion date
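The removal of the transitive dependency can be sketched in the same way. In this hypothetical Python example (names and values invented for the illustration), the completion date moves out of the employee records into a project relation keyed by project number:

```python
# 2NF employee records: the project completion date is transitively
# dependent on project_no, not on the key employee_no.
employees_2nf = [
    {"employee_no": 1, "employee_name": "Jones", "salary": 20000,
     "project_no": "P1", "completion_date": "2015-01-31"},
    {"employee_no": 2, "employee_name": "Khan", "salary": 22000,
     "project_no": "P1", "completion_date": "2015-01-31"},
    {"employee_no": 3, "employee_name": "Lee", "salary": 21000,
     "project_no": "P2", "completion_date": "2015-06-30"},
]

# 3NF: the employee relation keeps project_no as a foreign key...
employees = [{k: r[k] for k in ("employee_no", "employee_name",
                                "salary", "project_no")}
             for r in employees_2nf]
# ...and the completion date moves to a project relation keyed by project_no.
projects = {r["project_no"]: r["completion_date"] for r in employees_2nf}

# A project's date now exists (and can be updated) in exactly one place,
# removing the update anomaly described above.
projects["P1"] = "2015-02-28"
print(projects)
```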
Removing insertion/update/deletion anomalies
If a record has only one candidate key and both partial key and transitive dependencies have been removed, then no insertion, update or deletion anomalies should result.
However, if a record has more than one candidate key, problems can still arise. In this situation we can take the normalisation process still further.
Fourth normal form (4NF) and fifth normal form (5NF)
Further normalisation may be necessary for some applications. In these cases normalisation can proceed to the fourth and fifth normal forms, which are described in Hoffer et al. (2013): ‘In 4NF multi-valued dependencies are removed. A multi-valued dependency exists when there are at least three attributes in a relation and for each value of A there is a well-defined set of values of B and a well-defined set of values of C. However, the set of values of B is independent of set C and vice versa.’
In 5NF it is necessary to account for the potential of decomposing some relations from an earlier stage of normalisation into more than two relations. In most practical applications, decomposition to 3NF gives acceptable database performance and is often easier to design and maintain.
Other significant database design issues
As well as the logical design of the database, there are aspects of physical database design that should be taken into account. These are specialised functions performed by a database administrator (DBA). A company which does not employ a specialist risks a poorly performing system or, worse still, loss or corruption of data. These design and database implementation tasks include:
1. Design for optimal database performance. Use of specialist techniques such as indexes or stored procedures will accelerate the display of common user views such as a list of all customer orders. Queries can also be optimised, but this is mainly performed automatically by database engines such as Oracle, Microsoft SQL Server or Informix. To verify that the design is good, volume testing is essential to ensure that the system can cope with the number of transactions that will occur.
2. Designing for multi-user access. When defining a new system, it is important to consider what happens when two users want to access the same data, such as the same customer record. If access to records is unrestricted, anomalous data will appear in the database if two users save data about the same customer at a similar time. Since simultaneous access to the same record will not be frequent, the best method for dealing with it is to implement record locking. Here, the first user to access a record will cause the database to restrict subsequent users to read-only access to the record rather than read–write. Subsequent users should be informed that a lock is in place and access is read-only.
Activity 11.2 Database design exercise using the ABC case study

This activity builds on the ABC case study from Chapter 10. It is not necessary to have completed the Chapter 10 exercise to be able to undertake this one. You should use the extract in Chapter 10 describing ABC, and in particular the paper forms of the existing system, to identify which fields are required in the database.
QUESTIONS
1. Either:
(a) Use normalisation to third normal form to identify tables and fields for an ABC database; or
(b) Assume the following entities for the ABC database:
■ customer details;
■ salesperson details;
■ sales order header details;
■ sales order line details;
■ item details.
2. For each table in the database, define details of:
■ table names;
■ primary and foreign key fields for each table;
■ name of each field;
■ data type of each field;
■ size of each field;
■ any validation rules which may apply to each field (e.g. a limit on maximum price or quantity etc.).
You may find it most efficient to summarise the database definition using a table (in your word processor).
3. Planning for failed transactions. Recovery methods can be specified in the design for how to deal with failed transactions which may occur when there is a software bug or power interruption. Databases contain the facility to ‘roll back’ to the situation before a failure occurred.
4. Referential integrity. The database must be designed so that when records in one table are deleted, this does not adversely affect other tables. Impact should be minimal if normalisation has occurred. Sometimes it is necessary to perform a ‘cascading delete’, which means deleting related records in linked tables.
5. Design to safeguard against media, hardware or power failure. A backup strategy should be designed to ensure that minimal disruption occurs if the database server fails. The main design decision is whether a point-in-time backup is required or whether restoring to the previous day’s data will be sufficient. Frequently, a point-in-time backup will be required. Of course, a backup strategy is not much use if it cannot be used to restore the data, so backup and recovery must be well tested. To reduce the likelihood of having to fall back on a backup, using a fault-tolerant server is important. Specifying a server with an uninterruptible power supply, disk mirroring or RAID level 2 is essential for any corporate system. The frequency of archiving will also be specified.
6. Replication. Duplication and distribution of data to servers at different company locations and for mobile users is supported to different degrees by different database vendors.
7. Database sizing. The database administrator will size the database and perform capacity planning to ensure that sufficient space is available on the server for the system to remain functional.
8. Data migration. Data migration will occur at the system build phase, but it must be planned for at the design stage. This will involve an assessment of the different data sources which will be used to populate the database.
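Points 3 and 4 – recovery from failed transactions and referential integrity – can be demonstrated with SQLite’s transaction support. This is a minimal sketch; the customer and orders tables and their contents are invented for the illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when enabled
conn.executescript("""
CREATE TABLE customer (cust_no INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE orders (
    order_no INTEGER PRIMARY KEY,
    cust_no  INTEGER REFERENCES customer(cust_no) ON DELETE CASCADE
);
INSERT INTO customer VALUES (1, 'Poole');
INSERT INTO orders VALUES (4, 1), (5, 1);
""")

# Rolling back a failed transaction restores the pre-failure state.
try:
    with conn:  # commits on success, rolls back automatically on error
        conn.execute("INSERT INTO customer VALUES (2, 'Smith')")
        raise RuntimeError("simulated power failure")
except RuntimeError:
    pass
print(conn.execute("SELECT COUNT(*) FROM customer").fetchone()[0])  # still 1

# A cascading delete removes the related order records automatically.
conn.execute("DELETE FROM customer WHERE cust_no = 1")
print(conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0])  # 0
```

The same roll-back and cascade facilities are provided, with differing syntax and defaults, by engines such as Oracle and Microsoft SQL Server.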
Most modern information systems use relational database management systems (RDBMS) for the storage of data. RDBMS provide file management facilities, which means that programmers and users do not have to become directly involved with file management. Because of this, most business users will not encounter the terms below unless eavesdropping on systems designers, and this section is therefore kept brief. However, some older systems and large-scale transaction processing systems requiring superior performance do not use an RDBMS for data storage.
DESIGN OF INPUT AND OUTPUT
File access methods
File-based systems are an alternative to database systems in which data are accessed from a file directly by program code rather than through a database query. Note, though, that databases are themselves made up of many files from which data are accessed directly; database users and database programmers are simply shielded from this complexity. Designers will specify systems that access data stored in a file using two main methods:
1. Sequential access. The program reading or writing a file processes the file record by record, in a set order, usually from the beginning. Sequential access is often used when batch processing a file, which involves processing every record.
2. Direct (random) access. Access can occur at any point (record) in the file without the need to start at the beginning. Direct access is preferable when finding a subset of records, such as in a query, since it is much faster.
Indexing
To enable rapid retrieval of data in a random access file (and also a database table), it is conventional to use an index which will find the location of the record more rapidly. These files are sometimes referred to as ‘indexed sequential files’. A file index is an additional file that is used to ‘point’ to records in a direct access file for more rapid access. An index file for a customer file would contain only two fields for each record – the indexed item, such as a customer number, and the number of the record in the parent file (also known as the ‘offset’ or ‘pointer’) which contains details on this customer.
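The idea of an index holding record pointers (offsets) can be sketched as follows. The example uses an in-memory byte stream in place of a disk file, and the customer numbers and names are invented:

```python
import io

# Stand-in for the customer file on disk
records = [("1001", "Fred's Toys"), ("1002", "Super Toys"),
           ("1003", "Cheapo Toys")]

data = io.BytesIO()   # the parent (direct access) file
index = {}            # indexed item (customer number) -> offset in parent file
for cust_no, name in records:
    index[cust_no] = data.tell()      # record the 'pointer' before writing
    data.write(f"{cust_no},{name}\n".encode())

# Direct access: look up the offset in the index, seek straight to the
# record and read it, with no sequential scan of the earlier records.
data.seek(index["1002"])
print(data.readline().decode().strip())  # 1002,Super Toys
```

A real index would itself be stored as a file and kept sorted or tree-structured so the lookup stays fast as the parent file grows.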
File descriptions
In transaction processing systems which use native files accessed directly by programs, rather than through an RDBMS, additional terms are used to describe the types of files. These types include:
1. The master file. This is used to store relatively static information that does not change frequently. An example would be a file containing product details.
2. The transaction file. This contains records of particular exchanges, usually related to a transaction such as a customer placing an order or an invoice being produced. This file has records added more frequently.
3. Archive file. To reduce storage requirements and improve performance, transactions that occurred some time ago to which businesses are unlikely to wish to refer are removed from the online system as an archive which is usually stored on a tape or optical disk. It will be still available for reference, but access will be slower.
4. Temporary files. These provide temporary storage space for the system which might be used during batch processing, when comparing data sets for example. The information would not be of value to a business user.
5. Log file. The log file is a system file used to store information on updates to other files. Its information would not be of value to a business user.
Table 11.5 Methods of file organisation

Organisation method | Access method | Application | Brief description
Sequential | Sequential | Batch processing of a customer master file | An ordered sequential access file, e.g. ordered by customer number
Serial | Sequential | – | A sequential access file, but without any ordering
Random | Random + index | Querying data for decision support; unsuitable for frequent updates due to overhead of updating index | Organisation is provided by index
Indexed sequential | Sequential + index | Querying data for decision support and sequential batch processing | Best compromise between the methods above
File organisation
Information can be organised in file-based systems in a variety of ways which are not of general relevance to the business user, so the terms are only summarised in tabular form (Table 11.5). Note that the indexed sequential technique offers the best balance between speed of access to individual records and efficiency of updates.
When designing information processing systems, designers have to decide which is the more appropriate method for handling transactions:
■ Batch – data are ‘post-processed’ after collection, usually at times of low system workload.
■ Real-time or online processing – data are processed instantaneously on collection.
Table 11.6 compares the merits of batch and real-time systems according to several criteria.
There is a general trend from batch systems to real-time processing, but it can be seen from the table that batch processing is superior in some areas, not least cost. For a system such as a national lottery, a real-time system must be used, but it is expensive to set up the necessary infrastructure.
Batch and real-time processing

Batch system
A batch system involves processing many transactions in sequence. This will typically occur some time after the transactions have occurred.

Real-time system
In a real-time system processing occurs immediately data are collected. Processing follows each transaction.
Batch systems are still widely used, since they are appropriate for data processing before analysis. For example, batch processing is used in data warehousing when transferring data from the operational system to the warehouse (Chapter 6). A batch process can be run overnight to transfer the data from one location to another and to perform aggregation such as summing sales figures across different market or product segments.
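Such an overnight aggregation step can be sketched very simply; the transaction records and segment names below are illustrative, not drawn from any real system:

```python
from collections import defaultdict

# A day's transaction file, to be batch processed after collection
transactions = [
    {"segment": "toys",  "amount": 40.38},
    {"segment": "toys",  "amount": 30.90},
    {"segment": "games", "amount": 52.26},
]

totals = defaultdict(float)
for t in transactions:               # batch: every record is processed in turn
    totals[t["segment"]] += t["amount"]

# The aggregated totals per segment would then be loaded into the warehouse
print(dict(totals))
```

In practice the same pattern runs over millions of records, which is why it is scheduled for slack periods such as overnight.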
Table 11.6 A comparison of batch and real-time data processing

Factor | Batch | Real-time
Speed of delivery to information user | Slower – depends on how frequently the batch process is run (daily, weekly or monthly) | Faster – effectively delivered immediately
Ability to deal with failure | Better – if a batch process fails overnight there is usually sufficient time to solve the problem and rerun the batch | Worse – when a real-time system is offline there is major customer disruption and orders may be lost
Data validation | Worse – validation can occur, but it is time-consuming to correct errors | Better – validation errors are notified and corrected immediately
Cost | Better – performance is less critical, so cheaper hardware and communications can be purchased | Worse – high-specification databases and infrastructure are necessary to achieve the required number of transactions per second
Disruption to users when data processing needs to be performed | Better – can occur in slack periods such as at weekends or overnight | Worse – can disrupt customers if time-consuming calculations occur as each record is processed
USER INTERFACE DESIGN
The design of the user interface is key to ensuring that information systems are easy to use and that users are productive. User interface design involves three main parts: first, defining the different views of the data such as input forms and output tables; second, defining how the user moves or navigates from one view to another; and, third, providing options for the user.
Each module can be broken down into interface elements: forms, which are used to enter and update information such as a customer’s details; views, which tabulate results as a report or graphically display related information such as a ‘to-do’ list; and dialogs, which are used by users to select options, such as a print options dialog box. Menus provide selection of different options. Figure 11.20 gives an example of these different interface components.
User interface design is a specialist field which is the preserve of graphic designers and psychologists. This field is often known as human–computer interaction (HCI) design. HCI involves the study of methods for designing the input and output of information systems to ensure they are ‘user-friendly’. It is covered well in Rogers et al. (2011) and Yeates and Wakefield (2003). Many of the design parameters can be assisted by a knowledge of HCI.
Form
An on-screen equivalent of a paper form which is used for entering data and will have validation routines to help improve the accuracy of the entered data.
Data views
Different screens of an application which review information in a different form such as table, graph, report or map.
Dialog
An on-screen window (box) which is used by a user to input data or select options.
Menu
Provides user selection of options for different application functions.
Human–computer interaction (HCI) design
HCI involves the study of methods for designing the input and output of information systems to ensure they are ‘user-friendly’.
Figure 11.20 Microsoft Access showing key elements of interface design
Source: Screenshot frame reprinted by permission from Microsoft Corporation

FOCUS ON… WEB SITE DESIGN FOR B2C E-COMMERCE

This ‘Focus on’ looks at a number of issues relating to web site design. The intention is not to give an in-depth explanation of web site design specifics, but rather to look at those elements that go to make up a well-designed web site.
Cox and Dale (2002) identify a number of key quality factors that help to create web sites that meet customer needs and expectations. These include:
■ Clarity of purpose – it must be clear to the customer whether the site is providing just information or whether it enables the customer to make transactions online; the information should be clearly and logically organised and clear instructions should be provided directly from the home page to avoid confusion and frustration.
■ Design – a key objective here is to ensure that the image of the company is appropriately projected and that the customer will remember and return to the site. Specific design factors include:
  – links – valid links are needed to enable a customer to navigate around the web site and should readily enable easy navigation between the pages that the customer is most likely to want to view;
  – consistency, menus and site maps – since web sites vary considerably from site to site, it is important for any one site to be internally consistent, so that the same procedures occur for similar or related things wherever the user may be within the site; the use of such features as site maps, menus and a ‘home page’ button on every page can help guide the user around the site;
  – pages, text and clicks – pages on a web site should ideally be short or, where this is not feasible, should use headings, paragraphs and other navigation aids (e.g. a button to scroll to the top of the page); for web sites that enable customer transactions, customers should be able to make purchases quickly with a minimum of pages in the checkout process;
  – communication and feedback – in essence, the user needs to be advised what is happening inside the system in response to their interaction (e.g. confirming order details, or informing the user of a mistake by writing the information in red next to the relevant box or area); in addition, the use of graphics should be such that web page loads are not slowed down (not all users have broadband!) and animations should not distract users from the content of the page and the information they are looking for;
  – search – search mechanisms are one of the first strategies used by visitors to a web site, often before they use links and menus; therefore search tools should cover the whole site and return the search findings in order of relevance;
  – fill-in forms – the layout of forms for personal detail entry (e.g. for site registration and ordering) should be self-explanatory and relevant to the nationality of the customers using the web site.
■ Accessibility and speed – this refers to the ability for customers to access and navigate an organisation’s web site; factors here include the speed of the home-page download, the accessibility of the web site 24 hours a day, 7 days a week, 365 days of the year and the availability of sufficient bandwidth to cope with customer demand at peak periods.
■ Content – this refers to the information that an organisation is actually offering through its web site. Important factors here include:
  – selection – the range of products and services on offer and the ease with which they can be found by the customer;
  – product/service information and availability – including a clear picture with all the necessary information on brand, size, colour, capabilities and price so that the customer is not misled, together with a clear statement of stock availability so that the customer knows before ordering whether an item is in stock;
  – delivery information – this should be made accessible from the home page or with the product information so that customers are aware of the prices; in addition, customers should also be made aware of probable delivery times and any delays that may occur (e.g. during peak periods);
  – policies, charges, terms and conditions – customers should be aware of all the company terms and conditions before committing to a purchase;
  – security and reliability – lack of security is one of the main barriers to customers shopping online, so it is crucial that a B2C e-commerce web site offers a secure online payment method (either directly or through a third party).
■ Customer service – customer service plays an important part in delivering service quality to the customer and since face-to-face interaction is non-existent in e-commerce transactions, services such as ‘call-u-back’ during office hours and e-mailing queries are needed (contact details should be on every page of the web site and not just on the home page and during the transaction process); frequently asked questions (FAQ) arranged by topic can also help to guide the customer.
■ Customer relationships – the key to success for B2C e-commerce is to attract and retain customers that use the site and keep returning to make purchases:
  – recognition – by asking customers to fill in a user ID (research suggests that it is simpler for customers if they are asked to use their e-mail address as their ID) an organisation can tailor the web site to a particular customer; it also means that customer information such as the billing and shipping addresses does not have to be filled in again;
  – customer feedback platforms – features such as product reviews (ebuyer.com is a good illustration of this) help to create a community for customers and are more likely to lead to enhanced customer loyalty;
  – frequent buyer incentives – these can include discounts, free delivery or the benefits of promotions;
  – extra services – examples here include a currency conversion mechanism on sites engaging in international B2C e-commerce, extra or related information on the products being sold, links to other partner sites, and services that aid the customer in buying or finding the right product.
Huang et al. (2006) in an analysis of web features and functions identify a number of factors that can impact positively on the customer experience. These clearly overlap with a number of those given above and include:
■ speeding up online tasks;
■ establishing multiple communication channels;
■ providing suitable access to contacts;
■ making the web site personal;
■ provision of company information and advertising online;
■ facilitation of customer feedback;
■ the ability of customers to control information detail.
Cao et al. (2005) also point out that the features that make for a good customer experience have implications for web interface design. For example, in addition to the software considerations, the capabilities of the hardware (both the organisation’s and the customer’s) need to be taken into account (e.g. page loading times).
INPUT DESIGN
User interface design can also be subdivided into input design and output design, but these terms are used more generally to refer to all methods of data entry and display, so they warrant a separate section.
Input design includes the design of user input through on-screen forms, but also other methods of data entry such as import by file, transfer from another system or specialised data capture methods such as bar-code scanning and optical or voice recognition techniques.
Data input design involves capturing data that have been identified in the user requirements analysis via a variety of mechanisms. These have been described earlier (in Chapter 3) and include:
■ keyboard – the most commonly used method;
■ optical character recognition and scanning;
■ voice input;
■ directly from a monitoring system such as a manufacturing process, or from a phone system when a caller line ID is used to identify the customer phoning and automatically bring their details on screen;
■ input from a data file that is used to store data;
■ import of data from another system via a batch process (for example a data warehouse will require import of data from an operational system).
Input design
Input design includes the design of user input through on-screen forms, but also other methods of data entry such as import by file, transfer from another system or specialised data capture methods such as bar-code scanning and optical or voice recognition techniques.
Chapter 11 SYSTEMS DESIGN

One of the key elements in input by all these methods is ensuring the quality of data. This is achieved through data validation. This is a process to ensure the quality of data by checking they have been entered correctly; it prompts the user to inform them of incorrect data entry.

Data validation
Data validation is a process that ensures the quality of data by checking they have been entered correctly.

Validation is important in database systems and databases usually supply built-in input validation as follows:

■ Data type checking. When tables have been designed, field types will be defined such as text (alphanumeric), number, currency or date. Text characters will not be permitted in a number field and when a user enters a date, for example, the software will prompt the user if it is not a valid date.
■ Data range checking. Since storage needs to be pre-allocated in databases, designers will specify the number of digits required for each field. For example, a field for holding the quantity of an item ordered would typically only need the range 1–999. So three digits are required. If the user made an error and entered four digits, then they would be warned that this was not possible.
■ Restricted value checking. This usually occurs for text values that are used to describe particular attributes of an entity. For example, in a database for estate agents, the type of house would have to be stored. This would be a restricted choice of flat, bungalow, semi-detached, etc. Once the restricted choices have been specified, the software will ensure that only one of these choices is permitted, usually by prompting the user with a list of the available alternatives.
Some additional validation checks may need to be specified at the design phase which will later be programmed into the system. These include:
■ Input limits. This is another form of range checking when the input range cannot be specified through the number of digits alone. For example, if the maximum number of an item that could be ordered is 5, perhaps because of a special offer, this would be specified as a limit of 1–5. Note that the user would not be permitted to enter 0.
■ Multiple field validation. If there are business rules that mean that allowable input is governed by more than one field, then these rules must be programmed in. For example, in the estate agent database, there could be a separate field for commission shown as a percentage of house price, such as 1.5 per cent, and a separate field showing the amount, such as £500. In this situation the programmer would have to write code that would automatically calculate the commission amount depending on the percentage entered.
■ Checksum digits. A checksum involves the use of an extra digit for ensuring the validity of long code numbers. The checksum digit is calculated from an algorithm involving the numbers in the code and their modulus (by convention modulus 11). These can be used to ensure that errors are not made in entering long codes such as a customer account number (although these would normally be generated automatically by the computer). They are often used in bar codes.
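The built-in and programmed checks described above can be sketched in a short validation routine. This is an illustrative sketch only: the field names (property_type, quantity, price, commission_pct) are hypothetical, drawn from the estate agent and special-offer examples in the text:

```python
def validate_input(property_type, quantity, price, commission_pct):
    """Illustrative design-time validation checks; field names are hypothetical."""
    errors = []
    # Restricted value checking: only one of the listed choices is permitted
    if property_type not in {"flat", "bungalow", "semi-detached"}:
        errors.append("property type must be one of the listed choices")
    # Input limits: a special offer restricts the quantity to 1-5 (0 not permitted)
    if not 1 <= quantity <= 5:
        errors.append("quantity must be between 1 and 5")
    # Multiple field validation: the commission amount is derived automatically
    # from the percentage entered, rather than typed in separately
    commission = round(price * commission_pct / 100, 2)
    return errors, commission
```

For example, validate_input("flat", 3, 200000, 1.5) reports no errors and derives a commission of 3000.0, while an unknown property type or a quantity of 0 would each add an error message.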
Checksum digits
A checksum involves the use of an extra digit for ensuring the validity of long code numbers. The checksum digit is calculated from an algorithm involving the numbers in the code and their modulus (by convention modulus 11).
Activity 11.3 Checksum digits example

The checksum digit is calculated using the modulus of the weighted products of the number, as follows:

1. Code number without check digit = 293643.
2. Calculate the sum of weighted products by multiplying the least significant digit by 2, the next by 3 and so on. For this example:
   (7 × 2) + (6 × 9) + (5 × 3) + (4 × 6) + (3 × 4) + (2 × 3) = 14 + 54 + 15 + 24 + 12 + 6 = 125
3. Remainder when sum divided by 11 (modulus 11) = 125/11 = 11 remainder 4.
4. Subtract remainder from 11 to find check digit (11 − 4) = 7. (If the remainder is 0, check digit is 0; if 1, check digit is X.)
5. New code number with check digit = 2936437.
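The modulus-11 calculation can be expressed directly in code. The function below follows the steps of the activity, applying the weights 2, 3, 4, … from the least significant digit:

```python
def mod11_check_digit(code: str) -> str:
    """Return the modulus-11 check digit for a numeric code string."""
    # Weight the least significant digit by 2, the next by 3, and so on
    total = sum(int(digit) * weight
                for weight, digit in enumerate(reversed(code), start=2))
    remainder = total % 11
    if remainder == 0:
        return "0"
    if remainder == 1:
        return "X"   # by convention, a remainder of 1 gives check digit X
    return str(11 - remainder)
```

For the code 293643 this returns "7", giving the full code 2936437 as in the activity.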
OUTPUT DESIGN

Output design
Output design involves specifying how production of on-screen reports and paper-based reports will occur. Output may occur to database or file for storing information entered or also for use by other systems.

Output design specifies how production of on-screen reports and paper-based reports will occur. Output may occur to database or file for storing information entered or also for use by other systems.

Output data are displayed by three methods:

1. They may be directly displayed from input data.
2. They may be displayed from previously stored data.
3. They may be derived data that are produced by calculation.

Design involves specifying the source of data (which database tables and fields map to a point on the report), what processing needs to occur to display data such as aggregation, sorting or calculations, and the form in which the information will be displayed – graph, table or summary form.

Output design is important for decision support software to ensure that relevant information can be chosen, retrieved and interpreted as easily as possible. Given that output design involves these three factors, it will also relate to input design (to select the report needed) and database design (to retrieve the information quickly).
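As a small illustration of derived output, the snippet below aggregates and sorts stored rows before they are displayed; the sample data values are invented for the example:

```python
# Stored rows: (product group, quantity sold) - invented sample data
sales = [("Kiddy Doh", 16), ("Pre-School", 3), ("Kiddy Doh", 35)]

# Derived data: aggregate quantities per group (calculation on stored data)
totals = {}
for group, quantity in sales:
    totals[group] = totals.get(group, 0) + quantity

# Sort for presentation; the result could then be rendered as a
# graph, table or summary
report = sorted(totals.items(), key=lambda item: item[1], reverse=True)
```

Here the report rows are derived by calculation and sorting rather than read directly from input, illustrating the third output method above.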
DESIGNING INTERFACES BETWEEN SYSTEMS
A major challenge for the designer of today’s systems is systems integration. Systems integration includes both linking the different modules of a new system together and linking the new system with existing systems often known as ‘legacy systems’. For applications that span a whole organisation this challenge is referred to as enterprise application integration (EAI). Designing how the systems interoperate involves consideration of how data are exchanged between applications and how one application controls another. A special class of software, middleware or messaging software, is used to achieve this control and data transfer. In a banking system, middleware is used to transfer data between an online banking service and a legacy account system. For example, if a user wishes to transfer money from one account to another using a web-based interface this web application must instruct the legacy system to make the transfer. The web-based interface will also need to access data from the legacy system on the amount of money available in the accounts. This illustrates the role of middleware in control messaging and data transfer messaging.
XML (eXtensible Markup Language) is a standard that has been widely adopted for the transfer of information between e-business systems. XML is increasingly used to share data between partners. For example, Chem eStandards is an XML standard for the chemical industry, which covers 700 data elements and 47 transactions and is sponsored by the Chemical Industry Data Exchange (CIDX, www.cidx.org). A more widely applicable application of XML is ebXML (www.ebxml.org). One application developed using ebXML is to enable different accounting packages to communicate with online order processing systems. For designers to ensure future flexibility of their systems it is important to ensure that interfaces with external systems can support different XML data exchange standards.
Enterprise application integration (EAI)
The process of designing software to facilitate communications between business applications including data transfer and control.
Middleware
Software used to facilitate communications between business applications including data transfer and control.
XML (eXtensible Markup Language)
A standard for transferring structured data, unlike HTML which is purely presentational.
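A brief sketch of structured data transfer using XML: the order message format below is invented for illustration, but it shows how a receiving application can parse a partner's document with a standard parser and derive data from it:

```python
import xml.etree.ElementTree as ET

# A hypothetical order message exchanged between an order-processing
# system and an accounting package (not a real industry standard)
message = """
<order id="1001">
  <customer>C042</customer>
  <line product="0942" quantity="3" price="1.29"/>
  <line product="1193" quantity="1" price="12.49"/>
</order>
"""

root = ET.fromstring(message)
# Derive the order total from the structured line items
order_total = sum(int(line.get("quantity")) * float(line.get("price"))
                  for line in root.findall("line"))
```

Because the message is self-describing, either side can evolve its internal systems as long as both continue to agree on the exchange format.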
DEFINING THE STRUCTURE OF PROGRAM MODULES
The detailed design may include a definition for programmers, indicating how to structure the code of the module. The extent to which this is necessary will depend on the complexity of the module, how experienced the programmer is and how important it is to document the method of programming. A safety-critical system (Chapter 4) will always be designed in this detail before coding commences. Structured English is one of the most commonly used methods of defining program structure. Standard flow charts can be used, but these tend to take longer to produce.
Structured English
A technique for producing a design specification for programmers which indicates the way individual modules or groups of modules should be implemented.
Structured English is a technique for producing a design specification for programmers which indicates the way individual modules or groups of modules should be implemented. It is more specific than a flow chart. It uses keywords to describe the structure of the program, as shown in the example box. Structured English is sometimes known as ‘pseudocode’ or ‘program design language’. Data action diagrams use a similar notation.
Structured English has the disadvantage that it is very time-consuming to produce a detailed design. But it has the advantage that to move from here to coding is very straightforward and the likelihood of errors is reduced.
Example: Structured English
This example moves through each record of a database table totalling all employees' salaries. (Note that this could be accomplished more quickly using an SQL statement.)

DO WHILE NOT end of table
    IF hours_worked > basic_hours
        SET pay = (basic_hours * basic_rate) + (overtime_hours * overtime_rate)
    ELSE
        SET pay = (hours_worked * basic_rate)
    END IF
    SET total_pay = total_pay + pay
    Move to next record
ENDDO
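For comparison, a direct Python translation of the Structured English sketch above, with the database table represented as a list of dictionaries (the field names follow the pseudocode):

```python
def total_pay(records, basic_hours, basic_rate, overtime_rate):
    """Total the pay across all employee records, as in the sketch above."""
    total = 0.0
    for record in records:                      # DO WHILE NOT end of table
        hours_worked = record["hours_worked"]
        if hours_worked > basic_hours:          # overtime was worked
            overtime_hours = hours_worked - basic_hours
            pay = basic_hours * basic_rate + overtime_hours * overtime_rate
        else:
            pay = hours_worked * basic_rate
        total += pay                            # SET total_pay = total_pay + pay
    return total
```

Notice how closely the Structured English maps onto real code, which is exactly why it reduces the likelihood of errors in the move from design to build.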
SECURITY DESIGN
Data security is, of course, a key design issue, particularly for information systems that contain confidential company information which is accessed across a wide-area network or the Internet. The four main attributes of security which must be achieved through design are:
1. Authentication ensures that the sender of the message, or the person trying to access the system, is who they claim to be. Passwords are one way of providing authentication, but are open to abuse – users often tend to swap them. Digital certificates and digital signatures offer a higher level of security. These are available in some groupware products such as Lotus Notes.
2. Authorisation checks that the user has the right permissions to access the information that they are seeking. This ensures that only senior personnel managers can access salary figures, for example.
3. Privacy – in a security context, privacy equates to scrambling or encryption of messages so that they cannot easily be decrypted if they are intercepted during transmission. Credit card numbers sent over the Internet are encrypted in this way.
4. Data integrity – security is also necessary to ensure that the message sent is the same as the one received and that corruption has not occurred. A security system can use a checksum digit to ensure that this is the case and the data packet has not been modified.
Data must also be secure in the sense of not being subject to deletion, or available to people who don't have the 'need to know'. Methods of safeguarding data are covered in more detail later (in Chapter 15).
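As a small sketch of the authentication attribute, the Python standard library's PBKDF2 function can store and verify salted password hashes. This illustrates the principle of never storing passwords in the clear; it is a sketch, not a complete authentication design:

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Return (salt, digest) for storage; a fresh random salt per user."""
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, stored_digest):
    """Recompute the digest and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, stored_digest)
```

The slow, salted hash limits the damage if the stored digests are disclosed, and the constant-time comparison guards against timing attacks on the check itself.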
DESIGN TOOLS: CASE (COMPUTER-AIDED SOFTWARE ENGINEERING) TOOLS
CASE (computer-aided software engineering) tools are software that helps the systems analyst and designer in the analysis, design and build phases of a software project. They provide tools for drawing diagrams such as entity relationship diagrams (ERDs) and storing information about processes, entities and attributes.
CASE tools are primarily used by professional IS developers and are intended to assist in managing the process of capturing requirements, and converting these into design and program code. They also act as a repository for storing information about the design of the program and help make the software easy to maintain.
CASE (computer-aided software engineering) tools
Software that helps the systems analyst and designer in the analysis, design and build phases of a software project. They provide tools for drawing diagrams such as ERDs and storing information about processes, entities and attributes.
ERROR HANDLING AND EXCEPTIONS
The design will include a strategy for dealing with bugs in the system or problems resulting from changes to the operating environment, such as a network failure. When an error is encountered the design will specify that:
■ users should be prompted with a clear but not alarming message explaining the problem;
■ the message should contain sufficient diagnostics that developers will be able to identify and solve the problem.
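The two rules above can be sketched as a simple pattern: log full diagnostics for developers, and return only a calm explanation to the user. The save_record function and its connection object here are hypothetical, for illustration only:

```python
import logging

logger = logging.getLogger("app")

def save_record(record, connection):
    """Attempt a save; on failure, log diagnostics and return a user message."""
    try:
        connection.write(record)              # hypothetical data-access call
    except OSError:
        # Full diagnostics (stack trace and offending record) for developers
        logger.exception("Failed to save record %r", record)
        # The user sees a clear but not alarming explanation
        return "Your changes could not be saved. Please try again shortly."
    return None  # no error to report
```

Keeping the two audiences separate means the user interface never exposes internal details, while the log retains everything needed to identify and solve the problem.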
HELP AND DOCUMENTATION
It is straightforward using tools to construct a Windows help file based on a word-processed document. The method of generating help messages for users will also be specified in the design. Help is usually available as:
■ an online help application similar to reading a manual, but with links between pages and a built-in index;
■ context-sensitive help, where pressing the help button of a dialogue will take the user straight to the relevant page of the online user guide;
■ ToolTip help, where the user places the mouse over a menu option or icon and further guidance is displayed in the status area;
■ help associated with error messages; this is also context-sensitive.
FOCUS ON… OBJECT-ORIENTED DESIGN (OOD)

Object-oriented design
This is a design technique which involves basing the design of software on real-world objects which consist of both data and the procedures that process them rather than traditional design where procedures operate on separate data.

Object-oriented design is a popular design technique which involves basing the design of software on real-world objects that consist of both data and the procedures that process them, rather than traditional design where procedures operate on separate data. Many software products are labelled ‘object-oriented’ in a bid to boost sales, but relatively few are actually designed using object-oriented techniques. What makes the object approach completely different?

■ Traditional development methods are procedural, dealing with separate data that are transformed by abstract, hierarchical programming code.
■ OOD is a relatively new technique involving objects (which mirror real-world objects consisting of integrated data and code).

Examples of objects that are commonly used in business information systems include customer, supplier, employee and product. You may notice that these are similar to the entities referred to earlier (in Chapter 10), but a key difference is that an object will not only consist of different attributes such as name and address, but will also comprise procedures that process them. For example, a customer object may have a procedure (known as a ‘method’) to print these personal details.

The main benefits of using object orientation are said to be more rapid development and lower costs which can be achieved through greater reuse of code. Reuse in object-oriented systems is a consequence of the ease with which generic objects can be incorporated into code. This is a consequence of inheritance, where a new object can be derived from an existing object and its behaviour modified (polymorphism).

Some further advantages of the object-oriented approach are:

■ easier to explain object concepts to end-users since they are based on real-world objects;
■ more reuse of code – standard, tested business objects;
■ faster, cheaper development of more robust code.

Object-oriented design is closely linked to the growth in use of software components for producing systems. Developers writing programs for Microsoft Windows on a PC will now commonly buy pre-built objects with functionality such as displaying a diary, a project schedule or different types of graph. Such object components are referred to as Visual Basic controls and object controls (OCX). Through using these, developers can implement features without having to reinvent the wheel of writing graphical routines.

An example of a class hierarchy is shown in Figure 11.21. The base class is a person who attends the college. All other classes are derived from this person.
Figure 11.21 A class hierarchy for different types of people at a university. The base class Person has subclasses Staff and Student; Staff is specialised into Admin and Academic, and Student into Undergrad and Postgrad.
How widely is the object-oriented approach used?
There was a rapid growth in the use of object-oriented techniques in the 1990s, although original research using the Simula language dates back to the late 1960s. This growth in interest is reflected by the increase in the number of jobs advertised by companies looking to develop software using object-oriented methods, such as Smalltalk, C++ and Java which is now one of the main methods for developing interactive web sites. Specialised methodologies exist for designing object-oriented systems. One of the most commonly used is the object modelling technique (OMT) (see Blaha and Rumbaugh, 2005). This shares some elements with DFD and ERD, but differs in that a hierarchical class breakdown is an additional perspective on designing the system.
What are the main characteristics of an object-oriented system? 1. An object consists of data and methods that act on them. A customer object would contain data such as
their personal details and methods that act on them such as ‘print customer details’.
2. Objects communicate using messages which request a particular service from another object, such as a ‘print current balance’ service. These services are known as ‘methods’ and are equivalent to functions in traditional programming.
3. Objects are created and destroyed as the program is running. For example, if a new customer opens an account, we would create a new instance of the object. If a customer closes an account, the object is destroyed.
4. Objects provide encapsulation – an object can have private elements that are not evident to other objects. This hides complex details and gives a simple public object interface for external use by other objects. A real-world analogy is that it is possible to use a limited number of functions on a television without knowing its inner workings. In object-oriented parlance the television controls are providing different public methods which can be used by other objects. ‘Abstraction’ refers to the simplified public interface of the object.
5. Objects can be grouped into classes which share characteristics. For example, an organisation might contain an employee class. The classes can be subdivided using a hierarchy to create subclasses such as ‘manager’ or ‘administrator’. Classes can share characteristics with other classes in the hierarchy, which is known as inheritance. This refers to the situation when an object inherits the behaviour of other objects. A specialised part-time staff class could inherit personal details data items from the employee class. If the method for calculating salary were different, then the part-time staff could override its inherited behaviour to define its own method ‘calculate salary’. This is known as polymorphism, where an object can modify its inherited behaviour.
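A minimal sketch of these ideas in Python, using the university hierarchy of Figure 11.21; only a single describe method is shown, to illustrate inheritance and polymorphism:

```python
class Person:
    """Base class: encapsulates data (name) and methods that act on it."""
    def __init__(self, name):
        self.name = name

    def describe(self):
        return f"{self.name} is a person"

class Staff(Person):
    """Inherits Person's data; overrides describe (polymorphism)."""
    def describe(self):
        return f"{self.name} is a member of staff"

class Student(Person):
    def describe(self):
        return f"{self.name} is a student"

class Undergrad(Student):
    pass  # inherits Student's behaviour unchanged
```

Creating an Undergrad and calling describe() uses the method inherited from Student, while Staff overrides the inherited behaviour with its own version.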
Despite the growth of OOD, non-object or procedural systems vastly outnumber object systems. So if OOD is nirvana, why doesn’t everyone use it? The following are all practical barriers to growth:
■ Millions of lines of procedural legacy computer code exist in languages such as COBOL.
■ Many programmers' skills are procedural – OOD requires retraining to a different way of thinking.
■ Methodologies, languages and tools are developing rapidly, requiring constant retraining and making reuse difficult when using different tools and languages; for example, the most popular object-oriented language has changed from Smalltalk to C++ to Java in just 10 years.
■ Limited libraries are available for reuse.
■ When initially designing projects, it is often slower and more costly – the benefits of OOD take several years to materialise.
The experience of early adopters has shown that the benefits do not come until later releases of a product and that initial object-oriented design and development may be more expensive than traditional methods.
Stage summary: systems design
Purpose: Defines how the system will work
Key activities: Systems design, detailed design, database design, user interface design
Input: Requirements specification
Output: System design specification, detailed design specification, test specification
1. The design phase of the systems development lifecycle involves the specification of how the system should work.
2. The input to the design phase is the requirements specification from the analysis phase. The output from the design phase is a design specification that is used by programmers in the build phase.
3. Systems design is usually conducted using a top-down approach in which the overall architecture of the system is designed first. This is referred to as the systems or outline design. The individual modules are then designed in the detailed design phase.
4. Many modern information systems are designed using the client/server architecture. Processing is shared between the end-user’s clients and the server, which is used to store data and process queries.
5. Systems design and detailed design will specify how the following aspects of the system will work:
■ its user interface;
■ method of data input and output (input and output design);
■ design of security to ensure the integrity of confidential data;
■ error handling;
■ help system.
6. For systems based on a relational database or a file-based system, the design stage will involve determining the best method of physically storing the data. For a database system, the technique for optimising the storage is known as ‘normalisation’.
7. Object-oriented design is a relatively new approach to design. It has been adopted by some companies attracted by the possibility of cheaper development costs and fewer errors, which are made possible through reuse of code and a different design model that involves data and process integration.
SUMMARY
1. Define systems design.
2. What distinguishes systems design from systems analysis?
3. Describe the purpose of validation and verification.
4. What are process modelling and data modelling? Which diagrams used to summarise requirements at the analysis phase are useful in each of these types of modelling?
5. Explain the client/server model of computing.
6. What parts of the system need to be designed at the detailed design stage?
7. Describe the purpose of normalisation.
EXERCISES
Self-assessment exercises
8. Explain insertion, update and deletion anomalies.
9. What are the differences between the sequential and direct (random) file access methods? In which business applications might they be used? What is the purpose of a file index?
10. Explain the difference between a batch and a real-time system. Which would be the more appropriate design for each of the following situations:
■ periodic updating of a data warehouse from an operational database;
■ capturing information on customer sales transactions?
11. What are the different types of input validation that must be considered in the design of a user input form?
12. Describe the main differences between the analysis and design phases within the systems development lifecycle.
Discussion questions

1. ‘The client/server model of computing has many disadvantages, but these do not outweigh the advantages.’ Discuss.

2. ‘The distinction between system design and detailed design is an artificial one since a bottom-up approach to design is inevitable.’ Discuss.

Essay questions

1. Explain, using an example from a human resources management database, the normalisation process from unnormalised data to third normal form (3NF).
2. Table 11.7, from a relational database, contains a number of rows and columns. When data are entered into the table, all columns must have data entered. Information about product descriptions, prices, product groups and rack locations is not held elsewhere. Explain how, because of its design, the table contains data duplicated in fields and contains the potential for insertion, update and deletion anomalies. What is meant by these anomalies and what could be done to prevent them?
3. A business-to-consumer company (B2C), a kitchenware retailer, wants to set up an e-commerce site, but first wants to produce a prototype in Microsoft Access. The data analysis has been performed and is shown in the expanded entity relationship diagram in Figure 11.22. Produce this database in Access based on the ERD. Include 4 or 5 sample records for each table.
Table 11.7 Table from a relational database

Product code | Product description | Product group | Group description | Cost | Retail price | Rack location | Quantity
0942 | Small Green | KD | Kiddy Doh | 0.19 | 1.29 | A201 | 16
0439 | Large Red | KD | Kiddy Doh | 0.31 | 1.89 | W106 | 35
0942 | Small Green | KD | Kiddy Doh | 0.19 | 1.29 | E102 | 0
0902 | Small Green | KD | Kiddy Doh | 0.19 | 1.29 | J320 | 56
1193 | Spinning Top | PS | Pre-School | 1.23 | 12.49 | X215 | 3
2199 | Burger Kit | KD | Kiddy Doh | 3.25 | 17.75 | D111 | 0
Examination questions

1. Explain the difference between validation and verification. Why are they important elements of systems design?
2. What benefits does three-tier client/server offer over two-tier client/server?
3. What are the main elements of system design?
4. Explain normalisation and how it can help remove different types of anomaly when modifying a database.
5. Which criteria are important in deciding whether to use a batch or real-time system?
6. What are the important aspects of user interface design?
7. Which different types of validation need to occur on data input to a system to ensure information quality?
8. What are the four main attributes of information security which need to be attained in an information system?
9. What is meant by the terms ‘input design’, ‘output design’ and ‘database design’? Illustrate each of them with an example.
Figure 11.22 The expanded ERD for a kitchenware retailer. Key: * primary key, + secondary key; all relationships shown are one-to-many (1:M).

Customer (Customer id *, Title, First name, Last name, Address line 1, Address line 2, City, Post/Zip code, County, Password, User id, E-mail, Registration date) – places 1:M – Order hdr
Order hdr (Order id *, Order date, Dispatch date, Total amount, Shipping cost, Order credit card number, Customer id +) – contains 1:M – Order line
Product (Product id *, Short description, Long description, Picture, Size, Category, Manufacturer id +, Standard price, Number in stock, Reorder level, Next available date) – contains 1:M – Order line
Order line (Line id *, Order id +, Quantity, Price, Product id +)
References
Blaha, M.R. and Rumbaugh, J. (2005) Object Oriented Modeling and Design with UML, 2nd edition, Prentice-Hall, Englewood Cliffs, NJ
Cao, M., Zhang, Q. and Seydel, J. (2005) ‘B2C e-commerce web site quality: an empirical examination’, Industrial Management and Data Systems, 105, 5, 645–61
Cox, J. and Dale, B.G. (2002) ‘Key quality factors in Web site design and use: an examination’, International Journal of Quality and Reliability Management, 19, 7, 862–88
Curtis, G. and Cobham, D. (2008) Business Information Systems: Analysis, Design and Practice, 6th edition, Addison-Wesley, Harlow
Hoffer, J.A., George, J. and Valacich, J. (2010) Modern Systems Analysis and Design, 6th edition, Prentice-Hall, Upper Saddle River, NJ
Hoffer, J.A., Ramesh, V. and Topi, H. (2013) Modern Database Management, 11th edition, Prentice-Hall, Upper Saddle River, NJ
Huang, W., Le, T., Li, X. and Gandha, S. (2006) ‘Categorizing web features and functions to evaluate commercial web sites: an assessment framework and an empirical investigation of Australian companies’, Industrial Management and Data Systems, 106, 4, 523–39
Jackson, M.A. (1983) System Development, Prentice Hall, London
Rogers, Y., Sharp, H. and Preece, J. (2011) Interaction Design: Beyond Human-Computer Interaction, 3rd edition, Addison-Wesley, Wokingham
Whitten, J.L. and Bentley, L.D. (2006) Systems Analysis and Design Methods, 7th edition, Mcgraw-Hill Irwin, Boston, MA
Yeates, D. and Wakefield, T. (2003) Systems Analysis and Design, 2nd edition, Financial Times Pitman Publishing, London
Further reading
Booch, G. (2011) Object Oriented Analysis and Design with Applications, 2nd edition, Addison-Wesley, Upper Saddle River, NJ
Hasselbring, W. (2000) ‘Information system integration’, Communications of the ACM, June, 43, 6, 33–8
Hoffer, J.A., Ramesh, V. and Topi, H. (2013) Modern Database Management, 11th edition, Prentice-Hall, Upper Saddle River, NJ. A comprehensive text on the process of database design and normalisation together with applications such as data warehousing.
Hoffer, J.A., George, J. and Valacich, J. (2010) Modern Systems Analysis and Design, 6th edition, Prentice-Hall, Upper Saddle River, NJ. A complementary text to Modern Database Management, this has specific chapters on issues involved with designing user interfaces and Internet systems.
Kendall, K. and Kendall, J. (2013) Systems Analysis and Design, 9th edition, Prentice-Hall, Upper Saddle River, NJ. A longer text than the other two partly due to the extensive case on designing a student record system that runs through the book.
Rosenfeld, L. and Morville, P. (2007) Information Architecture for the World Wide Web, 3rd edition, O’Reilly & Associates, Sebastopol, CA. An excellent guide to analysis and design approaches to defining structured storage and access to information using web-based information systems.
Web links
www.cio.com CIO.com for chief information officers and IS staff has many articles related to analysis and design topics.
http://database.ittoolbox.com Channel of IT Toolbox giving topical news and whitepapers on database design, for example data quality, security design, data warehousing. Also has a series of useful introductory articles.
Usability and accessibility
www.uie.com/articles This site focuses on usability, but offers a counterpoint with different views based on research of user behaviour.
www.rnib.org.uk/accessibility Royal National Institute for the Blind web accessibility guidelines.
www.w3.org/WAI World Wide Web Consortium web accessibility guidelines.
