Section 0: Module Objectives or Competencies
Course Objective or Competency | Module Objective or Competency |
---|---|
The student will be able to list and explain the fundamental concepts behind the implementation, testing, conversion, and maintenance of a system. | Students will be able to explain that implementation involves not only coding, but also testing and optimizing for efficiency. |
The student will be able to list and explain a variety of approaches for conversion. |
Section 1: Overview
Implementation Phase
The implementation phase encompasses the construction/documentation, testing, and installation/conversion of the new system.
- Once the analysis and design phases have been completed, the new system can be constructed.
- The construction phase includes the tasks of coding, testing, and optimizing for efficiency.
- Meanwhile, the site must be readied for the computer.
- Later, during the installation/conversion phase, all data must be transferred to the new system, and the new system put into operation.
- Testing and verification determine whether the system is working properly.
Phase Name: Implementation | |
---|---|
Major Function: | Construct, test, and install the new system. |
Input: | Design specification |
Output: | Functional system |
Principal Tools: | |
Personnel & Tasks: | Analyst |
Section 2: Implementation
As noted, the construction phase includes the tasks of coding, testing, and optimizing for efficiency.
The purpose of this phase is to convert final physical system specifications into working and reliable software.
- At the same time, the work that is being done must be documented.
Because you have experienced software construction in programming courses, and will experience it in detail in INFO 4482 (Systems Development & Implementation Methodologies), we won't detail the process.
Users must continue to be involved, and one vehicle is the structured walkthrough.
- What?
- An inspection intended to expose defects in the product.
- Helps to ensure the correctness of the new system.
- Also referred to as a formal technical review.
- One of the most important tools for discovering defects at the earliest possible stage.
- Helps to uncover places where standards have been ignored, where inefficient algorithms have been used, or where a technically correct program might nonetheless present future maintenance problems.
- When?
- Done at all stages of a project, from the models of the analysis phase to the structure charts of the design phase, and on to the programs and documentation of the implementation phase.
- Why?
- Organizations that subject all systems to stringent walkthroughs can reportedly reduce the program error rate from the industry average of three to five defects per hundred lines of code to a more manageable average of three to five defects per thousand lines of code.
- Who?
- The team members may be analysts and/or programmers, depending on the type of product being reviewed, as well as users.
- How?
- User representatives verify that the product performs as requested.
- The responsibility of the team is to give an accurate appraisal, good or bad, of the product being reviewed.
- The participants should identify defects, but they should not attempt to correct them, since correction is the responsibility of the author.
- Result?
- The outcome of the walkthrough is the walkthrough report.
- The report details all the problems that need attention.
- The product is accepted as it is or with minor revisions, or it is rejected.
- Rejection is broken down into three categories:
- The product has so many serious flaws that it must be completely rebuilt.
- The product needs major revisions.
- The review was incomplete and must be continued later.
Video: IT Software Development Lifecycle Part 4 - Implementation Phase
Section 3: Testing
Process
- All of the system's newly written or modified application programs, as well as new procedural manuals, new hardware, and all system interfaces, must be tested thoroughly.
- Testing is done throughout systems development, not just at the end.
- It is meant to turn up previously unknown problems, not to demonstrate the perfection of programs, documentation, or equipment.
Incremental Testing
- Using the incremental approach, a module is tested by itself, then immediately added to the group of existing modules and tested again.
- This process is carried out for each module.
- By the time the last module has been added, all of the defects should have been found except for those related to that last module.
- The process begins as soon as the second module is completely coded, thereby interleaving and overlapping the coding and testing.
- Overlapping these activities gives the advantage of providing more time to correct any defects found early in the testing, perhaps allowing programmers to fix related defects in the remaining modules as they are coded.
- Each module that is added must be related to the group of existing modules; a module cannot be added to an unrelated group of modules if the test is expected to be successful.
- Instead, testing should start in one area of the structure chart and then gradually work its way to other areas.
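The incremental approach above can be sketched in a few lines of Python. The modules here are trivial placeholders invented for illustration; they are not from any real system.

```python
# Incremental testing sketch: test each module alone, then immediately
# re-test it as part of the growing group of modules.

def validate(record):          # first module: accept non-negative records
    return record >= 0

def transform(record):         # second module: the next one to be added
    return record * 2

# Step 1: test the first module by itself.
assert validate(5)
assert not validate(-1)

# Step 2: add the second module and test the combined group.
def pipeline(record):
    """The group of modules tested together after integration."""
    return transform(record) if validate(record) else None

assert pipeline(5) == 10       # valid record flows through both modules
assert pipeline(-1) is None    # invalid record is stopped by the first
```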
Top-Down Incremental Testing
- Testing begins with the highest-level module.
- First, all of the subordinates are simulated with stubs, or dummy modules that do little more than report that they were entered and that they received the appropriate parameters.
- When the main module is working properly, the subordinate stubs are replaced with real modules, and new stubs are created as the subordinates of these new modules.
- Testing works down the structure chart, replacing the remaining stubs with coded modules.
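A minimal sketch of a stub, assuming a hypothetical payroll example: the high-level module is tested first, with its subordinate replaced by a dummy that reports it was entered and that it received the appropriate parameters. The module names are illustrative, not from any real system.

```python
# Top-down testing sketch: the supervisor module `process_payroll` is
# tested while its subordinate `calculate_tax` is still a stub.

calls = []  # log of stub invocations, so the test can verify the calls

def calculate_tax_stub(gross):
    """Stub: record that it was entered and what parameter it received,
    then return a canned value so the supervisor can be exercised."""
    calls.append(("calculate_tax", gross))
    return 100.0

def process_payroll(gross, tax_fn=calculate_tax_stub):
    """High-level module under test; its subordinate is injected so the
    stub can later be swapped for the real module."""
    tax = tax_fn(gross)
    return gross - tax

net = process_payroll(1000.0)
assert net == 900.0                           # supervisor logic is correct
assert calls == [("calculate_tax", 1000.0)]   # stub got the right parameter
```

When the real `calculate_tax` is coded, it is passed in place of the stub and the same test is rerun.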
Bottom-Up Incremental Testing
- Testing begins with the lowest modules, and dummy modules are used to simulate the supervisor (or mid-management) modules.
- Testing works up the structure chart by replacing the dummies one at a time, with real modules.
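In the bottom-up direction the dummy is a driver rather than a stub: a throwaway supervisor that feeds test values to the low-level module. Again the module is hypothetical, chosen only to illustrate the technique.

```python
# Bottom-up testing sketch: the low-level module `calculate_tax` is
# tested first, driven by a dummy supervisor (the test driver) instead
# of the real program that will eventually call it.

def calculate_tax(gross, rate=0.10):
    """Low-level module under test (illustrative only)."""
    return round(gross * rate, 2)

def driver():
    """Dummy supervisor: feeds test values and checks the results."""
    assert calculate_tax(1000.0) == 100.0
    assert calculate_tax(0.0) == 0.0
    assert calculate_tax(333.33) == 33.33

driver()
```

As real supervisor modules are completed, they replace the driver one at a time, working up the structure chart.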
Sandwich Incremental Testing
- One team starts at the top of the structure chart and moves down, while another team starts at the bottom and moves up.
- This method, even more than the others, requires extensive planning if the two teams are to meet in the middle on common ground.
Choosing a Testing Approach
- Both of the top-down approaches test the upper-level interfaces repeatedly, and the bottom-up approach tests the lower-level interfaces repeatedly.
- It should be decided prior to testing which areas of the system have the most potential for serious error, and then the testing methodology that covers those areas most thoroughly should be used.
- For instance, a system with a very complex database might be more likely to encounter problems at the bottom, where the input and output occur, and, in such a case, the bottom-up approach might be most appropriate.
- If the upper-level modules have very complicated logic patterns and the input and output are straightforward, a top-down approach makes more sense.
- If a system seems to be weighted equally in complexity, the sandwich approach might be the best choice.
Creating Test Data
- Once the methodology of testing has been selected, test data that will expose as many defects as possible must be created.
- Data must be devised to test every decision – ifs, loops, and case structures – in the system.
- For each of these types of decisions, data is created to carry out both normal path and exception path tests.
Normal Path Test
- A normal path test employs data that are considered to be valid for the given decision.
- Values are generally chosen to test the upper boundary, the lower boundary, and an area somewhere in the middle.
- Example: If ACCOUNT-BALANCE is greater than $100 and less than $500.
- Possible Normal Path Test Data: $100.01, $386.74, and $499.99.
Exception Path Test
- The exception path test utilizes values that are not valid for a given decision.
- Values are generally chosen to test the upper boundary, the lower boundary, and an area somewhere in the middle.
- Example: If ACCOUNT-BALANCE is greater than $100 and less than $500.
- Possible Exception Path Test Data: $100.00, $500.00, and $896.64.
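The two test-data sets above can be checked directly against the decision. This sketch encodes the ACCOUNT-BALANCE rule from the example and confirms that each normal-path value passes and each exception-path value fails.

```python
# Boundary-value test data for the decision:
# "ACCOUNT-BALANCE is greater than $100 and less than $500".

def in_range(balance):
    return 100.00 < balance < 500.00

# Normal path: lower boundary, middle, upper boundary -- all valid.
for value in (100.01, 386.74, 499.99):
    assert in_range(value)

# Exception path: the boundaries themselves and an outlier -- all invalid.
for value in (100.00, 500.00, 896.64):
    assert not in_range(value)
```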
Designing Test Data
- If each decision could be tested on an isolated basis, testing in general would not be too difficult.
- However, the decisions in the system do not occur in isolation; each one depends on the result of the decision that came before, and each affects the decision to follow.
- If there are only 100 if statements in a program, 2^100, or 1,267,650,600,228,229,401,496,703,205,376, tests would be required.
- Ideally, test data should be put together in such a way that they test every possible combination of values.
- If, for instance, there are six values to test for the first decision and six for the decision that follows, a complete test data set would account for the 36 possible combinations.
- Considering the number of decisions that occur even in a small system, it is apparent that a vast number of test data transactions will be required to test the system fully.
- Unfortunately, such comprehensive testing is usually both physically and economically impossible, so instead data is generally created that tests every decision, but not every possible combination of decision values.
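The combinatorial growth described above is easy to verify: with v test values per decision and d sequential decisions, a complete test set needs v^d combinations. The figures below reproduce the two examples in the text.

```python
# Why exhaustive combination testing is infeasible: the number of
# required test cases grows exponentially with the number of decisions.

def combinations_needed(values_per_decision, decisions):
    return values_per_decision ** decisions

# Two decisions with six test values each -> 36 combinations.
assert combinations_needed(6, 2) == 36

# 100 if statements, each true or false -> 2^100 combinations.
assert combinations_needed(2, 100) == 2 ** 100
print(2 ** 100)  # 1267650600228229401496703205376
```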
Full Systems Testing
- After each module has been tested both individually and as part of the overall system, the system as a complete entity must be tested.
- At this stage, operators and end users become actively involved in testing.
- Test data, as discussed above, are used.
- There are a number of factors to consider when systems testing with test data:
- Examining whether operators have adequate documentation in procedure manuals (hard copy or online) to afford correct and efficient operation.
- Checking whether procedure manuals are clear enough in communicating how data should be prepared for input.
- Ascertaining if work flows necessitated by the new or modified system actually "flow."
- Determining if output is correct and whether users understand that this is, in all likelihood, how output will look in its final form.
- Systems testing includes reaffirming quality standards for system performance that were set up when initial system specifications were made.
- Everyone involved should once again agree how to determine whether the system is doing what it is supposed to do.
- This will include measures of error, timeliness, ease of use, proper ordering of transactions, acceptable down time, understandable procedure manuals, and so on.
Live Data
- When systems tests using test data prove satisfactory, it is a good idea to try the new system with several passes on what is called "live data" – data that have been successfully processed through the existing system.
- This step allows an accurate comparison of the new system's output with what you know to be correctly processed output, as well as a good feel for how actual data will be handled.
- Obviously, this step is not possible when creating entirely new outputs.
- As with test data, only small amounts of live data are used in this kind of system testing.
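A live-data pass amounts to a regression check: feed the new system records the existing system has already processed, and compare against output known to be correct. The discount rule and the known-good values below are purely illustrative.

```python
# Live-data testing sketch: the new system's output is compared with
# results the existing system is known to have processed correctly.

def new_discount(order_total):
    """New system's discount calculation (hypothetical rule)."""
    return round(order_total * 0.05, 2) if order_total > 200 else 0.0

# (input, output the existing system produced -- known to be correct)
known_good = [(250.00, 12.50), (150.00, 0.0), (400.00, 20.00)]

for order_total, expected in known_good:
    assert new_discount(order_total) == expected  # new system agrees
```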
User Involvement
- As we have seen again and again, user involvement throughout the systems project is critical to successful system development.
- Testing is an important period for assessing how end users and operators actually interact with the system.
- Although much thought is given to user-system interaction, you can never fully predict the wide range of differences in the way users will actually interact with the system.
- It is not enough to interview users about how they are interacting with the system; you must observe them firsthand.
- Items to watch for are ease of learning the system and user reaction to system feedback.
Documentation Testing
- Documentation and context-sensitive help also need to be tested.
- Although documents can be proofread by support staff and checked for technical accuracy by the systems analysis team, the only real way to test them is to have users and operators try them, preferably during full systems testing with live data.
Acceptance Test
- The acceptance test verifies to the user that the system does what it is supposed to do.
- The acceptance test should not uncover any defects.
- If all prior testing has been carried out correctly, virtually no defects should remain.
- Instead, the acceptance test should establish that all functions promised in the acceptance criteria of the problem specification have been delivered and that such considerations as turnaround and response time are adequate.
Parallel Testing
- For the final system test, parallel testing, the new system is run concurrently with the old, both with the same real data.
- The aim of parallel testing is to find out whether the legacy system and the new system are behaving the same or differently.
- The outputs from the old system are compared to those of the new system, and if at any time there is a discrepancy, the new system must be investigated to see if there is a problem.
- Do not automatically assume that the new system is at fault; it is not unheard of for a new system to expose a longstanding defect in an old one.
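The comparison step of parallel testing can be sketched as follows. The two totalling functions stand in for the legacy and the new implementation; any transaction on which they disagree is flagged for investigation, without presuming which system is at fault.

```python
# Parallel testing sketch: run the same real transactions through both
# systems and collect any transactions whose outputs differ.

def legacy_total(items):
    """Stand-in for the old system's computation."""
    total = 0.0
    for price in items:
        total += price
    return round(total, 2)

def new_total(items):
    """Stand-in for the new system's computation."""
    return round(sum(items), 2)

def parallel_test(transactions):
    """Return the transactions whose outputs disagree."""
    return [t for t in transactions
            if legacy_total(t) != new_total(t)]

discrepancies = parallel_test([[1.10, 2.20], [5.00], [0.10, 0.10, 0.10]])
assert discrepancies == []  # outputs match; nothing to investigate
```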
Supplemental Slides
Video: IT Software Development Lifecycle Part 3 - Testing Phase
Section 4: Installation/Conversion
- Conversion involves transferring all necessary data from the old to the new system and bringing the new system into operation.
- The analyst is actively involved in planning the conversion so that it will cause the least possible disruption to the daily operations of the business.
- There are many conversion strategies available to analysts, and there is also a contingency approach that takes into account several organizational variables in deciding which conversion strategy to use.
- There is no single best way to proceed with conversion.
- The importance of adequate planning and scheduling of conversion (which often takes many weeks), file backup, and adequate security cannot be overemphasized.
- Conversion strategies include direct changeover, parallel conversion, gradual conversion, and phased conversion.
Direct Changeover
- Conversion by direct changeover means that, on a specified date, the old system is dropped and the new system is put into use.
- Direct changeover can only be successful if extensive testing is done beforehand (with live data) and it works best when some delays in processing can be tolerated.
- Direct changeover is considered a risky approach to conversion, and its disadvantages are numerous.
- For instance, long delays might ensue if errors occur, since there is no alternate way to accomplish processing.
- Additionally, users may resent being forced into using an unfamiliar system without recourse.
- Finally, there is no adequate way to compare new results with old.
Parallel Conversion
- This refers to running the old system and the new system at the same time, in parallel.
- This is the most frequently used conversion approach.
- Both systems are run simultaneously for a specified period of time, and the reliability of results is examined.
- When the same results can be gained over time, the new system is put into use, and the old one is stopped.
- The advantages of running both systems in parallel include the possibility of checking new data against old data in order to catch any errors in processing in the new system.
- Parallel processing also offers a feeling of security to users, who are not forced to make an abrupt change to the new system.
- There are many disadvantages to parallel conversion including the cost of running two systems at the same time and the burden on employees of virtually doubling their workload during conversion.
- Another disadvantage is that unless the system being replaced is a manual one, it is difficult to make comparisons between outputs of the new system and the old one.
- If the new system was created to improve on the old one, then outputs from the systems should differ.
- Finally, it is understandable that employees who are faced with a choice between two systems will continue using the old one because of their familiarity with it.
Gradual Conversion
- Gradual conversion attempts to combine the best features of the earlier two plans, without incurring all of the risks.
- The volume of transactions handled by the new system is gradually increased as the system is phased in.
- The advantages of this approach include allowing users to get involved with the system gradually and the possibility of detecting and recovering from errors without a lot of down time.
- Disadvantages of gradual conversion include taking too long to get the new system in place and its inappropriateness for conversion of small, uncomplicated systems.
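One common way to implement gradual conversion is to route a growing share of transactions to the new system. The routing rule below is a hypothetical sketch; real cutovers would typically key on customer segment, region, or transaction type rather than a bare ID.

```python
# Gradual conversion sketch: `percent_new` is raised step by step as
# confidence in the new system grows.

def route(transaction_id, percent_new):
    """Deterministically send percent_new% of transactions to the new system."""
    return "new" if transaction_id % 100 < percent_new else "old"

# Early in the conversion, 10% of the volume goes to the new system...
early = [route(i, 10) for i in range(100)]
assert early.count("new") == 10

# ...and later the share is raised to 75%.
later = [route(i, 75) for i in range(100)]
assert later.count("new") == 75
```

Because the rule is deterministic, the same transaction always goes to the same system, which keeps error recovery and output comparison tractable during the phase-in.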
Phased Conversion
- This refers to a situation in which many installations of the same system are contemplated.
- One entire conversion is done (with any of the approaches considered already) at one site.
- When that conversion is successfully completed, other conversions are done for other sites.
- An advantage of phased conversion is that problems can be detected and contained, rather than inflicted simultaneously on all sites.
- A disadvantage is that even when one conversion is successful, each site will have its own peculiarities to work through, and these must be handled accordingly.
A slightly different take...
Video: SDLC Implementation Phase
Best Choice?
The analyst must consider many factors (including the wishes of clients) in choosing a conversion strategy.
- No particular conversion approach is equally suitable for every system implementation.
- Regardless of the conversion method, a changeover date, usually at the end of a business period, must be chosen so as to minimize disruption to the organization.
- There must be some type of plan to fall back on in case major defects appear in the new system.
- The fallback plan must make provision for not only the system to be used, but also the personnel and supplies that the system needs.
- With parallel operation, use of the old system can be resumed.
- With direct conversion, however, the old system may not be up to date, and therefore going back to it may be difficult.
Supplemental Slides
Section 5: Summary
The implementation phase involves the steps below:
- Construction/Documentation
- Testing
- Installation/Conversion